
Sunday, September 8, 2019

Quantum entanglement

From Wikipedia, the free encyclopedia
 
The spontaneous parametric down-conversion process can split photons into type II photon pairs with mutually perpendicular polarization.
 
Quantum entanglement is a physical phenomenon that occurs when pairs or groups of particles are generated, interact, or share spatial proximity in ways such that the quantum state of each particle cannot be described independently of the state of the others, even when the particles are separated by a large distance. 

Measurements of physical properties such as position, momentum, spin, and polarization performed on entangled particles are found to be perfectly correlated. For example, if a pair of particles is generated in such a way that their total spin is known to be zero, and one particle is found to have clockwise spin on a certain axis, the spin of the other particle, measured on the same axis, will be found to be counterclockwise, as is to be expected due to their entanglement. However, this behavior gives rise to seemingly paradoxical effects: any measurement of a property of a particle performs an irreversible collapse on that particle and will change the original quantum state. In the case of entangled particles, such a measurement acts on the entangled system as a whole.

Such phenomena were the subject of a 1935 paper by Albert Einstein, Boris Podolsky, and Nathan Rosen, and several papers by Erwin Schrödinger shortly thereafter, describing what came to be known as the EPR paradox. Einstein and others considered such behavior to be impossible, as it violated the local realism view of causality (Einstein referring to it as "spooky action at a distance") and argued that the accepted formulation of quantum mechanics must therefore be incomplete.

Later, however, the counterintuitive predictions of quantum mechanics were verified experimentally in tests where the polarization or spin of entangled particles was measured at separate locations, statistically violating Bell's inequality. In earlier tests, it could not be absolutely ruled out that the result at one location could have been subtly transmitted to the remote location, affecting the outcome there. However, so-called "loophole-free" Bell tests have since been performed in which the locations were separated such that communication at the speed of light would have taken longer, in one case 10,000 times longer, than the interval between the measurements.

According to some interpretations of quantum mechanics, the effect of one measurement occurs instantly. Other interpretations, which do not recognize wavefunction collapse, dispute that there is any "effect" at all. However, all interpretations agree that entanglement produces correlation between the measurements, and that the mutual information between the entangled particles can be exploited, but that any transmission of information at faster-than-light speeds is impossible.

Quantum entanglement has been demonstrated experimentally with photons, neutrinos, electrons, molecules as large as buckyballs, and even small diamonds. On 13 July 2019, scientists from the University of Glasgow reported taking the first ever photo of a strong form of quantum entanglement known as Bell entanglement. The utilization of entanglement in communication and computation is a very active area of research.

History

Article headline regarding the Einstein–Podolsky–Rosen paradox (EPR paradox) paper, in the May 4, 1935 issue of The New York Times.
 
The counterintuitive predictions of quantum mechanics about strongly correlated systems were first discussed by Albert Einstein in 1935, in a joint paper with Boris Podolsky and Nathan Rosen. In this study, the three formulated the Einstein–Podolsky–Rosen paradox (EPR paradox), a thought experiment that attempted to show that quantum mechanical theory was incomplete. They wrote: "We are thus forced to conclude that the quantum-mechanical description of physical reality given by wave functions is not complete."

However, the three scientists did not coin the word entanglement, nor did they generalize the special properties of the state they considered. Following the EPR paper, Erwin Schrödinger wrote a letter to Einstein in German in which he used the word Verschränkung (translated by himself as entanglement) "to describe the correlations between two particles that interact and then separate, as in the EPR experiment."

Schrödinger shortly thereafter published a seminal paper defining and discussing the notion of "entanglement." In the paper he recognized the importance of the concept, and stated: "I would not call [entanglement] one but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought." 

Like Einstein, Schrödinger was dissatisfied with the concept of entanglement, because it seemed to violate the speed limit on the transmission of information implicit in the theory of relativity. Einstein later famously derided entanglement as "spukhafte Fernwirkung" or "spooky action at a distance." 

The EPR paper generated significant interest among physicists and inspired much discussion about the foundations of quantum mechanics (perhaps most famously Bohm's interpretation of quantum mechanics), but produced relatively little other published work. Despite the interest, the weak point in EPR's argument was not discovered until 1964, when John Stewart Bell proved that one of their key assumptions, the principle of locality, as applied to the kind of hidden variables interpretation hoped for by EPR, was mathematically inconsistent with the predictions of quantum theory.

Specifically, Bell demonstrated an upper limit, seen in Bell's inequality, regarding the strength of correlations that can be produced in any theory obeying local realism, and showed that quantum theory predicts violations of this limit for certain entangled systems. His inequality is experimentally testable, and there have been numerous relevant experiments, starting with the pioneering work of Stuart Freedman and John Clauser in 1972 and Alain Aspect's experiments in 1982, all of which have shown agreement with quantum mechanics rather than the principle of local realism.
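For reference, one common form of this bound (an addition here, not part of the original article) is the CHSH inequality: with measurement settings a, a′ on one side and b, b′ on the other, any local realist theory satisfies

    S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2,

while quantum mechanics reaches |S| = 2\sqrt{2} (the Tsirelson bound) for suitably chosen measurements on an entangled pair.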

For decades, each such experiment had left open at least one loophole by which it was possible to question the validity of the results. However, in 2015 an experiment was performed that simultaneously closed both the detection and locality loopholes, and was heralded as "loophole-free"; this experiment ruled out a large class of local realism theories with certainty. Alain Aspect notes that the "setting-independence loophole" (which he refers to as "far-fetched", yet a "residual loophole" that "cannot be ignored") has yet to be closed, and that the free-will / superdeterminism loophole is unclosable, saying "no experiment, as ideal as it is, can be said to be totally loophole-free."

A minority opinion holds that although quantum mechanics is correct, there is no superluminal instantaneous action-at-a-distance between entangled particles once the particles are separated.

Bell's work raised the possibility of using these super-strong correlations as a resource for communication. It led to the discovery of quantum key distribution protocols, most famously BB84 by Charles H. Bennett and Gilles Brassard in 1984 and E91 by Artur Ekert in 1991. Although BB84 does not use entanglement, Ekert's protocol uses the violation of a Bell inequality as a proof of security.

In October 2018, physicists reported that quantum behavior can be explained with classical physics for a single particle, but not for multiple particles as in quantum entanglement and related nonlocality phenomena.

In July 2019 physicists reported, for the first time, capturing an image of quantum entanglement.

Concept

Meaning of entanglement

An entangled system is defined to be one whose quantum state cannot be factored as a product of states of its local constituents; that is to say, they are not individual particles but are an inseparable whole. In entanglement, one constituent cannot be fully described without considering the other(s). The state of a composite system is always expressible as a sum, or superposition, of products of states of local constituents; it is entangled if this sum necessarily has more than one term.
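As a concrete illustration (standard textbook notation added here, not part of the original article): a separable two-qubit state factors as

    |\psi\rangle_{AB} = |\phi\rangle_A \otimes |\chi\rangle_B ,

whereas a state such as

    \tfrac{1}{\sqrt{2}}\left( |0\rangle_A |0\rangle_B + |1\rangle_A |1\rangle_B \right)

admits no such factorization, so the sum necessarily has more than one term and the state is entangled.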

Quantum systems can become entangled through various types of interactions. For some ways in which entanglement may be achieved for experimental purposes, see the section below on methods. Entanglement is broken when the entangled particles decohere through interaction with the environment; for example, when a measurement is made.

As an example of entanglement: a subatomic particle decays into an entangled pair of other particles. The decay events obey the various conservation laws, and as a result, the measurement outcomes of one daughter particle must be highly correlated with the measurement outcomes of the other daughter particle (so that the total momenta, angular momenta, energy, and so forth remain the same before and after this process). For instance, a spin-zero particle could decay into a pair of spin-½ particles. Since the total spin before and after this decay must be zero (conservation of angular momentum), whenever the first particle is measured to be spin up on some axis, the other, when measured on the same axis, is always found to be spin down. (This is called the spin anti-correlated case; and if the prior probabilities for measuring each spin are equal, the pair is said to be in the singlet state.)
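In standard notation (an illustrative addition, not from the original article), the singlet state of the two spin-½ daughters is

    |\psi^-\rangle = \tfrac{1}{\sqrt{2}}\left( |{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle \right),

for which a measurement along any common axis always yields opposite outcomes on the two particles.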

The special property of entanglement can be better observed if we separate the two particles. Let's put one of them in the White House in Washington and the other in Buckingham Palace (think of this as a thought experiment, not an actual one). Now, if we measure a particular characteristic of one of these particles (say, for example, spin), get a result, and then measure the other particle using the same criterion (spin along the same axis), we find that the result of the measurement of the second particle will match (in a complementary sense) the result of the measurement of the first particle, in that they will be opposite in their values.

The above result may or may not be perceived as surprising. A classical system would display the same property, and a hidden variable theory (see below) would certainly be required to do so, based on conservation of angular momentum in classical and quantum mechanics alike. The difference is that a classical system has definite values for all the observables all along, while the quantum system does not. In a sense to be discussed below, the quantum system considered here seems to acquire a probability distribution for the outcome of a measurement of the spin along any axis of the other particle upon measurement of the first particle. This probability distribution is in general different from what it would be without measurement of the first particle. This may certainly be perceived as surprising in the case of spatially separated entangled particles.

Paradox

The paradox is that a measurement made on either of the particles apparently collapses the state of the entire entangled system—and does so instantaneously, before any information about the measurement result could have been communicated to the other particle (assuming that information cannot travel faster than light) and hence assured the "proper" outcome of the measurement of the other part of the entangled pair. In the Copenhagen interpretation, the result of a spin measurement on one of the particles is a collapse into a state in which each particle has a definite spin (either up or down) along the axis of measurement. The outcome is taken to be random, with each possibility having a probability of 50%. However, if both spins are measured along the same axis, they are found to be anti-correlated. This means that the random outcome of the measurement made on one particle seems to have been transmitted to the other, so that it can make the "right choice" when it too is measured.
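To make the statistics concrete, here is a minimal simulation sketch (an addition to the article, using the textbook singlet probabilities rather than any particular experiment; the function name is illustrative). For a singlet pair measured along axes separated by angle θ, quantum mechanics gives P(opposite outcomes) = cos²(θ/2), so equal axes give perfect anti-correlation while each observer's own outcomes remain 50/50:

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_singlet(theta, n):
        """Draw n outcome pairs (a, b), each +1 or -1, for singlet spins
        measured along axes separated by angle theta (radians)."""
        a = rng.choice([+1, -1], size=n)          # Alice's outcome is 50/50
        p_opposite = np.cos(theta / 2) ** 2       # P(b = -a) for the singlet
        b = np.where(rng.random(n) < p_opposite, -a, a)
        return a, b

    a, b = sample_singlet(0.0, 100_000)           # both measure the same axis
    print("fraction anti-correlated:", np.mean(a != b))   # ~1.0
    print("Alice's +1 fraction:", np.mean(a == 1))        # ~0.5 either way

Note that the sampler reproduces only the correlations; it does not model any mechanism by which one outcome "reaches" the other, which is exactly the point at issue.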

The distance and timing of the measurements can be chosen so as to make the interval between the two measurements spacelike, hence, any causal effect connecting the events would have to travel faster than light. According to the principles of special relativity, it is not possible for any information to travel between two such measuring events. It is not even possible to say which of the measurements came first. For two spacelike separated events x₁ and x₂ there are inertial frames in which x₁ is first and others in which x₂ is first. Therefore, the correlation between the two measurements cannot be explained as one measurement determining the other: different observers would disagree about the role of cause and effect.

(In fact similar paradoxes can arise even without entanglement: the position of a single particle is spread out over space, and two widely separated detectors attempting to detect the particle in two different places must instantaneously attain appropriate correlation, so that they do not both detect the particle.)

Hidden variables theory

A possible resolution to the paradox is to assume that quantum theory is incomplete, and the result of measurements depends on predetermined "hidden variables". The state of the particles being measured contains some hidden variables, whose values effectively determine, right from the moment of separation, what the outcomes of the spin measurements are going to be. This would mean that each particle carries all the required information with it, and nothing needs to be transmitted from one particle to the other at the time of measurement. Einstein and others (see the previous section) originally believed this was the only way out of the paradox, and the accepted quantum mechanical description (with a random measurement outcome) must be incomplete.

Violations of Bell's inequality

The hidden variables theory fails, however, when measurements of the spin of entangled particles along different axes are considered (e.g., along any of three axes that make angles of 120 degrees). If a large number of pairs of such measurements are made (on a large number of pairs of entangled particles), then statistically, if the local realist or hidden variables view were correct, the results would always satisfy Bell's inequality. A number of experiments have shown in practice that Bell's inequality is not satisfied. However, prior to 2015, all of these experiments left open loopholes that the physics community considered significant. When measurements of the entangled particles are made in moving relativistic reference frames, in which each measurement (in its own relativistic time frame) occurs before the other, the measurement results remain correlated.
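The 120-degree case can be checked by brute force. The sketch below (an addition, not from the article) enumerates every deterministic local hidden-variable assignment: same-axis anti-correlation forces Bob's predetermined outcomes to be the negation of Alice's, and for different axes the anti-correlation probability can then never fall below 1/3, while the singlet prediction at 120 degrees is cos²(60°) = 1/4:

    from itertools import product
    import math

    pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
    lowest = 1.0
    for a in product([+1, -1], repeat=3):     # Alice's predetermined outcomes per axis
        b = tuple(-x for x in a)              # Bob's, forced by same-axis anti-correlation
        p_anti = sum(a[i] == -b[j] for i, j in pairs) / len(pairs)
        lowest = min(lowest, p_anti)

    print("hidden-variable minimum P(anti):", lowest)                           # 1/3
    print("quantum P(anti) at 120 degrees:", math.cos(math.radians(60)) ** 2)   # 0.25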

The fundamental issue about measuring spin along different axes is that these measurements cannot have definite values at the same time―they are incompatible in the sense that these measurements' maximum simultaneous precision is constrained by the uncertainty principle. This is contrary to what is found in classical physics, where any number of properties can be measured simultaneously with arbitrary accuracy. It has been proven mathematically that compatible measurements cannot show Bell-inequality-violating correlations, and thus entanglement is a fundamentally non-classical phenomenon.

Other types of experiments

In experiments in 2012 and 2013, polarization correlation was created between photons that never coexisted in time. The authors claimed that this result was achieved by entanglement swapping between two pairs of entangled photons after measuring the polarization of one photon of the early pair, and that it proves that quantum non-locality applies not only to space but also to time.

In three independent experiments in 2013 it was shown that classically communicated separable quantum states can be used to carry entangled states. The first loophole-free Bell test was performed at TU Delft in 2015, confirming the violation of Bell's inequality.

In August 2014, Brazilian researcher Gabriela Barreto Lemos and team were able to "take pictures" of objects using photons that had not interacted with those objects, but were entangled with photons that did. Lemos, from the University of Vienna, is confident that this new quantum imaging technique could find application where low-light imaging is imperative, in fields such as biological or medical imaging.

In 2015, Markus Greiner's group at Harvard performed a direct measurement of Rényi entanglement in a system of ultracold bosonic atoms.

From 2016, various companies such as IBM and Microsoft have created quantum computers and allowed developers and tech enthusiasts to openly experiment with concepts of quantum mechanics, including quantum entanglement.

Mystery of time

There have been suggestions to look at the concept of time as an emergent phenomenon that is a side effect of quantum entanglement. In other words, time is an entanglement phenomenon, which places all equal clock readings (of correctly prepared clocks, or of any objects usable as clocks) into the same history. This was first fully theorized by Don Page and William Wootters in 1983. The Wheeler–DeWitt equation that combines general relativity and quantum mechanics – by leaving out time altogether – was introduced in the 1960s and it was taken up again in 1983, when Page and Wootters made a solution based on quantum entanglement. Page and Wootters argued that entanglement can be used to measure time.

In 2013, at the Istituto Nazionale di Ricerca Metrologica (INRIM) in Turin, Italy, researchers performed the first experimental test of Page and Wootters' ideas. Their result has been interpreted to confirm that time is an emergent phenomenon for internal observers but absent for external observers of the universe, just as the Wheeler–DeWitt equation predicts.

Source for the arrow of time

Physicist Seth Lloyd says that quantum uncertainty gives rise to entanglement, the putative source of the arrow of time. According to Lloyd: "The arrow of time is an arrow of increasing correlations." The approach to entanglement would be from the perspective of the causal arrow of time, with the assumption that the cause of the measurement of one particle determines the effect of the result of the other particle's measurement.

Emergent gravity

Based on the AdS/CFT correspondence, Mark Van Raamsdonk suggested that spacetime arises as an emergent phenomenon of the quantum degrees of freedom that are entangled and live on the boundary of the spacetime. Induced gravity can emerge from the entanglement first law.

Non-locality and entanglement

In the media and popular science, quantum non-locality is often portrayed as being equivalent to entanglement. While this is true for pure bipartite quantum states, in general entanglement is only necessary for non-local correlations, but there exist mixed entangled states that do not produce such correlations. A well-known example is the Werner states, which are entangled for certain values of the mixing parameter p but can always be described using local hidden variables. Moreover, it was shown that, for arbitrary numbers of parties, there exist states that are genuinely entangled but admit a local model. The mentioned proofs about the existence of local models assume that there is only one copy of the quantum state available at a time. If the parties are allowed to perform local measurements on many copies of such states, then many apparently local states (e.g., the qubit Werner states) can no longer be described by a local model. This is, in particular, true for all distillable states. However, it remains an open question whether all entangled states become non-local given sufficiently many copies.
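For concreteness (an addition; these thresholds are standard results not quoted in the original article), the two-qubit Werner state mixes a singlet with white noise,

    \rho_W = p\, |\psi^-\rangle\langle\psi^-| + (1-p)\, \tfrac{I}{4},

and is entangled exactly when p > 1/3, yet Werner's construction supplies a local hidden-variable model for all projective measurements whenever p \le 1/2.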

In short, entanglement of a state shared by two parties is necessary but not sufficient for that state to be non-local. It is important to recognize that entanglement is more commonly viewed as an algebraic concept, noted for being a prerequisite to non-locality as well as to quantum teleportation and to superdense coding, whereas non-locality is defined according to experimental statistics and is much more involved with the foundations and interpretations of quantum mechanics.

Climate effects of particulates and aerosols

From Wikipedia, the free encyclopedia

2005 radiative forcings and uncertainties as estimated by the IPCC.
 
Atmospheric aerosols affect the climate of the earth by changing the amount of incoming solar radiation and outgoing terrestrial longwave radiation retained in the earth's system. This occurs through several distinct mechanisms, which are split into direct, indirect and semi-direct aerosol effects. The aerosol climate effects are the biggest source of uncertainty in future climate predictions. The Intergovernmental Panel on Climate Change, Third Assessment Report, says: "While the radiative forcing due to greenhouse gases may be determined to a reasonably high degree of accuracy... the uncertainties relating to aerosol radiative forcings remain large, and rely to a large extent on the estimates from global modelling studies that are difficult to verify at the present time."

Aerosol radiative effects

Global aerosol optical thickness. The aerosol scale (yellow to dark reddish-brown) indicates the relative amount of particles that absorb sunlight.

Direct effect

Particulates in the air causing shades of grey and pink in Mumbai during sunset
 
The direct aerosol effect consists of any direct interaction of radiation with atmospheric aerosols, such as absorption or scattering. It affects both shortwave and longwave radiation to produce a net negative radiative forcing. The magnitude of the resultant radiative forcing due to the direct effect of an aerosol is dependent on the albedo of the underlying surface, as this affects the net amount of radiation absorbed or scattered to space. For example, if a highly scattering aerosol is above a surface of low albedo it has a greater radiative forcing than if it were above a surface of high albedo. The converse is true of absorbing aerosol, with the greatest radiative forcing arising from a highly absorbing aerosol over a surface of high albedo. The direct aerosol effect is a first-order effect and is therefore classified as a radiative forcing by the IPCC.

The interaction of an aerosol with radiation is quantified by the single-scattering albedo (SSA), the ratio of scattering alone to scattering plus absorption (extinction) of radiation by a particle. The SSA tends to unity if scattering dominates, with relatively little absorption, and decreases as absorption increases, becoming zero for infinite absorption. For example, sea-salt aerosol has an SSA of 1, as a sea-salt particle only scatters, whereas soot has an SSA of 0.23, showing that it is a major atmospheric aerosol absorber.
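Written out (an added gloss on the definition in the paragraph above), the single-scattering albedo is

    \omega_0 = \frac{\sigma_{\text{scat}}}{\sigma_{\text{scat}} + \sigma_{\text{abs}}} = \frac{\sigma_{\text{scat}}}{\sigma_{\text{ext}}},

where \sigma_{\text{scat}} and \sigma_{\text{abs}} are the scattering and absorption cross-sections; \omega_0 approaches 1 for a purely scattering particle such as sea salt and falls toward zero as absorption dominates, as for soot.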

Indirect effect

The indirect aerosol effect consists of any change to the earth's radiative budget due to the modification of clouds by atmospheric aerosols, and consists of several distinct effects. Cloud droplets form on pre-existing aerosol particles, known as cloud condensation nuclei (CCN).

For any given meteorological conditions, an increase in CCN leads to an increase in the number of cloud droplets. This leads to more scattering of shortwave radiation, i.e. an increase in the albedo of the cloud, known as the cloud albedo effect, first indirect effect or Twomey effect. Evidence supporting the cloud albedo effect has been observed from the effects of ship exhaust plumes and biomass burning on cloud albedo compared to ambient clouds. The cloud albedo aerosol effect is a first-order effect and therefore classified as a radiative forcing by the IPCC.

An increase in cloud droplet number due to the introduction of aerosol acts to reduce the cloud droplet size, as the same amount of water is divided into more droplets. This has the effect of suppressing precipitation and increasing the cloud lifetime, known as the cloud lifetime aerosol effect, second indirect effect or Albrecht effect. This has been observed as the suppression of drizzle in ship exhaust plumes compared to ambient clouds, and inhibited precipitation in biomass burning plumes. This cloud lifetime effect is classified as a climate feedback (rather than a radiative forcing) by the IPCC due to the interdependence between it and the hydrological cycle. However, it has previously been classified as a negative radiative forcing.
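Schematically (an illustrative scaling added here, under the assumption that the cloud's liquid water content L stays fixed while the droplet number concentration N increases), the mean droplet radius obeys

    r \propto \left( \frac{L}{N} \right)^{1/3},

so more CCN means more, smaller droplets: a brighter cloud (Twomey effect) and slower growth to precipitation-sized drops (Albrecht effect).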

Semi-direct effect

The semi-direct effect concerns any radiative effect caused by absorbing atmospheric aerosol such as soot, apart from direct scattering and absorption, which is classified as the direct effect. It encompasses many individual mechanisms, and in general is more poorly defined and understood than the direct and indirect aerosol effects. For instance, if absorbing aerosols are present in a layer aloft in the atmosphere, they can heat the surrounding air, which inhibits the condensation of water vapour, resulting in less cloud formation. Additionally, heating a layer of the atmosphere relative to the surface results in a more stable atmosphere due to the inhibition of atmospheric convection. This inhibits the convective uplift of moisture, which in turn reduces cloud formation. The heating of the atmosphere aloft also leads to a cooling of the surface, resulting in less evaporation of surface water. The effects described here all lead to a reduction in cloud cover, i.e. an increase in planetary albedo. The semi-direct effect is classified as a climate feedback by the IPCC due to the interdependence between it and the hydrological cycle. However, it has previously been classified as a negative radiative forcing.

Roles of different aerosol species

Sulfate aerosol

Sulfate aerosol has two main effects, direct and indirect. The direct effect, via albedo, is a cooling effect that slows the overall rate of global warming: the IPCC's best estimate of the radiative forcing is −0.4 watts per square meter with a range of −0.2 to −0.8 W/m², but there are substantial uncertainties. The effect varies strongly geographically, with most cooling believed to occur at and downwind of major industrial centres. Modern climate models addressing the attribution of recent climate change take into account sulfate forcing, which appears to account (at least partly) for the slight drop in global temperature in the middle of the 20th century. The indirect effect (via the aerosol acting as cloud condensation nuclei, CCN, and thereby modifying the cloud properties, albedo and lifetime) is more uncertain but is believed to be a cooling.

Black carbon

Black carbon (BC), or carbon black, or elemental carbon (EC), often called soot, is composed of pure carbon clusters, skeleton balls and buckyballs, and is one of the most important absorbing aerosol species in the atmosphere. It should be distinguished from organic carbon (OC): clustered or aggregated organic molecules on their own or permeating an EC buckyball. BC from fossil fuels is estimated in the IPCC Fourth Assessment Report (AR4) to contribute a global mean radiative forcing of +0.2 W/m² (up from +0.1 W/m² in the Second Assessment Report, SAR), with a range of +0.1 to +0.4 W/m². Bond et al., however, state that "the best estimate for the industrial-era (1750 to 2005) direct radiative forcing of atmospheric black carbon is +0.71 W/m² with 90% uncertainty bounds of (+0.08, +1.27) W/m²", and that the "total direct forcing by all black carbon sources, without subtracting the preindustrial background, is estimated as +0.88 (+0.17, +1.48) W/m²".

Instances of aerosol affecting climate

Solar radiation reduction due to volcanic eruptions
 
Volcanoes are a large natural source of aerosol and have been linked to changes in the earth's climate, often with consequences for the human population. Eruptions linked to changes in climate include the 1600 eruption of Huaynaputina, which was linked to the Russian famine of 1601–1603 that led to the deaths of two million people, and the 1991 eruption of Mount Pinatubo, which caused a global cooling of approximately 0.5 °C lasting several years. Research tracking the effect of light-scattering aerosols in the stratosphere between 2000 and 2010, comparing their pattern to volcanic activity, shows a close correlation. Simulations of the effect of anthropogenic particles showed little influence at present levels.

Aerosols are also thought to affect weather and climate on a regional scale. The failure of the Indian monsoon has been linked to the suppression of evaporation of water from the Indian Ocean due to the semi-direct effect of anthropogenic aerosol.

Recent studies of the Sahel drought and major increases since 1967 in rainfall over the Northern Territory, Kimberley, Pilbara and around the Nullarbor Plain have led some scientists to conclude that the aerosol haze over South and East Asia has been steadily shifting tropical rainfall in both hemispheres southward.

The latest studies of severe rainfall decline over southern Australia since 1997 have led climatologists there to consider the possibility that these Asian aerosols have shifted not only tropical but also midlatitude systems southward.

Saturday, September 7, 2019

SN 1987A

From Wikipedia, the free encyclopedia
 
SN 1987A
Supernova 1987A is the bright star at the centre of the image, near the Tarantula Nebula.
Other designations: SN 1987A, AAVSO 0534-69
Event type: Supernova
Spectral class: Type II (peculiar)
Date: February 24, 1987 (23:00 UTC), Las Campanas Observatory
Constellation: Dorado
Right ascension: 05h 35m 28.03s
Declination: −69° 16′ 11.79″
Epoch: J2000
Galactic coordinates: G279.7−31.9
Distance: 51.4 kpc (168,000 ly)
Host: Large Magellanic Cloud
Progenitor: Sanduleak −69 202
Progenitor type: B3 supergiant
Colour (B−V): +0.085
Notable features: Closest recorded supernova since the invention of the telescope
Peak apparent magnitude: +2.9

SN 1987A was a type II supernova in the Large Magellanic Cloud, a dwarf galaxy satellite of the Milky Way. It occurred approximately 51.4 kiloparsecs (168,000 light-years) from Earth and was the closest observed supernova since Kepler's Supernova, visible from Earth in 1604. The light of SN 1987A reached Earth on February 23, 1987, and, as the earliest supernova discovered that year, it was labeled "1987A". Its brightness peaked in May, with an apparent magnitude of about 3.

It was the first supernova that modern astronomers were able to study in great detail, and its observations have provided much insight into core-collapse supernovae.

SN 1987A provided the first opportunity to confirm by direct observation the radioactive source of the energy for visible light emissions, by detecting predicted gamma-ray line radiation from two of its abundant radioactive nuclei. This proved the radioactive nature of the long-duration post-explosion glow of supernovae.

Discovery

SN 1987A within the Large Magellanic Cloud
 
SN 1987A was discovered independently by Ian Shelton and Oscar Duhalde at the Las Campanas Observatory in Chile on February 24, 1987, and within the same 24 hours by Albert Jones in New Zealand. On March 4–12, 1987, it was observed from space by Astron, the largest ultraviolet space telescope of that time.

Progenitor

The remnant of SN 1987A

Four days after the event was recorded, the progenitor star was tentatively identified as Sanduleak −69 202 (Sk −69 202), a blue supergiant. After the supernova faded, that identification was definitively confirmed by Sk −69 202 having disappeared. This was an unexpected identification, because models of high-mass stellar evolution at the time did not predict that blue supergiants are susceptible to a supernova event.

Some models of the progenitor attributed the color to its chemical composition rather than its evolutionary state, particularly the low levels of heavy elements, among other factors. There was some speculation that the star might have merged with a companion star before the supernova. However, it is now widely understood that blue supergiants are natural progenitors of some supernovae, although there is still speculation that the evolution of such stars could require mass loss involving a binary companion.

Neutrino emissions

Remnant of SN 1987A seen in light overlays of different spectra. ALMA data (radio, in red) shows newly formed dust in the center of the remnant. Hubble (visible, in green) and Chandra (X-ray, in blue) data show the expanding shock wave.
 
Approximately two to three hours before the visible light from SN 1987A reached Earth, a burst of neutrinos was observed at three neutrino observatories. This was likely due to neutrino emission, which occurs simultaneously with core collapse, but before visible light was emitted. Visible light is transmitted only after the shock wave reaches the stellar surface. At 07:35 UT, Kamiokande II detected 12 antineutrinos; IMB, 8 antineutrinos; and Baksan, 5 antineutrinos; in a burst lasting less than 13 seconds. Approximately three hours earlier, the Mont Blanc liquid scintillator detected a five-neutrino burst, but this is generally not believed to be associated with SN 1987A.

The Kamiokande II detection, which at 12 neutrinos had the largest sample population, showed the neutrinos arriving in two distinct pulses. The first pulse started at 07:35:35 and comprised 9 neutrinos, all of which arrived over a period of 1.915 seconds. A second pulse of three neutrinos arrived between 9.219 and 12.439 seconds after the first neutrino was detected, for a pulse duration of 3.220 seconds.

Although only 25 neutrinos were detected during the event, it was a significant increase from the previously observed background level. This was the first time neutrinos known to be emitted from a supernova had been observed directly, which marked the beginning of neutrino astronomy. The observations were consistent with theoretical supernova models in which 99% of the energy of the collapse is radiated away in the form of neutrinos. The observations are also consistent with the models' estimates of a total neutrino count of 10⁵⁸ with a total energy of 10⁴⁶ joules, i.e. a mean value of some dozens of MeV per neutrino.

The neutrino measurements allowed upper bounds on neutrino mass and charge, as well as the number of flavors of neutrinos and other properties. For example, the data show that within 5% confidence, the rest mass of the electron neutrino is at most 16 eV/c², 1/30,000 the mass of an electron. The data suggest that the total number of neutrino flavors is at most 8, but other observations and experiments give tighter estimates. Many of these results have since been confirmed or tightened by other neutrino experiments such as more careful analysis of solar neutrinos and atmospheric neutrinos, as well as experiments with artificial neutrino sources.

Missing neutron star

The bright ring around the central region of the exploded star is composed of ejected material.
 
SN 1987A appears to be a core-collapse supernova, which should result in a neutron star given the size of the original star. The neutrino data indicate that a compact object did form at the star's core. However, since the supernova first became visible, astronomers have been searching for the collapsed core but have not detected it. The Hubble Space Telescope has taken images of the supernova regularly since August 1990, but, so far, the images have shown no evidence of a neutron star. A number of possibilities for the 'missing' neutron star are being considered. The first is that the neutron star is enshrouded in dense dust clouds so that it cannot be seen. Another is that a pulsar was formed, but with either an unusually large or small magnetic field. It is also possible that large amounts of material fell back on the neutron star, so that it further collapsed into a black hole. Neutron stars and black holes often give off light as material falls onto them. If there is a compact object in the supernova remnant, but no material to fall onto it, it would be very dim and could therefore avoid detection. Other scenarios have also been considered, such as whether the collapsed core became a quark star.

Light curve

Much of the light curve, or graph of luminosity as a function of time, after the explosion of a type II supernova such as SN 1987A is powered by radioactive decay. Although the luminous emission consists of optical photons, it is the radioactive power absorbed that keeps the remnant hot enough to radiate light. Without radioactive heat it would quickly dim. The radioactive decay of 56Ni through its daughter 56Co to 56Fe produces gamma-ray photons that are absorbed and dominate the heating, and thus the luminosity, of the ejecta at intermediate times (several weeks) to late times (several months). Energy for the peak of the light curve of SN 1987A was provided by the decay of 56Ni to 56Co (half-life of 6 days), while energy for the later light curve in particular fit very closely with the 77.3-day half-life of 56Co decaying to 56Fe. Later measurements by space gamma-ray telescopes of the small fraction of the 56Co and 57Co gamma rays that escaped the SN 1987A remnant without absorption confirmed earlier predictions that those two radioactive nuclei were the power source.
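The two-step chain can be written down explicitly. The sketch below (an illustration added here, with an arbitrary initial amount of 56Ni and half-lives of 6.1 and 77.3 days; the function name is illustrative) solves the Bateman equations for the 56Ni → 56Co → 56Fe chain, whose instantaneous decay rates track the radioactive heating that powers the light curve:

    import numpy as np

    T_NI, T_CO = 6.1, 77.3                      # half-lives in days
    l_ni, l_co = np.log(2) / T_NI, np.log(2) / T_CO

    def abundances(t, n0=1.0):
        """Bateman solution: relative 56Ni and 56Co abundances at t days."""
        n_ni = n0 * np.exp(-l_ni * t)
        n_co = n0 * l_ni / (l_co - l_ni) * (np.exp(-l_ni * t) - np.exp(-l_co * t))
        return n_ni, n_co

    for t in (10.0, 100.0, 300.0):
        n_ni, n_co = abundances(t)
        # the decay rate (per day) of each species sets its contribution to the heating
        print(f"day {t:5.0f}: Ni rate {l_ni * n_ni:.3e}, Co rate {l_co * n_co:.3e}")

At early times the nickel term dominates; after a few weeks the cobalt term takes over, reproducing the transition described above.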

Because the 56Co in SN 1987A has now completely decayed, it no longer supports the luminosity of the SN 1987A ejecta. That is currently powered by the radioactive decay of 44Ti, with a half-life of about 60 years. With this change, X-rays produced by the ring interactions of the ejecta began to contribute significantly to the total light curve. This was noticed by the Hubble Space Telescope as a steady increase in luminosity 10,000 days after the event in the blue and red spectral bands. X-ray lines of 44Ti observed by the INTEGRAL space X-ray telescope showed that the total mass of radioactive 44Ti synthesized during the explosion was (3.1 ± 0.8)×10⁻⁴ M☉.

Observations of the radioactive power from their decays in the 1987A light curve have measured accurate total masses of the 56Ni, 57Ni, and 44Ti created in the explosion, which agree with the masses measured by gamma-ray line space telescopes and provide nucleosynthesis constraints on the computed supernova model.

Interaction with circumstellar material

The expanding ring-shaped remnant of SN 1987A and its interaction with its surroundings, seen in X-ray and visible light.
 
Sequence of HST images from 1994 to 2009, showing the collision of the expanding remnant with a ring of material ejected by the progenitor 20,000 years before the supernova
 
The three bright rings around SN 1987A that were visible after a few months in images by the Hubble Space Telescope are material from the stellar wind of the progenitor. These rings were ionized by the ultraviolet flash from the supernova explosion, and consequently began emitting in various emission lines. These rings did not "turn on" until several months after the supernova; the turn-on process can be very accurately studied through spectroscopy. The rings are large enough that their angular size can be measured accurately: the inner ring is 0.808 arcseconds in radius. The time light traveled to light up the inner ring gives its physical radius of 0.66 light-years. Using this as the side of a right triangle opposite the measured angular radius as seen from Earth, one can use basic trigonometry to calculate the distance to SN 1987A, which is about 168,000 light-years. The material from the explosion is catching up with the material expelled during both its red and blue supergiant phases and heating it, so we observe ring structures about the star.
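That calculation is short enough to reproduce. The sketch below (added here; the numbers are taken from the paragraph above) converts the angular radius to radians and applies the small-angle right-triangle relation:

    import math

    radius_ly = 0.66                      # physical radius, from the turn-on delay
    angle_rad = 0.808 / 206265.0          # 0.808 arcsec converted to radians
    distance_ly = radius_ly / math.tan(angle_rad)
    print(f"distance to SN 1987A ~ {distance_ly:,.0f} light-years")   # ~168,000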

Around 2001, the expanding (>7,000 km/s) supernova ejecta collided with the inner ring. This caused its heating and the generation of X-rays; the X-ray flux from the ring increased by a factor of three between 2001 and 2009. A part of the X-ray radiation, which is absorbed by the dense ejecta close to the center, is responsible for a comparable increase in the optical flux from the supernova remnant in 2001–2009. This increase in the brightness of the remnant reversed the trend observed before 2001, when the optical flux was decreasing due to the decay of the 44Ti isotope.

A study reported in June 2015, using images from the Hubble Space Telescope and the Very Large Telescope taken between 1994 and 2014, shows that the emissions from the clumps of matter making up the rings are fading as the clumps are destroyed by the shock wave. It is predicted the ring will fade away between 2020 and 2030. These findings are also supported by the results of a three-dimensional hydrodynamic model which describes the interaction of the blast wave with the circumstellar nebula. The model also shows that X-ray emission from ejecta heated by the shock will become dominant very soon after the ring fades away. As the shock wave passes the circumstellar ring, it will trace the history of mass loss of the supernova's progenitor and provide useful information for discriminating among various models for the progenitor of SN 1987A.

In 2018, radio observations of the interaction between the circumstellar ring of dust and the shock wave confirmed that the shock wave has now left the circumstellar material. They also show that the speed of the shock wave, which slowed down to 2,300 km/s while interacting with the dust in the ring, has now re-accelerated to 3,600 km/s.

Condensation of warm dust in the ejecta

Images of the SN 1987A debris obtained with the instruments T-ReCS at the 8-m Gemini telescope and VISIR at one of the four VLT unit telescopes. Dates are indicated. An HST image is inserted at the bottom right (credit: Patrice Bouchet, CEA-Saclay).
 
Soon after the SN 1987A outburst, three major groups embarked on photometric monitoring of the supernova: SAAO, CTIO, and ESO. In particular, the ESO team reported an infrared excess which became apparent beginning less than one month after the explosion (March 11, 1987). Three possible interpretations were discussed in this work: the infrared echo hypothesis was discarded, and thermal emission from dust that could have condensed in the ejecta was favoured (in which case the estimated temperature at that epoch was ~1250 K, and the dust mass was approximately 6.6×10⁻⁷ M☉). The possibility that the IR excess could be produced by optically thick free-free emission seemed unlikely because the luminosity in UV photons needed to keep the envelope ionized was much larger than what was available, but it was not ruled out in view of the eventuality of electron scattering, which had not been considered.

However, none of these three groups had sufficiently convincing proof to claim a dusty ejecta on the basis of an IR excess alone.

Distribution of the dust inside the SN 1987A ejecta, from the model of Lucy et al. built at ESO.
 
An independent Australian team advanced several arguments in favour of an echo interpretation. This seemingly straightforward interpretation of the nature of the IR emission was challenged by the ESO group and definitively ruled out after presenting optical evidence for the presence of dust in the SN ejecta. To discriminate between the two interpretations, they considered the implication of the presence of an echoing dust cloud on the optical light curve, and on the existence of diffuse optical emission around the SN. They concluded that the expected optical echo from the cloud should be resolvable, and could be very bright with an integrated visual brightness of magnitude 10.3 around day 650. However, further optical observations, as expressed in the SN light curve, showed no inflection in the light curve at the predicted level. Finally, the ESO team presented a convincing clumpy model for dust condensation in the ejecta.

Although it had been thought more than 50 years ago that dust could form in the ejecta of a core-collapse supernova, which in particular could explain the origin of the dust seen in young galaxies, that was the first time that such condensation was observed. If SN 1987A is a typical representative of its class, then the derived mass of the warm dust formed in the debris of core-collapse supernovae is not sufficient to account for all the dust observed in the early universe. However, a much larger reservoir of ~0.25 solar masses of colder dust (at ~26 K) in the ejecta of SN 1987A was found with the Herschel infrared space telescope in 2011 and confirmed by ALMA later, in 2014.

ALMA observations

Following the confirmation of a large amount of cold dust in the ejecta, ALMA has continued observing SN 1987A. Synchrotron radiation due to shock interaction in the equatorial ring has been measured. Cold (20–100 K) carbon monoxide (CO) and silicon monoxide (SiO) molecules were observed. The data show that the CO and SiO distributions are clumpy, and that different nucleosynthesis products (C, O and Si) are located in different places in the ejecta, indicating the footprints of the stellar interior at the time of the explosion.

Friday, September 6, 2019

History of supernova observation

From Wikipedia, the free encyclopedia
 
The Crab Nebula is a pulsar wind nebula associated with the 1054 supernova.
 
The known history of supernova observation goes back to 185 AD, when supernova SN 185 appeared, the oldest appearance of a supernova recorded by humankind. Several additional supernovae within the Milky Way galaxy have been recorded since that time, with SN 1604 being the most recent supernova to be observed in this galaxy.

Since the development of the telescope, the field of supernova discovery has expanded to other galaxies. These occurrences provide important information on the distances of galaxies. Successful models of supernova behavior have also been developed, and the role of supernovae in the star formation process is now increasingly understood.

Early history

The guest star reported by Chinese astronomers in 1054 is identified as SN 1054. The highlighted passages refer to the supernova.
 
The supernova explosion that formed the Vela Supernova Remnant most likely occurred 10,000–20,000 years ago. In 1976, NASA astronomers suggested that inhabitants of the southern hemisphere may have witnessed this explosion and recorded it symbolically. A year later, archaeologist George Michanowsky recalled some incomprehensible ancient markings in Bolivia that were left by Native Americans. The carvings showed four small circles flanked by two larger circles. The smaller circles resemble stellar groupings in the constellations Vela and Carina. One of the larger circles may represent the star Capella. Another circle is located near the position of the supernova remnant; Michanowsky suggested this may represent the supernova explosion as witnessed by the indigenous residents.

In 185 CE, Chinese astronomers recorded the appearance of a bright star in the sky, and observed that it took about eight months to fade from the sky. It was observed to sparkle like a star and did not move across the heavens like a comet. These observations are consistent with the appearance of a supernova, and this is believed to be the oldest confirmed record of a supernova event by humankind. SN 185 may have also possibly been recorded in Roman literature, though no records have survived. The gaseous shell RCW 86 is suspected as being the remnant of this event, and recent X-ray studies show a good match for the expected age.

In 393 CE, the Chinese recorded the appearance of another "guest star", SN 393, in the modern constellation of Scorpius. Additional unconfirmed supernova events may have been observed in 369 CE, 386 CE, 437 CE, 827 CE and 902 CE. However, these have not yet been associated with a supernova remnant, and so they remain only candidates. Over a span of about 2,000 years, Chinese astronomers recorded a total of twenty such candidate events, including later explosions noted by Islamic, European, and possibly Indian and other observers.

The supernova SN 1006 appeared in the southern constellation of Lupus during the year 1006 CE. This was the brightest recorded star ever to appear in the night sky, and its presence was noted in China, Egypt, Iraq, Italy, Japan and Switzerland. It may also have been noted in France, Syria, and North America. Egyptian physician, astronomer and astrologer Ali ibn Ridwan gave the brightness of this star as one-quarter the brightness of the Moon. Modern astronomers have discovered the faint remnant of this explosion and determined that it was only 7,100 light-years from the Earth.

Supernova SN 1054 was another widely observed event, with Arab, Chinese, and Japanese astronomers recording the star's appearance in 1054 CE. It may also have been recorded by the Anasazi as a petroglyph. This explosion appeared in the constellation of Taurus, where it produced the Crab Nebula remnant. At its peak, the luminosity of SN 1054 may have been four times as bright as Venus, and it remained visible in daylight for 23 days and was visible in the night sky for 653 days.

There are fewer records of supernova SN 1181, which occurred in the constellation Cassiopeia just over a century after SN 1054. It was noted by Chinese and Japanese astronomers, however. The pulsar 3C58 may be the stellar relic from this event.

The Danish astronomer Tycho Brahe was noted for his careful observations of the night sky from his observatory on the island of Hven. In 1572 he noted the appearance of a new star, also in the constellation Cassiopeia. Later called SN 1572, this supernova was associated with a remnant during the 1960s.

A common belief in Europe during this period was the Aristotelian idea that the world beyond the Moon and planets was immutable. So observers argued that the phenomenon was something in the Earth's atmosphere. However Tycho noted that the object remained stationary from night to night—never changing its parallax—so it must lie far away. He published his observations in the small book De nova et nullius aevi memoria prius visa stella (Latin for "Concerning the new and previously unseen star") in 1573. It is from the title of this book that the modern word nova for cataclysmic variable stars is derived.

Multiwavelength X-ray image of the remnant of Kepler's Supernova, SN 1604. (Chandra X-ray Observatory)
 
The most recent supernova to be seen in the Milky Way galaxy was SN 1604, which was observed October 9, 1604. Several people, including Johannes van Heeck, noted the sudden appearance of this star, but it was Johannes Kepler who became noted for his systematic study of the object. He published his observations in the work De Stella nova in pede Serpentarii.

Galileo, like Tycho before him, tried in vain to measure the parallax of this new star, and then argued against the Aristotelian view of immutable heavens. The remnant of this supernova was identified in 1941 at the Mount Wilson Observatory.

Telescope observation

The true nature of the supernova remained obscure for some time. Observers slowly came to recognize a class of stars that undergo long-term periodic fluctuations in luminosity. Both John Russell Hind in 1848 and Norman Pogson in 1863 had charted stars that underwent sudden changes in brightness. However, these received little attention from the astronomical community. Finally, in 1866, English astronomer William Huggins made the first spectroscopic observations of a nova, discovering lines of hydrogen in the unusual spectrum of the recurrent nova T Coronae Borealis. Huggins proposed a cataclysmic explosion as the underlying mechanism, and his efforts drew interest from other astronomers.

Animation showing the sky position of supernovae discovered since 1885. Some recent survey contributions are highlighted in color.
 
In 1885, a nova-like outburst was observed in the direction of the Andromeda Galaxy by Ernst Hartwig in Estonia. S Andromedae increased to 6th magnitude, outshining the entire nucleus of the galaxy, then faded in a manner much like a nova. In 1917, George W. Ritchey measured the distance to the Andromeda Galaxy and discovered it lay much farther than had previously been thought. This meant that S Andromedae, which did not just lie along the line of sight to the galaxy but had actually resided in the nucleus, released a much greater amount of energy than was typical for a nova.

Early work on this new category of nova was performed during the 1930s by Walter Baade and Fritz Zwicky at Mount Wilson Observatory. They identified S Andromedae, which they considered a typical supernova, as an explosive event that released radiation approximately equal to the Sun's total energy output over 10⁷ years. They decided to call this new class of cataclysmic variables super-novae, and postulated that the energy was generated by the gravitational collapse of ordinary stars into neutron stars. The name super-novae was first used in a 1931 lecture at Caltech by Zwicky, then used publicly in 1933 at a meeting of the American Physical Society. By 1938, the hyphen had been lost and the modern name was in use.

Although supernovae are relatively rare events, occurring on average about once every 50 years in the Milky Way, observations of distant galaxies allowed supernovae to be discovered and examined more frequently. The first supernova detection patrol was begun by Zwicky in 1933. He was joined by Josef J. Johnson from Caltech in 1936. Using a 45-cm Schmidt telescope at Palomar Observatory, they discovered twelve new supernovae within three years by comparing new photographic plates to reference images of extragalactic regions.

In 1938, Walter Baade became the first astronomer to identify a nebula as a supernova remnant when he suggested that the Crab Nebula was the remains of SN 1054. He noted that, while it had the appearance of a planetary nebula, the measured velocity of expansion was much too large to belong to that classification. During the same year, Baade first proposed the use of the Type Ia supernova as a secondary distance indicator. Later, the work of Allan Sandage and Gustav Tammann helped refine the process so that Type Ia supernovae became a type of standard candle for measuring large distances across the cosmos.

The first spectral classification of these distant supernovae was performed by Rudolph Minkowski in 1941. He categorized them into two types, based on whether or not lines of the element hydrogen appeared in the supernova spectrum. Zwicky later proposed additional types III, IV, and V, although these are no longer used and now appear to be associated with single peculiar supernova types. Further sub-division of the spectra categories resulted in the modern supernova classification scheme.

In the aftermath of the Second World War, Fred Hoyle worked on the problem of how the various observed elements in the universe were produced. In 1946 he proposed that a massive star could generate the necessary thermonuclear reactions, and the nuclear reactions of heavy elements were responsible for the removal of energy necessary for a gravitational collapse to occur. The collapsing star became rotationally unstable, and produced an explosive expulsion of elements that were distributed into interstellar space. The concept that rapid nuclear fusion was the source of energy for a supernova explosion was developed by Hoyle and William Fowler during the 1960s.

The first computer-controlled search for supernovae was begun in the 1960s at Northwestern University. They built a 24-inch telescope at Corralitos Observatory in New Mexico that could be repositioned under computer control. The telescope displayed a new galaxy each minute, with observers checking the view on a television screen. By this means, they discovered 14 supernovae over a period of two years.

1970–1999

The modern standard model for Type Ia supernova explosions is founded on a proposal by Whelan and Iben in 1973, and is based upon a mass-transfer scenario to a degenerate companion star. In particular, the light curve of SN 1972e in NGC 5253, which was observed for more than a year, was followed long enough to discover that after its broad "hump" in brightness, the supernova faded at a nearly constant rate of about 0.01 magnitudes per day. Translated to another system of units, this is nearly the same as the decay rate of cobalt-56 (56Co), whose half-life is 77 days. The degenerate explosion model predicts the production of about a solar mass of nickel-56 (56Ni) by the exploding star. The 56Ni decays with a half-life of about 6.1 days to 56Co, and the decay of the nickel and cobalt provides the energy radiated away by the supernova late in its history. The agreement in both total energy production and the fade rate between the theoretical models and the observations of SN 1972e led to rapid acceptance of the degenerate-explosion model.
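The quoted fade rate follows directly from the half-life. The sketch below (an added consistency check, not part of the original article) converts exponential 56Co decay into a slope in magnitudes per day:

    import math

    half_life_days = 77.0                     # 56Co
    # magnitudes are -2.5 log10(L), and L halves every 77 days
    rate = 2.5 * math.log10(2.0) / half_life_days
    print(f"fade rate ~ {rate:.4f} mag/day")  # ~0.0098, close to the observed 0.01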

Through observation of the light curves of many Type Ia supernovae, it was discovered that they appear to have a common peak luminosity. Because this peak luminosity is known, measuring the apparent brightness of such an event allows the distance to its host galaxy to be estimated with good accuracy. Thus this category of supernovae has become highly useful as a standard candle for measuring cosmic distances. In 1998, the High-Z Supernova Search and the Supernova Cosmology Project discovered that the most distant Type Ia supernovae appeared dimmer than expected. This provided evidence that the expansion of the universe may be accelerating.
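The standard-candle method rests on the distance modulus relation between apparent magnitude m, absolute magnitude M, and distance d:

m - M = 5 \log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right)

As an illustration (with assumed numbers, not taken from the text above): using a typical Type Ia peak absolute magnitude of M ≈ -19.3, an observed peak of m = 16.7 gives m - M = 36, so d = 10^{(36/5)+1} pc ≈ 1.6 × 10^8 pc, roughly 500 million light-years.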

Although no supernova has been observed in the Milky Way since 1604, it appears that a supernova exploded in the constellation Cassiopeia about 300 years ago, around the year 1667 or 1680. The remnant of this explosion, Cassiopeia A, is heavily obscured by interstellar dust, which is possibly why it did not make a notable appearance. However, it can be observed in other parts of the spectrum, and it is currently the brightest radio source beyond our solar system.

Supernova 1987A remnant near the center
 
In 1987, Supernova 1987A in the Large Magellanic Cloud was observed within hours of its start. It was the first supernova to be detected through its neutrino emission and the first to be observed across every band of the electromagnetic spectrum. The relative proximity of this supernova has allowed detailed observation, and it provided the first opportunity for modern theories of supernova formation to be tested against observations.

The rate of supernova discovery steadily increased throughout the twentieth century. In the 1990s, several automated supernova search programs were initiated. The Leuschner Observatory Supernova Search program was begun in 1992 at Leuschner Observatory. It was joined the same year by the Berkeley Automated Imaging Telescope program. These were succeeded in 1996 by the Katzman Automatic Imaging Telescope at Lick Observatory, which was primarily used for the Lick Observatory Supernova Search (LOSS). By 2000, the Lick program had resulted in the discovery of 96 supernovae, making it the world's most successful supernova search program.

In the late 1990s it was proposed that recent supernova remnants could be found by looking for gamma rays from the decay of titanium-44. This isotope has a half-life of about 60 years, and its gamma rays can traverse the galaxy easily, so it permits us to see any remnants from the last millennium or so. Two sources were found: the previously discovered Cassiopeia A remnant, and the RX J0852.0-4622 remnant, which had just been discovered overlapping the Vela Supernova Remnant.

In 1999 a star within IC 755 was seen to explode as a supernova and was designated SN 1999an.
 
This remnant (RX J0852.0-4622) had been found apparently in front of the larger Vela Supernova Remnant. The gamma rays from the decay of titanium-44 showed that it must have exploded fairly recently (perhaps around 1200 AD), but there is no historical record of it. The flux of gamma rays and X-rays indicates that the supernova was relatively close to us (perhaps 200 parsecs or about 650 ly). If so, this is a surprising event, because supernovae less than 200 parsecs away are estimated to occur less than once per 100,000 years.
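The age estimate works because the surviving fraction of titanium-44 falls off exponentially with time. Assuming the roughly 60-year half-life noted above (a minimal sketch; the numbers are illustrative):

\frac{N(t)}{N_0} = 2^{-t/t_{1/2}}

A remnant about 340 years old, such as Cassiopeia A, would therefore retain 2^{-340/60} ≈ 2% of its initial 44Ti. The observed gamma-ray flux additionally falls off as 1/(4πd²) with distance d, which is why flux measurements can be used to constrain how recent and how close such a remnant is.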

2000 to present

Cosmic lens MACS J1720+35 helps Hubble to find a distant supernova.
 
The "SN 2003fg" was discovered in a forming galaxy in 2003. The appearance of this supernova was studied in "real-time", and it has posed several major physical questions as it seems more massive than the Chandrasekhar limit would allow.

First observed in September 2006, the supernova SN 2006gy, which occurred in a galaxy called NGC 1260 (240 million light-years away), is the largest and, until the confirmation of the luminosity of SN 2005ap in October 2007, was the most luminous supernova ever observed. The explosion was at least 100 times more luminous than any previously observed supernova, with the progenitor star estimated to be about 150 times more massive than the Sun. Although this event had some characteristics of a Type Ia supernova, hydrogen was found in the spectrum. SN 2006gy is thought to be a likely candidate for a pair-instability supernova. SN 2005ap, which was discovered by Robert Quimby, who also discovered SN 2006gy, was about twice as bright as SN 2006gy and about 300 times as bright as a normal Type II supernova.

Host Galaxies of Calcium-Rich Supernovae.
 
On May 21, 2008, astronomers announced that they had for the first time caught a supernova on camera just as it was exploding. By chance, a burst of X-rays was noticed while looking at galaxy NGC 2770, 88 million light-years from Earth, and a variety of telescopes were aimed in that direction just in time to capture what has been named SN 2008D. "This eventually confirmed that the big X-ray blast marked the birth of a supernova," said Alicia Soderberg of Princeton University.

One of the many amateur astronomers looking for supernovae, Caroline Moore, a member of the Puckett Observatory Supernova Search Team, found supernova SN 2008ha in late November 2008. At the age of 14, she was declared the youngest person ever to find a supernova. However, in January 2011, 10-year-old Kathryn Aurora Gray from Canada was reported to have discovered a supernova, making her the youngest ever to do so. Her father and a friend spotted SN 2010lt, a magnitude 17 supernova in the galaxy UGC 3378 in the constellation Camelopardalis, about 240 million light-years away.

In 2009, researchers found nitrates in ice cores from Antarctica at depths corresponding to the known supernovae of 1006 and 1054 AD, as well as from around 1060 AD. The nitrates were apparently formed from nitrogen oxides created by gamma rays from the supernovae. This technique should be able to detect supernovae going back several thousand years.

On November 15, 2010, astronomers using NASA's Chandra X-ray Observatory announced that, while viewing the remnant of SN 1979C in the galaxy Messier 100, they had discovered an object which could be a young, 30-year-old black hole. NASA also noted the possibility that this object could be a spinning neutron star producing a wind of high-energy particles.

On August 24, 2011, the Palomar Transient Factory automated survey discovered a new Type Ia supernova (SN 2011fe) in the Pinwheel Galaxy (M101) shortly after it burst into existence. At only 21 million light-years away and detected so soon after the event began, the supernova allows scientists to learn more about the early development of these types of supernovae.

On 16 March 2012, a Type II supernova, designated as SN 2012aw, was discovered in M95.

On January 22, 2014, students at the University of London Observatory spotted an exploding star, SN 2014J, in the nearby galaxy M82 (the Cigar Galaxy). At a distance of around 12 million light-years, the supernova is one of the nearest to be observed in recent decades.

Future

The estimated rate of supernova production in a galaxy the size of the Milky Way is about twice per century. This is much higher than the actual observed rate, implying that a portion of these events has been obscured from the Earth by interstellar dust. The deployment of new instruments that can observe across a wide range of the electromagnetic spectrum, along with neutrino detectors, means that the next such event will almost certainly be detected.
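To put that rate in perspective (an illustrative calculation, not from the source): if Galactic supernovae occur as a Poisson process with a mean rate of λ = 0.02 per year, the probability of at least one event within the next T = 50 years is

P = 1 - e^{-\lambda T} = 1 - e^{-1} \approx 63\%

which is why a sustained, dust-penetrating watch over the coming decades has a good chance of catching one.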

The Large Synoptic Survey Telescope (LSST) is predicted to discover three to four million supernovae during its ten-year survey, over a broad range of distances.
