
Thursday, April 11, 2019

Accretion disk

From Wikipedia, the free encyclopedia
Image taken by the Hubble Space Telescope of what may be gas accreting onto a black hole in the elliptical galaxy NGC 4261
 
An accretion disk is a structure (often a circumstellar disk) formed by diffuse material in orbital motion around a massive central body. The central body is typically a star. Friction causes orbiting material in the disk to spiral inward towards the central body. Gravitational and frictional forces compress and raise the temperature of the material, causing the emission of electromagnetic radiation. The frequency range of that radiation depends on the central object's mass. Accretion disks of young stars and protostars radiate in the infrared; those around neutron stars and black holes in the X-ray part of the spectrum. The study of oscillation modes in accretion disks is referred to as diskoseismology.

Manifestations

Accretion disks are a ubiquitous phenomenon in astrophysics; active galactic nuclei, protoplanetary disks, and gamma-ray bursts all involve accretion disks. These disks very often give rise to astrophysical jets coming from the vicinity of the central object. Jets are an efficient way for the star-disk system to shed angular momentum without losing too much mass.

The most spectacular accretion disks found in nature are those of active galactic nuclei and of quasars, which are thought to be powered by massive black holes at the centers of galaxies. As matter enters the accretion disc, it follows a trajectory called a tendex line, which describes an inward spiral. This is because particles rub and bounce against each other in a turbulent flow, causing frictional heating which radiates energy away, reducing the particles' angular momentum and allowing them to drift inwards, driving the inward spiral. The loss of angular momentum manifests as a reduction in velocity; at a slower velocity, the particle must adopt a lower orbit. As the particle falls to this lower orbit, a portion of its gravitational potential energy is converted to kinetic energy and the particle gains speed. Thus, the particle has lost energy even though it is now travelling faster than before; however, it has lost angular momentum. As a particle orbits closer and closer, its velocity increases; as its velocity increases, frictional heating increases, and more and more of the particle's potential energy (relative to the black hole) is radiated away. The accretion disk of a black hole is hot enough to emit X-rays just outside the event horizon. The large luminosity of quasars is believed to be a result of gas being accreted by supermassive black holes. Elliptical accretion disks formed by the tidal disruption of stars can be typical in galactic nuclei and quasars. The accretion process can convert about 10 percent to over 40 percent of the mass of an object into energy, compared with around 0.7 percent for nuclear fusion processes.

In close binary systems the more massive primary component evolves faster and has already become a white dwarf, a neutron star, or a black hole by the time the less massive companion reaches the giant state and exceeds its Roche lobe. A gas flow then develops from the companion star to the primary. Conservation of angular momentum prevents a straight flow from one star to the other, and an accretion disk forms instead.
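A rough numerical illustration of the efficiency figures above (a minimal Python sketch, not part of the original article): the energy released per kilogram of matter at 10 and 40 percent accretion efficiency versus roughly 0.7 percent for hydrogen fusion, using E = η·c² per unit mass.

# Energy released per kilogram of matter for a given mass-to-energy efficiency.
c = 2.998e8  # speed of light, m/s

for label, eta in [("accretion, low end", 0.10),
                   ("accretion, high end", 0.40),
                   ("hydrogen fusion", 0.007)]:
    print(f"{label:>19}: {eta * c**2:.2e} J per kg")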

Accretion disks surrounding T Tauri stars or Herbig stars are called protoplanetary disks because they are thought to be the progenitors of planetary systems. The accreted gas in this case comes from the molecular cloud out of which the star has formed rather than a companion star. 

Artist's view of a star with accretion disk

Accretion disk physics

Artist's conception of a black hole drawing matter from a nearby star, forming an accretion disk.
 
In the 1940s, models were first derived from basic physical principles. In order to agree with observations, those models had to invoke a yet unknown mechanism for angular momentum redistribution. If matter is to fall inwards it must lose not only gravitational energy but also lose angular momentum. Since the total angular momentum of the disk is conserved, the angular momentum loss of the mass falling into the center has to be compensated by an angular momentum gain of the mass far from the center. In other words, angular momentum should be transported outwards for matter to accrete. According to the Rayleigh stability criterion,

∂(R²Ω)/∂R > 0,

where Ω represents the angular velocity of a fluid element and R its distance to the rotation center, an accretion disk is expected to be a laminar flow. This prevents the existence of a hydrodynamic mechanism for angular momentum transport.

On one hand, it was clear that viscous stresses would eventually cause the matter towards the center to heat up and radiate away some of its gravitational energy. On the other hand, viscosity itself was not enough to explain the transport of angular momentum to the exterior parts of the disk. Turbulence-enhanced viscosity was the mechanism thought to be responsible for such angular-momentum redistribution, although the origin of the turbulence itself was not well understood. The conventional α-model (discussed below) introduces an adjustable parameter α describing the effective increase of viscosity due to turbulent eddies within the disk. In 1991, with the rediscovery of the magnetorotational instability (MRI), S. A. Balbus and J. F. Hawley established that a weakly magnetized disk accreting around a heavy, compact central object would be highly unstable, providing a direct mechanism for angular-momentum redistribution.

α-Disk Model

Shakura and Sunyaev (1973) proposed turbulence in the gas as the source of an increased viscosity. Assuming subsonic turbulence and the disk height as an upper limit for the size of the eddies, the disk viscosity can be estimated as ν = α c_s H, where c_s is the sound speed, H is the scale height of the disk, and α is a free parameter between zero (no accretion) and approximately one. In a turbulent medium ν ≈ v_turb l_turb, where v_turb is the velocity of turbulent cells relative to the mean gas motion, and l_turb is the size of the largest turbulent cells, which is estimated as l_turb ≈ H = c_s/Ω and v_turb ≈ c_s, where Ω = (G M)^(1/2) r^(-3/2) is the Keplerian orbital angular velocity and r is the radial distance from the central object of mass M. By using the equation of hydrostatic equilibrium, combined with conservation of angular momentum and assuming that the disk is thin, the equations of disk structure may be solved in terms of the α parameter. Many of the observables depend only weakly on α, so this theory is predictive even though it has a free parameter.
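To make the prescription concrete, here is a minimal Python sketch (illustrative only; the values of α, temperature, radius and central mass are assumptions, not taken from the article) that evaluates ν = α c_s H with H ≈ c_s/Ω for one point in a disk around a one-solar-mass accretor.

import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23    # Boltzmann constant, J/K
m_H = 1.673e-27    # hydrogen mass, kg
M_sun = 1.989e30   # solar mass, kg

def alpha_viscosity(alpha, temperature_K, radius_m, central_mass_kg, mu=0.6):
    c_s = math.sqrt(k_B * temperature_K / (mu * m_H))      # isothermal sound speed
    omega = math.sqrt(G * central_mass_kg / radius_m**3)   # Keplerian angular velocity
    H = c_s / omega                                        # disk scale height
    return alpha * c_s * H                                 # nu = alpha * c_s * H

# alpha = 0.01, T = 1e4 K, r = 1e8 m (1e10 cm) around a 1 M_sun accretor (assumed values)
print(f"nu ≈ {alpha_viscosity(0.01, 1e4, 1e8, M_sun):.2e} m^2/s")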

Using Kramers' law for the opacity it is found that

T_c = 1.4×10⁴ α^(-1/5) Ṁ₁₆^(3/10) m₁^(1/4) R₁₀^(-3/4) f^(6/5) K
ρ = 3.1×10⁻⁸ α^(-7/10) Ṁ₁₆^(11/20) m₁^(5/8) R₁₀^(-15/8) f^(11/5) g cm⁻³

where T_c and ρ are the mid-plane temperature and density respectively, Ṁ₁₆ is the accretion rate in units of 10¹⁶ g s⁻¹, m₁ is the mass of the central accreting object in units of a solar mass, R₁₀ is the radius of a point in the disk in units of 10¹⁰ cm, and f = [1 − (R⋆/R)^(1/2)]^(1/4), where R⋆ is the radius where angular momentum stops being transported inwards.
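Taking the scaling relations above at face value, the short sketch below evaluates the mid-plane temperature and density for assumed sample inputs (α = 0.1, an accretion rate of 10¹⁶ g/s, a one-solar-mass accretor, a point at 10¹⁰ cm, and f = 1, i.e. far outside the inner edge); none of these input values come from the article.

# Mid-plane temperature (K) and density (g/cm^3) from the Kramers-opacity
# scaling relations quoted above; inputs are in the units defined in the text.
def midplane_temperature_K(alpha, mdot16, m1, r10, f=1.0):
    return 1.4e4 * alpha**(-1/5) * mdot16**(3/10) * m1**(1/4) * r10**(-3/4) * f**(6/5)

def midplane_density_cgs(alpha, mdot16, m1, r10, f=1.0):
    return 3.1e-8 * alpha**(-7/10) * mdot16**(11/20) * m1**(5/8) * r10**(-15/8) * f**(11/5)

print(f"T_c ≈ {midplane_temperature_K(0.1, 1.0, 1.0, 1.0):.2e} K")
print(f"rho ≈ {midplane_density_cgs(0.1, 1.0, 1.0, 1.0):.2e} g/cm^3")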

The Shakura-Sunyaev α-disk model is both thermally and viscously unstable. An alternative model, known as the β-disk, which is stable in both senses, assumes that the viscosity is proportional to the gas pressure, ν ∝ α p_gas. In the standard Shakura-Sunyaev model, viscosity is assumed to be proportional to the total pressure p_tot = p_rad + p_gas = ρ c_s², since ν = α c_s H = α c_s²/Ω = α p_tot/(ρ Ω).

The Shakura-Sunyaev model assumes that the disk is in local thermal equilibrium, and can radiate its heat efficiently. In this case, the disk radiates away the viscous heat, cools, and becomes geometrically thin. However, this assumption may break down. In the radiatively inefficient case, the disk may "puff up" into a torus or some other three-dimensional solution like an Advection Dominated Accretion Flow (ADAF). The ADAF solutions usually require that the accretion rate is smaller than a few percent of the Eddington limit. Another extreme is the case of Saturn's rings, where the disk is so gas poor that its angular momentum transport is dominated by solid body collisions and disk-moon gravitational interactions. The model is in agreement with recent astrophysical measurements using gravitational lensing.

Magnetorotational instability

HH-30, a Herbig–Haro object surrounded by an accretion disk
 
Balbus and Hawley (1991) proposed a mechanism which involves magnetic fields to generate the angular momentum transport. A simple system displaying this mechanism is a gas disk in the presence of a weak axial magnetic field. Two radially neighboring fluid elements behave as two mass points connected by a massless spring, with the spring tension playing the role of the magnetic tension. In a Keplerian disk the inner fluid element orbits more rapidly than the outer, causing the spring to stretch. The inner fluid element is then forced by the spring to slow down, correspondingly reducing its angular momentum and causing it to move to a lower orbit. The outer fluid element, being pulled forward, speeds up, increasing its angular momentum and moving to a larger-radius orbit. The spring tension increases as the two fluid elements move further apart, and the process runs away.

It can be shown that in the presence of such a spring-like tension the Rayleigh stability criterion is replaced by

dΩ²/d ln R > 0.
Most astrophysical disks do not meet this criterion and are therefore prone to this magnetorotational instability. The magnetic fields present in astrophysical objects (required for the instability to occur) are believed to be generated via dynamo action.

Magnetic fields and jets

Accretion disks are usually assumed to be threaded by the external magnetic fields present in the interstellar medium. These fields are typically weak (about few micro-Gauss), but they can get anchored to the matter in the disk, because of its high electrical conductivity, and carried inward toward the central star. This process can concentrate the magnetic flux around the centre of the disk giving rise to very strong magnetic fields. Formation of powerful astrophysical jets along the rotation axis of accretion disks requires a large scale poloidal magnetic field in the inner regions of the disk.

Such magnetic fields may be advected inward from the interstellar medium or generated by a magnetic dynamo within the disk. Magnetic field strengths of at least order 100 Gauss seem necessary for the magneto-centrifugal mechanism to launch powerful jets. There are problems, however, in carrying external magnetic flux inward towards the central star of the disk. High electric conductivity dictates that the magnetic field is frozen into the matter, which is being accreted onto the central object at a slow velocity. However, the plasma is not a perfect electric conductor, so there is always some degree of dissipation. The magnetic field diffuses away faster than the rate at which it is being carried inward by accretion of matter.

A simple solution is assuming a viscosity much larger than the magnetic diffusivity in the disk. However, numerical simulations and theoretical models show that the viscosity and magnetic diffusivity have almost the same order of magnitude in magneto-rotationally turbulent disks. Some other factors may possibly affect the advection/diffusion rate: reduced turbulent magnetic diffusion in the surface layers; reduction of the Shakura-Sunyaev viscosity by magnetic fields; and the generation of large-scale fields by small-scale MHD turbulence – a large-scale dynamo.

Analytic models of sub-Eddington accretion disks (thin disks, ADAFs)

When the accretion rate is sub-Eddington and the opacity very high, the standard thin accretion disk is formed. It is geometrically thin in the vertical direction (has a disk-like shape), and is made of a relatively cold gas, with a negligible radiation pressure. The gas descends on very tight spirals, resembling almost circular, almost free (Keplerian) orbits. Thin disks are relatively luminous and they have thermal electromagnetic spectra, i.e. not much different from that of a sum of black bodies. Radiative cooling is very efficient in thin disks. The classic 1974 work by Shakura and Sunyaev on thin accretion disks is one of the most often quoted papers in modern astrophysics. Thin disks were independently worked out by Lynden-Bell, Pringle and Rees. Over the past thirty years, Pringle has contributed many key results to accretion disk theory, and wrote the classic 1981 review that for many years was the main source of information about accretion disks and is still very useful today.

Simulation by J.A. Marck of optical appearance of Schwarzschild black hole with thin (Keplerian) disk.
 
A fully general relativistic treatment, as needed for the inner part of the disk when the central object is a black hole, has been provided by Page and Thorne and was used to produce simulated optical images by Luminet and Marck. Although such a system is intrinsically symmetric, its image is not: the relativistic rotation speed needed for centrifugal equilibrium in the very strong gravitational field near the black hole produces a strong Doppler redshift on the receding side (taken here to be on the right), whereas there is a strong blueshift on the approaching side. Due to light bending, the disk appears distorted but is nowhere hidden by the black hole (in contrast with what is shown in many misinformed artist's impressions).

When the accretion rate is sub-Eddington and the opacity very low, an ADAF is formed. This type of accretion disk was predicted in 1977 by Ichimaru. Although Ichimaru's paper was largely ignored, some elements of the ADAF model were present in the influential 1982 ion-tori paper by Rees, Phinney, Begelman and Blandford. ADAFs started to be intensely studied by many authors only after their rediscovery in the mid-1990s by Narayan and Yi, and independently by Abramowicz, Chen, Kato, Lasota (who coined the name ADAF), and Regev. The most important contributions to astrophysical applications of ADAFs have been made by Narayan and his collaborators. ADAFs are cooled by advection (heat captured in matter) rather than by radiation. They are very radiatively inefficient, geometrically extended, similar in shape to a sphere (or a "corona") rather than a disk, and very hot (close to the virial temperature). Because of their low efficiency, ADAFs are much less luminous than the Shakura-Sunyaev thin disks. ADAFs emit power-law, non-thermal radiation, often with a strong Compton component.

Blurring of X-rays near Black hole (NuSTAR; 12 August 2014).
Credit: NASA/JPL-Caltech

Analytic models of super-Eddington accretion disks (slim disks, Polish doughnuts)

The theory of highly super-Eddington black hole accretion, Ṁ ≫ Ṁ_Edd, was developed in the 1980s by Abramowicz, Jaroszynski, Paczyński, Sikora and others in terms of "Polish doughnuts" (the name was coined by Rees). Polish doughnuts are low-viscosity, optically thick, radiation-pressure-supported accretion disks cooled by advection. They are radiatively very inefficient. Polish doughnuts resemble in shape a fat torus (a doughnut) with two narrow funnels along the rotation axis. The funnels collimate the radiation into beams with highly super-Eddington luminosities.

Slim disks (name coined by Kolakowska) have only moderately super-Eddington accretion rates, Ṁ ≥ Ṁ_Edd, rather disk-like shapes, and almost thermal spectra. They are cooled by advection, and are radiatively inefficient. They were introduced by Abramowicz, Lasota, Czerny and Szuszkiewicz in 1988.

Excretion disk

The opposite of an accretion disk is an excretion disk where instead of material accreting from a disk on to a central object, material is excreted from the center outwards on to the disk. Excretion disks are formed when stars merge.

Supermassive black hole (updated from 2/19)

From Wikipedia, the free encyclopedia

The supermassive black hole inside the core of the supergiant elliptical galaxy Messier 87 in the constellation Virgo. Its mass, estimated at (7.22 +0.34/−0.40)×10⁹ M☉, is billions of times that of the Sun. It was the first black hole to be directly imaged by the Event Horizon Telescope (image released on 10 April 2019). The ring has a diameter of some 700 AU, around ten times larger than the orbit of Neptune around the Sun. Its apparent diameter is 42±3 μas.

A supermassive black hole (SMBH or sometimes SBH) is the largest type of black hole, containing a mass on the order of hundreds of thousands to billions of times the mass of the Sun (M☉). This is a class of astronomical objects that has undergone gravitational collapse, leaving behind a spheroidal region of space from which nothing can escape, not even light. Observational evidence indicates that all, or nearly all, massive galaxies contain a supermassive black hole, located at the galaxy's center. In the case of the Milky Way, the supermassive black hole corresponds to the location of Sagittarius A* at the Galactic Core. Accretion of interstellar gas onto supermassive black holes is the process responsible for powering quasars and other types of active galactic nuclei.

Description

Supermassive black holes have properties that distinguish them from black holes of lower mass. First, the average density of a SMBH (defined as the mass of the black hole divided by the volume within its Schwarzschild radius) can be less than the density of water in the case of some SMBHs. This is because the Schwarzschild radius is directly proportional to the mass. Since the volume of a spherical object (such as the event horizon of a non-rotating black hole) is directly proportional to the cube of the radius, the density of a black hole is inversely proportional to the square of the mass, and thus higher-mass black holes have lower average density. In addition, the tidal forces in the vicinity of the event horizon are significantly weaker for supermassive black holes. The tidal force on a body at the event horizon is likewise inversely proportional to the square of the mass: a person on the surface of the Earth and one at the event horizon of a 10 million M☉ black hole experience about the same tidal force between their head and feet. Unlike with stellar-mass black holes, one would not experience significant tidal force until very deep into the black hole.
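A minimal Python sketch of the scalings just described (illustrative only, not from the article): both the mean density inside the Schwarzschild radius and the head-to-feet tidal acceleration at the horizon fall off as the inverse square of the mass.

import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_sun = 1.989e30   # kg

def schwarzschild_radius(m_kg):
    return 2 * G * m_kg / c**2

def mean_density(m_kg):
    r = schwarzschild_radius(m_kg)
    return m_kg / (4 / 3 * math.pi * r**3)          # kg/m^3; water is ~1000 kg/m^3

def tidal_acceleration(m_kg, dr=2.0):
    # Newtonian difference in gravitational acceleration over dr ≈ 2 m (head to feet).
    r = schwarzschild_radius(m_kg)
    return 2 * G * m_kg * dr / r**3

for m in (10 * M_sun, 1e7 * M_sun):                 # stellar-mass hole vs. 10 million M_sun SMBH
    print(f"M = {m / M_sun:.0e} M_sun: density = {mean_density(m):.2e} kg/m^3, "
          f"tidal = {tidal_acceleration(m):.2e} m/s^2")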

Some astronomers have begun labeling black holes of at least 10 billion M☉ as ultramassive black holes. Most of these (such as TON 618) are associated with exceptionally energetic quasars.

History of research

The story of how supermassive black holes were found began with the investigation by Maarten Schmidt of the radio source 3C 273 in 1963. Initially this was thought to be a star, but the spectrum proved puzzling. The puzzling features were eventually determined to be hydrogen emission lines that had been redshifted, indicating the object was moving away from the Earth. Hubble's law showed that the object was located several billion light-years away, and thus must be emitting the energy equivalent of hundreds of galaxies. The rate of light variations of the source, dubbed a quasi-stellar object, or quasar, suggested the emitting region had a diameter of one parsec or less. Four such sources had been identified by 1964.

In 1963, Fred Hoyle and W. A. Fowler proposed the existence of hydrogen-burning supermassive stars (SMS) as an explanation for the compact dimensions and high energy output of quasars. These would have a mass of about 10⁵–10⁹ M☉. However, Richard Feynman noted stars above a certain critical mass are dynamically unstable and would collapse into a black hole, at least if they were non-rotating. Fowler then proposed that these supermassive stars would undergo a series of collapse and explosion oscillations, thereby explaining the energy output pattern. Appenzeller and Fricke (1972) built models of this behavior, but found that the resulting star would still undergo collapse, concluding that a non-rotating 0.75×10⁶ M☉ SMS "cannot escape collapse to a black hole by burning its hydrogen through the CNO cycle".

Edwin E. Salpeter and Yakov B. Zel'dovich made the proposal in 1964 that matter falling onto a massive compact object would explain the properties of quasars. It would require a mass of around 10⁸ M☉ to match the output of these objects. Donald Lynden-Bell noted in 1969 that the infalling gas would form a flat disk that spirals into the central "Schwarzschild throat". He noted that the relatively low output of nearby galactic cores implied these were old, inactive quasars. Meanwhile, in 1967, Martin Ryle and Malcolm Longair suggested that nearly all sources of extra-galactic radio emission could be explained by a model in which particles are ejected from galaxies at relativistic velocities; meaning they are moving near the speed of light. Martin Ryle, Malcolm Longair, and Peter Scheuer then proposed in 1973 that the compact central nucleus could be the original energy source for these relativistic jets.

Arthur M. Wolfe and Geoffrey Burbidge noted in 1970 that the large velocity dispersion of the stars in the nuclear region of elliptical galaxies could only be explained by a large mass concentration at the nucleus; larger than could be explained by ordinary stars. They showed that the behavior could be explained by a massive black hole with up to 10¹⁰ M☉, or a large number of smaller black holes with masses below 10³ M☉. Dynamical evidence for a massive dark object was found at the core of the active elliptical galaxy Messier 87 in 1978, initially estimated at 5×10⁹ M☉. Discovery of similar behavior in other galaxies soon followed, including the Andromeda Galaxy in 1984 and the Sombrero Galaxy in 1988.

Donald Lynden-Bell and Martin Rees hypothesized in 1971 that the center of the Milky Way galaxy would contain a massive black hole. Sagittarius A* was discovered and named on February 13 and 15, 1974, by astronomers Bruce Balick and Robert Brown using the Green Bank Interferometer of the National Radio Astronomy Observatory. They discovered a radio source that emits synchrotron radiation; it was found to be dense and immobile because of its gravitation. This was, therefore, the first indication that a supermassive black hole exists in the center of the Milky Way. 

The Hubble Space Telescope, launched in 1990, provided the resolution needed to perform more refined observations of galactic nuclei. In 1994 the Faint Object Spectrograph on the Hubble was used to observe Messier 87, finding that ionized gas was orbiting the central part of the nucleus at a velocity of ±500 km/s. The data indicated a concentrated mass of (2.4±0.7)×10⁹ M☉ lay within a 0.25-arcsecond span, providing strong evidence of a supermassive black hole. Using the Very Long Baseline Array to observe Messier 106, Miyoshi et al. (1995) were able to demonstrate that the emission from an H₂O maser in this galaxy came from a gaseous disk in the nucleus that orbited a concentrated mass of 3.6×10⁷ M☉, which was constrained to a radius of 0.13 parsecs. They noted that a swarm of solar mass black holes within a radius this small would not survive for long without undergoing collisions, making a supermassive black hole the sole viable candidate.

Formation

An artist's conception of a supermassive black hole surrounded by an accretion disk and emitting a relativistic jet
 
The origin of supermassive black holes remains an open field of research. Astrophysicists agree that once a black hole is in place in the center of a galaxy, it can grow by accretion of matter and by merging with other black holes. There are, however, several hypotheses for the formation mechanisms and initial masses of the progenitors, or "seeds", of supermassive black holes. 

One hypothesis is that the seeds are black holes of tens or perhaps hundreds of solar masses that are left behind by the explosions of massive stars and grow by accretion of matter. Another model hypothesizes that before the first stars, large gas clouds could collapse into a "quasi-star", which would in turn collapse into a black hole of around 20 M☉. These stars may have also been formed by dark matter halos drawing in enormous amounts of gas by gravity, which would then produce supermassive stars with tens of thousands of solar masses. The "quasi-star" becomes unstable to radial perturbations because of electron-positron pair production in its core and could collapse directly into a black hole without a supernova explosion (which would eject most of its mass, preventing the black hole from growing as fast). Given sufficient mass nearby, the black hole could accrete to become an intermediate-mass black hole and possibly a SMBH if the accretion rate persists.

Artist's impression of the huge outflow ejected from the quasar SDSS J1106+1939
 
Artist's illustration of galaxy with jets from a supermassive black hole.
 
Another model involves a dense stellar cluster undergoing core-collapse as the negative heat capacity of the system drives the velocity dispersion in the core to relativistic speeds. Finally, primordial black holes could have been produced directly from external pressure in the first moments after the Big Bang. These primordial black holes would then have more time than any of the above models to accrete, allowing them sufficient time to reach supermassive sizes. Formation of black holes from the deaths of the first stars has been extensively studied and corroborated by observations. The other models for black hole formation listed above are theoretical.

The difficulty in forming a supermassive black hole resides in the need for enough matter to be in a small enough volume. This matter needs to have very little angular momentum in order for this to happen. Normally, the process of accretion involves transporting a large initial endowment of angular momentum outwards, and this appears to be the limiting factor in black hole growth. This is a major component of the theory of accretion disks. Gas accretion is the most efficient and also the most conspicuous way in which black holes grow. The majority of the mass growth of supermassive black holes is thought to occur through episodes of rapid gas accretion, which are observable as active galactic nuclei or quasars. Observations reveal that quasars were much more frequent when the Universe was younger, indicating that supermassive black holes formed and grew early. A major constraining factor for theories of supermassive black hole formation is the observation of distant luminous quasars, which indicate that supermassive black holes of billions of solar masses had already formed when the Universe was less than one billion years old. This suggests that supermassive black holes arose very early in the Universe, inside the first massive galaxies. 
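One way to see why those early quasars are so constraining is the back-of-the-envelope sketch below. It assumes Eddington-limited growth with a radiative efficiency of about 0.1, which gives an e-folding ("Salpeter") time of roughly 45 million years; neither number comes from the article.

import math

SALPETER_TIME_MYR = 45.0   # assumed e-folding time for Eddington-limited growth

def growth_time_myr(seed_mass_msun, final_mass_msun, efolding_myr=SALPETER_TIME_MYR):
    # Exponential growth: time = e-folding time * ln(final / seed).
    return efolding_myr * math.log(final_mass_msun / seed_mass_msun)

# Growing a 100 M_sun stellar-remnant seed to a 1e9 M_sun quasar black hole:
print(f"{growth_time_myr(1e2, 1e9):.0f} Myr")   # ~725 Myr, already close to the age of the earliest quasars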

Artist's impression of stars born in winds from supermassive black holes.
 
A vacancy exists in the observed mass distribution of black holes. Black holes that spawn from dying stars have masses of 5–80 M☉. The minimal supermassive black hole is approximately a hundred thousand solar masses. Mass scales between these ranges are dubbed intermediate-mass black holes. Such a gap suggests a different formation process. However, some models suggest that ultraluminous X-ray sources (ULXs) may be black holes from this missing group.

There is, however, an upper limit to how large supermassive black holes can grow. So-called ultramassive black holes (UMBHs), which are at least ten times the size of most supermassive black holes, at 10 billion solar masses or more, appear to have a theoretical upper limit of around 50 billion solar masses, as anything above this slows growth down to a crawl (the slowdown tends to start around 10 billion solar masses) and causes the unstable accretion disk surrounding the black hole to coalesce into stars that orbit it.

A small minority of sources argue that distant supermassive black holes whose large size is hard to explain so soon after the Big Bang, such as ULAS J1342+0928, may be evidence that our universe is the result of a Big Bounce, instead of a Big Bang, with these supermassive black holes being formed before the Big Bounce.

Doppler measurements

Simulation of a side view of a black hole with a transparent toroidal ring of ionised matter, according to a proposed model for Sgr A*. This image shows the result of the bending of light from behind the black hole, and it also shows the asymmetry arising from the Doppler effect due to the extremely high orbital speed of the matter in the ring.
 
Some of the best evidence for the presence of black holes is provided by the Doppler effect whereby light from nearby orbiting matter is red-shifted when receding and blue-shifted when advancing. For matter very close to a black hole the orbital speed must be comparable with the speed of light, so receding matter will appear very faint compared with advancing matter, which means that systems with intrinsically symmetric discs and rings will acquire a highly asymmetric visual appearance. This effect has been allowed for in modern computer generated images such as the example presented here, based on a plausible model for the supermassive black hole in Sgr A* at the centre of our own galaxy. However the resolution provided by presently available telescope technology is still insufficient to confirm such predictions directly. 

What already has been observed directly in many systems are the lower non-relativistic velocities of matter orbiting further out from what are presumed to be black holes. Direct Doppler measures of water masers surrounding the nuclei of nearby galaxies have revealed a very fast Keplerian motion, only possible with a high concentration of matter in the center. Currently, the only known objects that can pack enough matter in such a small space are black holes, or things that will evolve into black holes within astrophysically short timescales. For active galaxies farther away, the width of broad spectral lines can be used to probe the gas orbiting near the event horizon. The technique of reverberation mapping uses variability of these lines to measure the mass and perhaps the spin of the black hole that powers active galaxies. 

Gravitation from supermassive black holes in the center of many galaxies is thought to power active objects such as Seyfert galaxies and quasars. 

An empirical correlation between the mass of a supermassive black hole and the stellar velocity dispersion of its host galaxy's bulge is called the M-sigma relation.

In the Milky Way

Inferred orbits of 6 stars around supermassive black hole candidate Sagittarius A* at the Milky Way galactic center
 
Astronomers are very confident that the Milky Way galaxy has a supermassive black hole at its center, 26,000 light-years from the Solar System, in a region called Sagittarius A* because:
  • The star S2 follows an elliptical orbit with a period of 15.2 years and a pericenter (closest distance) of 17 light-hours (1.8×10¹³ m or 120 AU) from the center of the central object.
  • From the motion of star S2, the object's mass can be estimated as 4.1 million M☉, or about 8.2×10³⁶ kg (a rough cross-check using Kepler's third law is sketched after this list).
  • The radius of the central object must be less than 17 light-hours, because otherwise S2 would collide with it. Observations of the star S14 indicate that the radius is no more than 6.25 light-hours, about the diameter of Uranus' orbit.
  • No known astronomical object other than a black hole can contain 4.1 million M☉ in this volume of space.
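The quoted mass can be roughly cross-checked with Kepler's third law, M ≈ a³/P² in solar masses when the semi-major axis a is in AU and the period P is in years. In the sketch below, the semi-major axis of S2's orbit (about 980 AU) is an assumed value; only the 15.2-year period and the 120 AU pericenter appear in the list above.

def enclosed_mass_msun(a_au, period_yr):
    # Kepler's third law in solar-system units: M [M_sun] = a^3 [AU] / P^2 [yr].
    return a_au**3 / period_yr**2

print(f"{enclosed_mass_msun(980.0, 15.2):.2e} M_sun")   # ~4.1e6, matching the quoted 4.1 million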
Infrared observations of bright flare activity near Sagittarius A* show orbital motion of plasma with a period of 45±15 min at a separation of six to ten times the gravitational radius of the candidate SMBH. This emission is consistent with a circularized orbit of a polarized "hot spot" on an accretion disk in a strong magnetic field. The radiating matter is orbiting at 30% of the speed of light just outside the innermost stable circular orbit.

On January 5, 2015, NASA reported observing an X-ray flare 400 times brighter than usual, a record-breaker, from Sagittarius A*. The unusual event may have been caused by the breaking apart of an asteroid falling into the black hole or by the entanglement of magnetic field lines within gas flowing into Sagittarius A*, according to astronomers.

Detection of an unusually bright X-ray flare from Sagittarius A*, a supermassive black hole in the center of the Milky Way galaxy.

Outside the Milky Way

Artist's impression of a supermassive black hole tearing apart a star. Below: supermassive black hole devouring a star in galaxy RX J1242-11 – X-ray (left) and optical (right).
 
Unambiguous dynamical evidence for supermassive black holes exists only in a handful of galaxies; these include the Milky Way, the Local Group galaxies M31 and M32, and a few galaxies beyond the Local Group, e.g. NGC 4395. In these galaxies, the mean square (or rms) velocities of the stars or gas rise proportionally to 1/r near the center, indicating a central point mass. In all other galaxies observed to date, the rms velocities are flat, or even falling, toward the center, making it impossible to state with certainty that a supermassive black hole is present. Nevertheless, it is commonly accepted that the center of nearly every galaxy contains a supermassive black hole. The reason for this assumption is the M-sigma relation, a tight (low scatter) relation between the mass of the hole in the 10 or so galaxies with secure detections, and the velocity dispersion of the stars in the bulges of those galaxies. This correlation, although based on just a handful of galaxies, suggests to many astronomers a strong connection between the formation of the black hole and the galaxy itself.

Hubble Space Telescope photograph of the 4,400 light-year-long relativistic jet of Messier 87, which is matter being ejected by the 6.4×10⁹ M☉ supermassive black hole at the center of the galaxy
 
The nearby Andromeda Galaxy, 2.5 million light-years away, contains a (1.1–2.3)×10⁸ (110–230 million) M☉ central black hole, significantly larger than the Milky Way's. The largest supermassive black hole in the Milky Way's vicinity appears to be that of M87, at a mass of (6.4±0.5)×10⁹ (c. 6.4 billion) M☉ at a distance of 53.5 million light-years. The supergiant elliptical galaxy NGC 4889, at a distance of 336 million light-years away in the Coma Berenices constellation, contains a black hole measured to be 2.1×10¹⁰ (21 billion) M☉.

Masses of black holes in quasars can be estimated via indirect methods that are subject to substantial uncertainty. The quasar TON 618 is an example of an object with an extremely large black hole, estimated at 6.6×10¹⁰ (66 billion) M☉. Its redshift is 2.219. Other examples of quasars with large estimated black hole masses are the hyperluminous quasar APM 08279+5255, with an estimated mass of 2.3×10¹⁰ (23 billion) M☉, and the quasar S5 0014+81, with a mass of 4.0×10¹⁰ (40 billion) M☉, or 10,000 times the mass of the black hole at the Milky Way Galactic Center.

Some galaxies, such as the galaxy 4C +37.11, appear to have two supermassive black holes at their centers, forming a binary system. If they collided, the event would create strong gravitational waves. Binary supermassive black holes are believed to be a common consequence of galactic mergers. The binary pair in OJ 287, 3.5 billion light-years away, contains the most massive black hole in a pair, with a mass estimated at 18 billion M☉. In 2011, a supermassive black hole was discovered in the dwarf galaxy Henize 2-10, which has no bulge. The precise implications for this discovery on black hole formation are unknown, but may indicate that black holes formed before bulges.

On March 28, 2011, a supermassive black hole was seen tearing a mid-size star apart. That is the only likely explanation of the observations that day of sudden X-ray radiation and the follow-up broad-band observations. The source was previously an inactive galactic nucleus, and from study of the outburst the galactic nucleus is estimated to be a SMBH with mass of the order of a million solar masses. This rare event is assumed to be a relativistic outflow (material being emitted in a jet at a significant fraction of the speed of light) from a star tidally disrupted by the SMBH. A significant fraction of a solar mass of material is expected to have accreted onto the SMBH. Subsequent long-term observation will allow this assumption to be confirmed if the emission from the jet decays at the expected rate for mass accretion onto a SMBH. 

In 2012, astronomers reported an unusually large mass of approximately 17 billion M☉ for the black hole in the compact, lenticular galaxy NGC 1277, which lies 220 million light-years away in the constellation Perseus. The putative black hole has approximately 59 percent of the mass of the bulge of this lenticular galaxy (14 percent of the total stellar mass of the galaxy). Another study reached a very different conclusion: this black hole is not particularly overmassive, estimated at between 2 and 5 billion M☉, with 5 billion M☉ being the most likely value. On February 28, 2013, astronomers reported on the use of the NuSTAR satellite to accurately measure the spin of a supermassive black hole for the first time, in NGC 1365, reporting that the event horizon was spinning at almost the speed of light.
 
Hubble view of a supermassive black hole "burping".
 
In September 2014, data from different X-ray telescopes has shown that the extremely small, dense, ultracompact dwarf galaxy M60-UCD1 hosts a 20 million solar mass black hole at its center, accounting for more than 10% of the total mass of the galaxy. The discovery is quite surprising, since the black hole is five times more massive than the Milky Way's black hole despite the galaxy being less than five-thousandths the mass of the Milky Way. 

Some galaxies, however, lack any supermassive black holes in their centers. Although most galaxies with no supermassive black holes are very small, dwarf galaxies, one discovery remains mysterious: the supergiant elliptical cD galaxy A2261-BCG has not been found to contain an active supermassive black hole, despite being one of the largest galaxies known, ten times the size and one thousand times the mass of the Milky Way. Since a supermassive black hole is only visible while it is accreting, it can be nearly invisible, except in its effects on stellar orbits.

In December 2017, astronomers reported the detection of the most distant quasar currently known, ULAS J1342+0928, containing the most distant supermassive black hole, at a reported redshift of z = 7.54, surpassing the redshift of 7 for the previously known most distant quasar ULAS J1120+0641.

Hawking Radiation

If black holes evaporate via Hawking radiation, a supermassive black hole with a mass of 10¹¹ (100 billion) M☉ will evaporate in around 2×10¹⁰⁰ years.
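For reference, that figure follows from the standard Hawking evaporation timescale, t ≈ 5120·π·G²·M³/(ħc⁴), which grows as the cube of the mass. The sketch below evaluates it for a 10¹¹ M☉ black hole (constants are illustrative round values, not from the article).

import math

G = 6.674e-11             # m^3 kg^-1 s^-2
c = 2.998e8               # m/s
hbar = 1.055e-34          # J s
M_sun = 1.989e30          # kg
SECONDS_PER_YEAR = 3.156e7

def evaporation_time_years(mass_solar):
    m = mass_solar * M_sun
    t_seconds = 5120 * math.pi * G**2 * m**3 / (hbar * c**4)
    return t_seconds / SECONDS_PER_YEAR

print(f"{evaporation_time_years(1e11):.1e} years")   # ~2e100, as quoted above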

Some monster black holes in the universe are predicted to continue to grow up to perhaps 10¹⁴ M☉ during the collapse of superclusters of galaxies. Even these would evaporate over a timescale of up to 10¹⁰⁶ years.

Mathematical sociology

From Wikipedia, the free encyclopedia

Mathematical sociology is the area of sociology that uses mathematics to construct social theories. Mathematical sociology aims to take sociological theory, which is strong in intuitive content but weak from a formal point of view, and to express it in formal terms. The benefits of this approach include increased clarity and the ability to use mathematics to derive implications of a theory that cannot be arrived at intuitively. In mathematical sociology, the preferred style is encapsulated in the phrase "constructing a mathematical model." This means making specified assumptions about some social phenomenon, expressing them in formal mathematics, and providing an empirical interpretation for the ideas. It also means deducing properties of the model and comparing these with relevant empirical data. Social network analysis is the best-known contribution of this subfield to sociology as a whole and to the scientific community at large. The models typically used in mathematical sociology allow sociologists to understand how predictable local interactions can give rise to global patterns of social structure.

History

Starting in the early 1940s, Nicolas Rashevsky, and subsequently in the late 1940s, Anatol Rapoport and others, developed a relational and probabilistic approach to the characterization of large social networks in which the nodes are persons and the links are acquaintanceship. During the late 1940s, formulas were derived that connected local parameters such as closure of contacts – if A is linked to both B and C, then there is a greater than chance probability that B and C are linked to each other – to the global network property of connectivity.

Moreover, acquaintanceship is a positive tie, but what about negative ties such as animosity among persons? To tackle this problem, graph theory, which is the mathematical study of abstract representations of networks of points and lines, can be extended to include these two types of links and thereby to create models that represent both positive and negative sentiment relations, which are represented as signed graphs. A signed graph is called balanced if the product of the signs of all relations in every cycle (links in every graph cycle) is positive. Through formalization by mathematician Frank Harary this work produced the fundamental theorem of this theory. It says that if a network of interrelated positive and negative ties is balanced, e.g. as illustrated by the psychological principle that "my friend's enemy is my enemy", then it consists of two subnetworks such that each has positive ties among its nodes and there are only negative ties between nodes in distinct subnetworks. The imagery here is of a social system that splits into two cliques. There is, however, a special case where one of the two subnetworks is empty, which might occur in very small networks. In another model, ties have relative strengths. 'Acquaintanceship' can be viewed as a 'weak' tie and 'friendship' is represented as a strong tie. Like its uniform cousin discussed above, there is a concept of closure, called strong triadic closure. A graph satisfies strong triadic closure if, whenever A is strongly connected to B and B is strongly connected to C, A and C have a tie (either weak or strong).
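As a small illustration of Harary's balance theorem described above, the Python sketch below (with invented example data) checks whether a signed network can be split into two camps with only positive ties inside each camp and only negative ties between them.

from collections import deque

def is_balanced(nodes, signed_edges):
    # signed_edges maps (u, v) pairs to +1 (positive tie) or -1 (negative tie).
    camp = {}
    adj = {n: [] for n in nodes}
    for (u, v), sign in signed_edges.items():
        adj[u].append((v, sign))
        adj[v].append((u, sign))
    for start in nodes:
        if start in camp:
            continue
        camp[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, sign in adj[u]:
                expected = camp[u] if sign > 0 else 1 - camp[u]
                if v not in camp:
                    camp[v] = expected
                    queue.append(v)
                elif camp[v] != expected:
                    return False   # some cycle has a negative product of signs
    return True

# "My friend's enemy is my enemy": A and B are friends, both hostile to C -> balanced.
print(is_balanced("ABC", {("A", "B"): +1, ("A", "C"): -1, ("B", "C"): -1}))   # True
# Three mutually hostile parties form an unbalanced triangle.
print(is_balanced("ABC", {("A", "B"): -1, ("A", "C"): -1, ("B", "C"): -1}))   # False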

In these two developments we have mathematical models bearing upon the analysis of structure. Other early influential developments in mathematical sociology pertained to process. For instance, in 1952 Herbert A. Simon produced a mathematical formalization of a published theory of social groups by constructing a model consisting of a deterministic system of differential equations. A formal study of the system led to theorems about the dynamics and the implied equilibrium states of any group.

The emergence of mathematical models in the social sciences was part of the zeitgeist in the 1940s and 1950s in which a variety of new interdisciplinary scientific innovations occurred, such as information theory, game theory, cybernetics and mathematical model building in the social and behavioral sciences.

Further developments

In 1954, a critical expository analysis of Rashevsky's social behavior models was written by sociologist James S. Coleman. Rashevsky's models, as well as the model constructed by Simon, raise a question: how can one connect such theoretical models to the data of sociology, which often take the form of surveys in which the results are expressed in the form of proportions of people believing or doing something? This suggests deriving the equations from assumptions about the chances of an individual changing state in a small interval of time, a procedure well known in the mathematics of stochastic processes.

Coleman embodied this idea in his 1964 book Introduction to Mathematical Sociology, which showed how stochastic processes in social networks could be analyzed in such a way as to enable testing of the constructed model by comparison with the relevant data. The same idea can and has been applied to processes of change in social relations, an active research theme in the study of social networks, illustrated by an empirical study appearing in the journal Science.

In other work, Coleman employed mathematical ideas drawn from economics, such as general equilibrium theory, to argue that general social theory should begin with a concept of purposive action and, for analytical reasons, approximate such action by the use of rational choice models (Coleman, 1990). This argument is similar to viewpoints expressed by other sociologists in their efforts to use rational choice theory in sociological analysis although such efforts have met with substantive and philosophical criticisms.

Meanwhile, structural analysis of the type indicated earlier received a further extension to social networks based on institutionalized social relations, notably those of kinship. The linkage of mathematics and sociology here involved abstract algebra, in particular, group theory. This, in turn, led to a focus on a data-analytical version of homomorphic reduction of a complex social network (which along with many other techniques is presented in Wasserman and Faust 1994). 

In regard to Rapoport's random and biased net theory, his 1961 study of a large sociogram, co-authored with Horvath, turned out to become a very influential paper. There was early evidence of this influence. In 1964, Thomas Fararo and a co-author analyzed another large friendship sociogram using a biased net model. Later in the 1960s, Stanley Milgram described the small world problem and undertook a field experiment dealing with it. A highly fertile idea was suggested and applied by Mark Granovetter in which he drew upon Rapoport's 1961 paper to suggest and apply a distinction between weak and strong ties. The key idea was that there was "strength" in weak ties.

Some programs of research in sociology employ experimental methods to study social interaction processes. Joseph Berger and his colleagues initiated such a program in which the central idea is the use of the theoretical concept "expectation state" to construct theoretical models to explain interpersonal processes, e.g., those linking external status in society to differential influence in local group decision-making. Much of this theoretical work is linked to mathematical model building, especially after the late 1970s adoption of a graph theoretic representation of social information processing, as Berger (2000) describes in looking back upon the development of his program of research. In 1962 he and his collaborators explained model building by reference to the goal of the model builder, which could be explication of a concept in a theory, representation of a single recurrent social process, or a broad theory based on a theoretical construct, such as, respectively, the concept of balance in psychological and social structures, the process of conformity in an experimental situation, and stimulus sampling theory.

The generations of mathematical sociologists that followed Rapoport, Simon, Harary, Coleman, White and Berger, including those entering the field in the 1960s such as Thomas Fararo, Philip Bonacich, and Tom Mayer, among others, drew upon their work in a variety of ways.

Present research

Mathematical sociology remains a small subfield within the discipline, but it has succeeded in spawning a number of other subfields which share its goals of formally modeling social life. The foremost of these fields is social network analysis, which has become among the fastest growing areas of sociology in the 21st century. The other major development in the field is the rise of computational sociology, which expands the mathematical toolkit with the use of computer simulations, artificial intelligence and advanced statistical methods. The latter subfield also makes use of the vast new data sets on social activity generated by social interaction on the internet.

One important indicator of the significance of mathematical sociology is that the general interest journals in the field, including such central journals as The American Journal of Sociology and The American Sociological Review, have published mathematical models that became influential in the field at large. 

More recent trends in mathematical sociology are evident in contributions to The Journal of Mathematical Sociology (JMS). Several trends stand out: the further development of formal theories that explain experimental data dealing with small group processes, the continuing interest in structural balance as a major mathematical and theoretical idea, the interpenetration of mathematical models oriented to theory and innovative quantitative techniques relating to methodology, the use of computer simulations to study problems in social complexity, interest in micro–macro linkage and the problem of emergence, and ever-increasing research on networks of social relations. 

Thus, topics from the earliest days, like balance and network models, continue to be of contemporary interest. The formal techniques employed remain many of the standard and well-known methods of mathematics: differential equations, stochastic processes and game theory. Newer tools like agent-based models used in computer simulation studies are prominently represented. Perennial substantive problems still drive research: social diffusion, social influence, social status origins and consequences, segregation, cooperation, collective action, power, and much more.

Research programs

Many of the developments in mathematical sociology, including formal theory, have exhibited notable decades-long advances that began with path-setting contributions by leading mathematical sociologists and formal theorists. This provides another way of taking note of recent contributions but with an emphasis on continuity with early work through the use of the idea of “research program,” which is a coherent series of theoretical and empirical studies based on some fundamental principle or approach. There are more than a few of these programs and what follows is no more than a brief capsule description of leading exemplars of this idea in which there is an emphasis on the originating leadership in each program and its further development over decades. 
  • Rational Choice Theory and James S. Coleman: After his 1964 pioneering Introduction to Mathematical Sociology, Coleman continued to make contributions to social theory and mathematical model building, and his 1990 volume, Foundations of Social Theory, was the major theoretical work of a career that spanned the period from the 1950s to the 1990s and included many other research-based contributions. The Foundations book combined accessible examples of how rational choice theory could function in the analysis of such sociological topics as authority, trust, social capital and norms (in particular, their emergence). In this way, the book showed how rational choice theory could provide an effective basis for making the transition from micro to macro levels of sociological explanation. An important feature of the book is its use of mathematical ideas in generalizing the rational choice model to include interpersonal sentiment relations as modifiers of outcomes, doing so such that the generalized theory captures the original more self-oriented theory as a special case, a point emphasized in a later analysis of the theory. The rationality presupposition of the theory led to debates among sociological theorists. Nevertheless, many sociologists drew upon Coleman's formulation of a general template for micro-macro transition to gain leverage on the continuation of topics central to his and the discipline's explanatory focus on a variety of macrosocial phenomena in which rational choice simplified the micro level in the interest of combining individual actions to account for macro outcomes of social processes.
  • Structuralism (Formal) and Harrison C. White: In the decades since his earliest contributions, Harrison White has led the field in putting social structural analysis on a mathematical and empirical basis, including the 1970 publication of Chains of Opportunity: System Models of Mobility in Organizations, which set out and applied to data a vacancy chain model for mobility in and across organizations. His other very influential work includes the operational concepts of blockmodel and structural equivalence, which start from a body of social relational data to produce analytical results using these procedures and concepts. These ideas and methods were developed in collaboration with his former students François Lorrain, Ronald Breiger, and Scott Boorman. These three are among the more than 30 students who earned their doctorates under White in the period 1963-1986. The theory and application of blockmodels has been set out in detail in a recent monograph. White's later contributions include a structuralist approach to markets and, in 1992, a general theoretical framework, later appearing in a revised edition.
  • Expectation states theory and Joseph Berger: Under Berger's intellectual and organizational leadership, Expectation States Theory branched out into a large number of specific programs of research on specific problems, each treated in terms of the master concept of expectation states. He and his colleague and frequent collaborator Morris Zelditch Jr. not only produced work of their own but created a doctoral program at Stanford University that led to an enormous outpouring of research by notable former students, including Murray Webster, David Wagner, and Hamit Fisek. Collaboration with mathematician Robert Z. Norman led to the use of mathematical graph theory as a way of representing and analyzing social information processing in self-other(s) interactions. Berger and Zelditch also advanced work in formal theorizing and mathematical model building as early as 1962 with a collaborative expository analysis of types of models. Berger and Zelditch stimulated advances in other theoretical research programs by providing outlets for the publication of new work, culminating in a 2002 edited volume that includes a chapter presenting an authoritative overview of Expectation States Theory as a program of cumulative research dealing with group processes.
  • Formalization in Theoretical Sociology and Thomas J. Fararo: Many of this sociologist’s contributions have been devoted to bringing mathematical thinking into greater contact with sociological theory. He organized a symposium attended by sociological theorists in which formal theorists delivered papers that were subsequently published in 2000. Through collaborations with students and colleagues his own theoretical research program dealt with such topics as macrostructural theory and E-state structuralism (both with former student John Skvoretz), subjective images of stratification (with former student Kenji Kosaka), tripartite structural analysis (with colleague Patrick Doreian) and computational sociology (with colleague Norman P. Hummon). Two of his books are extended treatments of his approach to theoretical sociology.
  • Social Network Analysis and Linton C. Freeman: In the early 1960s Freeman directed a sophisticated empirical study of community power structure. In 1978 he established the journal Social Networks. It rapidly became a major outlet for original research papers that used mathematical techniques to analyze network data. The journal also publishes conceptual and theoretical contributions, including his paper “Centrality in Social Networks: Conceptual Clarification.” The paper has been cited more than 13,000 times. In turn, the mathematical concept defined in that paper led to further elaborations of the ideas, to experimental tests, and to numerous applications in empirical studies. He is the author of a study of the history and sociology of the field of social network analysis.
  • Quantitative Methodology and Kenneth C. Land: Kenneth Land has been on the frontier of quantitative methodology in sociology as well as formal theoretical model building. The influential yearly volume Sociological Methodology has been one of Land’s favorite outlets for the publication of papers that often lie in the intersection of quantitative methodology and mathematical sociology. Two of his theoretical papers appeared early in this journal: “Mathematical Formalization of Durkheim's Theory of Division of Labor” (1970) and “Formal Theory” (1971). His decades-long research program includes contributions relating to numerous special topics and methods, including social statistics, social indicators, stochastic processes, mathematical criminology, demography and social forecasting. Thus Land brings to these fields the skills of a statistician, a mathematician and a sociologist, combined. 
  • Affect Control Theory and David R. Heise: In 1979, Heise published a groundbreaking formal and empirical study in the tradition of interpretive sociology, especially symbolic interactionism, Understanding Events: Affect and the Construction of Social Action. It was the origination of a research program that has included his further theoretical and empirical studies and those of other sociologists, such as Lynn Smith-Lovin, Dawn Robinson and Neil MacKinnon. Definition of the situation and self-other definitions are two of the leading concepts in affect control theory. The formalism used by Heise and other contributors uses a validated form of measurement and a cybernetic control mechanism in which immediate feelings are compared with fundamental sentiments in such a way as to generate an effort to bring immediate feelings in a situation into correspondence with sentiments. In the simplest models, each person in an interactive pair is represented in terms of one side of a role relationship in which fundamental sentiments associated with each role guide the process of immediate interaction. A higher level of the control process can be activated in which the definition of the situation is transformed. This research program comprises several of the key chapters in a 2006 volume of contributions to control systems theory (in the sense of Powers 1975) in sociology.
  • "Distributive Justice Theory" and Guillermina Jasso: Since 1980, Jasso has treated problems of distributive justice with an original theory that uses mathematical methods. She has elaborated upon and applied this theory to a wide range of social phenomena. Her most general mathematical apparatus – with the theory of distributive justice as a special case -- deals with any subjective comparison between some actual state and some reference level for it, e.g., a comparison of an actual reward with an expected reward. In her justice theory, she starts with a very simple premise, the justice evaluation function (the natural logarithm of the ratio of actual to just reward) and then derives numerous empirically testable implications.
  • Collaborative research and John Skvoretz. A major feature of modern science is collaborative research in which the distinctive skills of the participants combine to produce original research. Skvoretz, in addition to his other contributions, has been a frequent collaborator in a variety of theoretical research programs, often using mathematical expertise as well as skills in experimental design, statistical data analysis and simulation methods. Some examples are:
    • Collaborative work on theoretical, statistical and mathematical problems in biased net theory.
    • Collaborative contributions to Expectation States Theory.
    • Collaborative contributions to Elementary Theory.
    • Collaboration with Bruce Mayhew in a structuralist research program. From the early 1970s, Skvoretz has been one of the most prolific contributors to the advance of mathematical sociology.
The above discussion could be expanded to include many other programs and individuals including European sociologists such as Peter Abell and the late Raymond Boudon.
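As a concrete numerical illustration of Jasso's justice evaluation function mentioned in the list above (the reward figures below are invented), the function is simply the natural logarithm of the ratio of the actual reward to the just reward: zero when the reward is exactly just, negative under under-reward, positive under over-reward.

import math

def justice_evaluation(actual_reward, just_reward):
    # Jasso's justice evaluation function: J = ln(actual / just).
    return math.log(actual_reward / just_reward)

print(justice_evaluation(40_000, 50_000))   # under-rewarded: about -0.22
print(justice_evaluation(50_000, 50_000))   # exactly just: 0.0
print(justice_evaluation(60_000, 50_000))   # over-rewarded: about +0.18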

Awards in mathematical sociology

The Mathematical Sociology section of The American Sociological Association in 2002 initiated awards for contributions to the field, including The James S. Coleman Distinguished Career Achievement Award. (Coleman had died in 1995, before the section had been established.) Given every other year, the award has gone to some of those listed above for their career-long research programs.
The section's other categories of awards and their recipients are listed at the ASA Section on Mathematical Sociology.

Texts and journals

Mathematical sociology textbooks cover a variety of models, usually explaining the required mathematical background before discussing important work in the literature (Fararo 1973, Leik and Meeker 1975, Bonacich and Lu 2012). An earlier text by Otomar Bartos (1967) is still of relevance. Of wider scope and mathematical sophistication is the text by Rapoport (1983). A very reader-friendly and imaginative introduction to explanatory thinking leading to models is Lave and March (1975, reprinted 1993). The Journal of Mathematical Sociology (started in 1971) has been open to papers covering a broad spectrum of topics employing a variety of types of mathematics, especially through frequent special issues. Other journals in sociology that publish papers with substantial use of mathematics are Computational and Mathematical Organization Theory, the Journal of Social Structure, and the Journal of Artificial Societies and Social Simulation.
 
Articles in Social Networks, a journal devoted to social structural analysis, very often employ mathematical models and related structural data analyses. In addition – importantly indicating the penetration of mathematical model building into sociological research – the major comprehensive journals in sociology, especially The American Journal of Sociology and The American Sociological Review, regularly publish articles featuring mathematical formulations.

Lie group

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Lie_group In mathematics, a Lie gro...