
Thursday, October 8, 2020

Neutron star

From Wikipedia, the free encyclopedia
 
Simulated view of a neutron star gravitationally lensing the background, making it appear distorted.
 
Radiation from the rapidly spinning pulsar PSR B1509-58 makes nearby gas emit X-rays (gold) and illuminates the rest of the nebula, here seen in infrared (blue and red).

A neutron star is the collapsed core of a massive supergiant star, which had a total mass of between 10 and 25 solar masses, possibly more if the star was especially metal-rich. Neutron stars are the smallest and densest stellar objects, excluding black holes and hypothetical white holes, quark stars, and strange stars. Neutron stars have a radius on the order of 10 kilometres (6.2 mi) and a mass of about 1.4 solar masses. They result from the supernova explosion of a massive star, combined with gravitational collapse, that compresses the core past white dwarf star density to that of atomic nuclei.

Once formed, they no longer actively generate heat, and cool over time; however, they may still evolve further through collision or accretion. Most of the basic models for these objects imply that neutron stars are composed almost entirely of neutrons (subatomic particles with no net electrical charge and with slightly larger mass than protons); the electrons and protons present in normal matter combine to produce neutrons at the conditions in a neutron star. Neutron stars are partially supported against further collapse by neutron degeneracy pressure, a phenomenon described by the Pauli exclusion principle, just as white dwarfs are supported against collapse by electron degeneracy pressure. However, neutron degeneracy pressure is not by itself sufficient to hold up an object beyond 0.7 M☉, and repulsive nuclear forces play a larger role in supporting more massive neutron stars. If the remnant star has a mass exceeding the Tolman–Oppenheimer–Volkoff limit of around 2 solar masses, the combination of degeneracy pressure and nuclear forces is insufficient to support the neutron star and it continues collapsing to form a black hole.

Neutron stars that can be observed are very hot and typically have a surface temperature of around 600,000 K. They are so dense that a normal-sized matchbox containing neutron-star material would have a mass of approximately 3 billion tonnes, the same as a 0.5 cubic kilometre chunk of the Earth cut from its surface (a cube with edges of about 800 metres). Their magnetic fields are between 10⁸ and 10¹⁵ (100 million to 1 quadrillion) times stronger than Earth's magnetic field. The gravitational field at the neutron star's surface is about 2×10¹¹ (200 billion) times that of Earth's gravitational field.

As the star's core collapses, its rotation rate increases as a result of conservation of angular momentum, and newly formed neutron stars hence rotate at up to several hundred times per second. Some neutron stars emit beams of electromagnetic radiation that make them detectable as pulsars. Indeed, the discovery of pulsars by Jocelyn Bell Burnell and Antony Hewish in 1967 was the first observational suggestion that neutron stars exist. The radiation from pulsars is thought to be primarily emitted from regions near their magnetic poles. If the magnetic poles do not coincide with the rotational axis of the neutron star, the emission beam will sweep the sky, and when seen from a distance, if the observer is somewhere in the path of the beam, it will appear as pulses of radiation coming from a fixed point in space (the so-called "lighthouse effect"). The fastest-spinning neutron star known is PSR J1748-2446ad, rotating at a rate of 716 times a second or 43,000 revolutions per minute, giving a linear speed at the surface on the order of 0.24 c (i.e., nearly a quarter the speed of light).
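As a rough consistency check on that figure, the equatorial speed is just 2πR times the spin frequency. The short Python sketch below assumes an equatorial radius of about 16 km, a value chosen purely for illustration, and recovers a speed of roughly a quarter of the speed of light.

```python
import math

c = 2.998e8          # speed of light, m/s
spin_freq = 716      # rotations per second for PSR J1748-2446ad
radius = 16e3        # assumed equatorial radius, m (illustrative, roughly 16 km)

surface_speed = 2 * math.pi * radius * spin_freq   # equatorial linear speed, m/s
print(f"surface speed ≈ {surface_speed:.2e} m/s ≈ {surface_speed / c:.2f} c")
# prints roughly 7.2e7 m/s, i.e. about 0.24 c
```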

There are thought to be around one billion neutron stars in the Milky Way, and at a minimum several hundred million, a figure obtained by estimating the number of stars that have undergone supernova explosions. However, most are old and cold and radiate very little; most neutron stars that have been detected occur only in certain situations in which they do radiate, such as if they are a pulsar or part of a binary system. Slow-rotating and non-accreting neutron stars are almost undetectable; however, since the Hubble Space Telescope detection of RX J185635−3754, a few nearby neutron stars that appear to emit only thermal radiation have been detected. Soft gamma repeaters are conjectured to be a type of neutron star with very strong magnetic fields, known as magnetars, or alternatively, neutron stars with fossil disks around them.

Neutron stars in binary systems can undergo accretion, which typically makes the system bright in X-rays, while the material falling onto the neutron star can form hotspots that rotate in and out of view in identified X-ray pulsar systems. Additionally, such accretion can "recycle" old pulsars and potentially cause them to gain mass and spin up to very fast rotation rates, forming the so-called millisecond pulsars. These binary systems will continue to evolve, and eventually the companions can become compact objects such as white dwarfs or neutron stars themselves, though other possibilities include a complete destruction of the companion through ablation or merger. The merger of binary neutron stars may be the source of short-duration gamma-ray bursts and is likely a strong source of gravitational waves. In 2017, a direct detection (GW170817) of the gravitational waves from such an event was made, and gravitational waves have also been indirectly detected in a system where two neutron stars orbit each other.

Formation

Simplistic representation of the formation of neutron stars.

Any main-sequence star with an initial mass of above 8 times the mass of the Sun (8 M☉) has the potential to produce a neutron star. As the star evolves away from the main sequence, subsequent nuclear burning produces an iron-rich core. When all nuclear fuel in the core has been exhausted, the core must be supported by degeneracy pressure alone. Further deposits of mass from shell burning cause the core to exceed the Chandrasekhar limit. Electron-degeneracy pressure is overcome and the core collapses further, sending temperatures soaring to over 5×10⁹ K. At these temperatures, photodisintegration (the breaking up of iron nuclei into alpha particles by high-energy gamma rays) occurs. As the temperature climbs even higher, electrons and protons combine to form neutrons via electron capture, releasing a flood of neutrinos. When densities reach the nuclear density of 4×10¹⁷ kg/m³, a combination of strong force repulsion and neutron degeneracy pressure halts the contraction.[20] The infalling outer envelope of the star is halted and flung outwards by a flux of neutrinos produced in the creation of the neutrons, becoming a supernova. The remnant left is a neutron star. If the remnant has a mass greater than about 3 M☉, it collapses further to become a black hole.

As the core of a massive star is compressed during a Type II supernova or a Type Ib or Type Ic supernova, and collapses into a neutron star, it retains most of its angular momentum. But, because it has only a tiny fraction of its parent's radius (and therefore its moment of inertia is sharply reduced), a neutron star is formed with very high rotation speed, and then over a very long period it slows. Neutron stars are known that have rotation periods from about 1.4 ms to 30 s. The neutron star's density also gives it very high surface gravity, with typical values ranging from 10¹² to 10¹³ m/s² (more than 10¹¹ times that of Earth). One measure of such immense gravity is the fact that neutron stars have an escape velocity ranging from 100,000 km/s to 150,000 km/s, that is, from a third to half the speed of light. The neutron star's gravity accelerates infalling matter to tremendous speed. The force of its impact would likely destroy the object's component atoms, rendering all the matter identical, in most respects, to the rest of the neutron star.
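The escape velocity and surface gravity quoted above follow from the Newtonian formulas v_esc = √(2GM/R) and g = GM/R². A minimal sketch, assuming an illustrative 1.4 M☉ star with a 12 km radius (the exact numbers shift noticeably with the assumed mass and radius):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg

M = 1.4 * M_sun      # assumed neutron-star mass
R = 12e3             # assumed radius, m

g = G * M / R**2                   # surface gravity
v_esc = math.sqrt(2 * G * M / R)   # Newtonian escape velocity

print(f"surface gravity ≈ {g:.1e} m/s^2")           # ~1.3e12 m/s^2
print(f"escape velocity ≈ {v_esc / 1e3:.0f} km/s")   # ~176,000 km/s (~0.6 c); the quoted
                                                     # 100,000-150,000 km/s corresponds to
                                                     # somewhat larger radii or lower masses
```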

Properties

Mass and temperature

A neutron star has a mass of at least 1.1 solar masses (M☉). The upper limit of mass for a neutron star is called the Tolman–Oppenheimer–Volkoff limit and is generally held to be around 2.1 M☉, but a recent estimate puts the upper limit at 2.16 M☉. The maximum observed mass of neutron stars is about 2.14 M☉ for PSR J0740+6620, discovered in September 2019. Compact stars below the Chandrasekhar limit of 1.39 M☉ are generally white dwarfs, whereas compact stars with a mass between 1.4 M☉ and 2.16 M☉ are expected to be neutron stars, but there is an interval of a few tenths of a solar mass where the masses of low-mass neutron stars and high-mass white dwarfs can overlap. It is thought that beyond 2.16 M☉ the stellar remnant will overcome the strong force repulsion and neutron degeneracy pressure so that gravitational collapse will occur to produce a black hole, but the smallest observed mass of a stellar black hole is about 5 M☉. Between 2.16 M☉ and 5 M☉, hypothetical intermediate-mass stars such as quark stars and electroweak stars have been proposed, but none have been shown to exist.

The temperature inside a newly formed neutron star is from around 10¹¹ to 10¹² kelvin. However, the huge number of neutrinos it emits carry away so much energy that the temperature of an isolated neutron star falls within a few years to around 10⁶ kelvin. At this lower temperature, most of the light generated by a neutron star is in X-rays.

Some researchers have proposed a neutron star classification system using Roman numerals (not to be confused with the Yerkes luminosity classes for non-degenerate stars) to sort neutron stars by their mass and cooling rates: type I for neutron stars with low mass and cooling rates, type II for neutron stars with higher mass and cooling rates, and a proposed type III for neutron stars with even higher mass, approaching 2 M☉, and with higher cooling rates and possibly candidates for exotic stars.

Density and pressure

Neutron stars have overall densities of 3.7×10¹⁷ to 5.9×10¹⁷ kg/m³ (2.6×10¹⁴ to 4.1×10¹⁴ times the density of the Sun), which is comparable to the approximate density of an atomic nucleus of 3×10¹⁷ kg/m³. The neutron star's density varies from about 1×10⁹ kg/m³ in the crust, increasing with depth, to about 6×10¹⁷ or 8×10¹⁷ kg/m³ (denser than an atomic nucleus) deeper inside. A neutron star is so dense that one teaspoon (5 milliliters) of its material would have a mass over 5.5×10¹² kg, about 900 times the mass of the Great Pyramid of Giza. In the enormous gravitational field of a neutron star, that teaspoon of material would weigh 1.1×10²⁵ N, which is 15 times what the Moon would weigh if it were placed on the surface of the Earth. The entire mass of the Earth at neutron star density would fit into a sphere of 305 m in diameter (the size of the Arecibo Observatory). The pressure increases from 3.2×10³¹ to 1.6×10³⁴ Pa from the inner crust to the center.
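Two of these figures are easy to reproduce directly. The sketch below computes the diameter of a sphere containing one Earth mass at roughly nuclear density, and the weight of the quoted 5.5×10¹² kg "teaspoon" under a surface gravity of about 2×10¹² m/s²; the density and gravity values are taken from the figures above rather than derived from any specific stellar model.

```python
import math

M_earth = 5.972e24       # Earth mass, kg
rho_nuclear = 4e17       # approximate nuclear density, kg/m^3 (value quoted above)

# Diameter of a sphere holding one Earth mass at nuclear density
volume = M_earth / rho_nuclear
diameter = 2 * (3 * volume / (4 * math.pi)) ** (1 / 3)
print(f"Earth compressed to nuclear density: diameter ≈ {diameter:.0f} m")   # ≈ 305 m

# Weight of the 5.5e12 kg "teaspoon" under a surface gravity of ~2e12 m/s^2
teaspoon_mass = 5.5e12   # kg, figure quoted in the text
g_surface = 2.0e12       # m/s^2, typical surface gravity quoted in the text
print(f"teaspoon weight ≈ {teaspoon_mass * g_surface:.1e} N")                # ≈ 1.1e25 N
```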

The equation of state of matter at such high densities is not precisely known because of the theoretical difficulties associated with extrapolating the likely behavior of quantum chromodynamics, superconductivity, and superfluidity of matter in such states. The problem is exacerbated by the empirical difficulties of observing the characteristics of any object that is hundreds of parsecs away, or farther.

A neutron star has some of the properties of an atomic nucleus, including density (within an order of magnitude) and being composed of nucleons. In popular scientific writing, neutron stars are therefore sometimes described as "giant nuclei". However, in other respects, neutron stars and atomic nuclei are quite different. A nucleus is held together by the strong interaction, whereas a neutron star is held together by gravity. The density of a nucleus is uniform, while neutron stars are predicted to consist of multiple layers with varying compositions and densities.

Magnetic field

The magnetic field strength on the surface of neutron stars ranges from c. 10⁴ to 10¹¹ tesla. These are orders of magnitude higher than in any other object: for comparison, a continuous 16 T field has been achieved in the laboratory and is sufficient to levitate a living frog due to diamagnetic levitation. Variations in magnetic field strengths are most likely the main factor that allows different types of neutron stars to be distinguished by their spectra, and explains the periodicity of pulsars.

The neutron stars known as magnetars have the strongest magnetic fields, in the range of 10⁸ to 10¹¹ tesla, and are the widely accepted explanation for the neutron star types known as soft gamma repeaters (SGRs) and anomalous X-ray pulsars (AXPs). The magnetic energy density of a 10⁸ T field is extreme, greatly exceeding the mass-energy density of ordinary matter. Fields of this strength are able to polarize the vacuum to the point that the vacuum becomes birefringent. Photons can merge or split in two, and virtual particle–antiparticle pairs are produced. The field changes electron energy levels, and atoms are forced into thin cylinders. Unlike in an ordinary pulsar, magnetar spin-down can be directly powered by the magnetic field, and the magnetic field is strong enough to stress the crust to the point of fracture. Fractures of the crust cause starquakes, observed as extremely luminous millisecond hard gamma-ray bursts. The fireball is trapped by the magnetic field and comes in and out of view as the star rotates, observed as a periodic soft gamma repeater (SGR) emission with a period of 5–8 seconds that lasts for a few minutes.
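The claim about the energy density of a magnetar-strength field can be checked with the classical expression u = B²/(2μ₀) and compared with the mass-energy density ρc² of ordinary matter. A minimal sketch, using water as the comparison material (an illustrative choice, not from the source):

```python
import math

mu_0 = 4 * math.pi * 1e-7   # vacuum permeability, T·m/A
c = 2.998e8                 # speed of light, m/s

B = 1e8                               # magnetar-strength field, tesla
u_B = B**2 / (2 * mu_0)               # magnetic energy density, J/m^3
rho_equiv = u_B / c**2                # equivalent mass density, kg/m^3

rho_water_energy = 1000 * c**2        # mass-energy density of water, J/m^3
print(f"magnetic energy density ≈ {u_B:.2e} J/m^3")
print(f"equivalent mass density ≈ {rho_equiv:.2e} kg/m^3")
print(f"ratio to water's mass-energy ≈ {u_B / rho_water_energy:.0f}x")   # roughly 40x
```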

The origins of the strong magnetic field are as yet unclear. One hypothesis is that of "flux freezing", or conservation of the original magnetic flux during the formation of the neutron star. If an object has a certain magnetic flux over its surface area, and that area shrinks to a smaller area, but the magnetic flux is conserved, then the magnetic field would correspondingly increase. Likewise, a collapsing star begins with a much larger surface area than the resulting neutron star, and conservation of magnetic flux would result in a far stronger magnetic field. However, this simple explanation does not fully explain magnetic field strengths of neutron stars.
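A rough sketch of the flux-freezing argument: if the magnetic flux B·R² is conserved while the star shrinks, the field grows as (R_initial/R_final)². The progenitor radius and surface field below are illustrative assumptions, not measured values.

```python
R_progenitor = 7.0e8    # assumed progenitor radius, m (roughly solar)
R_ns = 1.2e4            # neutron-star radius, m
B_progenitor = 1e-2     # assumed progenitor surface field, tesla (~100 gauss)

# Flux conservation: B * R^2 = const, so the field scales as (R_initial / R_final)^2
B_ns = B_progenitor * (R_progenitor / R_ns) ** 2
print(f"flux-frozen field ≈ {B_ns:.1e} T")   # ≈ 3e7 T: within the observed range,
                                             # but well short of magnetar strengths,
                                             # echoing the caveat above
```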

Gravity and equation of state

Gravitational light deflection at a neutron star. Due to relativistic light deflection over half the surface is visible (each grid patch represents 30 by 30 degrees). In natural units, this star's mass is 1 and its radius is 4, or twice its Schwarzschild radius.

The gravitational field at a neutron star's surface is about 2×10¹¹ times stronger than on Earth, at around 2.0×10¹² m/s². Such a strong gravitational field acts as a gravitational lens and bends the radiation emitted by the neutron star such that parts of the normally invisible rear surface become visible. If the radius of the neutron star is 3GM/c² or less, photons may be trapped in an orbit, making the whole surface of that neutron star visible from a single vantage point, along with destabilizing photon orbits at or below that radius.
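For orientation, the critical radius 3GM/c² is easy to evaluate. The sketch below computes it for 1.4 M☉ and 2.0 M☉ stars; typical 10–12 km neutron stars sit just outside this limit, so the photon-trapping case applies only to the most compact configurations.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

for mass_solar in (1.4, 2.0):
    # radius at or below which photons can be trapped in orbit
    r_photon = 3 * G * mass_solar * M_sun / c**2
    print(f"M = {mass_solar} M_sun: 3GM/c^2 ≈ {r_photon / 1e3:.1f} km")
# ≈ 6.2 km and 8.9 km respectively
```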

A fraction of the mass of a star that collapses to form a neutron star is released in the supernova explosion from which it forms (from the law of mass–energy equivalence, E = mc²). The energy comes from the gravitational binding energy of the neutron star.

Hence, the gravitational force of a typical neutron star is huge. If an object were to fall from a height of one meter on a neutron star 12 kilometers in radius, it would reach the ground at around 1400 kilometers per second. However, even before impact, the tidal force would cause spaghettification, breaking any sort of an ordinary object into a stream of material.

Because of the enormous gravity, time dilation between a neutron star and Earth is significant. For example, eight years could pass on the surface of a neutron star, yet ten years would have passed on Earth, not including the time-dilation effect of its very rapid rotation.
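Both effects can be estimated with textbook formulas: the impact speed from a 1 m drop is roughly √(2gh) with g = GM/R², and the surface clock rate relative to a distant observer is √(1 − 2GM/(Rc²)). A sketch assuming a 1.4 M☉ star with a 12 km radius (illustrative values; the 1400 km/s quoted above corresponds to slightly different assumptions):

```python
import math

G = 6.674e-11
c = 2.998e8
M_sun = 1.989e30

M = 1.4 * M_sun          # assumed neutron-star mass
R = 12e3                 # assumed radius, m

# Newtonian impact speed after falling 1 m in the surface gravity g = GM/R^2
g = G * M / R**2
v_impact = math.sqrt(2 * g * 1.0)
print(f"impact speed from 1 m ≈ {v_impact / 1e3:.0f} km/s")      # ~1600 km/s

# Gravitational time-dilation factor at the surface, sqrt(1 - r_s / R)
r_s = 2 * G * M / c**2
factor = math.sqrt(1 - r_s / R)
print(f"surface clock rate ≈ {factor:.2f} of a distant clock")   # ~0.81, i.e. ~8 years per 10
```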

Neutron star relativistic equations of state describe the relation of radius vs. mass for various models. The most likely radii for a given neutron star mass are bracketed by models AP4 (smallest radius) and MS2 (largest radius). BE is the ratio of gravitational binding energy mass equivalent to the observed neutron star gravitational mass of "M" kilograms with radius "R" meters,

BE = 0.60 β / (1 − β/2),  where β = G M / (R c²).

Given current values

G = 6.674×10⁻¹¹ m³ kg⁻¹ s⁻², c² = 8.988×10¹⁶ m² s⁻², M☉ = 1.989×10³⁰ kg,

and star masses "M" commonly reported as multiples of one solar mass, Mₓ = M / M☉,

then the relativistic fractional binding energy of a neutron star is

BE = 885.975 Mₓ / (R − 738.313 Mₓ).

A 2 M☉ neutron star would not be more compact than 10,970 meters radius (AP4 model). Its mass fraction gravitational binding energy would then be 0.187, i.e. −18.7% (exothermic). This is not near 0.6/2 = 0.3, i.e. −30%.
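As a quick check of that example, the sketch below evaluates the fractional binding energy from the expression above for the 2 M☉, 10,970 m (AP4) case and reproduces the 0.187 figure.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def binding_energy_fraction(mass_solar, radius_m):
    """Fractional gravitational binding energy BE = 0.6*beta / (1 - beta/2),
    with beta = G*M / (R*c^2), as in the expression above."""
    beta = G * mass_solar * M_sun / (radius_m * c**2)
    return 0.6 * beta / (1 - beta / 2)

# The 2 solar-mass, 10,970 m (AP4) case quoted above
print(f"BE ≈ {binding_energy_fraction(2.0, 10_970):.3f}")   # ≈ 0.187
```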

The equation of state for a neutron star is not yet known. It is assumed that it differs significantly from that of a white dwarf, whose equation of state is that of a degenerate gas that can be described in close agreement with special relativity. However, with a neutron star the increased effects of general relativity can no longer be ignored. Several equations of state have been proposed (FPS, UU, APR, L, SLy, and others) and current research is still attempting to constrain the theories to make predictions of neutron star matter. This means that the relation between density and mass is not fully known, and this causes uncertainties in radius estimates. For example, a 1.5 M☉ neutron star could have a radius of 10.7, 11.1, 12.1 or 15.1 kilometers (for EOS FPS, UU, APR or L respectively).

Structure

Cross-section of a neutron star. Densities are in terms of ρ₀, the saturation nuclear matter density, where nucleons begin to touch.

Current understanding of the structure of neutron stars is defined by existing mathematical models, but it might be possible to infer some details through studies of neutron-star oscillations. Asteroseismology, a study applied to ordinary stars, can reveal the inner structure of neutron stars by analyzing observed spectra of stellar oscillations.

Current models indicate that matter at the surface of a neutron star is composed of ordinary atomic nuclei crushed into a solid lattice with a sea of electrons flowing through the gaps between them. It is possible that the nuclei at the surface are iron, due to iron's high binding energy per nucleon. It is also possible that heavy elements, such as iron, simply sink beneath the surface, leaving only light nuclei like helium and hydrogen. If the surface temperature exceeds 10⁶ kelvin (as in the case of a young pulsar), the surface should be fluid instead of the solid phase that might exist in cooler neutron stars (temperature < 10⁶ kelvin).

The "atmosphere" of a neutron star is hypothesized to be at most several micrometers thick, and its dynamics are fully controlled by the neutron star's magnetic field. Below the atmosphere one encounters a solid "crust". This crust is extremely hard and very smooth (with maximum surface irregularities of ~5 mm), due to the extreme gravitational field.

Proceeding inward, one encounters nuclei with ever-increasing numbers of neutrons; such nuclei would decay quickly on Earth, but are kept stable by tremendous pressures. As this process continues at increasing depths, the neutron drip becomes overwhelming, and the concentration of free neutrons increases rapidly. In that region, there are nuclei, free electrons, and free neutrons. The nuclei become increasingly small (gravity and pressure overwhelming the strong force) until the core is reached, by definition the point where mostly neutrons exist. The expected hierarchy of phases of nuclear matter in the inner crust has been characterized as "nuclear pasta", with fewer voids and larger structures towards higher pressures. The composition of the superdense matter in the core remains uncertain. One model describes the core as superfluid neutron-degenerate matter (mostly neutrons, with some protons and electrons). More exotic forms of matter are possible, including degenerate strange matter (containing strange quarks in addition to up and down quarks), matter containing high-energy pions and kaons in addition to neutrons, or ultra-dense quark-degenerate matter.

Radiation

Animation of a rotating pulsar. The sphere in the middle represents the neutron star, the curves indicate the magnetic field lines and the protruding cones represent the emission zones.

Pulsars

Neutron stars are detected through their electromagnetic radiation. They are usually observed to pulse radio waves and other electromagnetic radiation, and neutron stars observed with pulses are called pulsars.

Pulsars' radiation is thought to be caused by particle acceleration near their magnetic poles, which need not be aligned with the rotational axis of the neutron star. It is thought that a large electrostatic field builds up near the magnetic poles, leading to electron emission. These electrons are magnetically accelerated along the field lines, leading to curvature radiation, with the radiation being strongly polarized towards the plane of curvature. In addition, high energy photons can interact with lower energy photons and the magnetic field for electron−positron pair production, which through electron–positron annihilation leads to further high energy photons.

The radiation emanating from the magnetic poles of neutron stars can be described as magnetospheric radiation, in reference to the magnetosphere of the neutron star. It is not to be confused with magnetic dipole radiation, which is emitted because the magnetic axis is not aligned with the rotational axis, with a radiation frequency the same as the neutron star's rotational frequency.

If the axis of rotation of the neutron star differs from the magnetic axis, external viewers will only see these beams of radiation whenever the magnetic axis points towards them as the neutron star rotates. Therefore, periodic pulses are observed, at the same rate as the rotation of the neutron star.

Non-pulsating neutron stars

In addition to pulsars, non-pulsating neutron stars have also been identified, although they may have minor periodic variation in luminosity. This seems to be a characteristic of the X-ray sources known as Central Compact Objects in Supernova remnants (CCOs in SNRs), which are thought to be young, radio-quiet isolated neutron stars.

Spectra

In addition to radio emissions, neutron stars have also been identified in other parts of the electromagnetic spectrum. This includes visible light, near infrared, ultraviolet, X-rays, and gamma rays. Pulsars observed in X-rays are known as X-ray pulsars if accretion-powered, while those identified in visible light are known as optical pulsars. The majority of neutron stars detected, including those identified in optical, X-ray, and gamma rays, also emit radio waves; the Crab Pulsar produces electromagnetic emissions across the spectrum. However, there exist neutron stars called radio-quiet neutron stars, with no radio emissions detected.

Rotation

Neutron stars rotate extremely rapidly after their formation due to the conservation of angular momentum; in analogy to spinning ice skaters pulling in their arms, the slow rotation of the original star's core speeds up as it shrinks. A newborn neutron star can rotate many times a second.

Spin down

P–Ṗ diagram for known rotation-powered pulsars (red), anomalous X-ray pulsars (green), high-energy emission pulsars (blue) and binary pulsars (pink)

Over time, neutron stars slow, as their rotating magnetic fields in effect radiate energy associated with the rotation; older neutron stars may take several seconds for each revolution. This is called spin down. The rate at which a neutron star slows its rotation is usually constant and very small.

The periodic time (P) is the rotational period, the time for one rotation of a neutron star. The spin-down rate, the rate of slowing of rotation, is then given the symbol Ṗ (P-dot), the derivative of P with respect to time. It is defined as periodic time increase per unit time; it is a dimensionless quantity, but can be given the units of s⋅s⁻¹ (seconds per second).

The spin-down rate (Ṗ) of neutron stars usually falls within the range of 10⁻²² to 10⁻⁹ s⋅s⁻¹, with the shorter-period (faster-rotating) observable neutron stars usually having smaller Ṗ. As a neutron star ages, its rotation slows (as P increases); eventually, the rate of rotation will become too slow to power the radio-emission mechanism, and the neutron star can no longer be detected.

P and Ṗ allow minimum magnetic fields of neutron stars to be estimated. P and Ṗ can also be used to calculate the characteristic age of a pulsar, but this gives an estimate which is somewhat larger than the true age when it is applied to young pulsars.
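Both estimates follow from simple combinations of P and Ṗ: the characteristic age is τ = P/(2Ṗ), and the conventional minimum dipole field is B ≈ 3.2×10¹⁹ √(P·Ṗ) gauss. The sketch below uses approximate published values for the Crab Pulsar purely as an illustration; they are assumptions here, not figures from this article.

```python
import math

SECONDS_PER_YEAR = 3.156e7

# Approximate values for the Crab Pulsar (illustrative)
P = 0.033          # rotation period, s
P_dot = 4.2e-13    # spin-down rate, s/s

# Characteristic age: tau = P / (2 * P_dot)
tau_years = P / (2 * P_dot) / SECONDS_PER_YEAR
print(f"characteristic age ≈ {tau_years:.0f} yr")   # ~1200 yr, close to (but above) the
                                                    # true age set by the 1054 AD supernova

# Conventional minimum (dipole) surface field estimate: B ≈ 3.2e19 * sqrt(P * P_dot) gauss
B_gauss = 3.2e19 * math.sqrt(P * P_dot)
print(f"minimum surface field ≈ {B_gauss:.1e} G ≈ {B_gauss / 1e4:.1e} T")
```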

P and Ṗ can also be combined with a neutron star's moment of inertia to estimate a quantity called the spin-down luminosity, which is given the symbol Ė (E-dot). It is not the measured luminosity, but rather the calculated loss rate of rotational energy that would manifest itself as radiation. For neutron stars where the spin-down luminosity is comparable to the actual luminosity, the neutron stars are said to be "rotation powered". The observed luminosity of the Crab Pulsar is comparable to its spin-down luminosity, supporting the model that rotational kinetic energy powers the radiation from it. With neutron stars such as magnetars, where the actual luminosity exceeds the spin-down luminosity by about a factor of one hundred, it is assumed that the luminosity is powered by magnetic dissipation, rather than being rotation powered.
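The spin-down luminosity follows from the rotational energy E = ½IΩ², giving Ė = 4π²IṖ/P³ for a star of moment of inertia I. A sketch using the canonical I ≈ 10³⁸ kg·m² and the same illustrative Crab values as above:

```python
import math

I = 1e38           # canonical neutron-star moment of inertia, kg m^2 (assumed)
P = 0.033          # Crab Pulsar period, s (illustrative)
P_dot = 4.2e-13    # Crab Pulsar spin-down rate, s/s (illustrative)

# Spin-down luminosity: E_dot = 4 * pi^2 * I * P_dot / P^3
E_dot = 4 * math.pi**2 * I * P_dot / P**3
print(f"spin-down luminosity ≈ {E_dot:.1e} W")   # ~5e31 W for the Crab
```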

P and Ṗ can also be plotted for neutron stars to create a P–Ṗ diagram. It encodes a tremendous amount of information about the pulsar population and its properties, and has been likened to the Hertzsprung–Russell diagram in its importance for neutron stars.

Spin up

Neutron star rotational speeds can increase, a process known as spin up. Sometimes neutron stars absorb orbiting matter from companion stars, increasing the rotation rate and reshaping the neutron star into an oblate spheroid. In the case of millisecond pulsars, this raises the rotation rate to over a hundred times per second.

The most rapidly rotating neutron star currently known, PSR J1748-2446ad, rotates at 716 revolutions per second. A 2007 paper reported the detection of an X-ray burst oscillation, which provides an indirect measure of spin, of 1122 Hz from the neutron star XTE J1739-285, suggesting 1122 rotations a second. However, at present, this signal has only been seen once, and should be regarded as tentative until confirmed in another burst from that star.

Glitches and starquakes

NASA artist's conception of a "starquake", or "stellar quake".

Sometimes a neutron star will undergo a glitch, a sudden small increase of its rotational speed, or spin up. Glitches are thought to be the effect of a starquake: as the rotation of the neutron star slows, its shape becomes more spherical. Due to the stiffness of the neutron crust, this happens as discrete events when the crust ruptures, creating a starquake similar to earthquakes. After the starquake, the star will have a smaller equatorial radius, and because angular momentum is conserved, its rotational speed increases.

A starquake occurring in a magnetar, with a resulting glitch, is the leading hypothesis for the gamma-ray sources known as soft gamma repeaters.

Recent work, however, suggests that a starquake would not release sufficient energy for a neutron star glitch; it has been suggested that glitches may instead be caused by transitions of vortices in the theoretical superfluid core of the neutron star from one metastable energy state to a lower one, thereby releasing energy that appears as an increase in the rotation rate.

"Anti-glitches"

An "anti-glitch", a sudden small decrease in rotational speed, or spin down, of a neutron star has also been reported. It occurred in the magnetar 1E 2259+586, that in one case produced an X-ray luminosity increase of a factor of 20, and a significant spin-down rate change. Current neutron star models do not predict this behavior. If the cause was internal, it suggests differential rotation of solid outer crust and the superfluid component of the magnetar's inner structure.

Population and distances

Central neutron star at the heart of the Crab Nebula.

At present, there are about 2,000 known neutron stars in the Milky Way and the Magellanic Clouds, the majority of which have been detected as radio pulsars. Neutron stars are mostly concentrated along the disk of the Milky Way, although the spread perpendicular to the disk is large because the supernova explosion process can impart high translational speeds (400 km/s) to the newly formed neutron star.

Some of the closest known neutron stars are RX J1856.5−3754, which is about 400 light-years from Earth, and PSR J0108−1431 at about 424 light-years. RX J1856.5−3754 is a member of a close group of neutron stars called The Magnificent Seven. Another nearby neutron star that was detected transiting the backdrop of the constellation Ursa Minor has been nicknamed Calvera by its Canadian and American discoverers, after the villain in the 1960 film The Magnificent Seven. This rapidly moving object was discovered using the ROSAT Bright Source Catalog.

Neutron stars are only detectable with modern technology during the earliest stages of their lives (almost always less than 1 million years) and are vastly outnumbered by older neutron stars that would only be detectable through their blackbody radiation and gravitational effects on other stars.

Binary neutron star systems

Circinus X-1: X-ray light rings from a binary neutron star (24 June 2015; Chandra X-ray Observatory)

About 5% of all known neutron stars are members of a binary system. The formation and evolution of binary neutron stars can be a complex process. Neutron stars have been observed in binaries with ordinary main-sequence stars, red giants, white dwarfs, or other neutron stars. According to modern theories of binary evolution, it is expected that neutron stars also exist in binary systems with black hole companions. The merger of binaries containing two neutron stars, or a neutron star and a black hole, has been observed through the emission of gravitational waves.

X-ray binaries

Binary systems containing neutron stars often emit X-rays, which are emitted by hot gas as it falls towards the surface of the neutron star. The source of the gas is the companion star, the outer layers of which can be stripped off by the gravitational force of the neutron star if the two stars are sufficiently close. As the neutron star accretes this gas, its mass can increase; if enough mass is accreted, the neutron star may collapse into a black hole.

Neutron star binary mergers and nucleosynthesis

The distance between two neutron stars in a close binary system is observed to shrink as gravitational waves are emitted. Ultimately, the neutron stars will come into contact and coalesce. The coalescence of binary neutron stars is one of the leading models for the origin of short gamma-ray bursts. Strong evidence for this model came from the observation of a kilonova associated with the short-duration gamma-ray burst GRB 130603B, and finally confirmed by detection of gravitational wave GW170817 and short GRB 170817A by LIGO, Virgo, and 70 observatories covering the electromagnetic spectrum observing the event. The light emitted in the kilonova is believed to come from the radioactive decay of material ejected in the merger of the two neutron stars. This material may be responsible for the production of many of the chemical elements beyond iron, as opposed to the supernova nucleosynthesis theory.

Planets

An artist's conception of a pulsar planet with bright aurorae.

Neutron stars can host exoplanets. These can be original, circumbinary, captured, or the result of a second round of planet formation. Pulsars can also strip the atmosphere off from a star, leaving a planetary-mass remnant, which may be understood as a chthonian planet or a stellar object depending on interpretation. For pulsars, such pulsar planets can be detected with the pulsar timing method, which allows for high precision and detection of much smaller planets than with other methods. Two systems have been definitively confirmed. The first exoplanets ever to be detected were the three planets Draugr, Poltergeist and Phobetor around PSR B1257+12, discovered in 1992–1994. Of these, Draugr is the smallest exoplanet ever detected, at a mass of twice that of the Moon. Another system is PSR B1620−26, where a circumbinary planet orbits a neutron star-white dwarf binary system. Also, there are several unconfirmed candidates. Pulsar planets receive little visible light, but massive amounts of ionizing radiation and high-energy stellar wind, which makes them rather hostile environments.

History of discoveries

The first direct observation of a neutron star in visible light. The neutron star is RX J1856.5−3754.

At the meeting of the American Physical Society in December 1933 (the proceedings were published in January 1934), Walter Baade and Fritz Zwicky proposed the existence of neutron stars, less than two years after the discovery of the neutron by James Chadwick. Seeking an explanation for the origin of a supernova, they tentatively proposed that in supernova explosions ordinary stars are turned into stars that consist of extremely closely packed neutrons, which they called neutron stars. Baade and Zwicky correctly proposed at that time that the release of the gravitational binding energy of the neutron stars powers the supernova: "In the supernova process, mass in bulk is annihilated". Neutron stars were thought to be too faint to be detectable and little work was done on them until November 1967, when Franco Pacini pointed out that if neutron stars were spinning and had large magnetic fields, then electromagnetic waves would be emitted. Unbeknown to him, radio astronomer Antony Hewish and his research assistant Jocelyn Bell at Cambridge were shortly to detect radio pulses from stars that are now believed to be highly magnetized, rapidly spinning neutron stars, known as pulsars.

In 1965, Antony Hewish and Samuel Okoye discovered "an unusual source of high radio brightness temperature in the Crab Nebula". This source turned out to be the Crab Pulsar that resulted from the great supernova of 1054.

In 1967, Iosif Shklovsky examined the X-ray and optical observations of Scorpius X-1 and correctly concluded that the radiation comes from a neutron star at the stage of accretion.

In 1967, Jocelyn Bell Burnell and Antony Hewish discovered regular radio pulses from PSR B1919+21. This pulsar was later interpreted as an isolated, rotating neutron star. The energy source of the pulsar is the rotational energy of the neutron star. The majority of known neutron stars (about 2000, as of 2010) have been discovered as pulsars, emitting regular radio pulses.

In 1971, Riccardo Giacconi, Herbert Gursky, Ed Kellogg, R. Levinson, E. Schreier, and H. Tananbaum discovered 4.8 second pulsations in an X-ray source in the constellation Centaurus, Cen X-3. They interpreted this as resulting from a rotating hot neutron star. The energy source is gravitational and results from a rain of gas falling onto the surface of the neutron star from a companion star or the interstellar medium.

In 1974, Antony Hewish was awarded the Nobel Prize in Physics "for his decisive role in the discovery of pulsars" without Jocelyn Bell who shared in the discovery.

In 1974, Joseph Taylor and Russell Hulse discovered the first binary pulsar, PSR B1913+16, which consists of two neutron stars (one seen as a pulsar) orbiting around their center of mass. Albert Einstein's general theory of relativity predicts that massive objects in short binary orbits should emit gravitational waves, and thus that their orbit should decay with time. This was indeed observed, precisely as general relativity predicts, and in 1993, Taylor and Hulse were awarded the Nobel Prize in Physics for this discovery.

In 1982, Don Backer and colleagues discovered the first millisecond pulsar, PSR B1937+21. This object spins 642 times per second, a value that placed fundamental constraints on the mass and radius of neutron stars. Many millisecond pulsars were later discovered, but PSR B1937+21 remained the fastest-spinning known pulsar for 24 years, until PSR J1748-2446ad (which spins more than 700 times a second) was discovered.

In 2003, Marta Burgay and colleagues discovered the first double neutron star system where both components are detectable as pulsars, PSR J0737−3039. The discovery of this system allows a total of 5 different tests of general relativity, some of these with unprecedented precision.

In 2010, Paul Demorest and colleagues measured the mass of the millisecond pulsar PSR J1614−2230 to be 1.97±0.04 M☉, using Shapiro delay. This was substantially higher than any previously measured neutron star mass (1.67 M☉, see PSR J1903+0327), and places strong constraints on the interior composition of neutron stars.

In 2013, John Antoniadis and colleagues measured the mass of PSR J0348+0432 to be 2.01±0.04 M☉, using white dwarf spectroscopy. This confirmed the existence of such massive stars using a different method. Furthermore, this allowed, for the first time, a test of general relativity using such a massive neutron star.

In August 2017, LIGO and Virgo made the first detection of gravitational waves produced by colliding neutron stars.

In October 2018, astronomers reported that GRB 150101B, a gamma-ray burst event detected in 2015, may be directly related to the historic GW170817 and associated with the merger of two neutron stars. The similarities between the two events, in terms of gamma-ray, optical and X-ray emissions, as well as the nature of the associated host galaxies, are "striking", suggesting that the two separate events may both be the result of the merger of neutron stars, and that both may be kilonovae, which may be more common in the universe than previously understood, according to the researchers.

In July 2019, astronomers reported that a new method to determine the Hubble constant, and resolve the discrepancy of earlier methods, has been proposed based on the mergers of pairs of neutron stars, following the detection of the neutron star merger of GW170817. Their measurement of the Hubble constant is 70.3 +5.3/−5.0 (km/s)/Mpc.

Subtypes table

Different Types of Neutron Stars (24 June 2020)
  • Neutron star
    • Isolated neutron star (INS): not in a binary system.
      • Rotation-powered pulsar (RPP or "radio pulsar"): neutron stars that emit directed pulses of radiation towards us at regular intervals (due to their strong magnetic fields).
        • Rotating radio transients (RRATs): thought to be pulsars which emit more sporadically and/or with higher pulse-to-pulse variability than the bulk of the known pulsars.
      • Magnetar: a neutron star with an extremely strong magnetic field (1000 times stronger than that of a regular neutron star) and a long rotation period (5 to 12 seconds).
      • Radio-quiet neutron stars.
        • X-ray dim isolated neutron stars.
        • Central compact objects in supernova remnants (CCOs in SNRs): young, radio-quiet non-pulsating X-ray sources, thought to be Isolated Neutron Stars surrounded by supernova remnants.
    • X-ray pulsars or "accretion-powered pulsars": a class of X-ray binaries.
      • Low-mass X-ray binary pulsars: a class of low-mass X-ray binaries (LMXB), a pulsar with a main sequence star, white dwarf or red giant.
        • Millisecond pulsar (MSP) ("recycled pulsar").
          • "Spider Pulsar", a pulsar where their companion is a semi-degenerate star.
            • "Black Widow" pulsar, a pulsar that falls under the "Spider Pulsar" if the companion has extremely low mass (less than 0.1 solar masses).
            • "Redback" pulsar, are if the companion is more massive.
          • Sub-millisecond pulsar.
        • X-ray burster: a neutron star with a low mass binary companion from which matter is accreted resulting in irregular bursts of energy from the surface of the neutron star.
      • Intermediate-mass X-ray binary pulsars: a class of intermediate-mass X-ray binaries (IMXB), a pulsar with an intermediate mass star.
      • High-mass X-ray binary pulsars: a class of high-mass X-ray binaries (HMXB), a pulsar with a massive star.
      • Binary pulsars: a pulsar with a binary companion, often a white dwarf or neutron star.
      • X-ray tertiary (theorized).
  • Theorized compact stars with similar properties.
    • Protoneutron star (PNS), theorized.
    • Exotic star
      • Thorne–Żytkow object: currently a hypothetical merger of a neutron star into a red giant star.
      • Quark star: currently a hypothetical type of neutron star composed of quark matter, or strange matter. As of 2018, there are three candidates.
      • Electroweak star: currently a hypothetical type of extremely heavy neutron star, in which the quarks are converted to leptons through the electroweak force, but the gravitational collapse of the neutron star is prevented by radiation pressure. As of 2018, there is no evidence for their existence.
      • Preon star: currently a hypothetical type of neutron star composed of preon matter. As of 2018, there is no evidence for the existence of preons.

Examples of neutron stars

Artist's impression of a disc around the neutron star RX J0806.4-4123.

Numerical relativity

From Wikipedia, the free encyclopedia

Numerical relativity is one of the branches of general relativity that uses numerical methods and algorithms to solve and analyze problems. To this end, supercomputers are often employed to study black holes, gravitational waves, neutron stars and many other phenomena governed by Einstein's theory of general relativity. A currently active field of research in numerical relativity is the simulation of relativistic binaries and their associated gravitational waves. Other branches are also active.

Overview

A primary goal of numerical relativity is to study spacetimes whose exact form is not known. The spacetimes so found computationally can either be fully dynamical, stationary or static and may contain matter fields or vacuum. In the case of stationary and static solutions, numerical methods may also be used to study the stability of the equilibrium spacetimes. In the case of dynamical spacetimes, the problem may be divided into the initial value problem and the evolution, each requiring different methods.

Numerical relativity is applied to many areas, such as cosmological models, critical phenomena, perturbed black holes and neutron stars, and the coalescence of black holes and neutron stars, for example. In any of these cases, Einstein's equations can be formulated in several ways that allow us to evolve the dynamics. While Cauchy methods have received a majority of the attention, characteristic and Regge calculus based methods have also been used. All of these methods begin with a snapshot of the gravitational fields on some hypersurface, the initial data, and evolve these data to neighboring hypersurfaces.
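As a loose illustration of this Cauchy-evolution pattern (and emphatically not of the Einstein equations themselves), the toy sketch below evolves a one-dimensional flat-space scalar wave slice by slice from initial data, using a leapfrog finite-difference scheme; the grid size, pulse shape, and Courant factor are arbitrary choices.

```python
import numpy as np

# Toy Cauchy evolution: a 1-D flat-space scalar wave u_tt = u_xx, evolved
# slice-by-slice with a leapfrog scheme. This is NOT the Einstein equations;
# it only illustrates the pattern "initial data on a hypersurface -> evolve
# to neighbouring hypersurfaces" described above.
nx, dx = 200, 0.05
dt = 0.5 * dx                        # Courant factor < 1 for stability
x = np.arange(nx) * dx

u_prev = np.exp(-((x - 5.0) ** 2))   # initial data: a Gaussian pulse
u_curr = u_prev.copy()               # zero initial time derivative

for step in range(400):
    u_next = np.zeros_like(u_curr)
    u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                    + (dt / dx) ** 2 * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
    u_prev, u_curr = u_curr, u_next  # advance to the next time slice

print(f"max |u| after evolution: {np.abs(u_curr).max():.3f}")
```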

Like all problems in numerical analysis, careful attention is paid to the stability and convergence of the numerical solutions. In this line, much attention is paid to the gauge conditions, coordinates, and various formulations of the Einstein equations and the effect they have on the ability to produce accurate numerical solutions.

Numerical relativity research is distinct from work on classical field theories as many techniques implemented in these areas are inapplicable in relativity. Many facets are however shared with large scale problems in other computational sciences like computational fluid dynamics, electromagnetics, and solid mechanics. Numerical relativists often work with applied mathematicians and draw insight from numerical analysis, scientific computation, partial differential equations, and geometry among other mathematical areas of specialization.

History

Foundations in theory

Albert Einstein published his theory of general relativity in 1915. It, like his earlier theory of special relativity, described space and time as a unified spacetime subject to what are now known as the Einstein field equations. These form a set of coupled nonlinear partial differential equations (PDEs). After more than 100 years since the first publication of the theory, relatively few closed-form solutions are known for the field equations, and, of those, most are cosmological solutions that assume special symmetry to reduce the complexity of the equations.

The field of numerical relativity emerged from the desire to construct and study more general solutions to the field equations by approximately solving the Einstein equations numerically. A necessary precursor to such attempts was a decomposition of spacetime back into separated space and time. This was first published by Richard Arnowitt, Stanley Deser, and Charles W. Misner in the late 1950s in what has become known as the ADM formalism. Although for technical reasons the precise equations formulated in the original ADM paper are rarely used in numerical simulations, most practical approaches to numerical relativity use a "3+1 decomposition" of spacetime into three-dimensional space and one-dimensional time that is closely related to the ADM formulation, because the ADM procedure reformulates the Einstein field equations into a constrained initial value problem that can be addressed using computational methodologies.

At the time that ADM published their original paper, computer technology would not have supported numerical solution to their equations on any problem of any substantial size. The first documented attempt to solve the Einstein field equations numerically appears to be Hahn and Lindquist in 1964, followed soon thereafter by Smarr and by Eppley. These early attempts were focused on evolving Misner data in axisymmetry (also known as "2+1 dimensions"). At around the same time Tsvi Piran wrote the first code that evolved a system with gravitational radiation using a cylindrical symmetry. In this calculation, Piran set the foundation for many of the concepts used today in evolving ADM equations, like "free evolution" versus "constrained evolution", which deal with the fundamental problem of treating the constraint equations that arise in the ADM formalism. Applying symmetry reduced the computational and memory requirements associated with the problem, allowing the researchers to obtain results on the supercomputers available at the time.

Early results

The first realistic calculations of rotating collapse were carried out in the early eighties by Richard Stark and Tsvi Piran in which the gravitational wave forms resulting from formation of a rotating black hole were calculated for the first time. For nearly 20 years following the initial results, there were fairly few other published results in numerical relativity, probably due to the lack of sufficiently powerful computers to address the problem. In the late 1990s, the Binary Black Hole Grand Challenge Alliance successfully simulated a head-on binary black hole collision. As a post-processing step the group computed the event horizon for the spacetime. This result still required imposing and exploiting axisymmetry in the calculations.

Some of the first documented attempts to solve the Einstein equations in three dimensions were focused on a single Schwarzschild black hole, which is described by a static and spherically symmetric solution to the Einstein field equations. This provides an excellent test case in numerical relativity because it does have a closed-form solution so that numerical results can be compared to an exact solution, because it is static, and because it contains one of the most numerically challenging features of relativity theory, a physical singularity. One of the earliest groups to attempt to simulate this solution was Anninos et al. in 1995. In their paper they point out that

"Progress in three dimensional numerical relativity has been impeded in part by lack of computers with sufficient memory and computational power to perform well resolved calculations of 3D spacetimes."

Maturation of the field

In the years that followed, not only did computers become more powerful, but also various research groups developed alternate techniques to improve the efficiency of the calculations. With respect to black hole simulations specifically, two techniques were devised to avoid problems associated with the existence of physical singularities in the solutions to the equations: (1) excision, and (2) the "puncture" method. In addition, the Lazarus group developed techniques for using early results from a short-lived simulation solving the nonlinear ADM equations, in order to provide initial data for a more stable code based on linearized equations derived from perturbation theory. More generally, adaptive mesh refinement techniques, already used in computational fluid dynamics, were introduced to the field of numerical relativity.

Excision

In the excision technique, which was first proposed in the late 1990s, a portion of a spacetime inside of the event horizon surrounding the singularity of a black hole is simply not evolved. In theory this should not affect the solution to the equations outside of the event horizon because of the principle of causality and properties of the event horizon (i.e. nothing physical inside the black hole can influence any of the physics outside the horizon). Thus if one simply does not solve the equations inside the horizon one should still be able to obtain valid solutions outside. One "excises" the interior by imposing ingoing boundary conditions on a boundary surrounding the singularity but inside the horizon. While the implementation of excision has been very successful, the technique has two minor problems. The first is that one has to be careful about the coordinate conditions. While physical effects cannot propagate from inside to outside, coordinate effects could. For example, if the coordinate conditions were elliptical, coordinate changes inside could instantly propagate out through the horizon. This then means that one needs hyperbolic-type coordinate conditions with characteristic velocities less than that of light for the propagation of coordinate effects (e.g., using harmonic coordinate conditions). The second problem is that as the black holes move, one must continually adjust the location of the excision region so that it moves with the black hole.
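The logic of excision can be mimicked in a toy one-dimensional problem: a left-moving pulse obeying an advection equation whose characteristics all point into an "excised" region, so that an upwind update at the excision boundary never needs data from inside it. The sketch below is only an analogy for the causal argument above; the grid, speeds, and excision location are arbitrary.

```python
import numpy as np

# Toy excision: a left-moving pulse u_t + a*u_x = 0 (a = -1) on x in [0, 10].
# Points with x < 2 are "excised" and never updated, in analogy with the region
# inside a horizon. Because the characteristics run toward the excised region,
# the upwind update at the excision boundary only needs data from outside it,
# so no inner boundary condition is required.
nx, dx = 200, 0.05
dt = 0.4 * dx
a = -1.0
x = np.arange(nx) * dx
excised = x < 2.0                      # mask of excised grid points

u = np.exp(-((x - 7.0) ** 2))          # initial pulse, outside the excised region
for step in range(300):
    u_new = u.copy()
    # upwind (forward) difference for a < 0; uses only data to the right
    u_new[:-1] = u[:-1] - a * dt / dx * (u[1:] - u[:-1])
    u_new[excised] = 0.0               # excised points are simply not evolved
    u = u_new

# by t = 6 the pulse has propagated cleanly into the excised region
print(f"max |u| outside the excised region: {np.abs(u[~excised]).max():.3f}")
```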

The excision technique was developed over several years including the development of new gauge conditions that increased stability and work that demonstrated the ability of the excision regions to move through the computational grid. The first stable, long-term evolution of the orbit and merger of two black holes using this technique was published in 2005.

Punctures

In the puncture method the solution is factored into an analytical part, which contains the singularity of the black hole, and a numerically constructed part, which is then singularity free. This is a generalization of the Brill–Lindquist prescription for initial data of black holes at rest and can be generalized to the Bowen–York prescription for spinning and moving black hole initial data. Until 2005, all published usage of the puncture method required that the coordinate position of all punctures remain fixed during the course of the simulation. Of course, black holes in proximity to each other will tend to move under the force of gravity, so the fact that the coordinate position of the puncture remained fixed meant that the coordinate systems themselves became "stretched" or "twisted," and this typically led to numerical instabilities at some stage of the simulation.

Breakthrough

In 2005 researchers demonstrated for the first time the ability to allow punctures to move through the coordinate system, thus eliminating some of the earlier problems with the method. This allowed accurate long-term evolutions of black holes. By choosing appropriate coordinate conditions and making crude analytic assumptions about the fields near the singularity (since no physical effects can propagate out of the black hole, the crudeness of the approximations does not matter), numerical solutions could be obtained to the problem of two black holes orbiting each other, as well as accurate computation of gravitational radiation (ripples in spacetime) emitted by them.

Lazarus project

The Lazarus project (1998–2005) was developed as a post-Grand Challenge technique to extract astrophysical results from short-lived full numerical simulations of binary black holes. It combined approximation techniques applied before the merger (post-Newtonian trajectories) and after it (perturbations of single black holes) with full numerical simulations that attempted to solve the general relativity field equations. All previous attempts to numerically integrate on supercomputers the Hilbert–Einstein equations describing the gravitational field around binary black holes led to software failure before a single orbit was completed.

The Lazarus approach, in the meantime, gave the best insight into the binary black hole problem and produced numerous and relatively accurate results, such as the radiated energy and angular momentum emitted in the final stage of merging, the linear momentum radiated by unequal mass holes, and the final mass and spin of the remnant black hole. The method also computed detailed gravitational waves emitted by the merger process and predicted that the collision of black holes is the most energetic single event in the Universe, releasing more energy in a fraction of a second in the form of gravitational radiation than an entire galaxy in its lifetime.

Adaptive mesh refinement

Adaptive mesh refinement (AMR) as a numerical method has roots that go well beyond its first application in the field of numerical relativity. Mesh refinement first appears in the numerical relativity literature in the 1980s, through the work of Choptuik in his studies of critical collapse of scalar fields. The original work was in one dimension, but it was subsequently extended to two dimensions. In two dimensions, AMR has also been applied to the study of inhomogeneous cosmologies, and to the study of Schwarzschild black holes. The technique has now become a standard tool in numerical relativity and has been used to study the merger of black holes and other compact objects in addition to the propagation of gravitational radiation generated by such astronomical events.

Recent developments

In the past few years, hundreds of research papers have been published, leading to a wide spectrum of mathematical relativity, gravitational wave, and astrophysical results for the orbiting black hole problem. This technique has been extended to astrophysical binary systems involving neutron stars and black holes, and to multiple black holes. One of the most surprising predictions is that the merger of two black holes can give the remnant hole a speed of up to 4000 km/s, allowing it to escape from any known galaxy. The simulations also predict an enormous release of gravitational energy in this merger process, amounting to up to 8% of its total rest mass.

Quantum cognition

From Wikipedia, the free encyclopedia

Quantum cognition is an emerging field which applies the mathematical formalism of quantum theory to model cognitive phenomena such as information processing by the human brain, language, decision making, human memory, concepts and conceptual reasoning, human judgment, and perception. The field clearly distinguishes itself from the quantum mind as it is not reliant on the hypothesis that there is something micro-physical quantum mechanical about the brain. Quantum cognition is based on the quantum-like paradigm or generalized quantum paradigm or quantum structure paradigm that information processing by complex systems such as the brain, taking into account contextual dependence of information and probabilistic reasoning, can be mathematically described in the framework of quantum information and quantum probability theory.

Quantum cognition uses the mathematical formalism of quantum theory to inspire and formalize models of cognition that aim to be an advance over models based on traditional classical probability theory. The field focuses on modeling phenomena in cognitive science that have resisted traditional techniques or where traditional models seem to have reached a barrier (e.g., human memory), and modeling preferences in decision theory that seem paradoxical from a traditional rational point of view (e.g., preference reversals). Since the use of a quantum-theoretic framework is for modeling purposes, the identification of quantum structures in cognitive phenomena does not presuppose the existence of microscopic quantum processes in the human brain.

Main subjects of research

Quantum-like models of information processing ("quantum-like brain")

The brain is definitely a macroscopic physical system operating on scales (of time, space, and temperature) which differ crucially from the corresponding quantum scales. (Macroscopic quantum phenomena, such as the Bose-Einstein condensate, are also characterized by special conditions which are definitely not fulfilled in the brain.) In particular, the brain's temperature is simply too high for it to perform genuine quantum information processing, i.e., to use quantum carriers of information such as photons, ions, or electrons. As is commonly accepted in brain science, the basic unit of information processing is the neuron. It is clear that a neuron cannot be in a superposition of two states, firing and non-firing; hence it cannot produce the superpositions that play the basic role in quantum information processing. Superpositions of mental states are instead created by complex networks of neurons (and these are classical neural networks). The quantum cognition community holds that the activity of such neural networks can produce effects formally described as interference (of probabilities) and entanglement. In principle, the community does not try to create concrete models of the quantum(-like) representation of information in the brain.

The quantum cognition project is based on the observation that various cognitive phenomena are more adequately described by quantum information theory and quantum probability than by the corresponding classical theories (see examples below). Thus the quantum formalism is considered an operational formalism that describes nonclassical processing of probabilistic data. Recent derivations of the complete quantum formalism from simple operational principles for representation of information support the foundations of quantum cognition. The subjective probability viewpoint on quantum probability developed by C. Fuchs and his collaborators also supports the quantum cognition approach, especially using quantum probabilities to describe the process of decision making.

Although at the moment we cannot present concrete neurophysiological mechanisms for the creation of a quantum-like representation of information in the brain, we can present general informational considerations supporting the idea that information processing in the brain matches quantum information and probability. Here, contextuality is the key word; see the monograph of Khrennikov for a detailed presentation of this viewpoint. Quantum mechanics is fundamentally contextual.

Quantum systems do not have objective properties which can be defined independently of the measurement context. (As N. Bohr pointed out, the whole experimental arrangement must be taken into account.) Contextuality implies the existence of incompatible mental variables, violation of the classical law of total probability, and (constructive and destructive) interference effects. Thus the quantum cognition approach can be considered an attempt to formalize the contextuality of mental processes using the mathematical apparatus of quantum mechanics.

Decision making

Suppose a person is given an opportunity to play two rounds of the following gamble: a coin toss will determine whether the subject wins $200 or loses $100. Suppose the subject has decided to play the first round, and does so. Some subjects are then given the result (win or lose) of the first round, while other subjects are not yet given any information about the results. The experimenter then asks whether the subject wishes to play the second round. Performing this experiment with real subjects gives the following results:

  1. When subjects believe they won the first round, the majority of subjects choose to play again on the second round.
  2. When subjects believe they lost the first round, the majority of subjects choose to play again on the second round.

Given these two separate choices, according to the sure-thing principle of rational decision theory, subjects should also play the second round even if they do not know or think about the outcome of the first round. But, experimentally, when subjects are not told the results of the first round, the majority of them decline to play a second round. This finding violates the law of total probability, yet it can be explained as a quantum interference effect in a manner similar to the explanation of the results of the double-slit experiment in quantum physics. Similar violations of the sure-thing principle are seen in empirical studies of the Prisoner's Dilemma and have likewise been modeled in terms of quantum interference.
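A minimal numerical sketch of this interference account (the conditional probabilities and the phase below are illustrative, not fitted to any experimental data):

    import numpy as np

    # Hypothetical conditional probabilities of playing the second round.
    p_win = 0.5                # chance of winning the first round
    p_play_given_win = 0.69    # illustrative: most subjects play again after a win
    p_play_given_lose = 0.58   # illustrative: most subjects play again after a loss

    # Classical law of total probability: the "outcome unknown" case must lie
    # between the two conditional probabilities.
    p_classical = p_win * p_play_given_win + (1 - p_win) * p_play_given_lose

    # Quantum-like account: the two paths (win, lose) carry probability
    # amplitudes that interfere when the outcome of the first round is unknown.
    theta = 2.0  # illustrative relative phase between the two paths (radians)
    amplitude = (np.sqrt(p_win * p_play_given_win)
                 + np.exp(1j * theta) * np.sqrt((1 - p_win) * p_play_given_lose))
    p_quantum = abs(amplitude) ** 2

    print(f"classical prediction : {p_classical:.3f}")  # ~0.64, between the two conditionals
    print(f"with interference    : {p_quantum:.3f}")    # ~0.37, below one half, as observed

The destructive interference term is what lets the "unknown outcome" probability fall outside the range allowed by the classical law of total probability.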

The above deviations from classical rational expectations in agents' decisions under uncertainty produce well-known paradoxes in behavioral economics, namely the Allais, Ellsberg and Machina paradoxes. These deviations can be explained if one assumes that the overall conceptual landscape influences the subject's choice in a way that is neither predictable nor controllable. A decision process is thus an intrinsically contextual process; hence it cannot be modeled in a single Kolmogorovian probability space, which justifies the use of quantum probability models in decision theory. More explicitly, the paradoxical situations above can be represented in a unified Hilbert space formalism where human behavior under uncertainty is explained in terms of genuinely quantum aspects, namely superposition, interference, contextuality and incompatibility.

In automated decision making, quantum decision trees have a different structure from classical decision trees. Data can be analyzed to see whether a quantum decision tree model fits it better.

Human probability judgments

Quantum probability provides a new way to explain human probability judgment errors, including the conjunction and disjunction errors. A conjunction error occurs when a person judges the probability of a likely event L and an unlikely event U occurring together to be greater than the probability of the unlikely event U alone; a disjunction error occurs when a person judges the probability of a likely event L to be greater than the probability of the likely event L or an unlikely event U. Quantum probability theory is a generalization of Bayesian probability theory because it is based on a set of von Neumann axioms that relax some of the classical Kolmogorov axioms. The quantum model introduces a new fundamental concept to cognition: the compatibility versus incompatibility of questions and the effect this can have on the sequential order of judgments. Quantum probability provides a simple account of conjunction and disjunction errors as well as many other findings, such as order effects on probability judgments.
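A small sketch of how incompatible (non-commuting) questions can produce a conjunction-type error; the two-dimensional state and the angles below are purely illustrative:

    import numpy as np

    def projector(angle):
        """Rank-1 projector onto the unit vector at `angle` radians in the plane."""
        v = np.array([np.cos(angle), np.sin(angle)])
        return np.outer(v, v)

    psi = np.array([1.0, 0.0])          # illustrative belief state
    P_L = projector(np.deg2rad(45.0))   # question about the likely event L
    P_U = projector(np.deg2rad(85.0))   # question about the unlikely event U

    p_U = np.linalg.norm(P_U @ psi) ** 2                 # judge U on its own
    p_L_then_U = np.linalg.norm(P_U @ P_L @ psi) ** 2    # judge L first, then U ("L and U")

    print(f"P(U)        = {p_U:.3f}")          # ~0.008
    print(f"P(L then U) = {p_L_then_U:.3f}")   # ~0.293 > P(U): a conjunction-type error

Because the two projectors do not commute, answering the likely question first rotates the state toward the unlikely event's subspace, so the sequential judgment can exceed the direct one; the same non-commutativity is what produces order effects.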

The liar paradox - The contextual influence of a human subject on the truth behavior of a cognitive entity is explicitly exhibited by the so-called liar paradox, that is, the truth value of a sentence like "this sentence is false". One can show that the true-false state of this paradox is represented in a complex Hilbert space, while the typical oscillations between true and false are dynamically described by the Schrödinger equation.
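As an illustration of that dynamical picture, the sketch below lets a "truth value" oscillate under Schrödinger dynamics; the Hamiltonian is a generic two-level coupling, not the specific operator used in the cited work:

    import numpy as np

    # Basis: |true> = [1, 0], |false> = [0, 1].
    omega = 1.0
    H = omega * np.array([[0.0, 1.0], [1.0, 0.0]])  # generic coupling between true and false

    def evolve(psi0, t):
        """Solve i d|psi>/dt = H |psi> (with hbar = 1) by diagonalizing H."""
        evals, evecs = np.linalg.eigh(H)
        U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
        return U @ psi0

    psi0 = np.array([1.0, 0.0], dtype=complex)         # start in the |true> state
    for t in np.linspace(0.0, np.pi, 5):
        p_true = abs(evolve(psi0, t)[0]) ** 2
        print(f"t = {t:.2f}  P(true) = {p_true:.2f}")  # oscillates 1 -> 0 -> 1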

Knowledge representation

Concepts are basic cognitive phenomena, which provide the content for inference, explanation, and language understanding. Cognitive psychology has researched different approaches for understanding concepts, including exemplars, prototypes, and neural networks, and different fundamental problems have been identified, such as the experimentally tested non-classical behavior of the conjunction and disjunction of concepts, more specifically the Pet-Fish problem or guppy effect, and the overextension and underextension of typicality and membership weights for conjunction and disjunction. By and large, quantum cognition has drawn on quantum theory in three ways to model concepts.

  1. Exploit the contextuality of quantum theory to account for the contextuality of concepts in cognition and language and the phenomenon of emergent properties when concepts combine
  2. Use quantum entanglement to model the semantics of concept combinations in a non-decompositional way, and to account for the emergent properties/associates/inferences in relation to concept combinations
  3. Use quantum superposition to account for the emergence of a new concept when concepts are combined, and as a consequence put forward an explanatory model for the Pet-Fish problem situation, and the overextension and underextension of membership weights for the conjunction and disjunction of concepts.

The large amount of data collected by Hampton on the combination of two concepts can be modeled in a specific quantum-theoretic framework in Fock space, where the observed deviations from classical (fuzzy) set theory, the above-mentioned over- and underextension of membership weights, are explained in terms of contextual interactions, superposition, interference, entanglement and emergence. Moreover, a cognitive test on a specific concept combination has been performed which directly reveals, through the violation of Bell's inequalities, quantum entanglement between the component concepts.
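A toy illustration of overextension; the additive interference formula and the membership weights below are a simplified caricature of how such models work, not the actual two-sector Fock-space construction:

    import numpy as np

    def fuzzy_bounds(mu_a, mu_b):
        """Classical (fuzzy-set) bounds for the membership weight of 'A and B'."""
        return max(0.0, mu_a + mu_b - 1.0), min(mu_a, mu_b)

    def toy_quantum_conjunction(mu_a, mu_b, phase):
        """Average of the two memberships plus an interference term (toy formula)."""
        return (mu_a + mu_b) / 2 + np.sqrt(mu_a * mu_b) * np.cos(phase)

    mu_pet, mu_fish = 0.5, 0.1   # hypothetical membership weights of "guppy" in Pet and in Fish
    lo, hi = fuzzy_bounds(mu_pet, mu_fish)
    mu_pet_fish = toy_quantum_conjunction(mu_pet, mu_fish, phase=0.3)

    print(f"classical range for 'Pet and Fish': [{lo:.2f}, {hi:.2f}]")  # [0.00, 0.10]
    print(f"with interference term:             {mu_pet_fish:.2f}")     # ~0.51: overextension

A classical (fuzzy-set) conjunction can never exceed the smaller of the two membership weights; the interference term is what allows the combined concept to overextend, as in the guppy effect.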

Human memory

The hypothesis that there may be something quantum-like about human mental function was put forward with the quantum entanglement formula, which attempted to model the effect that when a word's associative network is activated during study in a memory experiment, it behaves like a quantum-entangled system. Models of cognitive agents and memory based on quantum collectives have been proposed by Subhash Kak. He also points, however, to specific limits on the observation and control of these memories due to fundamental logical reasons.

Semantic analysis and information retrieval

The research on concepts and their combinations described above had a deep impact on the understanding and initial development of a formalism for obtaining semantic information when dealing with concepts, their combinations and variable contexts in a corpus of unstructured documents. This conundrum of natural language processing (NLP) and information retrieval (IR) on the web – and in databases in general – can be addressed using the mathematical formalism of quantum theory. As basic steps, (a) K. Van Rijsbergen introduced a quantum structure approach to IR, (b) Widdows and Peters utilised a quantum logical negation for a concrete search system, and (c) Aerts and Czachor identified quantum structure in semantic space theories such as latent semantic analysis. Since then, the employment of techniques and procedures induced from the mathematical formalisms of quantum theory – Hilbert space, quantum logic and probability, non-commutative algebras, etc. – in fields such as IR and NLP has produced significant results.
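As an example of the flavor of these techniques, a quantum-logic-style negation in a vector model of IR can be sketched as projection onto the orthogonal complement of the unwanted term; the word vectors and the "suit" example below are made up for illustration:

    import numpy as np

    def negate(query, unwanted):
        """'query NOT unwanted': remove the component of query along unwanted
        (projection onto the orthogonal complement, i.e. quantum-logic negation)."""
        u = unwanted / np.linalg.norm(unwanted)
        return query - (query @ u) * u

    # Hypothetical 4-dimensional "semantic space" vectors for two senses of "suit".
    suit_clothing = np.array([0.9, 0.1, 0.0, 0.2])
    suit_lawsuit  = np.array([0.1, 0.9, 0.3, 0.0])
    query = suit_clothing + suit_lawsuit            # ambiguous query for "suit"

    disambiguated = negate(query, suit_lawsuit)     # "suit NOT lawsuit"
    for name, sense in [("clothing", suit_clothing), ("lawsuit", suit_lawsuit)]:
        cos = sense @ disambiguated / (np.linalg.norm(sense) * np.linalg.norm(disambiguated))
        print(f"similarity of the negated query to the {name} sense: {cos:.2f}")

Projecting out the unwanted sense suppresses the legal meaning while leaving the clothing meaning largely intact, which is the behavior such quantum-logical negation is meant to provide in search.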

Human perception

Bi-stable perceptual phenomena are a fascinating topic in the study of perception. If a stimulus has an ambiguous interpretation, such as a Necker cube, the interpretation tends to oscillate over time. Quantum models have been developed to predict the time period between oscillations and how these periods change with the frequency of measurement. Quantum theory and an appropriate model have also been developed by Elio Conte to account for interference effects obtained with measurements of ambiguous figures.
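One way such models can tie switching periods to how often the percept is probed is a quantum-Zeno-style calculation; the sketch below uses a generic two-state system with illustrative parameters rather than any published model:

    import numpy as np

    # The two readings of the Necker cube are treated as the two states of a
    # simple quantum system.  Between "observations" the state rotates at rate g;
    # each observation projects it back onto one of the two interpretations.
    g = 1.0  # illustrative rotation (switching) rate

    def mean_dwell_time(tau):
        """Expected time spent in the current percept when it is probed every tau time units."""
        p_stay = np.cos(g * tau) ** 2       # survival probability per probe
        return tau / (1.0 - p_stay)         # geometric waiting time times the probe interval

    for tau in (0.1, 0.3, 1.0):
        print(f"probe interval {tau:.1f} -> mean dwell time {mean_dwell_time(tau):.2f}")

    # Probing more often (smaller tau) lengthens the dwell time, so the period
    # between perceptual flips depends on how frequently the percept is "measured".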

Gestalt perception

There are apparent similarities between Gestalt perception and quantum theory. In an article discussing the application of Gestalt to chemistry, Anton Amann writes: "Quantum mechanics does not explain Gestalt perception, of course, but in quantum mechanics and Gestalt psychology there exist almost isomorphic conceptions and problems:

  • Similarly as with the Gestalt concept, the shape of a quantum object does not a priori exist but it depends on the interaction of this quantum object with the environment (for example: an observer or a measurement apparatus).
  • Quantum mechanics and Gestalt perception are organized in a holistic way. Subentities do not necessarily exist in a distinct, individual sense.
  • In quantum mechanics and Gestalt perception objects have to be created by elimination of holistic correlations with the 'rest of the world'."

Each of the points above can be restated in a simplified manner (the explanations below correspond respectively to the points above):

  • Just as an object in quantum physics does not have any shape until it interacts with its environment, objects from the Gestalt perspective do not hold much meaning individually, as they do when there is a "group" of them or when they are present in an environment.
  • Both in quantum mechanics and in Gestalt perception, objects must be studied as a whole rather than by finding the properties of individual components and interpolating to the whole object.
  • In the Gestalt concept, creating a new object from another previously existing object means that the previously existing object becomes a subentity of the new object, and hence "elimination of holistic correlations" occurs. Similarly, a new quantum object made from a previously existing object means that the previously existing object loses its holistic view.

Amann comments: "The structural similarities between Gestalt perception and quantum mechanics are on a level of a parable, but even parables can teach us something, for example, that quantum mechanics is more than just production of numerical results or that the Gestalt concept is more than just a silly idea, incompatible with atomistic conceptions."

Quantum-like models of cognition in economics and finance

The assumption that information processing by the agents of a market follows the laws of quantum information theory and quantum probability has been actively explored by many authors, e.g., E. Haven, O. Choustova, and A. Khrennikov; see the book of E. Haven and A. Khrennikov for a detailed bibliography. We can mention, e.g., the Bohmian model of the dynamics of share prices, in which the quantum(-like) potential is generated by the expectations of the agents of the financial market and hence has a mental nature. This approach can be used to model real financial data; see the book of E. Haven and A. Khrennikov (2012).

Application of theory of open quantum systems to decision making and "cell's cognition"

An isolated quantum system is an idealized theoretical entity. In reality interactions with environment have to be taken into account. This is the subject of theory of open quantum systems. Cognition is also fundamentally contextual. The brain is a kind of (self-)observer which makes context dependent decisions. Mental environment plays a crucial role in information processing. Therefore, it is natural to apply theory of open quantum systems to describe the process of decision making as the result of quantum-like dynamics of the mental state of a system interacting with an environment. The description of the process of decision making is mathematically equivalent to the description of the process of decoherence. This idea was explored in a series of works of the multidisciplinary group of researchers at Tokyo University of Science.
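A minimal sketch of this picture, assuming a generic Lindblad-type qubit with a made-up Hamiltonian and coupling rate (not the specific model of the Tokyo group): the off-diagonal "indecision" terms of the density matrix decay under the environmental coupling, leaving a classical probability distribution over the two alternatives.

    import numpy as np

    # Basis: |0> = choose option A, |1> = choose option B.
    H = 0.5 * np.array([[1.0, 0.8], [0.8, -1.0]])   # hypothetical "mental" Hamiltonian
    L = np.array([[1.0, 0.0], [0.0, -1.0]])         # dephasing operator modeling the mental environment
    gamma = 0.3                                     # illustrative system-environment coupling rate

    def lindblad_step(rho, dt):
        """One Euler step of d(rho)/dt = -i[H, rho] + gamma (L rho L* - {L*L, rho}/2)."""
        comm = -1j * (H @ rho - rho @ H)
        diss = gamma * (L @ rho @ L.conj().T
                        - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
        return rho + dt * (comm + diss)

    rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # superposed "indecision" state
    for _ in range(4000):
        rho = lindblad_step(rho, dt=0.01)

    print("P(choose A) =", round(rho[0, 0].real, 3))
    print("P(choose B) =", round(rho[1, 1].real, 3))
    print("coherence   =", round(abs(rho[0, 1]), 6))  # decays toward 0: decoherence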

Since in the quantum-like approach the formalism of quantum mechanics is considered as a purely operational formalism, it can be applied to the description of information processing by any biological system, i.e., not only by human beings.

Operationally it is very convenient to consider, e.g., a cell as a kind of decision maker processing information in the quantum information framework. This idea was explored in a series of papers by the Swedish-Japanese research group using the methods of the theory of open quantum systems: gene expression was modeled as decision making in the process of interaction with the environment.

History

Here is a short history of applying the formalisms of quantum theory to topics in psychology. Ideas for applying quantum formalisms to cognition first appeared in the 1990s in the work of Diederik Aerts and his collaborators Jan Broekaert, Sonja Smets and Liane Gabora, as well as Harald Atmanspacher, Robert Bordley, and Andrei Khrennikov. A special issue on Quantum Cognition and Decision appeared in the Journal of Mathematical Psychology (2009, vol. 53), which planted a flag for the field. A few books related to quantum cognition have been published, including those by Khrennikov (2004, 2010), Ivancevic and Ivancevic (2010), Busemeyer and Bruza (2012), and E. Conte (2012). The first Quantum Interaction workshop was held at Stanford in 2007, organized by Peter Bruza, William Lawless, C. J. van Rijsbergen, and Don Sofge as part of the 2007 AAAI Spring Symposium Series. This was followed by workshops at Oxford in 2008, Saarbrücken in 2009, at the 2010 AAAI Fall Symposium Series held in Washington, D.C., in Aberdeen in 2011, in Paris in 2012, and in Leicester in 2013. Tutorials were also presented annually from 2007 until 2013 at the annual meeting of the Cognitive Science Society. A special issue on quantum models of cognition appeared in 2013 in the journal Topics in Cognitive Science.

Related theories

It was suggested by theoretical physicists David Bohm and Basil Hiley that mind and matter both emerge from an "implicate order". Bohm and Hiley's approach to mind and matter is supported by philosopher Paavo Pylkkänen. Pylkkänen underlines "unpredictable, uncontrollable, indivisible and non-logical" features of conscious thought and draws parallels to a philosophical movement some call "post-phenomenology", in particular to Pauli Pylkkö's notion of the "aconceptual experience", an unstructured, unarticulated and pre-logical experience.

The mathematical techniques of both Conte's group and Hiley's group involve the use of Clifford algebras. These algebras account for "non-commutativity" of thought processes (for an example, see: noncommutative operations in everyday life).

However, an area that still needs to be investigated is the concept of lateralised brain functioning. Some studies in marketing have related lateral influences on cognition and emotion in the processing of attachment-related stimuli.

Holonomic brain theory

From Wikipedia, the free encyclopedia

Holonomic brain theory is a branch of neuroscience investigating the idea that human consciousness is formed by quantum effects in or between brain cells. This is opposed by traditional neuroscience, which investigates the brain's behavior by looking at patterns of neurons and the surrounding chemistry, and which assumes that any quantum effects will not be significant at this scale. The entire field of quantum consciousness is often criticized as pseudoscience, as detailed in the main article on that topic.

This specific theory of quantum consciousness was developed by neuroscientist Karl Pribram, initially in collaboration with physicist David Bohm, building on the initial theories of holograms originally formulated by Dennis Gabor. It describes human cognition by modeling the brain as a holographic storage network. Pribram suggests these processes involve electric oscillations in the brain's fine-fibered dendritic webs, which are different from the more commonly known action potentials involving axons and synapses. These oscillations are waves and create wave interference patterns in which memory is encoded naturally, and the wave function may be analyzed by a Fourier transform. Gabor, Pribram and others noted the similarities between these brain processes and the storage of information in a hologram, which can also be analyzed with a Fourier transform. In a hologram, any part of the hologram of sufficient size contains the whole of the stored information. In this theory, a piece of long-term memory is similarly distributed over a dendritic arbor so that each part of the dendritic network contains all the information stored over the entire network. The model allows for important aspects of human consciousness, including the fast associative memory that allows for connections between different pieces of stored information and the non-locality of memory storage (a specific memory is not stored in a specific location, i.e. in a certain cluster of neurons).

Origins and development

In 1946 Dennis Gabor invented the hologram mathematically, describing a system in which an image can be reconstructed from information that is stored throughout the hologram. He demonstrated that the information pattern of a three-dimensional object can be encoded in a beam of light, which is more or less two-dimensional. Gabor also developed a mathematical model for a holographic associative memory. One of Gabor's colleagues, Pieter Jacobus Van Heerden, developed a related holographic mathematical memory model in 1963. This model contained the key aspect of non-locality, which became important years later when, in 1967, experiments by both Braitenberg and Kirschfield showed that memory is not exactly localized in the brain.

Karl Pribram had worked with psychologist Karl Lashley on Lashley's engram experiments, which used lesions to determine the exact location of specific memories in primate brains.[1] Lashley made small lesions in the brains and found that these had little effect on memory. On the other hand, Pribram removed large areas of cortex, leading to multiple serious deficits in memory and cognitive function. Memories were not stored in a single neuron or exact location, but were spread over the entirety of a neural network. Lashley suggested that brain interference patterns could play a role in perception, but was unsure how such patterns might be generated in the brain or how they would lead to brain function.

Several years later an article by neurophysiologist John Eccles described how a wave could be generated at the branching ends of pre-synaptic axons. Several such waves could create interference patterns. Soon after, Emmett Leith was successful in storing visual images through the interference patterns of laser beams, inspired by Gabor's previous use of Fourier transformations to store information within a hologram. After studying the work of Eccles and that of Leith, Pribram put forward the hypothesis that memory might take the form of interference patterns that resemble laser-produced holograms. Physicist David Bohm presented his ideas of holomovement and implicate and explicate order. Pribram became aware of Bohm's work in 1975 and realized that, since a hologram could store information within patterns of interference and then recreate that information when activated, it could serve as a strong metaphor for brain function. Pribram was further encouraged in this line of speculation by the fact that neurophysiologists Russell and Karen DeValois together established that "the spatial frequency encoding displayed by cells of the visual cortex was best described as a Fourier transform of the input pattern."

Theory overview

The hologram and holonomy

Diagram of one possible hologram setup.

A main characteristic of a hologram is that every part of the stored information is distributed over the entire hologram. Both processes of storage and retrieval are carried out in a way described by Fourier transformation equations. As long as a part of the hologram is large enough to contain the interference pattern, that part can recreate the entirety of the stored image, but the image may have unwanted changes, called noise.

An analogy to this is the broadcasting region of a radio antenna. In each smaller individual location within the entire area it is possible to access every channel, similar to how the entirety of the information of a hologram is contained within a part. Another analogy of a hologram is the way sunlight illuminates objects in the visual field of an observer. It doesn't matter how narrow the beam of sunlight is. The beam always contains all the information of the object, and when conjugated by a lens of a camera or the eyeball, produces the same full three-dimensional image. The Fourier transform formula converts spatial forms to spatial wave frequencies and vice versa, as all objects are in essence vibratory structures. Different types of lenses, acting similarly to optic lenses, can alter the frequency nature of information that is transferred.
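A rough numerical analogue of "a sufficiently large piece recreates the whole, with noise", treating the two-dimensional Fourier transform of a test image as a stand-in for the hologram (an analogy only, not a model of the brain):

    import numpy as np

    # A simple 64x64 test "image": a bright square on a dark background.
    img = np.zeros((64, 64))
    img[20:44, 20:44] = 1.0

    # "Record the hologram": store the image in the frequency domain.
    hologram = np.fft.fftshift(np.fft.fft2(img))

    # Keep only a small central piece of the hologram; the rest is "damaged".
    piece = np.zeros_like(hologram)
    c = hologram.shape[0] // 2
    piece[c - 8:c + 8, c - 8:c + 8] = hologram[c - 8:c + 8, c - 8:c + 8]

    # Reconstruct: the whole square reappears where it should, only blurred
    # (noisy), rather than a hole appearing where the "damage" was.
    recon = np.fft.ifft2(np.fft.ifftshift(piece)).real
    print("mean inside the square, original       :", round(float(img[20:44, 20:44].mean()), 2))
    print("mean inside the square, reconstruction :", round(float(recon[20:44, 20:44].mean()), 2))
    print("mean outside the square, reconstruction:", round(float(recon[:16, :16].mean()), 2))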

This non-locality of information storage within the hologram is crucial, because even if most parts are damaged, the entirety will be contained within even a single remaining part of sufficient size. Pribram and others noted the similarities between an optical hologram and memory storage in the human brain. According to the holonomic brain theory, memories are stored within certain general regions, but stored non-locally within those regions. This allows the brain to maintain function and memory even when it is damaged. It is only when there exist no parts big enough to contain the whole that the memory is lost.

 This can also explain why some children retain normal intelligence when large portions of their brain—in some cases, half—are removed. It can also explain why memory is not lost when the brain is sliced in different cross-sections.

A single hologram can store 3D information in a 2D way. Such properties may explain some of the brain's abilities, including the ability to recognize objects at different angles and sizes than in the original stored memory.

Pribram proposed that neural holograms were formed by the diffraction patterns of oscillating electric waves within the cortex. It is important to note the difference between the idea of a holonomic brain and a holographic one. Pribram does not suggest that the brain functions as a single hologram. Rather, the waves within smaller neural networks create localized holograms within the larger workings of the brain. This patch holography is called holonomy or windowed Fourier transformations.

A holographic model can also account for other features of memory that more traditional models cannot. The Hopfield memory model reaches a memory saturation point relatively early, after which retrieval drastically slows and becomes unreliable. Holographic memory models, on the other hand, have much larger theoretical storage capacities. Holographic models can also demonstrate associative memory, store complex connections between different concepts, and resemble forgetting through "lossy storage."
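A compact sketch of an associative memory in the spirit of convolution-correlation ("holographic") models such as holographic reduced representations; the items and vector size are arbitrary, and retrieval is deliberately noisy, illustrating the "lossy storage" mentioned above:

    import numpy as np

    rng = np.random.default_rng(0)
    D = 1024                                   # dimensionality of the distributed code

    def vec():
        return rng.normal(0.0, 1.0 / np.sqrt(D), D)

    def bind(a, b):
        """Circular convolution: associate a key with a value in one trace."""
        return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

    def probe(memory, key):
        """Circular correlation: retrieve a noisy version of whatever was bound to `key`."""
        return np.real(np.fft.ifft(np.conj(np.fft.fft(key)) * np.fft.fft(memory)))

    items = {name: vec() for name in ["dog", "cat", "bone", "milk"]}
    # Superimpose two associations in a single distributed trace (the "hologram").
    memory = bind(items["dog"], items["bone"]) + bind(items["cat"], items["milk"])

    noisy = probe(memory, items["dog"])        # what does "dog" recall?
    scores = {name: float(noisy @ v) for name, v in items.items()}
    print(max(scores, key=scores.get), scores)  # "bone" should score highest, amid noise

Every association is spread across the whole trace, so retrieval still works (noisily) from the superposed memory, which is the associative, distributed behavior the holographic picture emphasizes.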

The synaptodendritic web

A Few of the Various Types of Synapses

In classic brain theory, the summation of electrical inputs to the dendrites and soma (cell body) of a neuron either inhibits the neuron or excites it, setting off an action potential down the axon to where it synapses with the next neuron. However, this picture fails to account for varieties of synapses beyond the traditional axodendritic type (axon to dendrite). There is evidence for the existence of other kinds of synapses, including serial synapses and those between dendrites and soma and between different dendrites. Many synaptic locations are functionally bipolar, meaning they can both send and receive impulses, distributing input and output over the entire group of dendrites.

Processes in this dendritic arbor, the network of teledendrons and dendrites, occur due to the oscillations of polarizations in the membrane of the fine-fibered dendrites, not due to the propagated nerve impulses associated with action potentials. Pribram posits that the length of the delay of an input signal in the dendritic arbor before it travels down the axon is related to mental awareness. The shorter the delay the more unconscious the action, while a longer delay indicates a longer period of awareness. A study by David Alkon showed that after unconscious Pavlovian conditioning there was a proportionally greater reduction in the volume of the dendritic arbor, akin to synaptic elimination when experience increases the automaticity of an action. Pribram and others theorize that, while unconscious behavior is mediated by impulses through nerve circuits, conscious behavior arises from microprocesses in the dendritic arbor.

At the same time, the dendritic network is extremely complex, able to receive 100,000 to 200,000 inputs in a single tree, due to the large amount of branching and the many dendritic spines protruding from the branches. Furthermore, synaptic hyperpolarization and depolarization remains somewhat isolated due to the resistance from the narrow dendritic spine stalk, allowing a polarization to spread without much interruption to the other spines. This spread is further aided intracellularly by the microtubules and extracellularly by glial cells. These polarizations act as waves in the synaptodendritic network, and the existence of multiple waves at once gives rise to interference patterns.

Deep and surface structure of memory

Pribram suggests that there are two layers of cortical processing: a surface structure of separated and localized neural circuits and a deep structure of the dendritic arborization that binds the surface structure together. The deep structure contains distributed memory, while the surface structure acts as the retrieval mechanism. Binding occurs through the temporal synchronization of the oscillating polarizations in the synaptodendritic web. It had been thought that binding only occurred when there was no phase lead or lag present, but a study by Saul and Humphrey found that cells in the lateral geniculate nucleus do in fact produce these. Here phase lead and lag act to enhance sensory discrimination, acting as a frame to capture important features. These filters are also similar to the lenses necessary for holographic functioning.

Recent studies

While Pribram originally developed the holonomic brain theory as an analogy for certain brain processes, several papers (including some more recent ones by Pribram himself) have proposed that the similarity between hologram and certain brain functions is more than just metaphorical, but actually structural. Others still maintain that the relationship is only analogical. Several studies have shown that the same series of operations used in holographic memory models are performed in certain processes concerning temporal memory and optomotor responses. This indicates at least the possibility of the existence of neurological structures with certain holonomic properties. Other studies have demonstrated the possibility that biophoton emission (biological electrical signals that are converted to weak electromagnetic waves in the visible range) may be a necessary condition for the electric activity in the brain to store holographic images. These may play a role in cell communication and certain brain processes including sleep, but further studies are needed to strengthen current ones. Other studies have shown the correlation between more advanced cognitive function and homeothermy. Taking holographic brain models into account, this temperature regulation would reduce distortion of the signal waves, an important condition for holographic systems.

Criticism and alternative models

Pribram's holonomic model of brain function did not receive widespread attention at the time, but other quantum models have been developed since, including quantum brain dynamics by Jibu and Yasue and Vitiello's dissipative quantum brain dynamics. Though not directly related to the holonomic model, they continue to move beyond approaches based solely on classical brain theory.

Correlograph

In 1969 the scientists D. Willshaw, O. P. Buneman and H. Longuet-Higgins proposed an alternative, non-holographic model that fulfilled many of the same requirements as Gabor's original holographic model. The Gabor model did not explain how the brain could use Fourier analysis on incoming signals or how it would deal with the low signal-to-noise ratio in reconstructed memories. Longuet-Higgins's correlograph model built on the idea that any system can perform the same functions as a Fourier holograph if it can correlate pairs of patterns. It uses minute pinholes that do not produce diffraction patterns to create a reconstruction similar to that of Fourier holography. Like a hologram, a discrete correlograph can recognize displaced patterns and store information in a parallel and non-local way, so it usually will not be destroyed by localized damage. The authors then expanded the model beyond the correlograph to an associative net in which the points become parallel lines arranged in a grid: horizontal lines represent the axons of input neurons, vertical lines represent output neurons, and each intersection represents a modifiable synapse. Though this net cannot recognize displaced patterns, it has a greater potential storage capacity. It was not necessarily meant to show how the brain is organized, but instead to show the possibility of improving on Gabor's original model. P. Van Heerden countered this model by demonstrating mathematically that the signal-to-noise ratio of a hologram could reach 50% of ideal. He also used a model with a 2D neural hologram network for fast searching imposed upon a 3D network for large storage capacity. A key quality of this model was its flexibility to change the orientation and fix distortions of stored information, which is important for our ability to recognize an object as the same entity from different angles and positions, something the correlograph and associative net models lack.
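A small sketch of the associative net described above: horizontal input lines, vertical output lines, and binary modifiable "synapses" at the intersections; the pattern sizes and threshold rule are illustrative.

    import numpy as np

    n_in, n_out = 16, 16
    rng = np.random.default_rng(1)

    def pattern(n, k):
        """A sparse binary pattern with k of n units active."""
        p = np.zeros(n, dtype=bool)
        p[rng.choice(n, size=k, replace=False)] = True
        return p

    def store(W, x, y):
        """Hebbian (OR) learning: switch on every synapse whose input and output lines are both active."""
        W |= np.outer(x, y)

    def recall(W, x):
        """Sum the activity reaching each output line and threshold by the number of active inputs."""
        return (x.astype(int) @ W.astype(int)) >= x.sum()

    W = np.zeros((n_in, n_out), dtype=bool)      # the grid of modifiable synapses
    x1, y1 = pattern(n_in, 4), pattern(n_out, 4)
    x2, y2 = pattern(n_in, 4), pattern(n_out, 4)
    store(W, x1, y1)
    store(W, x2, y2)
    print("recall(x1) reproduces y1 exactly?", np.array_equal(recall(W, x1), y1))

Because several associations are superimposed in the same grid of switches, the stored information is distributed and parallel, while the thresholded read-out keeps recall reliable until the grid starts to saturate.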

Applications

Holographic models of memory and consciousness may be related to several brain disorders involving a disunity of sensory input within a unified consciousness, including Charles Bonnet syndrome, disjunctive agnosia, and schizophrenia. Charles Bonnet syndrome patients experience two vastly different worlds within one consciousness: they see the world that psychologically normal people perceive, but also a simplified world riddled with pseudohallucinations, and they can differentiate these two worlds easily. Since dynamic core and global workspace theories insist that a distinct area of the brain is responsible for consciousness, the only way a patient could perceive two worlds would be if this dynamic core and global workspace were split; but that does not explain how different contents can be perceived within one single consciousness, since these theories assume that each dynamic core or global workspace creates a single coherent reality. The primary symptom of disjunctive agnosia is an inconsistency of sensory information within a unified consciousness: patients may see one thing but hear something entirely incompatible with that image. Schizophrenics often report experiencing thoughts that do not seem to originate from themselves, as if the ideas were inserted exogenously; the individual feels no control over certain thoughts existing within their consciousness.

Bayesian inference

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Bayesian_inference

Bayesian inference ( / ...