
Wednesday, July 19, 2023

Methods of detecting exoplanets

 From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Methods_of_detecting_exoplanets

Number of extrasolar planet discoveries per year through 2022, with colors indicating method of detection.

Any planet is an extremely faint light source compared to its parent star. For example, a star like the Sun is about a billion times as bright as the reflected light from any of the planets orbiting it. In addition to the intrinsic difficulty of detecting such a faint light source, the light from the parent star causes a glare that washes it out. For those reasons, very few of the exoplanets reported as of April 2014 have been observed directly, with even fewer being resolved from their host star.

Instead, astronomers have generally had to resort to indirect methods to detect extrasolar planets. As of 2016, several different indirect methods have yielded success.

Established detection methods

The following methods have at least once proved successful for discovering a new planet or detecting an already discovered planet:

Radial velocity

Radial velocity graph of 18 Delphini b.

A star with a planet will move in its own small orbit in response to the planet's gravity. This leads to variations in the speed with which the star moves toward or away from Earth, i.e. the variations are in the radial velocity of the star with respect to Earth. The radial velocity can be deduced from the displacement in the parent star's spectral lines due to the Doppler effect. The radial-velocity method measures these variations in order to confirm the presence of the planet using the binary mass function.

The speed of the star around the system's center of mass is much smaller than that of the planet, because the radius of its orbit around the center of mass is so small. (For example, the Sun moves by about 13 m/s due to Jupiter, but only about 9 cm/s due to Earth). However, velocity variations down to 3 m/s or even somewhat less can be detected with modern spectrometers, such as the HARPS (High Accuracy Radial Velocity Planet Searcher) spectrometer at the ESO 3.6 meter telescope in La Silla Observatory, Chile, the HIRES spectrometer at the Keck telescopes or EXPRES at the Lowell Discovery Telescope. An especially simple and inexpensive method for measuring radial velocity is "externally dispersed interferometry".
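
As a rough check on the figures above, the radial-velocity semi-amplitude follows directly from Keplerian motion: K = (2πG/P)^(1/3) · m_p sin i / (M_* + m_p)^(2/3) · 1/√(1 − e²). The following is a minimal Python sketch, using approximate constants and purely illustrative planet analogues, that reproduces the numbers quoted:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
M_JUP = 1.898e27       # kg
M_EARTH = 5.972e24     # kg
YEAR = 3.156e7         # seconds

def rv_semi_amplitude(m_planet, period, m_star=M_SUN, incl=math.pi / 2, ecc=0.0):
    """Radial-velocity semi-amplitude K (m/s) induced on the star."""
    return ((2 * math.pi * G / period) ** (1 / 3)
            * m_planet * math.sin(incl)
            / (m_star + m_planet) ** (2 / 3)
            / math.sqrt(1 - ecc ** 2))

# Jupiter analogue (11.86 yr orbit) and Earth analogue (1 yr orbit) around a Sun-like star:
print(rv_semi_amplitude(M_JUP, 11.86 * YEAR))   # ~12.5 m/s (the "about 13 m/s" above)
print(rv_semi_amplitude(M_EARTH, 1.0 * YEAR))   # ~0.09 m/s (the "about 9 cm/s" above)
```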

Until around 2012, the radial-velocity method (also known as Doppler spectroscopy) was by far the most productive technique used by planet hunters. (After 2012, the transit method from the Kepler spacecraft overtook it in number.) The radial velocity signal is distance independent, but requires high signal-to-noise ratio spectra to achieve high precision, and so is generally used only for relatively nearby stars, out to about 160 light-years from Earth, to find lower-mass planets. Nor is it possible to observe many target stars simultaneously with a single telescope. Planets of Jovian mass can be detected around stars up to a few thousand light years away. This method easily finds massive planets that are close to stars. Modern spectrographs can also easily detect Jupiter-mass planets orbiting 10 astronomical units away from the parent star, but detection of those planets requires many years of observation. Earth-mass planets are currently detectable only in very small orbits around low-mass stars, e.g. Proxima b.

It is easier to detect planets around low-mass stars, for two reasons: First, these stars are more affected by the gravitational tug of their planets. The second reason is that low-mass main-sequence stars generally rotate relatively slowly. Fast rotation makes spectral-line data less clear because half of the star quickly rotates away from the observer's viewpoint while the other half approaches. Detecting planets around more massive stars is easier if the star has left the main sequence, because leaving the main sequence slows down the star's rotation.

Sometimes Doppler spectroscopy produces false signals, especially in multi-planet and multi-star systems. Magnetic fields and certain types of stellar activity can also give false signals. When the host star has multiple planets, false signals can also arise from having insufficient data, so that multiple solutions can fit the data, as stars are not generally observed continuously. Some of the false signals can be eliminated by analyzing the stability of the planetary system, conducting photometric analysis of the host star, and knowing its rotation period and stellar activity cycle periods.

Planets with orbits highly inclined to the line of sight from Earth produce smaller visible wobbles, and are thus more difficult to detect. One of the advantages of the radial velocity method is that eccentricity of the planet's orbit can be measured directly. One of the main disadvantages of the radial-velocity method is that it can only estimate a planet's minimum mass (M sin i). The posterior distribution of the inclination angle i depends on the true mass distribution of the planets. However, when there are multiple planets in the system that orbit relatively close to each other and have sufficient mass, orbital stability analysis allows one to constrain the maximum mass of these planets. The radial-velocity method can be used to confirm findings made by the transit method. When both methods are used in combination, then the planet's true mass can be estimated.

Although radial velocity of the star only gives a planet's minimum mass, if the planet's spectral lines can be distinguished from the star's spectral lines then the radial velocity of the planet itself can be found, and this gives the inclination of the planet's orbit. This enables measurement of the planet's actual mass. This also rules out false positives, and also provides data about the composition of the planet. The main issue is that such detection is possible only if the planet orbits around a relatively bright star and if the planet reflects or emits a lot of light.

Transit photometry

Technique, advantages, and disadvantages

Transit method of detecting extrasolar planets. The graph below the picture demonstrates the light levels received over time by Earth.
Kepler-6b photometry.
A simulated silhouette of Jupiter (and 2 of its moons) transiting the Sun, as seen from another star system.
Theoretical transiting exoplanet light curve. This image shows the transit depth (δ), transit duration (T), and ingress/egress duration (τ) of a transiting exoplanet relative to the exoplanet's position with respect to the star.

While the radial velocity method provides information about a planet's mass, the photometric method can determine the planet's radius. If a planet crosses (transits) in front of its parent star's disk, then the observed visual brightness of the star drops by a small amount, depending on the relative sizes of the star and the planet. For example, in the case of HD 209458, the star dims by 1.7%. However, most transit signals are considerably smaller; for example, an Earth-size planet transiting a Sun-like star produces a dimming of only 80 parts per million (0.008 percent).

A theoretical transiting exoplanet light curve model predicts the following characteristics of an observed planetary system: transit depth (δ), transit duration (T), ingress/egress duration (τ), and the period of the exoplanet (P). However, these observed quantities rest on several simplifying assumptions: for convenience in the calculations, the planet and star are taken to be spherical, the stellar disk uniform, and the orbit circular. The observed physical parameters of the light curve also change depending on where the planet crosses the stellar disk.

The transit depth (δ) of a transiting light curve describes the decrease in the normalized flux of the star during a transit. It reflects the radius of the exoplanet relative to the radius of the star: for a star of solar radius, a planet with a larger radius increases the transit depth and a planet with a smaller radius decreases it. The transit duration (T) is the length of time the planet spends transiting the star; it depends on how fast or slow the planet is moving in its orbit as it crosses the stellar disk. The ingress/egress duration (τ) describes the time the planet takes to fully cover the star (ingress) and fully uncover it (egress). If a planet transits along the full diameter of the star, the ingress/egress duration is shorter, because the planet takes less time to fully cover the star; if it crosses along a chord away from the diameter, the ingress/egress duration lengthens, because the planet spends more of its transit only partially covering the star.

From these observable parameters, a number of different physical parameters (semi-major axis, star mass, star radius, planet radius, eccentricity, and inclination) can be determined through calculation. Combined with radial velocity measurements of the star, the mass of the planet can also be determined.
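
Under the same simplifying assumptions (spherical bodies, a uniform stellar disk and a circular orbit), the transit depth and an approximate central-transit duration can be written down directly. The following minimal sketch uses illustrative Solar-System values; the duration formula here ignores the planet's own radius and assumes a central crossing:

```python
import math

R_SUN = 6.957e8     # m
R_EARTH = 6.371e6   # m
R_JUP = 6.991e7     # m
AU = 1.496e11       # m
YEAR = 3.156e7      # s

def transit_depth(r_planet, r_star):
    """Fractional drop in stellar flux: delta = (Rp / R*)^2."""
    return (r_planet / r_star) ** 2

def central_transit_duration(period, a, r_star):
    """Approximate duration of a central transit on a circular orbit,
    ignoring the planet's own radius: T ~ (P / pi) * arcsin(R* / a)."""
    return period / math.pi * math.asin(r_star / a)

print(transit_depth(R_EARTH, R_SUN))                      # ~8.4e-5, i.e. ~80 ppm
print(transit_depth(R_JUP, R_SUN))                        # ~1.0e-2, i.e. ~1%
print(central_transit_duration(YEAR, AU, R_SUN) / 3600)   # ~13 hours for an Earth analogue
```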

This method has two major disadvantages. First, planetary transits are observable only when the planet's orbit happens to be perfectly aligned from the astronomers' vantage point. The probability of a planetary orbital plane being directly on the line-of-sight to a star is the ratio of the diameter of the star to the diameter of the orbit (in small stars, the radius of the planet is also an important factor). About 10% of planets with small orbits have such an alignment, and the fraction decreases for planets with larger orbits. For a planet orbiting a Sun-sized star at 1 AU, the probability of a random alignment producing a transit is 0.47%. Therefore, the method cannot guarantee that any particular star is not a host to planets. However, by scanning large areas of the sky containing thousands or even hundreds of thousands of stars at once, transit surveys can find more extrasolar planets than the radial-velocity method. Several surveys have taken that approach, such as the ground-based MEarth Project, SuperWASP, KELT, and HATNet, as well as the space-based COROT, Kepler and TESS missions. The transit method also has the advantage of detecting planets around stars that are located a few thousand light years away. The most distant planets detected by the Sagittarius Window Eclipsing Extrasolar Planet Search are located near the galactic center. However, reliable follow-up observations of these stars are nearly impossible with current technology.
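
The geometric argument behind the 0.47% figure is simple: for a circular orbit, the chance that a randomly oriented orbit is seen to transit is roughly (R* + Rp)/a, which reduces to R*/a for a small planet. A minimal sketch with illustrative values:

```python
R_SUN = 6.957e8   # m
AU = 1.496e11     # m

def transit_probability(r_star, a, r_planet=0.0):
    """Geometric probability that a circular orbit of semi-major axis a
    is seen to transit: roughly (R* + Rp) / a."""
    return (r_star + r_planet) / a

print(transit_probability(R_SUN, 1.0 * AU))    # ~0.0047, i.e. ~0.47% at 1 AU
print(transit_probability(R_SUN, 0.05 * AU))   # ~9% for a typical hot-Jupiter orbit
```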

The second disadvantage of this method is a high rate of false detections. A 2012 study found that the rate of false positives for transits observed by the Kepler mission could be as high as 40% in single-planet systems. For this reason, a star with a single transit detection requires additional confirmation, typically from the radial-velocity method or orbital brightness modulation method. The radial velocity method is especially necessary for Jupiter-sized or larger planets, as objects of that size encompass not only planets, but also brown dwarfs and even small stars. As the false positive rate is very low in stars with two or more planet candidates, such detections often can be validated without extensive follow-up observations. Some can also be confirmed through the transit timing variation method.

Many points of light in the sky have brightness variations that may appear as transiting planets by flux measurements. False positives in the transit photometry method arise in three common forms: blended eclipsing binary systems, grazing eclipsing binary systems, and transits by planet-sized stars. Eclipsing binary systems usually produce deep eclipses that distinguish them from exoplanet transits, since planets are usually smaller than about 2 RJ, but eclipses are shallower for blended or grazing eclipsing binary systems.

Blended eclipsing binary systems consist of a normal eclipsing binary blended with a third (usually brighter) star along the same line of sight, usually at a different distance. The constant light of the third star dilutes the measured eclipse depth, so the light-curve may resemble that for a transiting exoplanet. In these cases, the target most often contains a large main sequence primary with a small main sequence secondary or a giant star with a main sequence secondary.

Grazing eclipsing binary systems are systems in which one object will just barely graze the limb of the other. In these cases, the maximum transit depth of the light curve will not be proportional to the ratio of the squares of the radii of the two stars, but will instead depend solely on the small fraction of the primary that is blocked by the secondary. The small measured dip in flux can mimic that of an exoplanet transit. Some of the false positive cases of this category can be easily found if the eclipsing binary system has a circular orbit, with the two companions having different masses. Due to the cyclic nature of the orbit, there would be two eclipsing events, one of the primary occulting the secondary and vice versa. If the two stars have significantly different masses, and thus different radii and luminosities, then these two eclipses would have different depths. This repetition of a shallow and deep transit event can easily be detected and thus allow the system to be recognized as a grazing eclipsing binary system. However, if the two stellar companions are approximately the same mass, then these two eclipses would be indistinguishable, thus making it impossible to demonstrate that a grazing eclipsing binary system is being observed using only the transit photometry measurements.

This image shows the relative sizes of brown dwarfs and large planets.

Finally, there are two types of stars that are approximately the same size as gas giant planets: white dwarfs and brown dwarfs. This is due to the fact that gas giant planets, white dwarfs, and brown dwarfs are all supported by degenerate electron pressure. The light curve does not discriminate between masses, as it only depends on the size of the transiting object. When possible, radial velocity measurements are used to verify that the transiting or eclipsing body is of planetary mass, meaning less than 13 MJ. Transit timing variations can also determine MP. Doppler tomography with a known radial velocity orbit can obtain the minimum MP and the projected spin-orbit alignment.

Red giant branch stars pose another issue for detecting planets around them: while planets around these stars are much more likely to transit due to the larger star size, these transit signals are hard to separate from the star's intrinsic brightness variations, as red giants have frequent pulsations in brightness with a period of a few hours to days. This is especially notable with subgiants. In addition, these stars are much more luminous, and transiting planets block a much smaller percentage of light coming from these stars. In contrast, planets can completely occult a very small star such as a neutron star or white dwarf, an event which would be easily detectable from Earth. However, due to the small star sizes, the chance of a planet aligning with such a stellar remnant is extremely small.

Properties (mass and radius) of planets discovered using the transit method, compared with the distribution, n (light gray bar chart), of minimum masses of transiting and non-transiting exoplanets. Super-Earths are black.

The main advantage of the transit method is that the size of the planet can be determined from the light curve. When combined with the radial-velocity method (which determines the planet's mass), one can determine the density of the planet, and hence learn something about the planet's physical structure. The planets that have been studied by both methods are by far the best-characterized of all known exoplanets.

The transit method also makes it possible to study the atmosphere of the transiting planet. When the planet transits the star, light from the star passes through the upper atmosphere of the planet. By studying the high-resolution stellar spectrum carefully, one can detect elements present in the planet's atmosphere. A planetary atmosphere, and the planet for that matter, could also be detected by measuring the polarization of the starlight as it passes through or is reflected off the planet's atmosphere.

Additionally, the secondary eclipse (when the planet is blocked by its star) allows direct measurement of the planet's radiation and helps to constrain the planet's orbital eccentricity without needing the presence of other planets. If the star's photometric intensity during the secondary eclipse is subtracted from its intensity before or after, only the signal caused by the planet remains. It is then possible to measure the planet's temperature and even to detect possible signs of cloud formations on it. In March 2005, two groups of scientists carried out measurements using this technique with the Spitzer Space Telescope. The two teams, from the Harvard-Smithsonian Center for Astrophysics, led by David Charbonneau, and the Goddard Space Flight Center, led by L. D. Deming, studied the planets TrES-1 and HD 209458b respectively. The measurements revealed the planets' temperatures: 1,060 K (790 °C) for TrES-1 and about 1,130 K (860 °C) for HD 209458b. In addition, the hot Neptune Gliese 436 b is known to enter secondary eclipse. However, some transiting planets orbit such that they do not enter secondary eclipse relative to Earth; HD 17156 b is over 90% likely to be one of the latter.
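
The depth of the secondary eclipse scales roughly as the planet-to-star area ratio times the ratio of their thermal (Planck) emissions at the observed wavelength, which is why such measurements are made in the infrared. The following is a minimal sketch using illustrative hot-Jupiter numbers loosely modeled on HD 209458b, not the published measurements:

```python
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(wavelength, temp):
    """Planck spectral radiance B_lambda(T) = 2hc^2 / lambda^5 / (exp(hc/lambda k T) - 1)."""
    x = H * C / (wavelength * KB * temp)
    return (2 * H * C ** 2 / wavelength ** 5) / math.expm1(x)

def eclipse_depth(rp_over_rstar, t_planet, t_star, wavelength):
    """Secondary-eclipse depth in thermal emission at a given wavelength:
    (Rp/R*)^2 * B_lambda(T_planet) / B_lambda(T_star)."""
    return rp_over_rstar ** 2 * planck(wavelength, t_planet) / planck(wavelength, t_star)

# Illustrative hot-Jupiter values: Rp/R* ~ 0.12, T_planet ~ 1130 K, T_star ~ 6070 K,
# observed at 24 microns in the mid-infrared.
print(eclipse_depth(0.12, 1130.0, 6070.0, 24e-6))   # ~2e-3, i.e. about 0.2%
```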

History

The first exoplanet for which transits were observed was HD 209458 b, which had already been discovered using the radial velocity technique. Its transits were observed in 1999 by two teams led by David Charbonneau and Gregory W. Henry. The first exoplanet to be discovered with the transit method was OGLE-TR-56b in 2002 by the OGLE project.

A French Space Agency mission, CoRoT, began in 2006 to search for planetary transits from orbit, where the absence of atmospheric scintillation allows improved accuracy. This mission was designed to be able to detect planets "a few times to several times larger than Earth" and performed "better than expected", with two exoplanet discoveries (both of the "hot Jupiter" type) as of early 2008. In June 2013, CoRoT's exoplanet count was 32 with several still to be confirmed. The satellite unexpectedly stopped transmitting data in November 2012 (after its mission had twice been extended), and was retired in June 2013.

In March 2009, NASA's Kepler mission was launched to scan a large number of stars in the constellation Cygnus with a measurement precision expected to detect and characterize Earth-sized planets. The mission uses the transit method to scan a hundred thousand stars for planets. It was hoped that by the end of its 3.5-year mission, the satellite would have collected enough data to reveal planets even smaller than Earth. By scanning a hundred thousand stars simultaneously, it was not only able to detect Earth-sized planets, it was able to collect statistics on the numbers of such planets around Sun-like stars.

On 2 February 2011, the Kepler team released a list of 1,235 extrasolar planet candidates, including 54 that may be in the habitable zone. On 5 December 2011, the Kepler team announced that they had discovered 2,326 planetary candidates, of which 207 are similar in size to Earth, 680 are super-Earth-size, 1,181 are Neptune-size, 203 are Jupiter-size and 55 are larger than Jupiter. Compared to the February 2011 figures, the number of Earth-size and super-Earth-size planets increased by 200% and 140% respectively. Moreover, 48 planet candidates were found in the habitable zones of surveyed stars, marking a decrease from the February figure; this was due to the more stringent criteria in use in the December data. By June 2013, the number of planet candidates had increased to 3,278, and some confirmed planets were smaller than Earth, some even Mars-sized (such as Kepler-62c) and one even smaller than Mercury (Kepler-37b).

The Transiting Exoplanet Survey Satellite launched in April 2018.

Reflection and emission modulations

Short-period planets in close orbits around their stars will undergo reflected light variations because, like the Moon, they go through phases from full to new and back again. In addition, as these planets receive a lot of starlight, it heats them, making thermal emissions potentially detectable. Since telescopes cannot resolve the planet from the star, they see only the combined light, and the brightness of the host star seems to change over each orbit in a periodic manner. Although the effect is small (the photometric precision required is about the same as that needed to detect an Earth-sized planet in transit across a solar-type star), Jupiter-sized planets with an orbital period of a few days are detectable by space telescopes such as the Kepler Space Observatory. As with the transit method, it is easier to detect large planets orbiting close to their parent star than other planets, as these planets catch more light from their parent star. When a planet has a high albedo and is situated around a relatively luminous star, its light variations are easier to detect in visible light, while darker planets or planets around low-temperature stars are more easily detectable with infrared light with this method. In the long run, this method may find the most planets discovered by the Kepler mission, because the reflected light variation with orbital phase is largely independent of orbital inclination and does not require the planet to pass in front of the disk of the star. It still cannot detect planets with circular face-on orbits from Earth's viewpoint, as the amount of reflected light does not change during their orbits.
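
The size of the reflected-light signal can be estimated as roughly the geometric albedo times (Rp/a)², which shows why only large planets on very tight orbits are currently within reach. A minimal sketch with assumed, illustrative albedos:

```python
R_JUP = 6.991e7    # m
R_EARTH = 6.371e6  # m
AU = 1.496e11      # m

def reflection_amplitude(geometric_albedo, r_planet, a):
    """Peak fractional brightness variation from reflected light:
    roughly A_g * (Rp / a)^2."""
    return geometric_albedo * (r_planet / a) ** 2

# Hot Jupiter at 0.05 AU with an assumed geometric albedo of 0.1:
print(reflection_amplitude(0.1, R_JUP, 0.05 * AU))   # ~9e-6, i.e. of order 10 ppm
# Earth analogue at 1 AU with an assumed albedo of 0.3:
print(reflection_amplitude(0.3, R_EARTH, 1.0 * AU))  # ~5e-10, far below current precision
```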

The phase function of the giant planet is also a function of its thermal properties and atmosphere, if any. Therefore, the phase curve may constrain other planet properties, such as the size distribution of atmospheric particles. When a planet is found transiting and its size is known, the phase variation curve helps calculate or constrain the planet's albedo. It is more difficult with very hot planets, as the glow of the planet can interfere when trying to calculate albedo. In theory, albedo can also be found for non-transiting planets when observing the light variations at multiple wavelengths. This allows scientists to find the size of the planet even if the planet is not transiting the star.

The first-ever direct detection of the spectrum of visible light reflected from an exoplanet was made in 2015 by an international team of astronomers. The astronomers studied light from 51 Pegasi b, the first exoplanet discovered orbiting a main-sequence star (a Sun-like star), using the High Accuracy Radial velocity Planet Searcher (HARPS) instrument at the European Southern Observatory's La Silla Observatory in Chile.

Both CoRoT and Kepler have measured the reflected light from planets. However, these planets were already known since they transit their host star. The first planets discovered by this method are Kepler-70b and Kepler-70c, found by Kepler.

Relativistic beaming

A separate novel method to detect exoplanets from light variations uses relativistic beaming of the observed flux from the star due to its motion. It is also known as Doppler beaming or Doppler boosting. The method was first proposed by Abraham Loeb and Scott Gaudi in 2003. As the planet tugs on the star gravitationally, the density of photons and therefore the apparent brightness of the star changes from the observer's viewpoint. Like the radial velocity method, it can be used to determine the orbital eccentricity and the minimum mass of the planet. With this method, it is easier to detect massive planets close to their stars, as these factors increase the star's motion. Unlike the radial velocity method, it does not require an accurate spectrum of a star, and therefore can be used more easily to find planets around fast-rotating stars and more distant stars.

One of the biggest disadvantages of this method is that the light variation effect is very small. A Jovian-mass planet orbiting 0.025 AU away from a Sun-like star is barely detectable even when the orbit is edge-on. This is not an ideal method for discovering new planets, as the amount of emitted and reflected starlight from the planet is usually much larger than light variations due to relativistic beaming. This method is still useful, however, as it allows for measurement of the planet's mass without the need for follow-up data collection from radial velocity observations.
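
The order of magnitude of the beaming signal can be estimated as roughly 4K/c in bolometric flux, where K is the star's radial-velocity semi-amplitude; the exact prefactor depends on the stellar spectrum and the observing bandpass. A minimal sketch for the Jovian-mass planet at 0.025 AU mentioned above:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
M_JUP = 1.898e27   # kg
AU = 1.496e11      # m
C = 2.998e8        # m/s

def beaming_amplitude(m_planet, a, m_star=M_SUN):
    """Approximate bolometric Doppler-beaming amplitude, ~4 K / c, where K is the
    stellar radial-velocity semi-amplitude (circular, edge-on orbit assumed)."""
    period = 2 * math.pi * math.sqrt(a ** 3 / (G * (m_star + m_planet)))
    k = (2 * math.pi * G / period) ** (1 / 3) * m_planet / (m_star + m_planet) ** (2 / 3)
    return 4 * k / C

# Jupiter-mass planet 0.025 AU from a Sun-like star:
print(beaming_amplitude(M_JUP, 0.025 * AU))   # ~2e-6, i.e. a few parts per million
```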

The first discovery of a planet using this method (Kepler-76b) was announced in 2013.

Ellipsoidal variations

Massive planets can cause slight tidal distortions to their host stars. When a star has a slightly ellipsoidal shape, its apparent brightness varies depending on whether the elongated part of the star is facing the observer. As with the relativistic beaming method, it helps to determine the minimum mass of the planet, and its sensitivity depends on the planet's orbital inclination. The extent of the effect on a star's apparent brightness can be much larger than with the relativistic beaming method, but the brightness variation cycle is twice as fast. In addition, the planet distorts the shape of the star more if it has a low semi-major axis to stellar radius ratio and the density of the star is low. This makes this method suitable for finding planets around stars that have left the main sequence.
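
A common rule of thumb is that the ellipsoidal-variation amplitude scales as (m_p/M_*)(R_*/a)³ times an order-unity limb- and gravity-darkening factor, which makes clear why low-density stars with large radii favor this method. A minimal sketch, with an assumed illustrative value for that factor:

```python
M_SUN = 1.989e30   # kg
M_JUP = 1.898e27   # kg
R_SUN = 6.957e8    # m
AU = 1.496e11      # m

def ellipsoidal_amplitude(m_planet, a, m_star=M_SUN, r_star=R_SUN, beta=1.2):
    """Approximate ellipsoidal-variation amplitude:
    ~ beta * (m_planet / m_star) * (R_star / a)^3,
    where beta is an order-unity limb/gravity-darkening factor (assumed here)."""
    return beta * (m_planet / m_star) * (r_star / a) ** 3

# The same Jupiter-mass planet at 0.025 AU from a Sun-like star:
print(ellipsoidal_amplitude(M_JUP, 0.025 * AU))   # ~7e-6, a few times the beaming signal
```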

Pulsar timing

Artist's impression of the pulsar PSR 1257+12's planetary system.

A pulsar is a neutron star: the small, ultradense remnant of a star that has exploded as a supernova. Pulsars emit radio waves extremely regularly as they rotate. Because the intrinsic rotation of a pulsar is so regular, slight anomalies in the timing of its observed radio pulses can be used to track the pulsar's motion. Like an ordinary star, a pulsar will move in its own small orbit if it has a planet. Calculations based on pulse-timing observations can then reveal the parameters of that orbit.

This method was not originally designed for the detection of planets, but is so sensitive that it is capable of detecting planets far smaller than any other method can, down to less than a tenth the mass of Earth. It is also capable of detecting mutual gravitational perturbations between the various members of a planetary system, thereby revealing further information about those planets and their orbital parameters. In addition, it can easily detect planets which are relatively far away from the pulsar.
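
The timing signal is simply the light-travel time across the pulsar's reflex orbit, which is why even very low-mass planets stand out against microsecond-level timing precision. A minimal sketch, with approximate values loosely based on the PSR B1257+12 system:

```python
M_SUN = 1.989e30    # kg
M_EARTH = 5.972e24  # kg
AU = 1.496e11       # m
C = 2.998e8         # m/s

def timing_amplitude(m_planet, a_planet, m_pulsar=1.4 * M_SUN, sin_i=1.0):
    """Pulse-timing semi-amplitude (seconds): light-travel time across the
    pulsar's reflex orbit, ~ (m_p / (M_psr + m_p)) * a_p * sin(i) / c."""
    a_pulsar = a_planet * m_planet / (m_pulsar + m_planet)
    return a_pulsar * sin_i / C

# Roughly PSR B1257+12 B-like values (about 4 Earth masses at ~0.36 AU):
print(timing_amplitude(4 * M_EARTH, 0.36 * AU))   # ~1.5e-3 s, a millisecond-scale signal
# An Earth-mass planet at 1 AU gives ~1e-3 s, still easily measurable.
```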

There are two main drawbacks to the pulsar timing method: pulsars are relatively rare, and special circumstances are required for a planet to form around a pulsar. Therefore, it is unlikely that a large number of planets will be found this way. Additionally, life would likely not survive on planets orbiting pulsars due to the high intensity of ambient radiation.

In 1992, Aleksander Wolszczan and Dale Frail used this method to discover planets around the pulsar PSR 1257+12. Their discovery was quickly confirmed, making it the first confirmation of planets outside the Solar System.

Variable star timing

Like pulsars, some other types of pulsating variable stars are regular enough that radial velocity can be determined purely photometrically from the Doppler shift of the pulsation frequency, without needing spectroscopy. This method is not as sensitive as the pulsar timing variation method, because the periodic activity is longer and less regular. The ease of detecting planets around a variable star depends on the pulsation period of the star, the regularity of the pulsations, the mass of the planet, and its distance from the host star.

The first success with this method came in 2007, when V391 Pegasi b was discovered around a pulsating subdwarf star.

Transit timing

The Kepler Mission, a NASA mission able to detect extrasolar planets.

The transit timing variation method considers whether transits occur with strict periodicity, or if there is a variation. When multiple transiting planets are detected, they can often be confirmed with the transit timing variation method. This is useful in planetary systems far from the Sun, where radial velocity methods cannot detect them due to the low signal-to-noise ratio. If a planet has been detected by the transit method, then variations in the timing of the transit provide an extremely sensitive method of detecting additional non-transiting planets in the system with masses comparable to Earth's. It is easier to detect transit-timing variations if planets have relatively close orbits, and when at least one of the planets is more massive, causing the orbital period of a less massive planet to be more perturbed.

The main drawback of the transit timing method is that usually not much can be learnt about the planet itself. Transit timing variation can help to determine the maximum mass of a planet. In most cases, it can confirm if an object has a planetary mass, but it does not put narrow constraints on its mass. There are exceptions though, as planets in the Kepler-36 and Kepler-88 systems orbit close enough to accurately determine their masses.

The first significant detection of a non-transiting planet using TTV was carried out with NASA's Kepler spacecraft. The transiting planet Kepler-19b shows TTV with an amplitude of five minutes and a period of about 300 days, indicating the presence of a second planet, Kepler-19c, which has a period which is a near-rational multiple of the period of the transiting planet.

In circumbinary planets, variations of transit timing are mainly caused by the orbital motion of the stars, instead of gravitational perturbations by other planets. These variations make it harder to detect these planets through automated methods. However, it makes these planets easy to confirm once they are detected.

Transit duration variation

"Duration variation" refers to changes in how long the transit takes. Duration variations may be caused by an exomoon, apsidal precession for eccentric planets due to another planet in the same system, or general relativity.

When a circumbinary planet is found through the transit method, it can be easily confirmed with the transit duration variation method. In close binary systems, the stars significantly alter the motion of the companion, meaning that any transiting planet has significant variation in transit duration. The first such confirmation came from Kepler-16b.

Eclipsing binary minima timing

When a binary star system is aligned such that – from the Earth's point of view – the stars pass in front of each other in their orbits, the system is called an "eclipsing binary" star system. The time of minimum light, when the star with the brighter surface is at least partially obscured by the disc of the other star, is called the primary eclipse, and approximately half an orbit later, the secondary eclipse occurs when the brighter surface area star obscures some portion of the other star. These times of minimum light, or central eclipses, constitute a time stamp on the system, much like the pulses from a pulsar (except that rather than a flash, they are a dip in brightness). If there is a planet in circumbinary orbit around the binary stars, the stars will be offset around a binary-planet center of mass. As the stars in the binary are displaced back and forth by the planet, the times of the eclipse minima will vary. The periodicity of this offset may be the most reliable way to detect extrasolar planets around close binary systems. With this method, planets are more easily detectable if they are more massive, orbit relatively closely around the system, and if the stars have low masses.

The eclipsing timing method allows the detection of planets further away from the host star than the transit method. However, signals around cataclysmic variable stars hinting at planets tend to correspond to unstable orbits. In 2011, Kepler-16b became the first planet to be definitively characterized via eclipsing binary timing variations.

Gravitational microlensing

Gravitational microlensing.

Gravitational microlensing occurs when the gravitational field of a star acts like a lens, magnifying the light of a distant background star. This effect occurs only when the two stars are almost exactly aligned. Lensing events are brief, lasting for weeks or days, as the two stars and Earth are all moving relative to each other. More than a thousand such events have been observed over the past ten years.

If the foreground lensing star has a planet, then that planet's own gravitational field can make a detectable contribution to the lensing effect. Since that requires a highly improbable alignment, a very large number of distant stars must be continuously monitored in order to detect planetary microlensing contributions at a reasonable rate. This method is most fruitful for planets between Earth and the center of the galaxy, as the galactic center provides a large number of background stars.

In 1991, astronomers Shude Mao and Bohdan Paczyński proposed using gravitational microlensing to look for binary companions to stars, and their proposal was refined by Andy Gould and Abraham Loeb in 1992 as a method to detect exoplanets. Successes with the method date back to 2002, when a group of Polish astronomers (Andrzej Udalski, Marcin Kubiak and Michał Szymański from Warsaw, and Bohdan Paczyński) during project OGLE (the Optical Gravitational Lensing Experiment) developed a workable technique. During one month, they found several possible planets, though limitations in the observations prevented clear confirmation. Since then, several confirmed extrasolar planets have been detected using microlensing. This was the first method capable of detecting planets of Earth-like mass around ordinary main-sequence stars.

Unlike most other methods, which have detection bias towards planets with small (or for resolved imaging, large) orbits, the microlensing method is most sensitive to detecting planets around 1-10 astronomical units away from Sun-like stars.
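
This sensitivity range corresponds to the Einstein ring radius of a typical lens star, since a planet perturbs the lensing most strongly when its projected separation is close to that radius. A minimal sketch with assumed lens and source distances:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s
M_SUN = 1.989e30   # kg
KPC = 3.086e19     # m
AU = 1.496e11      # m

def einstein_radius(m_lens, d_lens, d_source):
    """Physical Einstein ring radius (m) at the lens distance:
    r_E = sqrt(4 G M / c^2 * d_lens * (d_source - d_lens) / d_source)."""
    return math.sqrt(4 * G * m_lens / C ** 2
                     * d_lens * (d_source - d_lens) / d_source)

# A 0.3-solar-mass lens halfway to a Galactic-bulge source (assumed 4 kpc and 8 kpc):
r_e = einstein_radius(0.3 * M_SUN, 4 * KPC, 8 * KPC)
print(r_e / AU)   # ~2 AU: planets near this separation perturb the lensing most strongly
```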

A notable disadvantage of the method is that the lensing cannot be repeated, because the chance alignment never occurs again. Also, the detected planets will tend to be several kiloparsecs away, so follow-up observations with other methods are usually impossible. In addition, the only physical characteristic that can be determined by microlensing is the mass of the planet, within loose constraints. Orbital properties also tend to be unclear, as the only orbital characteristic that can be directly determined is its current semi-major axis from the parent star, which can be misleading if the planet follows an eccentric orbit. When the planet is far away from its star, it spends only a tiny portion of its orbit in a state where it is detectable with this method, so the orbital period of the planet cannot be easily determined. It is also easier to detect planets around low-mass stars, as the gravitational microlensing effect increases with the planet-to-star mass ratio.

The main advantages of the gravitational microlensing method are that it can detect low-mass planets (in principle down to Mars mass with future space projects such as WFIRST); it can detect planets in wide orbits comparable to Saturn and Uranus, which have orbital periods too long for the radial velocity or transit methods; and it can detect planets around very distant stars. When enough background stars can be observed with enough accuracy, then the method should eventually reveal how common Earth-like planets are in the galaxy.

Observations are usually performed using networks of robotic telescopes. In addition to the European Research Council-funded OGLE, the Microlensing Observations in Astrophysics (MOA) group is working to perfect this approach.

The PLANET (Probing Lensing Anomalies NETwork)/RoboNet project is even more ambitious. It allows nearly continuous round-the-clock coverage by a world-spanning telescope network, providing the opportunity to pick up microlensing contributions from planets with masses as low as Earth's. This strategy was successful in detecting the first low-mass planet on a wide orbit, designated OGLE-2005-BLG-390Lb.

Direct imaging

Direct image of exoplanets around the star HR8799 using a Vortex coronagraph on a 1.5m portion of the Hale telescope
ESO image of a planet near Beta Pictoris

Planets are extremely faint light sources compared to stars, and what little light comes from them tends to be lost in the glare from their parent star. So in general, it is very difficult to detect and resolve them directly from their host star. Planets orbiting far enough from stars to be resolved reflect very little starlight, so planets are detected through their thermal emission instead. It is easier to obtain images when the star system is relatively near to the Sun, and when the planet is especially large (considerably larger than Jupiter), widely separated from its parent star, and hot so that it emits intense infrared radiation; images have then been made in the infrared, where the planet is brighter than it is at visible wavelengths. Coronagraphs are used to block light from the star, while leaving the planet visible. Direct imaging of an Earth-like exoplanet requires extreme optothermal stability. During the accretion phase of planetary formation, the star-planet contrast may be even better in H alpha than it is in infrared – an H alpha survey is currently underway.

The ExTrA telescopes at La Silla observe at infrared wavelengths and add spectral information to the usual photometric measurements.

Direct imaging can give only loose constraints of the planet's mass, which is derived from the age of the star and the temperature of the planet. Mass can vary considerably, as planets can form several million years after the star has formed. The cooler the planet is, the less the planet's mass needs to be. In some cases it is possible to give reasonable constraints to the radius of a planet based on planet's temperature, its apparent brightness, and its distance from Earth. The spectra emitted from planets do not have to be separated from the star, which eases determining the chemical composition of planets.

Sometimes observations at multiple wavelengths are needed to rule out the planet being a brown dwarf. Direct imaging can be used to accurately measure the planet's orbit around the star. Unlike the majority of other methods, direct imaging works better with planets with face-on orbits rather than edge-on orbits, as a planet in a face-on orbit is observable during the entirety of the planet's orbit, while planets with edge-on orbits are most easily observable during their period of largest apparent separation from the parent star.

The planets detected through direct imaging currently fall into two categories. The first consists of planets found around stars more massive than the Sun that are young enough to have protoplanetary disks. The second category consists of possible sub-brown dwarfs found around very dim stars, or brown dwarfs that are at least 100 AU away from their parent stars.

Planetary-mass objects not gravitationally bound to a star are found through direct imaging as well.

Early discoveries

The large central object is the star CVSO 30; the small dot up and to the left is exoplanet CVSO 30c. This image was made using astrometry data from VLT's NACO and SINFONI instruments.

In 2004, a group of astronomers used the European Southern Observatory's Very Large Telescope array in Chile to produce an image of 2M1207b, a companion to the brown dwarf 2M1207. In the following year, the planetary status of the companion was confirmed. The planet is estimated to be several times more massive than Jupiter, and to have an orbital radius greater than 40 AU.

In September 2008, an object was imaged at a separation of 330 AU from the star 1RXS J160929.1−210524, but it was not until 2010 that it was confirmed to be a companion planet to the star and not just a chance alignment.

The first multiplanet system, announced on 13 November 2008, was imaged in 2007, using telescopes at both the Keck Observatory and Gemini Observatory. Three planets were directly observed orbiting HR 8799, whose masses are approximately ten, ten, and seven times that of Jupiter. On the same day, 13 November 2008, it was announced that the Hubble Space Telescope directly observed an exoplanet orbiting Fomalhaut, with a mass no more than 3 MJ. Both systems are surrounded by disks not unlike the Kuiper belt.

In 2009, it was announced that analysis of images dating back to 2003 revealed a planet orbiting Beta Pictoris.

In 2012, it was announced that a "Super-Jupiter" planet with a mass of about 12.8 MJ orbiting Kappa Andromedae was directly imaged using the Subaru Telescope in Hawaii. It orbits its parent star at a distance of about 55 AU, or nearly twice the distance of Neptune from the Sun.

An additional system, GJ 758, was imaged in November 2009, by a team using the HiCIAO instrument of the Subaru Telescope, but it was a brown dwarf.

Other possible exoplanets to have been directly imaged include GQ Lupi b, AB Pictoris b, and SCR 1845 b. As of March 2006, none have been confirmed as planets; instead, they might themselves be small brown dwarfs.

Imaging instruments

ESO VLT image of exoplanet HD 95086 b

Some projects to equip telescopes with planet-imaging-capable instruments include the ground-based telescopes Gemini Planet Imager, VLT-SPHERE, the Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) instrument, Palomar Project 1640, and the space telescope WFIRST. The New Worlds Mission proposes a large occulter in space designed to block the light of nearby stars in order to observe their orbiting planets. This could be used with existing, already planned or new, purpose-built telescopes.

In 2010, a team from NASA's Jet Propulsion Laboratory demonstrated that a vortex coronagraph could enable small telescopes to directly image planets. They did this by imaging the previously imaged HR 8799 planets, using just a 1.5 meter-wide portion of the Hale Telescope.

Another promising approach is nulling interferometry.

It has also been proposed that space telescopes that focus light using zone plates instead of mirrors would provide higher-contrast imaging and be cheaper to launch into space, since the lightweight foil zone plate can be folded up.

Data Reduction Techniques

Post-processing of observational data to enhance the signal strength of off-axis bodies (i.e. exoplanets) can be accomplished in a variety of ways. Likely the earliest and most popular is the technique of Angular Differential Imaging (ADI), in which the exposures are averaged, the average is subtracted from each exposure, and the residuals are then de-rotated so that the faint planetary signal stacks in one place.
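
A minimal sketch of that ADI procedure, assuming pupil-stabilized coronagraphic frames and their parallactic angles are already in hand (NumPy/SciPy are used purely for illustration):

```python
import numpy as np
from scipy.ndimage import rotate

def adi_reduce(frames, parallactic_angles):
    """Minimal angular differential imaging (ADI) sketch.

    frames: array of shape (n_exposures, ny, nx), coronagraphic images taken in
            pupil-stabilized mode so that the field rotates between exposures.
    parallactic_angles: field rotation of each exposure, in degrees.
    """
    # Average the stack: the quasi-static stellar halo and speckle pattern stay
    # put, while a planet moves from frame to frame and largely averages away.
    reference_psf = np.mean(frames, axis=0)   # a median is also commonly used

    # Subtract the stellar reference from every exposure.
    residuals = frames - reference_psf

    # De-rotate each residual so the sky (and any planet) lines up again.
    derotated = [rotate(r, -angle, reshape=False, order=1)
                 for r, angle in zip(residuals, parallactic_angles)]

    # Stack: the planet signal adds up while residual speckle noise averages down.
    return np.mean(derotated, axis=0)
```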

Spectral Differential Imaging (SDI) performs an analogous procedure, but for radial changes in brightness (as a function of wavelength) instead of angular changes.

Combinations of the two are possible (ASDI, SADI, or Combined Differential Imaging "CODI").

Polarimetry

Light given off by a star is unpolarized, i.e. the direction of oscillation of the light wave is random. However, when the light is reflected off the atmosphere of a planet, the light waves interact with the molecules in the atmosphere and become polarized.

The polarized fraction of the combined light of the planet and star is tiny (about one part in a million), but such measurements can in principle be made with very high sensitivity, as polarimetry is not limited by the stability of the Earth's atmosphere. Another main advantage is that polarimetry allows for determination of the composition of the planet's atmosphere. The main disadvantage is that it will not be able to detect planets without atmospheres. Larger planets and planets with higher albedo are easier to detect through polarimetry, as they reflect more light.

Astronomical devices used for polarimetry, called polarimeters, are capable of detecting polarized light and rejecting unpolarized beams. Groups such as ZIMPOL/CHEOPS and PlanetPol are currently using polarimeters to search for extrasolar planets. The first successful detection of an extrasolar planet using this method came in 2008, when HD 189733 b, a planet discovered three years earlier, was detected using polarimetry. However, no new planets have yet been discovered using this method.

Astrometry

In this diagram a planet (smaller object) orbits a star, which itself moves in a small orbit. The system's center of mass is shown with a red plus sign. (In this case, it always lies within the star.)

This method consists of precisely measuring a star's position in the sky, and observing how that position changes over time. Originally, this was done visually, with hand-written records. By the end of the 19th century, this method used photographic plates, greatly improving the accuracy of the measurements as well as creating a data archive. If a star has a planet, then the gravitational influence of the planet will cause the star itself to move in a tiny circular or elliptical orbit. Effectively, star and planet each orbit around their mutual centre of mass (barycenter), as explained by solutions to the two-body problem. Since the star is much more massive, its orbit will be much smaller. Frequently, the mutual centre of mass will lie within the radius of the larger body. Consequently, it is easier to find planets around low-mass stars, especially brown dwarfs.
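
The angular size of this stellar wobble is roughly (m_p/M_*) · a/d, which in convenient units is about (m_p/M_*) · a[AU]/d[pc] arcseconds. A minimal sketch with illustrative values shows why the signal is so small:

```python
M_SUN = 1.989e30    # kg
M_JUP = 1.898e27    # kg
M_EARTH = 5.972e24  # kg

def astrometric_signature_uas(m_planet, m_star, a_au, dist_pc):
    """Angular semi-amplitude of the star's wobble, in micro-arcseconds:
    alpha ~ (m_planet / m_star) * a[AU] / d[pc] arcsec."""
    return (m_planet / m_star) * a_au / dist_pc * 1e6

# Seen from 10 pc away:
print(astrometric_signature_uas(M_JUP, M_SUN, 5.2, 10.0))    # ~500 uas for a Jupiter analogue
print(astrometric_signature_uas(M_EARTH, M_SUN, 1.0, 10.0))  # ~0.3 uas for an Earth analogue
```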

Motion of the center of mass (barycenter) of the Solar System relative to the Sun

Astrometry is the oldest search method for extrasolar planets, and was originally popular because of its success in characterizing astrometric binary star systems. It dates back at least to statements made by William Herschel in the late 18th century. He claimed that an unseen companion was affecting the position of the star he cataloged as 70 Ophiuchi. The first known formal astrometric calculation for an extrasolar planet was made by William Stephen Jacob in 1855 for this star. Similar calculations were repeated by others for another half-century until finally refuted in the early 20th century. For two centuries, claims circulated of the discovery of unseen companions in orbit around nearby star systems, all reportedly found using this method, culminating in the prominent 1996 announcement of multiple planets orbiting the nearby star Lalande 21185 by George Gatewood. None of these claims survived scrutiny by other astronomers, and the technique fell into disrepute. Unfortunately, changes in stellar position are so small, and atmospheric and systematic distortions so large, that even the best ground-based telescopes cannot produce precise enough measurements. All claims of a planetary companion of less than 0.1 solar mass made before 1996 using this method are likely spurious. In 2002, the Hubble Space Telescope did succeed in using astrometry to characterize a previously discovered planet around the star Gliese 876.

The space-based observatory Gaia, launched in 2013, is expected to find thousands of planets via astrometry, but prior to the launch of Gaia, no planet detected by astrometry had been confirmed. SIM PlanetQuest was a US project (cancelled in 2010) that would have had similar exoplanet finding capabilities to Gaia.

One potential advantage of the astrometric method is that it is most sensitive to planets with large orbits. This makes it complementary to other methods that are most sensitive to planets with small orbits. However, very long observation times will be required — years, and possibly decades, as planets far enough from their star to allow detection via astrometry also take a long time to complete an orbit. Planets orbiting around one of the stars in binary systems are more easily detectable, as they cause perturbations in the orbits of stars themselves. However, with this method, follow-up observations are needed to determine which star the planet orbits around.

In 2009, the discovery of VB 10b by astrometry was announced. This planetary object, orbiting the low-mass red dwarf star VB 10, was reported to have a mass seven times that of Jupiter. If confirmed, it would have been the first exoplanet discovered by astrometry, of the many that have been claimed through the years. However, recent independent radial velocity studies rule out the existence of the claimed planet.

In 2010, six binary stars were astrometrically measured. One of the star systems, called HD 176051, was found with "high confidence" to have a planet.

In 2018, a study comparing observations from the Gaia spacecraft to Hipparcos data for the Beta Pictoris system was able to measure the mass of Beta Pictoris b, constraining it to 11±2 Jupiter masses. This is in good agreement with previous mass estimations of roughly 13 Jupiter masses.

In 2019, data from the Gaia spacecraft and its predecessor Hipparcos was complemented with HARPS data, enabling a better description of ε Indi Ab as the second-closest Jupiter-like exoplanet, with a mass of about 3 Jupiter masses on a slightly eccentric orbit with an orbital period of 45 years.

As of 2022, especially thanks to Gaia, the combination of radial velocity and astrometry has been used to detect and characterize numerous Jovian planets, including the nearest Jupiter analogues ε Eridani b and ε Indi Ab. In addition, radio astrometry using the VLBA has been used to discover planets in orbit around TVLM 513-46546 and EQ Pegasi A.

X-ray eclipse

In September 2020, the detection of a candidate planet orbiting the high-mass X-ray binary M51-ULS-1 in the Whirlpool Galaxy was announced. The planet was detected by eclipses of the X-ray source, which consists of a stellar remnant (either a neutron star or a black hole) and a massive star, likely a B-type supergiant. This is the only method capable of detecting a planet in another galaxy.

Disc kinematics

Planets can be detected by the gaps they produce in protoplanetary discs.

Other possible methods

Flare and variability echo detection

Non-periodic variability events, such as flares, can produce extremely faint echoes in the light curve if they reflect off an exoplanet or other scattering medium in the star system. More recently, motivated by advances in instrumentation and signal processing technologies, echoes from exoplanets are predicted to be recoverable from high-cadence photometric and spectroscopic measurements of active star systems, such as M dwarfs. These echoes are theoretically observable in all orbital inclinations.

Transit imaging

An optical/infrared interferometer array doesn't collect as much light as a single telescope of equivalent size, but has the resolution of a single telescope the size of the array. For bright stars, this resolving power could be used to image a star's surface during a transit event and see the shadow of the planet transiting. This could provide a direct measurement of the planet's angular radius and, via parallax, its actual radius. This is more accurate than radius estimates based on transit photometry, which depend on stellar radius estimates that in turn rely on models of star characteristics. Imaging also provides a more accurate determination of the inclination than photometry does.

Magnetospheric radio emissions

Radio emissions from magnetospheres could be detected with future radio telescopes. This could enable determination of the rotation rate of a planet, which is difficult to detect otherwise.

Auroral radio emissions

Auroral radio emissions from giant planets with plasma sources, such as Jupiter's volcanic moon Io, could be detected with radio telescopes such as LOFAR.

Optical interferometry

In March 2019, ESO astronomers, employing the GRAVITY instrument on their Very Large Telescope Interferometer (VLTI), announced the first direct detection of an exoplanet, HR 8799 e, using optical interferometry.

Modified interferometry

By looking at the wiggles of an interferogram using a Fourier-Transform-Spectrometer, enhanced sensitivity could be obtained in order to detect faint signals from Earth-like planets.

Detection of Dust Trapping around Lagrangian Points

Identification of dust clumps along a protoplanetary disk demonstrates trace accumulation around Lagrangian points. From the detection of this dust, it can be inferred that a planet exists that has created those Lagrangian points.

Detection of extrasolar asteroids and debris disks

Circumstellar disks

An artist's conception of two Pluto-sized dwarf planets in a collision around Vega

Disks of space dust (debris disks) surround many stars. The dust can be detected because it absorbs ordinary starlight and re-emits it as infrared radiation. Even if the dust particles have a total mass well below that of Earth, they can still have a large enough total surface area to outshine their parent star at infrared wavelengths.

The Hubble Space Telescope is capable of observing dust disks with its NICMOS (Near Infrared Camera and Multi-Object Spectrometer) instrument. Even better images have since been taken by the Spitzer Space Telescope and by the European Space Agency's Herschel Space Observatory, both of which can see far deeper into infrared wavelengths than Hubble can. Dust disks have now been found around more than 15% of nearby Sun-like stars.

The dust is thought to be generated by collisions among comets and asteroids. Radiation pressure from the star will push the dust particles away into interstellar space over a relatively short timescale. Therefore, the detection of dust indicates continual replenishment by new collisions, and provides strong indirect evidence of the presence of small bodies like comets and asteroids that orbit the parent star.[127] For example, the dust disk around the star Tau Ceti indicates that that star has a population of objects analogous to our own Solar System's Kuiper Belt, but at least ten times thicker.

More speculatively, features in dust disks sometimes suggest the presence of full-sized planets. Some disks have a central cavity, meaning that they are really ring-shaped. The central cavity may be caused by a planet "clearing out" the dust inside its orbit. Other disks contain clumps that may be caused by the gravitational influence of a planet. Both these kinds of features are present in the dust disk around Epsilon Eridani, hinting at the presence of a planet with an orbital radius of around 40 AU (in addition to the inner planet detected through the radial-velocity method). These kinds of planet-disk interactions can be modeled numerically using collisional grooming techniques.

Contamination of stellar atmospheres

Spectral analysis of white dwarfs' atmospheres often finds contamination by heavier elements like magnesium and calcium. These elements cannot originate in the star's core, and it is probable that the contamination comes from asteroids that got too close (within the Roche limit) to these stars through gravitational interaction with larger planets and were torn apart by the star's tidal forces. Up to 50% of young white dwarfs may be contaminated in this manner.

Additionally, the dust responsible for the atmospheric pollution may be detected by infrared radiation if it exists in sufficient quantity, similar to the detection of debris discs around main sequence stars. Data from the Spitzer Space Telescope suggests that 1-3% of white dwarfs possess detectable circumstellar dust.

In 2015, minor planets were discovered transiting the white dwarf WD 1145+017. This material orbits with a period of around 4.5 hours, and the shapes of the transit light curves suggest that the larger bodies are disintegrating, contributing to the contamination in the white dwarf's atmosphere.

Space telescopes

Most confirmed extrasolar planets have been found using space-based telescopes (as of January 2015). Many of the detection methods can work more effectively with space-based telescopes that avoid atmospheric haze and turbulence. COROT (2007-2012) and Kepler were space missions dedicated to searching for extrasolar planets using transits. COROT discovered about 30 new exoplanets. Kepler (2009-2013) and K2 (2013- ) have discovered over 2000 verified exoplanets. The Hubble Space Telescope and MOST have also found or confirmed a few planets. The infrared Spitzer Space Telescope has been used to detect transits of extrasolar planets, as well as occultations of the planets by their host star and phase curves.

The Gaia mission, launched in December 2013, will use astrometry to determine the true masses of 1000 nearby exoplanets. TESS (launched in 2018), CHEOPS (launched in 2019) and PLATO (planned for 2026) use or will use the transit method.
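
Astrometry can yield true masses because the star's wobble on the sky has an angular semi-amplitude of roughly alpha = (M_planet / M_star) x (a / d), where a is the planet's orbital radius and d the distance to the system. The sketch below evaluates this for a hypothetical Jupiter analogue (5.2 AU orbit around a Sun-like star) at a few example distances; the planet, star and distances are illustrative assumptions, not a description of Gaia's actual target list.

    # Minimal sketch: astrometric signature alpha ~ (M_planet / M_star) * (a / d),
    # expressed in micro-arcseconds. The planet, star and distances are illustrative.
    M_JUP_OVER_M_SUN = 1.0 / 1047.6    # Jupiter-to-Sun mass ratio
    AU_PER_PC = 206_264.8              # astronomical units in one parsec
    RAD_TO_MUAS = 206_264.8 * 1e6      # radians to micro-arcseconds

    def astrometric_signature_muas(mass_ratio, a_au, dist_pc):
        """Angular semi-amplitude of the star's wobble, in micro-arcseconds."""
        return mass_ratio * (a_au / (dist_pc * AU_PER_PC)) * RAD_TO_MUAS

    for d_pc in (10, 50, 200):
        alpha = astrometric_signature_muas(M_JUP_OVER_M_SUN, a_au=5.2, dist_pc=d_pc)
        print(f"Jupiter analogue at {d_pc:4d} pc: ~{alpha:6.1f} micro-arcseconds")

Signatures of tens to hundreds of micro-arcseconds for nearby giant planets are what make micro-arcsecond-level astrometry capable of giving true masses rather than the minimum masses obtained from radial velocities.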

Primary and secondary detection

  • Transit. Primary: primary eclipse, in which the planet passes in front of the star. Secondary: secondary eclipse, in which the star passes in front of the planet.
  • Radial velocity. Primary: radial velocity of the star. Secondary: radial velocity of the planet; this has been done for Tau Boötis b.
  • Astrometry. Primary: astrometry of the star; the star's position moves more for large planets with large orbits. Secondary: astrometry of the planet (color-differential astrometry); the planet's position changes more quickly for planets with small orbits. A theoretical method, proposed for use with the SPICA spacecraft.

Verification and falsification methods

  • Verification by multiplicity
  • Transit color signature
  • Doppler tomography
  • Dynamical stability testing
  • Distinguishing between planets and stellar activity
  • Transit offset

Characterization methods

Information society

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Information_society

An information society is a society where the usage, creation, distribution, manipulation and integration of information is a significant activity. Its main drivers are information and communication technologies, which have resulted in rapid growth of a variety of forms of information. Proponents of this theory posit that these technologies are affecting most important forms of social organization, including education, the economy, health, government, warfare, and levels of democracy. The people able to partake in this form of society are sometimes called computer users or digital citizens, defined by K. Mossberger as “those who use the Internet regularly and effectively”. This is one of many dozens of internet-related terms coined to suggest that humans are entering a new and different phase of society.

Some of the markers of this steady change may be technological, economic, occupational, spatial, cultural, or a combination of all of these. The information society is seen as a successor to industrial society. Closely related concepts are the post-industrial society (post-Fordism), post-modern society, computer society and knowledge society, telematic society, society of the spectacle (postmodernism), Information Revolution and Information Age, network society (Manuel Castells) or even liquid modernity.

Definition

There is currently no universally accepted concept of what exactly can be defined as an information society and what should not be included in the term. Most theoreticians agree that the transformation began somewhere between the 1970s, the early-1990s transformations of the Socialist East, and the 2000s, the period in which most of today's internet principles took shape, and that it is fundamentally changing the way societies work. Information technology goes beyond the internet, as the principles of internet design and usage influence other areas, and there are discussions about how big the influence of specific media or specific modes of production really is. Frank Webster notes five major types of criteria that can be used to define an information society: technological, economic, occupational, spatial and cultural. According to Webster, the character of information has transformed the way that we live today; how we conduct ourselves centers around theoretical knowledge and information.

Kasiwulaya and Gomo (Makerere University) suggest that information societies are those that have intensified their use of IT for economic, social, cultural and political transformation. In 2005, governments reaffirmed their dedication to the foundations of the Information Society in the Tunis Commitment and outlined the basis for implementation and follow-up in the Tunis Agenda for the Information Society. In particular, the Tunis Agenda addresses the issues of financing of ICTs for development and Internet governance that could not be resolved in the first phase.

Some people, such as Antonio Negri, characterize the information society as one in which people do immaterial labour. By this, they appear to refer to the production of knowledge or cultural artifacts. One problem with this model is that it ignores the material and essentially industrial basis of the society. However, it does point to a problem for workers, namely how many creative people does this society need to function? For example, it may be that you only need a few star performers, rather than a plethora of non-celebrities, as the work of those performers can be easily distributed, forcing all secondary players to the bottom of the market. It is now common for publishers to promote only their best-selling authors and to try to avoid the rest, even if they still sell steadily. Films are increasingly judged, in terms of distribution, by their first weekend's performance, in many cases cutting out the opportunity for word-of-mouth development.

Michael Buckland characterizes information in society in his book Information and Society. Buckland expresses the idea that information can be interpreted differently from person to person based on that individual's experiences.

Considering that metaphors and technologies of information move forward in a reciprocal relationship, we can describe some societies (notably Japanese society) as information societies because we think of them as such.

The word information may be interpreted in many different ways. According to Buckland in Information and Society, most of the meanings fall into three categories of human knowledge: information as knowledge, information as a process, and information as a thing.

Thus, the Information Society refers to the social importance given to communication and information in today's society, where social, economic and cultural relations are involved.

The Information Society is characterized above all by the process of capturing, processing and communicating information. Thus, in this type of society, the vast majority of activity will be dedicated to the provision of services, and those services will consist of the processing, distribution or use of information.

The growth of computer information in society

Internet users per 100 inhabitants
The amount of data stored globally has increased greatly since the 1980s, and by 2007, 94% of it was stored digitally.

The growth of the amount of technologically mediated information has been quantified in different ways, including society's technological capacity to store information, to communicate information, and to compute information. It is estimated that the world's technological capacity to store information grew from 2.6 (optimally compressed) exabytes in 1986, the informational equivalent of less than one 730-MB CD-ROM per person (539 MB per person), to 295 (optimally compressed) exabytes in 2007. This is the informational equivalent of 60 CD-ROMs per person in 2007 and represents a sustained annual growth rate of some 25%. The world's combined technological capacity to receive information through one-way broadcast networks was the informational equivalent of 174 newspapers per person per day in 2007.

The world's combined effective capacity to exchange information through two-way telecommunication networks was 281 petabytes of (optimally compressed) information in 1986, 471 petabytes in 1993, 2.2 (optimally compressed) exabytes in 2000, and 65 (optimally compressed) exabytes in 2007, which is the informational equivalent of 6 newspapers per person per day. The world's technological capacity to compute information with humanly guided general-purpose computers grew from 3.0 × 10^8 MIPS in 1986 to 6.4 × 10^12 MIPS in 2007, the fastest growth rate of the three, at over 60% per year during those two decades.
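
The growth rates quoted in the two preceding paragraphs are compound annual growth rates, and they can be re-derived from the endpoint figures given in the text (2.6 to 295 optimally compressed exabytes of storage between 1986 and 2007, and 3.0 × 10^8 to 6.4 × 10^12 MIPS of general-purpose computing over the same period), as in the short sketch below.

    # Minimal sketch: compound annual growth rate (CAGR) from the endpoint
    # figures quoted in the text.
    def cagr(start, end, years):
        """Constant annual growth rate that turns `start` into `end` over `years` years."""
        return (end / start) ** (1.0 / years) - 1.0

    years = 2007 - 1986
    print(f"storage   1986-2007: {cagr(2.6, 295.0, years):.1%} per year")    # ~25%
    print(f"computing 1986-2007: {cagr(3.0e8, 6.4e12, years):.1%} per year") # ~61%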

James R. Beniger describes the necessity of information in modern society in the following way: “The need for sharply increased control that resulted from the industrialization of material processes through application of inanimate sources of energy probably accounts for the rapid development of automatic feedback technology in the early industrial period (1740-1830)” (p. 174) “Even with enhanced feedback control, industry could not have developed without the enhanced means to process matter and energy, not only as inputs of the raw materials of production but also as outputs distributed to final consumption.”(p. 175)

Development of the information society model

Colin Clark's sector model of an economy undergoing technological change. In later stages, the Quaternary sector of the economy grows.

One of the first people to develop the concept of the information society was the economist Fritz Machlup. In 1933, Fritz Machlup began studying the effect of patents on research. His work culminated in the study The production and distribution of knowledge in the United States in 1962. This book was widely regarded and was eventually translated into Russian and Japanese. The Japanese have also studied the information society (or jōhōka shakai, 情報化社会).

The issue of technologies and their role in contemporary society has been discussed in the scientific literature using a range of labels and concepts. This section introduces some of them. Ideas of a knowledge or information economy, post-industrial society, postmodern society, network society, the information revolution, informational capitalism, network capitalism, and the like have been debated over the last several decades.

Fritz Machlup (1962) introduced the concept of the knowledge industry. He began studying the effects of patents on research before distinguishing five sectors of the knowledge industry: education, research and development, mass media, information technologies, and information services. Based on this categorization he calculated that in 1959 29% of the GNP in the USA had been produced in knowledge industries.

Economic transition

Peter Drucker has argued that there is a transition from an economy based on material goods to one based on knowledge. Marc Porat distinguishes a primary (information goods and services that are directly used in the production, distribution or processing of information) and a secondary sector (information services produced for internal consumption by government and non-information firms) of the information economy.

Porat uses the total value added by the primary and secondary information sector to the GNP as an indicator for the information economy. The OECD has employed Porat's definition for calculating the share of the information economy in the total economy (e.g. OECD 1981, 1986). Based on such indicators, the information society has been defined as a society where more than half of the GNP is produced and more than half of the employees are active in the information economy.

For Daniel Bell the number of employees producing services and information is an indicator for the informational character of a society. "A post-industrial society is based on services. (…) What counts is not raw muscle power, or energy, but information. (…) A post industrial society is one in which the majority of those employed are not involved in the production of tangible goods".

Alain Touraine already spoke in 1971 of the post-industrial society. "The passage to postindustrial society takes place when investment results in the production of symbolic goods that modify values, needs, representations, far more than in the production of material goods or even of 'services'. Industrial society had transformed the means of production: post-industrial society changes the ends of production, that is, culture. (…) The decisive point here is that in postindustrial society all of the economic system is the object of intervention of society upon itself. That is why we can call it the programmed society, because this phrase captures its capacity to create models of management, production, organization, distribution, and consumption, so that such a society appears, at all its functional levels, as the product of an action exercised by the society itself, and not as the outcome of natural laws or cultural specificities" (Touraine 1988: 104). In the programmed society, the area of cultural reproduction, including aspects such as information, consumption, health, research and education, would also be industrialized. That modern society is increasing its capacity to act upon itself means, for Touraine, that society reinvests ever larger parts of production and so produces and transforms itself. This makes Touraine's concept substantially different from that of Daniel Bell, who focused on the capacity to process and generate information for the efficient functioning of society.

Jean-François Lyotard has argued that "knowledge has become the principle [sic] force of production over the last few decades". Knowledge would be transformed into a commodity. Lyotard says that postindustrial society makes knowledge accessible to the layman because knowledge and information technologies would diffuse into society and break up Grand Narratives of centralized structures and groups. Lyotard denotes these changing circumstances as postmodern condition or postmodern society.

Similarly to Bell, Peter Otto and Philipp Sonntag (1985) say that an information society is a society where the majority of employees work in information jobs, i.e. they have to deal more with information, signals, symbols, and images than with energy and matter. Radovan Richta (1977) argues that society has been transformed into a scientific civilization based on services, education, and creative activities. This transformation would be the result of a scientific-technological transformation based on technological progress and the increasing importance of computer technology. Science and technology would become immediate forces of production (Aristovnik 2014: 55).

Nico Stehr (1994, 2002a, b) says that in the knowledge society a majority of jobs involves working with knowledge. "Contemporary society may be described as a knowledge society based on the extensive penetration of all its spheres of life and institutions by scientific and technological knowledge" (Stehr 2002b: 18). For Stehr, knowledge is a capacity for social action. Science would become an immediate productive force; knowledge would no longer be primarily embodied in machines, but already-appropriated nature that represents knowledge would be rearranged according to certain designs and programs (Ibid.: 41-46). For Stehr, the economy of a knowledge society is driven largely not by material inputs but by symbolic or knowledge-based inputs (Ibid.: 67); there would be a large number of professions that involve working with knowledge, and a declining number of jobs that demand low cognitive skills, as well as fewer jobs in manufacturing (Stehr 2002a).

Also Alvin Toffler argues that knowledge is the central resource in the economy of the information society: "In a Third Wave economy, the central resource – a single word broadly encompassing data, information, images, symbols, culture, ideology, and values – is actionable knowledge" (Dyson/Gilder/Keyworth/Toffler 1994).

At the end of the twentieth century, the concept of the network society gained importance in information society theory. For Manuel Castells, network logic is, besides information, pervasiveness, flexibility, and convergence, a central feature of the information technology paradigm (2000a: 69ff). "One of the key features of informational society is the networking logic of its basic structure, which explains the use of the concept of 'network society'" (Castells 2000: 21). "As an historical trend, dominant functions and processes in the Information Age are increasingly organized around networks. Networks constitute the new social morphology of our societies, and the diffusion of networking logic substantially modifies the operation and outcomes in processes of production, experience, power, and culture" (Castells 2000: 500). For Castells, the network society is the result of informationalism, a new technological paradigm.

Jan Van Dijk (2006) defines the network society as a "social formation with an infrastructure of social and media networks enabling its prime mode of organization at all levels (individual, group/organizational and societal). Increasingly, these networks link all units or parts of this formation (individuals, groups and organizations)" (Van Dijk 2006: 20). For Van Dijk, networks have become the nervous system of society. Whereas Castells links the concept of the network society to capitalist transformation, Van Dijk sees it as the logical result of the increasing widening and thickening of networks in nature and society. Darin Barney uses the term to characterize societies that exhibit two fundamental characteristics: "The first is the presence in those societies of sophisticated – almost exclusively digital – technologies of networked communication and information management/distribution, technologies which form the basic infrastructure mediating an increasing array of social, political and economic practices. (…) The second, arguably more intriguing, characteristic of network societies is the reproduction and institutionalization throughout (and between) those societies of networks as the basic form of human organization and relationship across a wide range of social, political and economic configurations and associations".

Critiques

The major critique of concepts such as information society, postmodern society, knowledge society, network society, postindustrial society, etc., mainly voiced by critical scholars, is that they create the impression that we have entered a completely new type of society. "If there is just more information then it is hard to understand why anyone should suggest that we have before us something radically new" (Webster 2002a: 259). Critics such as Frank Webster argue that these approaches stress discontinuity, as if contemporary society had nothing in common with society as it was 100 or 150 years ago. Such assumptions would have an ideological character because they would fit with the view that we can do nothing about change and have to adapt to existing political realities (Webster 2002b: 267).

These critics argue that contemporary society is first of all still a capitalist society oriented towards accumulating economic, political, and cultural capital. They acknowledge that information society theories stress some important new qualities of society (notably globalization and informatization), but charge that they fail to show that these are attributes of overall capitalist structures. Critics such as Webster insist on the continuities that characterise change. In this way Webster distinguishes between different epochs of capitalism: laissez-faire capitalism of the 19th century, corporate capitalism in the 20th century, and informational capitalism for the 21st century (Webster 2006).

For describing contemporary society based on a new dialectic of continuity and discontinuity, other critical scholars have suggested several terms like:

  • transnational network capitalism, transnational informational capitalism (Christian Fuchs 2008, 2007): "Computer networks are the technological foundation that has allowed the emergence of global network capitalism, that is, regimes of accumulation, regulation, and discipline that are helping to increasingly base the accumulation of economic, political, and cultural capital on transnational network organizations that make use of cyberspace and other new technologies for global coordination and communication. [...] The need to find new strategies for executing corporate and political domination has resulted in a restructuration of capitalism that is characterized by the emergence of transnational, networked spaces in the economic, political, and cultural system and has been mediated by cyberspace as a tool of global coordination and communication. Economic, political, and cultural space have been restructured; they have become more fluid and dynamic, have enlarged their borders to a transnational scale, and handle the inclusion and exclusion of nodes in flexible ways. These networks are complex due to the high number of nodes (individuals, enterprises, teams, political actors, etc.) that can be involved and the high speed at which a high number of resources is produced and transported within them. But global network capitalism is based on structural inequalities; it is made up of segmented spaces in which central hubs (transnational corporations, certain political actors, regions, countries, Western lifestyles, and worldviews) centralize the production, control, and flows of economic, political, and cultural capital (property, power, definition capacities). This segmentation is an expression of the overall competitive character of contemporary society." (Fuchs 2008: 110+119).
  • digital capitalism (Schiller 2000, cf. also Peter Glotz): "networks are directly generalizing the social and cultural range of the capitalist economy as never before" (Schiller 2000: xiv)
  • virtual capitalism: the "combination of marketing and the new information technology will enable certain firms to obtain higher profit margins and larger market shares, and will thereby promote greater concentration and centralization of capital" (Dawson/John Bellamy Foster 1998: 63sq),
  • high-tech capitalism or informatic capitalism (Fitzpatrick 2002) – to focus on the computer as a guiding technology that has transformed the productive forces of capitalism and has enabled a globalized economy.

Other scholars prefer to speak of information capitalism (Morris-Suzuki 1997) or informational capitalism (Manuel Castells 2000, Christian Fuchs 2005, Schmiede 2006a, b). Manuel Castells sees informationalism as a new technological paradigm (he speaks of a mode of development) characterized by "information generation, processing, and transmission" that have become "the fundamental sources of productivity and power" (Castells 2000: 21). The "most decisive historical factor accelerating, channelling and shaping the information technology paradigm, and inducing its associated social forms, was/is the process of capitalist restructuring undertaken since the 1980s, so that the new techno-economic system can be adequately characterized as informational capitalism" (Castells 2000: 18). Castells has added to theories of the information society the idea that in contemporary society dominant functions and processes are increasingly organized around networks that constitute the new social morphology of society (Castells 2000: 500). Nicholas Garnham is critical of Castells and argues that the latter's account is technologically determinist because Castells points out that his approach is based on a dialectic of technology and society in which technology embodies society and society uses technology (Castells 2000: 5sqq). But Castells also makes clear that the rise of a new "mode of development" is shaped by capitalist production, i.e. by society, which implies that technology isn't the only driving force of society.

Antonio Negri and Michael Hardt argue that contemporary society is an Empire that is characterized by a singular global logic of capitalist domination that is based on immaterial labour. With the concept of immaterial labour Negri and Hardt introduce ideas of information society discourse into their Marxist account of contemporary capitalism. Immaterial labour would be labour "that creates immaterial products, such as knowledge, information, communication, a relationship, or an emotional response" (Hardt/Negri 2005: 108; cf. also 2000: 280-303), or services, cultural products, knowledge (Hardt/Negri 2000: 290). There would be two forms: intellectual labour that produces ideas, symbols, codes, texts, linguistic figures, images, etc.; and affective labour that produces and manipulates affects such as a feeling of ease, well-being, satisfaction, excitement, passion, joy, sadness, etc. (Ibid.).

Overall, neo-Marxist accounts of the information society have in common that they stress that knowledge, information technologies, and computer networks have played a role in the restructuration and globalization of capitalism and the emergence of a flexible regime of accumulation (David Harvey 1989). They warn that new technologies are embedded into societal antagonisms that cause structural unemployment, rising poverty, social exclusion, the deregulation of the welfare state and of labour rights, the lowering of wages, welfare, etc.

Concepts such as knowledge society, information society, network society, informational capitalism, postindustrial society, transnational network capitalism and postmodern society show that there is a vivid discussion in contemporary sociology on the character of contemporary society and the role that technologies, information, communication, and co-operation play in it. Information society theory discusses the role of information and information technology in society, the question of which key concepts should be used for characterizing contemporary society, and how to define such concepts. It has become a specific branch of contemporary sociology.

Second and third nature

The information society is built around the means of sending and receiving information from one place to another. As technology has advanced, so too have the ways people share information with each other.

"Second nature" refers a group of experiences that get made over by culture. They then get remade into something else that can then take on a new meaning. As a society we transform this process so it becomes something natural to us, i.e. second nature. So, by following a particular pattern created by culture we are able to recognise how we use and move information in different ways. From sharing information via different time zones (such as talking online) to information ending up in a different location (sending a letter overseas) this has all become a habitual process that we as a society take for granted.

However, through the process of sharing information, vectors have enabled us to spread information even further. Through the use of these vectors, information is able to move and then separate from the initial things that enabled it to move. From here, something called "third nature" has developed. An extension of second nature, third nature is in control of second nature. It expands on what second nature is limited by: it has the ability to mould information in new and different ways. Third nature is thus able to "speed up, proliferate, divide, mutate, and beam in on us from elsewhere". It aims to create a balance between the boundaries of space and time (see second nature). This can be seen in the telegraph, the first successful technology that could send and receive information faster than a human being could carry it. As a result, different vectors of people have the ability not only to shape culture but to create new possibilities that will ultimately shape society.

Therefore, through the use of second nature and third nature society is able to use and explore new vectors of possibility where information can be moulded to create new forms of interaction.

Sociological uses

Estonia, a small Baltic country in northern Europe, is one of the most advanced digital societies.

In sociology, informational society refers to a post-modern type of society. Theoreticians like Ulrich Beck, Anthony Giddens and Manuel Castells argue that since the 1970s a transformation from industrial society to informational society has happened on a global scale.

As steam power was the technology standing behind industrial society, so information technology is seen as the catalyst for the changes in work organisation, societal structure and politics occurring in the late 20th century.

In the book Future Shock, Alvin Toffler used the phrase super-industrial society to describe this type of society. Other writers and thinkers have used terms like "post-industrial society" and "post-modern industrial society" with a similar meaning.

Related terms

A number of terms in current use emphasize related but different aspects of the emerging global economic order. The Information Society is intended to be the most encompassing, in that an economy is a subset of a society. The Information Age is somewhat limiting, in that it refers to a 30-year period between the widespread use of computers and the knowledge economy, rather than an emerging economic order. The knowledge era is about the nature of the content, not the socioeconomic processes by which it will be traded. The computer revolution and knowledge revolution refer to specific revolutionary transitions, rather than the end state towards which we are evolving. The Information Revolution relates to the well-known terms agricultural revolution and industrial revolution.

  • The information economy and the knowledge economy emphasize the content or intellectual property that is being traded through an information market or knowledge market, respectively. Electronic commerce and electronic business emphasize the nature of transactions and running a business, respectively, using the Internet and World-Wide Web. The digital economy focuses on trading bits in cyberspace rather than atoms in physical space. The network economy stresses that businesses will work collectively in webs or as part of business ecosystems rather than as stand-alone units. Social networking refers to the process of collaboration on massive, global scales. The internet economy focuses on the nature of markets that are enabled by the Internet.
  • Knowledge services and knowledge value put content into an economic context. Knowledge services integrate knowledge management within a knowledge organization that trades in a knowledge market. In order for individuals to obtain more knowledge, surveillance is sometimes used; this relates to the use of drones as tools to gather knowledge about other individuals. Although seemingly synonymous, each term conveys more than nuances or slightly different views of the same thing. Each term represents one attribute of the likely nature of economic activity in the emerging post-industrial society. Alternatively, the new economic order will incorporate all of the above plus other attributes that have not yet fully emerged.
  • In connection with the development of the information society, information pollution appeared, which in turn gave rise to information ecology, associated with information hygiene.

Intellectual property considerations

One of the central paradoxes of the information society is that it makes information easily reproducible, leading to a variety of freedom/control problems relating to intellectual property. Essentially, business and capital, whose role becomes that of producing and selling information and knowledge, seem to require control over this new resource so that it can effectively be managed and sold as the basis of the information economy. However, such control can prove to be both technically and socially problematic: technically because copy protection is often easily circumvented, and socially because the users and citizens of the information society may prove unwilling to accept such absolute commodification of the facts and information that compose their environment.

Responses to this concern range from the Digital Millennium Copyright Act in the United States (and similar legislation elsewhere), which makes copy protection (see DRM) circumvention illegal, to the free software, open source and copyleft movements, which seek to encourage and disseminate the "freedom" of various information products (traditionally both "free" as in gratis, i.e. free of cost, and "free" as in liberty, i.e. the freedom to use, explore and share).

Caveat: "information society" is often used by politicians to mean something like "we all use the internet now"; the sociological term information society (or informational society) has deeper implications about changes in societal structure. Because we lack political control of intellectual property, we also lack a concrete map of the issues, an analysis of costs and benefits, and functioning political groups unified by common interests that represent the different opinions on this diverse situation that are prominent in the information society.
