
Wednesday, June 15, 2022

Gravity of Earth

From Wikipedia, the free encyclopedia
 
Earth's gravity measured by NASA GRACE mission, showing deviations from the theoretical gravity of an idealized, smooth Earth, the so-called Earth ellipsoid. Red shows the areas where gravity is stronger than the smooth, standard value, and blue reveals areas where gravity is weaker.

The gravity of Earth, denoted by g, is the net acceleration that is imparted to objects due to the combined effect of gravitation (from mass distribution within Earth) and the centrifugal force (from the Earth's rotation). It is a vector quantity, whose direction coincides with that of a plumb bob and whose strength or magnitude is given by its norm, g.

In SI units this acceleration is expressed in metres per second squared (in symbols, m/s2 or m·s−2) or equivalently in newtons per kilogram (N/kg or N·kg−1). Near Earth's surface, the acceleration due to gravity is approximately 9.81 m/s2 (32.2 ft/s2), which means that, ignoring the effects of air resistance, the speed of an object falling freely will increase by about 9.81 metres per second (32.2 ft/s) every second. This quantity is sometimes referred to informally as little g (in contrast, the gravitational constant G is referred to as big G).
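As a quick illustration of that rate of change, here is a minimal Python sketch (added to this post, not part of the article) that tabulates the speed of a freely falling object over its first few seconds, ignoring air resistance:

    # Speed of a freely falling object after each of the first few seconds,
    # ignoring air resistance, using the approximate near-surface value of g.
    g = 9.81  # m/s^2

    for t in range(1, 6):   # seconds after release
        speed = g * t       # v = g*t for constant acceleration from rest
        print(f"after {t} s: falling at {speed:.2f} m/s")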

The precise strength of Earth's gravity varies depending on location. The nominal "average" value at Earth's surface, known as standard gravity, is, by definition, 9.80665 m/s2 (32.1740 ft/s2). This quantity is denoted variously as gn, ge (though this sometimes means the normal equatorial value on Earth, 9.78033 m/s2 (32.0877 ft/s2)), g0, gee, or simply g (which is also used for the variable local value).

The weight of an object on Earth's surface is the downwards force on that object, given by Newton's second law of motion, or F = ma (force = mass × acceleration). Gravitational acceleration contributes to the total gravity acceleration, but other factors, such as the rotation of Earth, also contribute, and, therefore, affect the weight of the object. Gravity does not normally include the gravitational pull of the Moon and Sun, which are accounted for in terms of tidal effects.

Variation in magnitude

A non-rotating perfect sphere of uniform mass density, or whose density varies solely with distance from the centre (spherical symmetry), would produce a gravitational field of uniform magnitude at all points on its surface. The Earth is rotating and is also not spherically symmetric; rather, it is slightly flatter at the poles while bulging at the Equator: an oblate spheroid. There are consequently slight deviations in the magnitude of gravity across its surface.

Gravity on the Earth's surface varies by around 0.7%, from 9.7639 m/s2 on the Nevado Huascarán mountain in Peru to 9.8337 m/s2 at the surface of the Arctic Ocean. In large cities, it ranges from 9.7806 m/s2 in Kuala Lumpur, Mexico City, and Singapore to 9.825 m/s2 in Oslo and Helsinki.

Conventional value

In 1901 the third General Conference on Weights and Measures defined a standard gravitational acceleration for the surface of the Earth: gn = 9.80665 m/s2. It was based on measurements made at the Pavillon de Breteuil near Paris in 1888, with a theoretical correction applied in order to convert to a latitude of 45° at sea level. This definition is thus not a value of any particular place or a carefully worked-out average, but an agreement for a value to use if a better actual local value is not known or not important. It is also used to define the units kilogram-force and pound-force.

Calculating the gravity at Earth's surface using the average radius of Earth (6,371 kilometres (3,959 mi)), the experimentally determined value of the gravitational constant, and the Earth mass of 5.9722 × 10²⁴ kg gives an acceleration of 9.8203 m/s2, slightly greater than the standard gravity of 9.80665 m/s2. The value of standard gravity corresponds to the gravity on Earth at a radius of 6,375.4 kilometres (3,961.5 mi).
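For illustration, the following short Python sketch (an addition to this post, not part of the article) reproduces that calculation with the figures quoted above; the value of G used here is the commonly quoted CODATA figure:

    # Gravitational acceleration at Earth's average radius, g = G*M / R^2,
    # using the figures quoted in the text above.
    G = 6.6743e-11   # gravitational constant, m^3 kg^-1 s^-2
    M = 5.9722e24    # mass of the Earth, kg
    R = 6.371e6      # average radius of the Earth, m

    g = G * M / R**2
    print(f"g at the average radius: {g:.4f} m/s^2")   # about 9.82 m/s^2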

Latitude

The differences of Earth's gravity around the Antarctic continent.

The surface of the Earth is rotating, so it is not an inertial frame of reference. At latitudes nearer the Equator, the outward centrifugal force produced by Earth's rotation is larger than at polar latitudes. This counteracts the Earth's gravity to a small degree – up to a maximum of 0.3% at the Equator – and reduces the apparent downward acceleration of falling objects.

The second major reason for the difference in gravity at different latitudes is that the Earth's equatorial bulge (itself also caused by centrifugal force from rotation) causes objects at the Equator to be farther from the planet's center than objects at the poles. Because the force due to gravitational attraction between two bodies (the Earth and the object being weighed) varies inversely with the square of the distance between them, an object at the Equator experiences a weaker gravitational pull than an object at the poles.

In combination, the equatorial bulge and the effects of the surface centrifugal force due to rotation mean that sea-level gravity increases from about 9.780 m/s2 at the Equator to about 9.832 m/s2 at the poles, so an object will weigh approximately 0.5% more at the poles than at the Equator.
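That 0.5% figure can be checked with a couple of lines of Python (an added illustration using the sea-level values quoted above):

    # Relative difference in weight between the poles and the Equator.
    g_equator = 9.780   # m/s^2, sea-level gravity at the Equator
    g_pole = 9.832      # m/s^2, sea-level gravity at the poles

    increase = (g_pole - g_equator) / g_equator * 100
    print(f"an object weighs about {increase:.2f}% more at the poles")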

Altitude

The graph shows the variation in gravity relative to the height of an object above the surface

Gravity decreases with altitude as one rises above the Earth's surface because greater altitude means greater distance from the Earth's centre. All other things being equal, an increase in altitude from sea level to 9,000 metres (30,000 ft) causes a weight decrease of about 0.29%. (An additional factor affecting apparent weight is the decrease in air density at altitude, which lessens an object's buoyancy. This would increase a person's apparent weight at an altitude of 9,000 metres by about 0.08%.)

It is a common misconception that astronauts in orbit are weightless because they have flown high enough to escape the Earth's gravity. In fact, at an altitude of 400 kilometres (250 mi), equivalent to a typical orbit of the ISS, gravity is still nearly 90% as strong as at the Earth's surface. Weightlessness actually occurs because orbiting objects are in free-fall.

The effect of ground elevation depends on the density of the ground (see Slab correction section). A person flying at 9,100 m (30,000 ft) above sea level over mountains will feel more gravity than someone at the same elevation but over the sea. However, a person standing on the Earth's surface feels less gravity when the elevation is higher.

The following formula approximates the Earth's gravity variation with altitude:

gh = g0 [Re / (Re + h)]²

where

  • gh is the gravitational acceleration at height h above sea level;
  • Re is the Earth's mean radius;
  • g0 is the standard gravitational acceleration.

The formula treats the Earth as a perfect sphere with a radially symmetric distribution of mass; a more accurate mathematical treatment is discussed below.
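To connect this approximation with the figures mentioned above (a decrease of roughly 0.3% at 9,000 metres, and gravity still close to 90% of its surface value at a typical ISS altitude), here is a small Python sketch added for this post; the helper function g_at_height is an illustrative name, not something from the article:

    # Free-air approximation: g(h) = g0 * (Re / (Re + h))^2
    g0 = 9.80665   # standard gravity at sea level, m/s^2
    Re = 6.371e6   # mean radius of the Earth, m

    def g_at_height(h):
        """Approximate gravitational acceleration at height h (metres) above sea level."""
        return g0 * (Re / (Re + h)) ** 2

    for h in (9_000, 400_000):   # airliner cruising altitude and a typical ISS orbit
        ratio = g_at_height(h) / g0
        print(f"h = {h:>7} m: g = {g_at_height(h):.4f} m/s^2 ({ratio:.1%} of sea level)")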

Depth

Earth's radial density distribution according to the Preliminary Reference Earth Model (PREM).
 
Earth's gravity according to the Preliminary Reference Earth Model (PREM). Two models for a spherically symmetric Earth are included for comparison. The dark green straight line is for a constant density equal to the Earth's average density. The light green curved line is for a density that decreases linearly from center to surface. The density at the center is the same as in the PREM, but the surface density is chosen so that the mass of the sphere equals the mass of the real Earth.
 

An approximate value for gravity at a distance r from the center of the Earth can be obtained by assuming that the Earth's density is spherically symmetric. The gravity depends only on the mass inside the sphere of radius r. All the contributions from outside cancel out as a consequence of the inverse-square law of gravitation. Another consequence is that the gravity is the same as if all the mass were concentrated at the center. Thus, the gravitational acceleration at this radius is

g(r) = G M(r) / r²

where G is the gravitational constant and M(r) is the total mass enclosed within radius r. If the Earth had a constant density ρ, the mass would be M(r) = (4/3)πρr³ and the dependence of gravity on depth would be

g(r) = (4/3)πGρr

The gravity g′ at depth d is given by g′ = g(1 − d/R), where g is the acceleration due to gravity on the surface of the Earth, d is depth and R is the radius of the Earth. If the density decreased linearly with increasing radius from a density ρ0 at the center to ρ1 at the surface, then ρ(r) = ρ0 − (ρ0 − ρ1) r / R, and the dependence would be

g(r) = (4/3)πGρ0 r − πG(ρ0 − ρ1) r²/R

The actual depth dependence of density and gravity, inferred from seismic travel times (see Adams–Williamson equation), is shown in the graphs above.
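The two simplified profiles described above (constant density, and density decreasing linearly from the centre to the surface) can be compared with a short Python sketch. The density figures below are illustrative assumptions chosen only to make the curves concrete; they are not values taken from the article:

    from math import pi

    G = 6.6743e-11        # gravitational constant, m^3 kg^-1 s^-2
    Re = 6.371e6          # radius of the Earth, m
    rho_avg = 5515.0      # kg/m^3, roughly the Earth's mean density
    rho_center = 13000.0  # kg/m^3, assumed central density for the linear model

    # Surface density chosen so the linear profile encloses the same total mass
    # as the uniform sphere: rho_avg = (rho_center + 3*rho_surface) / 4
    rho_surface = (4 * rho_avg - rho_center) / 3

    def g_uniform(r):
        """g(r) = (4/3)*pi*G*rho*r for a sphere of constant density."""
        return 4 / 3 * pi * G * rho_avg * r

    def g_linear(r):
        """g(r) for density decreasing linearly from rho_center to rho_surface."""
        return 4 / 3 * pi * G * rho_center * r - pi * G * (rho_center - rho_surface) * r**2 / Re

    for frac in (0.25, 0.5, 0.75, 1.0):
        r = frac * Re
        print(f"r = {frac:.2f} Re: uniform {g_uniform(r):.2f} m/s^2, linear {g_linear(r):.2f} m/s^2")

Both models give the same surface value by construction, since the surface density was chosen so that the linearly decreasing profile conserves the total mass.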

Local topography and geology

Local differences in topography (such as the presence of mountains), geology (such as the density of rocks in the vicinity), and deeper tectonic structure cause local and regional differences in the Earth's gravitational field, known as gravitational anomalies. Some of these anomalies can be very extensive, resulting in bulges in sea level, and throwing pendulum clocks out of synchronisation.

The study of these anomalies forms the basis of gravitational geophysics. The fluctuations are measured with highly sensitive gravimeters, the effect of topography and other known factors is subtracted, and from the resulting data conclusions are drawn. Such techniques are now used by prospectors to find oil and mineral deposits. Denser rocks (often containing mineral ores) cause higher than normal local gravitational fields on the Earth's surface. Less dense sedimentary rocks cause the opposite.

A map of recent volcanic activity and ridge spreading. The areas where NASA GRACE measured gravity to be stronger than the theoretical gravity have a strong correlation with the positions of the volcanic activity and ridge spreading.

There is a strong correlation between the gravity anomaly map of the Earth from NASA GRACE and the positions of recent volcanic activity and ridge spreading; these regions exhibit a stronger gravitation than theoretical predictions.

Other factors

In air or water, objects experience a supporting buoyancy force which reduces the apparent strength of gravity (as measured by an object's weight). The magnitude of the effect depends on the air density (and hence air pressure) or the water density respectively; see Apparent weight for details.

The gravitational effects of the Moon and the Sun (also the cause of the tides) have a very small effect on the apparent strength of Earth's gravity, depending on their relative positions; typical variations are 2 µm/s2 (0.2 mGal) over the course of a day.

Direction

A plumb bob determines the local vertical direction

Gravity acceleration is a vector quantity, with direction in addition to magnitude. In a spherically symmetric Earth, gravity would point directly towards the sphere's centre. Because the Earth's figure is slightly flattened, there are significant deviations in the direction of gravity: essentially the difference between geodetic latitude and geocentric latitude. Smaller deviations, called vertical deflection, are caused by local mass anomalies, such as mountains.

Comparative values worldwide

Tools exist for calculating the strength of gravity at various cities around the world. The effect of latitude can be clearly seen with gravity in high-latitude cities: Anchorage (9.826 m/s2), Helsinki (9.825 m/s2), being about 0.5% greater than that in cities near the equator: Kuala Lumpur (9.776 m/s2). The effect of altitude can be seen in Mexico City (9.776 m/s2; altitude 2,240 metres (7,350 ft)), and by comparing Denver (9.798 m/s2; 1,616 metres (5,302 ft)) with Washington, D.C. (9.801 m/s2; 30 metres (98 ft)), both of which are near 39° N. Measured values can be obtained from Physical and Mathematical Tables by T.M. Yarwood and F. Castle, Macmillan, revised edition 1970.

Mathematical models

If the terrain is at sea level, we can estimate, for the Geodetic Reference System 1980, g(φ), the acceleration at latitude φ:

g(φ) = 9.780327 (1 + 0.0053024 sin²φ − 0.0000058 sin²(2φ)) m/s2

This is the International Gravity Formula 1967, the 1967 Geodetic Reference System Formula, Helmert's equation or Clairaut's formula.

An alternative formula for g as a function of latitude is the WGS (World Geodetic System) 84 Ellipsoidal Gravity Formula:

g(φ) = ge (1 + k sin²φ) / √(1 − e² sin²φ)

where

  • a, b are the equatorial and polar semi-axes, respectively;
  • e² = 1 − b²/a² is the spheroid's eccentricity, squared;
  • ge, gp are the defined gravity at the equator and poles, respectively;
  • k = (b gp − a ge) / (a ge) (formula constant);

then, where ge = 9.7803253359 m/s2 and gp = 9.8321849378 m/s2,

g(φ) = 9.7803253359 (1 + 0.00193185265241 sin²φ) / √(1 − 0.00669437999014 sin²φ) m/s2

where the semi-axes of the earth are:

a = 6,378,137.0 m and b = 6,356,752.314245 m

The difference between the WGS-84 formula and Helmert's equation is less than 0.68 μm·s−2.
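As an added illustration (not part of the article), the following Python sketch evaluates both formulas at a few latitudes using the constants quoted above, so the two can be compared directly:

    from math import sin, radians, sqrt

    def gravity_1967(lat_deg):
        """International Gravity Formula 1967, in m/s^2."""
        s = sin(radians(lat_deg))
        s2 = sin(radians(2 * lat_deg))
        return 9.780327 * (1 + 0.0053024 * s**2 - 0.0000058 * s2**2)

    def gravity_wgs84(lat_deg):
        """WGS 84 ellipsoidal (Somigliana) gravity formula, in m/s^2."""
        s2 = sin(radians(lat_deg)) ** 2
        ge = 9.7803253359        # defined gravity at the equator, m/s^2
        k = 0.00193185265241     # formula constant
        e2 = 0.00669437999014    # eccentricity of the ellipsoid, squared
        return ge * (1 + k * s2) / sqrt(1 - e2 * s2)

    for lat in (0, 30, 45, 60, 90):
        print(f"latitude {lat:2d} deg: 1967 formula {gravity_1967(lat):.6f} m/s^2, "
              f"WGS-84 {gravity_wgs84(lat):.6f} m/s^2")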

Further reductions are applied to obtain gravity anomalies (see: Gravity anomaly#Computation).

Estimating g from the law of universal gravitation

From the law of universal gravitation, the force on a body acted upon by Earth's gravitational force is given by

F = G m1 m2 / r²

where r is the distance between the centre of the Earth and the body (see below), and here we take m1 to be the mass of the Earth and m2 to be the mass of the body.

Additionally, Newton's second law, F = ma, where m is mass and a is acceleration, here tells us that

F = m2 g

Comparing the two formulas it is seen that:

g = G m1 / r²

So, to find the acceleration due to gravity at sea level, substitute the values of the gravitational constant, G, the Earth's mass (in kilograms), m1, and the Earth's radius (in metres), r, to obtain the value of g:

g = G m1 / r² = (6.674 × 10⁻¹¹ m³·kg⁻¹·s⁻²)(5.9722 × 10²⁴ kg) / (6.371 × 10⁶ m)² ≈ 9.82 m/s²

This formula only works because of the mathematical fact that the gravity of a uniform spherical body, as measured on or above its surface, is the same as if all its mass were concentrated at a point at its centre. This is what allows us to use the Earth's radius for r.

The value obtained agrees approximately with the measured value of g. The difference may be attributed to several factors, mentioned above under "Variation in magnitude":

  • The Earth is not homogeneous
  • The Earth is not a perfect sphere, and an average value must be used for its radius
  • This calculated value of g only includes true gravity. It does not include the reduction of the constraint force that we perceive as a reduction of gravity due to the rotation of the Earth, part of gravity being counteracted by the centrifugal force.

There are significant uncertainties in the values of r and m1 as used in this calculation, and the value of G is also rather difficult to measure precisely.

If G, g and r are known then a reverse calculation will give an estimate of the mass of the Earth. This method was used by Henry Cavendish.
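A minimal Python sketch of that reverse calculation (an illustration added here, using rounded modern values rather than Cavendish's own measurements):

    # Estimate the mass of the Earth from g = G*M / r^2, rearranged as M = g*r^2 / G.
    G = 6.6743e-11   # gravitational constant, m^3 kg^-1 s^-2
    g = 9.81         # measured gravitational acceleration near the surface, m/s^2
    r = 6.371e6      # radius of the Earth, m

    M = g * r**2 / G
    print(f"estimated mass of the Earth: {M:.3e} kg")   # on the order of 6e24 kg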

Measurement

The measurement of Earth's gravity is called gravimetry.

Satellite measurements

Gravity anomaly map from GRACE

Currently, the static and time-variable parameters of Earth's gravity field are determined using modern satellite missions, such as GOCE, CHAMP, Swarm, GRACE and GRACE-FO.[22][23] The lowest-degree parameters, including the Earth's oblateness and geocenter motion, are best determined from satellite laser ranging.

Large-scale gravity anomalies can be detected from space, as a by-product of satellite gravity missions, e.g., GOCE. These satellite missions aim at the recovery of a detailed gravity field model of the Earth, typically presented in the form of a spherical-harmonic expansion of the Earth's gravitational potential, but alternative presentations, such as maps of geoid undulations or gravity anomalies, are also produced.

The Gravity Recovery and Climate Experiment (GRACE) consists of two satellites that can detect gravitational changes across the Earth. These changes can also be presented as temporal variations of gravity anomalies. The Gravity Recovery and Interior Laboratory (GRAIL) mission likewise consisted of two spacecraft, which orbited the Moon for about a year before being deliberately deorbited onto the lunar surface in December 2012.

Animal migration

From Wikipedia, the free encyclopedia

Mexican free-tailed bats on their long aerial migration

Animal migration is the relatively long-distance movement of individual animals, usually on a seasonal basis. It is the most common form of migration in ecology. It is found in all major animal groups, including birds, mammals, fish, reptiles, amphibians, insects, and crustaceans. The cause of migration may be local climate, local availability of food, the season of the year, or mating.

To be counted as a true migration, and not just a local dispersal or irruption, the movement of the animals should be an annual or seasonal occurrence, or a major habitat change as part of their life. An annual event could include Northern Hemisphere birds migrating south for the winter, or wildebeest migrating annually for seasonal grazing. A major habitat change could include young Atlantic salmon or sea lamprey leaving the river of their birth when they have reached a few inches in size. Some traditional forms of human migration fit this pattern.

Migrations can be studied using traditional identification tags such as bird rings, or tracked directly with electronic tracking devices. Before animal migration was understood, folklore explanations were formulated for the appearance and disappearance of some species, such as that barnacle geese grew from goose barnacles.

Overview

Concepts

Migration can take very different forms in different species, and has a variety of causes. As such, there is no simple accepted definition of migration. One of the most commonly used definitions, proposed by the zoologist J. S. Kennedy, is:

Migratory behavior is persistent and straightened-out movement effected by the animal's own locomotory exertions or by its active embarkation on a vehicle. It depends on some temporary inhibition of station-keeping responses, but promotes their eventual disinhibition and recurrence.

Migration encompasses four related concepts: persistent straight movement; relocation of an individual on a greater scale (in both space and time) than its normal daily activities; seasonal to-and-fro movement of a population between two areas; and movement leading to the redistribution of individuals within a population. Migration can be either obligate, meaning individuals must migrate, or facultative, meaning individuals can "choose" to migrate or not. Within a migratory species or even within a single population, often not all individuals migrate. Complete migration is when all individuals migrate, partial migration is when some individuals migrate while others do not, and differential migration is when the difference between migratory and non-migratory individuals is based on discernible characteristics like age or sex. Irregular (non-cyclical) migrations such as irruptions can occur under pressure of famine, overpopulation of a locality, or some more obscure influence.

Seasonal

Seasonal migration is the movement of various species from one habitat to another during the year. Resource availability changes depending on seasonal fluctuations, which influence migration patterns. Some species such as Pacific salmon migrate to reproduce; every year, they swim upstream to mate and then return to the ocean. Temperature is a driving factor of migration that is dependent on the time of year. Many species, especially birds, migrate to warmer locations during the winter to escape poor environmental conditions.

Circadian

Circadian migration is migration in which birds use the circadian rhythm (CR) to regulate migration in both fall and spring. In circadian migration, clocks of both circadian (daily) and circannual (annual) patterns are used to determine the birds' orientation in both time and space as they migrate from one destination to the next. This type of migration is advantageous in birds that, during the winter, remain close to the equator, and it also allows the monitoring of the auditory and spatial memory of the bird's brain to remember an optimal site of migration. These birds also have timing mechanisms that provide them with the distance to their destination.

Tidal

Tidal migration is the use of tides by organisms to move periodically from one habitat to another. This type of migration is often used in order to find food or mates. Tides can carry organisms horizontally and vertically for as little as a few nanometres to even thousands of kilometres. The most common form of tidal migration is to and from the intertidal zone during daily tidal cycles. These zones are often populated by many different species and are rich in nutrients. Organisms like crabs, nematodes, and small fish move in and out of these areas as the tides rise and fall, typically about every twelve hours. The cycle movements are associated with foraging of marine and bird species. Typically, during low tide, smaller or younger species will emerge to forage because they can survive in the shallower water and have less chance of being preyed upon. During high tide, larger species can be found due to the deeper water and nutrient upwelling from the tidal movements. Tidal migration is often facilitated by ocean currents.

Diel

While most migratory movements occur on an annual cycle, some daily movements are also described as migration. Many aquatic animals make a diel vertical migration, travelling a few hundred metres up and down the water column, while some jellyfish make daily horizontal migrations of a few hundred metres.

In specific groups

Different kinds of animals migrate in different ways.

In birds

Flocks of birds assembling before migration southwards
 

Approximately 1,800 of the world's 10,000 bird species migrate long distances each year in response to the seasons. Many of these migrations are north-south, with species feeding and breeding in high northern latitudes in the summer and moving some hundreds of kilometres south for the winter. Some species extend this strategy to migrate annually between the Northern and Southern Hemispheres. The Arctic tern has the longest migration journey of any bird: it flies from its Arctic breeding grounds to the Antarctic and back again each year, a distance of at least 19,000 km (12,000 mi), giving it two summers every year.

Bird migration is controlled primarily by day length, signalled by hormonal changes in the bird's body. On migration, birds navigate using multiple senses. Many birds use a sun compass, requiring them to compensate for the sun's changing position with time of day. Navigation involves the ability to detect magnetic fields.

In fish

Many species of salmon migrate up rivers to spawn

Most fish species are relatively limited in their movements, remaining in a single geographical area and making short migrations to overwinter, to spawn, or to feed. A few hundred species migrate long distances, in some cases of thousands of kilometres. About 120 species of fish, including several species of salmon, migrate between saltwater and freshwater (they are 'diadromous').

Forage fish such as herring and capelin migrate around substantial parts of the North Atlantic ocean. The capelin, for example, spawn around the southern and western coasts of Iceland; their larvae drift clockwise around Iceland, while the fish swim northwards towards Jan Mayen island to feed and return to Iceland parallel with Greenland's east coast.

In the 'sardine run', billions of Southern African pilchard Sardinops sagax spawn in the cold waters of the Agulhas Bank and move northward along the east coast of South Africa between May and July.

In insects

An aggregation of migratory Pantala flavescens dragonflies, known as globe skimmers, in Coorg, India

Some winged insects such as locusts and certain butterflies and dragonflies with strong flight migrate long distances. Among the dragonflies, species of Libellula and Sympetrum are known for mass migration, while Pantala flavescens, known as the globe skimmer or wandering glider dragonfly, makes the longest ocean crossing of any insect: between India and Africa. Exceptionally, swarms of the desert locust, Schistocerca gregaria, flew westwards across the Atlantic Ocean for 4,500 kilometres (2,800 mi) during October 1988, using air currents in the Inter-Tropical Convergence Zone.

In some migratory butterflies, such as the monarch butterfly and the painted lady, no individual completes the whole migration. Instead, the butterflies mate and reproduce on the journey, and successive generations continue the migration.

In mammals

Some mammals undertake exceptional migrations; reindeer have one of the longest terrestrial migrations on the planet, reaching as much as 4,868 kilometres (3,025 mi) per year in North America. However, over the course of a year, grey wolves move the most. One grey wolf covered a total cumulative annual distance of 7,247 kilometres (4,503 mi).

High-mountain shepherds in Lesotho practice transhumance with their flocks.

Mass migration occurs in mammals such as the Serengeti 'great migration', an annual circular pattern of movement with some 1.7 million wildebeest and hundreds of thousands of other large game animals, including gazelles and zebra. More than 20 such species engage, or used to engage, in mass migrations. Of these migrations, those of the springbok, black wildebeest, blesbok, scimitar-horned oryx, and kulan have ceased. Long-distance migrations occur in some bats – notably the mass migration of the Mexican free-tailed bat between Oregon and southern Mexico. Migration is important in cetaceans, including whales, dolphins and porpoises; some species travel long distances between their feeding and their breeding areas.

Humans are mammals, but human migration, as commonly defined, occurs when individuals, often permanently, change where they live, which does not fit the patterns described here. An exception is some traditional migratory patterns such as transhumance, in which herders and their animals move seasonally between mountains and valleys, and the seasonal movements of nomads.

In other animals

Among the reptiles, adult sea turtles migrate long distances to breed, as do some amphibians. Hatchling sea turtles, too, emerge from underground nests, crawl down to the water, and swim offshore to reach the open sea. Juvenile green sea turtles make use of Earth's magnetic field to navigate.

Christmas Island red crabs on annual migration

Some crustaceans migrate, such as the largely-terrestrial Christmas Island red crab, which moves en masse each year by the millions. Like other crabs, they breathe using gills, which must remain wet, so they avoid direct sunlight, digging burrows to shelter from the sun. They mate on land near their burrows. The females incubate their eggs in their abdominal brood pouches for two weeks. They then return to the sea to release their eggs at high tide in the moon's last quarter. The larvae spend a few weeks at sea and then return to land.

Tracking migration

A migratory butterfly, a monarch, tagged for identification

Scientists gather observations of animal migration by tracking their movements. Animals were traditionally tracked with identification tags such as bird rings for later recovery. However, no information was obtained about the actual route followed between release and recovery, and only a fraction of tagged individuals were recovered. More convenient, therefore, are electronic devices such as radio-tracking collars that can be followed by radio, whether from a handheld receiver, a vehicle or aircraft, or by satellite. GPS animal tracking enables accurate positions to be broadcast at regular intervals, but the devices are inevitably heavier and more expensive than those without GPS. An alternative is the Argos Doppler tag, also called a 'Platform Transmitter Terminal' (PTT), which transmits regularly to the polar-orbiting Argos satellites; using Doppler shift, the animal's location can be estimated, relatively roughly compared to GPS, but at a lower cost and weight. A technology suitable for small birds which cannot carry the heavier devices is the geolocator, which logs the light level as the bird flies, for analysis on recapture. There is scope for further development of systems able to track small animals globally.

Radio-tracking tags can be fitted to insects, including dragonflies and bees.

In culture

Before animal migration was understood, various folklore and erroneous explanations were formulated to account for the disappearance or sudden arrival of birds in an area. In Ancient Greece, Aristotle proposed that robins turned into redstarts when summer arrived. The barnacle goose was explained in European Medieval bestiaries and manuscripts as either growing like fruit on trees, or developing from goose barnacles on pieces of driftwood. Another example is the swallow, which was once thought, even by naturalists such as Gilbert White, to hibernate either underwater, buried in muddy riverbanks, or in hollow trees.

Intuitive statistics

From Wikipedia, the free encyclopedia

Intuitive statistics, or folk statistics, refers to the cognitive phenomenon where organisms use data to make generalizations and predictions about the world. This can be a small amount of sample data or training instances, which in turn contribute to inductive inferences about either population-level properties, future data, or both. Inferences can involve revising hypotheses, or beliefs, in light of probabilistic data that inform and motivate future predictions. The informal tendency for cognitive animals to intuitively generate statistical inferences, when formalized with certain axioms of probability theory, constitutes statistics as an academic discipline.

Because this capacity can accommodate a broad range of informational domains, the subject matter is similarly broad and overlaps substantially with other cognitive phenomena. Indeed, some have argued that "cognition as an intuitive statistician" is an apt companion metaphor to the computer metaphor of cognition. Others appeal to a variety of statistical and probabilistic mechanisms behind theory construction and category structuring. Research in this domain commonly focuses on generalizations relating to number, relative frequency, risk, and any systematic signatures in inferential capacity that an organism (e.g., humans, or non-human primates) might have.

Background and theory

Intuitive inferences can involve generating hypotheses from incoming sense data, such as categorization and concept structuring. Data are typically probabilistic and uncertainty is the rule, rather than the exception, in learning, perception, language, and thought. Recently, researchers have drawn from ideas in probability theory, philosophy of mind, computer science, and psychology to model cognition as a predictive and generative system of probabilistic representations, allowing information structures to support multiple inferences in a variety of contexts and combinations. This approach has been called a probabilistic language of thought because it constructs representations probabilistically, from pre-existing concepts to predict a possible and likely state of the world.

Probability

Statisticians and probability theorists have long debated about the use of various tools, assumptions, and problems relating to inductive inference in particular. David Hume famously considered the problem of induction, questioning the logical foundations of how and why people can arrive at conclusions that extend beyond past experiences - both spatiotemporally and epistemologically. More recently, theorists have considered the problem by emphasizing techniques for arriving from data to hypothesis using formal content-independent procedures, or in contrast, by considering informal, content-dependent tools for inductive inference. Searches for formal procedures have led to different developments in statistical inference and probability theory with different assumptions, including Fisherian frequentist statistics, Bayesian inference, and Neyman-Pearson statistics.

Gerd Gigerenzer and David Murray argue that twentieth century psychology as a discipline adopted probabilistic inference as a unified set of ideas and ignored the controversies among probability theorists. They claim that a normative but incorrect view of how humans "ought to think rationally" follows from this acceptance. They also maintain, however, that the intuitive statistician metaphor of cognition is promising, and should consider different formal tools or heuristics as specialized for different problem domains, rather than a content- or context-free toolkit. Signal detection theorists and object detection models, for example, often use a Neyman-Pearson approach, whereas Fisherian frequentist statistics might aid cause-effect inferences.

Frequentist inference

Frequentist inference focuses on the relative proportions or frequencies of occurrences to draw probabilistic conclusions. It is defined by its closely related concept, frequentist probability. This entails a view that "probability" is nonsensical in the absence of pre-existing data, because it is understood as a relative frequency that long-run samples would approach given large amounts of data. Leda Cosmides and John Tooby have argued that it is not possible to derive a probability without reference to some frequency of previous outcomes, and this likely has evolutionary origins: Single-event probabilities, they claim, are not observable because organisms evolved to intuitively understand and make statistical inferences from frequencies of prior events, rather than to "see" probability as an intrinsic property of an event.

Bayesian inference

Bayesian inference generally emphasizes the subjective probability of a hypothesis, which is computed as a posterior probability using Bayes' Theorem. It requires a "starting point" called a prior probability, which has been contentious for some frequentists who claim that frequency data are required to develop a prior probability, in contrast to taking a probability as an a priori assumption.

Bayesian models have been quite popular among psychologists, particularly learning theorists, because they appear to emulate the iterative, predictive process by which people learn and develop expectations from new observations, while giving appropriate weight to previous observations. Andy Clark, a cognitive scientist and philosopher, recently wrote a detailed argument in support of understanding the brain as a constructive Bayesian engine that is fundamentally action-oriented and predictive, rather than passive or reactive. More classic lines of evidence cited among supporters of Bayesian inference include conservatism, or the phenomenon where people modify previous beliefs toward, but not all the way to, a conclusion implied by previous observations. This pattern of behavior is similar to the pattern of posterior probability distributions when a Bayesian model is conditioned on data, though critics argued that this evidence had been overstated and lacked mathematical rigor.
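As a concrete, much-simplified illustration of this kind of conditioning, the short Python sketch below (added here; the prior and likelihoods are arbitrary assumptions) updates belief in a hypothesis H after each of several observations using Bayes' theorem:

    # Iterative Bayesian updating of belief in a hypothesis H.
    # The prior and likelihoods are arbitrary illustrative numbers.
    prior = 0.5               # initial belief that H is true
    p_obs_given_h = 0.8       # probability of the observation if H is true
    p_obs_given_not_h = 0.3   # probability of the observation if H is false

    belief = prior
    for n in range(1, 6):     # five observations consistent with H
        numerator = p_obs_given_h * belief
        evidence = numerator + p_obs_given_not_h * (1 - belief)
        belief = numerator / evidence   # the posterior becomes the next prior
        print(f"after observation {n}: P(H | data) = {belief:.3f}")

A "conservative" human judge, in the sense described above, would move belief in the same direction as these posteriors but by smaller steps.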

Alison Gopnik more recently tackled the problem by advocating the use of Bayesian networks, or directed graph representations of conditional dependencies. In a Bayesian network, edge weights are conditional dependency strengths that are updated in light of new data, and nodes are observed variables. The graphical representation itself constitutes a model, or hypothesis, about the world and is subject to change, given new data.

Error management theory

Error management theory (EMT) is an application of Neyman-Pearson statistics to cognitive and evolutionary psychology. It maintains that the possible fitness costs and benefits of type I (false positive) and type II (false negative) errors are relevant to adaptively rational inferences, toward which an organism is expected to be biased due to natural selection. EMT was originally developed by Martie Haselton and David Buss, with initial research focusing on its possible role in sexual overperception bias in men and sexual underperception bias in women.

This is closely related to a concept called the "smoke detector principle" in evolutionary theory. It is defined by the tendency for immune, affective, and behavioral defenses to be hypersensitive and overreactive, rather than insensitive or weakly expressed. Randolph Nesse maintains that this is a consequence of a typical payoff structure in signal detection: In a system that is invariantly structured with a relatively low cost of false positives and high cost of false negatives, naturally selected defenses are expected to err on the side of hyperactivity in response to potential threat cues. This general idea has been applied to hypotheses about the apparent tendency for humans to apply agency to non-agents based on uncertain or agent-like cues. In particular, some claim that it is adaptive for potential prey to assume agency by default if it is even slightly suspected, because potential predator threats typically involve cheap false positives and lethal false negatives.
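A toy expected-cost comparison (an added sketch with made-up numbers, not taken from the article) shows why, under such an asymmetric payoff structure, a hypersensitive policy can be favoured even when real threats are rare:

    # Expected cost per ambiguous cue for two response policies, given
    # cheap false positives and expensive false negatives (illustrative numbers).
    p_threat = 0.05              # probability that an ambiguous cue is a real threat
    cost_false_positive = 1.0    # cost of responding when there was no threat
    cost_false_negative = 100.0  # cost of ignoring a real threat

    cost_always_respond = (1 - p_threat) * cost_false_positive   # respond to every cue
    cost_never_respond = p_threat * cost_false_negative          # ignore every cue

    print(f"always respond: expected cost {cost_always_respond:.2f} per cue")
    print(f"never respond:  expected cost {cost_never_respond:.2f} per cue")

With these numbers, always responding costs far less on average than never responding, which is the "smoke detector" logic in miniature.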

Heuristics and biases

Heuristics are efficient rules, or computational shortcuts, for producing a judgment or decision. The intuitive statistician metaphor of cognition led to a shift in focus for many psychologists, away from emotional or motivational principles and toward computational or inferential principles. Empirical studies investigating these principles have led some to conclude that human cognition, for example, has built-in and systematic errors in inference, or cognitive biases. As a result, cognitive psychologists have largely adopted the view that intuitive judgments, generalizations, and numerical or probabilistic calculations are systematically biased. The result is commonly an error in judgment, including (but not limited to) recurrent logical fallacies (e.g., the conjunction fallacy), innumeracy, and emotionally motivated shortcuts in reasoning. Social and cognitive psychologists have thus considered it "paradoxical" that humans can outperform powerful computers at complex tasks, yet be deeply flawed and error-prone in simple, everyday judgments.

Much of this research was carried out by Amos Tversky and Daniel Kahneman as an expansion of work by Herbert Simon on bounded rationality and satisficing. Tversky and Kahneman argue that people are regularly biased in their judgments under uncertainty, because in a speed-accuracy tradeoff they often rely on fast and intuitive heuristics with wide margins of error rather than slow calculations from statistical principles. These errors are called "cognitive illusions" because they involve systematic divergences between judgments and accepted, normative rules in statistical prediction.

Gigerenzer has been critical of this view, arguing that it builds from a flawed assumption that a unified "normative theory" of statistical prediction and probability exists. His contention is that cognitive psychologists neglect the diversity of ideas and assumptions in probability theory, and in some cases, their mutual incompatibility. Consequently, Gigerenzer argues that many cognitive illusions are not violations of probability theory per se, but involve some kind of experimenter confusion of subjective probabilities, or degrees of confidence, with long-run outcome frequencies. Cosmides and Tooby similarly claim that different probabilistic assumptions can be more or less normative and rational in different types of situations, and that there is no general-purpose statistical toolkit for making inferences across all informational domains. In a review of several experiments they conclude, in support of Gigerenzer, that previous heuristics and biases experiments did not represent problems in an ecologically valid way, and that re-representing problems in terms of frequencies rather than single-event probabilities can make cognitive illusions largely vanish.

Tversky and Kahneman refuted this claim, arguing that making illusions disappear by manipulating them, whether they are cognitive or visual, does not undermine the initially discovered illusion. They also note that Gigerenzer ignores cognitive illusions resulting from frequency data, e.g., illusory correlations such as the hot hand in basketball. This, they note, is an example of an illusory positive autocorrelation that cannot be corrected by converting data to natural frequencies.

For adaptationists, EMT can be applied to inference under any informational domain, where risk or uncertainty are present, such as predator avoidance, agency detection, or foraging. Researchers advocating this adaptive rationality view argue that evolutionary theory casts heuristics and biases in a new light, namely, as computationally efficient and ecologically rational shortcuts, or instances of adaptive error management.

Base rate neglect

People often neglect base rates, or true actuarial facts about the probability or rate of a phenomenon, and instead give inappropriate amounts of weight to specific observations. In a Bayesian model of inference, this would amount to an underweighting of the prior probability, which has been cited as evidence against the appropriateness of a normative Bayesian framework for modeling cognition. Frequency representations can resolve base rate neglect, and some consider the phenomenon to be an experimental artifact, i.e., a result of probabilities or rates being represented as mathematical abstractions, which are difficult to intuitively think about. Gigerenzer suggests an ecological reason for this, noting that individuals learn frequencies through successive trials in nature. Tversky and Kahneman refute Gigerenzer's claim, pointing to experiments where subjects predicted a disease based on the presence vs. absence of pre-specified symptoms across 250 trials, with feedback after each trial. They note that base rate neglect was still found, despite the frequency formulation of subject trials in the experiment.

Conjunction fallacy

Another popular example of a supposed cognitive illusion is the conjunction fallacy, described in an experiment by Tversky and Kahneman known as the "Linda problem." In this experiment, participants are presented with a short description of a person called Linda, who is 31 years old, single, intelligent, outspoken, and went to a university where she majored in philosophy, was concerned about discrimination and social justice, and participated in anti-nuclear protests. When participants were asked if it were more probable that Linda is (1) a bank teller, or (2) a bank teller and a feminist, 85% responded with option 2, even though option 1 cannot be less probable than option 2. They concluded that this was a product of a representativeness heuristic, or a tendency to draw probabilistic inferences based on property similarities between instances of a concept, rather than a statistically structured inference.

Gigerenzer argued that the conjunction fallacy is based on a single-event probability, and would dissolve under a frequentist approach. He and other researchers demonstrate that conclusions from the conjunction fallacy result from ambiguous language, rather than robust statistical errors or cognitive illusions. In an alternative version of the Linda problem, participants are told that 100 people fit Linda's description and are asked how many are (1) bank tellers and (2) bank tellers and feminists. Experimentally, this version of the task appears to eliminate or mitigate the conjunction fallacy.
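The frequency reformulation can be made concrete in a few lines of Python (an added illustration; the counts are made up): however the 100 hypothetical people are divided, the number who are bank tellers and feminists can never exceed the number who are bank tellers.

    # Frequency version of the Linda problem: counts in a hypothetical group of 100.
    # The particular split below is an arbitrary illustrative assumption.
    people = 100
    bank_tellers = 5              # people fitting "bank teller"
    feminist_bank_tellers = 4     # the subset who are also feminists

    assert feminist_bank_tellers <= bank_tellers   # a conjunction cannot outnumber its parts
    print(f"bank tellers: {bank_tellers} of {people}")
    print(f"bank tellers who are feminists: {feminist_bank_tellers} of {people}")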

Computational models

There has been some question about how concept structuring and generalization can be understood in terms of brain architecture and processes. This question is impacted by a neighboring debate among theorists about the nature of thought, specifically between connectionist and language of thought models. Concept generalization and classification have been modeled in a variety of connectionist models, or neural networks, specifically in domains like language learning and categorization. Some emphasize the limitations of pure connectionist models when they are expected to generalize future instances after training on previous instances. Gary Marcus, for example, asserts that training data would have to be completely exhaustive for generalizations to occur in existing connectionist models, and that as a result, they do not handle novel observations well. He further advocates an integrationist perspective between a language of thought, consisting of symbol representations and operations, and connectionist models that retain the distributed processing that is likely used by neural networks in the brain.

Evidence in humans

In practice, humans routinely make conceptual, linguistic, and probabilistic generalizations from small amounts of data. There is some debate about the utility of various tools of statistical inference in understanding the mind, but it is commonly accepted that the human mind is somehow an exceptionally apt prediction machine, and that action-oriented processes underlying this phenomenon, whatever they might entail, are at the core of cognition. Probabilistic inferences and generalization play central roles in concepts and categories and language learning, and infant studies are commonly used to understand the developmental trajectory of humans' intuitive statistical toolkit(s).

Infant studies

Developmental psychologists such as Jean Piaget have traditionally argued that children do not develop the general cognitive capacities for probabilistic inference and hypothesis testing until concrete operational (age 7–11 years) and formal operational (age 12 years-adulthood) stages of development, respectively.

This is sometimes contrasted with a growing preponderance of empirical evidence suggesting that humans are capable generalizers in infancy. For example, looking-time experiments using expected outcomes of red and white ping pong ball proportions found that 8-month-old infants appear to make inferences about the characteristics of the population from which a sample came, and vice versa when given population-level data. Other experiments have similarly supported a capacity for probabilistic inference with 6- and 11-month-old infants, but not in 4.5-month-olds.

The colored ball paradigm in these experiments did not distinguish the possibilities of infants' inferences based on quantity vs. proportion, which was addressed in follow-up research where 12-month-old infants seemed to understand proportions, basing probabilistic judgments - motivated by preferences for the more probable outcomes - on initial evidence of the proportions in their available options. Critics of the effectiveness of looking-time tasks allowed infants to search for preferred objects in single-sample probability tasks, supporting the notion that infants can infer probabilities of single events when given a small or large initial sample size. The researchers involved in these findings have argued that humans possess some statistically structured, inferential system during preverbal stages of development and prior to formal education.

It is less clear, however, how and why generalization is observed in infants: It might extend directly from detection and storage of similarities and differences in incoming data, or frequency representations. Conversely, it might be produced by something like general-purpose Bayesian inference, starting with a knowledge base that is iteratively conditioned on data to update subjective probabilities, or beliefs. This ties together questions about the statistical toolkit(s) that might be involved in learning, and how they apply to infant and childhood learning specifically.

Gopnik advocates the hypothesis that infant and childhood learning are examples of inductive inference, a general-purpose mechanism for generalization, acting upon specialized information structures ("theories") in the brain. On this view, infants and children are essentially proto-scientists because they regularly use a kind of scientific method, developing hypotheses, performing experiments via play, and updating models about the world based on their results. For Gopnik, this use of scientific thinking and categorization in development and everyday life can be formalized as models of Bayesian inference. An application of this view is the "sampling hypothesis," or the view that individual variation in children's causal and probabilistic inferences is an artifact of random sampling from a diverse set of hypotheses, and flexible generalizations based on sampling behavior and context. These views, particularly those advocating general Bayesian updating from specialized theories, are considered successors to Piaget’s theory rather than wholesale refutations because they maintain its domain-generality, viewing children as randomly and unsystematically considering a range of models before selecting a probable conclusion.

In contrast to the general-purpose mechanistic view, some researchers advocate both domain-specific information structures and similarly specialized inferential mechanisms. For example, while humans do not usually excel at conditional probability calculations, the use of conditional probability calculations is central to parsing speech sounds into comprehensible syllables, a relatively straightforward and intuitive skill emerging as early as 8 months. Infants also appear to be good at tracking not only the spatiotemporal states of objects but also their properties, and these cognitive systems appear to be developmentally distinct. This has been interpreted as evidence for domain-specific toolkits of inference, each of which corresponds to separate types of information and has applications to concept learning.
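A minimal sketch of what such a conditional-probability computation over a syllable stream might look like is given below (added here for illustration; the syllables and the bookkeeping follow the general statistical-learning idea rather than any specific study):

    from collections import Counter

    # Transitional probability P(next syllable | current syllable) estimated
    # from a toy syllable stream; the syllables are made-up examples.
    stream = "pa bi ku go la tu pa bi ku da ro pi go la tu pa bi ku".split()

    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])

    def transitional_probability(a, b):
        """Estimate the probability that syllable b follows syllable a."""
        return pair_counts[(a, b)] / first_counts[a]

    print("within a recurring chunk:", transitional_probability("pa", "bi"))   # high
    print("across a chunk boundary:", transitional_probability("ku", "go"))    # lower

High transitional probabilities mark syllable pairs that tend to belong together; dips in the probability are candidate word boundaries.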

Concept formation

Infants use form similarities and differences to develop concepts relating to objects, and this relies on multiple trials with multiple patterns, exhibiting some kind of common property between trials. Infants appear to become proficient at this ability in particular by 12 months, but different concepts and properties employ different relevant principles of Gestalt psychology, many of which might emerge at different stages of development. Specifically, infant categorization at as early as 4.5 months involves iterative and interdependent processes by which exemplars (data) and their similarities and differences are crucial for drawing boundaries around categories. These abstract rules are statistical by nature, because they can entail common co-occurrences of certain perceived properties in past instances and facilitate inferences about their structure in future instances. This idea has been extrapolated by Douglas Hofstadter and Emmanuel Sander, who argue that because analogy is a process of inference relying on similarities and differences between concept properties, analogy and categorization are fundamentally the same process used for organizing concepts from incoming data.

Language learning

Infants and small children are not only capable generalizers of trait quantity and proportion, but of abstract rule-based systems such as language and music. These rules can be referred to as “algebraic rules” of abstract informational structure, and are representations of rule systems, or grammars. For language, creating generalizations with Bayesian inference and similarity detection has been advocated by researchers as a special case of concept formation. Infants appear to be proficient in inferring abstract and structural rules from streams of linguistic sounds produced in their developmental environments, and to generate wider predictions based on those rules.

For example, 9-month-old infants are capable of more quickly and dramatically updating their expectations when repeated syllable strings contain surprising features, such as rare phonemes. In general, preverbal infants appear to be capable of discriminating between grammars with which they have been trained with experience, and novel grammars. In 7-month-old infant looking-time tasks, infants seemed to pay more attention to unfamiliar grammatical structures than to familiar ones, and in a separate study using 3-syllable strings, infants appeared to similarly have generalized expectations based on abstract syllabic structure previously presented, suggesting that they used surface occurrences, or data, in order to infer deeper abstract structure. This was taken to support the “multiple hypotheses [or models]” view by the researchers involved.

Evidence in non-human animals

Grey parrots

Multiple studies by Irene Pepperberg and her colleagues suggested that Grey parrots (Psittacus erithacus) have some capacity for recognizing numbers or number-like concepts, appearing to understand ordinality and cardinality of numerals. Recent experiments also indicated that, given some language training and capacity for referencing recognized objects, they also have some ability to make inferences about probabilities and hidden object type ratios.

Non-human primates

Experiments found that when reasoning about preferred vs. non-preferred food proportions, capuchin monkeys were able to make inferences about proportions from sequentially sampled data. Rhesus monkeys were similarly capable of using probabilistic and sequentially sampled data to make inferences about rewarding outcomes, and neural activity in the parietal cortex appeared to be involved in the decision-making process when they made inferences. In a series of 7 experiments using a variety of relative frequency differences between banana pellets and carrots, orangutans, bonobos, chimpanzees and gorillas also appeared to guide their decisions based on the ratios favoring the banana pellets after this was established as their preferred food item.

Applications

Reasoning in medicine

Research on reasoning in medicine, or clinical reasoning, usually focuses on cognitive processes and/or decision-making outcomes among physicians and patients. Considerations include assessments of risk, patient preferences, and evidence-based medical knowledge. On a cognitive level, clinical inference relies heavily on interplay between abstraction, abduction, deduction, and induction. Intuitive "theories," or knowledge in medicine, can be understood as prototypes in concept spaces, or alternatively, as semantic networks. Such models serve as a starting point for intuitive generalizations to be made from a small number of cues, resulting in the physician's tradeoff between the "art and science" of medical judgement. This tradeoff was captured in an artificially intelligent (AI) program called MYCIN, which outperformed medical students, but not experienced physicians with extensive practice in symptom recognition. Some researchers argue that despite this, physicians are prone to systematic biases, or cognitive illusions, in their judgment (e.g., satisficing to make premature diagnoses, confirmation bias when diagnoses are suspected a priori).

Communication of patient risk

Statistical literacy and risk judgments have been described as problematic for physician-patient communication. For example, physicians frequently inflate the perceived risk of non-treatment, alter patients' risk perceptions by positively or negatively framing single statistics (e.g., 97% survival rate vs. 3% death rate), and/or fail to sufficiently communicate "reference classes" of probability statements to patients. The reference class is the object of a probability statement: If a psychiatrist says, for example, “this medication can lead to a 30-50% chance of a sexual problem,” it is ambiguous whether this means that 30-50% of patients will develop a sexual problem at some point, or if all patients will have problems in 30-50% of their sexual encounters.

Base rates in clinical judgment

In studies of base rate neglect, the problems given to participants often use base rates of disease prevalence. In these experiments, physicians and non-physicians are similarly susceptible to base rate neglect, or errors in calculating conditional probability. Here is an example from an empirical survey problem given to experienced physicians: Suppose that a hypothetical cancer had a prevalence of 0.3% in the population, and the true positive rate of a screening test was 50% with a false positive rate of 3%. Given a patient with a positive test result, what is the probability that the patient has cancer? When asked this question, physicians with an average of 14 years experience in medical practice ranged in their answers from 1-99%, with most answers being 47% or 50%. (The correct answer is 5%.) This observation of clinical base rate neglect and conditional probability error has been replicated in multiple empirical studies. Physicians' judgments in similar problems, however, improved substantially when the rates were re-formulated as natural frequencies.
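The correct answer can be reproduced with Bayes' theorem in a few lines of Python, using exactly the numbers from the survey problem above (an illustration added to this post):

    # Probability of cancer given a positive screening test, via Bayes' theorem.
    prevalence = 0.003           # 0.3% of the population has the cancer
    true_positive_rate = 0.50    # probability of a positive test given cancer
    false_positive_rate = 0.03   # probability of a positive test given no cancer

    p_positive = (true_positive_rate * prevalence
                  + false_positive_rate * (1 - prevalence))
    p_cancer_given_positive = true_positive_rate * prevalence / p_positive

    print(f"P(cancer | positive test) = {p_cancer_given_positive:.3f}")   # about 0.05

Expressed as natural frequencies: of 10,000 people, about 30 have the cancer and 15 of them test positive, while roughly 300 of the 9,970 without cancer also test positive, so only about 15 of the roughly 315 positives actually have the disease.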

Operator (computer programming)

From Wikipedia, the free encyclopedia