
Wednesday, June 15, 2022

Animal migration

From Wikipedia, the free encyclopedia

Mexican free-tailed bats on their long aerial migration

Animal migration is the relatively long-distance movement of individual animals, usually on a seasonal basis. It is the most common form of migration in ecology. It is found in all major animal groups, including birds, mammals, fish, reptiles, amphibians, insects, and crustaceans. The causes of migration may include local climate, local availability of food, the season of the year, or the need to mate.

To be counted as a true migration, and not just a local dispersal or irruption, the movement of the animals should be an annual or seasonal occurrence, or a major habitat change as part of their life. An annual event could include Northern Hemisphere birds migrating south for the winter, or wildebeest migrating annually for seasonal grazing. A major habitat change could include young Atlantic salmon or sea lamprey leaving the river of their birth when they have reached a few inches in size. Some traditional forms of human migration fit this pattern.

Migrations can be studied using traditional identification tags such as bird rings, or tracked directly with electronic tracking devices. Before animal migration was understood, folklore explanations were formulated for the appearance and disappearance of some species, such as that barnacle geese grew from goose barnacles.

Overview

Concepts

Migration can take very different forms in different species, and has a variety of causes. As such, there is no simple accepted definition of migration. One of the most commonly used definitions, proposed by the zoologist J. S. Kennedy is

Migratory behavior is persistent and straightened-out movement effected by the animal's own locomotory exertions or by its active embarkation on a vehicle. It depends on some temporary inhibition of station-keeping responses, but promotes their eventual disinhibition and recurrence.

Migration encompasses four related concepts: persistent straight movement; relocation of an individual on a greater scale (in both space and time) than its normal daily activities; seasonal to-and-fro movement of a population between two areas; and movement leading to the redistribution of individuals within a population. Migration can be either obligate, meaning individuals must migrate, or facultative, meaning individuals can "choose" to migrate or not. Within a migratory species or even within a single population, often not all individuals migrate. Complete migration is when all individuals migrate, partial migration is when some individuals migrate while others do not, and differential migration is when the difference between migratory and non-migratory individuals is based on discernible characteristics like age or sex. Irregular (non-cyclical) migrations such as irruptions can occur under pressure of famine, overpopulation of a locality, or some more obscure influence.

Seasonal

Seasonal migration is the movement of various species from one habitat to another during the year. Resource availability changes depending on seasonal fluctuations, which influence migration patterns. Some species such as Pacific salmon migrate to reproduce; every year, they swim upstream to mate and then return to the ocean. Temperature is a driving factor of migration that is dependent on the time of year. Many species, especially birds, migrate to warmer locations during the winter to escape poor environmental conditions.

Circadian

Circadian migration is migration in which birds use their circadian rhythm (CR) to regulate migration in both fall and spring. In circadian migration, clocks of both circadian (daily) and circannual (annual) patterns are used to determine the birds' orientation in both time and space as they migrate from one destination to the next. This type of migration is advantageous in birds that, during the winter, remain close to the equator, and it also allows the bird to draw on the auditory and spatial memory of its brain to remember an optimal site of migration. These birds also have timing mechanisms that give them a sense of the distance to their destination.

Tidal

Tidal migration is the use of tides by organisms to move periodically from one habitat to another. This type of migration is often used to find food or mates. Tides can carry organisms horizontally and vertically for as little as a few nanometres to even thousands of kilometres. The most common form of tidal migration is to and from the intertidal zone during daily tidal cycles. These zones are often populated by many different species and are rich in nutrients. Organisms like crabs, nematodes, and small fish move in and out of these areas as the tides rise and fall, typically about every twelve hours. These cyclical movements are associated with foraging by marine species and birds. Typically, during low tide, smaller or younger species will emerge to forage because they can survive in the shallower water and have less chance of being preyed upon. During high tide, larger species can be found due to the deeper water and nutrient upwelling from the tidal movements. Tidal migration is often facilitated by ocean currents.

Diel

While most migratory movements occur on an annual cycle, some daily movements are also described as migration. Many aquatic animals make a diel vertical migration, travelling a few hundred metres up and down the water column, while some jellyfish make daily horizontal migrations of a few hundred metres.

In specific groups

Different kinds of animals migrate in different ways.

In birds

Flocks of birds assembling before migration southwards

Approximately 1,800 of the world's 10,000 bird species migrate long distances each year in response to the seasons. Many of these migrations are north-south, with species feeding and breeding in high northern latitudes in the summer and moving some hundreds of kilometres south for the winter. Some species extend this strategy to migrate annually between the Northern and Southern Hemispheres. The Arctic tern has the longest migration journey of any bird: it flies from its Arctic breeding grounds to the Antarctic and back again each year, a distance of at least 19,000 km (12,000 mi), giving it two summers every year.

Bird migration is controlled primarily by day length, signalled by hormonal changes in the bird's body. On migration, birds navigate using multiple senses. Many birds use a sun compass, requiring them to compensate for the sun's changing position with time of day. Navigation involves the ability to detect magnetic fields.

In fish

Many species of salmon migrate up rivers to spawn

Most fish species are relatively limited in their movements, remaining in a single geographical area and making short migrations to overwinter, to spawn, or to feed. A few hundred species migrate long distances, in some cases of thousands of kilometres. About 120 species of fish, including several species of salmon, migrate between saltwater and freshwater (they are 'diadromous').

Forage fish such as herring and capelin migrate around substantial parts of the North Atlantic ocean. The capelin, for example, spawn around the southern and western coasts of Iceland; their larvae drift clockwise around Iceland, while the fish swim northwards towards Jan Mayen island to feed and return to Iceland parallel with Greenland's east coast.

In the 'sardine run', billions of Southern African pilchard Sardinops sagax spawn in the cold waters of the Agulhas Bank and move northward along the east coast of South Africa between May and July.

In insects

An aggregation of migratory Pantala flavescens dragonflies, known as globe skimmers, in Coorg, India

Some winged insects such as locusts and certain butterflies and dragonflies with strong flight migrate long distances. Among the dragonflies, species of Libellula and Sympetrum are known for mass migration, while Pantala flavescens, known as the globe skimmer or wandering glider dragonfly, makes the longest ocean crossing of any insect: between India and Africa. Exceptionally, swarms of the desert locust, Schistocerca gregaria, flew westwards across the Atlantic Ocean for 4,500 kilometres (2,800 mi) during October 1988, using air currents in the Inter-Tropical Convergence Zone.

In some migratory butterflies, such as the monarch butterfly and the painted lady, no individual completes the whole migration. Instead, the butterflies mate and reproduce on the journey, and successive generations continue the migration.

In mammals

Some mammals undertake exceptional migrations; reindeer have one of the longest terrestrial migrations on the planet, reaching as much as 4,868 kilometres (3,025 mi) per year in North America. However, over the course of a year, grey wolves move the most. One grey wolf covered a total cumulative annual distance of 7,247 kilometres (4,503 mi).

High-mountain shepherds in Lesotho practice transhumance with their flocks.

Mass migration occurs in mammals such as the Serengeti 'great migration', an annual circular pattern of movement with some 1.7 million wildebeest and hundreds of thousands of other large game animals, including gazelles and zebra. More than 20 such species engage, or used to engage, in mass migrations. Of these migrations, those of the springbok, black wildebeest, blesbok, scimitar-horned oryx, and kulan have ceased. Long-distance migrations occur in some bats – notably the mass migration of the Mexican free-tailed bat between Oregon and southern Mexico. Migration is important in cetaceans, including whales, dolphins and porpoises; some species travel long distances between their feeding and their breeding areas.

Humans are mammals, but human migration, as commonly defined, is a change, often permanent, in where individuals live, which does not fit the patterns described here. An exception is some traditional migratory patterns such as transhumance, in which herders and their animals move seasonally between mountains and valleys, and the seasonal movements of nomads.

In other animals

Among the reptiles, adult sea turtles migrate long distances to breed, as do some amphibians. Hatchling sea turtles, too, emerge from underground nests, crawl down to the water, and swim offshore to reach the open sea. Juvenile green sea turtles make use of Earth's magnetic field to navigate.

Christmas Island red crabs on annual migration

Some crustaceans migrate, such as the largely-terrestrial Christmas Island red crab, which moves en masse each year by the millions. Like other crabs, they breathe using gills, which must remain wet, so they avoid direct sunlight, digging burrows to shelter from the sun. They mate on land near their burrows. The females incubate their eggs in their abdominal brood pouches for two weeks. They then return to the sea to release their eggs at high tide in the moon's last quarter. The larvae spend a few weeks at sea and then return to land.

Tracking migration

A migratory butterfly, a monarch, tagged for identification

Scientists gather observations of animal migration by tracking their movements. Animals were traditionally tracked with identification tags such as bird rings for later recovery. However, no information was obtained about the actual route followed between release and recovery, and only a fraction of tagged individuals were recovered. More convenient, therefore, are electronic devices such as radio-tracking collars that can be followed by radio, whether handheld, in a vehicle or aircraft, or by satellite. GPS animal tracking enables accurate positions to be broadcast at regular intervals, but the devices are inevitably heavier and more expensive than those without GPS. An alternative is the Argos Doppler tag, also called a 'Platform Transmitter Terminal' (PTT), which transmits regularly to the polar-orbiting Argos satellites; using Doppler shift, the animal's location can be estimated, relatively roughly compared to GPS, but at a lower cost and weight. A technology suitable for small birds which cannot carry the heavier devices is the geolocator, which logs the light level as the bird flies, for analysis on recapture. There is scope for further development of systems able to track small animals globally.
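
As a rough illustration of that last approach, here is a minimal sketch (in Python, with invented inputs, not a description of any particular device or software) of how a geolocator's light record can be turned into a position estimate: longitude from the UTC time of solar noon, latitude from day length via the sunrise equation. Real analyses also correct for the equation of time, twilight thresholds, and sensor calibration.

    from math import radians, degrees, sin, cos, tan, atan, pi

    def estimate_position(sunrise_utc_h, sunset_utc_h, day_of_year):
        """Crude light-level geolocation from one day's sunrise/sunset times."""
        solar_noon = (sunrise_utc_h + sunset_utc_h) / 2
        day_length = sunset_utc_h - sunrise_utc_h

        # Longitude: the sun crosses 15 degrees of longitude per hour.
        lon = (12.0 - solar_noon) * 15.0

        # Latitude: cos(omega0) = -tan(lat) * tan(declination), where the
        # sunrise hour angle omega0 is half the day length expressed in degrees.
        declination = radians(23.44) * sin(2 * pi / 365 * (day_of_year - 81))
        omega0 = radians(day_length * 15.0 / 2.0)
        lat = degrees(atan(-cos(omega0) / tan(declination)))
        return lat, lon

    # Example: ~15 h of daylight centred on 10:30 UTC near the June solstice
    # suggests a mid-northern-latitude site about 22 degrees east of Greenwich.
    print(estimate_position(sunrise_utc_h=3.0, sunset_utc_h=18.0, day_of_year=172))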

Radio-tracking tags can be fitted to insects, including dragonflies and bees.

In culture

Before animal migration was understood, various folklore and erroneous explanations were formulated to account for the disappearance or sudden arrival of birds in an area. In Ancient Greece, Aristotle proposed that robins turned into redstarts when summer arrived. The barnacle goose was explained in European Medieval bestiaries and manuscripts as either growing like fruit on trees, or developing from goose barnacles on pieces of driftwood. Another example is the swallow, which was once thought, even by naturalists such as Gilbert White, to hibernate either underwater, buried in muddy riverbanks, or in hollow trees.

Intuitive statistics

From Wikipedia, the free encyclopedia

Intuitive statistics, or folk statistics, refers to the cognitive phenomenon where organisms use data to make generalizations and predictions about the world. This can be a small amount of sample data or training instances, which in turn contribute to inductive inferences about either population-level properties, future data, or both. Inferences can involve revising hypotheses, or beliefs, in light of probabilistic data that inform and motivate future predictions. The informal tendency for cognitive animals to intuitively generate statistical inferences, when formalized with certain axioms of probability theory, constitutes statistics as an academic discipline.

Because this capacity can accommodate a broad range of informational domains, the subject matter is similarly broad and overlaps substantially with other cognitive phenomena. Indeed, some have argued that "cognition as an intuitive statistician" is an apt companion metaphor to the computer metaphor of cognition. Others appeal to a variety of statistical and probabilistic mechanisms behind theory construction and category structuring. Research in this domain commonly focuses on generalizations relating to number, relative frequency, risk, and any systematic signatures in inferential capacity that an organism (e.g., humans, or non-human primates) might have.

Background and theory

Intuitive inferences can involve generating hypotheses from incoming sense data, such as categorization and concept structuring. Data are typically probabilistic and uncertainty is the rule, rather than the exception, in learning, perception, language, and thought. Recently, researchers have drawn from ideas in probability theory, philosophy of mind, computer science, and psychology to model cognition as a predictive and generative system of probabilistic representations, allowing information structures to support multiple inferences in a variety of contexts and combinations. This approach has been called a probabilistic language of thought because it constructs representations probabilistically, from pre-existing concepts to predict a possible and likely state of the world.

Probability

Statisticians and probability theorists have long debated the use of various tools, assumptions, and problems relating to inductive inference in particular. David Hume famously considered the problem of induction, questioning the logical foundations of how and why people can arrive at conclusions that extend beyond past experiences, both spatiotemporally and epistemologically. More recently, theorists have considered the problem by emphasizing techniques for moving from data to hypotheses using formal, content-independent procedures, or in contrast, by considering informal, content-dependent tools for inductive inference. Searches for formal procedures have led to different developments in statistical inference and probability theory with different assumptions, including Fisherian frequentist statistics, Bayesian inference, and Neyman-Pearson statistics.

Gerd Gigerenzer and David Murray argue that twentieth century psychology as a discipline adopted probabilistic inference as a unified set of ideas and ignored the controversies among probability theorists. They claim that a normative but incorrect view of how humans "ought to think rationally" follows from this acceptance. They also maintain, however, that the intuitive statistician metaphor of cognition is promising, and should consider different formal tools or heuristics as specialized for different problem domains, rather than a content- or context-free toolkit. Signal detection theorists and object detection models, for example, often use a Neyman-Pearson approach, whereas Fisherian frequentist statistics might aid cause-effect inferences.

Frequentist inference

Frequentist inference focuses on the relative proportions or frequencies of occurrences to draw probabilistic conclusions. It is defined by its closely related concept, frequentist probability. This entails a view that "probability" is nonsensical in the absence of pre-existing data, because it is understood as a relative frequency that long-run samples would approach given large amounts of data. Leda Cosmides and John Tooby have argued that it is not possible to derive a probability without reference to some frequency of previous outcomes, and this likely has evolutionary origins: Single-event probabilities, they claim, are not observable because organisms evolved to intuitively understand and make statistical inferences from frequencies of prior events, rather than to "see" probability as an intrinsic property of an event.
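
As a minimal sketch of this long-run-frequency idea (with illustrative numbers, not drawn from any cited study), the relative frequency of an event in a simulated sample settles toward a stable value as the sample grows:

    import random

    random.seed(0)
    true_rate = 0.3   # hypothetical long-run rate of some event
    for n in (10, 100, 1_000, 10_000, 100_000):
        hits = sum(random.random() < true_rate for _ in range(n))
        print(f"n = {n:>6}   relative frequency = {hits / n:.3f}")
    # Small samples wander; large samples settle near 0.3, which is the sense
    # in which frequentists read "probability" as a long-run relative frequency.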

Bayesian inference

Bayesian inference generally emphasizes the subjective probability of a hypothesis, which is computed as a posterior probability using Bayes' Theorem. It requires a "starting point" called a prior probability, which has been contentious for some frequentists who claim that frequency data are required to develop a prior probability, in contrast to taking a probability as an a priori assumption.
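
A minimal worked sketch of this posterior calculation, with made-up numbers standing in for the prior and the likelihoods:

    def posterior(prior_h, p_data_given_h, p_data_given_not_h):
        """Bayes' Theorem for a single hypothesis H versus not-H."""
        evidence = prior_h * p_data_given_h + (1 - prior_h) * p_data_given_not_h
        return prior_h * p_data_given_h / evidence

    # A hypothesis believed with prior probability 0.5, and data that is twice
    # as likely if the hypothesis is true: the posterior rises to about 0.67.
    print(posterior(prior_h=0.5, p_data_given_h=0.8, p_data_given_not_h=0.4))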

Bayesian models have been quite popular among psychologists, particularly learning theorists, because they appear to emulate the iterative, predictive process by which people learn and develop expectations from new observations, while giving appropriate weight to previous observations. Andy Clark, a cognitive scientist and philosopher, recently wrote a detailed argument in support of understanding the brain as a constructive Bayesian engine that is fundamentally action-oriented and predictive, rather than passive or reactive. More classic lines of evidence cited among supporters of Bayesian inference include conservatism, or the phenomenon where people modify previous beliefs toward, but not all the way to, a conclusion implied by new observations. This pattern of behavior is similar to the pattern of posterior probability distributions when a Bayesian model is conditioned on data, though critics argued that this evidence had been overstated and lacked mathematical rigor.

Alison Gopnik more recently tackled the problem by advocating the use of Bayesian networks, or directed graph representations of conditional dependencies. In a Bayesian network, edge weights are conditional dependency strengths that are updated in light of new data, and nodes are observed variables. The graphical representation itself constitutes a model, or hypothesis, about the world and is subject to change, given new data.
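
As an illustrative sketch (the variables and numbers here are invented, not Gopnik's), even a two-node network Rain → WetGrass can be queried by conditioning on an observation:

    # Two-node Bayesian network: Rain -> WetGrass, with made-up
    # conditional probability tables.
    p_rain = 0.2                                # P(Rain)
    p_wet_given_rain = {True: 0.9, False: 0.1}  # P(WetGrass | Rain)

    # Conditioning on the observation WetGrass = True updates belief in Rain:
    joint_rain = p_rain * p_wet_given_rain[True]
    joint_dry = (1 - p_rain) * p_wet_given_rain[False]
    print(joint_rain / (joint_rain + joint_dry))   # P(Rain | WetGrass) ~ 0.69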

Error management theory

Error management theory (EMT) is an application of Neyman-Pearson statistics to cognitive and evolutionary psychology. It maintains that the possible fitness costs and benefits of type I (false positive) and type II (false negative) errors are relevant to adaptively rational inferences, toward which an organism is expected to be biased due to natural selection. EMT was originally developed by Martie Haselton and David Buss, with initial research focusing on its possible role in sexual overperception bias in men and sexual underperception bias in women.

This is closely related to a concept called the "smoke detector principle" in evolutionary theory. It is defined by the tendency for immune, affective, and behavioral defenses to be hypersensitive and overreactive, rather than insensitive or weakly expressed. Randolph Nesse maintains that this is a consequence of a typical payoff structure in signal detection: In a system that is invariantly structured with a relatively low cost of false positives and high cost of false negatives, naturally selected defenses are expected to err on the side of hyperactivity in response to potential threat cues. This general idea has been applied to hypotheses about the apparent tendency for humans to apply agency to non-agents based on uncertain or agent-like cues. In particular, some claim that it is adaptive for potential prey to assume agency by default if it is even slightly suspected, because potential predator threats typically involve cheap false positives and lethal false negatives.
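
The payoff logic behind the smoke detector principle can be sketched with invented costs: when a miss is far more expensive than a false alarm, the cost-minimising response to even a weak threat cue is to act as if the threat were real.

    def act_as_if_threat(p_threat, cost_false_alarm, cost_miss):
        """Respond to a cue whenever responding has the lower expected cost."""
        expected_cost_if_respond = (1 - p_threat) * cost_false_alarm
        expected_cost_if_ignore = p_threat * cost_miss
        return expected_cost_if_respond < expected_cost_if_ignore

    # Even a 2% chance of a predator warrants fleeing if a miss costs
    # 100 times more than a false alarm.
    print(act_as_if_threat(p_threat=0.02, cost_false_alarm=1, cost_miss=100))  # True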

Heuristics and biases

Heuristics are efficient rules, or computational shortcuts, for producing a judgment or decision. The intuitive statistician metaphor of cognition led to a shift in focus for many psychologists, away from emotional or motivational principles and toward computational or inferential principles. Empirical studies investigating these principles have led some to conclude that human cognition, for example, has built-in and systematic errors in inference, or cognitive biases. As a result, cognitive psychologists have largely adopted the view that intuitive judgments, generalizations, and numerical or probabilistic calculations are systematically biased. The result is commonly an error in judgment, including (but not limited to) recurrent logical fallacies (e.g., the conjunction fallacy), innumeracy, and emotionally motivated shortcuts in reasoning. Social and cognitive psychologists have thus considered it "paradoxical" that humans can outperform powerful computers at complex tasks, yet be deeply flawed and error-prone in simple, everyday judgments.

Much of this research was carried out by Amos Tversky and Daniel Kahneman as an expansion of work by Herbert Simon on bounded rationality and satisficing. Tversky and Kahneman argue that people are regularly biased in their judgments under uncertainty, because in a speed-accuracy tradeoff they often rely on fast and intuitive heuristics with wide margins of error rather than slow calculations from statistical principles. These errors are called "cognitive illusions" because they involve systematic divergences between judgments and accepted, normative rules in statistical prediction.

Gigerenzer has been critical of this view, arguing that it builds from a flawed assumption that a unified "normative theory" of statistical prediction and probability exists. His contention is that cognitive psychologists neglect the diversity of ideas and assumptions in probability theory, and in some cases, their mutual incompatibility. Consequently, Gigerenzer argues that many cognitive illusions are not violations of probability theory per se, but involve a confusion on the experimenter's part between subjective probabilities (degrees of confidence) and long-run outcome frequencies. Cosmides and Tooby similarly claim that different probabilistic assumptions can be more or less normative and rational in different types of situations, and that there is no general-purpose statistical toolkit for making inferences across all informational domains. In a review of several experiments they conclude, in support of Gigerenzer, that previous heuristics and biases experiments did not represent problems in an ecologically valid way, and that re-representing problems in terms of frequencies rather than single-event probabilities can make cognitive illusions largely vanish.

Tversky and Kahneman rejected this claim, arguing that making illusions disappear by manipulating them, whether they are cognitive or visual, does not undermine the initially discovered illusion. They also note that Gigerenzer ignores cognitive illusions resulting from frequency data, e.g., illusory correlations such as the hot hand in basketball. This, they note, is an example of an illusory positive autocorrelation that cannot be corrected by converting data to natural frequencies.

For adaptationists, EMT can be applied to inference in any informational domain where risk or uncertainty is present, such as predator avoidance, agency detection, or foraging. Researchers advocating this adaptive rationality view argue that evolutionary theory casts heuristics and biases in a new light, namely, as computationally efficient and ecologically rational shortcuts, or instances of adaptive error management.

Base rate neglect

People often neglect base rates, or true actuarial facts about the probability or rate of a phenomenon, and instead give inappropriate amounts of weight to specific observations. In a Bayesian model of inference, this would amount to an underweighting of the prior probability, which has been cited as evidence against the appropriateness of a normative Bayesian framework for modeling cognition. Frequency representations can resolve base rate neglect, and some consider the phenomenon to be an experimental artifact, i.e., a result of probabilities or rates being represented as mathematical abstractions, which are difficult to intuitively think about. Gigerenzer speculates that there is an ecological reason for this, noting that individuals learn frequencies through successive trials in nature. Tversky and Kahneman dispute Gigerenzer's claim, pointing to experiments where subjects predicted a disease based on the presence vs. absence of pre-specified symptoms across 250 trials, with feedback after each trial. They note that base rate neglect was still found, despite the frequency formulation of subject trials in the experiment.

Conjunction fallacy

Another popular example of a supposed cognitive illusion is the conjunction fallacy, described in an experiment by Tversky and Kahneman known as the "Linda problem." In this experiment, participants are presented with a short description of a person called Linda, who is 31 years old, single, intelligent, outspoken, and went to a university where she majored in philosophy, was concerned about discrimination and social justice, and participated in anti-nuclear protests. When participants were asked if it were more probable that Linda is (1) a bank teller, or (2) a bank teller and a feminist, 85% responded with option 2, even though option 1 cannot be less probable than option 2. They concluded that this was a product of a representativeness heuristic, or a tendency to draw probabilistic inferences based on property similarities between instances of a concept, rather than a statistically structured inference.

Gigerenzer argued that the conjunction fallacy is based on a single-event probability, and would dissolve under a frequentist approach. He and other researchers demonstrate that conclusions from the conjunction fallacy result from ambiguous language, rather than robust statistical errors or cognitive illusions. In an alternative version of the Linda problem, participants are told that 100 people fit Linda's description and are asked how many are (1) bank tellers and (2) bank tellers and feminists. Experimentally, this version of the task appears to eliminate or mitigate the conjunction fallacy.
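
The frequency re-representation can be sketched with invented counts: however a group of 100 people is divided up, the conjunction can never outnumber its conjunct.

    import random

    random.seed(1)
    # 100 hypothetical people matching the description, with made-up trait rates.
    people = [{"bank_teller": random.random() < 0.1,
               "feminist": random.random() < 0.6} for _ in range(100)]

    tellers = sum(p["bank_teller"] for p in people)
    feminist_tellers = sum(p["bank_teller"] and p["feminist"] for p in people)
    print(tellers, feminist_tellers)
    assert feminist_tellers <= tellers   # P(A and B) <= P(A), however the traits fall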

Computational models

There has been some question about how concept structuring and generalization can be understood in terms of brain architecture and processes. This question is impacted by a neighboring debate among theorists about the nature of thought, specifically between connectionist and language of thought models. Concept generalization and classification have been modeled in a variety of connectionist models, or neural networks, specifically in domains like language learning and categorization. Some emphasize the limitations of pure connectionist models when they are expected to generalize future instances after training on previous instances. Gary Marcus, for example, asserts that training data would have to be completely exhaustive for generalizations to occur in existing connectionist models, and that as a result, they do not handle novel observations well. He further advocates an integrationist perspective between a language of thought, consisting of symbol representations and operations, and connectionist models that retain the distributed processing that is likely used by neural networks in the brain.

Evidence in humans

In practice, humans routinely make conceptual, linguistic, and probabilistic generalizations from small amounts of data. There is some debate about the utility of various tools of statistical inference in understanding the mind, but it is commonly accepted that the human mind is somehow an exceptionally apt prediction machine, and that action-oriented processes underlying this phenomenon, whatever they might entail, are at the core of cognition. Probabilistic inferences and generalization play central roles in concepts and categories and language learning, and infant studies are commonly used to understand the developmental trajectory of humans' intuitive statistical toolkit(s).

Infant studies

Developmental psychologists such as Jean Piaget have traditionally argued that children do not develop the general cognitive capacities for probabilistic inference and hypothesis testing until concrete operational (age 7–11 years) and formal operational (age 12 years-adulthood) stages of development, respectively.

This is sometimes contrasted with a growing body of empirical evidence suggesting that humans are capable generalizers in infancy. For example, looking-time experiments using expected outcomes of red and white ping pong ball proportions found that 8-month-old infants appear to make inferences about the characteristics of the population from which a sample came, and vice versa when given population-level data. Other experiments have similarly supported a capacity for probabilistic inference in 6- and 11-month-old infants, but not in 4.5-month-olds.

The colored ball paradigm in these experiments did not distinguish between inferences based on quantity and inferences based on proportion; this was addressed in follow-up research in which 12-month-old infants seemed to understand proportions, basing probabilistic judgments (motivated by preferences for the more probable outcomes) on initial evidence of the proportions in their available options. In response to criticism of the effectiveness of looking-time tasks, other experiments allowed infants to search for preferred objects in single-sample probability tasks, supporting the notion that infants can infer probabilities of single events when given a small or large initial sample size. The researchers involved in these findings have argued that humans possess some statistically structured, inferential system during preverbal stages of development and prior to formal education.

It is less clear, however, how and why generalization is observed in infants: It might extend directly from detection and storage of similarities and differences in incoming data, or frequency representations. Conversely, it might be produced by something like general-purpose Bayesian inference, starting with a knowledge base that is iteratively conditioned on data to update subjective probabilities, or beliefs. This ties together questions about the statistical toolkit(s) that might be involved in learning, and how they apply to infant and childhood learning specifically.

Gopnik advocates the hypothesis that infant and childhood learning are examples of inductive inference, a general-purpose mechanism for generalization, acting upon specialized information structures ("theories") in the brain. On this view, infants and children are essentially proto-scientists because they regularly use a kind of scientific method, developing hypotheses, performing experiments via play, and updating models about the world based on their results. For Gopnik, this use of scientific thinking and categorization in development and everyday life can be formalized as models of Bayesian inference. An application of this view is the "sampling hypothesis," or the view that individual variation in children's causal and probabilistic inferences is an artifact of random sampling from a diverse set of hypotheses, and flexible generalizations based on sampling behavior and context. These views, particularly those advocating general Bayesian updating from specialized theories, are considered successors to Piaget’s theory rather than wholesale refutations because they maintain its domain-generality, viewing children as randomly and unsystematically considering a range of models before selecting a probable conclusion.

In contrast to the general-purpose mechanistic view, some researchers advocate both domain-specific information structures and similarly specialized inferential mechanisms. For example, while humans do not usually excel at explicit conditional probability calculations, such calculations are central to parsing speech sounds into comprehensible syllables, a relatively straightforward and intuitive skill that emerges as early as 8 months. Infants also appear to be good at tracking not only the spatiotemporal states of objects, but also the properties of objects, and these cognitive systems appear to be developmentally distinct. This has been interpreted as evidence of domain-specific toolkits of inference, each of which corresponds to separate types of information and has applications to concept learning.
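
A minimal sketch of the kind of conditional-probability computation involved in speech segmentation (the syllable stream and "words" below are invented for illustration): the transitional probability TP(x -> y) = count(xy) / count(x) is high within words and dips at word boundaries.

    from collections import Counter

    # An invented stream built from three nonsense "words", repeated.
    stream = ("bidaku" + "padoti" + "golabu" + "bidaku" + "golabu" + "padoti") * 5
    syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]

    unigrams = Counter(syllables)
    bigrams = Counter(zip(syllables, syllables[1:]))

    for (x, y), n in sorted(bigrams.items()):
        print(f"TP({x} -> {y}) = {n / unigrams[x]:.2f}")
    # Within-word pairs (e.g. bi -> da) come out near 1.0; pairs that span a
    # word boundary (e.g. ku -> pa) come out much lower.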

Concept formation

Infants use form similarities and differences to develop concepts relating to objects, and this relies on multiple trials with multiple patterns, exhibiting some kind of common property between trials. Infants appear to become proficient at this ability in particular by 12 months, but different concepts and properties employ different relevant principles of Gestalt psychology, many of which might emerge at different stages of development. Specifically, infant categorization at as early as 4.5 months involves iterative and interdependent processes by which exemplars (data) and their similarities and differences are crucial for drawing boundaries around categories. These abstract rules are statistical by nature, because they can entail common co-occurrences of certain perceived properties in past instances and facilitate inferences about their structure in future instances. This idea has been extrapolated by Douglas Hofstadter and Emmanuel Sander, who argue that because analogy is a process of inference relying on similarities and differences between concept properties, analogy and categorization are fundamentally the same process used for organizing concepts from incoming data.

Language learning

Infants and small children are not only capable generalizers of trait quantity and proportion, but of abstract rule-based systems such as language and music. These rules can be referred to as “algebraic rules” of abstract informational structure, and are representations of rule systems, or grammars. For language, creating generalizations with Bayesian inference and similarity detection has been advocated by researchers as a special case of concept formation. Infants appear to be proficient in inferring abstract and structural rules from streams of linguistic sounds produced in their developmental environments, and to generate wider predictions based on those rules.

For example, 9-month-old infants are capable of more quickly and dramatically updating their expectations when repeated syllable strings contain surprising features, such as rare phonemes. In general, preverbal infants appear to be capable of discriminating between grammars on which they have been trained and novel grammars. In 7-month-old infant looking-time tasks, infants seemed to pay more attention to unfamiliar grammatical structures than to familiar ones, and in a separate study using 3-syllable strings, infants appeared to have similarly generalized expectations based on abstract syllabic structure previously presented, suggesting that they used surface occurrences, or data, in order to infer deeper abstract structure. This was taken by the researchers involved to support the “multiple hypotheses [or models]” view.

Evidence in non-human animals

Grey parrots

Multiple studies by Irene Pepperberg and her colleagues suggested that Grey parrots (Psittacus erithacus) have some capacity for recognizing numbers or number-like concepts, appearing to understand ordinality and cardinality of numerals. Recent experiments also indicated that, given some language training and a capacity for referencing recognized objects, they have some ability to make inferences about probabilities and hidden object type ratios.

Non-human primates

Experiments found that when reasoning about preferred vs. non-preferred food proportions, capuchin monkeys were able to make inferences about proportions from sequentially sampled data. Rhesus monkeys were similarly capable of using probabilistic and sequentially sampled data to make inferences about rewarding outcomes, and neural activity in the parietal cortex appeared to be involved in the decision-making process when they made inferences. In a series of 7 experiments using a variety of relative frequency differences between banana pellets and carrots, orangutans, bonobos, chimpanzees and gorillas also appeared to guide their decisions based on the ratios favoring the banana pellets after this was established as their preferred food item.

Applications

Reasoning in medicine

Research on reasoning in medicine, or clinical reasoning, usually focuses on cognitive processes and/or decision-making outcomes among physicians and patients. Considerations include assessments of risk, patient preferences, and evidence-based medical knowledge. On a cognitive level, clinical inference relies heavily on interplay between abstraction, abduction, deduction, and induction. Intuitive "theories," or knowledge in medicine, can be understood as prototypes in concept spaces, or alternatively, as semantic networks. Such models serve as a starting point for intuitive generalizations to be made from a small number of cues, resulting in the physician's tradeoff between the "art and science" of medical judgement. This tradeoff was captured in an artificially intelligent (AI) program called MYCIN, which outperformed medical students, but not experienced physicians with extensive practice in symptom recognition. Some researchers argue that despite this, physicians are prone to systematic biases, or cognitive illusions, in their judgment (e.g., satisficing to make premature diagnoses, confirmation bias when diagnoses are suspected a priori).

Communication of patient risk

Statistical literacy and risk judgments have been described as problematic for physician-patient communication. For example, physicians frequently inflate the perceived risk of non-treatment, alter patients' risk perceptions by positively or negatively framing single statistics (e.g., 97% survival rate vs. 3% death rate), and/or fail to sufficiently communicate "reference classes" of probability statements to patients. The reference class is the object of a probability statement: If a psychiatrist says, for example, “this medication can lead to a 30-50% chance of a sexual problem,” it is ambiguous whether this means that 30-50% of patients will develop a sexual problem at some point, or if all patients will have problems in 30-50% of their sexual encounters.

Base rates in clinical judgment

In studies of base rate neglect, the problems given to participants often use base rates of disease prevalence. In these experiments, physicians and non-physicians are similarly susceptible to base rate neglect, or errors in calculating conditional probability. Here is an example from an empirical survey problem given to experienced physicians: Suppose that a hypothetical cancer had a prevalence of 0.3% in the population, and the true positive rate of a screening test was 50% with a false positive rate of 3%. Given a patient with a positive test result, what is the probability that the patient has cancer? When asked this question, physicians with an average of 14 years of experience in medical practice gave answers ranging from 1% to 99%, with most answers being 47% or 50%. (The correct answer is 5%.) This observation of clinical base rate neglect and conditional probability error has been replicated in multiple empirical studies. Physicians' judgments in similar problems, however, improved substantially when the rates were re-formulated as natural frequencies.
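
For reference, the arithmetic behind the stated answer, and its natural-frequency restatement, can be checked directly:

    # Bayes' theorem applied to the screening problem above:
    # 0.3% prevalence, 50% true positive rate, 3% false positive rate.
    prevalence, sensitivity, false_positive_rate = 0.003, 0.50, 0.03

    p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive_rate
    p_cancer_given_positive = prevalence * sensitivity / p_positive
    print(f"P(cancer | positive test) = {p_cancer_given_positive:.1%}")  # about 5%

    # Natural frequencies: of 10,000 people, about 30 have the cancer and about
    # 15 of them test positive, while about 299 of the 9,970 without it also
    # test positive; 15 / (15 + 299) is again roughly 5%.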

Tuesday, June 14, 2022

Electromagnetism

From Wikipedia, the free encyclopedia

Aurora over Alaska showing light created by charged particles and magnetism, fundamental concepts in the study of electromagnetism

Electromagnetism is a branch of physics involving the study of the electromagnetic force, a type of physical interaction that occurs between electrically charged particles. The electromagnetic force is carried by electromagnetic fields composed of electric fields and magnetic fields, and it is responsible for electromagnetic radiation such as light. It is one of the four fundamental interactions (commonly called forces) in nature, together with the strong interaction, the weak interaction, and gravitation. At high energy, the weak force and electromagnetic force are unified as a single electroweak force.

Electromagnetic phenomena are defined in terms of the electromagnetic force, sometimes called the Lorentz force, which includes both electricity and magnetism as different manifestations of the same phenomenon. The electromagnetic force plays a major role in determining the internal properties of most objects encountered in daily life. The electromagnetic attraction between atomic nuclei and their orbital electrons holds atoms together. Electromagnetic forces are responsible for the chemical bonds between atoms which create molecules, and intermolecular forces. The electromagnetic force governs all chemical processes, which arise from interactions between the electrons of neighboring atoms. Electromagnetism is very widely used in modern technology, and electromagnetic theory is the basis of electric power engineering and electronics including digital technology.

There are numerous mathematical descriptions of the electromagnetic field. Most prominently, Maxwell's equations describe how electric and magnetic fields are generated and altered by each other and by charges and currents.
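
For reference, in SI units and differential form these equations read (with ρ the charge density, J the current density, and ε₀, μ₀ the permittivity and permeability of free space):

    \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
    \nabla \cdot \mathbf{B} = 0,
    \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
    \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}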

The theoretical implications of electromagnetism, particularly the establishment of the speed of light based on properties of the "medium" of propagation (permeability and permittivity), led to the development of special relativity by Albert Einstein in 1905.

History of the theory

Originally, electricity and magnetism were considered to be two separate forces. This view changed with the publication of James Clerk Maxwell's 1873 A Treatise on Electricity and Magnetism in which the interactions of positive and negative charges were shown to be mediated by one force. There are four main effects resulting from these interactions, all of which have been clearly demonstrated by experiments:

  1. Electric charges attract or repel one another with a force inversely proportional to the square of the distance between them: unlike charges attract, like ones repel (this relationship is quantified by Coulomb's law, given below this list).
  2. Magnetic poles (or states of polarization at individual points) attract or repel one another in a manner similar to positive and negative charges and always exist as pairs: every north pole is yoked to a south pole.
  3. An electric current inside a wire creates a corresponding circumferential magnetic field outside the wire. Its direction (clockwise or counter-clockwise) depends on the direction of the current in the wire.
  4. A current is induced in a loop of wire when it is moved toward or away from a magnetic field, or a magnet is moved towards or away from it; the direction of current depends on that of the movement.
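
In SI units, Coulomb's law expresses the first effect quantitatively: for two point charges q_1 and q_2 separated by a distance r, the magnitude of the force is

    F = \frac{1}{4\pi\varepsilon_0} \, \frac{|q_1 q_2|}{r^2}

where ε₀ is the permittivity of free space; the force is repulsive for like charges and attractive for unlike charges.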

In April 1820, Hans Christian Ørsted observed that an electrical current in a wire caused a nearby compass needle to move. At the time of discovery, Ørsted did not suggest any satisfactory explanation of the phenomenon, nor did he try to represent the phenomenon in a mathematical framework. However, three months later he began more intensive investigations. Soon thereafter he published his findings, proving that an electric current produces a magnetic field as it flows through a wire. The CGS unit of magnetic field strength, the oersted, is named in honor of his contributions to the field of electromagnetism.

His findings resulted in intensive research throughout the scientific community in electrodynamics. They influenced French physicist André-Marie Ampère's developments of a single mathematical form to represent the magnetic forces between current-carrying conductors. Ørsted's discovery also represented a major step toward a unified concept of energy.

This unification, which was observed by Michael Faraday, extended by James Clerk Maxwell, and partially reformulated by Oliver Heaviside and Heinrich Hertz, is one of the key accomplishments of 19th-century mathematical physics. It has had far-reaching consequences, one of which was the understanding of the nature of light. Unlike what was proposed by the electromagnetic theory of that time, light and other electromagnetic waves are at present seen as taking the form of quantized, self-propagating oscillatory electromagnetic field disturbances called photons. Different frequencies of oscillation give rise to the different forms of electromagnetic radiation, from radio waves at the lowest frequencies, to visible light at intermediate frequencies, to gamma rays at the highest frequencies.

Ørsted was not the only person to examine the relationship between electricity and magnetism. In 1802, Gian Domenico Romagnosi, an Italian legal scholar, deflected a magnetic needle using a Voltaic pile. The factual setup of the experiment is not completely clear, so it is uncertain whether any current flowed across the needle. An account of the discovery was published in 1802 in an Italian newspaper, but it was largely overlooked by the contemporary scientific community, because Romagnosi seemingly did not belong to this community.

An earlier (1735), and often neglected, connection between electricity and magnetism was reported by a Dr. Cookson. The account stated:

A tradesman at Wakefield in Yorkshire, having put up a great number of knives and forks in a large box ... and having placed the box in the corner of a large room, there happened a sudden storm of thunder, lightning, &c. ... The owner emptying the box on a counter where some nails lay, the persons who took up the knives, that lay on the nails, observed that the knives took up the nails. On this the whole number was tried, and found to do the same, and that, to such a degree as to take up large nails, packing needles, and other iron things of considerable weight ...

E. T. Whittaker suggested in 1910 that this particular event was responsible for lightning being "credited with the power of magnetizing steel; and it was doubtless this which led Franklin in 1751 to attempt to magnetize a sewing-needle by means of the discharge of Leyden jars."

Fundamental forces

Representation of the electric field vector of a wave of circularly polarized electromagnetic radiation.

The electromagnetic force is one of the four known fundamental forces. The other fundamental forces are the strong interaction, the weak interaction, and gravitation.

All other forces (e.g., friction, contact forces) are derived from these four fundamental forces and they are known as non-fundamental forces.

The electromagnetic force is responsible for practically all phenomena one encounters in daily life above the nuclear scale, with the exception of gravity. Roughly speaking, all the forces involved in interactions between atoms can be explained by the electromagnetic force acting between the electrically charged atomic nuclei and electrons of the atoms. Electromagnetic forces also explain how these particles carry momentum by their movement. This includes the forces we experience in "pushing" or "pulling" ordinary material objects, which result from the intermolecular forces that act between the individual molecules in our bodies and those in the objects. The electromagnetic force is also involved in all forms of chemical phenomena.

A necessary part of understanding the intra-atomic and intermolecular forces is the effective force generated by the momentum of the electrons' movement, such that as electrons move between interacting atoms they carry momentum with them. As a collection of electrons becomes more confined, their minimum momentum necessarily increases due to the Pauli exclusion principle. The behaviour of matter at the molecular scale including its density is determined by the balance between the electromagnetic force and the force generated by the exchange of momentum carried by the electrons themselves.

Classical electrodynamics

In 1600, William Gilbert proposed, in his De Magnete, that electricity and magnetism, while both capable of causing attraction and repulsion of objects, were distinct effects. Mariners had noticed that lightning strikes had the ability to disturb a compass needle. The link between lightning and electricity was not confirmed until Benjamin Franklin's proposed experiments in 1752. One of the first to discover and publish a link between man-made electric current and magnetism was Gian Romagnosi, who in 1802 noticed that connecting a wire across a voltaic pile deflected a nearby compass needle. However, the effect did not become widely known until 1820, when Ørsted performed a similar experiment. Ørsted's work influenced Ampère to produce a theory of electromagnetism that set the subject on a mathematical foundation.

A theory of electromagnetism, known as classical electromagnetism, was developed by various physicists during the period between 1820 and 1873 when it culminated in the publication of a treatise by James Clerk Maxwell, which unified the preceding developments into a single theory and discovered the electromagnetic nature of light. In classical electromagnetism, the behavior of the electromagnetic field is described by a set of equations known as Maxwell's equations, and the electromagnetic force is given by the Lorentz force law.
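
In standard notation, the Lorentz force law states that a particle of charge q moving with velocity v through electric and magnetic fields E and B experiences a force

    \mathbf{F} = q\,(\mathbf{E} + \mathbf{v} \times \mathbf{B})

which combines the electric and magnetic contributions in a single expression.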

One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the speed of light in a vacuum is a universal constant that is dependent only on the electrical permittivity and magnetic permeability of free space. This violates Galilean invariance, a long-standing cornerstone of classical mechanics. One way to reconcile the two theories (electromagnetism and classical mechanics) is to assume the existence of a luminiferous aether through which the light propagates. However, subsequent experimental efforts failed to detect the presence of the aether. After important contributions of Hendrik Lorentz and Henri Poincaré, in 1905, Albert Einstein solved the problem with the introduction of special relativity, which replaced classical kinematics with a new theory of kinematics compatible with classical electromagnetism. (For more information, see History of special relativity.)
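
Concretely, Maxwell's equations give c = 1/\sqrt{\mu_0 \varepsilon_0}. A quick numerical sketch using the standard CODATA values of the vacuum constants recovers the familiar value:

    from math import sqrt

    epsilon_0 = 8.8541878128e-12   # vacuum permittivity, F/m
    mu_0 = 1.25663706212e-6        # vacuum permeability, H/m

    c = 1 / sqrt(mu_0 * epsilon_0)
    print(f"c = {c:,.0f} m/s")     # approximately 299,792,458 m/s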

In addition, relativity theory implies that in moving frames of reference, a magnetic field transforms to a field with a nonzero electric component and conversely, a moving electric field transforms to a nonzero magnetic component, thus firmly showing that the phenomena are two sides of the same coin. Hence the term "electromagnetism". (For more information, see Classical electromagnetism and special relativity and Covariant formulation of classical electromagnetism.)

Extension to nonlinear phenomena

Magnetic reconnection in the solar plasma gives rise to solar flares, a complex magnetohydrodynamical phenomenon.

The Maxwell equations are linear, in that a change in the sources (the charges and currents) results in a proportional change of the fields. Nonlinear dynamics can occur when electromagnetic fields couple to matter that follows nonlinear dynamical laws. This is studied, for example, in the subject of magnetohydrodynamics, which combines Maxwell theory with the Navier–Stokes equations.

Quantities and units

Electromagnetic units are part of a system of electrical units based primarily upon the magnetic properties of electric currents, the fundamental SI unit being the ampere. The SI electromagnetic quantities and their units are listed in the table below.

In the electromagnetic CGS system, electric current is a fundamental quantity defined via Ampère's law and takes the permeability as a dimensionless quantity (relative permeability) whose value in a vacuum is unity. As a consequence, the square of the speed of light appears explicitly in some of the equations interrelating quantities in this system.

Quantity symbol Name of quantity Unit name Unit symbol Base units
E energy joule J kg⋅m2⋅s−2 = C⋅V
Q electric charge coulomb C A⋅s
I electric current ampere A A (= W/V = C/s)
J electric current density ampere per square metre A/m2 A⋅m−2
ΔV; Δφ; ε potential difference; voltage; electromotive force volt V J/C = kg⋅m2⋅s−3⋅A−1
R; Z; X electric resistance; impedance; reactance ohm Ω V/A = kg⋅m2⋅s−3⋅A−2
ρ resistivity ohm metre Ω⋅m kg⋅m3⋅s−3⋅A−2
P electric power watt W V⋅A = kg⋅m2⋅s−3
C capacitance farad F C/V = kg−1⋅m−2⋅A2⋅s4
ΦE electric flux volt metre V⋅m kg⋅m3⋅s−3⋅A−1
E electric field strength volt per metre V/m N/C = kg⋅m⋅A−1⋅s−3
D electric displacement field coulomb per square metre C/m2 A⋅s⋅m−2
ε permittivity farad per metre F/m kg−1⋅m−3⋅A2⋅s4
χe electric susceptibility (dimensionless) 1 1
G; Y; B conductance; admittance; susceptance siemens S Ω−1 = kg−1⋅m−2⋅s3⋅A2
κ, γ, σ conductivity siemens per metre S/m kg−1⋅m−3⋅s3⋅A2
B magnetic flux density, magnetic induction tesla T Wb/m2 = kg⋅s−2⋅A−1 = N⋅A−1⋅m−1
Φ, ΦM, ΦB magnetic flux weber Wb V⋅s = kg⋅m2⋅s−2⋅A−1
H magnetic field strength ampere per metre A/m A⋅m−1
L, M inductance henry H Wb/A = V⋅s/A = kg⋅m2⋅s−2⋅A−2
μ permeability henry per metre H/m kg⋅m⋅s−2⋅A−2
χ magnetic susceptibility (dimensionless) 1 1
µ magnetic dipole moment ampere square meter A⋅m2 A⋅m2 = J⋅T−1 = 10^3 emu
σ mass magnetization ampere square meter per kilogram A⋅m2/kg A⋅m2⋅kg−1 = emu⋅g−1 = erg⋅G−1⋅g−1

Formulas for physical laws of electromagnetism (such as Maxwell's equations) need to be adjusted depending on what system of units one uses. This is because there is no one-to-one correspondence between electromagnetic units in SI and those in CGS, as is the case for mechanical units. Furthermore, within CGS, there are several plausible choices of electromagnetic units, leading to different unit "sub-systems", including Gaussian, "ESU", "EMU", and Heaviside–Lorentz. Among these choices, Gaussian units are the most common today, and in fact the phrase "CGS units" is often used to refer specifically to CGS-Gaussian units.

Operator (computer programming)

From Wikipedia, the free encyclopedia