
Sunday, March 31, 2024

Chunking (psychology)

From Wikipedia, the free encyclopedia

In cognitive psychology, chunking is a process by which small individual pieces of a set of information are bound together into a meaningful whole in memory. The chunks into which the information is grouped are meant to improve short-term retention of the material, thus bypassing the limited capacity of working memory and allowing working memory to operate more efficiently. A chunk is a collection of basic units that are strongly associated with one another and have been grouped together and stored in a person's memory. These chunks can be retrieved easily because of their coherent grouping. It is believed that individuals create higher-order cognitive representations of the items within a chunk, and the items are more easily remembered as a group than as individual items. Chunks can be highly subjective because they rely on an individual's perceptions and past experiences linked to the information set. The size of a chunk generally ranges from two to six items but often differs based on language and culture.

According to Johnson (1970), four main concepts are associated with the memory process of chunking: chunk, memory code, decode, and recode. The chunk, as described above, is a sequence of to-be-remembered information that can be composed of adjacent terms; these items or information sets are stored in the same memory code. Recoding is the process by which one learns the code for a chunk, and decoding is the translation of that code back into the information it represents.

The phenomenon of chunking as a memory mechanism is easily observed in the way individuals group numbers and information in day-to-day life. For example, when recalling a number such as 12101946, grouping the digits as 12, 10, and 1946 creates a mnemonic for the number as a month, day, and year: it is stored as December 10, 1946, rather than as a string of digits. Similarly, the limited capacity of working memory suggested by George Miller can be seen in the following example: when recalling a mobile phone number such as 9849523450, we might break it into 98 495 234 50. Thus, instead of remembering 10 separate digits, which is beyond the putative "seven plus-or-minus two" memory span, we remember four groups of numbers. An entire chunk can also be remembered simply by holding the beginning of the chunk in working memory, with long-term memory recovering the remainder.
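The digit grouping described above can be sketched in a few lines of Python (an illustration added here, not part of the Wikipedia text; the group sizes are arbitrary choices for the example):

def chunk_digits(digits, group_sizes):
    # Split a digit string into consecutive groups of the given sizes,
    # e.g. '9849523450' with sizes [2, 3, 3, 2] -> ['98', '495', '234', '50'].
    chunks, i = [], 0
    for size in group_sizes:
        chunks.append(digits[i:i + size])
        i += size
    return chunks

print(chunk_digits("9849523450", [2, 3, 3, 2]))  # four chunks instead of ten digits
print(chunk_digits("12101946", [2, 2, 4]))       # ['12', '10', '1946'] -> "December 10, 1946"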

Modality effect

A modality effect is present in chunking. That is, the mechanism used to convey the list of items to the individual affects how much "chunking" occurs.

Experimentally, it has been found that auditory presentation results in a larger amount of grouping in individuals' responses than visual presentation does. Previous literature, such as George Miller's The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information (1956), has shown that the probability of recalling information is greater when the chunking strategy is used. As stated above, the grouping of responses occurs as individuals place them into categories according to their inter-relatedness, based on semantic and perceptual properties. Lindley (1966) showed that because the groups produced have meaning to the participant, this strategy makes it easier for an individual to recall and maintain information in memory during studies and testing. Therefore, when chunking is used as a strategy, one can expect a higher proportion of correct recalls.

Memory training systems, mnemonics

Various kinds of memory training systems and mnemonics include training and drills in specially designed recoding or chunking schemes. Such systems existed before Miller's paper, but there was no convenient term to describe the general strategy and no substantive, reliable research. The term "chunking" is now often used in reference to these systems. As an illustration, patients with Alzheimer's disease typically experience working memory deficits, and chunking is an effective method for improving their verbal working memory performance. Patients with schizophrenia also experience working memory deficits that affect executive function, and memory training procedures positively influence cognitive and rehabilitative outcomes. Chunking has been shown to reduce the load on working memory in several ways. Besides making chunked information easier to remember, it also makes non-chunked material easier to recall because of the working memory capacity it frees. For instance, in one study, participants with more specialized knowledge could reconstruct sequences of chess moves because they had larger chunks of procedural knowledge; the level of expertise and the way retrieved information is organized thus shape how procedural-knowledge chunks are retained in short-term memory. Chunking has also been shown to influence linguistics, for example in boundary perception.

Efficient chunk sizes

Dirlam (1972) conducted a mathematical analysis to determine the most efficient chunk size. While the typical size range of chunks is known, Dirlam wanted to identify the optimum, and the analysis found that chunks of three or four items are the most efficient.

Channel capacity, "Magic number seven", Increase of short-term memory

The word chunking comes from a famous 1956 paper by George A. Miller, "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information". At a time when information theory was beginning to be applied in psychology, Miller observed that some human cognitive tasks fit the model of a "channel capacity" characterized by a roughly constant capacity in bits, but short-term memory did not. A variety of studies could be summarized by saying that short-term memory had a capacity of about "seven plus-or-minus two" chunks. Miller (1956) wrote, "With binary items, the span is about nine and, although it drops to about five with monosyllabic English words, the difference is far less than the hypothesis of constant information would require (see also, memory span). The span of immediate memory seems to be almost independent of the number of bits per chunk, at least over the range that has been examined to date." Miller acknowledged that "we are not very definite about what constitutes a chunk of information."

Miller (1956) noted that according to this theory, it should be possible to increase short-term memory for low-information-content items effectively by mentally recoding them into a smaller number of high-information-content items. He imagined this process being useful in scenarios such as "a man just beginning to learn radio-telegraphic code hears each dit and dah as a separate chunk. Soon he is able to organize these sounds into letters and then he can deal with the letters as chunks. Then the letters organize themselves as words, which are still larger chunks, and he begins to hear whole phrases." Thus, a telegrapher can effectively "remember" several dozen dits and dahs as a single phrase. Naïve subjects can remember a maximum of only nine binary items, but Miller reports a 1954 experiment in which people were trained to listen to a string of binary digits and (in one case) mentally group them into groups of five, recode each group into a name (for example, "twenty-one" for 10101), and remember the names. With sufficient practice, people found it possible to remember as many as forty binary digits. Miller wrote:

It is a little dramatic to watch a person get 40 binary digits in a row and then repeat them back without error. However, if you think of this merely as a mnemonic trick for extending the memory span, you will miss the more important point that is implicit in nearly all such mnemonic devices. The point is that recoding is an extremely powerful weapon for increasing the amount of information that we can deal with.
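The recoding scheme Miller describes (grouping binary digits in fives and naming each group by its decimal value) can be sketched as follows (our illustration, not Miller's procedure in code; the example bit string is made up):

def recode_binary(bits, group_size=5):
    # Group a binary string into fixed-size chunks and recode each chunk
    # as its decimal value, e.g. '10101' -> 21 ("twenty-one").
    groups = [bits[i:i + group_size] for i in range(0, len(bits), group_size)]
    return [int(group, 2) for group in groups]

forty_bits = "1010100110111001010101011101000110100101"
print(recode_binary(forty_bits))
# 40 binary digits collapse into 8 nameable numbers, a far lighter memory load.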

Expertise and skilled memory effects

Studies have shown that people have better memories when they are trying to remember items with which they are familiar. Similarly, people tend to create familiar chunks. This familiarity allows one to remember more individual pieces of content, and also more chunks as a whole. One well-known chunking study was conducted by Chase and Ericsson, who worked with an undergraduate student, SF, for over two years. They wanted to see whether a person's digit span could be improved with practice. SF began the experiment with a normal span of 7 digits. Because SF was a long-distance runner, chunking strings of digits into race times increased his digit span, and by the end of the experiment it had grown to 80 digits. A later description of the research in The Brain-Targeted Teaching Model for 21st Century Schools states that SF later expanded his strategy by incorporating ages and years, but his chunks were always familiar, which allowed him to recall them more easily. A person without knowledge in the expert domain (e.g., familiarity with mile or marathon times) would have difficulty chunking with race times and would ultimately be unable to memorize as many numbers using this method. The same point was shown in an experiment comparing novice and expert hikers' memory for mountain scenes: the expert hikers had better recall and recognition of structured stimuli. Another example is expert musicians, who can chunk and recall encoded material that best meets the demands they face at any given moment during a performance.

Chunking and memory in chess revisited

Previous research has shown that chunking is an effective tool for enhancing memory capacity, because grouping individual pieces into larger, more meaningful units makes them easier to remember. Chunking is a popular tool among chess players, especially masters. Chase and Simon (1973a) found that the skill of chess players can be attributed to long-term memory storage and the ability to encode and recall thousands of chunks, a process that allows knowledge to be acquired at a faster pace. Since chunking is an excellent tool for enhancing memory, a chess player who uses it has a higher chance of success. In a re-examination, Chase and Simon (1973b) concluded that an expert chess master can access information in long-term memory quickly because of the ability to recall chunks, and that chunks stored in long-term memory, built from familiar board patterns, guide decisions about the movement of pieces.

Chunking models for education

Many years of research have concluded that chunking is a reliable process for gaining knowledge and organizing information. Chunking also helps explain the behavior of experts such as teachers, and a teacher can use chunking in the classroom as a way to teach the curriculum. Gobet (2005) proposed that teachers can use chunking to segment the curriculum into natural components. A student learns better when focusing on the key features of the material, so it is important to create segments that highlight the important information. By understanding the process by which an expert is formed, it is possible to find general mechanisms of learning that can be implemented in classrooms.

Chunking in motor learning

Chunking is a method of learning that can be applied in a number of contexts and is not limited to learning verbal material. Karl Lashley, in his classic paper on serial order, argued that sequential responses that appear to be organized in a linear, flat fashion conceal an underlying hierarchical structure; this was later demonstrated in motor control by Rosenbaum et al. in 1983. Thus sequences can consist of sub-sequences, and these can, in turn, consist of sub-sub-sequences. Hierarchical representations of sequences have an advantage over linear representations: they combine efficient local action at low hierarchical levels with the guidance of an overall structure. While a linear representation of a sequence is simple from a storage point of view, it can cause problems during retrieval; for instance, if there is a break in the sequence chain, all subsequent elements become inaccessible. A hierarchical representation, by contrast, has multiple levels of representation: a break in the link between lower-level nodes does not render any part of the sequence inaccessible, since the control nodes (chunk nodes) at the higher level can still provide access to the lower-level nodes.

Schematic of a hierarchical sequential structure with three levels. The lowest level could be a linear representation, while intermediate levels denote chunk nodes. The highest level is the entire sequence.
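As a minimal sketch of the contrast drawn above (our illustration, not from Lashley or Rosenbaum et al.), a chunked sequence can be stored as nested lists in which each sub-list is a chunk node, and the full sequence is regenerated by traversing from the top-level node:

# Three-level structure: whole sequence -> chunk nodes -> individual elements.
sequence = [["A", "B", "C"], ["D", "E"], ["F", "G", "H"]]

def flatten(node):
    # A chunk node is a list of sub-chunks; a leaf is an individual element.
    # Damage inside one chunk leaves the other chunks reachable from the top node,
    # unlike a flat list, where a broken link cuts off everything after it.
    if isinstance(node, list):
        return [leaf for child in node for leaf in flatten(child)]
    return [node]

print(flatten(sequence))  # ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H']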

Terrace (2001) identified chunks in motor learning by the pauses between successive actions, and suggested that during the sequence performance stage (after learning), participants download list items as chunks during those pauses. He also argued for an operational definition of chunks, distinguishing the notions of input and output chunks from the ideas of short-term and long-term memory. Input chunks reflect the limitation of working memory during the encoding of new information (how new information is stored in long-term memory) and how it is retrieved during subsequent recall. Output chunks reflect the organization of over-learned motor programs that are generated on-line in working memory. Sakai et al. (2003) showed that participants spontaneously organize a sequence into a number of chunks over a few trial sets, and that these chunks differed among participants tested on the same sequence. They also demonstrated that performance of a shuffled sequence was poorer when the chunk patterns were disrupted than when they were preserved. Chunking patterns also seem to depend on the effectors used.

In a series of experiments, Perlman found that larger tasks broken down into smaller sections were completed faster than the same task tackled as a whole. The study suggests that chunking a larger task into smaller, more manageable tasks can produce a better outcome, and that completing the task in a coherent order, rather than switching from one task to another, also produces better results.

Chunking in infants

Adults use chunking in different ways, drawing on low-level perceptual features, category membership, semantic relatedness, and statistical co-occurrences between items. Recent studies show that infants also use chunking, relying on different types of knowledge such as conceptual knowledge, spatiotemporal cues, and knowledge of their social domain.

Studies have compared different chunking models, such as PARSER and Bayesian models. PARSER is a chunking model designed to account for human behavior by implementing psychologically plausible processes of attention, memory, and associative learning. A recent study found that infant behavior is better captured by chunking models like PARSER than by Bayesian models, partly because PARSER is typically endowed with the ability to process up to three chunks simultaneously.

When infants use their social knowledge, they need to rely on abstract knowledge and subtle cues, because they cannot form a perception of their social group on their own. Infants can also form chunks using shared features or spatial proximity between objects.

Chunking in seven-month-old infants

Previous research shows that the mechanism of chunking is available to seven-month-old infants, meaning that chunking can occur even before working memory capacity has completely developed. Because working memory has a very limited capacity, it can be beneficial to use chunking, and for infants, whose working memory capacity is not yet fully developed, chunking can be even more helpful. These studies used the violation-of-expectation method and recorded how long infants watched the objects in front of them. Although the experiments showed that infants can use chunking, the researchers also concluded that an infant's ability to chunk will continue to develop over the next year of life.

Chunking in 14-month-old infants

Working memory appears to store no more than three objects at a time in infants and early toddlers. A 2014 study, Infants use temporal regularities to chunk objects in memory, provided new insight. This research showed that 14-month-old infants, like adults, can chunk using their knowledge of object categories: they remembered four total objects when an array contained two tokens of two different types (e.g., two cats and two cars), but not when the array contained four tokens of the same type (e.g., four different cats). Infants may also use spatial closeness to bind representations of particular items into chunks, benefiting memory performance as a result. Thus, even though infants' working memory capacity is restricted, they can employ numerous forms of information to bind representations of individual items into chunks, enhancing memory efficiency.

Chunking as the learning of long-term memory structures

This usage derives from Miller's (1956) idea of chunking as grouping, but the emphasis is now on long-term memory rather than only on short-term memory. A chunk can then be defined as "a collection of elements having strong associations with one another, but weak associations with elements within other chunks". The emphasis on long-term memory is supported by the idea that chunks exist in long-term memory but assist with redintegration, which is involved in the recall of information in short-term memory: it may be easier to recall information in short-term memory if that information has been represented through chunking in long-term memory. Norris and Kalm (2021) argued that "redintegration can be achieved by treating recall from memory as a process of Bayesian inference whereby representations of chunks in LTM (long-term memory) provide the priors that can be used to interpret a degraded representation in STM (short-term memory)". In Bayesian inference, priors are initial beliefs about the relative frequency with which one event occurs rather than other plausible events. When more information arrives, these beliefs are combined with the likelihood of each plausible event to predict which specific event occurred. Chunks in long-term memory supply the priors and thus help determine the likelihood and prediction involved in recalling information from short-term memory. For example, if an acronym and its full meaning already exist in long-term memory, recalling information about that acronym in short-term memory becomes easier.
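The Bayesian account sketched by Norris and Kalm can be illustrated with a toy calculation (the chunk inventory, prior values, and noise model below are invented for the example, not taken from their paper): chunk frequencies in long-term memory act as priors, and the degraded short-term trace supplies the likelihood.

# Toy redintegration: infer which stored chunk a degraded short-term trace came from.
priors = {"NASA": 0.5, "NAVY": 0.3, "NODE": 0.2}   # assumed familiarity of chunks in long-term memory

def likelihood(trace, chunk, noise=0.1):
    # Probability of the observed (partially missing) trace given a candidate chunk.
    p = 1.0
    for observed, stored in zip(trace, chunk):
        if observed is None:                 # a forgotten position carries no evidence
            continue
        p *= (1 - noise) if observed == stored else noise
    return p

def posterior(trace):
    scores = {c: priors[c] * likelihood(trace, c) for c in priors}
    total = sum(scores.values())
    return {c: round(s / total, 3) for c, s in scores.items()}

degraded = ["N", "A", None, "A"]   # the third letter has been lost from short-term memory
print(posterior(degraded))         # 'NASA' comes out as by far the most probable chunk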

Chase and Simon in 1973, and later Gobet, Retschitzki, and de Voogt in 2004, showed that chunking could explain several phenomena linked to expertise in chess. Following a brief exposure to pieces on a chessboard, skilled chess players were able to encode and recall much larger chunks than novice chess players. However, this effect is mediated by specific knowledge of the rules of chess; when pieces were distributed randomly (including arrangements that were uncommon or not allowed in real games), the difference in chunk size between skilled and novice chess players was significantly reduced. Several successful computational models of learning and expertise have been developed using this idea, such as EPAM (Elementary Perceiver and Memorizer) and CHREST (Chunk Hierarchy and Retrieval Structures). Chunking can also be demonstrated in the acquisition of a memory skill, as shown by S. F., an undergraduate student with average memory and intelligence, who increased his digit span from seven to almost 80 within 20 months, after at least 230 hours of practice. S. F. improved his digit span partly through mnemonic associations, which are a form of chunking: he associated digits, which were unfamiliar information to him, with running times, ages, and dates, which were familiar to him. Ericsson et al. (1980) initially hypothesized that S. F.'s increased digit span was due to an increase in his short-term memory capacity. However, they rejected this hypothesis when they found that his short-term memory capacity remained the same: he "chunked" only three to four digits at once, never rehearsed more than six digits at once, and never rehearsed more than four groups in a supergroup. Finally, if his short-term memory capacity had increased, he would have shown a greater span for letters as well; he did not. Based on these observations, Ericsson et al. (1980) concluded that S. F. was able to increase his digit span through "the use of mnemonic associations in long-term memory," which further supports the view that chunks reside in long-term memory rather than in short-term memory.

Chunking has also been applied to models of language acquisition, and chunk-based learning in language has been shown to be helpful. Understanding a group of basic words and then introducing categories of associated words to build on comprehension has been shown to be an effective way to teach reading and language to children. Research studies have found that adults and infants were able to parse the words of a made-up language when exposed to a continuous auditory sequence of the words arranged in random order. One explanation is that they parsed the words using small chunks corresponding to the made-up language. Subsequent studies have supported the idea that when learning involves statistical regularities (e.g., transitional probabilities in language), it may be better explained by chunking models. Franco and Destrebecqz (2012) further studied chunking in language acquisition and found that the presence of a temporal cue was associated with reliable predictions of the chunking model regarding learning, whereas the absence of the cue was associated with increased sensitivity to the strength of transitional probabilities. Their findings suggest that the chunking model can explain only certain aspects of language learning.

Chunking learning style and short-term memory

Norris conducted a study in 2020 of chunking and short-term memory recall and found that when a chunk is presented it is stored as a single item, even though it contains a relatively large amount of information. This finding suggests that chunks should be less susceptible to decay or interference when they are recalled. The study used visual stimuli in which all the items were presented simultaneously. Chunks of two and three items were recalled more easily than single items, and more single items were recalled when they were presented alongside chunks of three.

Chunking can be a form of data compression that allows more information to be stored in short-term memory. Rather than measuring verbal short-term memory by the number of individual items stored, Miller (1956) suggested that it is measured in chunks. Later studies examined whether chunking acts as a form of data compression when memory space is limited. Chunking works as data compression when the information is redundant, allowing more information to be held in short-term memory, although memory capacity may still vary.

Chunking and working memory

An experiment examined how chunking could benefit patients with Alzheimer's disease, building on work showing that chunking improves working memory in healthy young people. Working memory is impaired in the early stages of Alzheimer's disease, which affects the ability to carry out everyday tasks as well as the executive control of working memory. The study found that participants with mild Alzheimer's disease were able to use working memory strategies such as chunking to enhance both verbal and spatial working memory performance.

It has long been thought that chunking can improve working memory. One study examined how chunking improves working memory for symbolic sequences and how it interacts with gating mechanisms. Twenty-five participants learned 16 sequences through trial and error: a target was presented alongside a distractor, and participants identified the target using the right or left button of a computer mouse. The final analysis included only 19 participants. The results showed that chunking improves symbolic sequence performance by decreasing cognitive load and supporting real-time strategy use. Chunking has thus proved effective in reducing the load involved in adding items to working memory, allowing more items to be encoded into working memory and making more of them available for transfer into long-term memory.

Chunking and Two-Factor Theory

Chekaf, Cowan, and Mathy (2016) looked at how immediate memory relates to the formation of chunks and proposed a two-factor theory of chunk formation in immediate memory. The two factors are compressibility and the order of the information. Compressibility refers to making information more compact and condensed: the material is transformed from something complex into something more simplified, so compressibility relates to chunking through predictability. The second factor, the order of the information, can affect what regularities are discovered, so order, along with the process of compressing the material, may increase the probability that chunking occurs. These two factors interact with one another, and both matter for chunking. Chekaf, Cowan, and Mathy (2016) give the example that the material "1, 2, 3, 4" can be compressed to "numbers one through four," whereas "1, 3, 2, 4" cannot be compressed in the same way because of the order in which it is presented. Therefore, compressibility and order both play an important role in chunking.
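A small sketch of the compressibility idea (our illustration, not Chekaf, Cowan, and Mathy's procedure): a sequence that forms a consecutive run can be stored as one compact description, while the same items in a different order cannot.

def compress_run(seq):
    # If the sequence is a consecutive ascending run, return a compact description;
    # otherwise fall back to listing the items one by one.
    if all(b == a + 1 for a, b in zip(seq, seq[1:])):
        return f"numbers {seq[0]} through {seq[-1]}"
    return ", ".join(str(x) for x in seq)

print(compress_run([1, 2, 3, 4]))   # 'numbers 1 through 4'  (compressible, easy to chunk)
print(compress_run([1, 3, 2, 4]))   # '1, 3, 2, 4'           (same items, but order blocks compression)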

Monogamy in animals

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Monogamy_in_animals

Monogamous pairing in animals refers to the natural history of mating systems in which species pair bond to raise offspring. This is associated, usually implicitly, with sexual monogamy.

Monogamous mating

Monogamy is defined as a pair bond between two adult animals of the same species. The pair may cohabit in an area or territory for some duration of time and, in some cases, may copulate and reproduce only with each other. Monogamy may be either short-term, lasting one to a few seasons, or long-term, lasting many seasons and, in extreme cases, life-long. Monogamy can be partitioned into two categories, social monogamy and genetic monogamy, which may occur together in some combination or completely independently of one another. As an example, in the cichlid species Variabilichromis moorii, a monogamous pair will care for eggs and young together, but the eggs may not all be fertilized by the male giving the care. Monogamy in mammals is rather rare, occurring in only 3–9% of species. A much larger percentage of avian species (about 90%) are known to have monogamous relationships, but most practice social rather than genetic monogamy, in contrast to what researchers previously assumed. Monogamy is quite rare in fish and amphibians, but not unheard of, appearing in a select few species.

Social monogamy

Social monogamy refers to the cohabitation of one male and one female. The two individuals may cooperate in search of resources such as food and shelter and/or in caring for young. Paternal care in monogamous species is commonly displayed through carrying, feeding, defending, and socializing offspring. Social monogamy does not necessarily imply sexual fidelity between the male and the female: a purely socially monogamous pair may in effect be polygamous or polyandrous through extra-pair copulation. Social monogamy has been shown to increase fitness in prairie voles: female prairie voles live longer when paired with males in a socially monogamous relationship, possibly because the shared energy expenditure of the male and female lowers each individual's input. In largemouth bass, females are sometimes seen to exhibit cuckoo-like behavior by laying some of their eggs in another female's nest, thus "stealing" fertilizations from other females. Sexual conflicts proposed to arise from social monogamy include infidelity and parental investment. The proposed conflict derives from the conflict-centric differential allocation hypothesis, which states that there is a tradeoff between investment and attractiveness.

Genetic monogamy

Genetic monogamy refers to a mating system in which fidelity of the bonding pair is exhibited. Though individual pairs may be genetically monogamous, no one species has been identified as fully genetically monogamous.

In some species, genetic monogamy has been enforced. Female voles have shown no difference in fecundity with genetic monogamy, but it may be enforced by males in some instances. Mate guarding is a typical tactic in monogamous species. It is present in many animal species and can sometimes be expressed in lieu of parental care by males. This may be for many reasons, including paternity assurance.

Evolution of monogamy in animals

While the evolution of monogamy in animals cannot be broadly ascertained, there are several theories as to how monogamy may have evolved.

Anisogamy

Anisogamy is a form of sexual reproduction which involves the fusion of two unequally-sized gametes. In many animals, there are two sexes: the male, in which the gamete is small, motile, usually plentiful, and less energetically expensive, and the female, in which the gamete is larger, more energetically expensive, made at a lower rate, and largely immobile. Anisogamy is thought to have evolved from isogamy, the fusion of similar gametes, multiple times in many different species.

The introduction of anisogamy has caused males and females to tend toward different optimal mating strategies. Males may increase their fitness by mating with many females, whereas females are limited by their own fecundity; females are therefore typically more selective in choosing mates. Monogamy is suggested to limit fitness differences, as males and females mate in pairs. This would seem to be non-beneficial to males, but that is not the case in all circumstances. Several behavioral and ecological factors may have led to the evolution of monogamy as a relevant mating strategy: partner and resource availability, enforcement, mate assistance, and territory defense may be among the most prevalent factors affecting animal behavior.

Facultative monogamy

First introduced by Kleiman, facultative monogamy occurs when females are widely dispersed. This can occur either because females in a species tend to be solitary or because the distribution of available resources causes females to thrive when separated into distinct territories. In these instances, there is less of a chance for a given male to find multiple females to mate with. It then becomes more advantageous for a male to remain with one female rather than seeking out another and risking (a) not finding another female and/or (b) not being able to stop another male from interfering with his offspring, whether by mating with the female or through infanticide. In these situations, male-to-male competition is reduced and female choice is limited. The end result is that mate choice is more random than in a denser population, which has a number of effects, including limiting dimorphism and sexual selection.

With resource availability limited, mating with multiple mates may be harder because the density of individuals is lowered. The habitat cannot sustain multiple mates, so monogamy may be more prevalent: resources may be found more easily by the pair than by an individual. The argument from resource availability has been supported in many species, but in several species monogamy persists even once resource availability increases.

With increased resource availability, males may offset the restriction on their fitness through several means. In instances of social monogamy, males may offset any lowered fitness through extra-pair coupling, in which males and females mate with several partners but only raise offspring with one mate. The male may not be related to all of the offspring of his main mate, but some of his offspring are being raised in other broods by other males and females, thereby offsetting any limitation of monogamy. Such males are cuckolds, but because they have other female sexual partners, they cuckold other males and increase their own fitness. Males exhibit parental care in order to be an acceptable mate to the female; any male that does not exhibit parental care would not be accepted as a sexual partner by socially monogamous females in an enforcement pattern.

Obligate monogamy

Kleiman also offered a second theory. In obligate monogamy, the driving force behind monogamy is a greater need for paternal investment. This theory assumes that without biparental care the fitness of offspring would be greatly reduced. The paternal care may or may not be equal to the maternal care.

Related to paternal care, some researchers have argued that infanticide is the true cause of monogamy. This theory has not garnered much support, however, and has been critiqued by several authors, including Lukas and Clutton-Brock, and Dixson.

Enforcement

Monogamous mating may also be caused simply by enforcement through tactics such as mate guarding. In these species, the males will prevent other males from copulating with their chosen female or vice versa. Males will help to fend off other aggressive males, and keep their mate for themselves. This is not seen in all species, such as some primates, in which the female may be more dominant than the male and may not need help to avoid unwanted mating; the pair may still benefit from some form of mate assistance, however, and therefore monogamy may be enforced to ensure the assistance of males. Bi-parental care is not seen in all monogamous species, however, so this may not be the only cause of female enforcement.

Mate assistance and territory defense

In species where mate guarding is not needed, there may still be a need for the pair to protect each other. An example of this would be sentinel behavior in avian species. The main advantage of sentinel behavior is that many survival tactics are improved. As stated, the male or female will act as a sentinel and signal to their mate if a predator is present. This can lead to an increase in survivorship, foraging, and incubation of eggs.

Male care for offspring is rather rare in some taxa. This is because males may increase their fitness by searching for multiple mates, whereas females are limited in fitness by their fecundity, so multiple mating does not affect their fitness to the same extent. Males have the opportunity to find a new mate earlier than females when fertilization is internal or when females provide the majority of the care for the offspring. When males care for offspring as well as females, it is referred to as bi-parental care.

Bi-parental care may occur when there is a lower chance of survival of the offspring without male care. The evolution of this care has been associated with energetically expensive offspring. Bi-parental care is exhibited in many avian species. In these cases, the male has a greater chance to increase his own fitness by seeing that his offspring live long enough to reproduce. If the male is not present in these populations, the survivorship of the offspring is drastically lowered and there is a lowering in male fitness. Without monogamy, bi-parental care is less common and there is an increased chance of infanticide. Infanticide with monogamous pairing would lead to a lowered fitness for socially monogamous males and is not seen to a wide extent.

Consequences of monogamous mating

Monogamy as a mating system in animals is thought to lower the levels of some pre- and post-copulatory competition. Because of this reduction in competition, in some instances the selection maintaining certain morphological characteristics may be relaxed, resulting in a wide variety of morphological and physiological differences, such as in sexual dimorphism and sperm quality.

Sexual dimorphism

Sexual dimorphism denotes differences between males and females of the same species. Even in animals with no visible morphological sexual dimorphism, there is still dimorphism in the gametes: among mammals, males have the smaller gametes and females the larger ones. Once the two sexes emerge, dimorphism in gamete structure and size may lead to further dimorphism in the species. Sexual dimorphism often evolves in response to male-male competition and female choice. In polygamous species there is marked sexual dimorphism, typically in sexually signaling aspects of morphology: males usually exhibit the dimorphic traits, which help in signaling to females or in male-male competition. In monogamous species sexual conflict is thought to be lessened, and typically little or no sexual dimorphism is noted, with less ornamentation and armor, because sexual selection is relaxed. This may have something to do with a feedback loop caused by low population density: if sexual selection is too strenuous in a low-density population, the population will shrink, and in subsequent generations sexual selection becomes less and less relevant as mating becomes more random. A similar feedback loop is thought to occur for sperm quality in genetically monogamous pairs.

Sperm quality

Once anisogamy has emerged in a species, gamete dimorphism brings an inherent level of competition, seen at the very least as sperm competition. Sperm competition is a post-copulatory mode of sexual selection that drives the diversity of sperm across species. Once sperm and egg are the predominant mating types, there is an increased demand on the male gametes, because a large number of unsuccessful sperm represent energy expenditure without any benefit from the individual sperm. Sperm in polygamous mating systems have evolved for size, speed, structure, and quantity; this competition selects for competitive traits that can be pre- or post-copulatory. In species where cryptic female choice is one of the main sources of competition, females are able to choose sperm from among various male suitors, and typically the sperm of the highest quality are selected.

In genetically monogamous species, sperm competition can be expected to be absent or severely limited. There is no selection for the highest-quality sperm among the sperm of multiple males, and copulation is more random than in polygamous situations. Therefore, sperm quality in monogamous species shows higher variation, and lower-quality sperm have been noted in several species; the lack of sperm competition is not advantageous for sperm quality. An example is the Eurasian bullfinch, which exhibits relaxed selection and little sperm competition: the sperm of these males have a lower velocity than those of closely related but polygamous passerine species, and abnormalities in sperm structure, length, and count are increased compared with similar bird families.

Animals

The evolution of mating systems in animals has received an enormous amount of attention from biologists. This section briefly reviews three main findings about the evolution of monogamy in animals.

The amount of social monogamy in animals varies across taxa, with over 90% of birds engaging in social monogamy while only 3–9% of mammals are known to do the same.

These explanations are not exhaustive; other factors may also contribute to the evolution of social monogamy. Moreover, different sets of factors may explain the evolution of social monogamy in different species. There is no one-size-fits-all explanation of why different species evolved monogamous mating systems.

Sexual dimorphism

Sexual dimorphism refers to differences in body characteristics between females and males. A frequently studied type of sexual dimorphism is body size. For example, among mammals, males typically have larger bodies than females. In other orders, however, females have larger bodies than males. Sexual dimorphism in body size has been linked to mating behavior.

In polygynous species, males compete for control over sexual access to females. Large males have an advantage in this competition and consequently pass their genes to a greater number of offspring, which eventually leads to large differences in body size between females and males: polygynous males are often 1.5 to 2.0 times larger than females. In monogamous species, on the other hand, females and males have more equal access to mates, so there is little or no sexual dimorphism in body size. From a newer biological point of view, monogamy could result from mate guarding, arising as a result of sexual conflict.

Some researchers have attempted to infer the evolution of human mating systems from the evolution of sexual dimorphism. Several studies have reported a large amount of sexual dimorphism in Australopithecus, an evolutionary ancestor of human beings that lived between 2 and 5 million years ago.

These studies raise the possibility that Australopithecus had a polygamous mating system. Sexual dimorphism then began to decrease. Studies suggest sexual dimorphism reached modern human levels around the time of Homo erectus 0.5 to 2 million years ago. This line of reasoning suggests human ancestors started out polygamous and began the transition to monogamy somewhere between 0.5 million and 2 million years ago.

Attempts to infer the evolution of monogamy based on sexual dimorphism remain controversial for several reasons:

  • The skeletal remains of Australopithecus are quite fragmentary. This makes it difficult to identify the sex of the fossils. Researchers sometimes identify the sex of the fossils by their size, which, of course, can exaggerate findings of sexual dimorphism.
  • Recent studies using new methods of measurement suggest Australopithecus had the same amount of sexual dimorphism as modern humans. This raises questions about the amount of sexual dimorphism in Australopithecus.
  • Humans may have been partially unique in that selection pressures for sexual dimorphism might have been related to the new niches that humans were entering at the time, and how that might have interacted with potential early cultures and tool use. If these early humans had a differentiation of gender roles, with men hunting and women gathering, selection pressures in favor of increased size may have been distributed unequally between the sexes.
  • Even if future studies clearly establish sexual dimorphism in Australopithecus, other studies have shown the relationship between sexual dimorphism and mating system is unreliable. Some polygamous species show little or no sexual dimorphism. Some monogamous species show a large amount of sexual dimorphism.

Studies of sexual dimorphism raise the possibility that early human ancestors were polygamous rather than monogamous. But this line of research remains highly controversial. It may be that early human ancestors showed little sexual dimorphism, and it may be that sexual dimorphism in early human ancestors had no relationship to their mating systems.

Testis size


The relative sizes of male testes often reflect mating systems. In species with promiscuous mating systems, where many males mate with many females, the testes tend to be relatively large. This appears to be the result of sperm competition. Males with large testes produce more sperm and thereby gain an advantage impregnating females. In polygynous species, where one male controls sexual access to females, the testes tend to be small. One male defends exclusive sexual access to a group of females and thereby eliminates sperm competition.

Studies of primates support the relationship between testis size and mating system. Chimpanzees, which have a promiscuous mating system, have large testes compared to other primates. Gorillas, which have a polygynous mating system, have smaller testes than other primates. Humans, which have a socially monogamous mating system, have moderately sized testes. The moderate amounts of sexual non-monogamy in humans may result in a low to moderate amount of sperm competition.

Monogamy as a best response

In species where the young are particularly vulnerable and may benefit from protection by both parents, monogamy may be an optimal strategy. Monogamy also tends to occur when populations are small and dispersed, conditions that are not conducive to polygamous behavior, as the male would spend far more time searching for another mate. Monogamous behavior allows the male to have a mate consistently without having to expend energy searching for other females. Furthermore, there is an apparent connection between the time a male invests in his offspring and monogamous behavior: a male that is required to care for the offspring to ensure their survival is much more likely to exhibit monogamous behavior than one that is not.

The selection factors favoring different mating strategies in a species may, however, operate on a large number of factors throughout that animal's life cycle. For instance, with many species of bear, the female will often drive a male off soon after mating and will later guard her cubs from him, possibly because too many bears close to one another may deplete the food available to the relatively small but growing cubs. Monogamy may be social but rarely genetic; for example, in the cichlid species Variabilichromis moorii, a monogamous pair will care for their eggs and young, but the eggs are not all fertilized by the same male. Thierry Lodé argued that monogamy results from a conflict of interest between the sexes known as sexual conflict.

Monogamous species

Some species have adopted monogamy with great success. For instance, the male prairie vole mates exclusively with the first female he ever mates with; he is extremely loyal and will even attack other females that approach him. This behavior has been linked to the hormone vasopressin, which is released when a male mates and cares for young. Because of this hormone's rewarding effects, the male experiences positive feelings when maintaining a monogamous relationship. To further test this theory, vasopressin receptors were introduced into another, promiscuous vole species, and the originally unfaithful voles became monogamous with their selected partners. These same receptors are found in the human brain and vary at the individual level, which could help explain why some human males tend to be more loyal than others.

Black vultures stay together because it is more beneficial for their young to be cared for by both parents: they take turns incubating the eggs and then supplying their fledglings with food. Black vultures will also attack other vultures that are participating in extra-pair copulation, an apparent attempt to enforce monogamy and decrease promiscuous behavior. Similarly, emperor penguins stay together to care for their young, owing to the harshness of the Antarctic weather, predators, and the scarcity of food. One parent protects the chick while the other finds food. However, these penguins remain monogamous only until the chick can go off on its own; once the chick no longer needs their care, approximately 85% of parents part ways and typically find a new partner every breeding season.

Hornbills are a socially monogamous bird species that usually have only one mate throughout their lives, much like the prairie vole. The female closes herself up in a nest cavity, sealed with a nest plug, for two months; during this time she lays eggs and is cared for by her mate. The male works to support himself, his mate, and his offspring for their survival; unlike the emperor penguin, however, hornbills do not find new partners each season.

It is relatively uncommon to find monogamous relationships in fish, amphibians, and reptiles; however, the red-backed salamander and the Caribbean cleaner goby both practice monogamy. The male Caribbean cleaner goby has, however, been found to separate from the female suddenly, leaving her abandoned. A study conducted by Oregon State University found that this fish practices not true monogamy but serial monogamy: the goby has multiple monogamous relationships throughout its life, but is only in one relationship at a time. The red-backed salamander exhibited signs of social monogamy, in which animals form pairs to mate and raise offspring but still partake in extra-pair copulation with various males or females in order to increase their biological fitness. This is a relatively new concept in salamanders and has not been observed frequently, and there is a concern that monogamy may inhibit the salamanders' reproductive rates and biological success. However, a study conducted in cooperation between the University of Louisiana at Lafayette and the University of Virginia showed that the salamanders are not inhibited by this monogamy if they pursue alternative strategies with other mates.

Azara's night monkeys are another species that has proved to be monogamous. In an 18-year study conducted by the University of Pennsylvania, these monkeys proved to be entirely monogamous, exhibiting no genetic or observational evidence that extra-pair copulation was occurring. This helps explain why the male owl monkey invests so much time in protecting and raising his own offspring: because monogamy amounts to "placing all your eggs in one basket," the male wants to ensure his young survive and thus pass on his genes.

The desert grass spider, Agelenopsis aperta, is mostly monogamous as well. Male size is the determining factor in fights over a female, with the larger male emerging as the winner, since greater size signals success for future offspring.

Other monogamous species include wolves, otters, a few hooved animals, some bats, certain species of fox, and the Eurasian beaver. This beaver is particularly interesting, as it practices monogamy in the parts of Europe to which it has been reintroduced, whereas its American counterpart is not monogamous at all and often engages in promiscuous behavior. The two species are quite similar in ecology, though American beavers tend to be less aggressive than European beavers. In this instance, the scarcity of the European beaver population could drive its monogamous behavior; moreover, monogamy lowers the risk of parasite transmission, which is correlated with biological fitness. Monogamy is proving efficient for this beaver, as its population is climbing.

Inbreeding avoidance

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Inbreeding_avoidance

Inbreeding avoidance, or the inbreeding avoidance hypothesis, is a concept in evolutionary biology that refers to the prevention of the deleterious effects of inbreeding. Animals only rarely exhibit inbreeding avoidance. The inbreeding avoidance hypothesis posits that certain mechanisms develop within a species, or within a given population of a species, as a result of assortative mating and natural and sexual selection, in order to prevent breeding among related individuals. Although inbreeding may impose certain evolutionary costs, inbreeding avoidance, which limits the number of potential mates for a given individual, can inflict opportunity costs. Therefore, a balance exists between inbreeding and inbreeding avoidance. This balance determines whether inbreeding mechanisms develop and the specific nature of such mechanisms.

A 2007 study showed that inbred mice had significantly reduced survival when they were reintroduced into a natural habitat.

Inbreeding can result in inbreeding depression, the reduction of the fitness of a population due to inbreeding. Inbreeding depression occurs via the appearance of disadvantageous traits caused by the pairing of deleterious recessive alleles in a mating pair's progeny. When two related individuals mate, the probability of deleterious recessive alleles pairing in the resulting offspring is higher than when non-related individuals mate, because of increased homozygosity. However, inbreeding also provides an opportunity for the genetic purging of deleterious alleles that would otherwise continue to exist in the population and could potentially increase in frequency over time. Another possible negative effect of inbreeding is a weakened immune system, due to a less diverse set of immunity alleles than outbreeding would produce.

A review of the genetics of inbreeding depression in wild animal and plant populations, as well as in humans, led to the conclusion that inbreeding depression and its opposite, heterosis (hybrid vigor), are predominantly caused by the presence of recessive deleterious alleles in populations. Inbreeding, including self-fertilization in plants and automictic parthenogenesis (thelytoky) in hymenoptera, tends to lead to the harmful expression of deleterious recessive alleles (inbreeding depression). Cross-fertilization between unrelated individuals ordinarily leads to the masking of deleterious recessive alleles in progeny.

Many studies have demonstrated that homozygous individuals are often disadvantaged with respect to heterozygous individuals. For example, a study conducted on a population of South African cheetahs demonstrated that the lack of genetic variability among individuals in the population has resulted in negative consequences for individuals, such as a greater rate of juvenile mortality and spermatozoal abnormalities. When heterozygotes possess a fitness advantage relative to a homozygote, a population with a large number of homozygotes will have a relatively reduced fitness, thus leading to inbreeding depression. Through these described mechanisms, the effects of inbreeding depression are often severe enough to cause the evolution of inbreeding avoidance mechanisms.

Mechanisms

Inbreeding avoidance mechanisms have evolved in response to selection against inbred offspring. Inbreeding avoidance occurs in nature by at least four mechanisms: kin recognition, dispersal, extra-pair/extra-group copulations, and delayed maturation/reproductive suppression. These mechanisms are not mutually exclusive and more than one can occur in a population at a given time.

Kin recognition

Golden hamsters have been shown to use their own phenotypes as a template in order to differentiate between kin and non-kin via olfaction.
Figure: mean time (±s.d.) that outbred and inbred female fish spent in the choice zone courting non-kin males versus brothers.

Kin recognition is the mechanism by which individuals identify and avoid mating with closely related conspecifics. There are numerous documented instances in which individuals are shown to find closely related conspecifics unattractive. In one set of studies, researchers formed artificial relative and non-relative mate pairs (artificial meaning that individuals were deliberately paired for the purposes of the experiments) and compared the reproductive results of the two groups. In these studies, paired relatives demonstrated reduced reproduction and greater mating reluctance compared with non-relatives. For example, in a study by Simmons on field crickets, female crickets exhibited greater mating latency with paired siblings and half-siblings than with non-siblings. In another set of studies, researchers allowed individuals to choose their mates from conspecifics spanning a spectrum of relatedness; individuals were more likely to choose non-related over related conspecifics. For example, in a study by Krackow et al., male wild house mice were placed in an arena with four separate openings leading to cages with bedding from conspecifics. The conspecifics exhibited a range of relatedness to the test subjects, and the males significantly preferred the bedding of non-siblings to the bedding of related females.

Studies have shown that kin recognition is more developed in species in which dispersal patterns facilitate frequent adult kin encounters.

There is considerable variation in the mechanisms used for kin recognition. These mechanisms include recognition based on association or familiarity, on an individual's own phenotypic cues, on chemical cues, and on the MHC genes. In association/familiarity mechanisms, individuals learn the phenotypic profiles of their kin and use this template for kin recognition. Many species accomplish this by becoming "familiar" with their siblings, litter mates, or nestmates. These species rely on offspring being reared in close proximity to achieve kin recognition; this is called the Westermarck effect. For example, Holmes and Sherman conducted a comparative study in Arctic ground squirrels and Belding's ground squirrels. They manipulated the reared groups to include both siblings and cross-fostered nestmates and found that in both species the individuals were equally aggressive toward their nestmates, regardless of kinship. In certain species where social groups are highly stable, relatedness and association between infants and other individuals are usually highly correlated; therefore, degree of association can be used as a proxy for kinship in kin recognition.

Individuals can also use their own characteristics or phenotype as a template in kin recognition. For example, in one study, Mateo and Johnston reared golden hamsters only with non-kin and later tested whether the hamsters could differentiate between the odors of related and non-related individuals without any postnatal encounters with kin. The hamsters were able to discriminate between the odors, demonstrating the use of their own phenotype for kin recognition. This study also provides an example of a species using chemical cues for kin recognition.

The major histocompatibility complex genes, or MHC genes, have been implicated in kin recognition. One idea is that the MHC genes code for a specific pheromone profile for each individual, which is used to discriminate between kin and non-kin conspecifics. Several studies have demonstrated the involvement of the MHC genes in kin recognition. For example, Manning et al. conducted a study in house mice that examined the species's behavior of communal nesting, or nursing one's own pups as well as the pups of other individuals. As Manning et al. state, kin selection theory predicts that house mice will selectively nurse the pups of their relatives in order to maximize inclusive fitness. Manning et al. demonstrated that house mice use the MHC genes when discriminating between kin and non-kin, preferring individuals who share the same allelic forms of the MHC genes.

Human kin recognition

The possible use of olfaction-biased mechanisms in human kin recognition and inbreeding avoidance was examined in three different types of study. The results indicated that olfaction may help mediate the development during childhood of incest avoidance (the Westermarck effect).

Post-copulatory inbreeding avoidance in mice

Experiments using in vitro fertilization in the mouse provided evidence of sperm selection at the gametic level. When sperm of sibling and non-sibling males were mixed, a fertilization bias toward the sperm of the non-sibling males was observed. The results were interpreted as egg-driven sperm selection against related sperm.

Inbreeding avoidance in plants

Experiments were performed with the dioecious plant Silene latifolia to test whether post-pollination selection favors less related pollen donors and reduces inbreeding. The results showed that in S. latifolia, and presumably in other plant systems with inbreeding depression, pollen or embryo selection after multiple-donor pollination may reduce inbreeding.

Dispersal

In Gombe Stream National Park, male chimpanzees remain in their natal community while females disperse to other groups.
A related figure shows that the likelihood of mating with kin (f ≥ 0.03125) decreases with natal dispersal distance (m); fitted values are shown with their 95% confidence interval, and a horizontal line marks the overall population-average likelihood of inbreeding.

Some species adopt dispersal as a way to separate close relatives and prevent inbreeding. The initial form of dispersal is known as natal dispersal, in which individuals move away from the area of their birth. Subsequently, species may resort to breeding dispersal, in which individuals move from one non-natal group to another. Nelson-Flower et al. (2012) studied southern pied babblers and found that individuals may travel farther when dispersing from natal groups than from non-natal groups, which may reflect the greater likelihood of encountering kin within local ranges when dispersing from the natal area. The extent to which an individual of a particular species will disperse depends on whether the benefits of dispersing outweigh both the costs of inbreeding and the costs of dispersal; long-distance movements can carry mortality risks and energetic costs.

Sex-biased dispersal

In many cases of dispersal, one sex shows a greater tendency to disperse from the natal area than the other. The extent of bias toward a particular sex depends on numerous factors, including but not limited to the mating system, social organization, inbreeding and dispersal costs, and physiological factors. When the costs and benefits of dispersal are symmetric for males and females, no sex-biased dispersal is expected.

Female dispersal

Birds tend to adopt monogamous mating systems in which males remain in their natal groups to defend familiar territories with high resource quality. Because females generally expend more energy in producing offspring, inbreeding is costly for them in terms of offspring survival and reproductive success, so females benefit more by dispersing and choosing among these territorial males. In addition, according to the Oedipus hypothesis, daughters can cheat their mothers through brood parasitism, so mothers evict their daughters from the nest, forcing them to disperse. Female dispersal is not restricted to birds; in mammals, males may remain philopatric when the average adult male residency in a breeding group exceeds the average age at which females mature and conceive. For example, in a community of chimpanzees in Gombe National Park, males tend to remain in their natal community for the duration of their lives, while females typically move to other communities as soon as they reach maturity.

Male dispersal

Male dispersal is more common in mammals with cooperative breeding and polygynous mating systems. In Australian marsupials such as Antechinus, juvenile males have a greater tendency to disperse from their natal groups while females remain philopatric; because Antechinus males die shortly after mating, dispersing males often arrive at natal groups of females that contain no resident males. Furthermore, the Oedipus hypothesis also states that fathers in polygynous systems will evict sons with the potential to cuckold them. Polygynous mating systems also heighten intrasexual competition between males: where dominant males can guard multiple females, subordinate males are often forced to disperse to non-natal groups.

When species adopt alternative inbreeding avoidance mechanisms, these can indirectly influence dispersal: females' preference for males from non-natal groups then selects for male dispersal.

Delayed maturation

The delayed sexual maturation of offspring in the presence of their parents is another mechanism by which individuals avoid inbreeding. Delayed maturation can end with the removal of the opposite-sex parent, as in female lions, which exhibit estrus earlier when their fathers are replaced by new males. Another form of delayed maturation involves parental presence inhibiting reproductive activity; for example, mature marmoset offspring are reproductively suppressed in the presence of opposite-sex parents and siblings in their social groups. Reproductive suppression occurs when sexually mature individuals in a group are prevented from reproducing by behavioral or chemical stimuli from other group members. Social cues from the surrounding environment often dictate when reproductive activity is suppressed, and these cues typically involve interactions between same-sex adults. If current conditions for reproduction are unfavorable, such as when inbreeding is the only available means of reproducing, individuals may increase their lifetime reproductive success by timing their reproductive attempts to coincide with more favorable conditions, which they can achieve by suppressing reproductive activity under poor conditions.

Inbreeding avoidance between philopatric offspring and their parents or siblings severely restricts the breeding opportunities of subordinates living in their social groups. A study by O'Riain et al. (2000) examined meerkat social groups and the factors affecting reproductive suppression in subordinate females. They found that in family groups the absence of a dominant individual of either sex led to reproductive quiescence, and that reproductive activity resumed only when another sexually mature female attained dominance and an unrelated male immigrated into the group. Reproduction thus required the presence of an unrelated opposite-sex partner, which acted as the stimulus releasing subordinates from the reproductive suppression they had experienced in the presence of the original dominant individual.

DNA analysis has shown that 60% of offspring in splendid fairywren nests were sired through extra-pair copulations rather than by resident males.

Extra-pair copulations

In various species, females benefit from mating with multiple males, producing more offspring with greater genetic diversity and potentially higher quality. Females that are pair-bonded to a male of poor genetic quality, as can be the case in inbreeding, are more likely to engage in extra-pair copulations to improve their reproductive success and the survivability of their offspring. The improved quality of offspring arises either from the intrinsic effects of good genes or from interactions between compatible genes from the parents. In inbreeding, loss of heterozygosity contributes to the overall decrease in reproductive success, but when individuals engage in extra-pair copulations, mating between genetically dissimilar individuals increases heterozygosity.

Extra-pair copulations involve a number of costs and benefits for both male and female animals. For males, extra-pair copulation means spending more time away from the original pairing in search of other females, which risks the original female being fertilized by other males in the meantime, leading to a loss of paternity. Whether this cost is worthwhile depends on whether the male succeeds in fertilizing the other females' eggs. For females, extra-pair copulations ensure egg fertilization and provide access to compatible sperm, increasing genetic variety and avoiding the expression of damaging recessive alleles that comes with inbreeding. Through extra-pair mating, females can maximize the genetic variability of their offspring, providing protection against environmental changes that might otherwise harm the more homozygous populations that inbreeding tends to produce.

Whether a female engages in extra-pair copulations for the sake of inbreeding avoidance depends on whether the costs of extra-pair copulation outweigh the costs of inbreeding. In extra-pair copulations, both inbreeding costs and pair-bond male loss (leading to the loss of paternal care) must be considered with the benefits of reproductive success that extra-pair copulation provides. When paternal care is absent or has little influence on offspring survivability, it is generally favorable for females to engage in extra-pair mating to increase reproductive success and avoid inbreeding.

Gaps

Inbreeding avoidance has been studied via three main methods: (1) observing individual behavior in the presence and absence of close kin, (2) contrasting costs of avoidance with costs of tolerating close inbreeding, and (3) comparing observed and random frequencies of close inbreeding. No method is perfect, giving rise to questions about the completeness and consistency of the inbreeding avoidance hypothesis. Although the first option, individual behavioral observation, is preferred and most widely used, there is still debate over whether it can provide definitive evidence for inbreeding avoidance.

A majority of the literature on inbreeding avoidance was published at least 15 years ago, leaving room for the field to be revisited with contemporary experimental methods and technology. Molecular techniques such as DNA fingerprinting have become more advanced and accessible, improving the efficiency and accuracy of measuring relatedness. Studying inbreeding avoidance in carnivores has attracted increased interest due to ongoing work to explain their social behaviors.

Transcranial magnetic stimulation

From Wikipedia, the free encyclopedia
 
Transcranial magnetic stimulation (schematic diagram)

Transcranial magnetic stimulation (TMS) is a noninvasive form of brain stimulation in which a changing magnetic field is used to induce an electric current at a specific area of the brain through electromagnetic induction. An electric pulse generator, or stimulator, is connected to a magnetic coil connected to the scalp. The stimulator generates a changing electric current within the coil which creates a varying magnetic field, inducing a current within a region in the brain itself.

TMS has shown diagnostic and therapeutic potential for the central nervous system across a wide variety of disease states in neurology and mental health, and research is still evolving.

Adverse effects of TMS appear rare and include fainting and seizure. Other potential issues include discomfort, pain, hypomania, cognitive change, hearing loss, and inadvertent current induction in implanted devices such as pacemakers or defibrillators.

Medical uses

A magnetic coil is positioned on the patient's head.

TMS does not require surgery or electrode implantation.

Its use can be diagnostic and/or therapeutic. Effects vary based on frequency and intensity of the magnetic pulses as well as the length of treatment, which dictates the total number of pulses given. TMS treatments are approved by the FDA in the US and by NICE in the UK for the treatment of depression and are predominantly provided by private clinics. TMS stimulates cortical tissue without the pain sensations produced in transcranial electrical stimulation.
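
As a rough illustration of how those parameters combine (a hypothetical sketch in Python with made-up numbers, not an approved clinical protocol), the total number of pulses in a session follows directly from the stimulation frequency, the length of each pulse train, and the number of trains:

    # Hypothetical rTMS session arithmetic: total pulses delivered.
    # The parameter values below are illustrative only, not a clinical protocol.
    def total_pulses(frequency_hz: float, train_duration_s: float, n_trains: int) -> int:
        # Pulses per train (frequency x train length) times the number of trains.
        return int(frequency_hz * train_duration_s) * n_trains

    # Example: 10 Hz trains lasting 4 s each, repeated 75 times,
    # deliver 10 * 4 * 75 = 3000 pulses in the session.
    print(total_pulses(frequency_hz=10, train_duration_s=4, n_trains=75))  # prints 3000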

Diagnosis

TMS can be used clinically to measure activity and function of specific brain circuits in humans, most commonly with single or paired magnetic pulses. The most widely accepted use is in measuring the connection between the primary motor cortex of the central nervous system and the peripheral nervous system to evaluate damage related to past or progressive neurologic insult. TMS has utility as a diagnostic instrument for myelopathy, amyotrophic lateral sclerosis, and multiple sclerosis.

Treatment

Repetitive high-frequency TMS (rTMS) has been investigated as a possible treatment option, with varying degrees of success, in a range of neurological and psychiatric conditions (see Research below).

Adverse effects

Although TMS is generally regarded as safe, risks are increased for therapeutic rTMS compared to single or paired diagnostic TMS. Adverse effects generally increase with higher frequency stimulation.

The greatest immediate risk from TMS is fainting, though this is uncommon. Seizures have been reported, but are rare. Other adverse effects include short term discomfort, pain, brief episodes of hypomania, cognitive change, hearing loss, impaired working memory, and the induction of electrical currents in implanted devices such as cardiac pacemakers.

Procedure

During the procedure, a magnetic coil is positioned at the head of the person receiving the treatment using anatomical landmarks on the skull, in particular the inion and nasion. The coil is then connected to a pulse generator, or stimulator, that delivers electric current to the coil.

Physics

TMS – butterfly coils

TMS uses electromagnetic induction to generate an electric current across the scalp and skull. A plastic-enclosed coil of wire is held next to the skull and when activated, produces a varying magnetic field oriented orthogonally to the plane of the coil. The changing magnetic field then induces an electric current in the brain that activates nearby nerve cells in a manner similar to a current applied superficially at the cortical surface.

The magnetic field is about the same strength as magnetic resonance imaging (MRI), and the pulse generally reaches no more than 5 centimeters into the brain unless using a modified coil and technique for deeper stimulation.

Transcranial magnetic stimulation is achieved by quickly discharging current from a large capacitor into a coil to produce pulsed magnetic fields between 2 and 3 teslas in strength. Directing the magnetic field pulse at a targeted area of the brain induces a localized electrical current in the tissue, changing transmembrane potentials and thereby depolarizing or hyperpolarizing neurons at that site, making them more or less excitable, respectively.
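
The induction step can be summarized with Faraday's law (a rough order-of-magnitude sketch; the pulse numbers are typical values used only for illustration, not the specification of any particular device). The electric field induced in tissue is driven by the rate of change of the magnetic field,

    curl E = −∂B/∂t,

so what matters is not only the peak field but how fast it changes: a pulse that rises to roughly 2 T in about 100 microseconds gives

    ∂B/∂t ≈ 2 T / 10⁻⁴ s = 2 × 10⁴ T/s,

a rate of change large enough to induce electric fields in the cortex on the order needed to depolarize neurons near the coil.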

TMS usually stimulates to a depth of 2 to 4 cm below the surface, depending on the coil and intensity used. Consequently, only superficial brain areas can be affected. Deep TMS can reach up to 6 cm into the brain to stimulate deeper layers of the motor cortex, such as the region that controls leg motion. The path of the induced current can be difficult to model because the brain is irregularly shaped, with variable internal density and water content, leading to nonuniform induced currents throughout its tissues.

Frequency and duration

The effects of TMS can be divided based on frequency, duration and intensity (amplitude) of stimulation:

  • Single or paired pulse TMS causes neurons in the neocortex under the site of stimulation to depolarize and discharge an action potential. If used in the primary motor cortex, it produces muscle activity referred to as a motor evoked potential (MEP) which can be recorded on electromyography. If used on the occipital cortex, 'phosphenes' (flashes of light) might be perceived by the subject. In most other areas of the cortex, there is no conscious effect, but behaviour may be altered (e.g., slower reaction time on a cognitive task), or changes in brain activity may be detected using diagnostic equipment.
  • Repetitive TMS produces longer-lasting effects which persist past the period of stimulation. rTMS can increase or decrease the excitability of the corticospinal tract depending on the intensity of stimulation, coil orientation, and frequency. Low-frequency rTMS, with a stimulus frequency below 1 Hz, is believed to inhibit cortical firing, while high-frequency stimulation, above 1 Hz, is believed to provoke it. Though the mechanism is not clear, it has been suggested to reflect changes in synaptic efficacy related to long-term potentiation (LTP) and long-term depression (LTD)-like plasticity.

Coil types

Most devices use a coil shaped like a figure-eight to deliver a shallow magnetic field that affects more superficial neurons in the brain. Differences in magnetic coil design are considered when comparing results, with important elements including the type of material, geometry and specific characteristics of the associated magnetic pulse.

The core material may be either a magnetically inert substrate ('air core') or a solid, ferromagnetically active material ('solid core'). Solid cores transfer electrical energy to the magnetic field more efficiently and lose less energy to heat, so they can be operated at the higher pulse volumes of therapy protocols without interruption due to overheating. Varying the geometric shape of the coil can change the focality, shape, and depth of penetration of the field. Differences in coil material and its power supply also affect magnetic pulse width and duration.

A number of different types of coils exist, each of which produces a different magnetic field pattern. The round coil is the original design used in TMS. The figure-eight (butterfly) coil was later developed to provide a more focal pattern of activation in the brain, and the four-leaf coil for focal stimulation of peripheral nerves. The double-cone coil conforms more closely to the shape of the head. The Hesed (H-core), circular crown and double-cone coils allow more widespread activation and deeper magnetic penetration; they are intended to reach deeper areas of the motor cortex and cerebellum, such as those controlling the legs and pelvic floor, though the increased depth comes at the cost of a less focused magnetic pulse.

History

Luigi Galvani (1737–1798) undertook research on the effects of electricity on the body in the late-eighteenth century and laid the foundations for the field of electrophysiology. In the 1830s Michael Faraday (1791–1867) discovered that an electrical current had a corresponding magnetic field, and that changing one could induce its counterpart.

Work to directly stimulate the human brain with electricity started in the late 1800s, and by the 1930s the Italian physicians Cerletti and Bini had developed electroconvulsive therapy (ECT).[37] ECT became widely used to treat mental illness, and ultimately overused, as it began to be seen as a panacea. This led to a backlash in the 1970s.

In 1980 Merton and Morton successfully used transcranial electrical stimulation (TES) to stimulate the motor cortex. However, this process was very uncomfortable, and subsequently Anthony T. Barker began to search for an alternative to TES. He began exploring the use of magnetic fields to alter electrical signaling within the brain, and the first stable TMS devices were developed in 1985. They were originally intended as diagnostic and research devices, with evaluation of their therapeutic potential being a later development. The United States' FDA first approved TMS devices in October 2008.

Research

TMS has shown potential therapeutic effect in neurological conditions such as mild to moderate Alzheimer's disease, amyotrophic lateral sclerosis, persistent vegetative states, epilepsy, stroke, tinnitus, multiple sclerosis, schizophrenia, and traumatic brain injury.

With Parkinson's disease, early results suggest that low frequency stimulation may have an effect on medication associated dyskinesia, and that high frequency stimulation improves motor function. The most effective treatment protocols appear to involve high frequency stimulation of the motor cortex, particularly on the dominant side, but with more variable results for treatment of the dorsolateral prefrontal cortex. It is less effective than electroconvulsive therapy for motor symptoms, though both appear to have utility. Cerebellar stimulation has also shown potential for the treatment of levodopa associated dyskinesia.

In psychiatry, it has shown potential with anxiety disorders, including panic disorder and obsessive–compulsive disorder (OCD). The most promising areas to target for OCD appear to be the orbitofrontal cortex and the supplementary motor area. Older protocols that targeted the prefrontal dorsal cortex were less successful. It has also been studied with autism, substance abuse, addiction, and post-traumatic stress disorder (PTSD). For treatment-resistant major depressive disorder, high-frequency (HF) rTMS of the left dorsolateral prefrontal cortex (DLPFC) appears effective and low-frequency (LF) rTMS of the right DLPFC has probable efficacy. Research on the efficacy of rTMS in non-treatment-resistant depression is limited.

TMS can also be used to map functional connectivity between the cerebellum and other areas of the brain.

A study on alternative Alzheimer's treatments at the Wahrendorff Clinic in Germany in 2021 reported that 84% of participants experienced positive effects after the treatment.

Under the supervision of Professor Marc Ziegenbein, a specialist in psychiatry and psychotherapy, 77 subjects with mild to moderate Alzheimer's disease received repeated transcranial magnetic stimulation and were observed over a period of time.

Improvements were mainly found in the areas of orientation in the environment, concentration, general well-being and satisfaction.

Study blinding

Mimicking the physical discomfort of TMS with placebo to discern its true effect is a challenging issue in research. It is difficult to establish a convincing placebo for TMS during controlled trials in conscious individuals due to the neck pain, headache and twitching in the scalp or upper face associated with the intervention. In addition, placebo manipulations can affect brain sugar metabolism and MEPs, which may confound results. This problem is exacerbated when using subjective measures of improvement. Placebo responses in trials of rTMS in major depression are negatively associated with refractoriness to treatment.

A 2011 review found that most studies did not report unblinding. In the minority that did, participants in real and sham rTMS groups were not significantly different in their ability to correctly guess their therapy, though there was a trend for participants in the real group to more often guess correctly.

Animal model limitations

TMS research in animal models is limited, in part because TMS received early US Food and Drug Administration approval for treatment-resistant depression, which limited the development of animal-specific magnetic coils.

Treatments for the general public

Regulatory approvals

Neurosurgery planning

Nexstim obtained Section 510(k) clearance under the United States Federal Food, Drug, and Cosmetic Act for the assessment of the primary motor cortex for pre-procedural planning in December 2009, and for neurosurgical planning in June 2011.

Depression

The National Institutes of Health estimates that depression medications work for 60 to 70 percent of the people who take them, and the World Health Organization reports that the number of people living with depression has increased nearly 20 percent since 2005. TMS is approved as a Class II medical device under the FDA's "de novo pathway". In a 2012 study, TMS was found to improve depression significantly in 58 percent of patients and to provide complete remission of symptoms in 37 percent. In 2002, the Cochrane Library reviewed randomized controlled trials using TMS to treat depression; the review did not find a difference between rTMS and sham TMS, except for a period two weeks after treatment. In 2018, the Cochrane Library stated a plan to contact authors about updating the review of rTMS for depression.

Obsessive–compulsive disorder (OCD)

In August 2018, the US Food and Drug Administration (US FDA) authorized the use of TMS developed by the Israeli company Brainsway in the treatment of obsessive–compulsive disorder (OCD).

In 2020, US FDA authorized the use of TMS developed by the U.S. company MagVenture Inc. in the treatment of OCD.

In 2023, US FDA authorized the use of TMS developed by the U.S. company Neuronetics Inc. in the treatment of OCD.

Other neurological areas

In the European Economic Area, various versions of Deep TMS H-coils have CE marking for Alzheimer's disease, autism, bipolar disorder, epilepsy, chronic pain, major depressive disorder, Parkinson's disease, post-traumatic stress disorder (PTSD), schizophrenia (negative symptoms) and to aid smoking cessation. One review found tentative benefit for cognitive enhancement in healthy people.

Coverage by health services and insurers

United Kingdom

The United Kingdom's National Institute for Health and Care Excellence (NICE) issues guidance to the National Health Service (NHS) in England, Wales, Scotland and Northern Ireland (UK). NICE guidance does not cover whether or not the NHS should fund a procedure. Local NHS bodies (primary care trusts and hospital trusts) make decisions about funding after considering the clinical effectiveness of the procedure and whether the procedure represents value for money for the NHS.

NICE evaluated TMS for severe depression (IPG 242) in 2007, and subsequently considered TMS for reassessment in January 2011 but did not change its evaluation. The Institute found that TMS is safe, but there is insufficient evidence for its efficacy.

In January 2014, NICE reported the results of an evaluation of TMS for treating and preventing migraine (IPG 477). NICE found that short-term TMS is safe but that there is insufficient evidence to evaluate its safety for long-term or frequent use. It found that evidence on the efficacy of TMS for the treatment of migraine is limited in quantity, and that evidence for the prevention of migraine is limited in both quality and quantity.

Subsequently, in 2015, NICE approved the use of TMS for the treatment of depression in the UK and IPG542 replaced IPG242. NICE said "The evidence on repetitive transcranial magnetic stimulation for depression shows no major safety concerns. The evidence on its efficacy in the short-term is adequate, although the clinical response is variable. Repetitive transcranial magnetic stimulation for depression may be used with normal arrangements for clinical governance and audit."

United States: commercial health insurance

In 2013, several commercial health insurance plans in the United States, including Anthem, Health Net, and Blue Cross Blue Shield of Nebraska and of Rhode Island, covered TMS for the treatment of depression for the first time. In contrast, UnitedHealthcare issued a medical policy for TMS in 2013 that stated there is insufficient evidence that the procedure is beneficial for health outcomes in patients with depression. UnitedHealthcare noted that methodological concerns raised about the scientific evidence studying TMS for depression include small sample size, lack of a validated sham comparison in randomized controlled studies, and variable uses of outcome measures. Other commercial insurance plans whose 2013 medical coverage policies stated that the role of TMS in the treatment of depression and other disorders had not been clearly established or remained investigational included Aetna, Cigna and Regence.

United States: Medicare

Policies for Medicare coverage vary among local jurisdictions within the Medicare system, and Medicare coverage for TMS has varied among jurisdictions and with time. For example:

  • In early 2012 in New England, Medicare covered TMS for the first time in the United States. However, that jurisdiction later decided to end coverage after October 2013.
  • In August 2012, the jurisdiction covering Arkansas, Louisiana, Mississippi, Colorado, Texas, Oklahoma, and New Mexico determined that there was insufficient evidence to cover the treatment, but the same jurisdiction subsequently determined that Medicare would cover TMS for the treatment of depression after December 2013.
  • Subsequently, some other Medicare jurisdictions added Medicare coverage for depression.

Romance (love)

From Wikipedia, the free encyclopedia