
Monday, August 3, 2020

Behavior analysis of child development

From Wikipedia, the free encyclopedia
 
The behavioral analysis of child development originates from John B. Watson's behaviorism. Watson studied child development, looking specifically at development through conditioning. He helped bring a natural science perspective to child psychology by introducing objective research methods based on observable and measurable behavior. B.F. Skinner extended this model to cover operant conditioning and verbal behavior, and was able to focus these research methods on feelings and on how such emotions are shaped by a subject's interaction with the environment. Sidney Bijou (1955) was the first to use this methodological approach extensively with children.

History

In 1948, Sidney Bijou took a position as associate professor of psychology at the University of Washington and served as director of the university's Institute of Child Development. Under his leadership, the Institute added a child development clinic, nursery school classrooms, and a research lab. Bijou began working with Donald Baer in the Department of Human Development and Family Life at the University of Kansas, applying behavior analytic principles to child development in an area referred to as "Behavioral Development" or "Behavior Analysis of Child Development". Skinner's behavioral approach and Kantor's interbehavioral approach were adopted in Bijou and Baer's model. They created a three-stage model of development (i.e., basic, foundational, and societal). Bijou and Baer looked at these socially determined stages, as opposed to organizing behavior into change points or cusps (behavioral cusps). In the behavioral model, development is considered a behavioral change, dependent on the kind of stimulus and the person's behavioral and learning function. Behavior analysis in child development takes a mechanistic, contextual, and pragmatic approach.

From its inception, the behavioral model has focused on prediction and control of the developmental process. The model focuses on the analysis of a behavior and then synthesizes the action to support the original behavior. The model was changed after Richard J. Herrnstein studied the matching law of choice behavior, developed by studying reinforcement in the natural environment. More recently, the model has focused on behavior over time and the way that behavioral responses become repetitive: it has become concerned with how behavior is selected over time and forms into stable patterns of responding. A detailed history of this model was written by Pelaez. In 1995, Henry D. Schlinger, Jr. provided the first behavior analytic text since Bijou and Baer's to comprehensively show how behavior analysis, a natural science approach to human behavior, could be used to understand existing research in child development. In addition, the quantitative behavioral developmental model by Commons and Miller is the first behavioral theory and research program to address a notion similar to stage.

Research methods

The methods used to analyze behavior in child development are based on several types of measurements. Single-subject research with a longitudinal study follow-up is a commonly used approach. Current research is focused on integrating single-subject designs through meta-analysis to determine the effect sizes of behavioral factors in development. Lag sequential analysis has become popular for tracking the stream of behavior during observations. Group designs are increasingly being used. Model construction research involves latent growth modeling to determine developmental trajectories and structural equation modeling. Rasch analysis is now widely used to show sequentiality within a developmental trajectory.

A recent methodological development in behavior analytic theory is the use of observational methods combined with lag sequential analysis to identify reinforcement in the natural setting.
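As an illustration, a minimal lag-1 sequential analysis can be sketched in a few lines of Python. The behavior codes and the observation stream below are hypothetical, and comparing a conditional probability against a base rate is only the simplest form of the technique (published analyses add significance tests such as z-scores):

```python
# Minimal lag-1 sequential analysis sketch (hypothetical codes and data).
# Question: does adult attention (A) follow child initiations (C) more often
# than the base rate of attention would predict?
from collections import Counter

# Coded observation stream: C = child initiates, A = adult attends, O = other.
stream = list("COACAOCACOOACAOCACAO")

pair_counts = Counter(zip(stream, stream[1:]))      # lag-1 transition counts

p_attend = stream.count("A") / len(stream)          # base rate of attention
p_attend_after_child = pair_counts[("C", "A")] / stream.count("C")

# A conditional probability well above the base rate marks attention as a
# candidate reinforcer for the child's behavior in the natural setting.
print(f"base rate: {p_attend:.2f}, after child behavior: {p_attend_after_child:.2f}")
```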

Quantitative behavioral development

The model of hierarchical complexity is a quantitative analytic theory of development. It offers an explanation, in terms of developmental sequences, for why certain tasks are acquired earlier than others, and accounts for the biological, cultural, organizational, and individual principles of performance. It quantifies the order of hierarchical complexity of a task based on explicit and mathematical measurements of behavior.

Research

Contingencies, uncertainty, and attachment

The behavioral model of attachment recognizes the role of uncertainty for an infant, given the child's limited communication abilities. Contingent relationships are instrumental in behavior analytic theory, because much emphasis is placed on those actions that produce parents’ responses.

The importance of contingency appears to be highlighted in other developmental theories, but the behavioral model recognizes that contingency must be determined by two factors: the efficiency of the action and that efficiency compared to other tasks that the infant might perform at that point. Both infants and adults function in their environments by understanding these contingent relationships. Research has shown that contingent relationships lead to emotionally satisfying relationships.

Since 1961, behavioral research has shown that there is a relationship between parents’ responses to separation from the infant and the outcomes of a “stranger situation.” In a study done in 2000, six infants participated in a classic reversal-design study that assessed the infants' rate of approach to a stranger. If attention was contingent on stranger avoidance, the infant avoided the stranger; if attention was contingent on approach, the infant approached the stranger.

Recent meta-analytic studies of this contingency-based model of attachment found a moderate effect of contingency on attachment, which increased to a large effect size when the quality of reinforcement was considered. Other research on contingency highlights its effect on the development of both pro-social and anti-social behavior. These effects can be furthered by training parents to become more sensitive to children's behaviors. Meta-analytic research supports the notion that attachment is operant-based learning.

An infant's sensitivity to contingencies can be affected by biological factors and environmental changes. Studies show that being placed in erratic environments with few contingencies may cause a child to have conduct problems and may lead to depression (see the section on depression below). Research continues to look at the effects of learning-based attachment on moral development. Some studies have shown that erratic use of contingencies by parents early in life can produce devastating long-term effects for the child.

Motor development

Since Watson developed the theory of behaviorism, behavior analysts have held that motor development represents a conditioning process: the crawling, climbing, and walking displayed by infants represent the conditioning of biologically innate reflexes. In this view, the stepping reflex is the respondent behavior, and such reflexes are environmentally conditioned through experience and practice. This position was criticized by maturation theorists, who believed that the stepping reflex in infants actually disappeared over time and was not "continuous". Working with a slightly different theoretical model, while still using operant conditioning, Esther Thelen was able to show that children's stepping reflex disappears as a function of increased physical weight; when infants were placed in water, the same stepping reflex returned. This supported the continuity of the stepping reflex and offered behavior analysts a progressive stimulation model.

Infants deprived of physical stimulation or the opportunity to respond were found to have delayed motor development. Under conditions of extra stimulation, the motor behavior of these children rapidly improved. Some research has shown that the use of a treadmill can be beneficial to children with motor delays, including those associated with Down syndrome and cerebral palsy. Research on the opportunity to respond and the building of motor development continues today.

The behavioral development model of motor activity has produced a number of successful techniques, including operant-based biofeedback to facilitate development. Stimulation methods such as operant-based biofeedback have been applied successfully as treatment to children with cerebral palsy and even spinal injury. Brucker's group demonstrated that specific operant conditioning-based biofeedback procedures can be effective in establishing more efficient use of surviving central nervous system cells after injury or after birth complications (such as cerebral palsy). While such methods are not a cure and gains tend to be in the moderate range, they do show the ability to enhance functioning.

Imitation and verbal behavior

Behaviorists have studied verbal behavior since the 1920s. E.A. Esper (1920) studied associative models of language, which have evolved into the current language interventions of matrix training and recombinative generalization. Skinner (1957) created a comprehensive taxonomy of language for speakers; for the listener, Zettle and Hayes (1989), along with Don Baer, provided a developmental analysis of rule-governed behavior. According to Skinner, language learning depends on environmental variables, which can be mastered by a child through imitation, practice, and selective reinforcement, including automatic reinforcement.

B.F. Skinner was one of the first psychologists to take the role of imitation in verbal behavior seriously as a mechanism of acquisition. He identified echoic behavior as one of his basic verbal operants, postulating that verbal behavior is learned by an infant from a verbal community. Skinner's account takes verbal behavior beyond an intra-individual process to an inter-individual process. He defined verbal behavior as "behavior reinforced through the mediation of others". Noam Chomsky famously challenged Skinner's assumptions.

In the behavioral model, the child is prepared to contact the contingencies to "join" the listener and speaker. At the very core, verbal episodes involve the rotation of the roles as speaker and listener. These kinds of exchanges are called conversational units and have been the focus of research at Columbia's communication disorders department.

Conversational units are a measure of socialization, because they consist of verbal interactions in which the exchange is reinforced by both the speaker and the listener. H.C. Chu (1998) demonstrated contextual conditions for inducing and expanding conversational units between children with autism and non-handicapped siblings in two separate experiments. The acquisition of conversational units and the expansion of verbal behavior decreased incidences of physical "aggression" in the Chu study, and several other reviews suggest similar effects. The joining of the listener and speaker progresses from listener-speaker rotations with others as a likely precedent for the three major components of speaker-as-own-listener: say-do correspondence, self-talk conversational units, and naming.

Development of self

Robert Kohlenberg and Mavis Tsai (1991) created a behavior analytic model accounting for the development of one's “self”. Their model proposes that verbal processes can be used to form a stable sense of who we are through behavioral processes such as stimulus control. Kohlenberg and Tsai developed functional analytic psychotherapy to treat psychopathological disorders arising from the frequent invalidation of a child's statements, such that “I” does not emerge. Other behavior analytic models of personality disorders exist; they trace out the complex biological–environmental interaction behind the development of avoidant and borderline personality disorders. They draw on reinforcement sensitivity theory, which states that some individuals are more or less sensitive to reinforcement than others. Nelson-Gray views problematic response classes as being maintained by reinforcing consequences or through rule governance.

Socialization

Over the last few decades, studies have supported the idea that contingent use of reinforcement and punishment over extended periods of time leads to the development of both pro-social and anti-social behaviors. However, research has shown that reinforcement is more effective than punishment when teaching behavior to a child. It has also been shown that modeling is more effective than “preaching” in developing pro-social behavior in children. Rewards have also been closely studied in relation to the development of social behaviors in children: the building of self-control, empathy, and cooperation has implicated rewards as a successful tactic, while sharing has been strongly linked with reinforcement.

The development of social skills in children is largely affected in the classroom setting by both teachers and peers, and reinforcement and punishment play major roles here as well. Peers frequently reinforce each other's behavior. One of the major areas that teachers and peers influence is sex-typed behavior, while peers also largely influence modes of initiating interaction and aggression. Peers are more likely to punish cross-gender play while reinforcing gender-specific play. Some studies found that teachers were more likely to reinforce dependent behavior in females.

Behavioral principles have also been researched in emerging peer groups, with a focus on status. Research shows that it takes different social skills to enter groups than it does to maintain or build one's status in groups. Research also suggests that neglected children are the least interactive and the least aversive, yet remain relatively unknown in groups. Children suffering from social problems do see an improvement in social skills after behavior therapy and behavior modification (see applied behavior analysis). Modeling has been used successfully to increase participation by shy and withdrawn children, and shaping of socially desirable behavior through positive reinforcement seems to have some of the most positive effects on children experiencing social problems.

Anti-social behavior

Etiological models of anti-social behavior show considerable correlation with negative reinforcement and response matching. Escape conditioning, through the use of coercive behavior, has a powerful effect on the development and use of future anti-social tactics. The use of anti-social tactics during conflicts can be negatively reinforced and eventually come to be seen as functional for the child in moment-to-moment interactions. Anti-social behaviors also develop in children when imitation is reinforced by social approval; if approval is not given by teachers or parents, it can often be given by peers. An example of this is swearing: imitating a parent, sibling, peer, or a character on TV, a child may engage in the anti-social behavior of swearing, and reinforcement from those around them will lead to an increase in that behavior. The role of stimulus control has also been extensively explored in the development of anti-social behavior. While correspondence between saying and doing has long interested behavior analysts studying normal development and typical socialization, recent conceptualizations of anti-social behavior have focused on rule-governed behavior: families that actively train children in anti-social rules, as well as children who fail to develop rule control.

Developmental depression with origins in childhood

A behavioral theory of depression was outlined by Charles Ferster, with a later revision provided by Peter Lewinsohn and Hyman Hops. Hops continued the work on the role of negative reinforcement in maintaining depression with Anthony Biglan. Additional factors, such as the role of the loss of contingent relations through extinction and punishment, were taken from the early work of Martin Seligman. The most recent summary and conceptual revisions of the behavioral model were provided by Jonathan Kanter. The standard model holds that depression has multiple paths of development: it can be generated by five basic processes, including lack or loss of positive reinforcement, direct positive or negative reinforcement for depressive behavior, lack of rule-governed behavior or too much rule-governed behavior, and/or too much environmental punishment. For children, some of these variables could set the pattern for lifelong problems. For example, a child whose depressive behavior functions for negative reinforcement by stopping fighting between parents could develop a lifelong pattern of depressive behavior in the face of conflict. Two paths are particularly important: (1) lack or loss of reinforcement because of missing necessary skills at a developmental cusp point, or (2) failure to develop adequate rule-governed behavior. For the latter, the child could develop a pattern of always choosing the small, immediate short-term reward (e.g., escaping studying for a test) at the expense of the larger long-term reward (passing courses in middle school). The treatment approach that emerged from this research is called behavioral activation.

In addition, the use of positive reinforcement has been shown to improve symptoms of depression in children. Reinforcement has also been shown to improve the self-concept in children with depression comorbid with learning difficulties. Rawson and Tabb (1993) used reinforcement with 99 students (90 males and 9 females) aged 8 to 12 with behavior disorders in a residential treatment program and showed significant reduction in depression symptoms compared to the control group.

Cognitive behavior

As children get older, direct control by contingencies is modified by the presence of rule-governed behavior. Rules serve as an establishing operation, setting a motivational stage as well as a discriminative stage for behavior. While the size of the effect on intellectual development is less clear, it appears that stimulation does have a facilitative effect on intellectual ability, though it is important not to confuse this enhancing effect with an initial causal effect. Some data exist to show that children with developmental delays take more learning trials to acquire material.

Learned units and developmental retardation

Behavior analysts have spent considerable time measuring learning in both the classroom and the home. In these settings, a lack of stimulation has often been implicated in the development of mild and moderate mental retardation. Recent work has focused on a model of "developmental retardation," an area that emphasizes cumulative environmental effects and their role in developmental delays. To measure these developmental delays, subjects are given the opportunity to respond, defined as the instructional antecedent, and success is signified by the appropriate response and/or fluency in responding. Consequently, the learned unit is identified by the opportunity to respond together with the reinforcement given.

One study employed this model by comparing students' time of instruction in affluent schools to time of instruction in lower-income schools. Results showed that lower-income schools provided approximately 15 minutes less instruction than more affluent schools due to disruptions in classroom management and behavior management. Altogether, these disruptions culminated in two years' worth of lost instructional time by grade 10. The goal of behavior analytic research is to provide methods for reducing the overall number of children who fall into the retardation range of development through behavioral engineering.
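A back-of-envelope check of that cumulative figure, under assumptions the text does not state (that the shortfall accrues per instructional hour and that a school year has 180 five-hour days):

```python
# Back-of-envelope arithmetic (illustrative assumptions only; the source does
# not state how the 15-minute shortfall accrues).
minutes_lost_per_instructional_hour = 15
instructional_hours_per_day = 5
school_days_per_year = 180
years_through_grade_10 = 10

lost_minutes = (minutes_lost_per_instructional_hour * instructional_hours_per_day
                * school_days_per_year * years_through_grade_10)
one_school_year_minutes = instructional_hours_per_day * 60 * school_days_per_year

print(lost_minutes / one_school_year_minutes)  # ~2.5 school years lost
```

Under these assumptions the loss comes to roughly two and a half school years, the same order of magnitude as the two years cited.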

Hart and Risley (1995, 1999) have completed extensive research on this topic as well. These researchers measured the rates of parent communication with children aged 2–4 years and correlated this information with the children's IQ scores at age 9. Their analyses revealed that higher parental communication with younger children was positively correlated with higher IQ in older children, even after controlling for race, class, and socio-economic status. Additionally, they concluded that a significant change in IQ scores required intervention with at-risk children for approximately 40 hours per week.

Class formation

The formation of class-like behavior has also been a significant topic in the behavioral analysis of development. This research has provided multiple explanations for the development and formation of class-like behavior, including primary stimulus generalization, an analysis of abstraction, relational frame theory, stimulus class analysis (sometimes referred to as recombinative generalization), stimulus equivalence, and response class analysis. These multiple processes of class-like formation provide behavior analysts with relatively pragmatic explanations for common issues of novelty and generalization.

Responses are organized based upon the particular form needed to fit current environmental challenges, as well as upon their functional consequences. An example of large response classes lies in contingency adduction, an area that needs much further research, especially with a focus on how large classes of concepts shift. For example, as Piaget observed, individuals at the pre-operational stage have limits in their ability to conserve information (Piaget & Szeminska, 1952). While training children in conservation skills has been generally successful, complications have been noted; behavior analysts argue that these are largely due to the number of tool skills that need to be developed and integrated. Contingency adduction offers a process by which such skills can be synthesized, which is why it deserves further attention, particularly from early childhood interventionists.

Autism

Ferster (1961) was the first researcher to posit a behavior analytic theory of autism. Ferster's model saw autism as a by-product of social interactions between parent and child, and presented an analysis of how a variety of contingencies of reinforcement between parent and child during early childhood might establish and strengthen a repertoire of behaviors typically seen in children diagnosed with autism. A similar model was proposed by Drash and Tudor (1993), who developed the contingency-shaped, or behavioral incompatibility, theory of autism. They identified at least six reinforcement paradigms that may contribute to the significant deficiencies in verbal behavior typically characteristic of children diagnosed as autistic, and proposed that each of these paradigms may also create a repertoire of avoidance responses incompatible with the acquisition of age-appropriate verbal behavior. More recent models attribute autism to overtaxed neurological and sensory processes that subsequently produce the autistic repertoire. Lovaas and Smith (1989) proposed that children with autism have a mismatch between their nervous systems and the environment, while Bijou and Ghezzi (1999) proposed a behavioral interference theory. Both the environmental mismatch model and the interference model were recently reviewed, however, and new evidence supports the notion that the development of autistic behaviors is due to escape and avoidance of certain types of sensory stimuli. Still, most behavioral models of autism remain largely speculative due to limited research efforts.

Role in education

One of the largest impacts of the behavior analysis of child development is its role in the field of education. In 1968, Siegfried Engelmann used operant conditioning techniques in combination with rule learning to produce the Direct Instruction curriculum. In addition, Fred S. Keller used similar techniques to develop programmed instruction, and B.F. Skinner developed a programmed instruction curriculum for teaching handwriting. One of Skinner's students, Ogden Lindsley, developed a standardized semilogarithmic chart, the "Standard Behavior Chart," now the "Standard Celeration Chart," used to record frequencies of behavior and to allow direct visual comparison of both frequencies and changes in those frequencies (termed "celeration"). The use of this charting tool to analyze instructional effects or other environmental variables through the direct measurement of learner performance has become known as precision teaching.

Behavior analysts with a focus on behavioral development form the basis of a movement called positive behavior support (PBS). PBS has focused on building safe schools.

In education, many different kinds of learning are implemented to improve skills needed for interactions later in life; examples include social and language skills. According to the NWREL (Northwest Regional Educational Laboratory), too much interaction with technology can hinder a child's social interactions with others, due to its potential to become an addiction and subsequently lead to anti-social behavior. In terms of language development, children start to learn and typically know about 5–20 different words by 18 months of age.

Critiques of behavioral approach and new developments

Behavior analytic theories have been criticized for focusing on the explanation of the acquisition of relatively simple behavior (i.e., the behavior of nonhuman species, of infants, and of individuals who are intellectually disabled or autistic) rather than of complex behavior (see Commons & Miller). Michael Commons continued behavior analysis's rejection of mentalism and its substitution of a task analysis of the particular skills to be learned. In line with more contemporary quantitative behavior analytic models, Commons created a behavior analytic model of more complex behavior, the model of hierarchical complexity, constructing the model of tasks and their corresponding stages of performance using just three main axioms.

In the study of development, recent work has examined the combination of behavior analytic views with dynamical systems theory. The added benefit of this approach is its portrayal of how small changes in behavior, described in terms of principles and mechanisms over time, can produce substantial changes in development.

Current research in behavior analysis attempts to extend the patterns learned in childhood and to determine their impact on adult development.

Professional organizations

The Association for Behavior Analysis International has a special interest group for the behavior analysis of child development. 

Doctoral-level behavior analysts who are psychologists belong to the American Psychological Association's Division 25: Behavior Analysis.

The World Association for Behavior Analysis has a certification in behavior therapy. The exam draws questions on behavioral theories of child development as well as behavioral theories of child psychopathology.

Gene–environment interaction

From Wikipedia, the free encyclopedia

Gene–environment interaction (or genotype–environment interaction, GxE, or G×E) occurs when two different genotypes respond to environmental variation in different ways. A norm of reaction is a graph that shows the relationship between genes and environmental factors when phenotypic differences are continuous, and norms of reaction can help illustrate GxE interactions. When the norms of reaction are not parallel, as shown in the figure below, there is a gene-by-environment interaction: each genotype responds to environmental variation in a different way. Environmental variation can be physical, chemical, or biological, or can involve behavior patterns or life events.

[Figure: a norm of reaction with non-parallel lines, indicating a gene-by-environment interaction; each genotype responds to environmental variation in a different way.]
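In standard quantitative-genetics notation (a textbook formulation, not specific to this article), the phenotype P_ij of genotype i in environment j can be decomposed as:

```latex
P_{ij} = \mu + G_i + E_j + (G \times E)_{ij} + \varepsilon_{ij}
```

If every interaction term (G×E)_ij is zero, the reaction norms are parallel lines; nonzero terms tilt the lines relative to one another, which is exactly the non-parallelism described above.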

Gene–environment interactions are studied to gain a better understanding of various phenomena. In genetic epidemiology, gene–environment interactions are useful for understanding some diseases. Sometimes, sensitivity to environmental risk factors for a disease is inherited rather than the disease itself being inherited. Individuals with different genotypes are affected differently by exposure to the same environmental factors, and thus gene–environment interactions can result in different disease phenotypes. For example, sunlight exposure has a stronger influence on skin cancer risk in fair-skinned humans than in individuals with darker skin.

These interactions are of particular interest to genetic epidemiologists for predicting disease rates and methods of prevention with respect to public health. The term is also used amongst developmental psychobiologists to better understand individual and evolutionary development.

Nature versus nurture debates assume that variation in a trait is primarily due to either genetic differences or environmental differences. However, the current scientific opinion holds that neither genetic differences nor environmental differences are solely responsible for producing phenotypic variation, and that virtually all traits are influenced by both genetic and environmental differences.

Statistical analysis of the genetic and environmental differences contributing to the phenotype would have to be used to confirm these as gene–environment interactions. In developmental genetics, a causal interaction is enough to confirm gene–environment interactions.

History of the definition

The history of defining gene–environment interaction dates back to the 1930s and remains a topic of debate today. The first instance of debate occurred between Ronald Fisher and Lancelot Hogben. Fisher sought to eliminate interaction from statistical studies, as it was a phenomenon that could often be removed by a change of scale. Hogben believed that the interaction should be investigated rather than eliminated, as it provided information on the causation of certain elements of development.

A similar argument arose among multiple scientists in the 1970s. Arthur Jensen published the study “How much can we boost IQ and scholastic achievement?”, which, amongst much other criticism, faced contention from scientists Richard Lewontin and David Layzer. Lewontin and Layzer argued that, in order to conclude causal mechanisms, the gene–environment interaction could not be ignored in the context of the study, while Jensen defended the position that interaction was purely a statistical phenomenon not related to development.

Around the same time, Kenneth J. Rothman supported the use of a statistical definition for interaction while researchers Kupper and Hogan believed the definition and existence of interaction was dependent on the model being used.

The most recent criticisms were spurred by Moffitt and Caspi's studies on 5-HTTLPR and stress and their influence on depression. In contrast to previous debates, Moffitt and Caspi were now using statistical analysis to prove that interaction existed and could be used to uncover the mechanisms of a vulnerability trait. Contention came from Zammit, Owen, and Lewis, who reiterated Fisher's concerns that the statistical effect was not related to the developmental process and would not be replicable under a change of scale.

Definitions

There are two different conceptions of gene–environment interaction today. Tabery has labeled them biometric and developmental interaction, while Sesardic uses the terms statistical and commonsense interaction.

The biometric (or statistical) conception has its origins in research programs that seek to measure the relative proportions of genetic and environmental contributions to phenotypic variation within populations. Biometric gene–environment interaction has particular currency in population genetics and behavioral genetics. Any interaction results in the breakdown of the additivity of the main effects of heredity and environment, but whether such interaction is present in particular settings is an empirical question. Biometric interaction is relevant in the context of research on individual differences rather than in the context of the development of a particular organism.

Developmental gene–environment interaction is a concept more commonly used by developmental geneticists and developmental psychobiologists. Developmental interaction is not seen merely as a statistical phenomenon. Whether statistical interaction is present or not, developmental interaction is in any case manifested in the causal interaction of genes and environments in producing an individual's phenotype.

Epidemiological models of GxE

In epidemiology, the following models can be used to group the different interactions between gene and environment.

Model A describes a genotype that increases the level of expression of a risk factor but does not cause the disease itself. For example, the PKU gene results in higher levels of phenylalanine than normal, which in turn causes mental retardation.

The risk factor in Model B, in contrast, has a direct effect on disease susceptibility, which is amplified by the genetic susceptibility. Model C depicts the inverse, where the genetic susceptibility directly affects disease while the risk factor amplifies this effect. In each situation, the factor directly affecting the disease can cause disease by itself.

Model D differs in that neither factor alone can affect disease risk; when both the genetic susceptibility and the risk factor are present, however, the risk is increased. For example, the G6PD deficiency gene combined with fava bean consumption results in hemolytic anemia. This disease does not arise in individuals who eat fava beans but lack G6PD deficiency, nor in G6PD-deficient people who do not eat fava beans.

Lastly, Model E depicts a scenario where the environmental risk factor and the genetic susceptibility can each individually influence disease risk, but where the effect on disease risk differs when they are combined.

The models are limited by the fact that the variables are binary and so do not consider polygenic or continuous-scale variable scenarios.
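The logic of these models can be sketched as hypothetical relative risks for the four genotype/exposure cells. The numbers below are illustrative only, not taken from any study, and Model A is omitted from the table because it is mechanistic (the genotype raises the level of the risk factor, which then causes disease):

```python
# Hypothetical relative risks for the four (genotype, exposure) cells under
# epidemiological GxE models B-E; the baseline cell (0, 0) is fixed at 1.
relative_risk = {
    "B": {(0, 0): 1, (0, 1): 3, (1, 0): 1, (1, 1): 9},   # exposure acts alone; genotype amplifies it
    "C": {(0, 0): 1, (0, 1): 1, (1, 0): 3, (1, 1): 9},   # genotype acts alone; exposure amplifies it
    "D": {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 9},   # both required, e.g. G6PD deficiency + fava beans
    "E": {(0, 0): 1, (0, 1): 3, (1, 0): 3, (1, 1): 20},  # each acts alone; the joint effect differs
}

for model, cells in relative_risk.items():
    alone = (cells[(1, 0)], cells[(0, 1)])
    print(f"Model {model}: risks alone {alone}, together {cells[(1, 1)]}")
```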

Methods of analysis

Traditional genetic designs

Adoption studies

Adoption studies have been used to investigate how similar adopted individuals are to their biological parents, with whom they did not share the same environment. Additionally, adopted individuals are compared to their adoptive family, with whom they share an environment but not genes. For example, an adoption study showed that Swedish men with disadvantaged adoptive environments and a genetic predisposition were more likely to abuse alcohol.

Twin studies

Monozygotic twins make it possible to observe the effects of different environments on identical genotypes. Later studies leverage biometrical modelling techniques, including comparisons of dizygotic twins, to determine the different levels of gene expression in different environments.

Family studies

Family-based research focuses on the comparison of low-risk controls to high-risk children to determine the environmental effect on subjects with different levels of genetic risk. For example, a Danish study of high-risk children with schizophrenic mothers found that children without a stable caregiver had an increased risk of schizophrenia.

Molecular analyses

Interaction with single genes

The most commonly used method for detecting gene–environment interactions is to study the effect a single gene variation has with respect to a particular environment. Single-nucleotide polymorphisms (SNPs) are compared with single binary exposure factors to determine any effects.
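A minimal sketch of such a test, assuming a logistic regression with a SNP-by-exposure interaction term (the simulated data and effect sizes are hypothetical):

```python
# Sketch: single-SNP GxE test via logistic regression with an interaction term.
# All data below are simulated; real studies use genotyped cohorts.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
snp = rng.binomial(2, 0.3, n)          # minor-allele count: 0, 1, or 2
exposure = rng.binomial(1, 0.4, n)     # binary environmental exposure
# Simulate a disease whose risk rises mainly when both factors co-occur.
logit = -2.0 + 0.1 * snp + 0.1 * exposure + 0.8 * snp * exposure
disease = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({"disease": disease, "snp": snp, "exposure": exposure})
model = smf.logit("disease ~ snp * exposure", data=df).fit(disp=False)
print(model.params["snp:exposure"])    # estimated interaction on the log-odds scale
print(model.pvalues["snp:exposure"])   # its significance
```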

Candidate studies such as these require strong biological hypotheses, which are currently difficult to select given the limited understanding of the biological mechanisms that lead to higher risk.

These studies are also often difficult to replicate, commonly due to small sample sizes, which typically leads to disputed results.

The polygenic nature of complex phenotypes suggests that single candidate studies could be ineffective in determining the various smaller-scale effects of the large number of influencing gene variants.

Interaction with multiple genes

Since the same environmental factor can interact with multiple genes, a polygenic approach can be taken to analyze GxE interactions. A polygenic score is generated from the alleles associated with a trait and their respective weights based on effect size, and is examined in combination with environmental exposure. Though this method of research is still young, results so far have been consistent for psychiatric disorders, and because endophenotypes overlap among disorders, the outcomes of gene–environment interactions may be applicable across various diagnoses.
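A sketch of the polygenic variant of the same idea, with hypothetical SNP weights standing in for published GWAS effect sizes:

```python
# Sketch: polygenic-score-by-environment analysis (all names and weights are
# hypothetical). The score is a weighted sum of allele counts across SNPs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, n_snps = 2000, 50
genotypes = rng.binomial(2, 0.3, size=(n, n_snps))   # allele counts per SNP
weights = rng.normal(0, 0.1, n_snps)                 # stand-in per-SNP effect sizes
prs = genotypes @ weights                            # polygenic risk score
exposure = rng.binomial(1, 0.5, n)

# Simulate an outcome that depends on the score more strongly under exposure
# (a built-in score-by-environment interaction).
outcome = 0.5 * prs + 0.6 * prs * exposure + rng.normal(0, 1, n)

df = pd.DataFrame({"outcome": outcome, "prs": prs, "exposure": exposure})
fit = smf.ols("outcome ~ prs * exposure", data=df).fit()
print(fit.params["prs:exposure"])   # recovers the score-by-environment effect
```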

Genome-wide association studies and genome-wide interaction studies

A genome-wide interaction scan (GEWIS) approach examines the interaction between the environment and a large number of independent SNPs. An effective approach to this all-encompassing study proceeds in two steps: the genome is first filtered using gene-level tests and pathway-based gene set analyses, and the second step then takes the SNPs with G–E association and tests them for interaction.

The differential susceptibility hypothesis has been reaffirmed through genome-wide approaches.

Controversies

Lack of replication

A particular concern with gene–environment interaction studies is the lack of reproducibility. Studies of complex traits in particular have come under scrutiny for producing results that cannot be replicated. For example, studies of the 5-HTTLPR gene and stress resulting in modified risk of depression have had conflicting results.

A possible explanation for the inconsistent results is the heavy use of multiple testing; studies may produce inaccurate results when multiple phenotypes and environmental factors are investigated within individual experiments.

Additive vs multiplicative model

There are two different models for the scale of measurement that help determine whether gene–environment interaction exists in a statistical context, and there is disagreement on which scale should be used. Under these analyses, if the combined variables fit either model, then there is no interaction; for interaction to be present, the combined effect must be greater than the model predicts (synergistic) or less (antagonistic). The additive model measures risk differences, while the multiplicative model uses ratios to measure effects. The additive model has been suggested to be a better fit for predicting disease risk in a population, while a multiplicative model is more appropriate for disease etiology.
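In standard notation (a textbook illustration, not from this article), write R_ge for the disease risk with genotype status g and exposure status e. The two scales define "no interaction" differently:

```latex
\text{additive scale, no interaction:} \quad R_{11} - R_{00} = (R_{10} - R_{00}) + (R_{01} - R_{00})
\text{multiplicative scale, no interaction:} \quad \frac{R_{11}}{R_{00}} = \frac{R_{10}}{R_{00}} \cdot \frac{R_{01}}{R_{00}}
```

For example, risks R_00 = 1%, R_10 = 2%, R_01 = 3%, R_11 = 4% satisfy the additive equality (3% = 1% + 2%) but not the multiplicative one (4 ≠ 2 × 3), so whether an interaction is declared can depend entirely on the scale chosen.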

Epigenetics is an example of an underlying mechanism of gene–environment effects; however, it does not settle whether environmental effects are additive, multiplicative, or interactive.

Gene "×" environment "×" environment interactions

New studies have also revealed the interactive effect of multiple environmental factors. For example, a child raised in a poor-quality environment was more sensitive to a poor environment as an adult, which ultimately led to higher psychological distress scores. This depicts a three-way gene × environment × environment interaction. The same study suggests taking a life-course approach to determining genetic sensitivity to environmental influences within the scope of mental illnesses.

Medical significance

Doctors are interested in knowing whether disease can be prevented by reducing exposure to environmental risks. Some people carry genetic factors that confer susceptibility or resistance to a certain disorder in a particular environment. The interaction between the genetic factors and environmental stimulus is what results in the disease phenotype. There may be significant public health benefits in using gene by environment interactions to prevent or cure disease.

An individual's response to a drug can result from various gene-by-environment interactions. Therefore, the clinical importance of pharmacogenetics and gene-by-environment interactions comes from the possibility that genomic information, along with environmental information, will allow more accurate predictions of an individual's drug response. This would allow doctors to more precisely select a certain drug and dosage to achieve a therapeutic response in a patient while minimizing side effects and adverse drug reactions. This information could also help prevent the health care costs associated with adverse drug reactions and with needlessly prescribing drugs to patients who likely won't respond to them.

In a similar manner, an individual can respond to other environmental stimuli, factors or challenges differently according to specific genetic differences or alleles. These other factors include the diet and specific nutrients within the diet, physical activity, alcohol and tobacco use, sleep (bed time, duration), and any of a number of exposures, including toxins, pollutants, sunlight (latitude north–south of the equator), among any number of others. The diet, for example, is modifiable and has significant impact on a host of cardiometabolic diseases, including cardiovascular disease, coronary artery disease, coronary heart disease, type 2 diabetes, hypertension, stroke, myocardial infarction, and non-alcoholic fatty liver disease. In the clinic, typically assessed risks of these conditions include blood lipids (triglyceride, and HDL, LDL and total cholesterol), glycemic traits (plasma glucose and insulin, HOMA-IR, beta cell function as HOMA-BC), obesity anthropometrics (BMI/obesity, adiposity, body weight, waist circumference, waist-to-hip ratio), vascular measures (diastolic and systolic blood pressure), and biomarkers of inflammation. Gene–environment interactions can modulate the adverse effects of an allele that confers increased risk of disease, or can exacerbate the genotype–phenotype relationship and increase risk, in a manner often referred to as nutrigenetics. A catalog of genetic variants that associate with these and related cardiometabolic phenotypes and modified by common environmental factors is available.

Conversely, a disease study using breast cancer, type 2 diabetes, and rheumatoid arthritis shows that including GxE interactions in a risk prediction model does not improve risk identification.

Examples

[Figure: mean bristle number by temperature (°C), one line per Drosophila genotype.]
  1. In Drosophila: A classic gene–environment interaction experiment was performed on Drosophila by Gupta and Lewontin in 1981. They demonstrated that the mean bristle number on Drosophila could vary with changing temperature. As seen in the graph above, different genotypes reacted differently to the changing environment. Each line represents a given genotype, and the slope of the line reflects the changing phenotype (bristle number) with changing temperature. Some individuals had an increase in bristle number with increasing temperature, while others had a sharp decrease. This showed that the norms of reaction were not parallel for these flies, demonstrating that gene–environment interactions exist.
  2. In plants: One interesting application of genotype-by-environment interaction analysis is the selection of sugarcane cultivars adapted to different environments. In one study, researchers analyzed twenty sugarcane genotypes grown in eight different locations over two crop cycles to identify mega-environments related to higher cane yield, measured in tons of cane per hectare (TCH) and percentage of sucrose (Pol% cane), using biplot multivariate GEI models. The authors then created a novel strategy to study both yield variables in a two-way coupled analysis, even though the results showed a mean negative correlation between them. Through coinertia analysis, it was possible to determine the best-fitted genotypes for both yield variables in all environments. Novel strategies like coinertia proved to be a useful complement to AMMI and GGE analyses of GEI, especially when yield improvement involves multiple yield variables. In another classic experiment, seven genetically distinct yarrow plants were collected and three cuttings taken from each plant. One cutting of each genotype was planted at low, medium, and high elevations, respectively. When the plants matured, no one genotype grew best at all altitudes, and at each altitude the seven genotypes fared differently. For example, one genotype grew the tallest at the medium elevation but attained only middling height at the other two elevations. The best growers at low and high elevation grew poorly at medium elevation. The medium altitude produced the worst overall results, but still yielded one tall and two medium-tall samples. Altitude had an effect on each genotype, but not to the same degree nor in the same way. Similarly, a sorghum bi-parental population was repeatedly grown in seven diverse geographic locations across years. One group of genotypes required similar growing degree-days (GDD) to flower across all environments, while another group needed fewer GDD in certain environments but more GDD in others. These complex flowering-time patterns are attributed to the interaction of major flowering-time genes (Ma1, Ma6, FT, ELF3) with an explicit environmental factor, photothermal time (PTT), which captures the interaction between temperature and photoperiod.
  3. Phenylketonuria (PKU) is a human genetic condition caused by mutations to a gene coding for a particular liver enzyme. In the absence of this enzyme, an amino acid known as phenylalanine does not get converted into the next amino acid in a biochemical pathway, so too much phenylalanine passes into the blood and other tissues. This disturbs brain development, leading to mental retardation and other problems. PKU affects approximately 1 out of every 15,000 infants in the U.S. However, most affected infants do not grow up impaired because of a standard screening program used in the U.S. and other industrialized societies. Newborns found to have high levels of phenylalanine in their blood can be put on a special, phenylalanine-free diet. If they are put on this diet right away and stay on it, these children avoid the severe effects of PKU. This example shows that a change in environment (lowering phenylalanine consumption) can affect the phenotype of a particular trait, demonstrating a gene–environment interaction.
  4. A single-nucleotide polymorphism, rs1800566, in NAD(P)H quinone dehydrogenase 1 (NQO1) alters the risk of asthma and general lung injury in individuals with this variant upon interaction with NOx pollutants.
  5. A functional polymorphism in the monoamine oxidase A (MAOA) gene promoter can moderate the association between early life trauma and increased risk for violence and antisocial behavior. Low MAOA activity is a significant risk factor for aggressive and antisocial behavior in adults who report victimization as children. Persons who were abused as children but have a genotype conferring high levels of MAOA expression are less likely to develop symptoms of antisocial behavior. These findings must be interpreted with caution, however, because gene association studies on complex traits are notorious for being very difficult to confirm.
  6. In Drosophila eggs:
[Figure: egg development time by temperature, with parallel lines for different Drosophila genotypes.]
Contrary to the aforementioned examples, the length of egg development in Drosophila as a function of temperature demonstrates a lack of gene–environment interaction. The graph above shows parallel reaction norms for a variety of individual Drosophila flies, indicating that there is no gene–environment interaction between the two variables: each genotype responds similarly to the changing environment, producing similar phenotypes. For all individual genotypes, average egg development time decreases with increasing temperature; the environment influences each of the genotypes in the same predictable manner. A simulation sketch contrasting this parallel case with the non-parallel bristle-number case follows below.
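The distinction between the two Drosophila examples can be sketched with synthetic data (genotype labels and effect sizes are invented): a two-way ANOVA flags GxE as a significant genotype-by-temperature interaction when reaction norms have different slopes, and a non-significant interaction when, as in the egg-development example, the norms are parallel:

```python
# Simulation sketch (synthetic data): detecting GxE as a significant
# genotype-by-temperature interaction term in a two-way ANOVA.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

def simulate(slopes):
    """Simulate 20 flies per genotype/temperature cell with given reaction-norm slopes."""
    rows = []
    for g, slope in enumerate(slopes):
        for temp in (14, 18, 22, 26):
            y = 40 + slope * (temp - 20) + rng.normal(0, 1, 20)
            rows += [{"genotype": f"G{g}", "temp": temp, "y": v} for v in y]
    return pd.DataFrame(rows)

for label, slopes in [("non-parallel norms (GxE)", [-0.5, 0.0, 0.6]),
                      ("parallel norms (no GxE)", [0.4, 0.4, 0.4])]:
    fit = smf.ols("y ~ C(genotype) * temp", data=simulate(slopes)).fit()
    anova = sm.stats.anova_lm(fit, typ=2)
    print(label, "interaction p =", anova.loc["C(genotype):temp", "PR(>F)"])
```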

Cultural neuroscience

From Wikipedia, the free encyclopedia

Cultural neuroscience is a field of research that focuses on the interrelation between a human’s cultural environment and neurobiological systems. The field particularly incorporates ideas and perspectives from related domains like anthropology, psychology, and cognitive neuroscience to study sociocultural influences on human behaviors. Such impacts on behavior are often measured using various neuroimaging methods, through which cross-cultural variability in neural activity can be examined.

Cultural neuroscientists study cultural variation in mental, neural and genomic processes as a means of articulating the bidirectional relationship of these processes and their emergent properties using a variety of methods. Researchers in cultural neuroscience are motivated by two fundamentally intriguing, yet still unanswered, questions on the origins of human nature and human diversity: how do cultural traits (e.g., values, beliefs, practices) shape neurobiology (e.g., genetic and neural processes) and behavior, and how do neurobiological mechanisms (e.g., genetic and neural processes) facilitate the emergence and transmission of cultural traits?

The idea that complex behavior results from the dynamic interaction of genes and cultural environment is not new; however, cultural neuroscience represents a novel empirical approach to demonstrating bidirectional interactions between culture and biology by integrating theory and methods from cultural psychology, neuroscience and neurogenetics.

Similar to other interdisciplinary fields such as social neuroscience, cognitive neuroscience, affective neuroscience, and neuroanthropology, cultural neuroscience aims to explain a given mental phenomenon in terms of a synergistic product of mental, neural, and genetic events. In particular, cultural neuroscience shares common research goals with social neuroscientists examining how neurobiological mechanisms (e.g., mirror neurons) facilitate cultural transmission (e.g., imitative learning), and with neuroanthropologists examining how embedded culture, as captured by cross-species comparison and ethnography, is related to brain function. Cultural neuroscience also shares intellectual goals with critical neuroscience, a field of inquiry that scrutinizes the social, cultural, economic, and political contexts and assumptions that underlie behavioral and brain science research as it is practiced today.

Research in cultural neuroscience has practical relevance to transcultural psychiatry, business and technology as well as broader implications for global public policy issues such as population health disparities, bioethics, globalization, immigration, interethnic ideology and international relations.

Previous cross-cultural research

While the field of cultural neuroscience is still growing, studies by various researchers have examined cross-cultural similarities and differences in human attention, visual perception, and the understanding of others and the self. Previous behavioral research has focused on cultural differences in perception, particularly between people from East Asian and Western regions. The results from these studies suggest that East Asians focus their visual perception more on the backgrounds and contexts of their environment, while Westerners focus on individual stimuli/objects. To further explore these findings, more research was done to specifically examine the neurological similarities and differences in the attention and visual perception of people in East Asian and Western cultures.

Results from a 2008 study by Hedden et al. support the previous findings by showing that East Asians require more attention than Americans for individually processing objects. Brain regions involved in attention, such as areas in the parietal and prefrontal lobes as well as the inferior parietal lobule and precentral gyrus, were more active in East Asian subjects than in American subjects during individual object processing. A visual perception study conducted by Gutchess et al. in 2006 also found neurological differences between Chinese and American subjects as they completed tasks of encoding images of individual objects, backgrounds, and objects with backgrounds. The fMRI results showed that during visual processing of objects there was greater neural activity in the middle temporal gyri, right superior temporal gyri, and superior parietal lobules of the American subjects than of the Chinese subjects. Such results indicate a focus on object processing among Westerners compared to East Asians. No significant differences in neural activity between subjects were found during the visual processing of images with backgrounds.

People from East Asian and Western cultures were also studied to learn more about cross-cultural differences in understanding both the self and other people. Findings from a 1991 study by Markus and Kitayama showed that people from Eastern cultures view the self in relation to others in their community, while people from Western cultures have a more independent perspective of the self. A 2007 fMRI study observed differences in activity in the ventromedial prefrontal cortex, a brain region highly active during self-perception, when Western and Chinese subjects were thinking about themselves versus when they were thinking about their mothers. Interestingly, the results showed that there was still activity in the ventromedial prefrontal cortices of Chinese subjects even when they thought about their mothers, while in American subjects activity was detected only when they thought about themselves.

A different study, conducted by psychologist Joan Chiao, found that due to cultural differences East Asians are more likely than Americans to suffer from depression. She found that East Asians are more likely to carry the short allele of the serotonin transporter gene (STG), which is linked to depression, while Americans more often carry the long allele, which is not. Yet, due to differences in cultural structure, collectivist societies were found to be more likely to report happiness than individualist societies.

Another study, by psychologists Nalini Ambady and Jonathan Freeman, showed a difference in brain activity between Japanese and American participants when they were shown different body postures. They found that the reward circuitry in the limbic system lit up when Japanese participants saw submissive body postures, whereas it activated when Americans saw dominant body postures.

Culture differences in visual stimuli

Many studies have demonstrated cultural differences in how the ventral visual cortex responds to visual stimuli. A study conducted in 2005 found that East Asians were more likely to keep their eyes focused on background scenes than Westerners, who would instead focus more on the central object, such as a giraffe in a savanna. In a similar 2006 study, in congruence with this difference in societal structure, Westerners showed more activation in object-processing regions, including the bilateral middle temporal gyrus, left superior parietal gyrus, and right superior temporal gyrus, although no activation differences were observed in context-processing regions such as the hippocampus. However, some research contradicts a cultural bias in oculomotor control, such as a 2007 study by Rayner, Li, Williams, Cave, and Well, who failed to find evidence that East Asians focus more on context, although they did find evidence that East Asians focused less on central objects.

Another study focused on differences in attention toward faces, showing that Americans focus more broadly on the entire face, including both the eyes and mouth, while East Asians focus more on a single part, such as the mouth. The authors attribute this to gaze avoidance as a form of politeness in East Asian culture. In 2008, another study of context showed that East Asians were more likely to include greater detail and background when taking photographs of a model, when they were free to set the zoom function of the camera as they saw fit. In 2003, a group of researchers used the Frame-Line Test, asking participants to draw a line either of exactly the same length as the one shown or proportional in size. Americans were more accurate in the absolute task, suggesting better memory for the exact or absolute size of the focal object, while East Asians were more accurate in the relative (proportional) task, suggesting better memory for contextual relationships. In a later study by the same group on the processing of emotions, East Asians were less likely than Americans to distinguish between fear and disgust when sampling faces.

Many studies have shown that repeated practice of a skill affects brain structure and activity. For example, a 2000 study found that London taxi drivers had greater gray-matter volume in the posterior hippocampi than average civilians. A 2004 study showed that people who had learned to juggle had increased cortical volume in the bilateral mid-temporal area and the left posterior intraparietal sulcus.

The findings from many neuroimaging studies mirror the behavioral patterns observed in earlier anthropological and cultural research. Such comparisons of behavioral and neural activity across cultures have already given the scientific community greater insight into cultural influences on human behavior.

Sensory processing sensitivity

From Wikipedia, the free encyclopedia

 
Characteristics of SPS as graphically summarized by Greven et al. (review article, 2019). A person with a high measure of SPS is said to be a highly sensitive person (HSP).
 
Sensory processing sensitivity (SPS) is a temperamental or personality trait involving "an increased sensitivity of the central nervous system and a deeper cognitive processing of physical, social and emotional stimuli". The trait is characterized by "a tendency to 'pause to check' in novel situations, greater sensitivity to subtle stimuli, and the engagement of deeper cognitive processing strategies for employing coping actions, all of which is driven by heightened emotional reactivity, both positive and negative".

A human with a particularly high measure of SPS is considered to have 'hypersensitivity', or to be a highly sensitive person (HSP). The terms SPS and HSP were coined in the mid-1990s by psychologists Elaine Aron and her husband Arthur Aron, who developed the Highly Sensitive Person Scale (HSPS) questionnaire by which SPS is measured. Other researchers have applied various other terms to denote this responsiveness to stimuli, which is seen in humans and other species.

According to the Arons and colleagues, people with high SPS make up about 15–20% of the population. Although some researchers consistently related high SPS to negative outcomes, other researchers have associated it with increased responsiveness to both positive and negative influences. Aron and colleagues state that the high-SPS personality trait is not a disorder.

Origin and development of the terms

Elaine Aron's book The Highly Sensitive Person was published in 1996. In 1997 Elaine and Arthur Aron formally identified sensory processing sensitivity (SPS) as the defining trait of highly sensitive persons (HSPs). The terms hypersensitivity (not to be confused with the medical term hypersensitivity) and highly sensitive are popular synonyms for the scientific concept of SPS. By way of definition, Aron and Aron (1997) wrote that sensory processing here refers not to the sense organs themselves, but to what occurs as sensory information is transmitted to or processed in the brain. They assert that the trait is not a disorder but an innate survival strategy that has both advantages and disadvantages.

Elaine Aron's academic journal articles as well as self-help publications for the lay reader have focused on distinguishing high SPS from socially reticent behavior and disorders with which high SPS can be confused; overcoming the social unacceptability that can cause low self-esteem; and emphasizing the advantages of high SPS to balance the disadvantages emphasized by others.

In 2015, sociologist Elizabeth Bernstein wrote in The Wall Street Journal that HSPs were "having a moment," noting that several hundred research studies had been conducted on topics related to HSPs' high sensitivity. The First International Scientific Conference on High Sensitivity or Sensory Processing Sensitivity was held at the Vrije Universiteit Brussel. By 2015, more than a million copies of The Highly Sensitive Person had been sold.

Earlier research

Research pre-dating the Arons' coining of the term "high sensitivity" includes that of German medicine professor Wolfgang Klages, who argued in the 1970s that the phenomenon of sensitive and highly sensitive humans is "biologically anchored" and that the "stimulus threshold of the thalamus" is much lower in these persons. As a result, said Klages, there is a higher permeability for incoming signals from afferent nerve fibers so that they pass "unfiltered" to the cerebral cortex.

The Arons (1997) recognized psychologist Albert Mehrabian's (1976, 1980, 1991) concept of filtering out the "irrelevant", but wrote that the concept implied that HSPs (Mehrabian's "low screeners") are unable to filter out what is irrelevant, with what counts as relevant being determined from the perspective of non-HSPs ("high screeners").

Attributes, characteristics and prevalence

Boterberg et al. (2016) describe high SPS as a "temperamental or personality trait which is present in some individuals and reflects an increased sensitivity of the central nervous system and a deeper cognitive processing of physical, social and emotional stimuli".

People with high SPS report having a heightened response to stimuli such as pain, caffeine, hunger, and loud noises. According to Boterberg et al., these individuals are "believed to be easily overstimulated by external stimuli because they have a lower perceptual threshold and process stimuli cognitively deeper than most other people." This deeper processing may result in increased reaction time as more time is spent responding to cues in the environment, and might also contribute to cautious behavior and low risk-taking.

The HSP Scale, initially (1997) a questionnaire designed to measure SPS on a unidimensional scale, was subsequently decomposed into two, three, or four factors or sub-scales. Most components have been associated with traditionally accepted negative psychological outcomes, including high stress levels, being easily overwhelmed, and increased rates of depression, anxiety, and sleep problems, as well as symptoms of autism; the diathesis-stress model focused on increased vulnerability to negative influences. However, the differential susceptibility theory (DST) and the biological sensitivity to context theory (BSCT) suggest increased plasticity, in the sense of heightened responsiveness to both positive and negative influences, while the vantage sensitivity (VS) concept emphasizes increased responsiveness to positive experiences. Researchers such as Smolewska et al. (2006) found that positive outcomes were more common in individuals with high aesthetic sensitivity, who tend to experience heightened positive emotions in response to rewarding stimuli and are more likely to score high on "openness" in the Big Five factor model.

Research in evolutionary biology provides evidence that the trait of SPS can be observed, under various terms, in over 100 nonhuman species, with Aron writing that the SPS trait is meant to encompass what personality psychologists have described under various other names. Conversely, Aron has explicitly distinguished high SPS from similar-appearing traits or disorders with which it can be confused (such as shyness, sensation-seeking, sensory processing disorder, and autism), and has further suggested that SPS may be a basic variable underlying multiple other trait differences (such as introversion versus extraversion). Contrary to common misconception, according to Aron, HSPs include both introverts and extroverts, and may be simultaneously high-sensation seeking and cautious.

In humans and other species, responsive and unresponsive individuals coexist and consistently display different levels of responsiveness to environmental stimuli, the different levels of responsiveness having corresponding evolutionary costs and benefits. This observation parallels Aron's assertion that high SPS is not a disorder, but rather a personality trait with attendant advantages and disadvantages. Accordingly, Aron cautions medical professionals against prescribing psychoactive medications to "cure" the trait, which may or may not coexist with an actual disorder.

By 2015 the trait had been documented at various levels of study, including temperament and behavior psychology, brain function and neuronal sensitization, and genetics. For example, genetic studies provide evidence that higher levels of SPS are linked to the serotonin transporter 5-HTTLPR short/short genotype, polymorphisms in dopamine neurotransmitter genes, and the ADRA2b norepinephrine-related gene variant.

HSP Scale score patterns in adults were thought to be distributed as a dichotomous categorical variable with a break point between 10% and 35%, with Aron choosing a cut-off of the highest-scoring 20% of individuals to define the HSP category. A 2019 review article stated that findings suggest people fall into three sensitivity groups along a normal distribution sensitivity continuum.
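As an illustration of these two scoring views, here is a minimal sketch contrasting a dichotomous top-20% cut-off with a three-group split along a continuum. The simulated score distribution and the percentile boundaries for the three groups are assumptions for illustration, not the published scoring procedure.

```python
# Illustrative sketch (not the published methodology): classifying
# simulated HSP Scale scores two ways -- a top-20% cut-off in the
# spirit of Aron's dichotomous view, versus three sensitivity groups
# along a normal continuum as suggested by the 2019 review. The
# distribution parameters and the 30th/70th-percentile boundaries
# are arbitrary choices for demonstration.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(loc=4.0, scale=1.0, size=1000)  # simulated scale scores

# Dichotomous view: the highest-scoring 20% are labeled HSP.
hsp_cutoff = np.percentile(scores, 80)
is_hsp = scores >= hsp_cutoff

# Continuum view: three groups split at assumed percentile boundaries.
low, high = np.percentile(scores, [30, 70])
groups = np.digitize(scores, [low, high])  # 0 = low, 1 = medium, 2 = high

print(f"HSP (top 20%): {is_hsp.mean():.0%}")
print("group sizes (low/medium/high):", np.bincount(groups))
```

The practical difference is that the dichotomous view treats sensitivity as a category one is either in or out of, while the continuum view places everyone on the same distribution and draws group boundaries along it.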

Religious cosmology

From Wikipedia, the free encyclopedia ...