
Friday, January 14, 2022

Neuroleptic malignant syndrome

From Wikipedia, the free encyclopedia
 
Neuroleptic malignant syndrome
[Image: Haloperidol, a known cause of NMS]

Specialty: Critical care medicine, neurology, psychiatry
Symptoms: High fever, confusion, rigid muscles, variable blood pressure, sweating
Complications: Rhabdomyolysis, high blood potassium, kidney failure, seizures
Usual onset: Within days to a few weeks
Causes: Neuroleptic or antipsychotic medication
Risk factors: Dehydration, agitation, catatonia
Diagnostic method: Based on symptoms in someone who has started neuroleptics within the last month
Differential diagnosis: Heat stroke, malignant hyperthermia, serotonin syndrome, lethal catatonia
Treatment: Stopping the offending medication, rapid cooling, starting other medications
Medication: Dantrolene, bromocriptine, diazepam
Prognosis: 10–15% risk of death
Frequency: 15 per 100,000 per year (among those on neuroleptics)

Neuroleptic malignant syndrome (NMS) is a rare but life-threatening reaction that can occur in response to neuroleptic or antipsychotic medication. Symptoms include high fever, confusion, rigid muscles, variable blood pressure, sweating, and fast heart rate. Complications may include rhabdomyolysis, high blood potassium, kidney failure, or seizures.

Any medication within the neuroleptic family can cause the condition, though typical (first-generation) antipsychotics, such as haloperidol, appear to carry a higher risk than atypicals. Onset is often within a few weeks of starting the medication but can occur at any time. Risk factors include dehydration, agitation, and catatonia.

Rapidly decreasing the use of levodopa or other dopamine agonists, such as pramipexole, may also trigger the condition. The underlying mechanism involves blockage of dopamine receptors. Diagnosis is based on symptoms.

Management includes stopping the offending medication, rapid cooling, and starting other medications. Medications used include dantrolene, bromocriptine, and diazepam. The risk of death among those affected is about 10%. Rapid diagnosis and treatment are required to improve outcomes. Many people can eventually be restarted on a lower dose of antipsychotic.

As of 2011, among those in psychiatric hospitals on neuroleptics about 15 per 100,000 are affected per year (0.015%). In the second half of the 20th century rates were over 100 times higher at about 2% (2,000 per 100,000). Males appear to be more often affected than females. The condition was first described in 1956.

Signs and symptoms

The first symptoms of neuroleptic malignant syndrome are usually muscle cramps and tremors, fever, symptoms of autonomic nervous system instability such as unstable blood pressure, and sudden changes in mental status (agitation, delirium, or coma). Once symptoms appear, they may progress rapidly and reach peak intensity in as little as three days. These symptoms can last anywhere from eight hours to forty days.

Symptoms are sometimes misinterpreted by doctors as symptoms of mental illness, which can result in delayed treatment. NMS is less likely if a person has previously been stable for a period of time on antipsychotics, especially in situations where the dose has not been changed and there are no issues of noncompliance or consumption of psychoactive substances known to worsen psychosis.

Typical features include:

  • Increased body temperature >38 °C (>100.4 °F)
  • Confused or altered consciousness
  • Sweating
  • Rigid muscles
  • Autonomic imbalance

Causes

NMS is usually caused by antipsychotic drug use, and a wide range of drugs can result in NMS. Individuals using butyrophenones (such as haloperidol and droperidol) or phenothiazines (such as promethazine and chlorpromazine) are reported to be at greatest risk. However, various atypical antipsychotics such as clozapine, olanzapine, risperidone, quetiapine, and ziprasidone have also been implicated in cases.

NMS may also occur in people taking dopaminergic drugs (such as levodopa) for Parkinson's disease, most often when the drug dosage is abruptly reduced. In addition, other drugs with anti-dopaminergic activity, such as the antiemetic metoclopramide, can induce NMS. Tetracyclics with anti-dopaminergic activity, such as amoxapine, have been linked to NMS in case reports. Additionally, desipramine, dothiepin, phenelzine, tetrabenazine, and reserpine have been known to trigger NMS. Whether lithium can cause NMS is unclear; however, concomitant use of lithium is associated with a higher risk of NMS when the patient starts a neuroleptic drug such as an antipsychotic.

At the molecular level, NMS is caused by a sudden, marked reduction in dopamine activity, either from withdrawal of dopaminergic agents or from blockade of dopamine receptors.

Risk factors

One of the clearest risk factors in the development of NMS is the course of drug therapy chosen to treat a condition. Use of high-potency neuroleptics, a rapid increase in the dosage of neuroleptics, and use of long-acting forms of neuroleptics are all known to increase the risk of developing NMS.

A genetic risk factor for NMS has been suggested, since in one case identical twins both presented with NMS, and in another a mother and two of her daughters did.

Demographically, it appears that males, especially those under forty, are at greatest risk for developing NMS, although it is unclear if the increased incidence is a result of greater neuroleptic use in men under forty. It has also been suggested that postpartum women may be at a greater risk for NMS.

An important risk factor for this condition is Lewy body dementia. These patients are extremely sensitive to neuroleptics. As a result, neuroleptics should be used cautiously in all cases of dementia.

Pathophysiology

The mechanism is commonly thought to depend on decreased dopamine activity, due either to blockade of dopamine receptors or to withdrawal of dopaminergic agents.

It has been proposed that blockade of D2-like (D2, D3 and D4) receptors induces massive glutamate release, generating catatonia, neurotoxicity, and myotoxicity. Additionally, blockade of diverse serotonin receptors by atypical antipsychotics, together with activation of 5-HT1 receptors by some of them, reduces GABA release and indirectly induces glutamate release, worsening this syndrome.

The muscular symptoms are most likely caused by blockade of the dopamine receptor D2, leading to abnormal function of the basal ganglia similar to that seen in Parkinson's disease.

However, D2 dopamine receptor antagonism, or dopamine receptor dysfunction, does not fully explain the presenting signs and symptoms of NMS, nor the occurrence of NMS with atypical antipsychotic drugs that have lower D2 dopamine activity. This has led to the hypothesis of sympathoadrenal hyperactivity (resulting from the removal of tonic inhibition of the sympathetic nervous system) as a mechanism for NMS. Antipsychotic use increases calcium release from the sarcoplasmic reticulum, which can increase muscle contractility and may play a role in the breakdown of muscle, muscle rigidity, and hyperthermia. Some antipsychotic drugs, such as typical neuroleptics, are known to block dopamine receptors; other studies have shown that when drugs supplying dopamine are withdrawn, symptoms similar to NMS present themselves.

There is also thought to be considerable overlap between malignant catatonia and NMS in their pathophysiology, the former being idiopathic and the latter being the drug-induced form of the same syndrome.

The raised white blood cell count and creatine phosphokinase (CPK) plasma concentration seen in those with NMS are due to increased muscular activity and rhabdomyolysis (destruction of muscle tissue). The patient may suffer a hypertensive crisis and metabolic acidosis. A non-generalized slowing on an EEG is reported in around 50% of cases.

The fever seen with NMS is believed to be caused by hypothalamic dopamine receptor blockade. The peripheral problems (the high white blood cell and CPK counts) are caused by the antipsychotic drugs, which increase calcium release from the sarcoplasmic reticulum of muscle cells; this can result in rigidity and eventual cell breakdown. No major studies have reported an explanation for the abnormal EEG, but it is likely also attributable to dopamine blockade leading to changes in neuronal pathways.

Diagnosis

Differential diagnosis

Differentiating NMS from other neurological disorders can be very difficult. It requires expert judgement to separate symptoms of NMS from other diseases. Some of the most commonly mistaken diseases are encephalitis, toxic encephalopathy, status epilepticus, heat stroke, catatonia and malignant hyperthermia. Due to the comparative rarity of NMS, it is often overlooked and immediate treatment for the syndrome is delayed. Drugs such as cocaine and amphetamine may also produce similar symptoms.

The differential diagnosis is similar to that of hyperthermia, and includes serotonin syndrome. Features which distinguish NMS from serotonin syndrome include bradykinesia, muscle rigidity, and a high white blood cell count.

Treatment

NMS is a medical emergency and can lead to death if untreated. The first step is to stop the antipsychotic medication and treat the hyperthermia aggressively, such as with cooling blankets or ice packs to the axillae and groin. Supportive care in an intensive care unit capable of circulatory and ventilatory support is crucial. The best pharmacological treatment is still unclear. Dantrolene has been used when needed to reduce muscle rigidity, and more recently dopamine pathway medications such as bromocriptine have shown benefit. Amantadine is another treatment option due to its dopaminergic and anticholinergic effects. Apomorphine may be used; however, its use is supported by little evidence. Benzodiazepines may be used to control agitation. Highly elevated blood myoglobin levels can result in kidney damage, so aggressive intravenous hydration with diuresis may be required. When recognized early, NMS can be successfully managed; however, up to 10% of cases are fatal.

Should the affected person subsequently require an antipsychotic, trialing a low dose of a low-potency atypical antipsychotic is recommended.

Prognosis

The prognosis is best when the syndrome is identified early and treated aggressively. In these cases NMS is not usually fatal. In earlier studies the mortality rate from NMS ranged from 20% to 38%, but by 2009 mortality rates were reported to have fallen below 10% over the previous two decades owing to early recognition and improved management. Reintroduction of the drug that originally caused NMS may trigger a recurrence, although in most cases it does not.

Memory impairment is a consistent feature of recovery from NMS; it is usually temporary, though in some cases it may become persistent.

Epidemiology

Pooled data suggest the incidence of NMS is between 0.2% and 3.23%. However, greater physician awareness coupled with increased use of atypical antipsychotics has likely reduced the prevalence of NMS. Additionally, young males are particularly susceptible, and the male-to-female ratio has been reported to be as high as 2:1.

History

NMS was known about as early as 1956, shortly after the introduction of the first phenothiazines. It was first described in 1960 by French clinicians working on a study involving haloperidol, who named the condition associated with haloperidol's side effects "syndrome malin des neuroleptiques", translated as neuroleptic malignant syndrome.

Research

While the pathophysiology of NMS remains unclear, the two most prevalent theories are:

  • Reduced dopamine activity due to receptor blockade
  • Sympathoadrenal hyperactivity and autonomic dysfunction

In the past, research and clinical studies seemed to corroborate the D2 receptor blockade theory, in which antipsychotic drugs were thought to significantly reduce dopamine activity by blocking the D2 receptors associated with this neurotransmitter. However, recent studies indicate a genetic component to the condition. In support of the proposed sympathoadrenal hyperactivity model, it has been hypothesized that a defect in calcium regulatory proteins within sympathetic neurons may bring about the onset of NMS. This model strengthens the suspected association with malignant hyperthermia, with NMS regarded as a neurogenic form of that condition, which is itself linked to defective calcium-related proteins.

The introduction of atypical antipsychotic drugs, with lower affinity to the D2 dopamine receptors, was thought to have reduced the incidence of NMS. However, recent studies suggest that the decrease in mortality may be the result of increased physician awareness and earlier initiation of treatment rather than the action of the drugs themselves. NMS induced by atypical drugs also resembles "classical" NMS (induced by "typical" antipsychotic drugs), further casting doubt on the overall superiority of these drugs.

Childhood phobia

From Wikipedia, the free encyclopedia
 
Childhood phobia
SpecialtyPsychology

A childhood phobia is an exaggerated, intense fear "that is out of proportion to any real fear" found in children. It is often characterized by a preoccupation with a particular object, class of objects, or situation that one fears. A phobic reaction is twofold—the first part being the "intense irrational fear" and the second part being "avoidance."

Children during their developmental stages experience fears. Fear is a natural part of self-preservation. Fears allow children to act with the necessary cautions to stay safe. According to Child and Adolescent Mental Health, "such fears vary in frequency, intensity, and duration; they tend to be mild, age-specific, and transitory." Fears can be a result of misperceptions. When a child perceives a threatening situation, his or her body experiences a fight or flight reaction. Children placed in new situations with unfamiliar objects are more likely to experience such reactions. These fears should be passing, a result of childhood development.

A childhood fear develops into a childhood phobia when it begins to interfere with daily living. "Acute states of fear can elicit counterproductive physiological reactions such as trembling, profuse perspiration, faint feelings, weakness in joints and muscles, nausea, diarrhea, and disturbances in motor coordination." It is not uncommon for frightened or anxious children to regress to an earlier phase of development. For example, a kindergartener might begin to baby talk or wet the bed when faced with a threatening or particularly frightening situation. Childhood phobias exist in many different varieties and intensities, ranging from tolerable to incapacitating.

Fear or phobia

The distinction between "normal" fears and phobias rests on the diagnostic criteria for a phobia, as defined by the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV):

  • An irrepressible, persistent fear of an object, activity, or situation, especially when the subject is exposed to unfamiliar people or possible criticism. In children, the subject needs to be able to show a capacity for normal social reactions for their developmental stage, and when reactions occur they should happen among their peer group as well as with adults.
  • Any exposure to the object or situation causes some form of unrestrained anxiety. In children this may be revealed by tantrums, crying, hysteria, or freezing.
  • The fear reaction is excessive and unwarranted. Note: adults who suffer from anxiety disorders usually accept that their fear reaction was disproportionate to the situation; however, children may not have the cognitive abilities to make this realization depending on age and maturity.
  • The situation is avoided or endured with large amounts of stress and anxiety.
  • The fear reaction interferes with a normal routine, e.g. if a fear of elevators causes a person to avoid taller buildings.
  • The duration is at least 6 months.
  • The origin of the fear reaction is not directly caused by the physiological effects of a drug or substance or origin of anxiety is not better classified by another disorder e.g. separation anxiety disorder.
  • If another mental or medical condition is present, it is unrelated to the origin of the fear reaction.

According to Boston Children's Hospital, a phobia is a type of anxiety disorder that occurs mostly in children. It can arise from biological, family, and environmental factors, which may be inherited or associated with particular events.

Types

Fear of abandonment

From infancy, a child can feel whether or not their mother cares for them. As a child grows and develops, they will need continued guidance until they reach adulthood. When a child's discipline is directed at them instead of their misbehavior, the child feels as if their relationship with their parents is at risk. Phrases like "Ugh, you're killing me," "I'll give you up for adoption," or "I could just kill myself" are especially harmful. These phrases can make the child unstable and overly anxious when left alone. The children perceive that they are unloved and blame themselves for the rejection.

Fear of animals

The fear of animals most often arises in the third year of life. In some cases the fear has logical origins, such as a traumatic experience with a large, seemingly furious dog. In others, however, the fear is less rational. When a child fears small, seemingly harmless animals like rabbits and kittens, it is often because the child relates the animal to something "scary" they have seen elsewhere.

Fear of darkness

One of the first fears a child can acquire is a fear of darkness. Because a child lacks the coordination and knowledge of an adult, they often allow themselves to imagine goblins and ghosts hidden in the depths of the darkness surrounding them.

Fear of strangers

The fear of strangers develops within the first six to ten months. It is characterized by crying or whimpering when introduced to unfamiliar people.

Nightmares

Nightmares "represent the fulfillment of forbidden, repressed, or rejected wishes." Dreams may consist of aggressive monsters, sexual stirrings, or something unexpected.

Cause

Though some fears are inborn, the majority are learned. Phobias develop through negative experiences and through observation. One way children begin to develop fears is by witnessing or hearing about dangers. Ollendick proposes that while some phobias may originate from a single traumatizing experience, others may have simpler, less dramatic origins, such as observing another child's phobic reaction or exposure to media that introduces phobias.

In a study reported by Child and Adolescent Mental Health, parents filled out a questionnaire regarding common origins of phobias. In this study of 50 children with a phobia of water (mean age about 5½), the results were as follows:

  • 2% of parents linked their child's phobia to a direct conditioning episode
  • 26% of parents linked their child's phobia to vicarious conditioning episodes
  • 56% of parents linked their child's phobia to their child's very first contact with water
  • 16% of parents could not directly link their child's phobia to any event

In addition to asking about the origins of a child's fear, the questionnaire asked whether parents believed that "information associated with adverse consequences was the most influential factor in the development of their child's phobia." The results were:

  • 0% of parents thought it was the most influential factor
  • 14% of parents thought it was somewhat influential
  • 86% of parents thought it had little to no influence

Treatment

Phobias are irrational; they cannot be reasoned away. In the case of most phobias, a qualified counselor or trained psychologist is needed to help a child overcome his or her phobia.

There are, however, a few things that may help a child overcome his or her fears. Parents or guardians should be supportive and encouraging to help their children overcome fears. Children should not be pushed to face their fears prematurely.

Epidemiology

According to Child and Adolescent Mental Health, approximately 5% of children suffer from specific phobias and 15% seek treatment for anxiety-related problems. In recent years the number of children with clinically diagnosed phobias has gradually increased. Researchers are finding that the majority of these diagnoses come from anxiety-related phobias or social phobias. Specific phobias are more prevalent in girls than in boys. Likewise, specific phobias are more prevalent in older children than in younger ones.

Operant conditioning

From Wikipedia, the free encyclopedia
 
[Diagram: Operant conditioning. Procedures are classified by whether they increase or decrease behavior and by whether a stimulus is added or removed.]

  • Reinforcement (increase behavior)
      • Positive reinforcement: add appetitive stimulus following correct behavior
      • Negative reinforcement
          • Escape: remove noxious stimulus following correct behavior
          • Active avoidance: behavior avoids noxious stimulus
  • Punishment (decrease behavior)
      • Positive punishment: add noxious stimulus following behavior
      • Negative punishment: remove appetitive stimulus following behavior
  • Extinction: a previously reinforced behavior is no longer reinforced
Operant conditioning (also called instrumental conditioning) is a type of associative learning process through which the strength of a behavior is modified by reinforcement or punishment. It is also a procedure that is used to bring about such learning.

Although operant and classical conditioning both involve behaviors controlled by environmental stimuli, they differ in nature. In operant conditioning, behavior is controlled by external stimuli. For example, a child may learn to open a box to get the sweets inside, or learn to avoid touching a hot stove; in operant terms, the box and the stove are "discriminative stimuli". Operant behavior is said to be "voluntary". The responses are under the control of the organism and are operants. For example, the child may face a choice between opening the box and petting a puppy.

In contrast, classical conditioning involves involuntary behavior based on the pairing of stimuli with biologically significant events. The responses are under the control of some stimulus because they are reflexes, automatically elicited by the appropriate stimuli. For example, sight of sweets may cause a child to salivate, or the sound of a door slam may signal an angry parent, causing a child to tremble. Salivation and trembling are not operants; they are not reinforced by their consequences, and they are not voluntarily "chosen".

However, both kinds of learning can affect behavior. Classically conditioned stimuli—for example, a picture of sweets on a box—might enhance operant conditioning by encouraging a child to approach and open the box. Research has shown this to be a beneficial phenomenon in cases where operant behavior is error-prone.

The study of animal learning in the 20th century was dominated by the analysis of these two sorts of learning, and they are still at the core of behavior analysis. They have also been applied to the study of social psychology, helping to clarify certain phenomena such as the false consensus effect.

Historical note

Thorndike's law of effect

Operant conditioning, sometimes called instrumental learning, was first extensively studied by Edward L. Thorndike (1874–1949), who observed the behavior of cats trying to escape from home-made puzzle boxes. A cat could escape from the box by a simple response such as pulling a cord or pushing a pole, but when first constrained, the cats took a long time to get out. With repeated trials, ineffective responses occurred less frequently and successful responses occurred more frequently, so the cats escaped more and more quickly. Thorndike generalized this finding in his law of effect, which states that behaviors followed by satisfying consequences tend to be repeated and those that produce unpleasant consequences are less likely to be repeated. In short, some consequences strengthen behavior and some consequences weaken behavior. By plotting escape time against trial number, Thorndike produced the first known animal learning curves.

Humans appear to learn many simple behaviors through the sort of process studied by Thorndike, now called operant conditioning. That is, responses are retained when they lead to a successful outcome and discarded when they do not, or when they produce aversive effects. This usually happens without being planned by any "teacher", but operant conditioning has been used by parents in teaching their children for thousands of years.

B. F. Skinner

B.F. Skinner at the Harvard Psychology Department, circa 1950

B.F. Skinner (1904–1990) is referred to as the father of operant conditioning, and his work is frequently cited in connection with this topic. His 1938 book "The Behavior of Organisms: An Experimental Analysis" initiated his lifelong study of operant conditioning and its application to human and animal behavior. Following the ideas of Ernst Mach, Skinner rejected Thorndike's reference to unobservable mental states such as satisfaction, building his analysis on observable behavior and its equally observable consequences.

Skinner believed that classical conditioning was too simplistic to be used to describe something as complex as human behavior. Operant conditioning, in his opinion, better described human behavior as it examined causes and effects of intentional behavior.

To implement his empirical approach, Skinner invented the operant conditioning chamber, or "Skinner Box", in which subjects such as pigeons and rats were isolated and could be exposed to carefully controlled stimuli. Unlike Thorndike's puzzle box, this arrangement allowed the subject to make one or two simple, repeatable responses, and the rate of such responses became Skinner's primary behavioral measure. Another invention, the cumulative recorder, produced a graphical record from which these response rates could be estimated. These records were the primary data that Skinner and his colleagues used to explore the effects on response rate of various reinforcement schedules. A reinforcement schedule may be defined as "any procedure that delivers reinforcement to an organism according to some well-defined rule". The effects of schedules became, in turn, the basic findings from which Skinner developed his account of operant conditioning. He also drew on many less formal observations of human and animal behavior.

Many of Skinner's writings are devoted to the application of operant conditioning to human behavior. In 1948 he published Walden Two, a fictional account of a peaceful, happy, productive community organized around his conditioning principles. In 1957, Skinner published Verbal Behavior, which extended the principles of operant conditioning to language, a form of human behavior that had previously been analyzed quite differently by linguists and others. Skinner defined new functional relationships such as "mands" and "tacts" to capture some essentials of language, but he introduced no new principles, treating verbal behavior like any other behavior controlled by its consequences, which included the reactions of the speaker's audience.

Concepts and procedures

Origins of operant behavior: operant variability

Operant behavior is said to be "emitted"; that is, initially it is not elicited by any particular stimulus. Thus one may ask why it happens in the first place. The answer to this question is like Darwin's answer to the question of the origin of a "new" bodily structure, namely, variation and selection. Similarly, the behavior of an individual varies from moment to moment, in such aspects as the specific motions involved, the amount of force applied, or the timing of the response. Variations that lead to reinforcement are strengthened, and if reinforcement is consistent, the behavior tends to remain stable. However, behavioral variability can itself be altered through the manipulation of certain variables.

Modifying operant behavior: reinforcement and punishment

Reinforcement and punishment are the core tools through which operant behavior is modified. These terms are defined by their effect on behavior. Either may be positive or negative.

  • Positive reinforcement and negative reinforcement increase the probability of a behavior that they follow, while positive punishment and negative punishment reduce the probability of behavior that they follow.

Another procedure is called "extinction".

  • Extinction occurs when a previously reinforced behavior is no longer reinforced with either positive or negative reinforcement. During extinction the behavior becomes less probable. Behavior that was reinforced only occasionally is typically slower to extinguish than behavior that was reinforced at every opportunity, because the organism has learned that many responses may be needed before reinforcement arrives.

There are a total of five consequences.

  1. Positive reinforcement occurs when a behavior (response) is rewarding or the behavior is followed by another stimulus that is rewarding, increasing the frequency of that behavior. For example, if a rat in a Skinner box gets food when it presses a lever, its rate of pressing will go up. This procedure is usually called simply reinforcement.
  2. Negative reinforcement (a.k.a. escape) occurs when a behavior (response) is followed by the removal of an aversive stimulus, thereby increasing the original behavior's frequency. In the Skinner Box experiment, the aversive stimulus might be a loud noise continuously inside the box; negative reinforcement would happen when the rat presses a lever to turn off the noise.
  3. Positive punishment (also referred to as "punishment by contingent stimulation") occurs when a behavior (response) is followed by an aversive stimulus. Example: pain from a spanking, which would often result in a decrease in that behavior. Positive punishment is a confusing term, so the procedure is usually referred to as "punishment".
  4. Negative punishment (penalty) (also called "punishment by contingent withdrawal") occurs when a behavior (response) is followed by the removal of a stimulus. Example: taking away a child's toy following an undesired behavior, which would result in a decrease in the undesirable behavior.
  5. Extinction occurs when a behavior (response) that had previously been reinforced is no longer effective. Example: a rat is first given food many times for pressing a lever, until the experimenter no longer gives out food as a reward. The rat would typically press the lever less often and then stop. The lever pressing would then be said to be "extinguished."
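
The five consequences above can be illustrated with a toy model. The sketch below is an illustrative assumption, not a model from the operant-conditioning literature: it nudges the probability of a response upward for either kind of reinforcement and downward for punishment or extinction, with an arbitrary learning rate of 0.1.

```python
# Toy model of the five operant consequences. The function name and the
# learning rate are illustrative assumptions, not part of operant theory.

def update_probability(p, consequence, rate=0.1):
    """Nudge response probability p toward 1 for reinforcement,
    toward 0 for punishment or extinction."""
    increases = {"positive_reinforcement", "negative_reinforcement"}
    decreases = {"positive_punishment", "negative_punishment", "extinction"}
    if consequence in increases:
        return p + rate * (1 - p)  # strengthen the behavior
    if consequence in decreases:
        return p - rate * p        # weaken the behavior
    raise ValueError(f"unknown consequence: {consequence}")

# A rat's lever-pressing becomes more likely under reinforcement...
p = 0.5
for _ in range(10):
    p = update_probability(p, "positive_reinforcement")
print(round(p, 3))  # probability has risen well above 0.5

# ...and less likely once the food is withheld (extinction).
for _ in range(10):
    p = update_probability(p, "extinction")
print(round(p, 3))
```

Note that, as in the text, it is the response probability that changes, not the "rat" itself.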

It is important to note that actors (e.g. a rat) are not spoken of as being reinforced, punished, or extinguished; it is the actions that are reinforced, punished, or extinguished. Reinforcement, punishment, and extinction are not terms whose use is restricted to the laboratory. Naturally-occurring consequences can also reinforce, punish, or extinguish behavior and are not always planned or delivered on purpose.

Schedules of reinforcement

Schedules of reinforcement are rules that control the delivery of reinforcement. The rules specify either the time that reinforcement is to be made available, or the number of responses to be made, or both. Many rules are possible, but the following are the most basic and commonly used:

  • Fixed interval schedule: Reinforcement occurs following the first response after a fixed time has elapsed after the previous reinforcement. This schedule yields a "break-run" pattern of response; that is, after training on this schedule, the organism typically pauses after reinforcement, and then begins to respond rapidly as the time for the next reinforcement approaches.
  • Variable interval schedule: Reinforcement occurs following the first response after a variable time has elapsed from the previous reinforcement. This schedule typically yields a relatively steady rate of response that varies with the average time between reinforcements.
  • Fixed ratio schedule: Reinforcement occurs after a fixed number of responses have been emitted since the previous reinforcement. An organism trained on this schedule typically pauses for a while after a reinforcement and then responds at a high rate. If the response requirement is low there may be no pause; if the response requirement is high the organism may quit responding altogether.
  • Variable ratio schedule: Reinforcement occurs after a variable number of responses have been emitted since the previous reinforcement. This schedule typically yields a very high, persistent rate of response.
  • Continuous reinforcement: Reinforcement occurs after each response. Organisms typically respond as rapidly as they can, given the time taken to obtain and consume reinforcement, until they are satiated.
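The ratio schedules above can be illustrated with a toy simulation. The following Python sketch (illustrative only; the function names and parameters are not from the article) models a fixed-ratio schedule, which pays off on exactly every n-th response, and a variable-ratio schedule, which pays off after an unpredictable number of responses with a given mean:

```python
import random

def fixed_ratio(n):
    """FR-n schedule: reinforce exactly every n-th response."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count >= n:
            count = 0
            return True       # reinforcement delivered
        return False
    return respond

def variable_ratio(mean_n, rng=None):
    """VR-mean_n schedule: reinforce after an unpredictable number
    of responses whose average is mean_n."""
    rng = rng or random.Random(0)
    count = 0
    target = rng.randint(1, 2 * mean_n - 1)
    def respond():
        nonlocal count, target
        count += 1
        if count >= target:
            count = 0
            target = rng.randint(1, 2 * mean_n - 1)
            return True
        return False
    return respond

fr5 = fixed_ratio(5)
fr_outcomes = [fr5() for _ in range(10)]   # every 5th press pays off
vr4 = variable_ratio(4)
vr_hits = sum(vr4() for _ in range(4000))  # roughly one hit per 4 responses
```

The predictability of the FR payoff is what produces the post-reinforcement pause, while the VR payoff's unpredictability is what sustains the high, steady response rate noted above.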

Factors that alter the effectiveness of reinforcement and punishment

The effectiveness of reinforcement and punishment can be changed.

  1. Satiation/Deprivation: The effectiveness of a positive or "appetitive" stimulus will be reduced if the individual has received enough of that stimulus to satisfy its appetite. The opposite effect occurs if the individual is deprived of that stimulus: the effectiveness of the consequence then increases. A subject with a full stomach is not as motivated as a hungry one.
  2. Immediacy: An immediate consequence is more effective than a delayed one. If one gives a dog a treat for sitting within five seconds, the dog will learn faster than if the treat is given after thirty seconds.
  3. Contingency: To be most effective, reinforcement should occur consistently after responses and not at other times. Learning may be slower if reinforcement is intermittent, that is, following only some instances of the same response. Responses reinforced intermittently are usually slower to extinguish than are responses that have always been reinforced.
  4. Size: The size, or amount, of a stimulus often affects its potency as a reinforcer. Humans and animals engage in cost-benefit analysis. If a lever press brings ten food pellets, lever pressing may be learned more rapidly than if a press brings only one pellet. A pile of quarters from a slot machine may keep a gambler pulling the lever longer than a single quarter.

Most of these factors serve biological functions. For example, the process of satiation helps the organism maintain a stable internal environment (homeostasis). When an organism has been deprived of sugar, for example, the taste of sugar is an effective reinforcer. When the organism's blood sugar reaches or exceeds an optimum level the taste of sugar becomes less effective or even aversive.

Shaping

Shaping is a conditioning method much used in animal training and in teaching nonverbal humans. It depends on operant variability and reinforcement, as described above. The trainer starts by identifying the desired final (or "target") behavior. Next, the trainer chooses a behavior that the animal or person already emits with some probability. The form of this behavior is then gradually changed across successive trials by reinforcing behaviors that approximate the target behavior more and more closely. When the target behavior is finally emitted, it may be strengthened and maintained by the use of a schedule of reinforcement.

Noncontingent reinforcement

Noncontingent reinforcement is the delivery of reinforcing stimuli regardless of the organism's behavior. Noncontingent reinforcement may be used in an attempt to reduce an undesired target behavior by reinforcing multiple alternative responses while extinguishing the target response. As no measured behavior is identified as being strengthened, there is controversy surrounding the use of the term noncontingent "reinforcement".

Stimulus control of operant behavior

Though initially operant behavior is emitted without an identified reference to a particular stimulus, during operant conditioning operants come under the control of stimuli that are present when behavior is reinforced. Such stimuli are called "discriminative stimuli." A so-called "three-term contingency" is the result. That is, discriminative stimuli set the occasion for responses that produce reward or punishment. Examples: a rat may be trained to press a lever only when a light comes on; a dog rushes to the kitchen when it hears the rattle of its food bag; a child reaches for candy when s/he sees it on a table.

Discrimination, generalization & context

Most behavior is under stimulus control. Several aspects of this may be distinguished:

  • Discrimination typically occurs when a response is reinforced only in the presence of a specific stimulus. For example, a pigeon might be fed for pecking at a red light and not at a green light; in consequence, it pecks at red and stops pecking at green. Many complex combinations of stimuli and other conditions have been studied; for example an organism might be reinforced on an interval schedule in the presence of one stimulus and on a ratio schedule in the presence of another.
  • Generalization is the tendency to respond to stimuli that are similar to a previously trained discriminative stimulus. For example, having been trained to peck at "red" a pigeon might also peck at "pink", though usually less strongly.
  • Context refers to stimuli that are continuously present in a situation, like the walls, tables, chairs, etc. in a room, or the interior of an operant conditioning chamber. Context stimuli may come to control behavior as do discriminative stimuli, though usually more weakly. Behaviors learned in one context may be absent, or altered, in another. This may cause difficulties for behavioral therapy, because behaviors learned in the therapeutic setting may fail to occur in other situations.

Behavioral sequences: conditioned reinforcement and chaining

Most behavior cannot easily be described in terms of individual responses reinforced one by one. The scope of operant analysis is expanded through the idea of behavioral chains, which are sequences of responses bound together by the three-term contingencies defined above. Chaining is based on the fact, experimentally demonstrated, that a discriminative stimulus not only sets the occasion for subsequent behavior, but it can also reinforce a behavior that precedes it. That is, a discriminative stimulus is also a "conditioned reinforcer". For example, the light that sets the occasion for lever pressing may be used to reinforce "turning around" in the presence of a noise. This results in the sequence "noise – turn-around – light – press lever – food". Much longer chains can be built by adding more stimuli and responses.
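The example chain above can be written out as a small data structure. The following Python sketch (the names are illustrative, taken from the example in the text) represents a chain as ordered (discriminative stimulus, response) links and walks it to produce the full event sequence ending in primary reinforcement:

```python
# Two-link chain from the example: each stimulus sets the occasion for
# its response, and the stimulus following a response also serves as a
# conditioned reinforcer for that response.
chain = [("noise", "turn_around"), ("light", "press_lever")]

def run_chain(chain, primary_reinforcer="food"):
    """Walk a behavioral chain and return the full event sequence."""
    events = []
    for stimulus, response in chain:
        events.append(stimulus)    # discriminative stimulus appears...
        events.append(response)    # ...and occasions this response
    events.append(primary_reinforcer)  # chain ends in primary reinforcement
    return events

sequence = run_chain(chain)
# -> ['noise', 'turn_around', 'light', 'press_lever', 'food']
```

Adding more (stimulus, response) pairs to the list builds a longer chain, mirroring how trainers extend chains one link at a time.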

Escape and avoidance

In escape learning, a behavior terminates an (aversive) stimulus. For example, shielding one's eyes from sunlight terminates the (aversive) stimulation of bright light in one's eyes. (This is an example of negative reinforcement, defined above.) Behavior that is maintained by preventing a stimulus is called "avoidance," as, for example, putting on sun glasses before going outdoors. Avoidance behavior raises the so-called "avoidance paradox", for, it may be asked, how can the non-occurrence of a stimulus serve as a reinforcer? This question is addressed by several theories of avoidance (see below).

Two kinds of experimental settings are commonly used: discriminated and free-operant avoidance learning.

Discriminated avoidance learning

A discriminated avoidance experiment involves a series of trials in which a neutral stimulus such as a light is followed by an aversive stimulus such as a shock. After the neutral stimulus appears, an operant response such as a lever press prevents or terminates the aversive stimulus. In early trials, the subject does not make the response until the aversive stimulus has come on, so these early trials are called "escape" trials. As learning progresses, the subject begins to respond during the neutral stimulus and thus prevents the aversive stimulus from occurring. Such trials are called "avoidance trials." This experiment is said to involve classical conditioning because a neutral CS (conditioned stimulus) is paired with the aversive US (unconditioned stimulus); this idea underlies the two-factor theory of avoidance learning described below.

Free-operant avoidance learning

In free-operant avoidance a subject periodically receives an aversive stimulus (often an electric shock) unless an operant response is made; the response delays the onset of the shock. In this situation, unlike discriminated avoidance, no prior stimulus signals the shock. Two crucial time intervals determine the rate of avoidance learning. The first is the S-S (shock-shock) interval: the time between successive shocks in the absence of a response. The second is the R-S (response-shock) interval: the time by which an operant response delays the onset of the next shock. Note that each time the subject performs the operant response, the R-S interval begins anew.
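The interaction of the S-S and R-S intervals can be made concrete with a short calculation. The following Python sketch (a simplified model, not an experimental protocol from the article) computes when shocks would be delivered in a session, given the times at which the subject responds:

```python
def shock_times(response_times, s_s=5.0, r_s=20.0, session_end=60.0):
    """Shock delivery times in a free-operant avoidance session:
    with no responding, shocks recur every s_s seconds (S-S interval);
    each response postpones the next shock to r_s seconds after the
    response (R-S interval)."""
    shocks = []
    next_shock = s_s                      # first shock after one S-S interval
    for r in sorted(response_times):
        while next_shock <= r:            # shocks the subject failed to avoid
            shocks.append(next_shock)
            next_shock += s_s             # S-S interval restarts the clock
        next_shock = r + r_s              # response restarts the R-S interval
    while next_shock <= session_end:      # remainder of the session
        shocks.append(next_shock)
        next_shock += s_s
    return shocks
```

With no responding, a 60-second session at S-S = 5 s yields a shock every 5 seconds (12 in all); three well-timed responses, each buying 20 shock-free seconds, avoid all but one of them. This is why a well-trained subject can hold shocks near zero by responding at a modest, steady rate.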

Two-process theory of avoidance

This theory was originally proposed in order to explain discriminated avoidance learning, in which an organism learns to avoid an aversive stimulus by escaping from a signal for that stimulus. Two processes are involved: classical conditioning of the signal followed by operant conditioning of the escape response:

a) Classical conditioning of fear. Initially the organism experiences the pairing of a CS with an aversive US. The theory assumes that this pairing creates an association between the CS and the US through classical conditioning and, because of the aversive nature of the US, the CS comes to elicit a conditioned emotional reaction (CER) – "fear."

b) Reinforcement of the operant response by fear-reduction. As a result of the first process, the CS now signals fear; this unpleasant emotional reaction serves to motivate operant responses, and responses that terminate the CS are reinforced by fear termination. Note that the theory does not say that the organism "avoids" the US in the sense of anticipating it, but rather that the organism "escapes" an aversive internal state that is caused by the CS.

Several experimental findings seem to run counter to two-factor theory. For example, avoidance behavior often extinguishes very slowly even when the initial CS-US pairing never occurs again, so the fear response might be expected to extinguish (see Classical conditioning). Further, animals that have learned to avoid often show little evidence of fear, suggesting that escape from fear is not necessary to maintain avoidance behavior.

Operant or "one-factor" theory

Some theorists suggest that avoidance behavior may simply be a special case of operant behavior maintained by its consequences. In this view the idea of "consequences" is expanded to include sensitivity to a pattern of events. Thus, in avoidance, the consequence of a response is a reduction in the rate of aversive stimulation. Indeed, experimental evidence suggests that a "missed shock" is detected as a stimulus, and can act as a reinforcer. Cognitive theories of avoidance take this idea a step farther. For example, a rat comes to "expect" shock if it fails to press a lever and to "expect no shock" if it presses it, and avoidance behavior is strengthened if these expectancies are confirmed.

Operant hoarding

Operant hoarding refers to the observation that rats reinforced in a certain way may allow food pellets to accumulate in a food tray instead of retrieving those pellets. In this procedure, retrieval of the pellets always instituted a one-minute period of extinction during which no additional food pellets were available but those that had been accumulated earlier could be consumed. This finding appears to contradict the usual finding that rats behave impulsively in situations in which there is a choice between a smaller food object right away and a larger food object after some delay. See schedules of reinforcement.

Neurobiological correlates

The first scientific studies identifying neurons that responded in ways that suggested they encode for conditioned stimuli came from work by Mahlon deLong and by R.T. Richardson. They showed that nucleus basalis neurons, which release acetylcholine broadly throughout the cerebral cortex, are activated shortly after a conditioned stimulus, or after a primary reward if no conditioned stimulus exists. These neurons are equally active for positive and negative reinforcers, and have been shown to be related to neuroplasticity in many cortical regions. Evidence also exists that dopamine is activated at similar times. There is considerable evidence that dopamine participates in both reinforcement and aversive learning. Dopamine pathways project much more densely onto frontal cortex regions. Cholinergic projections, in contrast, are dense even in the posterior cortical regions like the primary visual cortex. A study of patients with Parkinson's disease, a condition attributed to the insufficient action of dopamine, further illustrates the role of dopamine in positive reinforcement. It showed that while off their medication, patients learned more readily with aversive consequences than with positive reinforcement. Patients who were on their medication showed the opposite to be the case, positive reinforcement proving to be the more effective form of learning when dopamine activity is high.

A neurochemical process involving dopamine has been suggested to underlie reinforcement. When an organism experiences a reinforcing stimulus, dopamine pathways in the brain are activated. This network of pathways "releases a short pulse of dopamine onto many dendrites, thus broadcasting a global reinforcement signal to postsynaptic neurons." This allows recently activated synapses to increase their sensitivity to efferent (conducting outward) signals, thus increasing the probability of occurrence for the recent responses that preceded the reinforcement. These responses are, statistically, the most likely to have been the behavior responsible for successfully achieving reinforcement. But when the application of reinforcement is either less immediate or less contingent (less consistent), the ability of dopamine to act upon the appropriate synapses is reduced.

Questions about the law of effect

A number of observations seem to show that operant behavior can be established without reinforcement in the sense defined above. Most cited is the phenomenon of autoshaping (sometimes called "sign tracking"), in which a stimulus is repeatedly followed by reinforcement, and in consequence the animal begins to respond to the stimulus. For example, a response key is lighted and then food is presented. When this is repeated a few times a pigeon subject begins to peck the key even though food comes whether the bird pecks or not. Similarly, rats begin to handle small objects, such as a lever, when food is presented nearby. Strikingly, pigeons and rats persist in this behavior even when pecking the key or pressing the lever leads to less food (omission training). Another apparent operant behavior that appears without reinforcement is contrafreeloading.

These observations and others appear to contradict the law of effect, and they have prompted some researchers to propose new conceptualizations of operant reinforcement. A more general view is that autoshaping is an instance of classical conditioning; the autoshaping procedure has, in fact, become one of the most common ways to measure classical conditioning. In this view, many behaviors can be influenced by both classical contingencies (stimulus-response) and operant contingencies (response-reinforcement), and the experimenter's task is to work out how these interact.

Applications

Reinforcement and punishment are ubiquitous in human social interactions, and a great many applications of operant principles have been suggested and implemented. The following are some examples.

Addiction and dependence

Positive and negative reinforcement play central roles in the development and maintenance of addiction and drug dependence. An addictive drug is intrinsically rewarding; that is, it functions as a primary positive reinforcer of drug use. The brain's reward system assigns it incentive salience (i.e., it is "wanted" or "desired"), so as an addiction develops, deprivation of the drug leads to craving. In addition, stimuli associated with drug use – e.g., the sight of a syringe, and the location of use – become associated with the intense reinforcement induced by the drug. These previously neutral stimuli acquire several properties: their appearance can induce craving, and they can become conditioned positive reinforcers of continued use. Thus, if an addicted individual encounters one of these drug cues, a craving for the associated drug may reappear. For example, anti-drug agencies previously used posters with images of drug paraphernalia as an attempt to show the dangers of drug use. However, such posters are no longer used because of the effects of incentive salience in causing relapse upon sight of the stimuli illustrated in the posters.

In drug dependent individuals, negative reinforcement occurs when a drug is self-administered in order to alleviate or "escape" the symptoms of physical dependence (e.g., tremors and sweating) and/or psychological dependence (e.g., anhedonia, restlessness, irritability, and anxiety) that arise during the state of drug withdrawal.

Animal training

Animal trainers and pet owners were applying the principles and practices of operant conditioning long before these ideas were named and studied, and animal training still provides one of the clearest and most convincing examples of operant control. Of the concepts and procedures described in this article, a few of the most salient are the following: (a) availability of primary reinforcement (e.g. a bag of dog yummies); (b) the use of secondary reinforcement, (e.g. sounding a clicker immediately after a desired response, then giving yummy); (c) contingency, assuring that reinforcement (e.g. the clicker) follows the desired behavior and not something else; (d) shaping, as in gradually getting a dog to jump higher and higher; (e) intermittent reinforcement, as in gradually reducing the frequency of reinforcement to induce persistent behavior without satiation; (f) chaining, where a complex behavior is gradually constructed from smaller units.

Animal training relies on both positive and negative reinforcement, and the schedule of reinforcement can play a large role in how quickly a behavior is acquired and how persistently it is maintained.

Applied behavior analysis

Applied behavior analysis is the discipline initiated by B. F. Skinner that applies the principles of conditioning to the modification of socially significant human behavior. It uses the basic concepts of conditioning theory, including conditioned stimulus (SC), discriminative stimulus (Sd), response (R), and reinforcing stimulus (Srein or Sr for reinforcers, sometimes Save for aversive stimuli). A conditioned stimulus controls behaviors developed through respondent (classical) conditioning, such as emotional reactions. The other three terms combine to form Skinner's "three-term contingency": a discriminative stimulus sets the occasion for responses that lead to reinforcement. Researchers have found the following protocol to be effective when they use the tools of operant conditioning to modify human behavior:

  1. State goal Clarify exactly what changes are to be brought about. For example, "reduce weight by 30 pounds."
  2. Monitor behavior Keep track of behavior so that one can see whether the desired effects are occurring. For example, keep a chart of daily weights.
  3. Reinforce desired behavior For example, congratulate the individual on weight losses. With humans, a record of behavior may serve as a reinforcement. For example, when a participant sees a pattern of weight loss, this may reinforce continuance in a behavioral weight-loss program. However, individuals may perceive reinforcement which is intended to be positive as negative and vice versa. For example, a record of weight loss may act as negative reinforcement if it reminds the individual how heavy they actually are. A token economy is an exchange system in which tokens are given as rewards for desired behaviors. Tokens may later be exchanged for a desired prize or rewards such as power, prestige, goods or services.
  4. Reduce incentives to perform undesirable behavior For example, remove candy and fatty snacks from kitchen shelves.

Practitioners of applied behavior analysis (ABA) bring these procedures, and many variations and developments of them, to bear on a variety of socially significant behaviors and issues. In many cases, practitioners use operant techniques to develop constructive, socially acceptable behaviors to replace aberrant behaviors. The techniques of ABA have been effectively applied to such things as early intensive behavioral interventions for children with an autism spectrum disorder (ASD), research on the principles influencing criminal behavior, HIV prevention, conservation of natural resources, education, gerontology, health and exercise, industrial safety, language acquisition, littering, medical procedures, parenting, psychotherapy, seatbelt use, severe mental disorders, sports, substance abuse, phobias, pediatric feeding disorders, and zoo management and care of animals. Some of these applications are among those described below.

Child behavior – parent management training

Providing positive reinforcement for appropriate child behaviors is a major focus of parent management training. Typically, parents learn to reward appropriate behavior through social rewards (such as praise, smiles, and hugs) as well as concrete rewards (such as stickers or points towards a larger reward as part of an incentive system created collaboratively with the child). In addition, parents learn to select simple behaviors as an initial focus and reward each of the small steps that their child achieves towards reaching a larger goal (this concept is called "successive approximations").

Economics

Both psychologists and economists have become interested in applying operant concepts and findings to the behavior of humans in the marketplace. An example is the analysis of consumer demand, as indexed by the amount of a commodity that is purchased. In economics, the degree to which price influences consumption is called "the price elasticity of demand." Certain commodities are more elastic than others; for example, a change in price of certain foods may have a large effect on the amount bought, while gasoline and other everyday consumables may be less affected by price changes. In terms of operant analysis, such effects may be interpreted in terms of motivations of consumers and the relative value of the commodities as reinforcers.
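The price elasticity of demand mentioned above has a standard formula: the percentage change in quantity demanded divided by the percentage change in price, often computed with the midpoint ("arc") method. The following Python sketch uses hypothetical numbers (not data from the article) to contrast an elastic food item with an inelastic everyday consumable:

```python
def arc_elasticity(p1, q1, p2, q2):
    """Arc (midpoint) price elasticity of demand between two observations
    (p1, q1) and (p2, q2)."""
    pct_q = (q2 - q1) / ((q1 + q2) / 2)   # percent change in quantity
    pct_p = (p2 - p1) / ((p1 + p2) / 2)   # percent change in price
    return pct_q / pct_p

# Hypothetical: a food item responds sharply to a price rise from 4 to 5...
e_food = arc_elasticity(4.0, 100, 5.0, 60)    # -2.25: elastic (|e| > 1)
# ...while an everyday consumable such as gasoline barely responds.
e_gas = arc_elasticity(4.0, 100, 5.0, 95)     # about -0.23: inelastic
```

In operant terms, a small |e| corresponds to a commodity whose reinforcing value keeps consumption steady even as its "cost" (the response or money required) rises.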

Gambling – variable ratio scheduling

As stated earlier in this article, a variable ratio schedule yields reinforcement after the emission of an unpredictable number of responses. This schedule typically generates rapid, persistent responding. Slot machines pay off on a variable ratio schedule, and they produce just this sort of persistent lever-pulling behavior in gamblers. The variable ratio payoff from slot machines and other forms of gambling has often been cited as a factor underlying gambling addiction.

Military psychology

Human beings have an innate resistance to killing and are reluctant to act in a direct, aggressive way towards members of their own species, even to save life. This resistance to killing has caused infantry to be remarkably inefficient throughout the history of military warfare.

This phenomenon was not understood until S.L.A. Marshall (Brigadier General and military historian) undertook interview studies of WWII infantry immediately following combat engagement. Marshall's well-known and controversial book, Men Against Fire, revealed that only 15% of soldiers fired their rifles with the purpose of killing in combat. Following acceptance of Marshall's research by the US Army in 1946, the Human Resources Research Office of the US Army began implementing new training protocols which resemble operant conditioning methods. Subsequent applications of such methods increased the percentage of soldiers able to kill to around 50% in Korea and over 90% in Vietnam. Revolutions in training included replacing traditional pop-up firing ranges with three-dimensional, man-shaped, pop-up targets which collapsed when hit. This provided immediate feedback and acted as positive reinforcement for a soldier's behavior. Other improvements to military training methods have included the timed firing course; more realistic training; high repetitions; praise from superiors; marksmanship rewards; and group recognition. Negative reinforcement includes peer accountability or the requirement to retake courses. Modern military training conditions mid-brain response to combat pressure by closely simulating actual combat, using mainly Pavlovian classical conditioning and Skinnerian operant conditioning (both forms of behaviorism).

Modern marksmanship training is such an excellent example of behaviorism that it has been used for years in the introductory psychology course taught to all cadets at the US Military Academy at West Point as a classic example of operant conditioning. In the 1980s, during a visit to West Point, B.F. Skinner identified modern military marksmanship training as a near-perfect application of operant conditioning.

Lt. Col. Dave Grossman states about operant conditioning and US Military training that:

It is entirely possible that no one intentionally sat down to use operant conditioning or behavior modification techniques to train soldiers in this area…But from the standpoint of a psychologist who is also a historian and a career soldier, it has become increasingly obvious to me that this is exactly what has been achieved.

Nudge theory

Nudge theory (or nudge) is a concept in behavioural science, political theory and economics which argues that indirect suggestions aimed at achieving non-forced compliance can influence the motives, incentives and decision making of groups and individuals at least as effectively as – if not more effectively than – direct instruction, legislation, or enforcement.

Praise

The concept of praise as a means of behavioral reinforcement is rooted in B.F. Skinner's model of operant conditioning. Through this lens, praise has been viewed as a means of positive reinforcement, wherein an observed behavior is made more likely to occur by contingently praising said behavior. Hundreds of studies have demonstrated the effectiveness of praise in promoting positive behaviors, notably in studies of teachers' and parents' use of praise with children to promote improved behavior and academic performance, but also in studies of work performance. Praise has also been demonstrated to reinforce positive behaviors in non-praised adjacent individuals (such as a classmate of the praise recipient) through vicarious reinforcement. Praise may be more or less effective in changing behavior depending on its form, content and delivery. In order for praise to effect positive behavior change, it must be contingent on the positive behavior (i.e., administered only after the targeted behavior is enacted), must specify the particulars of the behavior that is to be reinforced, and must be delivered sincerely and credibly.

Acknowledging the effect of praise as a positive reinforcement strategy, numerous behavioral and cognitive behavioral interventions have incorporated the use of praise in their protocols. The strategic use of praise is recognized as an evidence-based practice in both classroom management and parenting training interventions, though praise is often subsumed in intervention research into a larger category of positive reinforcement, which includes strategies such as strategic attention and behavioral rewards.

Several studies have examined the effect of cognitive-behavioral therapy and operant-behavioral therapy on various medical conditions. When patients developed cognitive and behavioral techniques that changed their behaviors, attitudes, and emotions, their pain severity decreased. The results of these studies showed an influence of cognitions on pain perception, which helps explain the general efficacy of cognitive-behavioral therapy (CBT) and operant-behavioral therapy (OBT).

Psychological manipulation

Braiker identified a number of ways, grounded in reinforcement principles, in which manipulators control their victims.

Traumatic bonding

Traumatic bonding occurs as the result of ongoing cycles of abuse in which the intermittent reinforcement of reward and punishment creates powerful emotional bonds that are resistant to change. Another source describes the conditions this way: 'The necessary conditions for traumatic bonding are that one person must dominate the other and that the level of abuse chronically spikes and then subsides. The relationship is characterized by periods of permissive, compassionate, and even affectionate behavior from the dominant person, punctuated by intermittent episodes of intense abuse. To maintain the upper hand, the victimizer manipulates the behavior of the victim and limits the victim's options so as to perpetuate the power imbalance. Any threat to the balance of dominance and submission may be met with an escalating cycle of punishment ranging from seething intimidation to intensely violent outbursts. The victimizer also isolates the victim from other sources of support, which reduces the likelihood of detection and intervention, impairs the victim's ability to receive countervailing self-referent feedback, and strengthens the sense of unilateral dependency...The traumatic effects of these abusive relationships may include the impairment of the victim's capacity for accurate self-appraisal, leading to a sense of personal inadequacy and a subordinate sense of dependence upon the dominating person. Victims also may encounter a variety of unpleasant social and legal consequences of their emotional and behavioral affiliation with someone who perpetrated aggressive acts, even if they themselves were the recipients of the aggression.'

Video games

The majority of video games are designed around a compulsion loop, adding a type of positive reinforcement through a variable rate schedule to keep the player playing. This can lead to the pathology of video game addiction.

As part of a trend in the monetization of video games during the 2010s, some games offered loot boxes as rewards or as items purchasable with real-world funds. Each box contains a random selection of in-game items. The practice has been tied to the same methods by which slot machines and other gambling devices dole out rewards, as it follows a variable rate schedule. While loot boxes are widely perceived as a form of gambling, the practice is classified as such in only a few countries. However, methods of using those items as virtual currency for online gambling, or trading them for real-world money, have created a skin gambling market that is under legal evaluation.

Workplace culture of fear

Ashforth discussed potentially destructive sides of leadership and identified what he referred to as petty tyrants: leaders who exercise a tyrannical style of management, resulting in a climate of fear in the workplace. Partial or intermittent negative reinforcement can create an effective climate of fear and doubt. When employees get the sense that bullies are tolerated, a climate of fear may be the result.

Individual differences in sensitivity to reward, punishment, and motivation have been studied under the premises of reinforcement sensitivity theory and have also been applied to workplace performance.

One of the many reasons proposed for the dramatic costs associated with healthcare is the practice of defensive medicine. Prabhu reviews the article by Cole and discusses how the responses of two groups of neurosurgeons are classic operant behavior. One group practiced in a state with restrictions on medical lawsuits, the other in a state with no restrictions. Both groups were queried anonymously on their practice patterns. The physicians in the state with no restrictions on medical lawsuits changed their practice in response to negative feedback (fear of lawsuits).
