Sigmund Freud
invoked the notion of regression in relation to his theory of dreams
(1900) and sexual perversions (1905), but the concept itself was first
elaborated in his paper "The Disposition to Obsessional Neurosis"
(1913). In 1914, he added a paragraph to The Interpretation of Dreams
that distinguished three kinds of regression, which he called
topographical regression, temporal regression, and formal regression.
Freud, regression, and neurosis
Freud saw inhibited development, fixation, and regression as centrally formative elements in the creation of a neurosis. Arguing that "the libidinal function goes through a lengthy development", he assumed that "a development of this kind involves two dangers – first, of inhibition, and secondly, of regression". Inhibitions produced fixations, and the "stronger the fixations on its
path of development, the more readily will the function evade external
difficulties by regressing to the fixations".
Neurosis for Freud was thus the product of a flight from an
unsatisfactory reality "along the path of involution, of regression, of a
return to earlier phases of sexual life, phases from which at one time
satisfaction was not withheld. This regression appears to be a twofold
one: a temporal one, in so far as the libido, the erotic needs, hark back to stages of development that are earlier in time, and a formal one, in that the original and primitive methods of psychic expression are employed in manifesting those needs".
Behaviors associated with regression can vary greatly depending upon the stage of fixation: fixation at the oral stage might result in excessive eating or smoking, or in verbal aggression, whereas fixation at the anal stage
might result in excessive tidiness or messiness. Freud recognised that
"it is possible for several fixations to be left behind in the course of
development, and each of these may allow an irruption of the libido
that has been pushed off – beginning, perhaps, with the later acquired
fixations, and going on, as the illness develops, to the original
ones".
In the service of the ego
Ernst Kris
supplemented Freud's general formulations with a specific notion of
"regression in the service of the ego", described as 'the specific means whereby
preconscious and unconscious material appear in the creator's
consciousness'. Kris thus opened the way for ego psychology to take a more positive view of regression. Carl Jung
had earlier argued that 'the patient's regressive tendency...is not
just a relapse into infantilism, but an attempt to get at something
necessary...the universal feeling of childhood innocence, the sense of
security, of protection, of reciprocated love, of trust'. Kris, however, was concerned rather to differentiate the way that
'Inspiration -...in which the ego controls the primary process and puts
it into its service – needs to be contrasted with the
opposite...condition, in which the ego is overwhelmed by the primary
process'.
Nevertheless, his view of regression in the service of the ego could be readily extended into a quasi-Romantic
image of the creative process, in which 'it is only in the fiery storm
of a profound regression, in the course of which the personality
undergoes both dissolution of structure and reorganization, that the
genius becomes capable of wresting himself from the traditional pattern
that he had been forced to integrate through the identifications
necessitated and enforced by the oedipal constellation'.
From there it was perhaps only a small step to the 1960s
valorisation of regression as a positive good in itself. 'In this
particular type of journey, the direction we have to take is back and in....They
will say we are regressed and withdrawn and out of contact with them.
True enough, we have a long, long way to go back to contact the reality'. Jungians, however, had already warned that 'romantic regression meant a
surrender to the non-rational side which had to be paid for by a
sacrifice of the rational and individual side'; and Freud for his part had dourly noted that 'this extraordinary
plasticity of mental developments is not unrestricted in direction; it
may be described as a special capacity for involution – regression –
since it may well happen that a later and higher level of development,
once abandoned, cannot be reached again'.
Later views
Anna Freud (1936) ranked regression first in her enumeration of the defense mechanisms, and similarly suggested that people act out behaviors from the stage of
psychosexual development in which they are fixated. For example, an
individual fixated at an earlier developmental stage might cry or sulk
upon hearing unpleasant news.
Michael Balint
distinguished between two types of regression: a nasty "malignant"
regression that the Oedipal-level neurotic is prone to... and the
"benign" regression of the basic-fault patient. The problem then is what the analyst can do 'to ensure that his
patient's regression should be therapeutic and any danger of a
pathological regression avoided'.
Others have highlighted the technical dilemmas of dealing with
regression from different if complementary angles. On the one hand,
making premature 'assumptions about the patient's state of regression in
the therapy...regarded as still at the breast', for example, might
block awareness of more adult functioning on the patient's part, including the patient's view of the therapist. The opposite mistake would be 'justifying a retreat from regressive
material presented by a patient. When a patient begins to trust the
analyst or therapist it will be just such disturbing aspects of the
internal world that will be presented for understanding – not for a
panic retreat by the therapist'.
Peter Blos
suggested that 'revisiting of early psychic positions...helps the
adolescent come out of the family envelope', and that 'Regression during
adolescence thus advances the cause of development'. Stanley Olinick
speaks of 'regression in the service of the other' on the part of the
analyst 'during his or her clinical work. Such ego regression is a
pre-condition for empathy'.
Demonstration of pain, impairment, and the like also relates to
regression. When regression becomes the cornerstone of a personality and
the life strategy for overcoming problems, it results in an
infantile personality.
In fiction
A clear example of regressive behavior in fiction can be seen in J.D. Salinger's The Catcher in the Rye.
Holden constantly resists the progression of time and the aging
process by reverting to childish ideas of escape, unrealistic
expectations, and the frustration produced by his numerous shifts in
behavior. His tendencies to reject responsibility and society as a whole
because he 'doesn't fit in' also push him to prolonged use of reaction formation, unnecessary generalizations, and compulsive lying.
A similar example occurs in Samuel Beckett's Krapp's Last Tape.
Krapp is fixated on reliving earlier times, and reenacts the fetal
condition in his 'den'. He is unable to form mature relationships with
women, seeing them only as replacements for his deceased mother. He
experiences physical ailments that are linked to his fetal complex,
struggling to perform digestive functions on his own. This literal anal retentiveness exemplifies his inefficacy as an independent adult.
Nucleic acid metabolism refers to the set of chemical reactions involved in the synthesis and degradation of nucleic acids (DNA and RNA). Nucleic acids are polymers (biopolymers) composed of monomers called nucleotides.
Nucleotide synthesis is an anabolic process that typically involves the chemical reaction of a phosphate group, a pentose sugar, and a nitrogenous base. In contrast, the degradation of nucleic acids is a catabolic process in which nucleotides or nucleobases are broken down, and their components can be salvaged to form new nucleotides.
Both synthesis and degradation reactions require multiple enzymes to facilitate these processes. Defects or deficiencies in these enzymes can lead to a variety of metabolic disorders.
Composition of nucleotides, which make up nucleic acids.
Synthesis of nucleotides
Nucleotides are the monomers that polymerize to form nucleic acids. Each nucleotide consists of a sugar, a phosphate group, and a nitrogenous base. The nitrogenous bases found in nucleic acids belong to one of two categories: purines or pyrimidines.
In complex multicellular animals, both purines and pyrimidines are primarily synthesized in the liver, but they follow distinct biosynthetic pathways. However, all nucleotide synthesis requires phosphoribosyl pyrophosphate (PRPP), which donates the ribose and phosphate needed to form a nucleotide.
Purine synthesis
In purine synthesis, the purine ring is assembled stepwise on the PRPP scaffold, yielding inosine monophosphate (IMP). IMP serves as a precursor for both adenosine monophosphate (AMP) and guanosine monophosphate (GMP). AMP is synthesized from IMP using guanosine triphosphate (GTP) and aspartate, with aspartate being converted into fumarate. In contrast, the synthesis of GMP requires an intermediate step: IMP is first oxidized by NAD⁺ to form xanthosine monophosphate (XMP), which is subsequently converted into GMP via the hydrolysis of one ATP molecule and the conversion of glutamine to glutamate.
Both AMP and GMP can be phosphorylated by kinases to form adenosine triphosphate (ATP) and guanosine triphosphate
(GTP), respectively. ATP stimulates the production of GTP, while GTP
stimulates the production of ATP. This cross-regulation maintains a
balanced ratio of ATP and GTP, preventing an excess of either
nucleotide, which could increase the risk of DNA replication errors and
purine misincorporation.
Lesch–Nyhan syndrome is caused by a deficiency of hypoxanthine-guanine phosphoribosyltransferase
(HGPRT), an enzyme that catalyzes the salvage of guanine to GMP. This
X-linked congenital disorder leads to the overproduction of uric acid and is associated with neurological symptoms, including intellectual disability, spasticity, and compulsive self-mutilation.
Pyrimidine synthesis
Uridine-triphosphate
(UTP), at left, reacts with glutamine and other molecules to form
cytidine-triphosphate (CTP), on the right.
The synthesis of pyrimidine nucleotides begins with the formation of uridine monophosphate (UMP). This process requires aspartate, glutamine, bicarbonate, and two molecules of ATP to provide energy. Additionally, phosphoribosyl pyrophosphate
(PRPP) provides the ribose-phosphate backbone. Unlike purine synthesis,
in which the nitrogenous base is built upon PRPP, pyrimidine synthesis
forms the base first and attaches it to PRPP later in the process.
Once UMP is synthesized, it undergoes phosphorylation using ATP
to form uridine-triphosphate (UTP). UTP can then be converted into
cytidine-triphosphate (CTP) in a reaction catalyzed by CTP synthetase, which utilizes glutamine as an amine donor.
The synthesis of thymidine nucleotides requires the reduction of
UDP to dUDP via ribonucleotide reductase (see next section), followed by dephosphorylation to deoxyuridine monophosphate (dUMP). dUMP is then methylated by thymidylate synthase to produce deoxythymidine monophosphate (dTMP).
The regulation of pyrimidine synthesis is tightly controlled. ATP,
a purine nucleotide, activates pyrimidine synthesis, while CTP, a
pyrimidine nucleotide, acts as an inhibitor. This regulatory feedback
ensures balanced purine and pyrimidine levels, which is essential for
DNA and RNA synthesis.
Deficiencies in enzymes involved in pyrimidine synthesis can lead to metabolic disorders such as orotic aciduria.
This genetic disorder is characterized by excessive excretion of orotic
acid in urine due to defects in the enzyme UMP synthase, which is
responsible for the conversion of orotic acid into UMP.
Converting nucleotides to deoxynucleotides
Nucleotides are initially synthesized with ribose as the sugar component, a characteristic feature of RNA. However, DNA requires deoxyribose, which lacks the 2'-hydroxyl (-OH) group on the ribose. The removal of this -OH group is catalyzed by ribonucleotide reductase,
an enzyme that converts nucleoside diphosphates (NDPs) into their deoxy
forms, deoxynucleoside diphosphates (dNDPs). The nucleotides must be in
the diphosphate form for this reaction to occur.
Degradation of nucleotides
General outline of nucleic acid degradation for purines.
The breakdown of DNA and RNA occurs continuously within the cell.
Purine and pyrimidine nucleosides can either be degraded into waste
products for excretion or salvaged for reuse as nucleotide components.
Deficiencies in enzymes involved in pyrimidine catabolism can lead to diseases such as dihydropyrimidine dehydrogenase deficiency, which causes neurological impairments.
Purine catabolism
Purine
degradation primarily occurs in the liver in humans and requires a
series of enzymes to break down purines into uric acid. First,
nucleotides lose their phosphate groups through the action of 5'-nucleotidase. The purine nucleoside adenosine is then deaminated by adenosine deaminase to form inosine, which is hydrolyzed by a nucleosidase to form hypoxanthine. Hypoxanthine is subsequently oxidized to xanthine and then to uric acid via the enzyme xanthine oxidase.
The other purine nucleoside, guanosine, is cleaved to form guanine. Guanine is then deaminated by guanine deaminase
to produce xanthine, which is further converted to uric acid. In both
degradation pathways, oxygen serves as the final electron acceptor. The
excretion of uric acid varies among different animals.
Defects in purine catabolism can lead to various diseases, including gout, which results from the accumulation of uric acid crystals in joints, and adenosine deaminase deficiency, which causes immunodeficiency.
Interconversion of nucleotides
Once
nucleotides are synthesized, they can exchange phosphate groups to form
nucleoside mono-, di-, and triphosphates. The conversion of a
nucleoside diphosphate (NDP) to a nucleoside triphosphate (NTP) is
catalyzed by nucleoside diphosphate kinase, which utilizes ATP as the phosphate donor. Similarly, nucleoside monophosphate kinase facilitates the phosphorylation of nucleoside monophosphates to their diphosphate forms.
Additionally, adenylate kinase
plays a crucial role in regulating cellular energy balance by
catalyzing the interconversion of two molecules of ADP into ATP and AMP
(2 ADP ⇔ ATP + AMP).
The fugitive slave laws were laws passed by the United States Congress in 1793 and 1850 to provide for the return of slaves who escaped from one state into another state or territory. The idea of the fugitive slave law was derived from the Fugitive Slave Clause of the United States Constitution (Article IV, Section 2, Paragraph 3). Some argued that compelling states to return fugitive slaves to their masters violated states' rights and state sovereignty, since the seizure of what slaveholders regarded as private property would otherwise have been a matter for the states. The Fugitive Slave Clause states that fugitive
slaves "shall be delivered up on Claim of the Party to whom such Service
or Labour may be due", thereby abridging states' rights, since apprehending
runaway slaves was a form of retrieving private property. The Compromise of 1850
entailed a series of laws that allowed slavery in the new territories
and forced officials in free states to give a hearing to slave-owners
without a jury.
Pre-colonial and colonial eras
Slavery
in the 13 colonies, 1770. Numbers show actual and estimated slave
population by colony. Colors show the slave population as a percentage
of each colony's total population. Boundaries shown are based on 1860
state boundaries, not those of 1770 colonies.
The New England Articles of Confederation
of 1643 contained a clause that provided for the return of fugitive
slaves. However, this only referred to the confederation of colonies of
Massachusetts, Plymouth, Connecticut, and New Haven, and was unrelated to the Articles of Confederation of the United States formed after the Declaration of Independence. There were African and Native American slaves in New England beginning in the 17th century. The Articles for the New England Confederation provided for the return of slaves in Section 8:
It is also agreed that if any
servant ran away from his master into any other of these confederated
Jurisdictions, that in such case, upon the certificate of one magistrate
in the Jurisdiction out of which the said servant fled, or upon other
due proof; the said servant shall be delivered, either to his master, or
any other that pursues and brings such certificate or proof.
As the colonies expanded with waves of settlers pushing westward,
slavery continued in the English territories and in former Dutch
territories like New Amsterdam, prompting further legislation of a similar nature. Serious attempts at formulating a uniform policy for the capture of
escaped slaves began under the Articles of Confederation of the United
States in 1785.
1785 attempt
There were two attempts at implementing a fugitive slave law in the Congress of the Confederation.
The Ordinance of 1784 was drafted by a Congressional committee headed by Thomas Jefferson,
and its provisions applied to all United States territory west of the
original 13 states. The original version was read to Congress on March
1, 1784, and it contained a clause stating:
That after the year 1800 of the Christian Era, there
shall be neither slavery nor involuntary servitude in any of the said
states, otherwise than in punishment of crimes, whereof the party shall
have been duly convicted to have been personally guilty.
Rufus King's failed resolution to re-implement the slavery prohibition in the Ordinance of 1784.
This was removed prior to final enactment of the ordinance on 23
April 1784. However, the issue did not die there, and on 6 April 1785 Rufus King
introduced a resolution to re-implement the slavery prohibition in the
1784 ordinance, containing a fugitive slave provision in the hope that
this would reduce opposition to the objective of the resolution. The
resolution contained the phrase:
Provided always, that upon the escape of any person into
any of the states described in the said resolve of Congress of the 23d
day of April, 1784, from whom labor or service is lawfully claimed in
any one of the thirteen original states, such fugitive may be lawfully
reclaimed and carried back to the person claiming his labor or service
as aforesaid, this resolve notwithstanding.
While the original 1784 ordinance applied to all U.S. territory that
was not a part of any existing state (and thus, to all future states),
the 1787 ordinance applied only to the Northwest Territory.
Northwest Ordinance of 1787
Congress made a further attempt to address the concerns of slave owners in 1787 by passing the Northwest Ordinance of 1787. The law appeared to outlaw slavery, which would have reduced the voting power
of slave states in Congress, but southern representatives were more concerned
with economic competition from potential slaveholders in the new
territory, and its effect on the prices of staple crops
such as tobacco. They correctly predicted that slavery would be
permitted south of the Ohio River under the Southwest Ordinance of 1790, and therefore did not view the Northwest Ordinance as a threat. In practice, moreover, the ordinance did not end slavery in the territory, where it persisted almost until the start of the Civil War.
King's phrasing from the 1785 attempt was incorporated in the Northwest Ordinance of 1787 when it was enacted on 13 July 1787. Article 6 has the provision for fugitive slaves:
Art. 6. There shall be neither slavery nor involuntary
servitude in the said territory, otherwise than in the punishment of
crimes whereof the party shall have been duly convicted: Provided, always,
That any person escaping into the same, from whom labor or service is
lawfully claimed in any one of the original States, such fugitive may be
lawfully reclaimed and conveyed to the person claiming his or her labor
or service as aforesaid.
In 1793, Congress passed "An Act respecting fugitives from justice,
and persons escaping from the service of their masters", more
commonly known as the Fugitive Slave Act, to fulfill the Article IV
requirement to return escaped slaves.
Section 3 mandates the return of fugitives:
SEC. 3. ... That when a person held to labor in any of
the United States, or of the Territories on the Northwest or South of
the river Ohio ... shall escape into any other part of the said States
or Territory, the person to whom such labor or service may be due ... is
hereby empowered to seize or arrest such fugitive from labor ... and
upon proof ... before any Judge ... it shall be the duty of such Judge
... [to remove] the said fugitive from labor to the State or Territory
from which he or she fled.
Section 4 makes assisting runaways and fugitives a crime and outlines the punishment for those who assisted runaway slaves:
SEC. 4. ... That any person who shall knowingly and
willingly obstruct or hinder such claimant ... shall ... forfeit and pay
the sum of five hundred dollars.
High demand for slaves in the Deep South and the hunt for fugitives
caused free blacks to be at risk of being kidnapped and sold into
slavery, despite having "free" papers. Many people who were legally free
and had never been slaves were captured and brought south to be sold
into slavery. The historian Carol Wilson documented 300 such cases in Freedom at Risk (1994) and estimated there were likely thousands of others.
In the early 19th century, personal liberty laws
were passed to hamper officials in the execution of the law, but this
came mostly after the abolition of the slave trade, as there had been
very little support for abolition before then; Indiana in 1824 and Connecticut in 1828 provided jury trial for fugitives who appealed from an original decision against them. In 1840, New York and Vermont
extended the right of trial by jury to fugitives and provided them with
attorneys. As early as the first decade of the 19th century, individual
dissatisfaction with the law of 1793 had taken the form of systematic
assistance rendered to African Americans escaping from the South to Canada or New England: the so-called Underground Railroad.
The decision of the Supreme Court in the case of Prigg v. Pennsylvania
in 1842 (16 Peters 539)—that state authorities could not be forced to
act in fugitive slave cases, but that national authorities must carry
out the national law—was followed by legislation in Massachusetts (1843), Vermont (1843), Pennsylvania (1847) and Rhode Island (1848), forbidding state officials from aiding in enforcing the law and refusing the use of state jails for fugitive slaves.
Massachusetts
had abolished slavery in 1783, but the Fugitive Slave Law of 1850
required government officials to assist slavecatchers in capturing
fugitives within the state.
The demand from the South for more effective Federal legislation led to the second fugitive slave law, drafted by Senator James Murray Mason of Virginia, grandson of George Mason, and enacted on September 18, 1850, as a part of the Compromise of 1850. Special commissioners were to have concurrent jurisdiction
with the U.S. circuit and district courts and the inferior courts of
territories in enforcing the law; fugitives could not testify in their
own behalf; no trial by jury was provided.
Penalties were imposed upon marshals who refused to enforce the
law or from whom a fugitive should escape, and upon individuals who
aided black people to escape; the marshal might raise a posse comitatus; a fee of $10 (equivalent to $378 in 2024) was paid to the commissioner when his decision favored the claimant, only $5 (equivalent to $189 in 2024) when it favored the fugitive. The supposed justification for the disparity in compensation was that,
if the decision were in favor of the claimant, additional effort on the
part of the commissioner would be required in order to fill out the
paperwork actually remanding the slave back to the South. Both the fact of the escape and the identity of the fugitive were determined on purely ex parte
testimony. If a slave was brought in and returned to the master, the
person who brought in the slave would receive the sum of $10 (equivalent
to $378 in 2024) per slave.
The severity of this measure led to gross abuses and defeated its purpose; the number of abolitionists increased, the operations of the Underground Railroad became more efficient, and new personal liberty laws were enacted in Vermont (1850), Connecticut (1854), Rhode Island (1854), Massachusetts (1855), Michigan (1855), Maine (1855 and 1857), Kansas (1858) and Wisconsin (1858). The personal liberty laws forbade justices and judges to take cognizance of claims, extended habeas corpus
and the privilege of jury trial to fugitives, and punished false
testimony severely. In 1854, the Supreme Court of Wisconsin went so far
as to declare the Fugitive Slave Act unconstitutional.
These state laws were one of the grievances that South Carolina
would later use to justify its secession from the Union. Attempts to
carry into effect the law of 1850 aroused much bitterness. The arrests of Thomas Sims and of Shadrach Minkins in Boston in 1851; of Jerry M. Henry, in Syracuse, New York, in the same year; of Anthony Burns in 1854, in Boston; and of the Garner family in 1856, in Cincinnati, with other cases arising under the Fugitive Slave Law of 1850, probably had as much to do with bringing on the Civil War as did the controversy over slavery in the Territories.
A Ride for Liberty—The Fugitive Slaves (c. 1862) by Eastman Johnson, Brooklyn Museum
With the beginning of the Civil War, the legal status of many slaves was changed by their masters being in arms. Major General Benjamin Franklin Butler, in May 1861, declared that Confederate slaves used for military purposes such as building fortifications were contraband of war. The Confiscation Act of 1861
was passed in August 1861, and discharged from service or labor any
slave employed in aiding or promoting any insurrection against the
government of the United States.
By the congressional Act Prohibiting the Return of Slaves of March 13, 1862, any slave of a disloyal master who was in territory occupied by Northern troops was declared ipso facto
free. But for some time the Fugitive Slave Law was considered still to
hold in the case of fugitives from masters in the border states who were
loyal to the Union government, and it was not until June 28, 1864, that
the Act of 1850 was fully repealed.
In psychology, psychoanalysis, and psychotherapy, projection is the mental process in which an individual attributes their own internal thoughts, beliefs, emotions, experiences, and personality traits to another person or group.
[T]he process by which one attributes one’s own individual positive or negative characteristics, affects, and impulses to another person or group... often a defense mechanism
in which unpleasant or unacceptable impulses, stressors, ideas,
affects, or responsibilities are attributed to others. For example, the
defense mechanism of projection enables a person conflicted over
expressing anger to change “I hate them” to “They hate me.” Such
defensive patterns are often used to justify prejudice or evade
responsibility.
History
A prominent precursor in the formulation of the projection principle was Giambattista Vico. In 1841, Ludwig Feuerbach was the first Enlightenment thinker to employ this concept as the basis for a systematic critique of religion.
The Babylonian Talmud
(500 AD) notes the human tendency toward projection and warns against
it: "Do not taunt your neighbour with the blemish you yourself have." In the parable of the Mote and the Beam in the New Testament, Jesus warned against projection:
Why
do you look at the speck of sawdust in your brother's eye and pay no
attention to the plank in your own eye? How can you say to your brother,
'Let me take the speck out of your eye,' when all the time there is a
plank in your own eye? You hypocrite, first take the plank out of your
own eye, and then you will see clearly to remove the speck from your
brother's eye.
Freud
Projection (German: Projektion) was first conceptualised by Sigmund Freud in his letters to Wilhelm Fliess, and further refined by Karl Abraham and Anna Freud.
Freud argued that in projection, thoughts, motivations, desires, and
feelings that cannot be accepted as one's own are dealt with by being
placed in the outside world and attributed to someone else. Freud would later argue that projection did not take place arbitrarily, but rather seized on and exaggerated an element that already existed on a small scale in the other person.
According to Freud, projective identification occurs when the other person introjects, or unconsciously adopts, that which is projected onto them. In projective identification, the self maintains a connection with what is projected, in contrast to the total repudiation of projection proper.
Further psychoanalytic development
Freud conceptualised projection within his broader theory of psychoanalysis and the id, ego, and superego. Later psychoanalysts have interpreted and developed Freud's theory of projection in varied ways.
Otto Fenichel argued that projection involves that which the ego refuses to accept, which is thus split off and placed in another.
Melanie Klein saw the projection of good parts of the self as leading potentially to over-idealisation of the object. Equally, it may be one's conscience that is projected, in an attempt to
escape its control: a more benign version of this allows one to come to
terms with outside authority.
Carl Jung considered that the unacceptable parts of the personality represented by the Shadow archetype were particularly likely to give rise to projection, both small-scale and on a national/international basis. Marie-Louise von Franz extended this view of projection, stating that "wherever known reality stops, where we touch the unknown, there we project an archetypal image".
Erik Erikson argued that projection tends to come to the fore in normal people at times of personal or political crisis.
Historical clinical use
Drawing on Gordon Allport's
idea of the expression of self onto activities and objects, projective
techniques have been devised to aid personality assessment, including
the Rorschach ink-blots and the Thematic Apperception Test (TAT).
Theoretical views
Psychoanalytic theory
According to some psychoanalysts, projection forms the basis of empathy by the projection of personal experiences to understand someone else's subjective world. In its malignant forms, projection is a defense mechanism in which the ego defends itself against disowned and highly negative parts of the self by denying their existence in themselves and attributing them to others, breeding misunderstanding and causing interpersonal damage. Projection incorporates blame shifting and can manifest as shame dumping. It has also been described as an early phase of introjection.
In psychoanalytical and psychodynamic terms, projection may help a fragile ego reduce anxiety, but at the cost of a certain dissociation, as in dissociative identity disorder. In extreme cases, an individual's personality may end up becoming critically depleted. In such cases, therapy may be required which would include the slow
rebuilding of the personality through the "taking back" of such
projections.
Psychotherapy and counselling
Counter-projection
Jung
wrote, "All projections provoke counter-projection when the object is
unconscious of the quality projected upon it by the subject." Jung argued that what is unconscious in the recipient will be projected
back onto the projector, precipitating a form of mutual acting out. In a different usage, Harry Stack Sullivan saw counter-projection in the therapeutic context as a way of warding off the compulsive re-enactment of a psychological trauma, by emphasizing the difference between the current situation and the projected obsession with the perceived perpetrator of the original trauma.
Psychoanalytic and psychodynamic techniques
The method of managed projection
is a projective technique. The basic principle of this method is that a
subject is presented with their own verbal portrait, attributed to the name
of another person, together with a portrait of a fictional
opposite. The technique may be suitable for application in psychological
counseling and might provide valuable information about the form and
nature of the subject's self-esteem (Bodalev, A. (2000), "General psychodiagnostics").
Psychobiography
Psychological projection is one of the medical explanations of bewitchment used to explain the behavior of the afflicted children at Salem in 1692. The historian John Demos
wrote in 1970 that the symptoms of bewitchment displayed by the
afflicted girls could have been due to the girls undergoing
psychological projection of repressed aggression.
Types
In victim blaming,
the victim of someone else's actions or bad luck may be offered
criticism, the theory being that the victim may be at fault for having
attracted the other person's hostility. According to some theorists, in
such cases the psyche projects its experiences of weakness or
vulnerability in order to rid itself of those feelings and, through its
disdain for them or through the act of blaming, of their conflict with
the ego.
Thoughts of infidelity to a partner may also be unconsciously projected in self-defence on to the partner in question, so that the guilt attached to the thoughts can be repudiated or turned to blame instead, in a process linked to denial. For example, a person who is having a sexual affair may fear that their
spouse is planning an affair or may accuse the innocent spouse of adultery.
A bully may project their own feelings of vulnerability
onto the target(s) of the bullying activity. Despite the fact that a
bully's typically denigrating activities are aimed at the bully's
targets, the true source of such negativity is ultimately almost always
found in the bully's own sense of personal insecurity or vulnerability. Such aggressive projections of displaced negative emotions can occur anywhere from the micro-level of interpersonal relationships, all the way up to the macro-level of international politics, or even international armed conflict.
Projection of a severe conscience is another form of defense, one which may be linked to the making of false accusations, personal or political. In a more positive light, a patient may sometimes project their feelings of hope onto the therapist. People in love "reading" each other's mind involves a projection of the self into the other.
Criticism
Research on social projection supports the existence of a false-consensus effect
whereby humans have a broad tendency to believe that others are similar
to themselves, and thus "project" their personal traits onto others. This applies to both good and bad traits; it is not a defense mechanism for denying the existence of the trait within the self.
A study of the empirical evidence for a range of defense
mechanisms by Baumeister, Dale, and Sommer (1998) concluded, "The view
that people defensively project specific bad traits of their own onto
others as a means of denying that they have them is not well supported." However, Newman, Duff, and Baumeister (1997) proposed a new model of defensive projection in which the repressor's efforts to suppress thoughts
of their undesirable traits make those trait categories highly
accessible—so that they are then used all the more often when forming
impressions of others. The projection is then only a byproduct of the
real defensive mechanism.
Efficiency wage
In labor economics, an efficiency wage is a wage paid in excess of the market-clearing wage to increase the labor productivity of workers. Specifically, it points to the incentive for managers to pay their
employees more than the market-clearing wage to increase their productivity or to reduce the costs associated with employee turnover.
Theories of efficiency wages explain the existence of involuntary unemployment in economies outside of recessions, providing for a natural rate of unemployment above zero. Because workers are paid more than the equilibrium wage, workers may experience periods of unemployment in which workers compete for a limited supply of well-paying jobs.
Overview of theory
There are several reasons why managers may pay efficiency wages:
Avoiding shirking: If it is difficult to measure the quantity or quality of a worker's effort – and systems of piece rates or commissions
are therefore impossible – there may be an incentive for the worker to "shirk" (do
less work than agreed). The manager thus may pay an efficiency wage in
order to create or increase the cost of job loss, which gives a sting to
the threat of firing. This threat can be used to prevent shirking.
Minimizing turnover: By paying above-market wages, the
worker's motivation to leave the job and look for a job elsewhere will
be reduced. This strategy also reduces the expense of training
replacement workers.
Selection: If job performance depends on workers' ability and
workers differ from each other in those terms, firms with higher wages
will attract more able job-seekers, and this may make it profitable to
offer wages that exceed the market clearing level.
Sociological theories: Efficiency wages may result from traditions. Akerlof's theory (in very simple terms) involves higher wages encouraging high morale, which raises productivity.
Nutritional theories: In developing countries,
efficiency wages may allow workers to eat well enough to avoid illness
and to work harder and more productively.
The model of efficiency wages, largely based on shirking, developed by Carl Shapiro and Joseph E. Stiglitz has been particularly influential.
In
the Shapiro-Stiglitz model workers are paid at a level where they do
not shirk. This prevents wages from dropping to market-clearing levels.
Full employment cannot be achieved because workers would shirk if they
were not threatened with the possibility of unemployment. The curve for
the no-shirking condition (labeled NSC) goes to infinity at full
employment.
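The way this no-shirking wage diverges near full employment can be made concrete with a small numeric sketch. Everything below is an illustrative assumption rather than Shapiro and Stiglitz's own calibration: the parameter values (effort cost e, detection rate q, separation rate b, interest rate r) are hypothetical, and the job-finding rate a = b(1 - u)/u is the one implied by steady-state labor-market flows at unemployment rate u.

```python
# Illustrative no-shirking condition (NSC) in the spirit of the
# Shapiro-Stiglitz model; all parameter values are hypothetical.

def no_shirk_wage(u, e=1.0, q=0.5, b=0.1, r=0.05):
    """Lowest wage at which working beats shirking, at unemployment rate u.

    e: cost of effort, q: rate at which shirkers are caught,
    b: exogenous separation rate, r: interest rate.
    """
    a = b * (1.0 - u) / u              # steady-state job acquisition rate
    return e + (e / q) * (r + b + a)   # critical wage on the NSC curve

# The required wage rises without bound as unemployment vanishes:
for u in (0.20, 0.10, 0.05, 0.01):
    print(f"u = {u:.2f} -> no-shirk wage = {no_shirk_wage(u):.2f}")
```

As u falls toward zero the job-finding rate explodes, so the wage needed to deter shirking grows without limit: this is the sense in which the NSC curve goes to infinity at full employment.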
The shirking model is a theory in which employers voluntarily pay
employees above the market equilibrium level to increase worker
productivity. It begins with the fact that complete contracts rarely (or never)
exist in the real world. This implies that both parties to the contract
have some discretion, but frequently, due to monitoring problems, it is the
employee's side of the bargain that is subject to the most discretion.
Methods such as piece rates are often impracticable because monitoring
is too costly or inaccurate; or they may be based on measures too
imperfectly verifiable by workers, creating a moral hazard
problem on the employer's side. Thus, paying a wage in excess of
market-clearing may provide employees with cost-effective incentives to
work rather than shirk.
In the Shapiro and Stiglitz model, workers either work or shirk,
and if they shirk they have a certain probability of being caught, with
the penalty of being fired. Equilibrium then entails unemployment, because to create an opportunity cost
to shirking, firms try to raise their wages above the market average
(so that sacked workers face a probabilistic loss). But since all firms
do this, the market wage itself is pushed up, and the result is that
wages are raised above market-clearing, creating involuntary unemployment. This leaves the unemployed with a low- or no-income alternative, which makes job loss
costly and serves as a worker discipline device. Unemployed workers
cannot bid for jobs by offering to work at lower wages since, if hired,
it would be in the worker's interest to shirk on the job, and he has no
credible way of promising not to do so. Shapiro and Stiglitz point out
that their assumption that workers are identical (e.g. there is no
stigma to having been fired) is a strong one – in practice, reputation can work as an additional
disciplining device. Conversely, higher wages and unemployment increase
the cost of finding a new job after being laid off. So in the shirking
model, higher wages are also a monetary incentive.
The Shapiro-Stiglitz model thus holds that unemployment threatens
workers, and the stronger that threat, the more willing workers are to
supply effort rather than shirk. This view highlights the endogenous
decision-making of workers in the labor market: of the many factors
influencing workers' behavior and labor supply, the threat of
unemployment is an essential one. When workers are at risk of losing
their jobs, they tend to raise their productivity and efficiency by
working harder, improving their chances of remaining employed. This
endogenous response of behavior and supply can somewhat alleviate the
unemployment problem in the labor market.
The shirking model does not predict that the bulk of the
unemployed at any one time are those fired for shirking, because if the
threat associated with being fired is effective, little or no shirking
and sacking will occur. Instead, the unemployed will consist of a
rotating pool of individuals who have quit for personal reasons, are new
entrants to the labour market, or have been laid off for other reasons.
Pareto optimality,
with costly monitoring, will entail some unemployment since
unemployment plays a socially valuable role in creating work incentives.
But the equilibrium unemployment rate will not be Pareto optimal since
firms do not consider the social cost of the unemployment they helped to
create.
One criticism of the efficiency wage hypothesis is that more
sophisticated employment contracts can, under certain conditions, reduce
or eliminate involuntary unemployment. Seniority wages, for instance, can
solve the incentive problem: initially, workers are paid less than
their marginal productivity, and as they work effectively over time within the firm, earnings rise until they exceed marginal productivity. The upward tilt in the age-earnings profile here provides the incentive
to avoid shirking, while the present value of wages can fall to the
market-clearing level, eliminating involuntary unemployment. The slope
of the earnings profile is thus significantly shaped by incentives.
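The seniority-wage argument can be illustrated with a small sketch. The numbers (a 20-period career, a constant marginal product of 100, a wage slope of 4, a 5% discount rate) are hypothetical; the point is only that a tilted profile can have the same present value as the flat market-clearing profile while underpaying workers early and overpaying them late.

```python
# Hypothetical seniority-wage profile with the same present value as a
# flat profile paying the (assumed) marginal product every period.

def present_value(payments, r=0.05):
    return sum(p / (1 + r) ** t for t, p in enumerate(payments))

T, m = 20, 100.0                 # career length and flat marginal product
flat = [m] * T                   # market-clearing benchmark
slope = 4.0                      # wage grows by 4 each period

# pick the starting wage so both profiles have equal present value
pv_time = sum(t / 1.05 ** t for t in range(T))
base = m - slope * pv_time / present_value([1.0] * T)
tilted = [base + slope * t for t in range(T)]

assert abs(present_value(tilted) - present_value(flat)) < 1e-6
print(tilted[0] < m < tilted[-1])   # underpaid early, overpaid late
```

The rising profile gives the worker a growing stake in keeping the job, which is exactly what deters shirking in this class of contracts.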
However, a significant criticism is that moral hazard would be
shifted to the employers, since they are responsible for monitoring the
worker's effort: wanting maximum work for the wages they have committed
to, firms would have an obvious incentive to declare shirking when it
has not taken place. In the Lazear model, firms have apparent incentives
to fire older workers (paid above marginal product) and hire new
cheaper workers, creating a credibility problem. The seriousness of this
employer moral hazard depends on how much effort can be monitored by
outside auditors, so that firms cannot cheat. However, reputation
effects (e.g. Lazear 1981) may be able to do the same job.
Labor turnover
"Labor
turnover" refers to workers' movement from one position to
another. It is typically measured as the ratio of the number of workers
who leave to the total number employed. With regard to the efficiency wage hypothesis, firms also offer wages
in excess of market-clearing, due to the high cost of replacing workers
(search, recruitment, training costs). If all firms are identical, one possible equilibrium involves all firms
paying a common wage rate above the market-clearing level, with
involuntary unemployment serving to diminish turnover. These models can
easily be adapted to explain dual labor markets:
if low-skill, labor-intensive firms have lower turnover costs (as seems
likely), there may be a split between a low-wage, low-effort,
high-turnover sector and a high-wage, high effort, low-turnover sector.
Again, more sophisticated employment contracts may solve the problem.
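A minimal sketch of the turnover logic, using a made-up quit-rate function and replacement cost: the firm trades a higher wage bill against fewer costly replacements, and the cost-minimising wage comes out above the market-clearing level. All numbers are hypothetical.

```python
# Toy turnover model: wages above the market-clearing level W_STAR lower
# the quit rate, and each quit carries a fixed replacement cost.

W_STAR = 100.0   # assumed market-clearing wage

def quit_rate(w):
    # quits fall as pay rises relative to the market, bottoming out at zero
    return max(0.0, 0.5 - 0.004 * (w - W_STAR))

def per_worker_cost(w, replacement_cost=500.0):
    return w + replacement_cost * quit_rate(w)

# grid search over candidate wages for the cost-minimising offer
best_cost, best_wage = min((per_worker_cost(w), w) for w in range(100, 250))
print(best_wage > W_STAR)   # the optimal wage exceeds market-clearing
```

With high replacement costs and a quit rate sensitive to relative pay, paying above the market rate is cheaper than churning through workers at the market-clearing wage.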
Selection
Similar to the shirking model, the selection model holds that
information asymmetry is the main reason the market cannot function
fully to eliminate involuntary unemployment. Unlike the shirking model,
which focuses on employee shirking, however, the selection model
emphasizes the employer's information disadvantage regarding labor
quality. Because the real quality of employees cannot be observed
accurately, firms know only that high wages attract high-quality
employees and that wage cuts drive the high-quality employees away
first. Wages therefore do not keep falling in the face of involuntary
unemployment, because firms want to maintain the quality of their
workforce.
In selection wage theories it is presupposed that performance on
the job depends on "ability", and that workers are heterogeneous
concerning ability. The selection effect of higher wages may come about
through self-selection or because firms with a larger pool of applicants
can increase their hiring standards and obtain a more productive
workforce. Workers with higher abilities are more likely to earn more
wages, and companies are willing to pay higher wages to hire
high-quality people as employees.
Self-selection (often referred to as adverse selection) comes about if the workers’ ability and reservation wages are positively correlated. The basic assumption of efficiency wage theory is that the efficiency
of workers increases with their wage. In this case, companies
face a trade-off between hiring productive workers at higher salaries
or less effective workers at lower wages. Choosing the wage that
minimizes the cost per unit of effective labor yields the so-called
Solow condition: at the optimal wage, the elasticity of effort with
respect to the wage equals one. The condition is usually derived under
the assumptions that firms operate in a competitive market and cannot
control market wages, and that individual workers are price takers
rather than price setters. If there are two
kinds of firms (low and high wage), then we effectively have two sets of
lotteries (since firms cannot screen), the difference being that
high-ability workers do not enter the low-wage lotteries as their
reservation wage is too high. Thus low-wage firms attract only
low-ability lottery entrants, while high-wage firms attract workers of
all abilities (i.e. on average, they will select average workers).
Therefore high-wage firms are paying an efficiency wage – they pay more
and, on average, get more. However, the assumption that firms cannot
measure effort, pay piece rates once workers are hired, or fire workers
whose output is too low is quite strong. Firms may also be able to
design self-selection or screening devices that induce workers to reveal
their true characteristics.
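The "lottery" argument above can be sketched with toy numbers. The ability levels and the reservation-wage rule are invented for illustration; the mechanism is simply that, with ability and reservation wages positively correlated, a low wage screens out precisely the high-ability applicants.

```python
# Toy adverse-selection sketch: reservation wages rise with ability,
# so the applicant pool a firm draws depends on its wage offer.
# All numbers are hypothetical.

workers = [{"ability": a, "reservation_wage": 50 + 0.6 * a}
           for a in range(10, 101, 10)]

def applicant_pool(offer):
    """Abilities of workers willing to apply at the offered wage."""
    return [w["ability"] for w in workers if w["reservation_wage"] <= offer]

def average(xs):
    return sum(xs) / len(xs)

low_pool = applicant_pool(70)     # low-wage firm's lottery
high_pool = applicant_pool(115)   # high-wage firm's lottery
print(average(low_pool), average(high_pool))   # high-wage pool is abler
```

The low-wage firm's pool contains only the low-ability applicants, while the high-wage firm draws from the whole distribution, so on average it hires abler workers: it pays more and gets more.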
High wages can reduce personnel turnover, encourage employees to work
harder, discourage mass resignations, and attract more high-quality
applicants. If firms can assess the productivity of applicants, they will try to
select the best among the applicants. A higher wage offer will attract
more applicants, particularly more highly qualified ones. This permits a
firm to raise its hiring standard, thereby enhancing its productivity. Wage compression makes it profitable for firms to screen applicants under such circumstances, and selection wages may be necessary.
Sociological models
Fairness, norms, and reciprocity
Standard economic models ("neoclassical economics") assume that people pursue only their self-interest and do not care about "social" goals ("homo economicus").
Neoclassical economics rests on three methodological commitments, namely
methodological individualism, methodological instrumentalism, and
methodological equilibration. Some attention has been paid to the idea that people may be altruistic, but it is only with the addition of reciprocity and norms of fairness that the model becomes accurate. Thus of crucial importance is the idea of exchange: a person who is
altruistic towards another expects the other to fulfil some fairness
norm, be it reciprocating in kind, in some different but – according to
some shared standard – equivalent way, or simply by being grateful. If
the expected reciprocation is not forthcoming, the altruism is unlikely
to be repeated or continued. In addition, similar norms of
fairness will typically lead people into negative forms of reciprocity,
too – in retaliation for acts perceived as vindictive. This can bind
actors into vicious loops where vindictive acts are met with further
vindictive acts.
In practice, despite the neat logic of standard neoclassical
models, these sociological models do impinge upon many economic
relations, though in different ways and to different degrees. For
example, suppose an employee has been exceptionally loyal. In that case,
a manager may feel some obligation to treat that employee well, even
when it is not in his (narrowly defined, economic) self-interest. It
would appear that although broader, longer-term economic benefits may
result (e.g. through reputation, or perhaps through simplified
decision-making according to fairness norms), a significant factor must
be that there are noneconomic benefits the manager receives, such as not
having a guilty conscience (loss of self-esteem). For real-world,
socialised, normal human beings (as opposed to abstracted factors of
production), this is likely to be the case quite often. As a
quantitative estimate of the importance of this, the total value of
voluntary labor in the US – $74 billion annually – will suffice. Examples of the negative aspect of fairness include consumers
"boycotting" firms they disapprove of by not buying products they
otherwise would (and therefore settling for second-best); and employees
sabotaging firms they feel hard done by.
Rabin (1993) offers three stylised facts as a starting point on
how norms affect behaviour: (a) people are prepared to sacrifice their
material well-being to help those who are being kind; (b) they are also
prepared to do this to punish those being unkind; (c) both (a) and (b)
have a greater effect on behaviour as the material cost of sacrificing
(in relative rather than absolute terms) becomes smaller. Rabin supports
his Fact A by Dawes and Thaler's (1988) survey of the experimental
literature, which concludes that for most one-shot public good
decisions in which the individually optimal contribution is close to
0%, the contribution rate ranges from 40 to 60% of the socially optimal
level. Fact B is demonstrated by the "ultimatum game" (e.g. Thaler
1988), where an amount of money is split between two people, one
proposing a division, the other accepting or rejecting (where rejection
means both get nothing). Rationally, the proposer should offer no more
than a penny, and the decider accept any offer of at least a penny.
Still, in practice, even in one-shot settings, proposers make fair
proposals, and deciders are prepared to punish unfair offers by
rejecting them. Fact C is tested and partially confirmed by Gerald
Leventhal and David Anderson (1970), but is also reasonably intuitive.
In the ultimatum game, a 90% split (regarded as unfair) is (intuitively)
far more likely to be punished if the amount to be split is $1 than $1
million.
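The arithmetic behind Fact C in the example above is worth spelling out; the 90/10 split and the two pie sizes are the ones given in the text.

```python
# Absolute cost to the responder of punishing an unfair 90/10 split by
# rejecting it, at the two stakes mentioned in the text.

def cost_of_rejecting(pie, responder_share=0.10):
    """Amount the responder forfeits by rejecting the offer."""
    return responder_share * pie

print(cost_of_rejecting(1.0))          # ten cents on a $1 pie
print(cost_of_rejecting(1_000_000.0))  # about $100,000 on a $1m pie
```

The relative sacrifice is identical in both cases (10% of the pie), but the absolute cost differs by five orders of magnitude, which is why rejection is intuitively far likelier with the small pie.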
A crucial point (as noted in Akerlof 1982) is that notions of
fairness depend on the status quo and other reference points.
Experiments (Fehr and Schmidt 2000) and surveys (Kahneman, Knetsch, and
Thaler 1986) indicate that people have clear notions of fairness based
on particular reference points (disagreements can arise in the choice of
reference point). Thus, for example, firms who raise prices or lower
wages to take advantage of increased demand or increased labour supply
are frequently perceived as acting unfairly, where the same changes are
deemed acceptable when the firm makes them due to increased costs
(Kahneman et al.). In other words, in people's intuitive "naïve
accounting" (Rabin 1993), a key role is played by the idea of
entitlements embodied in reference points (although as Dufwenberg and
Kirchsteiger 2000 point out, there may be informational problems, e.g.
for workers in determining what the firm's profit is, given tax
avoidance and stock-price considerations). In particular, it is
perceived as unfair for actors to increase their share at the expense of
others. However, over time such a change may become entrenched and form
a new reference point which (typically) is no longer in itself deemed
unfair.
Sociological efficiency wage models
Solow
(1981) argued that wage rigidity may be partly due to social
conventions and principles of appropriate behaviour, which are not
entirely individualistic. Akerlof (1982) provided the first explicitly sociological model leading
to the efficiency wage hypothesis. Using a variety of evidence from
sociological studies, Akerlof argues that worker effort depends on the
work norms of the relevant reference group. In Akerlof's partial gift exchange model,
the firm can raise group work norms and average effort by paying
workers a gift of wages over the minimum required in return for effort
above the minimum required. The sociological model can explain phenomena
inexplicable on neoclassical terms, such as why firms do not fire
workers who turn out to be less productive, why piece rates are so
little used even where quite feasible; and why firms set work standards
exceeded by most workers. A possible criticism is that workers do not
necessarily view high wages as gifts, but as merely fair (particularly
since typically 80% or more of workers consider themselves in the top
quarter of productivity), in which case they will not reciprocate with
high effort.
Akerlof and Yellen
(1990), responding to these criticisms and building on work from
psychology, sociology, and personnel management, introduce "the fair
wage-effort hypothesis", which states that workers form a notion of the fair wage, and if the actual wage is lower, withdraw effort in proportion, so that, depending on the wage-effort elasticity
and the costs to the firm of shirking, the fair wage may form a key
part of the wage bargain. This explains persistent evidence of
consistent wage differentials across industries (e.g. Slichter 1950;
Dickens and Katz 1986; Krueger and Summers 1988): if firms must pay high
wages to some groups of workers – perhaps because they are in short
supply or for other efficiency-wage reasons such as shirking – then
demands for fairness will lead to a compression of the pay scale, and
wages for different groups within the firm will be higher than in other
industries or firms.
The union threat model is one of several explanations for industry wage differentials. This Keynesian economics
model looks at the role of unions in wage determination. The degree to
which union wages exceed non-union wages is known as the union wage premium. Some firms seek to prevent unionization in the first instance. Varying costs of union avoidance across sectors will lead some firms to offer supracompetitive wages as pay premiums to workers in exchange for their avoiding unionization. Under the union threat model (Dickens 1986), the ease with which
industry can defeat a union drive has a negative relationship with its
wage differential. In other words, inter-industry wage variability should be low where the threat of unionization is low.
Empirical literature
Raff and Summers (1987) conduct a case study on Henry Ford’s introduction of the five dollar day
in 1914. Their conclusion is that the Ford experience supports
efficiency wage interpretations. Ford’s decision to increase wages so
dramatically (doubling for most workers) is most plausibly portrayed as
the consequence of efficiency wage considerations: the structure of the
pay scheme was consistent with that interpretation, there were
substantial queues for Ford jobs, and productivity and profits at Ford
increased significantly. Concerns such
as high turnover and poor worker morale appear to have played an
important role in the five-dollar decision. Ford’s new wage put him in
the position of rationing jobs, and increased wages did yield
substantial productivity benefits and profits. There is also evidence
that other firms emulated Ford’s policy to some extent, with wages in
the automobile industry 40% higher than in the rest of manufacturing
(Rae 1965, quoted in Raff and Summers). Given low monitoring costs and
skill levels on the Ford production line, such benefits (and the
decision itself) appear particularly significant.
Fehr, Kirchler, Weichbold and Gächter (1998) conduct labour
market experiments to separate the effects of competition and social
norms/customs/standards of fairness. They find that firms persistently
try to enforce lower wages in complete contract markets. By contrast,
wages are higher and more stable in gift exchange markets and bilateral
gift exchanges. It appears that in complete contract situations,
competitive equilibrium exerts a considerable drawing power, whilst in
the gift exchange market it does not.
Fehr et al. stress that reciprocal effort choices are truly a
one-shot phenomenon without reputation or other repeated-game effects.
"It is, therefore, tempting to interpret reciprocal effort behavior as a
preference phenomenon."(p. 344). Two types of preferences can account
for this behaviour: a) workers may feel obligated to share the
additional income from higher wages at least partly with firms; b)
workers may have reciprocal motives (reward good behaviour, punish bad).
"In the context of this interpretation, wage setting is inherently
associated with signaling intentions, and workers condition their effort
responses on the inferred intentions." (p. 344). Charness (1996),
quoted in Fehr et al., finds that when signaling is removed (wages are
set randomly or by the experimenter), workers exhibit a lower, but still
positive, wage-effort relation, suggesting some gain-sharing motive and
some reciprocity (where intentions can be signaled).
Fehr et al. state that "Our preferred interpretation of firms’
wage-setting behavior is that firms voluntarily paid job rents to elicit
non-minimum effort levels." Although excess supply of labour created
enormous competition among workers, firms did not take advantage. In the
long run, instead of being governed by competitive forces, firms’ wage
offers were solely governed by reciprocity considerations because the
payment of non-competitive wages generated higher profits. Thus, firms
and workers can be better off relying on stable reciprocal interactions.
That is to say, once the demands of firms and workers reach a balance
point, the relationship is stable and beneficial for both parties.
That reciprocal behavior generates efficiency gains has been
confirmed by several other papers e.g. Berg, Dickhaut, and McCabe (1995)
– even under conditions of double anonymity and where actors know even
the experimenter cannot observe individual behaviour, reciprocal
interactions, and efficiency gains are frequent. Fehr, Gächter, and
Kirchsteiger ([1996] 1997) show that reciprocal interactions generate
substantial efficiency gains. However, the efficiency-enhancing role of
reciprocity is generally associated with serious behavioural deviations
from competitive equilibrium predictions. To counter a possible
criticism of such theories, Fehr and Tougareva (1995) showed these
reciprocal exchanges (efficiency-enhancing) are independent of the
stakes involved (they compared outcomes with stakes worth a week's
income with stakes worth 3 months’ income and found no difference).
As one counter to over-enthusiasm for efficiency wage models,
Leonard (1987) finds little support for shirking or turnover efficiency
wage models, by testing their predictions for large and persistent wage
differentials. The shirking version assumes a trade-off between
self-supervision and external supervision, while the turnover version
assumes turnover is costly to the firm. Variation in the cost of
monitoring/shirking or turnover is hypothesized to account for wage
variations across firms for homogeneous workers. But Leonard finds that
wages for narrowly defined occupations within one sector of one state
are widely dispersed, suggesting other factors may be at work.
Efficiency wage models, then, do not explain everything about wages:
involuntary unemployment and persistent wage rigidity remain problematic
in many economies, and evidence such as Leonard's suggests that the
efficiency wage model alone cannot fully account for them.
Mathematical explanation
Paul Krugman explains how the efficiency wage theory comes into play in a real economy. The productivity of an individual worker, e(w), is a function of their wage w, and the total productivity is the sum of individual productivities. Accordingly, the sales of the firm become a function S(L, e(w)) of both employment L and individual productivity. The firm's profit is

P = S(L, e(w)) - wL

Then we assume that the higher the workers' wage, the higher their individual productivity: de/dw > 0. If employment L is chosen so that profit is maximised, then ∂P/∂L = 0, so marginal changes in employment leave profit unchanged. Under this optimised condition, we have

dP/dw = (∂S/∂e)(de/dw) - L

that is, a wage rise raises profit through the productivity term (∂S/∂e)(de/dw) and lowers it through the wage bill L. The productivity term is positive, because higher individual productivity means higher sales. As long as it is at least as large as L, dP/dw never goes negative, and therefore we have

dP/dw ≥ 0

This means that if the firm increases its wage, its profit stays constant or even grows: after the pay rise, the employee works harder and will not easily quit or move to another company, which increases the stability of the company and the motivation of the employees. Thus the efficiency wage theory motivates the owners of the firm to raise the wage to increase the profit of the firm, and high wages can also be seen as a reward mechanism.
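This optimisation underlying the Solow condition can be checked numerically. The effort function e(w) = ln w below is a hypothetical choice for illustration: the wage minimising labor cost per efficiency unit, w/e(w), then comes out at Euler's number, where the elasticity of effort with respect to the wage is exactly one.

```python
import math

# Numeric check of the Solow condition with the hypothetical effort
# function e(w) = ln(w), for wages w > 1.

def cost_per_efficiency_unit(w):
    return w / math.log(w)

# crude grid search for the cost-minimising wage
ws = [1.5 + 0.0001 * i for i in range(30000)]
w_opt = min(ws, key=cost_per_efficiency_unit)

# elasticity e'(w) * w / e(w), with e'(w) = 1/w for this effort function
elasticity = (1.0 / w_opt) * w_opt / math.log(w_opt)
print(round(w_opt, 3), round(elasticity, 3))   # ~2.718 and ~1.0
```

Any other increasing, concave effort function would give a different optimal wage, but the unit-elasticity property at the cost-minimising wage is the general content of the Solow condition.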