The fugitive slave laws were laws passed by the United States Congress in 1793 and 1850 to provide for the return of slaves who escaped from one state into another state or territory. The idea of the fugitive slave law derived from the Fugitive Slave Clause of the United States Constitution (Article IV, Section 2, Paragraph 3). Some held that forcing states to return fugitive slaves to their masters violated state sovereignty, and that the seizure of what slaveholders deemed private property should not be left up to the states. The Fugitive Slave Clause states that fugitive slaves "shall be delivered up on Claim of the Party to whom such Service or Labour may be due", which abridged states' rights because apprehending runaway slaves was treated as a form of retrieving private property. The Compromise of 1850 entailed a series of laws that allowed slavery in the new territories and forced officials in free states to give slave-owners a hearing without a jury.
Pre-colonial and colonial eras
Slavery
in the 13 colonies, 1770. Numbers show actual and estimated slave
population by colony. Colors show the slave population as a percentage
of each colony's total population. Boundaries shown are based on 1860
state boundaries, not those of 1770 colonies.
The New England Articles of Confederation
of 1643 contained a clause that provided for the return of fugitive
slaves. However, this only referred to the confederation of colonies of
Massachusetts, Plymouth, Connecticut, and New Haven, and was unrelated to the Articles of Confederation of the United States formed after the Declaration of Independence. There were African and Native American slaves in New England beginning in the 17th century. The Articles for the New England Confederation provided for the return of slaves in Section 8:
It is also agreed that if any
servant ran away from his master into any other of these confederated
Jurisdictions, that in such case, upon the certificate of one magistrate
in the Jurisdiction out of which the said servant fled, or upon other
due proof; the said servant shall be delivered, either to his master, or
any other that pursues and brings such certificate or proof.
As the colonies expanded with waves of settlers pushing westward,
slavery continued in the English territories and in former Dutch
territories like New Amsterdam, prompting further legislation of a similar nature. Serious attempts at formulating a uniform policy for the capture of
escaped slaves began under the Articles of Confederation of the United
States in 1785.
1785 attempt
There were two attempts at implementing a fugitive slave law in the Congress of the Confederation.
The Ordinance of 1784 was drafted by a Congressional committee headed by Thomas Jefferson,
and its provisions applied to all United States territory west of the
original 13 states. The original version was read to Congress on March
1, 1784, and it contained a clause stating:
That after the year 1800 of the Christian Era, there
shall be neither slavery nor involuntary servitude in any of the said
states, otherwise than in punishment of crimes, whereof the party shall
have been duly convicted to have been personally guilty.
Rufus King's failed resolution to re-implement the slavery prohibition in the Ordinance of 1784.
This was removed prior to final enactment of the ordinance on 23
April 1784. However, the issue did not die there, and on 6 April 1785 Rufus King
introduced a resolution to re-implement the slavery prohibition in the
1784 ordinance, containing a fugitive slave provision in the hope that
this would reduce opposition to the objective of the resolution. The
resolution contained the phrase:
Provided always, that upon the escape of any person into
any of the states described in the said resolve of Congress of the 23d
day of April, 1784, from whom labor or service is lawfully claimed in
any one of the thirteen original states, such fugitive may be lawfully
reclaimed and carried back to the person claiming his labor or service
as aforesaid, this resolve notwithstanding.
While the original 1784 ordinance applied to all U.S. territory that
was not a part of any existing state (and thus, to all future states),
the 1787 ordinance applied only to the Northwest Territory.
Northwest Ordinance of 1787
Congress made a further attempt to address the concerns of slave owners in 1787 by passing the Northwest Ordinance of 1787. The law appeared to outlaw slavery in the territory, which would have reduced the votes of slave states in Congress. Southern representatives were concerned with economic competition from potential slaveholders in the new territory and its effects on the prices of staple crops such as tobacco, but they correctly predicted that slavery would be permitted south of the Ohio River under the Southwest Ordinance of 1790, and therefore did not view the prohibition as a threat. In practice, the law did not ban slavery, and slavery in the territory continued almost until the start of the Civil War.
King's phrasing from the 1785 attempt was incorporated in the Northwest Ordinance of 1787 when it was enacted on 13 July 1787. Article 6 has the provision for fugitive slaves:
Art. 6. There shall be neither slavery nor involuntary
servitude in the said territory, otherwise than in the punishment of
crimes whereof the party shall have been duly convicted: Provided, always,
That any person escaping into the same, from whom labor or service is
lawfully claimed in any one of the original States, such fugitive may be
lawfully reclaimed and conveyed to the person claiming his or her labor
or service as aforesaid.
In 1793, Congress passed "An Act respecting fugitives from justice, and persons escaping from the service of their masters", more commonly known as the Fugitive Slave Act, to fulfill the Article IV requirement to return escaped slaves.
Section 3 mandates the return of fugitives:
SEC. 3. ... That when a person held to labor in any of
the United States, or of the Territories on the Northwest or South of
the river Ohio ... shall escape into any other part of the said States
or Territory, the person to whom such labor or service may be due ... is
hereby empowered to seize or arrest such fugitive from labor ... and
upon proof ... before any Judge ... it shall be the duty of such Judge
... [to remove] the said fugitive from labor to the State or Territory
from which he or she fled.
Section 4 makes assisting runaways and fugitives a crime and outlines the punishment for those who assisted runaway slaves:
SEC. 4. ... That any person who shall knowingly and
willingly obstruct or hinder such claimant ... shall ... forfeit and pay
the sum of five hundred dollars.
High demand for slaves in the Deep South and the hunt for fugitives
caused free blacks to be at risk of being kidnapped and sold into
slavery, despite having "free" papers. Many people who were legally free
and had never been slaves were captured and brought south to be sold
into slavery. The historian Carol Wilson documented 300 such cases in Freedom at Risk (1994) and estimated there were likely thousands of others.
In the early 19th century, personal liberty laws were passed to hamper officials in the execution of the law, though mostly only after the abolition of the slave trade, as there had been very little support for abolition before then. Indiana in 1824 and Connecticut in 1828 provided jury trials for fugitives who appealed an original decision against them. In 1840, New York and Vermont
extended the right of trial by jury to fugitives and provided them with
attorneys. As early as the first decade of the 19th century, individual
dissatisfaction with the law of 1793 had taken the form of systematic
assistance rendered to African Americans escaping from the South to Canada or New England: the so-called Underground Railroad.
The decision of the Supreme Court in the case of Prigg v. Pennsylvania
in 1842 (16 Peters 539)—that state authorities could not be forced to
act in fugitive slave cases, but that national authorities must carry
out the national law—was followed by legislation in Massachusetts (1843), Vermont (1843), Pennsylvania (1847) and Rhode Island (1848), forbidding state officials from aiding in enforcing the law and refusing the use of state jails for fugitive slaves.
Massachusetts
had abolished slavery in 1783, but the Fugitive Slave Law of 1850
required government officials to assist slavecatchers in capturing
fugitives within the state.
The demand from the South for more effective Federal legislation led to the second fugitive slave law, drafted by Senator James Murray Mason of Virginia, grandson of George Mason, and enacted on September 18, 1850, as a part of the Compromise of 1850. Special commissioners were to have concurrent jurisdiction
with the U.S. circuit and district courts and the inferior courts of
territories in enforcing the law; fugitives could not testify in their
own behalf; no trial by jury was provided.
Penalties were imposed upon marshals who refused to enforce the
law or from whom a fugitive should escape, and upon individuals who
aided black people to escape; the marshal might raise a posse comitatus; a fee of $10 (equivalent to $378 in 2024) was paid to the commissioner when his decision favored the claimant, only $5 (equivalent to $189 in 2024) when it favored the fugitive. The supposed justification for the disparity in compensation was that,
if the decision were in favor of the claimant, additional effort on the
part of the commissioner would be required in order to fill out the
paperwork actually remanding the slave back to the South. Both the fact of the escape and the identity of the fugitive were determined on purely ex parte testimony.
The severity of this measure led to gross abuses and defeated its purpose; the number of abolitionists increased, the operations of the Underground Railroad became more efficient, and new personal liberty laws were enacted in Vermont (1850), Connecticut (1854), Rhode Island (1854), Massachusetts (1855), Michigan (1855), Maine (1855 and 1857), Kansas (1858) and Wisconsin (1858). The personal liberty laws forbade justices and judges to take cognizance of claims, extended habeas corpus
and the privilege of jury trial to fugitives, and punished false
testimony severely. In 1854, the Supreme Court of Wisconsin went so far
as to declare the Fugitive Slave Act unconstitutional.
These state laws were one of the grievances that South Carolina
would later use to justify its secession from the Union. Attempts to
carry into effect the law of 1850 aroused much bitterness. The arrests of Thomas Sims and of Shadrach Minkins in Boston in 1851; of Jerry M. Henry, in Syracuse, New York, in the same year; of Anthony Burns in 1854, in Boston; and of the Garner family in 1856, in Cincinnati, with other cases arising under the Fugitive Slave Law of 1850, probably had as much to do with bringing on the Civil War as did the controversy over slavery in the Territories.
A Ride for Liberty—The Fugitive Slaves (c. 1862) by Eastman Johnson, Brooklyn Museum.
With the beginning of the Civil War, the legal status of many slaves changed once their masters were in arms against the Union. Major General Benjamin Franklin Butler, in May 1861, declared that Confederate slaves used for military purposes such as building fortifications were contraband of war. The Confiscation Act of 1861
was passed in August 1861, and discharged from service or labor any
slave employed in aiding or promoting any insurrection against the
government of the United States.
By the congressional Act Prohibiting the Return of Slaves of March 13, 1862, any slave of a disloyal master who was in territory occupied by Northern troops was declared ipso facto
free. But for some time the Fugitive Slave Law was considered still to
hold in the case of fugitives from masters in the border states who were
loyal to the Union government, and it was not until June 28, 1864, that
the Act of 1850 was fully repealed.
In psychology, psychoanalysis, and psychotherapy, projection is the mental process in which an individual attributes their own internal thoughts, beliefs, emotions, experiences, and personality traits to another person or group.
[T]he process by which one attributes one’s own individual positive or negative characteristics, affects, and impulses to another person or group... often a defense mechanism
in which unpleasant or unacceptable impulses, stressors, ideas,
affects, or responsibilities are attributed to others. For example, the
defense mechanism of projection enables a person conflicted over
expressing anger to change “I hate them” to “They hate me.” Such
defensive patterns are often used to justify prejudice or evade
responsibility.
History
A prominent precursor in the formulation of the projection principle was Giambattista Vico. In 1841, Ludwig Feuerbach was the first Enlightenment thinker to employ this concept as the basis for a systematic critique of religion.
The Babylonian Talmud
(500 AD) notes the human tendency toward projection and warns against
it: "Do not taunt your neighbour with the blemish you yourself have." In the parable of the Mote and the Beam in the New Testament, Jesus warned against projection:
Why
do you look at the speck of sawdust in your brother's eye and pay no
attention to the plank in your own eye? How can you say to your brother,
'Let me take the speck out of your eye,' when all the time there is a
plank in your own eye? You hypocrite, first take the plank out of your
own eye, and then you will see clearly to remove the speck from your
brother's eye.
Freud
Projection (German: Projektion) was first conceptualised by Sigmund Freud in his letters to Wilhelm Fliess, and further refined by Karl Abraham and Anna Freud.
Freud argued that in projection, thoughts, motivations, desires, and
feelings that cannot be accepted as one's own are dealt with by being
placed in the outside world and attributed to someone else. Freud would later argue that projection did not take place arbitrarily, but rather seized on and exaggerated an element that already existed on a small scale in the other person.
According to Freud, projective identification occurs when the other person introjects, or unconsciously adopts, that which is projected onto them. In projective identification, the self maintains a connection with what is projected, in contrast to the total repudiation of projection proper.
Further psychoanalytic development
Freud conceptualised projection within his broader theory of psychoanalysis and the id, ego, and superego. Later psychoanalysts have interpreted and developed Freud's theory of projection in varied ways.
Otto Fenichel argued that projection involves that which the ego refuses to accept, which is thus split off and placed in another.
Melanie Klein saw the projection of good parts of the self as leading potentially to over-idealisation of the object. Equally, it may be one's conscience that is projected, in an attempt to
escape its control: a more benign version of this allows one to come to
terms with outside authority.
Carl Jung considered that the unacceptable parts of the personality represented by the Shadow archetype were particularly likely to give rise to projection, both small-scale and on a national/international basis. Marie-Louise von Franz extended this view of projection, stating that "wherever known reality stops, where we touch the unknown, there we project an archetypal image".
Erik Erikson argued that projection tends to come to the fore in normal people at times of personal or political crisis.
Historical clinical use
Drawing on Gordon Allport's
idea of the expression of self onto activities and objects, projective
techniques have been devised to aid personality assessment, including
the Rorschach ink-blots and the Thematic Apperception Test (TAT).
Theoretical views
Psychoanalytic theory
According to some psychoanalysts, projection forms the basis of empathy by the projection of personal experiences to understand someone else's subjective world. In its malignant forms, projection is a defense mechanism in which the ego defends itself against disowned and highly negative parts of the self by denying their existence in themselves and attributing them to others, breeding misunderstanding and causing interpersonal damage. Projection incorporates blame shifting and can manifest as shame dumping. It has also been described as an early phase of introjection.
In psychoanalytical and psychodynamic terms, projection may help a fragile ego reduce anxiety, but at the cost of a certain dissociation, as in dissociative identity disorder. In extreme cases, an individual's personality may end up becoming critically depleted. In such cases, therapy may be required which would include the slow
rebuilding of the personality through the "taking back" of such
projections.
Psychotherapy and counselling
Counter-projection
Jung
wrote, "All projections provoke counter-projection when the object is
unconscious of the quality projected upon it by the subject." Jung argued that what is unconscious in the recipient will be projected
back onto the projector, precipitating a form of mutual acting out. In a different usage, Harry Stack Sullivan saw counter-projection in the therapeutic context as a way of warding off the compulsive re-enactment of a psychological trauma, by emphasizing the difference between the current situation and the projected obsession with the perceived perpetrator of the original trauma.
Psychoanalytic and psychodynamic techniques
The method of managed projection is a projective technique. The basic principle of this method is that a subject is presented with their own verbal portrait under the name of another person, as well as with a portrait of a fictional opposite (Bodalev 2000, "General psychodiagnostics"). The technique may be suitable for application in psychological counseling and might provide valuable information about the form and nature of the subject's self-esteem.
Psychobiography
Psychological projection is one of the medical explanations of bewitchment that has been offered to explain the behavior of the afflicted children at Salem in 1692. The historian John Demos wrote in 1970 that the symptoms of bewitchment displayed by the afflicted girls could have been due to the girls undergoing psychological projection of repressed aggression.
Types
In victim blaming, the victim of someone else's actions or of bad luck may be criticized, the theory being that the victim may be at fault for having attracted the other person's hostility. According to some theorists, in
such cases, the psyche projects the experiences of weakness or
vulnerability with the aim of ridding itself of the feelings and,
through its disdain for them or the act of blaming, their conflict with
the ego.
Thoughts of infidelity to a partner may also be unconsciously projected in self-defence on to the partner in question, so that the guilt attached to the thoughts can be repudiated or turned to blame instead, in a process linked to denial. For example, a person who is having a sexual affair may fear that their
spouse is planning an affair or may accuse the innocent spouse of adultery.
A bully may project their own feelings of vulnerability
onto the target(s) of the bullying activity. Despite the fact that a
bully's typically denigrating activities are aimed at the bully's
targets, the true source of such negativity is ultimately almost always
found in the bully's own sense of personal insecurity or vulnerability. Such aggressive projections of displaced negative emotions can occur anywhere from the micro-level of interpersonal relationships, all the way up to the macro-level of international politics, or even international armed conflict.
Projection of a severe conscience is another form of defense, one which may be linked to the making of false accusations, personal or political. In a more positive light, a patient may sometimes project their feelings of hope onto the therapist. People in love "reading" each other's mind involves a projection of the self into the other.
Criticism
Research on social projection supports the existence of a false-consensus effect
whereby humans have a broad tendency to believe that others are similar
to themselves, and thus "project" their personal traits onto others. This applies to both good and bad traits; it is not a defense mechanism for denying the existence of the trait within the self.
A study of the empirical evidence for a range of defense
mechanisms by Baumeister, Dale, and Sommer (1998) concluded, "The view
that people defensively project specific bad traits of their own onto
others as a means of denying that they have them is not well supported." However, Newman, Duff, and Baumeister (1997) proposed a new model of defensive projection in which the repressor's efforts to suppress thoughts
of their undesirable traits make those trait categories highly
accessible—so that they are then used all the more often when forming
impressions of others. The projection is then only a byproduct of the
real defensive mechanism.
In labor economics, an efficiency wage is a wage paid in excess of the market-clearing wage to increase the labor productivity of workers. Specifically, it points to the incentive for managers to pay their
employees more than the market-clearing wage to increase their productivity or to reduce the costs associated with employee turnover.
Theories of efficiency wages explain the existence of involuntary unemployment in economies outside of recessions, providing for a natural rate of unemployment above zero. Because workers are paid more than the equilibrium wage, they may experience periods of unemployment in which they compete for a limited supply of well-paying jobs.
Overview of theory
There are several reasons why managers may pay efficiency wages:
Avoiding shirking: If it is difficult to measure the quantity or quality of a worker's effort, and systems of piece rates or commissions are impossible, there may be an incentive for the worker to "shirk" (do less work than agreed). The manager thus may pay an efficiency wage in
order to create or increase the cost of job loss, which gives a sting to
the threat of firing. This threat can be used to prevent shirking.
Minimizing turnover: By paying above-market wages, the
worker's motivation to leave the job and look for a job elsewhere will
be reduced. This strategy also reduces the expense of training
replacement workers.
Selection: If job performance depends on workers' ability and
workers differ from each other in those terms, firms with higher wages
will attract more able job-seekers, and this may make it profitable to
offer wages that exceed the market clearing level.
Sociological theories: Efficiency wages may result from traditions. Akerlof's theory (in very simple terms) involves higher wages encouraging high morale, which raises productivity.
Nutritional theories: In developing countries,
efficiency wages may allow workers to eat well enough to avoid illness
and to be able to work harder and even more productively.
The model of efficiency wages, largely based on shirking, developed by Carl Shapiro and Joseph E. Stiglitz has been particularly influential.
In
the Shapiro-Stiglitz model, workers are paid at a level where they do
not shirk. This prevents wages from dropping to market-clearing levels.
Full employment cannot be achieved because workers would shirk if they
were not threatened with the possibility of unemployment. The curve for
the no-shirking condition (labeled NSC) goes to infinity at full
employment.
Under this theory, employers voluntarily pay employees above the market equilibrium level to increase worker productivity. The shirking model begins with the fact that complete contracts rarely (or never) exist in the real world. This implies that both parties to the contract have some discretion, but frequently, due to monitoring problems, it is the employee's side of the bargain that is subject to the most discretion.
Methods such as piece rates are often impracticable because monitoring
is too costly or inaccurate; or they may be based on measures too
imperfectly verifiable by workers, creating a moral hazard
problem on the employer's side. Thus, paying a wage in excess of
market-clearing may provide employees with cost-effective incentives to
work rather than shirk.
In the Shapiro and Stiglitz model, workers either work or shirk,
and if they shirk they have a certain probability of being caught, with
the penalty of being fired. Equilibrium then entails unemployment, because to create an opportunity cost
to shirking, firms try to raise their wages above the market average
(so that sacked workers face a probabilistic loss). But since all firms
do this, the market wage itself is pushed up, and the result is that
wages are raised above market-clearing, creating involuntary unemployment. This creates a low, or no income alternative, which makes job loss
costly and serves as a worker discipline device. Unemployed workers
cannot bid for jobs by offering to work at lower wages since, if hired,
it would be in the worker's interest to shirk on the job, and he has no
credible way of promising not to do so. Shapiro and Stiglitz point out
that their assumption that workers are identical (e.g. there is no
stigma to having been fired) is a strong one – in practice, reputation can work as an additional
disciplining device. Conversely, higher wages and unemployment increase
the cost of finding a new job after being laid off. So in the shirking
model, higher wages are also a monetary incentive.
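A minimal numerical sketch may make the no-shirking condition concrete. The snippet below uses the standard textbook form of the condition, in which the critical wage equals the unemployment benefit plus the effort cost plus a premium depending on the detection, separation, and interest rates; all parameter names and values are hypothetical choices for illustration, not figures from Shapiro and Stiglitz.

```python
# Sketch of the Shapiro-Stiglitz no-shirking condition (NSC).
# In steady state the job-finding rate a satisfies a*u = b*(1-u),
# so a + b = b/u and the critical wage is
#     w(u) = z + e + (e/q) * (b/u + r),
# which rises without bound as u -> 0: the NSC curve "goes to
# infinity at full employment". All values below are hypothetical.

e = 1.0    # disutility of effort
q = 0.5    # rate at which shirkers are caught
b = 0.1    # exogenous job-separation rate
r = 0.05   # interest (discount) rate
z = 0.4    # unemployment benefit / value of leisure

def nsc_wage(u):
    """Minimum wage at which working beats shirking, given unemployment rate u."""
    return z + e + (e / q) * (b / u + r)

for u in [0.20, 0.10, 0.05, 0.01, 0.001]:
    print(f"u = {u:6.3f} -> no-shirking wage = {nsc_wage(u):10.2f}")
# The required wage explodes as unemployment vanishes, so full employment
# cannot discipline workers and equilibrium entails u > 0.
```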
The Shapiro-Stiglitz model holds that the threat of unemployment disciplines workers: the stronger the danger of job loss, the more willing workers are to work and behave correctly. This view illustrates the endogenous decision-making of workers in the labor market. Among the many factors that influence workers' behavior and supply, the threat of unemployment is an essential one: when workers are at risk of losing their jobs, they tend to increase their productivity and efficiency by working harder, thus improving their chances of staying employed. This endogenous response of behavior and supply can somewhat alleviate the unemployment problem in the labor market.
The shirking model does not predict that the bulk of the
unemployed at any one time are those fired for shirking, because if the
threat associated with being fired is effective, little or no shirking
and sacking will occur. Instead, the unemployed will consist of a
rotating pool of individuals who have quit for personal reasons, are new
entrants to the labour market, or have been laid off for other reasons.
Pareto optimality,
with costly monitoring, will entail some unemployment since
unemployment plays a socially valuable role in creating work incentives.
But the equilibrium unemployment rate will not be Pareto optimal since
firms do not consider the social cost of the unemployment they helped to
create.
One criticism of the efficiency wage hypothesis is that more sophisticated employment contracts can, under certain conditions, reduce or eliminate involuntary unemployment. Seniority wages can solve the incentive problem: initially, workers are paid less than their marginal productivity, and as they work effectively over time within the firm, earnings rise until they exceed marginal productivity. The upward tilt in the age-earnings profile provides the incentive to avoid shirking, while the present value of wages can fall to the market-clearing level, eliminating involuntary unemployment. The slope of earnings profiles is significantly affected by incentives.
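The present-value argument can be sketched numerically. The toy calculation below (all figures hypothetical, not drawn from the literature cited here) builds a rising seniority-wage profile whose discounted value equals that of a flat market-clearing wage path, so the worker starts below marginal product and ends above it:

```python
# Sketch: a seniority (tilted) wage profile with the same present value
# as a flat market-clearing wage path. All numbers are hypothetical.

r = 0.05          # discount rate
T = 10            # years with the firm
w_market = 100.0  # flat market-clearing wage (taken as marginal product)

def present_value(wages, r):
    """Discounted value of a wage stream paid at the end of each year."""
    return sum(w / (1 + r) ** (t + 1) for t, w in enumerate(wages))

pv_flat = present_value([w_market] * T, r)

# Rising profile: pick a slope, then shift the whole path so its present
# value matches the flat path; this is what lets the present value of
# wages fall to the market-clearing level while keeping the tilt.
slope = 8.0
raw = [slope * t for t in range(T)]
shift = (pv_flat - present_value(raw, r)) / present_value([1.0] * T, r)
tilted = [w + shift for w in raw]

print(f"PV flat:   {pv_flat:.2f}")
print(f"PV tilted: {present_value(tilted, r):.2f}")  # equal by construction
print([round(w, 1) for w in tilted])  # starts below 100, ends above 100
```

Because pay late in the career exceeds marginal product, the worker has something to lose from being fired for shirking, even though the contract as a whole costs the firm no more than the market-clearing path.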
However, a significant criticism is that moral hazard is shifted onto employers, since they are responsible for monitoring the worker's efforts. Employers would prefer workers to supply as much effort as possible while receiving no more than their reservation wages, so obvious incentives would exist for firms to declare shirking when it has not taken place. In the Lazear model, firms have apparent incentives to fire older workers (paid above marginal product) and hire new, cheaper workers, creating a credibility problem. The seriousness of this employer moral hazard depends on how much effort can be monitored by outside auditors, so that firms cannot cheat. However, reputation effects (e.g. Lazear 1981) may be able to do the same job.
Labor turnover
"Labor
turnover" refers to rapid changes in the workforce from one position to
another. This is determined by the ratio of the size of the labor and
the number of employees employed. With regards to the efficiency wage hypothesis, firms also offer wages
in excess of market-clearing, due to the high cost of replacing workers
(search, recruitment, training costs). If all firms are identical, one possible equilibrium involves all firms
paying a common wage rate above the market-clearing level, with
involuntary unemployment serving to diminish turnover. These models can
easily be adapted to explain dual labor markets:
if low-skill, labor-intensive firms have lower turnover costs (as seems
likely), there may be a split between a low-wage, low-effort,
high-turnover sector and a high-wage, high-effort, low-turnover sector.
Again, more sophisticated employment contracts may solve the problem.
Selection
Similar to the shirking model, the selection model holds that information asymmetry is the main reason the market cannot fully eliminate involuntary unemployment. However, unlike the shirking model, which focuses on employee shirking, the selection model emphasizes the employer's information disadvantage regarding labor quality. Because firms cannot accurately observe the real quality of employees, they know only that high wages attract high-quality employees and that wage cuts drive the high-quality employees away first. Wages therefore do not keep falling in the face of involuntary unemployment, since firms seek to maintain the quality of their workforce.
In selection wage theories it is presupposed that performance on
the job depends on "ability", and that workers are heterogeneous
concerning ability. The selection effect of higher wages may come about
through self-selection or because firms with a larger pool of applicants
can increase their hiring standards and obtain a more productive
workforce. Workers with higher abilities are more likely to earn more
wages, and companies are willing to pay higher wages to hire
high-quality people as employees.
Self-selection (often referred to as adverse selection) comes about if the workers' ability and reservation wages are positively correlated. The basic assumption of efficiency wage theory is that the efficiency of workers increases with the wage. Companies therefore face a trade-off between hiring productive workers at higher salaries and less effective workers at lower wages. Minimizing the cost per effective unit of labor, w/e(w), yields the so-called Solow condition: at the cost-minimizing wage, the elasticity of effort with respect to the wage equals one. The condition takes the market environment as given: firms cannot control market wages, and individual workers are price takers rather than price setters. If there are two
kinds of firms (low and high wage), then we effectively have two sets of
lotteries (since firms cannot screen), the difference being that
high-ability workers do not enter the low-wage lotteries as their
reservation wage is too high. Thus low-wage firms attract only
low-ability lottery entrants, while high-wage firms attract workers of
all abilities (i.e. on average, they will select average workers).
Therefore high-wage firms are paying an efficiency wage – they pay more
and, on average, get more. However, the assumption that firms cannot measure effort and pay piece
rates after workers are hired or to fire workers whose output is too low
is quite strong. Firms may also be able to design self-selection or
screening devices that induce workers to reveal their true
characteristics.
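As an illustration of the Solow condition mentioned above, the sketch below assumes a hypothetical effort function e(w) = (w − w0)^β, finds the wage that minimizes the cost per efficiency unit w/e(w), and verifies that the wage elasticity of effort equals one there:

```python
# Sketch of the Solow condition with an assumed effort function.
# Analytically, minimizing w / e(w) for e(w) = (w - w0)**beta gives
# w* = w0 / (1 - beta); at w* the wage elasticity of effort is 1.

w0, beta = 50.0, 0.5  # hypothetical effort-function parameters

def effort(w):
    return max(w - w0, 0.0) ** beta

def cost_per_efficiency_unit(w):
    return w / effort(w)

# Brute-force search for the cost-minimizing wage on a fine grid above w0.
grid = [w0 + 0.01 * k for k in range(1, 20000)]
w_star = min(grid, key=cost_per_efficiency_unit)

# Numerical wage elasticity of effort at w_star.
h = 1e-4
deriv = (effort(w_star + h) - effort(w_star - h)) / (2 * h)
elasticity = deriv * w_star / effort(w_star)

print(f"cost-minimizing wage: {w_star:.2f}")      # analytic answer: 100.00
print(f"elasticity at w*:     {elasticity:.3f}")  # Solow condition: 1.000
```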
High wages can reduce personnel turnover, encourage employees to work harder, discourage collective resignations, and attract more high-quality employees. If firms can assess the productivity of applicants, they will try to
select the best among the applicants. A higher wage offer will attract
more applicants, particularly more highly qualified ones. This permits a
firm to raise its hiring standard, thereby enhancing its productivity. Wage compression makes it profitable for firms to screen applicants under such circumstances, and selection wages may be necessary.
Sociological models
Fairness, norms, and reciprocity
Standard economic models ("neoclassical economics") assume that people pursue only their self-interest and do not care about "social" goals ("homo economicus").
Neoclassical economics is divided into three methodological commitments, namely methodological individualism, methodological instrumentalism, and methodological equilibration.
methodological equilibration. Some attention has been paid to the idea that people may be altruistic, but it is only with the addition of reciprocity and norms of fairness that the model becomes accurate. Thus of crucial importance is the idea of exchange: a person who is
altruistic towards another expects the other to fulfil some fairness
norm, be it reciprocating in kind, in some different but – according to
some shared standard – equivalent way, or simply by being grateful. If
the expected reciprocation is not forthcoming, the altruism is unlikely to be repeated or continued. In addition, similar norms of
fairness will typically lead people into negative forms of reciprocity,
too – in retaliation for acts perceived as vindictive. This can bind
actors into vicious loops where vindictive acts are met with further
vindictive acts.
In practice, despite the neat logic of standard neoclassical
models, these sociological models do impinge upon many economic
relations, though in different ways and to different degrees. For
example, suppose an employee has been exceptionally loyal. In that case,
a manager may feel some obligation to treat that employee well, even
when it is not in his (narrowly defined, economic) self-interest. It
would appear that although broader, longer-term economic benefits may
result (e.g. through reputation, or perhaps through simplified
decision-making according to fairness norms), a significant factor must
be that there are noneconomic benefits the manager receives, such as not
having a guilty conscience (loss of self-esteem). For real-world,
socialised, normal human beings (as opposed to abstracted factors of
production), this is likely to be the case quite often. As a
quantitative estimate of the importance of this, the total value of
voluntary labor in the US – $74 billion annually – will suffice. Examples of the negative aspect of fairness include consumers
"boycotting" firms they disapprove of by not buying products they
otherwise would (and therefore settling for second-best); and employees
sabotaging firms they feel hard done by.
Rabin (1993) offers three stylised facts as a starting point on
how norms affect behaviour: (a) people are prepared to sacrifice their
material well-being to help those who are being kind; (b) they are also
prepared to do this to punish those being unkind; (c) both (a) and (b)
have a greater effect on behaviour as the material cost of sacrificing
(in relative rather than absolute terms) becomes smaller. Rabin supports
his Fact A by Dawes and Thaler's (1988) survey of the experimental
literature, which concludes that for most one-shot public good
decisions in which the individually optimal contribution is close to
0%, the contribution rate ranges from 40 to 60% of the socially optimal
level. Fact B is demonstrated by the "ultimatum game" (e.g. Thaler
1988), where an amount of money is split between two people, one
proposing a division, the other accepting or rejecting (where rejection
means both get nothing). Rationally, the proposer should offer no more
than a penny, and the decider accept any offer of at least a penny.
Still, in practice, even in one-shot settings, proposers make fair
proposals, and deciders are prepared to punish unfair offers by
rejecting them. Fact C is tested and partially confirmed by Gerald
Leventhal and David Anderson (1970), but is also reasonably intuitive.
In the ultimatum game, a 90% split (regarded as unfair) is (intuitively)
far more likely to be punished if the amount to be split is $1 than $1
million.
A crucial point (as noted in Akerlof 1982) is that notions of
fairness depend on the status quo and other reference points.
Experiments (Fehr and Schmidt 2000) and surveys (Kahneman, Knetsch, and
Thaler 1986) indicate that people have clear notions of fairness based
on particular reference points (disagreements can arise in the choice of
reference point). Thus, for example, firms who raise prices or lower
wages to take advantage of increased demand or increased labour supply
are frequently perceived as acting unfairly, where the same changes are
deemed acceptable when the firm makes them due to increased costs
(Kahneman et al.). In other words, in people's intuitive "naïve
accounting" (Rabin 1993), a key role is played by the idea of
entitlements embodied in reference points (although as Dufwenberg and
Kirchsteiger 2000 point out, there may be informational problems, e.g.
for workers in determining what the firm's profit is, given tax
avoidance and stock-price considerations). In particular, it is
perceived as unfair for actors to increase their share at the expense of
others. However, over time such a change may become entrenched and form
a new reference point which (typically) is no longer in itself deemed
unfair.
Sociological efficiency wage models
Solow
(1981) argued that wage rigidity may be partly due to social
conventions and principles of appropriate behaviour, which are not
entirely individualistic. Akerlof (1982) provided the first explicitly sociological model leading
to the efficiency wage hypothesis. Using a variety of evidence from
sociological studies, Akerlof argues that worker effort depends on the
work norms of the relevant reference group. In Akerlof's partial gift exchange model,
the firm can raise group work norms and average effort by paying
workers a gift of wages over the minimum required in return for effort
above the minimum required. The sociological model can explain phenomena
inexplicable on neoclassical terms, such as why firms do not fire
workers who turn out to be less productive, why piece rates are so
little used even where quite feasible, and why firms set work standards
exceeded by most workers. A possible criticism is that workers do not
necessarily view high wages as gifts, but as merely fair (particularly
since typically 80% or more of workers consider themselves in the top
quarter of productivity), in which case they will not reciprocate with
high effort.
Akerlof and Yellen
(1990), responding to these criticisms and building on work from
psychology, sociology, and personnel management, introduce "the fair
wage-effort hypothesis", which states that workers form a notion of the fair wage, and if the actual wage is lower, withdraw effort in proportion, so that, depending on the wage-effort elasticity
and the costs to the firm of shirking, the fair wage may form a key
part of the wage bargain. This explains persistent evidence of
consistent wage differentials across industries (e.g. Slichter 1950;
Dickens and Katz 1986; Krueger and Summers 1988): if firms must pay high
wages to some groups of workers – perhaps because they are in short
supply or for other efficiency-wage reasons such as shirking – then
demands for fairness will lead to a compression of the pay scale, and
wages for different groups within the firm will be higher than in other
industries or firms.
The union threat model is one of several explanations for industry wage differentials. This Keynesian economics
model looks at the role of unions in wage determination. The degree to which union wages exceed non-union wages is known as the union wage premium. Some firms seek to prevent unionization in the first instance. Varying costs of union avoidance across sectors will lead some firms to offer supracompetitive wages as pay premiums to workers in exchange for their avoiding unionization. Under the union threat model (Dickens 1986), the ease with which
industry can defeat a union drive has a negative relationship with its
wage differential. In other words, inter-industry wage variability should be low where the threat of unionization is low.
Empirical literature
Raff and Summers (1987) conduct a case study on Henry Ford’s introduction of the five dollar day
in 1914. Their conclusion is that the Ford experience supports
efficiency wage interpretations. Ford’s decision to increase wages so
dramatically (doubling for most workers) is most plausibly portrayed as
the consequence of efficiency wage considerations, with the structure of the decision consistent with that interpretation, evidence of substantial queues for Ford jobs, and significant increases in productivity and profits at Ford. Concerns such
as high turnover and poor worker morale appear to have played an
important role in the five-dollar decision. Ford’s new wage put him in
the position of rationing jobs, and increased wages did yield
substantial productivity benefits and profits. There is also evidence
that other firms emulated Ford’s policy to some extent, with wages in
the automobile industry 40% higher than in the rest of manufacturing
(Rae 1965, quoted in Raff and Summers). Given low monitoring costs and
skill levels on the Ford production line, such benefits (and the
decision itself) appear particularly significant.
Fehr, Kirchler, Weichbold and Gächter (1998) conduct labour
market experiments to separate the effects of competition and social
norms/customs/standards of fairness. They find that firms persistently
try to enforce lower wages in complete contract markets. By contrast,
wages are higher and more stable in gift exchange markets and bilateral
gift exchanges. It appears that in complete contract situations,
competitive equilibrium exerts a considerable drawing power, whilst in
the gift exchange market it does not.
Fehr et al. stress that reciprocal effort choices are truly a
one-shot phenomenon without reputation or other repeated-game effects.
"It is, therefore, tempting to interpret reciprocal effort behavior as a
preference phenomenon." (p. 344). Two types of preferences can account
for this behaviour: a) workers may feel obligated to share the
additional income from higher wages at least partly with firms; b)
workers may have reciprocal motives (reward good behaviour, punish bad).
"In the context of this interpretation, wage setting is inherently
associated with signaling intentions, and workers condition their effort
responses on the inferred intentions." (p. 344). Charness (1996),
quoted in Fehr et al., finds that when signaling is removed (wages are
set randomly or by the experimenter), workers exhibit a lower, but still
positive, wage-effort relation, suggesting some gain-sharing motive and
some reciprocity (where intentions can be signaled).
Fehr et al. state that "Our preferred interpretation of firms’
wage-setting behavior is that firms voluntarily paid job rents to elicit
non-minimum effort levels." Although excess supply of labour created
enormous competition among workers, firms did not take advantage. In the
long run, instead of being governed by competitive forces, firms’ wage
offers were solely governed by reciprocity considerations because the
payment of non-competitive wages generated higher profits. Thus, firms
and workers can be better off relying on stable reciprocal interactions.
That is to say, when the demands of firms and workers reach a balance point, the relationship is stable and beneficial for both parties.
That reciprocal behavior generates efficiency gains has been
confirmed by several other papers e.g. Berg, Dickhaut, and McCabe (1995)
– even under conditions of double anonymity and where actors know even
the experimenter cannot observe individual behaviour, reciprocal interactions and efficiency gains are frequent. Fehr, Gächter, and
Kirchsteiger ([1996] 1997) show that reciprocal interactions generate
substantial efficiency gains. However, the efficiency-enhancing role of
reciprocity is generally associated with serious behavioural deviations
from competitive equilibrium predictions. To counter a possible
criticism of such theories, Fehr and Tougareva (1995) showed these
reciprocal exchanges (efficiency-enhancing) are independent of the
stakes involved (they compared outcomes with stakes worth a week's
income with stakes worth 3 months’ income and found no difference).
As one counter to over-enthusiasm for efficiency wage models,
Leonard (1987) finds little support for shirking or turnover efficiency
wage models, by testing their predictions for large and persistent wage
differentials. The shirking version assumes a trade-off between
self-supervision and external supervision, while the turnover version
assumes turnover is costly to the firm. Variation in the cost of
monitoring/shirking or turnover is hypothesized to account for wage
variations across firms for homogeneous workers. But Leonard finds that
wages for narrowly defined occupations within one sector of one state
are widely dispersed, suggesting other factors may be at work.
Efficiency wage models do not explain everything about wages: involuntary unemployment and persistent wage rigidity remain problematic in many economies, and aspects of both are left unexplained by the efficiency wage model.
Mathematical explanation
Paul Krugman explains how the efficiency wage theory comes into play in a real economy. The productivity of individual workers, $\epsilon(w)$, is a function of their wage $w$, and the total productivity is the sum of the individual productivities. Accordingly, the sales of the firm to which the workers belong become a function of both employment $n$ and individual productivity; with $V$ denoting the sales value per unit of effective labor, the firm's profit is

$P = nV\epsilon(w) - nw.$

Then we assume that the higher the wage of the workers becomes, the higher the individual productivity: $d\epsilon/dw > 0$. If employment $n$ is chosen so that profit is maximised, it can be held constant as the wage varies. Under this optimised condition, we have

$\frac{dP}{dw} = nV\frac{d\epsilon}{dw} - n,$

that is,

$\frac{dP}{dw} = n\left(V\frac{d\epsilon}{dw} - 1\right).$

The gradient $d\epsilon/dw$ is positive, because higher individual productivity brings higher sales, and $dP/dw$ never goes negative because of the optimised condition; therefore we have

$\frac{dP}{dw} \geq 0.$
This means that if the firm increases its wage, its profit stays constant or even rises. After a salary increase, employees work harder and do not easily quit or move to other companies; this increases the stability of the company and the motivation of employees. Thus efficiency wage theory motivates the owners of the firm to raise the wage to increase the firm's profit, and high wages can also serve as a reward mechanism.
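A small numerical check of this argument, using an assumed productivity function $\epsilon(w) = \sqrt{w}$ and hypothetical values of $n$ and $V$ (the example is illustrative, not from Krugman):

```python
# Check that profit P(w) = n*V*eps(w) - n*w rises with the wage up to the
# optimum, where V * eps'(w) = 1. With eps(w) = sqrt(w) this gives
# w* = (V/2)**2. All parameter values are hypothetical.

import math

n, V = 10, 20.0  # employment and sales value per efficiency unit

def profit(w):
    return n * V * math.sqrt(w) - n * w

w_star = (V / 2) ** 2  # analytic optimum: 100.0
for w in [25, 50, 75, 100, 125, 150]:
    marker = "  <- optimum" if w == w_star else ""
    print(f"w = {w:>3}: profit = {profit(w):7.1f}{marker}")
# Profit rises monotonically toward w* and falls beyond it, matching
# dP/dw >= 0 at and below the optimised wage.
```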
Philosophical realism—usually not treated as a position of its
own but as a stance towards other subject matters—is the view that a
certain kind of thing (ranging widely from abstract objects like numbers to moral statements to the physical world itself) has mind-independent existence, i.e. that it exists even in the absence of any mind perceiving it or that its existence is not just a mere appearance in the eye of the beholder. This includes a number of positions within epistemology and metaphysics which express that a given thing instead exists independently of knowledge, thought, or understanding. This can apply to items such as the physical world, the past and future, other minds, and the self, though may also apply less directly to things such as universals, mathematical truths, moral truths, and thought itself. However, realism may also include various positions which instead reject metaphysical treatments of reality altogether.
Realism can also be a view about the properties of reality in general, holding that reality exists independent of the mind, as opposed to non-realist views (like some forms of skepticism and solipsism) which question the certainty of anything beyond one's own mind. Philosophers who profess realism often claim that truth consists in a correspondence between cognitive representations and reality.
Realists tend to believe that whatever we believe now is only an
approximation of reality but that the accuracy and fullness of
understanding can be improved. In some contexts, realism is contrasted with idealism. Today it is more often contrasted with anti-realism, for example in the philosophy of science.
Metaphysical realism maintains that "whatever exists does so,
and has the properties and relations it does, independently of deriving
its existence or nature from being thought of or experienced." In other words, an objective reality exists (not merely one or more subjective realities).
Perceptual realism is the common sense view that tables, chairs
and cups of coffee exist independently of perceivers. Direct realists
also claim that it is with such objects that we directly engage. The
objects of perception include such familiar items as paper clips, suns
and olive oil tins. It is these things themselves that we see, smell,
touch, taste and listen to. There are, however, two versions of direct
realism: naïve direct realism and scientific direct realism. They differ
in the properties they claim the objects of perception possess when
they are not being perceived. Naïve realism claims that such objects
continue to have all the properties that we usually perceive them to
have, properties such as yellowness, warmth, and mass. Scientific
realism, however, claims that some of the properties an object is
perceived as having are dependent on the perceiver, and that unperceived
objects should not be conceived as retaining them. Such a stance has a
long history:
By convention sweet and by convention bitter, by convention hot, by
convention cold, by convention colour; in reality atoms and void.
[Democritus, c. 460-370 BCE, quoted by Sextus Empiricus in Barnes, 1987,
pp. 252-253.]
In contrast, some forms of idealism assert that no world exists apart from mind-dependent ideas and some forms of skepticism say we cannot trust our senses. The naive realist view is that objects have properties, such as texture, smell, taste and colour, that are usually perceived absolutely correctly. We perceive them as they really are.
Immanent realism is the ontological understanding which holds that universals are immanently real within particulars themselves, not in a separate realm, and not mere names. Most often associated with Aristotle and the Aristotelian tradition.
Scientific realism
is, at the most general level, the view that the world described by
science is the real world, as it is, independent of what we might take
it to be. Within philosophy of science,
it is often framed as an answer to the question "how is the success of
science to be explained?" The debate over what the success of science
involves centers primarily on the status of unobservable entities apparently talked about by scientific theories.
Generally, those who are scientific realists assert that one can make reliable claims about unobservables (viz., that they have the same ontological status as observables). Analytic philosophers
generally have a commitment to scientific realism, in the sense of
regarding the scientific method as a reliable guide to the nature of
reality. The main alternative to scientific realism is instrumentalism.
Scientific realism in physics
Realism in physics (especially quantum mechanics)
is the claim that the world is in some sense mind-independent: that
even if the results of a possible measurement do not pre-exist the act
of measurement, that does not require that they are the creation of the
observer (contrary to the "consciousness causes collapse" interpretation of quantum mechanics). That interpretation of quantum mechanics, on the other hand, states that the wave function
is already the full description of reality. The different possible
realities described by the wave function are equally true. The observer
collapses the wave function into their own reality. One's reality can be
mind-dependent under this interpretation of quantum mechanics.
Moral realism is the position that ethical sentences express propositions that refer to objective features of the world.
Aesthetic realism
Aesthetic realism (not to be confused with Aesthetic Realism, the philosophy developed by Eli Siegel, or "realism" in the arts) is the view that there are mind-independent aesthetic facts.
Plato (left) and Aristotle (right), a detail of The School of Athens, a fresco by Raphael. In Plato's metaphysics, ever-unchanging Forms, or Ideas, exist apart from particular physical things, and are related to them as their prototype or exemplar. Aristotle's philosophy of reality also aims at the universal. Aristotle finds the universal, which he calls essence, in the commonalities of particular things.
Platonic realism is a radical form of realism regarding the existence of abstract objects, including universals,
which are often translated from Plato's works as "Forms". Since Plato
frames Forms as ideas that are literally real (existing even outside of
human minds), this stance is also called Platonic idealism. This should not be confused with "idealistic" in the ordinary sense of "optimistic" or with other types of philosophical idealism, as presented by philosophers such as George Berkeley. As Platonic abstractions
are not spatial, temporal, or subjectively mental, they are arguably
not compatible with the emphasis of Berkeley's idealism grounded in
mental existence. Plato's Forms include numbers and geometrical figures,
making his theory also include mathematical realism; they also include the Form of the Good, making it additionally include ethical realism.
In Aristotle's more modest view, the existence of universals
(like "blueness") is dependent on the particulars that exemplify them
(like a particular "blue bird", "blue piece of paper", "blue robe",
etc.), and those particulars exist independent of any minds: classic metaphysical realism.
Ancient Indian philosophy
There were many ancient Indian realist schools, such as the Mimamsa, Vishishtadvaita, Dvaita, Nyaya, Yoga, Samkhya, Sautrantika, Jain, Vaisesika, and others. They argued for their realist positions, heavily criticized idealism, like that of the Yogacara school, and composed refutations of the Yogacara position.
Medieval philosophy
Medieval realism developed out of debates over the problem of universals. Universals are terms or properties that can be applied to many things,
such as "red", "beauty", "five", or "dog". Realism (also known as exaggerated realism) in this context, contrasted with conceptualism and nominalism, holds that such universals really exist, independently and somehow prior to the world. Moderate realism holds that they exist, but only insofar as they are instantiated in specific things; they do not exist separately
from the specific thing. Conceptualism holds that they exist, but only
in the mind, while nominalism holds that universals do not "exist" at
all but are no more than words (flatus vocis) that describe specific objects.
In early modern philosophy, Scottish Common Sense Realism was a school of philosophy which sought to defend naive realism against philosophical paradox and scepticism, arguing that matters of common sense
are within the reach of common understanding and that common-sense
beliefs even govern the lives and thoughts of those who hold
non-commonsensical beliefs. It originated in the ideas of the most
prominent members of the Scottish School of Common Sense, Thomas Reid, Adam Ferguson and Dugald Stewart, during the 18th century Scottish Enlightenment and flourished in the late 18th and early 19th centuries in Scotland and America.
The roots of Scottish Common Sense Realism can be found in responses to such philosophers as John Locke, George Berkeley, and David Hume. The approach was a response to the "ideal system" that began with Descartes' concept of the limitations of sense experience
and led Locke and Hume to a skepticism that called religion and the
evidence of the senses equally into question. The common sense realists
found skepticism to be absurd and so contrary to common experience that
it had to be rejected. They taught that ordinary experiences provide
intuitively certain assurance of the existence of the self, of real
objects that could be seen and felt and of certain "first principles"
upon which sound morality and religious beliefs could be established.
Its basic principle was enunciated by its founder and greatest figure,
Thomas Reid:
If there are certain principles, as I think there are, which the
constitution of our nature leads us to believe, and which we are under a
necessity to take for granted in the common concerns of life, without
being able to give a reason for them—these are what we call the
principles of common sense; and what is manifestly contrary to them, is
what we call absurd.