
Monday, October 10, 2022

Fiduciary

From Wikipedia, the free encyclopedia

The Court of Chancery, which governed fiduciary relations in England prior to the Judicature Acts

A fiduciary is a person who holds a legal or ethical relationship of trust with one or more other parties (person or group of persons). Typically, a fiduciary prudently takes care of money or other assets for another person. One party, for example, a corporate trust company or the trust department of a bank, acts in a fiduciary capacity to another party, who, for example, has entrusted funds to the fiduciary for safekeeping or investment. Likewise, financial advisers, financial planners, and asset managers, including managers of pension plans, endowments, and other tax-exempt assets, are considered fiduciaries under applicable statutes and laws. In a fiduciary relationship, one person, in a position of vulnerability, justifiably vests confidence, good faith, reliance, and trust in another whose aid, advice, or protection is sought in some matter. In such a relation, good conscience requires the fiduciary to act at all times for the sole benefit and interest of the one who trusts.

A fiduciary is someone who has undertaken to act for and on behalf of another in a particular matter in circumstances which give rise to a relationship of trust and confidence.

Fiduciary duties in a financial sense exist to ensure that those who manage other people's money act in their beneficiaries' interests, rather than serving their own interests.

A fiduciary duty is the highest standard of care in equity or law. A fiduciary is expected to be extremely loyal to the person to whom he owes the duty (the "principal"), such that there must be no conflict of duty between fiduciary and principal, and the fiduciary must not profit from their position as a fiduciary (unless the principal consents). The nature of fiduciary obligations differs among jurisdictions. In Australia, only proscriptive or negative fiduciary obligations are recognised, whereas in Canada, fiduciaries can come under both proscriptive (negative) and prescriptive (positive) fiduciary obligations.

In English common law, the fiduciary relation is an important concept within a part of the legal system known as equity. In the United Kingdom, the Judicature Acts merged the courts of equity (historically based in England's Court of Chancery) with the courts of common law, and as a result the concept of fiduciary duty also became applicable in common law courts.

When a fiduciary duty is imposed, equity requires a different, stricter standard of behavior than the comparable tortious duty of care in common law. The fiduciary has a duty not to be in a situation where personal interests and fiduciary duty conflict, not to be in a situation where their fiduciary duty conflicts with another fiduciary duty, and a duty not to profit from their fiduciary position without knowledge and consent. A fiduciary ideally would not have a conflict of interest. It has been said that fiduciaries must conduct themselves "at a level higher than that trodden by the crowd" and that "[t]he distinguishing or overriding duty of a fiduciary is the obligation of undivided loyalty".

In different jurisdictions

Different jurisdictions regard fiduciary duties in different lights. Canadian law, for example, has developed a more expansive view of fiduciary obligation than American law, while Australian law and British law have developed more conservative approaches than either the United States or Canada. In Australia, it has been found that there is no comprehensive list of criteria by which to establish a fiduciary relationship. Courts have so far refused to define the concept of a fiduciary, instead preferring to develop the law on a case-by-case basis and by way of analogy. Fiduciary relationships are of different types and carry different obligations, so a test appropriate to determine whether a fiduciary relationship exists for one purpose might be inappropriate for another.

In 2014 the Law Commission (England and Wales) reviewed the fiduciary duties of investment intermediaries, looking particularly at the duties on pension trustees. They commented that the term "fiduciary" is used in many different ways.

Fiduciary duties cannot be understood in isolation. Instead they are better viewed as ‘legal polyfilla’, molding themselves flexibly around other legal structures, and sometimes filling the gaps.

— Law Commission (England and Wales) Fiduciary Duties of Investment Intermediaries Law Com 350, para 3.11 

The question of who is a fiduciary is a "notoriously intractable" question and this was the first of many questions. In SEC v. Chenery Corporation, Frankfurter J said,

To say that a man is a fiduciary only begins the analysis; it gives direction to further inquiry. To whom is he a fiduciary? What obligations does he owe as a fiduciary? In what respect has he failed to discharge these obligations? And what are the consequences of his deviation from his duty?

The law expressed here follows the general body of elementary fiduciary law found in most common law jurisdictions; for in-depth analysis of particular jurisdictional idiosyncrasies, consult primary authorities within the relevant jurisdiction.

This is especially true in the area of labor and employment law. In Canada a fiduciary has obligations to the employer even after the employment relationship is terminated, whereas in the United States the employment and fiduciary relationships terminate together.

Fiduciary duties under Delaware corporate law

The corporate law of Delaware is the most influential in the United States, as more than 50% of publicly traded companies in the United States, including 64% of the Fortune 500, have chosen to incorporate in that state. Under Delaware law, officers, directors and other control persons of corporations and other entities owe three primary fiduciary duties: (1) the duty of care, (2) the duty of loyalty, and (3) the duty of good faith.

The duty of care requires control persons to act on an informed basis after due consideration of all information. The duty includes a requirement that such persons reasonably inform themselves of alternatives. In doing so, they may rely on employees and other advisers so long as they do so with a critical eye and do not unquestioningly accept the information and conclusions provided to them. Under normal circumstances, their actions are accorded the protection of the business judgment rule, which presumes that control persons acted properly, provided that they act on an informed basis, in good faith and in the honest belief that the action taken was in the best interests of the company.

The duty of loyalty requires control persons to look to the interests of the company and its other owners and not to their personal interests. In general, they cannot use their positions of trust, confidence and inside knowledge to further their own private interests or approve an action that will provide them with a personal benefit (such as continued employment) that does not primarily benefit the company or its other owners.

The duty of good faith requires control persons to exercise care and prudence in making business decisions—that is, the care that a reasonably prudent person in a similar position would use under similar circumstances. Control persons fail to act in good faith, even if their actions are not illegal, when they take actions for improper purposes or, in certain circumstances, when their actions have grossly inequitable results. The duty to act in good faith is an obligation not only to make decisions free from self-interest, but also free of any interest that diverts the control persons from acting in the best interest of the company. The duty to act in good faith may be measured by an individual's particular knowledge and expertise. The higher the level of expertise, the more accountable that person will be (e.g., a finance expert may be held to a more exacting standard than others in accepting a third party valuation).

At one time, courts seemed to view the duty of good faith as an independent obligation. However, more recently, courts have treated the duty of good faith as a component of the duty of loyalty.

Diagram illustrating fiduciary duty, placing good faith within duty of loyalty.

Fiduciary duty in Canadian corporate law

In Canada, directors of corporations owe a fiduciary duty. A debate exists as to the nature and extent of this duty following a controversial landmark judgment from the Supreme Court of Canada in BCE Inc. v. 1976 Debentureholders. Scholarly literature has defined this as a "tripartite fiduciary duty", composed of (1) an overarching duty to the corporation, which contains two component duties — (2) a duty to protect shareholder interests from harm, and (3) a procedural duty of "fair treatment" for relevant stakeholder interests. This tripartite structure encapsulates the duty of directors to act in the "best interests of the corporation, viewed as a good corporate citizen".

Relationships

The most common circumstance where a fiduciary duty will arise is between a trustee, whether real or juristic, and a beneficiary. The trustee to whom property is legally committed is the legal—i.e., common law—owner of all such property. The beneficiary, at law, has no legal title to the trust; however, the trustee is bound by equity to suppress their own interests and administer the property only for the benefit of the beneficiary. In this way, the beneficiary obtains the use of property without being its technical owner.

Others, such as corporate directors, may be held to a fiduciary duty similar in some respects to that of a trustee. This happens when, for example, the directors of a bank are trustees for the depositors, the directors of a corporation are trustees for the stockholders or a guardian is trustee of their ward's property. A person in a sensitive position sometimes protects themselves from possible conflict of interest charges by setting up a blind trust, placing their financial affairs in the hands of a fiduciary and giving up all right to know about or intervene in their handling.

The fiduciary functions of trusts and agencies are commonly performed by a trust company, such as a commercial bank, organized for that purpose. In the United States, the Office of the Comptroller of the Currency (OCC), an agency of the United States Department of the Treasury, is the primary regulator of the fiduciary activities of federal savings associations.

When a court desires to hold the offending party to a transaction responsible so as to prevent unjust enrichment, the judge can declare that a fiduciary relation exists between the parties, as though the offender were in fact a trustee for the partner.

The law routinely imposes a fiduciary duty on relationships between certain classes of persons, such as that between trustee and beneficiary.

In Australia, the categories of fiduciary relationships are not closed.

Roman and civil law recognized a type of contract called fiducia (also contractus fiduciae or fiduciary contract), involving essentially a sale to a person coupled with an agreement that the purchaser should sell the property back upon the fulfillment of certain conditions. Such contracts were used in the emancipation of children, in connection with testamentary gifts and in pledges. Under Roman law a woman could arrange a fictitious sale called a fiduciary coemption in order to change her guardian or gain legal capacity to make a will.

In Roman Dutch law, a fiduciary heir may receive property subject to passing it to another on fulfilment of certain conditions; the gift is called a fideicommissum. The fiduciary of a fideicommissum is a fideicommissioner and one that receives property from a fiduciary heir is a fideicommissary heir.

Fiduciary principles may be applied in a variety of legal contexts.

Possible relationships

Joint ventures, as opposed to business partnerships, are not presumed to carry a fiduciary duty; however, this is a matter of degree. If a joint venture is conducted at commercial arm's length and both parties are on an equal footing then the courts will be reluctant to find a fiduciary duty, but if the joint venture is carried out more in the manner of a partnership then fiduciary relationships can and often will arise.

Husbands and wives are not presumed to be in a fiduciary relationship; however, one may be easily established. Similarly, ordinary commercial transactions in themselves are not presumed to give rise to fiduciary duties, but can, should the appropriate circumstances arise. These are usually circumstances where the contract specifies a degree of trust and loyalty, or where one can be inferred by the court.

Australian courts also do not recognise parents and their children to be in fiduciary relationships. In contrast, the Supreme Court of Canada allowed a child to sue her father for damages for breach of his fiduciary duties, opening the door in Canada for allowing fiduciary obligations between parent and child to be recognised.

Australian courts have also not accepted doctor-patient relationships as fiduciary in nature. In Breen v Williams, the High Court viewed the doctor's responsibilities over their patients as lacking the representative capacity of the trustee in fiduciary relationships. Moreover, the existence of remedies in contract and tort made the Court reluctant to recognise a fiduciary relationship.

In 2011, in an insider trading case, the U.S. Securities and Exchange Commission brought charges against a boyfriend of a Disney intern, alleging he had a fiduciary duty to his girlfriend and breached it. The boyfriend, Toby Scammell, allegedly received and used insider information on Disney's takeover of Marvel Comics.

Generally, the employment relationship is not regarded as fiduciary, but may be so if

... within a particular contractual relationship there are specific contractual obligations which the employee has undertaken which have placed him in a situation where equity imposes these rigorous duties in addition to the contractual obligations. Although terminologies like duty of good faith, or loyalty, or the mutual duty of trust and confidence are frequently used to describe employment relationships, such concepts usually denote situations where "a party merely has to take into consideration the interests of another, but does not have to act in the interests of that other."

If fiduciary relationships are to arise between employers and employees, it is necessary to ascertain that the employee has placed himself in a position where he must act solely in the interests of his employer. In the case of Canadian Aero Service Ltd v O'Malley, it was held that a senior employee is much more likely to be found to owe fiduciary duties towards his employer.

A protector of a trust may owe fiduciary duties to the beneficiaries, although there is no case law establishing this to be the case.

In 2015, the United States Department of Labor issued a proposed rule that, if finalized, would extend the fiduciary duty relationship to investment advisors and some brokers, including insurance brokers. In 2017, the Trump Administration planned to order a 180-day delay of implementation of the rule, sometimes known as the 'fiduciary rule'. The rule would require "brokers offering retirement investment advice to put their clients' interest first". The Trump Administration later rescinded the fiduciary rule on July 20, 2018. Prior to its repeal, the rule was also dealt blows by the US Fifth Circuit Court of Appeals in March and June 2018.

Example

For example, two members, X and Y, of a band currently under contract with one another (or with some other tangible, existing relationship that creates a legal duty) record songs together. Let us imagine it is a serious, successful band and that a court would declare that the two members are equal partners in a business. One day, X takes some demos made cooperatively by the duo to a recording label, where an executive expresses interest. X pretends it is all his work and receives an exclusive contract and $50,000. Y is unaware of the encounter until reading it in the paper the next week.

This situation represents a conflict of interest and duty. Both X and Y hold fiduciary duties to each other, which means they must subdue their own interests in favor of the duo's collective interest. By signing an individual contract and taking all the money, X has put personal interest above the fiduciary duty. Therefore, a court will find that X has breached his fiduciary duty. The judicial remedy here will be that X holds both the contract and the money in a constructive trust for the duo. Note, X will not be punished or totally denied the benefit; both X and Y will receive a half share in the contract and the money.
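The half-share outcome can be illustrated with a small arithmetic sketch. The figures come from the example above; the function name is purely illustrative, not a legal term.

```python
# Illustrative sketch of the constructive-trust remedy in the band example.
# X received $50,000 under a contract obtained in breach of fiduciary duty;
# equity treats X as holding the gain on constructive trust for the
# partnership, so each of the two equal partners takes a half share.

def constructive_trust_split(gain: float, partners: int) -> float:
    """Return each equal partner's share of a gain held on constructive trust."""
    return gain / partners

share = constructive_trust_split(50_000, 2)
print(f"Each of X and Y receives ${share:,.2f}")  # prints "Each of X and Y receives $25,000.00"
```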

Elements of duty

A fiduciary, such as the administrator, executor or guardian of an estate, may be legally required to file with a probate court or judge a surety bond, called a fiduciary bond or probate bond, to guarantee faithful performance of his duties. One of those duties may be to prepare, generally under oath, an inventory of the tangible or intangible property of the estate, describing the items or classes of property and usually placing a valuation on them.

A bank or other fiduciary having legal title to a mortgage may sell fractional shares to investors, thereby creating a participating mortgage.

Accountability

A fiduciary will be liable to account if proven to have acquired a profit, benefit or gain from the relationship by one of three means:

  • In circumstances of conflict of duty and interest;
  • In circumstances of conflict of duty to one person and duty to another person;
  • By taking advantage of the fiduciary position.

Therefore, it is said the fiduciary has a duty not to be in a situation where personal interests and fiduciary duty conflict, a duty not to be in a situation where his fiduciary duty conflicts with another fiduciary duty, and not to profit from his fiduciary position without express knowledge and consent. A fiduciary cannot have a conflict of interest.

The state of Texas in the United States sets out the duties of a fiduciary in its Estates Code, chapter 751, as follows (the bracketed references to TPC refer to the Texas Probate Code superseded by the Estates Code, effective January 1, 2014):

Sec. 751.101. Fiduciary Duties. [TPC §489B(a)]
An attorney in fact or agent is a fiduciary and has a duty to inform and to account for actions taken under the power of attorney.
Sec. 751.102. Duty to Timely Inform Principal. [TPC §489B(b)]
(a) The attorney in fact or agent shall timely inform the principal of each action taken under the power of attorney.
(b) Failure of an attorney in fact or agent to timely inform, as to third parties, does not invalidate any action of the attorney in fact or agent.
Sec. 751.103. Maintenance of Records. [TPC §489B(c), (f)]
(a) The attorney in fact or agent shall maintain records of each action taken or decision made by the attorney in fact or agent.
(b) The attorney in fact or agent shall maintain all records until delivered to the principal, released by the principal, or discharged by a court.
Sec. 751.104. Accounting. [TPC §489B(d), (e)]
(a) The principal may demand an accounting by the attorney in fact or agent.
(b) Unless otherwise directed by the principal, an accounting under Subsection (a) must include:
(1) the property belonging to the principal that has come to the attorney in fact’s or agent’s knowledge or into the attorney in fact’s or agent’s possession;
(2) each action taken or decision made by the attorney in fact or agent;
(3) a complete account of receipts, disbursements, and other actions of the attorney in fact or agent that includes the source and nature of each receipt, disbursement, or action, with receipts of principal and income shown separately;
(4) a listing of all property over which the attorney in fact or agent has exercised control that includes:
(A) an adequate description of each asset; and
(B) the asset’s current value, if the value is known to the attorney in fact or agent;
(5) the cash balance on hand and the name and location of the depository at which the cash balance is kept;
(6) each known liability; and
(7) any other information and facts known to the attorney in fact or agent as necessary for a full and definite understanding of the exact condition of the property belonging to the principal.
(c) Unless directed otherwise by the principal, the attorney in fact or agent shall also provide to the principal all documentation regarding the principal’s property.
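The statutory accounting in Sec. 751.104(b) is essentially a structured record. The sketch below models its required contents as a Python data structure; the class and field names are hypothetical labels chosen for readability, not statutory terms of art.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Asset:
    description: str             # (b)(4)(A): an adequate description of each asset
    current_value: Optional[float]  # (b)(4)(B): value, only if known to the agent

@dataclass
class Accounting:
    """One accounting rendered by an attorney in fact or agent on demand."""
    property_known: List[str]           # (b)(1): principal's property known to or possessed by the agent
    actions_taken: List[str]            # (b)(2): each action taken or decision made
    receipts: List[Tuple[str, float]]   # (b)(3): source and amount; principal and income shown separately
    disbursements: List[Tuple[str, float]]
    assets_controlled: List[Asset]      # (b)(4): property over which the agent exercised control
    cash_balance: float                 # (b)(5): cash balance on hand
    depository: str                     # (b)(5): name and location of the depository
    liabilities: List[str]              # (b)(6): each known liability
    other_facts: List[str] = field(default_factory=list)  # (b)(7): other material information
```

An agent's ledger software might populate such a record before rendering it to the principal, together with the supporting documentation required by subsection (c).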

Conflict of duties

A fiduciary's duty must not conflict with another fiduciary duty. Conflicts between one fiduciary duty and another arise most often when a lawyer or an agent, such as a real estate agent, represents more than one client, and the interests of those clients conflict. This would occur, for example, when a lawyer attempts to represent both the plaintiff and the defendant in the same matter. The rule comes from the logical conclusion that a fiduciary cannot make the principal's interests a top priority if he has two principals and their interests are diametrically opposed; he must balance the interests, which is not acceptable to equity. Therefore, the conflict of duty and duty rule is really an extension of the conflict of interest and duty rules.

No-profit rule

A fiduciary must not profit from the fiduciary position. This includes any benefits or profits which, although unrelated to the fiduciary position, came about because of an opportunity that the fiduciary position afforded. It is unnecessary that the principal would have been unable to make the profit; if the fiduciary makes a profit, by virtue of his role as fiduciary for the principal, then the fiduciary must report the profit to the principal. If the principal provides fully informed consent, then the fiduciary may keep the benefit and be absolved of any liability for what would be a breach of fiduciary duty. If this requirement is not met then the property is deemed by the court to be held by the fiduciary on constructive trust for the principal.

Secret commissions, or bribes, also come under the no-profit rule. The bribe will be held in constructive trust for the principal. The person who made the bribe cannot recover it, since he has committed a crime. Similarly, the fiduciary who received the bribe has committed a crime. Fiduciary duties are an aspect of equity and, in accordance with the equitable principles, or maxims, equity serves those with clean hands. Therefore, the bribe is held on constructive trust for the principal, the only innocent party.

Bribes were initially considered not to be held on constructive trust, but were considered to be held as a debt by the fiduciary to the principal. This approach has been overruled; the bribe is now classified as a constructive trust. The change is due to pragmatic reasons, especially in regard to a bankrupt fiduciary. If a fiduciary takes a bribe and that bribe is considered a debt then if the fiduciary goes bankrupt the debt will be left in his pool of assets to be paid to creditors and the principal may miss out on recovery because other creditors were more secured. If the bribe is treated as held on a constructive trust then it will remain in the possession of the fiduciary, despite bankruptcy, until such time as the principal recovers it.

Avoiding these accountabilities

The landmark Australian decision ASIC v Citigroup noted that "informed consent" on behalf of the beneficiary to breaches of either the no-profit or the no-conflict rule will allow the fiduciary to get around these rules. Furthermore, it highlighted that a contract may include a clause that allows individuals to avoid all fiduciary obligations within the course of dealings, and thereby continue to make a personal profit or deal with other parties, tasks that might otherwise have conflicted with what would have been a fiduciary duty had it not been for this clause. In the Australian case of Farah Constructions Pty Ltd v Say-Dee Pty Ltd, however, Gleeson CJ, Gummow, Callinan, Heydon and Crennan JJ observed that the sufficiency of disclosure may depend on the sophistication and intelligence of the persons to whom the disclosure must be made.

However, in the English case of Armitage v Nurse, an exception was noted for the fiduciary's obligation of good faith: liability for breach of fiduciary duty by way of fraud or dishonesty cannot be avoided through an exclusion clause in a contract. The decision in Armitage v Nurse has been applied in Australia.

Breaches of duty and remedies

Conduct by a fiduciary may be deemed constructive fraud when it is based on acts, omissions or concealments considered fraudulent and that gives one an advantage against the other because such conduct—though not actually fraudulent, dishonest or deceitful—demands redress for reasons of public policy. Breach of fiduciary duty may occur in insider trading, when an insider or a related party makes trades in a corporation's securities based on material non-public information obtained during the performance of the insider's duties at the corporation. Breach of fiduciary duty by a lawyer with regard to a client, if negligent, may be a form of legal malpractice; if intentional, it may be remedied in equity.

Where a principal can establish both a fiduciary duty and a breach of that duty, through violation of the above rules, the court will find that the benefit gained by the fiduciary should be returned to the principal because it would be unconscionable to allow the fiduciary to retain the benefit by employing his strict common law legal rights. This will be the case, unless the fiduciary can show there was full disclosure of the conflict of interest or profit and that the principal fully accepted and freely consented to the fiduciary's course of action.

Remedies will differ according to the type of damage or benefit. They are usually distinguished as proprietary remedies, dealing with property, and personal remedies, dealing with pecuniary (monetary) compensation. Where concurrent contractual and fiduciary relationships exist, the remedies available to the plaintiff beneficiary depend upon the duty of care owed by the defendant and the specific breach of duty allowing for remedy or damages. The courts will clearly distinguish the relationship and determine the nature in which the breach occurred.

Constructive trusts

Where the unconscionable gain by the fiduciary is in an easily identifiable form, such as the recording contract discussed above, the usual remedy will be the already discussed constructive trust.

Constructive trusts arise in many aspects of equity, not just in a remedial sense. In the remedial sense, a constructive trust means that the court has created and imposed a duty on the fiduciary to hold the money in safekeeping until it can be rightfully transferred to the principal.

Account of profits

An account of profits is another potential remedy. It is usually used where the breach of duty was ongoing or when the gain is hard to identify. The idea of an account of profits is that the fiduciary profited unconscionably by virtue of the fiduciary position, so any profit made should be transferred to the principal. It may sound like a constructive trust at first, but it is not.

An account of profits is the appropriate remedy when, for example, a senior employee has taken advantage of his fiduciary position by conducting his own company on the side and has accumulated substantial profits over a period of time, profits which he would not have been able to make otherwise. The fiduciary in breach may, however, receive an allowance for effort and ingenuity expended in making the profit.

Compensatory damages

Compensatory damages are also available. Accounts of profits can be hard remedies to establish; therefore, a plaintiff will often seek compensation (damages) instead. Courts of equity initially had no power to award compensatory damages, which traditionally were a remedy at common law, but legislation and case law have changed the situation, so compensatory damages may now be awarded for a purely equitable action.

Fiduciary duty and pension governance

Some experts have argued that, in the context of pension governance, trustees have started to reassert their fiduciary prerogatives more strongly after 2008 – notably following the heavy losses or reduced returns incurred by many retirement schemes in the wake of the Great Recession and the progression of ESG and Responsible Investment ideas: "Clearly, there is a mounting demand for CEOs (equity issuers) and governments (sovereign bond issuers) to be more 'accountable' ... No longer 'absentee landlords', trustees have started to exercise more forcefully their governance prerogatives across the boardrooms of Britain, Benelux and America: coming together through the establishment of engaged pressure groups." However, in the United States, there are questions about whether a pension's decision to consider factors such as how investments impact contributors' continued employment violates a fiduciary duty to maximize the retirement fund's returns.

Pension funds and other large institutional investors are increasingly making their voices heard to call out irresponsible practices in the businesses in which they invest 

The Fiduciary Duty in the 21st Century Programme, led by the United Nations Environment Programme Finance Initiative, the Principles for Responsible Investment, and the Generation Foundation, aims to end the debate on whether fiduciary duty is a legitimate barrier to the integration of environmental, social and governance (ESG) issues in investment practice and decision-making. This followed the 2015 publication of "Fiduciary Duty in the 21st Century" which concluded that "failing to consider all long-term investment value drivers, including ESG issues, is a failure of fiduciary duty". Founded on the realization that there is a general lack of legal clarity globally about the relationship between sustainability and investors’ fiduciary duty, the programme engaged with and interviewed over 400 policymakers and investors to raise awareness of the importance of ESG issues to the fiduciary duties of investors. The programme also published roadmaps which set out recommendations to fully embed the consideration of ESG factors in the fiduciary duties of investors across more than eight capital markets. Drawing upon findings from Fiduciary Duty in the 21st Century, the European Commission High-Level Expert Group (HLEG) recommended in its 2018 final report that the EU Commission clarify investor duties to better embrace long-term horizon and sustainability preferences.

Third-wave feminism

From Wikipedia, the free encyclopedia
 
Rebecca Walker in 2003. The term third wave is credited to Walker's 1992 article, "Becoming the Third Wave".

Third-wave feminism is an iteration of the feminist movement that began in the early 1990s, prominent in the decades prior to the fourth wave. Grounded in the civil-rights advances of the second wave, third-wave feminists, born in the 1960s and 1970s as members of Generation X and early Generation Y, embraced diversity and individualism in women, and sought to redefine what it meant to be a feminist. The third wave saw the emergence of new feminist currents and theories, such as intersectionality, sex positivity, vegetarian ecofeminism, transfeminism, and postmodern feminism. According to feminist scholar Elizabeth Evans, the "confusion surrounding what constitutes third-wave feminism is in some respects its defining feature."

The third wave is traced to the emergence of the riot grrrl feminist punk subculture in Olympia, Washington, in the early 1990s, and to Anita Hill's televised testimony in 1991 (to an all-male, all-white Senate Judiciary Committee) that African-American judge Clarence Thomas had sexually harassed her. The term third wave is credited to Rebecca Walker, who responded to Thomas's appointment to the Supreme Court with an article in Ms. magazine, "Becoming the Third Wave" (1992). She wrote:

So I write this as a plea to all women, especially women of my generation: Let Thomas' confirmation serve to remind you, as it did me, that the fight is far from over. Let this dismissal of a woman's experience move you to anger. Turn that outrage into political power. Do not vote for them unless they work for us. Do not have sex with them, do not break bread with them, do not nurture them if they don't prioritize our freedom to control our bodies and our lives. I am not a post-feminism feminist. I am the Third Wave.

Walker sought to establish that third-wave feminism was not just a reaction, but a movement in itself, because the feminist cause had more work ahead. The term intersectionality—to describe the idea that women experience "layers of oppression" caused, for example, by gender, race and class—had been introduced by Kimberlé Crenshaw in 1989, and it was during the third wave that the concept flourished. As feminists came online in the late 1990s and early 2000s and reached a global audience with blogs and e-zines, they broadened their goals, focusing on abolishing gender-role stereotypes and expanding feminism to include women with diverse racial and cultural identities.

History

The rights and programs gained by feminists of the second wave served as a foundation for the third wave. The gains included Title IX (equal access to education), public discussion about the abuse and rape of women, access to contraception and other reproductive services (including the legalization of abortion), the creation and enforcement of sexual-harassment policies for women in the workplace, the creation of domestic-abuse shelters for women and children, child-care services, educational funding for young women, and women's studies programs.

Feminist leaders rooted in the second wave such as Gloria E. Anzaldúa, bell hooks, Cherríe Moraga, Audre Lorde, Maxine Hong Kingston, and other feminists of color, sought to negotiate a space within feminist thought for consideration of race. Cherríe Moraga and Gloria E. Anzaldúa had published the anthology This Bridge Called My Back (1981), which, along with All the Women Are White, All the Blacks Are Men, But Some of Us Are Brave (1982), edited by Akasha (Gloria T.) Hull, Patricia Bell-Scott, and Barbara Smith, argued that second-wave feminism had focused primarily on the problems of white women. The emphasis on the intersection between race and gender became increasingly prominent.

In the interlude of the late 1970s and early 1980s, the feminist sex wars arose as a reaction against the radical feminism of the second wave and its views on sexuality, countering them with a concept of "sex-positivity" and heralding the third wave.

Another crucial point for the start of the third wave is the publication in 1990 of Gender Trouble: Feminism and the Subversion of Identity by Judith Butler, which soon became one of the most influential works of contemporary feminist theory. In it, Butler argued against homogenizing conceptions of "women", which had a normative and exclusionary effect not only in the social world more broadly but also within feminism. This was the case not only for racialized or working-class women, but also for masculine, lesbian or non-binary women. In addition, she outlined her theory of gender as performativity, which posited that gender works by enforcing a series of repetitions of verbal and non-verbal acts that generate the "illusion" of a coherent and intelligible gender expression and identity, which otherwise lack any essential property. Lastly, Butler developed the claim that there is no recourse to a "natural" sex, but that what we call such is always already culturally mediated, and therefore inseparable from gender. These views were foundational for the field of queer theory, and played a major role in the development of third-wave feminist theories and practices.

Early years

Riot grrrl

Kathleen Hanna, lead singer of Bikini Kill, 1991

The emergence of riot grrrl, the feminist punk subculture, in the early 1990s in Olympia, Washington, marked the beginning of third-wave feminism. The triple "r" in grrrl was intended to reclaim the word girl for women. Alison Piepmeier writes that riot grrrl and Sarah Dyer's Action Girl Newsletter formulated "a style, rhetoric, and iconography for grrrl zines" that came to define third-wave feminism, and that focused on the viewpoint of adolescent girls. Based on hard-core punk rock, the movement created zines and art, talked about rape, patriarchy, sexuality, and female empowerment, started chapters, and supported and organized women in music. An undated Bikini Kill tour flier asked "What is Riot grrrl?":

BECAUSE in every form of media I see us/myself slapped, decapitated, laughed at, objectified, raped, trivialized, pushed, ignored, stereotyped, kicked, scorned, molested, silenced, invalidated, knifed, shot, choked, and killed. ... BECAUSE a safe space needs to be created for girls where we can open our eyes and reach out to each other without being threatened by this sexist society and our day to day bullshit. ... BECAUSE we girls want to create mediums that speak to US. We are tired of boy band after boy band, boy zine after boy zine, boy punk after boy punk after boy. BECAUSE I am tired of these things happening to me; I'm not a fuck toy. I'm not a punching bag. I'm not a joke.

Riot grrrl was grounded in the DIY philosophy of punk values, adopting an anti-corporate stance of self-sufficiency and self-reliance. Its emphasis on universal female identity and separatism often appeared more closely allied with second-wave feminism. Bands associated with the movement included Bratmobile, Excuse 17, Jack Off Jill, Free Kitten, Heavens to Betsy, Huggy Bear, L7, Fifth Column, and Team Dresch.

Riot grrrl culture gave people the space to enact change on a macro, meso and micro scale. As Kevin Dunn explains:

Using the do-it-yourself ethos of punk to provide resources for individual empowerment, Riot Grrrl encouraged females to engage in multiple sites of resistance. At the macro-level, Riot Grrrls resist society's dominant constructions of femininity. At the meso-level, they resist stifling gender roles in punk. At the micro-level, they challenge gender constructions in their families and among their peers.

The demise of riot grrrl is linked to commodification and misrepresentation of its message, mainly through media coverage. Writing in Billboard magazine, Jennifer Keishin Armstrong states:

In the early 1990s, the women's movement seemed dead to the mainstream. Few pop cultural figures embraced the term "feminist." The underground punk movement known as "Riot Grrrl" scared anyone outside of it, while Alanis Morissette's breakthrough single "You Oughta Know" scared everyone else even more. Then, in the middle of the decade, the Spice Girls took all of that fear and made feminism – popularized as Girl Power – fun. Suddenly, regular girls far outside Women's Studies classrooms had at least an inkling of what would be known in wonky circles as Third Wave Feminism – led by Generation Xers pushing for sexual freedom and respect for traditionally "girly" pursuits like makeup and fashion, among many other issues.

El Hunt of NME states, "Riot grrrl bands in general were very focused on making space for women at gigs. They understood the importance of giving women a platform and voice to speak out against abusers. For a lot of young women and girls, who probably weren't following the Riot grrrl scene at all, The Spice Girls brought this spirit into the mainstream and made it accessible."

Anita Hill

In 1991, Anita Hill, when questioned, accused Clarence Thomas, an African-American judge who had been nominated to the United States Supreme Court, of sexual harassment. Thomas denied the accusations, calling them a "high-tech lynching". After extensive debate, the United States Senate voted 52–48 in favor of Thomas. In response, Ms. Magazine published an article by Rebecca Walker, entitled "Becoming the Third Wave", in which she stated: "I am not a post-feminism feminist. I am the third wave." Many had argued that Thomas should be acquitted because of his plans to create opportunities for people of color. When Walker asked her partner his opinion and he said the same thing, she asked: "When will progressive black men prioritize my rights and well-being?" She wanted racial equality but without dismissing women.

In 1992, dubbed the "Year of the Woman", four women entered the United States Senate to join the two already there. The following year, another woman, Kay Bailey Hutchison, won a special election, bringing the number to seven. The 1990s saw the US's first female Attorney General (Janet Reno) and Secretary of State (Madeleine Albright), as well as the second woman on the Supreme Court, Ruth Bader Ginsburg, and the first US First Lady, Hillary Clinton, to have had an independent political, legal and activist career.

Purpose

Jennifer Baumgardner, co-author of Manifesta (2000), in 2008

Arguably the biggest challenge to third-wave feminism was that the gains of second-wave feminism were taken for granted, and the importance of feminism not understood. Baumgardner and Richards (2000) wrote: "[F]or anyone born after the early 1960s, the presence of feminism in our lives is taken for granted. For our generation, feminism is like fluoride. We scarcely notice that we have it—it's simply in the water."

Essentially the claim was that gender equality had already been achieved, via the first two waves, and that further attempts to push for women's rights were irrelevant and unnecessary, or perhaps even pushed the pendulum too far in women's favor. This issue manifested itself in the heated debates about whether affirmative action was creating gender equality or punishing white, middle-class males for the biological history that they had inherited. Third-wave feminism therefore focused on consciousness raising: "one's ability to open their mind to the fact that male domination does affect the women of our generation is what we need."

Third-wave feminists often engaged in "micro-politics", and challenged the second wave's paradigm as to what was good for women. Proponents of third-wave feminism said that it allowed women to define feminism for themselves. Describing third-wave feminism in Manifesta: Young Women, Feminism And The Future (2000), Jennifer Baumgardner and Amy Richards suggested that feminism could change with every generation and individual:

The fact that feminism is no longer limited to arenas where we expect to see it—NOW, Ms., women's studies, and red-suited congresswomen—perhaps means that young women today have really reaped what feminism has sown. Raised after Title IX and William Wants a Doll [sic], young women emerged from college or high school or two years of marriage or their first job and began challenging some of the received wisdom of the past ten or twenty years of feminism. We're not doing feminism the same way that the seventies feminists did it; being liberated doesn't mean copying what came before but finding one's own way—a way that is genuine to one's own generation.

Protesters at a women's march in 2017

Third-wave feminists used personal narratives as a form of feminist theory. Expressing personal experiences gave women space to recognize that they were not alone in the oppression and discrimination they faced. Using these accounts has benefits because it records personal details that may not be available in traditional historical texts.

Third-wave ideology focused on a more post-structuralist interpretation of gender and sexuality. Post-structuralist feminists saw binaries such as male–female as an artificial construct created to maintain the power of the dominant group. Joan W. Scott wrote in 1998 that "poststructuralists insist that words and texts have no fixed or intrinsic meanings, that there is no transparent or self-evident relationship between them and either ideas or things, no basic or ultimate correspondence between language and the world".

Relationship with second wave

The second wave of feminism is often accused of being elitist and of ignoring groups such as women of colour and transgender women, focusing instead on white, middle-class, cisgender women. Third-wave feminists questioned the beliefs of their predecessors and began to apply feminist theory to a wider variety of women, who had not previously been included in feminist activity.

Amy Richards defined the feminist culture for the third wave as "third wave because it's an expression of having grown up with feminism". Second-wave feminists grew up where the politics intertwined within the culture, such as "Kennedy, the Vietnam War, civil rights, and women's rights". In contrast, the third wave sprang from a culture of "punk-rock, hip-hop, 'zines, products, consumerism and the Internet". In an essay entitled "Generations, Academic Feminists in dialogue" Diane Elam wrote:

This problem manifests itself when senior feminists insist that junior feminists be good daughters, defending the same kind of feminism their mothers advocated. Questions and criticisms are allowed, but only if they proceed from the approved brand of feminism. Daughters are not allowed to invent new ways of thinking and doing feminism for themselves; feminists' politics should take the same shape that it has always assumed.

Rebecca Walker, in To Be Real: Telling the Truth and Changing the Face of Feminism (1995), wrote about her fear of rejection by her mother (Alice Walker) and her godmother (Gloria Steinem) for challenging their views:

Young women feminists find themselves watching their speech and tone in their works so as not to upset their elder feminist mothers. There is a definite gap between feminists who consider themselves to be second-wave and those who would label themselves as third-wave. Although the age criteria for second-wave feminists and third-wave feminists are murky, younger feminists definitely have a hard time proving themselves worthy as feminist scholars and activists.

Issues

Violence against women

The Vagina Monologues premiered in New York in 1996.

Violence against women, including rape, domestic violence, and sexual harassment, became a central issue. Organizations such as V-Day formed with the goal of ending gender violence, and artistic expressions, such as The Vagina Monologues, generated awareness. Third-wave feminists wanted to transform traditional notions of sexuality and embrace "an exploration of women's feelings about sexuality that included vagina-centred topics as diverse as orgasm, birth, and rape".

Reproductive rights

One of third-wave feminism's primary goals was to demonstrate that access to contraception and abortion are women's reproductive rights. According to Baumgardner and Richards, "It is not feminism's goal to control any woman's fertility, only to free each woman to control her own." South Dakota's 2006 attempt to ban abortion in all cases, except when necessary to protect the mother's life, and the US Supreme Court's vote to uphold the partial birth abortion ban were viewed as restrictions on women's civil and reproductive rights. Restrictions on abortion in the US, which was mostly legalized by the 1973 Supreme Court decision in Roe v. Wade, were becoming more common in states around the country. These included mandatory waiting periods, parental-consent laws, and spousal-consent laws.

Reclaiming derogatory terms

The first Slutwalk, Toronto, 2011

English speakers continued to use words such as spinster, bitch, whore, and cunt to refer to women in derogatory ways. Inga Muscio wrote, "I posit that we're free to seize a word that was kidnapped and co-opted in a pain-filled, distant past, with a ransom that cost our grandmothers' freedom, children, traditions, pride and land." Taking back the word bitch was fueled by the single "All Women Are Bitches" (1994) by the all-woman band Fifth Column, and by the book Bitch: In Praise of Difficult Women (1999) by Elizabeth Wurtzel.

The utility of the reclamation strategy became a hot topic with the introduction of SlutWalks in 2011. The first took place in Toronto on 3 April that year in response to a Toronto police officer's remark that "women should avoid dressing like sluts in order not to be victimized." Additional SlutWalks sprang up internationally, including in Berlin, London, New York City, Seattle, and West Hollywood. Several feminist bloggers criticized the campaign; reclamation of the word slut was questioned.

Sexual liberation

Third-wave feminists expanded the second-wave definition of sexual liberation to "mean a process of first becoming conscious of the ways one's gender identity and sexuality have been shaped by society and then intentionally constructing (and becoming free to express) one's authentic gender identity". Since third-wave feminism relied on different personal definitions to explain feminism, there is controversy surrounding what sexual liberation really entails. Many third-wave feminists supported the idea that women should embrace their sexuality as a way to take back their power.

Other issues

Third-wave feminism regarded race, social class, and transgender rights as central issues. It also paid attention to workplace matters such as the glass ceiling, unfair maternity-leave policies, motherhood support for single mothers by means of welfare and child care, respect for working mothers, and the rights of mothers who decide to leave their careers to raise their children full-time.

Criticism

Lack of cohesion

One issue raised by critics was a lack of cohesion because of the absence of a single cause for third-wave feminism. The first wave fought for and gained the right for women to vote. The second wave fought for the right for women to have access to an equal opportunity in the workforce, as well as the end of legal sex discrimination. The third wave allegedly lacked a cohesive goal and was often seen as an extension of the second wave. Some argued that the third wave could be dubbed the "Second Wave, Part Two" when it came to the politics of feminism and that "only young feminist culture" was "truly third wave". One argument ran that the equation of third-wave feminism with individualism prevented the movement from growing and moving towards political goals. Kathleen P. Iannello wrote:

The conceptual and real-world 'trap' of choice feminism (between work and home) has led women to challenge each other rather than the patriarchy. Individualism conceived of as 'choice' does not empower women; it silences them and prevents feminism from becoming a political movement and addressing the real issues of distribution of resources.

Objection to "wave construct"

Feminist scholars such as Shira Tarrant objected to the "wave construct" because it ignored important progress between the periods. Furthermore, if feminism is a global movement, she argued, the fact that the "first-, second-, and third waves time periods correspond most closely to American feminist developments" raises serious problems about how feminism fails to recognize the history of political issues around the world. The "wave construct", critics argued, also focused on white women's suffrage and continued to marginalize the issues of women of color and lower-class women.

Relationship with women of color

Third-wave feminists proclaimed theirs the most inclusive wave of feminism. Critics have noted that, while progressive, it still excluded women of color. Black feminists argue that "the women rights movements were not uniquely for the liberation of Blacks or Black Women. Rather, efforts such as women's suffrage and abolition of slavery ultimately uplifted, strengthened, and benefited White society and White women".

"Girly" feminism

Third-wave feminism was often associated, primarily by its critics, with the emergence of so-called "lipstick" or "girly" feminists and the rise of "raunch culture". This was because these new feminists advocated "expressions of femininity and female sexuality as a challenge to objectification". Accordingly, this included the dismissal of any restriction, whether deemed patriarchal or feminist, to define or control how women or girls should dress, act, or generally express themselves. These emerging positions stood in stark contrast with the anti-pornography strains of feminism prevalent in the 1980s. Second-wave feminism viewed pornography as encouraging violence towards women. The new feminists posited that the ability to make autonomous choices about self-expression could be an empowering act of resistance, not simply internalized oppression.

Such views were critiqued because of the subjective nature of empowerment and autonomy. Scholars were unsure whether empowerment was best measured as an "internal feeling of power and agency" or as an external "measure of power and control". Moreover, they critiqued an over-investment in "a model of free will and choice" in the marketplace of identities and ideas. Regardless, the "girly" feminists attempted to be open to all different selves while maintaining a dialogue about the meaning of identity and femininity in the contemporary world.

Third-wave feminists said that these viewpoints should not be limited by the label "girly" feminism or regarded as simply advocating "raunch culture". Rather, they sought to be inclusive of the many diverse roles women fulfill. Gender scholars Linda Duits and Liesbet van Zoonen highlighted this inclusivity by looking at the politicization of women's clothing choices and how the "controversial sartorial choices of girls" and women are constituted in public discourse as "a locus of necessary regulation". Thus the "hijab" and the "belly shirt", as dress choices, were both identified as requiring regulation but for different reasons. Both caused controversy, while appearing to be opposing forms of self-expression. Through the lens of "girly" feminists, one can view both as symbolic of "political agency and resistance to objectification". The "hijab" could be seen as an act of resistance against Western ambivalence towards Islamic identity, and the "belly shirt" an act of resistance against patriarchal society's narrow views of female sexuality. Both were regarded as valid forms of self-expression.

History of money

From Wikipedia, the free encyclopedia

The history of money concerns the development throughout time of systems that provide the functions of money. Such systems can be understood as means of trading wealth indirectly; not directly as with bartering. Money is a mechanism that facilitates this process.

Money may take a physical form as in coins and notes, or may exist as a written or electronic account. It may have intrinsic value (commodity money), be legally exchangeable for something with intrinsic value (representative money), or only have nominal value (fiat money).

Overview

The invention of money took place before the beginning of written history. Consequently, any story of how money first developed is mostly based on conjecture and logical inference.

Significant evidence establishes that many things were traded in ancient markets that could be described as media of exchange. These included livestock and grain – things directly useful in themselves – but also merely attractive items such as cowrie shells or beads, which were exchanged for more useful commodities. However, such exchanges would be better described as barter, and the common bartering of a particular commodity (especially when the commodity items are not fungible) does not technically make that commodity "money" or a "commodity money" like the shekel – which was both a coin representing a specific weight of barley and the weight of that sack of barley.

Due to the complexities of ancient history (ancient civilizations developing at different paces and not keeping accurate records or having their records destroyed), and because the ancient origins of economic systems precede written history, it is impossible to trace the true origin of the invention of money. Further, evidence in the histories supports the idea that money has taken two main forms divided into the broad categories of money of account (debits and credits on ledgers) and money of exchange (tangible media of exchange made from clay, leather, paper, bamboo, metal, etc.).

As "money of account" depends on the ability to record a count, the tally stick was a significant development. The oldest of these dates from the Aurignacian, about 30,000 years ago. The 20,000-year-old Ishango Bone – found near one of the sources of the Nile in the Democratic Republic of Congo – seems to use matched tally marks on the thigh bone of a baboon for correspondence counting. Accounting records – in the monetary-system sense of the term – dating back more than 7,000 years have been found in Mesopotamia, and documents from ancient Mesopotamia show lists of expenditures and of goods received and traded; the history of accounting evidences that money of account pre-dates the use of coinage by several thousand years. David Graeber proposes that money as a unit of account was invented when the unquantifiable obligation "I owe you one" transformed into the quantifiable notion of "I owe you one unit of something". In this view, money emerged first as money of account and only later took the form of money of exchange.

Regarding money of exchange, the use of representative money historically pre-dates the invention of coinage as well. In the ancient empires of Egypt, Babylon, India and China, the temples and palaces often had commodity warehouses which made use of clay tokens and other materials that served as evidence of a claim upon a portion of the goods stored in the warehouses. There is, however, no concrete evidence that these kinds of tokens were used for trade, only for administration and accounting.

While not the oldest form of money of exchange, various metals (both common and precious) were used in both barter systems and monetary systems, and the historical use of metals provides some of the clearest illustration of how barter systems gave birth to monetary systems. The Romans' use of bronze, while not among the more ancient examples, is well documented, and it illustrates this transition clearly. First came the "aes rude" (rough bronze): a heavy weight of unmeasured bronze used in what was probably a barter system, where the barter-ability of the bronze was related exclusively to its usefulness in metalsmithing and it was bartered with the intent of being turned into tools. The next historical step was bronze cast in bars of a pre-measured five-pound weight (presumably to make barter easier and fairer), called "aes signatum" (signed bronze); it is here that debate arises over whether this still constitutes a barter system or already a monetary system. Finally, there is a clear break into the undebatable use of bronze as money, with lighter measures of bronze not intended to be used as anything other than coinage for transactions. The aes grave (heavy bronze), or As, marks the start of the use of coins in Rome, but it is not the oldest known example of metal coinage.

Gold and silver have been the most common forms of money throughout history. In many languages, such as Spanish, French, Hebrew and Italian, the word for silver is still directly related to the word for money. Sometimes other metals were used. For instance, Ancient Sparta minted coins from iron to discourage its citizens from engaging in foreign trade. In the early 17th century Sweden lacked precious metals, and so produced "plate money": large slabs of copper 50 cm or more in length and width, stamped with indications of their value.

Gold coins began to be minted again in Europe in the 13th century. Frederick II is credited with having reintroduced gold coins during the Crusades. During the 14th century Europe changed from use of silver in currency to minting of gold. Vienna made this change in 1328.

Metal-based coins had the advantage of carrying their value within the coins themselves; on the other hand, they invited manipulations, such as the clipping of coins to remove some of the precious metal. A greater problem was the simultaneous co-existence of gold, silver and copper coins in Europe. The exchange rates between the metals varied with supply and demand. For instance, the gold guinea coin began to rise against the silver crown in England in the 1670s and 1680s. Consequently, silver was exported from England in exchange for gold imports. The effect was worsened because Asian traders did not share the European appreciation of gold: gold left Asia and silver left Europe in quantities that European observers, such as Isaac Newton, Master of the Royal Mint, noted with unease.

Stability came when national banks guaranteed to change silver money into gold at a fixed rate; it did not, however, come easily. The Bank of England risked a national financial catastrophe in the 1730s when customers demanded their money be changed into gold in a moment of crisis. Eventually London's merchants saved the bank and the nation with financial guarantees.

Another step in the evolution of money was the change from a coin being a unit of weight to being a unit of value. A distinction could be made between its commodity value and its specie value. The difference in these values is seigniorage.
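The distinction above can be put into a short numerical sketch. The function and all figures below are hypothetical, purely for illustration: a coin's commodity (melt) value is its metal content times the metal's market price, and seigniorage is the face value minus that commodity value.

```python
# Hypothetical sketch of seigniorage: the gap between a coin's face
# value (its specie value as money) and the market value of the metal
# it contains (its commodity value).

def seigniorage(face_value: float, metal_weight_g: float,
                metal_price_per_g: float) -> float:
    """Return face value minus commodity (melt) value of a coin."""
    commodity_value = metal_weight_g * metal_price_per_g
    return face_value - commodity_value

# An invented silver coin: face value 1.00, containing 5 g of silver
# priced at 0.15 per gram, so commodity value 0.75 and seigniorage 0.25.
profit = seigniorage(face_value=1.00, metal_weight_g=5.0,
                     metal_price_per_g=0.15)
print(profit)
```

Reducing the metal content of the coin while asserting the same face value (debasement, discussed under Assaying) widens this gap.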

Theories of money

The earliest ideas included Aristotle's "metallist" and Plato's "chartalist" concepts, which Joseph Schumpeter integrated into his own theory of money as forms of classification. In particular, Schumpeter attempted to develop a catallactic theory of money out of claim theory. His theory had several themes, but the most important of these involve the notions that money can be analyzed from the viewpoint of social accounting and that it is firmly connected to the theory of value and price.

There are at least two theories of what money is, and these can influence the interpretation of historical and archeological evidence of early monetary systems. The commodity theory of money (money of exchange) is preferred by those who wish to view money as a natural outgrowth of market activity. Others view the credit theory of money (money of account) as more plausible and may posit a key role for the state in establishing money. The commodity theory is more widely held, and much of this article is written from that point of view. Overall, the different theories of money developed by economists largely focus on the functions, use, and management of money.

Other theorists also note that the status of a particular form of money always depends on the status ascribed to it by humans and by society. For instance, gold may be seen as valuable in one society but not in another, and a bank note is merely a piece of paper until it is agreed that it has monetary value.

Money supply

In modern times economists have sought to classify the different types of money supply. The different measures of the money supply have been classified by various central banks, using the prefix "M". The supply classifications often depend on how narrowly a supply is specified, for example the "M"s may range from M0 (narrowest) to M3 (broadest). The classifications depend on the particular policy formulation used:

  • M0: In some countries, such as the United Kingdom, M0 includes bank reserves, so M0 is referred to as the monetary base, or narrow money.
  • MB: Referred to as the monetary base or total currency. This is the base from which other forms of money (like checking deposits, listed below) are created and is traditionally the most liquid measure of the money supply.
  • M1: Typically currency in circulation plus demand deposits; bank reserves are not included in M1.
  • M2: Represents M1 and "close substitutes" for M1. M2 is a broader classification of money than M1. M2 is a key economic indicator used to forecast inflation.
  • M3: M2 plus large and long-term deposits. Since 2006, M3 is no longer published by the U.S. central bank. However, there are still estimates produced by various private institutions.
  • MZM: Money with zero maturity. It measures the supply of financial assets redeemable at par on demand. Velocity of MZM is historically a relatively accurate predictor of inflation.
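The nested relationship among these aggregates can be sketched in a few lines of code. This is an illustrative toy, not any country's official definition: the component names and figures below are hypothetical, and real classifications vary by central bank.

```python
# Hypothetical components of a money supply (all figures invented).
components = {
    "currency_in_circulation": 2_000,
    "bank_reserves": 3_000,
    "demand_deposits": 4_000,
    "savings_and_small_time_deposits": 8_000,
    "large_long_term_deposits": 3_500,
}

# M0 in the UK sense: currency plus bank reserves (the monetary base).
m0 = components["currency_in_circulation"] + components["bank_reserves"]
# M1 excludes bank reserves: currency plus demand deposits.
m1 = components["currency_in_circulation"] + components["demand_deposits"]
# Each broader aggregate adds "close substitutes" to the narrower one.
m2 = m1 + components["savings_and_small_time_deposits"]
m3 = m2 + components["large_long_term_deposits"]

print(m0, m1, m2, m3)  # 5000 6000 14000 17500
```

The point of the sketch is only the nesting: M1 ⊂ M2 ⊂ M3, each measure adding progressively less liquid assets.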

Technologies

Assaying

Assaying is the analysis of the chemical composition of metals. The discovery of the touchstone for assaying helped the popularisation of metal-based commodity money and coinage. Any soft metal, such as gold, can be tested for purity on a touchstone. As a result, the use of gold as commodity money spread from Asia Minor, where it first gained wide usage.

A touchstone allows the amount of gold in a sample of an alloy to be estimated, and in turn the alloy's purity, which made it possible to mint coins with a uniform amount of gold. Coins were typically minted by governments and stamped with an emblem that guaranteed the weight and value of the metal. However, coins had a face value as well as an intrinsic value. Governments would sometimes reduce the amount of precious metal in a coin (reducing the intrinsic value) while asserting the same face value; this practice is known as debasement.
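The arithmetic of debasement described above can be made concrete with a small sketch. All prices and weights here are hypothetical assumptions for illustration:

```python
# Hypothetical figures: an assumed gold price and a coin with a fixed face value.
gold_price_per_gram = 60.0   # assumed market value of gold per gram
face_value = 300.0           # value stamped on the coin, unchanged by debasement

def intrinsic_value(gold_grams: float) -> float:
    """Value of the metal actually contained in the coin."""
    return gold_grams * gold_price_per_gram

before = intrinsic_value(5.0)   # original coin contains 5 g of gold
after = intrinsic_value(4.0)    # debased coin contains only 4 g

# The issuer keeps the gap between face value and metal content on each coin.
profit_per_coin = face_value - after
print(before, after, profit_per_coin)  # 300.0 240.0 60.0
```

The gap between face value and intrinsic value is what made debasement profitable for the issuing authority.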

Prehistory: predecessors of money and its emergence

Non-monetary exchange

Gifting and debt

There is no evidence, historical or contemporary, of a society in which barter is the main mode of exchange; instead, non-monetary societies operated largely along the principles of gift economy and debt. When barter did in fact occur, it was usually between either complete strangers or potential enemies.

Barter

With barter, an individual possessing any surplus of value, such as a measure of grain or a quantity of livestock, could directly exchange it for something perceived to have similar or greater value or utility, such as a clay pot or a tool. The capacity to carry out barter transactions is limited, however, in that it depends on a coincidence of wants. For example, a farmer has to find someone who not only wants the grain he produced but can also offer something in return that the farmer wants.

Hypothesis of barter as the origin of money

In Politics Book 1:9 (c. 350 BC) the Greek philosopher Aristotle contemplated the nature of money. He considered that every object has two uses: the original purpose for which the object was designed, and as an item to sell or barter. The assignment of monetary value to an otherwise insignificant object such as a coin or promissory note arose as people acquired a psychological capacity to place trust in each other and in external authority within barter exchange. Finding people to barter with is a time-consuming process; the Austrian economist Carl Menger hypothesised that this was a driving force in the creation of monetary systems: people sought a way to stop wasting their time looking for someone to barter with.

In his book Debt: The First 5,000 Years, anthropologist David Graeber argues against the suggestion that money was invented to replace barter. The problem with this version of history, he suggests, is the lack of any supporting evidence. His research indicates that gift economies were common, at least at the beginnings of the first agrarian societies, when humans used elaborate credit systems. Graeber proposes that money as a unit of account was invented the moment when the unquantifiable obligation "I owe you one" transformed into the quantifiable notion of "I owe you one unit of something". In this view, money emerged first as credit and only later acquired the functions of a medium of exchange and a store of value. Graeber's criticism partly relies on and follows that made by A. Mitchell Innes in his 1913 article "What is money?". Innes refutes the barter theory of money by examining historic evidence and showing that early coins were never of consistent value nor of more or less consistent metal content. He therefore concludes that a sale is not an exchange of goods for some universal commodity, but an exchange for credit. He argues that "credit and credit alone is money". Anthropologist Caroline Humphrey examines the available ethnographic data and concludes that "No example of a barter economy, pure and simple, has ever been described, let alone the emergence from it of money; all available ethnography suggests that there never has been such a thing".

Economists Robert P. Murphy and George Selgin replied to Graeber, saying that the barter hypothesis is consistent with economic principles and that a barter system would be too brief to leave a permanent record. John Alexander Smith of Bella Caledonia said that in this exchange Graeber is the one acting as a scientist, by trying to falsify the barter hypothesis, while Selgin takes a theological stance by treating the hypothesis as truth revealed from authority.

Gift economy

In a gift economy, valuable goods and services are regularly given without any explicit agreement for immediate or future rewards (i.e. there is no formal quid pro quo). Ideally, simultaneous or recurring giving serves to circulate and redistribute valuables within the community.

There are various social theories concerning gift economies. Some consider the gifts to be a form of reciprocal altruism, where relationships are created through this type of exchange. Another interpretation is that implicit "I owe you" debt and social status are awarded in return for the "gifts". Consider for example, the sharing of food in some hunter-gatherer societies, where food-sharing is a safeguard against the failure of any individual's daily foraging. This custom may reflect altruism, it may be a form of informal insurance, or may bring with it social status or other benefits.

Emergence of money

Anthropologists have noted many cases of 'primitive' societies using what looks to us very like money but for non-commercial purposes, indeed commercial use may have been prohibited:

Often, such currencies are never used to buy and sell anything at all. Instead, they are used to create, maintain, and otherwise reorganize relations between people: to arrange marriages, establish the paternity of children, head off feuds, console mourners at funerals, seek forgiveness in the case of crimes, negotiate treaties, acquire followers—almost anything but trade in yams, shovels, pigs, or jewelry.

This suggests that the basic idea of money may have long preceded its application to commercial trade.

After the domestication of cattle and the start of cultivation of crops in 9000–6000 BC, livestock and plant products were used as money. However, it is in the nature of agricultural production that things take time to reach fruition. The farmer may need to buy things that he cannot pay for immediately. Thus the idea of debt and credit was introduced, and a need to record and track it arose.

The establishment of the first cities in Mesopotamia (c. 3000 BCE) provided the infrastructure for the next simplest form of money of account—asset-backed credit or Representative money. Farmers would deposit their grain in the temple which recorded the deposit on clay tablets and gave the farmer a receipt in the form of a clay token which they could then use to pay fees or other debts to the temple. Since the bulk of the deposits in the temple were of the main staple, barley, a fixed quantity of barley came to be used as a unit of account.

Aristotle's opinion of the creation of money of exchange as a new thing in society is:

When the inhabitants of one country became more dependent on those of another, and they imported what they needed, and exported what they had too much of, money necessarily came into use.

Trading with foreigners required a form of money which was not tied to the local temple or economy, money that carried its value with it. A third, proxy, commodity that would mediate exchanges which could not be settled with direct barter was the solution. Which commodity would be used was a matter of agreement between the two parties, but as trade links expanded and the number of parties involved increased the number of acceptable proxies would have decreased. Ultimately, one or two commodities were converged on in each trading zone, the most common being gold and silver.

This process was independent of the local monetary system so in some cases societies may have used money of exchange before developing a local money of account. In societies where foreign trade was rare money of exchange may have appeared much later than money of account.

In early Mesopotamia copper was used in trade for a while but was soon superseded by silver. The temple (which financed and controlled most foreign trade) fixed exchange rates between barley and silver, and other important commodities, which enabled payment using any of them. It also enabled the extensive use of accounting in managing the whole economy, which led to the development of writing and thus the beginning of history.

Bronze Age: commodity money, credit and debt

Many cultures around the world developed the use of commodity money, that is, objects that have value in themselves as well as value in their use as money. Ancient China, Africa, and India used cowry shells.

The Mesopotamian civilization developed a large-scale economy based on commodity money. The shekel was the unit of weight and currency, first recorded c. 3000 BC, which was nominally equivalent to a specific weight of barley that was the preexisting and parallel form of currency. The Babylonians and their neighboring city states later developed the earliest system of economics as we think of it today, in terms of rules on debt, legal contracts and law codes relating to business practices and private property. Money emerged when the increasing complexity of transactions made it useful.

The Code of Hammurabi, the best-preserved ancient law code, was created c. 1760 BC (middle chronology) in ancient Babylon. It was enacted by the sixth Babylonian king, Hammurabi. Earlier collections of laws include the code of Ur-Nammu, king of Ur (c. 2050 BC), the Code of Eshnunna (c. 1930 BC) and the code of Lipit-Ishtar of Isin (c. 1870 BC). These law codes formalized the role of money in civil society. They set amounts of interest on debt, fines for "wrongdoing", and compensation in money for various infractions of formalized law.

It has long been assumed that metals, where available, were favored for use as proto-money over such commodities as cattle, cowry shells, or salt, because metals are at once durable, portable, and easily divisible. The use of gold as proto-money has been traced back to the fourth millennium BC when the Egyptians used gold bars of a set weight as a medium of exchange, as had been done earlier in Mesopotamia with silver bars.

Spade money from the Zhou Dynasty, c. 650–400 BC

The first mention in the Bible of the use of money is in the Book of Genesis in reference to criteria for the circumcision of a bought slave. Later, the Cave of Machpelah is purchased (with silver) by Abraham, some time after 1985 BC, although scholars believe the book was edited in the 6th or 5th centuries BC.

1000 BC – 400 AD

First coins

Greek drachm of Aegina. Obverse: Land turtle. Reverse: ΑΙΓ(INA) and dolphin
 
A 7th century one-third stater coin from Lydia, shown larger
 

From about 1000 BC, money in the form of small knives and spades made of bronze was in use in China during the Zhou dynasty, with cast bronze replicas of cowrie shells in use before this. The first true manufactured coins seem to have appeared separately in India, China, and the cities around the Aegean Sea in the 7th century BC. While the Aegean coins were stamped (heated and hammered with insignia), the Indian coins (from the Ganges river valley) were punched metal disks, and the Chinese coins (first developed in the Great Plain) were cast bronze with holes in the center so they could be strung together. The different forms and metallurgical processes imply separate development.

All modern coins, in turn, are descended from the coins that appear to have been invented in the kingdom of Lydia in Asia Minor around the 7th century BC and that spread throughout Greece in the following centuries: disk-shaped, made of gold, silver, bronze, or imitations thereof, with both sides bearing an image produced by stamping; one side is often a human head.

Pheidon was perhaps the first ruler in the Mediterranean known to have officially set standards of weight and money. Minting occurred in the late 7th century BC among the Greek cities of Asia Minor, spreading to the Greek islands of the Aegean and to the south of Italy by 500 BC. The first stamped money (bearing the mark of some authority in the form of a picture or words) can be seen in the Bibliothèque Nationale in Paris: an electrum stater coined on the island of Aegina. This coin dates to about the 7th century BC.

Herodotus dated the introduction of coins to Italy to the Etruscans of Populonia in about 550 BC.

Other coins made of electrum (a naturally occurring alloy of silver and gold) were manufactured on a larger scale in about the 7th century BC in Lydia (on the coast of what is now Turkey). Similar coinage was adopted and manufactured to their own standards in nearby cities of Ionia, including Mytilene and Phokaia (using coins of electrum) and Aegina (using silver) during the 7th century BC, and was soon adopted in mainland Greece and the Persian Empire (after it incorporated Lydia in 547 BC).

The use and export of silver coinage, along with soldiers paid in coins, contributed to the Athenian Empire's dominance of the region in the 5th century BC. The silver used was mined in southern Attica at Laurium and Thorikos by a huge workforce of slave labour. A major silver vein discovery at Laurium in 483 BC led to the huge expansion of the Athenian military fleet.

The worship of Moneta is recorded by Livy, with the temple built in the time of Rome 413 (123); a temple consecrated to the same goddess was built in the earlier part of the 4th century (perhaps the same temple). For four centuries the temple contained the mint of Rome. The name of the goddess thus became the source of numerous words in English and the Romance languages, including the words "money" and "mint".

Roman banking system

400–1450

Medieval coins and moneys of account

Charlemagne, in 800 AD, implemented a series of reforms upon becoming "Holy Roman Emperor", including the issuance of a standard coin, the silver penny. Between 794 and 1200 the penny was the only denomination of coin in Western Europe. Minted without oversight by bishops, cities, feudal lords and fiefdoms, by 1160, coins in Venice contained only 0.05g of silver, while England's coins were minted at 1.3g. Large coins were introduced in the mid-13th century. In England, a dozen pennies was called a "shilling" and twenty shillings a "pound".

Debasement of coin was widespread. Significant periods of debasement took place in 1340–60 and 1417–29, when no small coins were minted, and by the 15th century the issuance of small coin was further limited by government restrictions and even prohibitions. With the exception of the Great Debasement, England's coins were consistently minted from sterling silver (92.5% silver content). A lower-quality silver with more copper mixed in, used in Barcelona, was called "billon".

First paper money

Earliest banknote from China during the Song Dynasty which is known as "Jiaozi"

Paper money was introduced in Song dynasty China during the 11th century. The development of the banknote began in the seventh century, with local issues of paper currency. Its roots were in merchant receipts of deposit during the Tang dynasty (618–907), as merchants and wholesalers desired to avoid the heavy bulk of copper coinage in large commercial transactions. Credit notes were often issued for a limited duration, and at some discount to the amount promised later. The jiaozi nevertheless did not replace coins during the Song dynasty; paper money was used alongside the coins. The central government soon observed the economic advantages of printing paper money, granting several of the deposit shops a monopoly on the issuance of these certificates of deposit. By the early 12th century, the amount of banknotes issued in a single year amounted to an annual rate of 26 million strings of cash coins.

The taka was widely used across South Asia during the sultanate period
 
Silver coin of the Maurya Empire, known as rūpyarūpa, with symbols of wheel and elephant. 3rd century BC.
 
The French East India Company issued rupees in the name of Muhammad Shah (1719–1748) for Northern India trade. This was cast in Pondicherry.

Both the Kabuli rupee and the Kandahari rupee were used as currency in Afghanistan prior to 1891, when they were standardized as the Afghan rupee. The Afghan rupee, which was subdivided into 60 paisas, was replaced by the Afghan afghani in 1925.

Until the middle of the 20th century, Tibet's official currency was also known as the Tibetan rupee.

In the 13th century, paper money became known in Europe through the accounts of travelers, such as Marco Polo and William of Rubruck. Marco Polo's account of paper money during the Yuan dynasty is the subject of a chapter of his book, The Travels of Marco Polo, titled "How the Great Kaan Causeth the Bark of Trees, Made into Something Like Paper, to Pass for Money All Over his Country." In medieval Italy and Flanders, because of the insecurity and impracticality of transporting large sums of money over long distances, money traders started using promissory notes. In the beginning these were personally registered, but they soon became a written order to pay the amount to whomever had it in their possession. These notes can be seen as a predecessor to regular banknotes.

Trade bills of exchange

Bills of exchange became prevalent with the expansion of European trade toward the end of the Middle Ages. A flourishing Italian wholesale trade in cloth, woolen clothing, wine, tin and other commodities was heavily dependent on credit for its rapid expansion. Goods were supplied to a buyer against a bill of exchange, which constituted the buyer's promise to make payment at some specified future date. Provided that the buyer was reputable or the bill was endorsed by a credible guarantor, the seller could then present the bill to a merchant banker and redeem it in money at a discounted value before it actually became due. Another main purpose of these bills was to avoid traveling with cash, which was particularly dangerous at the time: a deposit could be made with a banker in one town, and a bill of exchange handed out in return that could be redeemed in another town.
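The discounting step described above is simple arithmetic. The following is a hedged sketch using the bankers' simple-discount convention; the 360-day year, the 8% rate, and the florin amounts are hypothetical assumptions, and historical practice varied widely:

```python
# Bankers' simple discount: interest on the face value is deducted up front,
# so the holder receives less than the face value the earlier they redeem.
def discounted_value(face_value: float, annual_discount_rate: float,
                     days_to_maturity: int, days_in_year: int = 360) -> float:
    discount = face_value * annual_discount_rate * days_to_maturity / days_in_year
    return face_value - discount

# A hypothetical 100-florin bill due in 90 days, discounted at 8% per year:
print(discounted_value(100.0, 0.08, 90))  # 98.0
```

The seller trades 2 florins of value for immediate liquidity; the banker earns that 2 florins by waiting out the 90 days.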

These bills could also be used as a form of payment by the seller to make additional purchases from his own suppliers. Thus, the bills – an early form of credit – became both a medium of exchange and a medium for storage of value. Like the loans made by the Egyptian grain banks, this trade credit became a significant source for the creation of new money. In England, bills of exchange became an important form of credit and money during the last quarter of the 18th century and the first quarter of the 19th century, before banknotes, checks and cash credit lines were widely available.

Islamic Golden Age

At around the same time in the medieval Islamic world, a vigorous monetary economy was created during the 7th–12th centuries on the basis of the expanding levels of circulation of a stable high-value currency (the dinar). Innovations introduced by Muslim economists, traders and merchants include the earliest uses of credit, cheques, promissory notes, savings accounts, transactional accounts, loaning, trusts, exchange rates, the transfer of credit and debt, and banking institutions for loans and deposits.

Indian subcontinent

In the Indian subcontinent, Sher Shah Suri (1540–1545) introduced a silver coin called a rupiya, weighing 178 grains. Its use was continued by the Mughal Empire. The history of the rupee traces back to Ancient India, circa the 3rd century BC. Ancient India was one of the earliest issuers of coins in the world, along with the Lydian staters, several other Middle Eastern coinages, and the Chinese wen. The term is from rūpya, a Sanskrit term for silver coin, from Sanskrit rūpa, "beautiful form".

The imperial taka was officially introduced by the monetary reforms of Muhammad bin Tughluq, the emperor of the Delhi Sultanate, in 1329. It was modeled as representative money, a concept pioneered as paper money by the Mongols in China and Persia. The tanka was minted in copper and brass. Its value was exchanged with gold and silver reserves in the imperial treasury. The currency was introduced due to the shortage of metals.

Tallies

The acceptance of symbolic forms of money meant that a symbol could be used to represent something of value that was available in physical storage somewhere else in space, such as grain in the warehouse; or something of value that would be available later, such as a promissory note or bill of exchange, a document ordering someone to pay a certain sum of money to another on a specific date or when certain conditions have been fulfilled.

In the 12th century, the English monarchy introduced an early version of the bill of exchange in the form of a notched piece of wood known as a tally stick. Tallies originally came into use at a time when paper was rare and costly, but their use persisted until the early 19th century, even after paper money had become prevalent. The notches denoted various amounts of taxes payable to the Crown. Initially tallies were simply a form of receipt to the taxpayer at the time of rendering his dues. As the revenue department became more efficient, they began issuing tallies to denote a promise of the tax assessee to make future tax payments at specified times during the year. Each tally consisted of a matching pair – one stick was given to the assessee at the time of assessment representing the amount of taxes to be paid later, and the other held by the Treasury representing the amount of taxes to be collected at a future date.

The Treasury discovered that these tallies could also be used to create money. When the Crown had exhausted its current resources, it could use the tally receipts representing future tax payments due to the Crown as a form of payment to its own creditors, who in turn could either collect the tax revenue directly from those assessed or use the same tally to pay their own taxes to the government. The tallies could also be sold to other parties in exchange for gold or silver coin at a discount reflecting the length of time remaining until the tax was due for payment. Thus, the tallies became an accepted medium of exchange for some types of transactions and an accepted store of value. Like the girobanks before it, the Treasury soon realized that it could also issue tallies that were not backed by any specific assessment of taxes. By doing so, the Treasury created new money that was backed by public trust and confidence in the monarchy rather than by specific revenue receipts.

1450–1971

Goldsmith bankers

Goldsmiths in England had been craftsmen, bullion merchants, money changers, and money lenders since the 16th century. But they were not the first to act as financial intermediaries; in the early 17th century, the scriveners were the first to keep deposits for the express purpose of relending them. Merchants and traders had amassed huge hoards of gold and entrusted their wealth to the Royal Mint for storage. In 1640 King Charles I seized the private gold stored in the mint as a forced loan (which was to be paid back over time). Thereafter merchants preferred to store their gold with the goldsmiths of London, who possessed private vaults, and charged a fee for that service. In exchange for each deposit of precious metal, the goldsmiths issued receipts certifying the quantity and purity of the metal they held as a bailee (i.e., in trust). These receipts could not be assigned (only the original depositor could collect the stored goods).

Gradually the goldsmiths took over the function of the scriveners of relending on behalf of a depositor and also developed modern banking practices; promissory notes were issued for money deposited which by custom and/or law was a loan to the goldsmith, i.e., the depositor expressly allowed the goldsmith to use the money for any purpose including advances to his customers. The goldsmith charged no fee, or even paid interest on these deposits. Since the promissory notes were payable on demand, and the advances (loans) to the goldsmith's customers were repayable over a longer time period, this was an early form of fractional reserve banking. The promissory notes developed into an assignable instrument, which could circulate as a safe and convenient form of money backed by the goldsmith's promise to pay. Hence goldsmiths could advance loans in the form of gold money, or in the form of promissory notes, or in the form of checking accounts.

Gold deposits were relatively stable, often remaining with the goldsmith for years on end, so there was little risk of default so long as public trust in the goldsmith's integrity and financial soundness was maintained. Thus, the goldsmiths of London became the forerunners of British banking and prominent creators of new money based on credit.
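The fractional-reserve dynamic described above can be sketched numerically. The 10% reserve ratio and the assumption that every loan is redeposited are illustrative simplifications, not historical data:

```python
# Toy model of fractional-reserve money creation: each deposit is partly
# held in reserve and partly lent out; the lent fraction is redeposited
# and re-lent, round after round (a geometric series).
def total_money_created(initial_deposit: float, reserve_ratio: float,
                        rounds: int = 1000) -> float:
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # the lent-out portion comes back as a new deposit
    return total

# With a 10% reserve, 100 units of gold can support roughly 1000 units of
# circulating claims (the series limit is initial_deposit / reserve_ratio):
print(round(total_money_created(100.0, 0.10)))  # 1000
```

The limit `initial_deposit / reserve_ratio` is the classic money-multiplier result; the sketch just sums the series explicitly.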

First European banknotes


The first European banknotes were issued by Stockholms Banco, a predecessor of Sweden's central bank Sveriges Riksbank, in 1661. These notes replaced the copper plates then in use as a means of payment. In 1664, however, the bank ran out of coins to redeem its notes and ceased operating that year.

Inspired by the success of the London goldsmiths, some of whom became the forerunners of great English banks, banks began issuing paper notes quite properly termed "banknotes", which circulated in the same way that government-issued currency circulates today. In England this practice continued up to 1694. Scottish banks continued issuing notes until 1850, and still do issue banknotes backed by Bank of England notes. In the United States, this practice continued through the 19th century; at one time there were more than 5,000 different types of banknotes issued by various commercial banks in America. Only the notes issued by the largest, most creditworthy banks were widely accepted. The scrip of smaller, lesser-known institutions circulated locally. Farther from home it was only accepted at a discounted rate, if at all. The proliferation of types of money went hand in hand with a multiplication in the number of financial institutions.

These banknotes were a form of representative money which could be converted into gold or silver by application at the bank. Since banks issued notes far in excess of the gold and silver they kept on deposit, sudden loss of public confidence in a bank could precipitate mass redemption of banknotes and result in bankruptcy.

In India the earliest paper money was issued by Bank of Hindostan (1770–1832), General Bank of Bengal and Bihar (1773–75), and Bengal Bank (1784–91).

The use of banknotes issued by private commercial banks as legal tender has gradually been replaced by the issuance of bank notes authorized and controlled by national governments. The Bank of England was granted sole rights to issue banknotes in England after 1694. In the United States, the Federal Reserve Bank was granted similar rights after its establishment in 1913. Until recently, these government-authorized currencies were forms of representative money, since they were partially backed by gold or silver and were theoretically convertible into gold or silver.

1971–present

In 1971, United States President Richard Nixon announced that the US dollar would no longer be directly convertible to gold. This measure effectively destroyed the Bretton Woods system by removing one of its key components, in what came to be known as the Nixon shock. Since then, the US dollar, and with it all national currencies, have been free-floating. Additionally, international, national, and local money is now dominated by virtual credit rather than real bullion.

Payment cards

In the late 20th century, payment cards such as credit cards and debit cards became the dominant mode of consumer payment in the First World. The BankAmericard, launched in 1958, became the first third-party credit card to achieve widespread use and acceptance in shops and stores across the United States, soon followed by Mastercard and American Express. Since 1980, credit card companies have been exempt from state usury laws and so can charge any interest rate they see fit. Outside America, other payment cards became more popular than credit cards, such as France's Carte Bleue.

Digital currency

The development of computer technology in the second half of the twentieth century allowed money to be represented digitally. By 1990, in the United States, all money transferred between its central bank and commercial banks was in electronic form. By the 2000s most money existed as digital currency in bank databases. In 2012, by number of transactions, 20 to 58 percent of transactions were electronic (depending on country). The benefit of digital currency is that it allows for easier, faster, and more flexible payments.

Cryptocurrencies

In 2008, Bitcoin was proposed by an unknown author or authors under the pseudonym Satoshi Nakamoto. It was implemented the same year. Its use of cryptography allowed the currency to have a trustless, fungible, and tamper-resistant distributed ledger called a blockchain. It became the first widely used decentralized, peer-to-peer cryptocurrency. Other comparable systems had been proposed since the 1980s. The protocol proposed by Nakamoto solved what is known as the double-spending problem without the need for a trusted third party.
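The tamper-resistance of a blockchain ledger mentioned above can be illustrated with a toy hash chain. This is only a sketch of the core data structure; Bitcoin additionally relies on proof-of-work, digital signatures, and peer-to-peer consensus:

```python
import hashlib

# Each block's hash commits to its predecessor's hash, so altering any past
# record changes every subsequent hash and the tampering becomes evident.
def block_hash(prev_hash: str, data: str) -> str:
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

chain = []
prev = "0" * 64  # conventional all-zero predecessor for the first block
for record in ["alice pays bob 5", "bob pays carol 2"]:
    prev = block_hash(prev, record)
    chain.append((record, prev))

# Rewriting the first record no longer reproduces the stored hash:
tampered = block_hash("0" * 64, "alice pays bob 500")
print(tampered == chain[0][1])  # False
```

Because each stored hash depends on all earlier records, verifying the chain end-to-end detects any retroactive edit; the records and names here are, of course, invented for illustration.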

Since Bitcoin's inception, thousands of other cryptocurrencies have been introduced.
