Wednesday, June 19, 2024

Standard score

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Standard_score
Comparison of the various grading methods in a normal distribution, including: standard deviations, cumulative percentages, percentile equivalents, z-scores, T-scores

In statistics, the standard score is the number of standard deviations by which the value of a raw score (i.e., an observed value or data point) is above or below the mean value of what is being observed or measured. Raw scores above the mean have positive standard scores, while those below the mean have negative standard scores.

It is calculated by subtracting the population mean from an individual raw score and then dividing the difference by the population standard deviation. This process of converting a raw score into a standard score is called standardizing or normalizing (however, "normalizing" can refer to many types of ratios; see Normalization for more).

Standard scores are most commonly called z-scores; the two terms may be used interchangeably, as they are in this article. Other equivalent terms in use include z-value, z-statistic, normal score, standardized variable and pull in high energy physics.

Computing a z-score requires knowledge of the mean and standard deviation of the complete population to which a data point belongs; if one only has a sample of observations from the population, then the analogous computation using the sample mean and sample standard deviation yields the t-statistic.

Calculation

If the population mean and population standard deviation are known, a raw score x is converted into a standard score by

z = (x − μ) / σ

where:

μ is the mean of the population,
σ is the standard deviation of the population.

The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above.
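The conversion above is a one-line computation; a minimal sketch (the raw score, mean, and standard deviation below are illustrative values, not from the article):

```python
def z_score(x, mu, sigma):
    """Standard score of a raw score x for a population with mean mu
    and standard deviation sigma."""
    return (x - mu) / sigma

# A raw score of 130 in a population with mean 100 and standard deviation 15
# lies two standard deviations above the mean.
print(z_score(130, 100, 15))  # 2.0
print(z_score(85, 100, 15))   # -1.0, i.e. below the mean
```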

Calculating z using this formula requires use of the population mean and the population standard deviation, not the sample mean or sample standard deviation. However, knowing the true mean and standard deviation of a population is often an unrealistic expectation, except in cases such as standardized testing, where the entire population is measured.

When the population mean and the population standard deviation are unknown, the standard score may be estimated by using the sample mean and sample standard deviation as estimates of the population values.

In these cases, the z-score is given by

z = (x − x̄) / S

where:

x̄ is the mean of the sample,
S is the standard deviation of the sample.

Though it should always be stated, the distinction between use of the population and sample statistics often is not made. In either case, the numerator and denominator of the equations have the same units of measure so that the units cancel out through division and z is left as a dimensionless quantity.
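When only a sample is available, the sample mean and sample standard deviation serve as plug-in estimates; a minimal sketch using Python's standard library (the data values are made up for illustration):

```python
import statistics

sample = [12.0, 15.0, 11.0, 14.0, 13.0, 16.0, 10.0, 13.0]
x_bar = statistics.mean(sample)  # sample mean
s = statistics.stdev(sample)     # sample standard deviation (n - 1 denominator)

x = 16.0
z = (x - x_bar) / s              # estimated standard score of x
print(x_bar, s, z)               # 13.0 2.0 1.5
```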

Applications

Z-test

The z-score is often used in the z-test in standardized testing – the analog of the Student's t-test for a population whose parameters are known, rather than estimated. As it is very unusual to know the entire population, the t-test is much more widely used.

Prediction intervals

The standard score can be used in the calculation of prediction intervals. A prediction interval [L, U], consisting of a lower endpoint designated L and an upper endpoint designated U, is an interval such that a future observation X will lie in the interval with high probability γ, i.e.

P(L < X < U) = γ.

For the standard score Z of X this gives:

P((L − μ)/σ < Z < (U − μ)/σ) = γ.

By determining the quantile z such that

P(−z < Z < z) = γ,

it follows that:

L = μ − zσ,  U = μ + zσ.
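Since Python 3.8, the standard library's statistics.NormalDist exposes the inverse CDF, so the quantile z for a given coverage γ, and hence the endpoints, can be sketched as follows (μ, σ, and γ are assumed values for illustration):

```python
from statistics import NormalDist

mu, sigma = 100.0, 15.0
gamma = 0.95  # desired coverage probability

# Quantile z with P(-z < Z < z) = gamma for a standard normal Z
z = NormalDist().inv_cdf((1 + gamma) / 2)

L_end, U_end = mu - z * sigma, mu + z * sigma
print(round(z, 2), round(L_end, 1), round(U_end, 1))  # 1.96 70.6 129.4
```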

Process control

In process control applications, the Z value provides an assessment of the degree to which a process is operating off-target.

Comparison of scores measured on different scales: ACT and SAT

The z score for Student A was 1, meaning Student A was 1 standard deviation above the mean. Thus, Student A performed in the 84.13 percentile on the SAT.

When scores are measured on different scales, they may be converted to z-scores to aid comparison. Dietz et al. give the following example, comparing student scores on the (old) SAT and ACT high school tests. The table shows the mean and standard deviation for total scores on the SAT and ACT. Suppose that student A scored 1800 on the SAT, and student B scored 24 on the ACT. Which student performed better relative to other test-takers?


                    SAT     ACT
Mean               1500      21
Standard deviation  300       5
The z score for Student B was 0.6, meaning Student B was 0.6 standard deviation above the mean. Thus, Student B performed in the 72.57 percentile on the ACT.

The z-score for student A is z = (1800 − 1500) / 300 = 1.

The z-score for student B is z = (24 − 21) / 5 = 0.6.

Because student A has a higher z-score than student B, student A performed better compared to other test-takers than did student B.
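The comparison can be reproduced directly from the table; a small sketch:

```python
def z_score(x, mu, sigma):
    return (x - mu) / sigma

z_a = z_score(1800, 1500, 300)  # Student A on the SAT
z_b = z_score(24, 21, 5)        # Student B on the ACT
print(z_a, z_b)                 # 1.0 0.6
assert z_a > z_b                # Student A did better relative to peers
```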

Percentage of observations below a z-score

Continuing the example of ACT and SAT scores, if it can be further assumed that both ACT and SAT scores are normally distributed (which is approximately correct), then the z-scores may be used to calculate the percentage of test-takers who received lower scores than students A and B.
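Under the normality assumption, the percentage below each score is the standard normal CDF evaluated at the z-score; with the standard library:

```python
from statistics import NormalDist

phi = NormalDist().cdf  # standard normal cumulative distribution function

pct_a = 100 * phi(1.0)  # Student A, z = 1.0
pct_b = 100 * phi(0.6)  # Student B, z = 0.6
print(round(pct_a, 2), round(pct_b, 2))  # 84.13 72.57
```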

Cluster analysis and multidimensional scaling

"For some multivariate techniques such as multidimensional scaling and cluster analysis, the concept of distance between the units in the data is often of considerable interest and importance… When the variables in a multivariate data set are on different scales, it makes more sense to calculate the distances after some form of standardization."

Principal components analysis

In principal components analysis, "Variables measured on different scales or on a common scale with widely differing ranges are often standardized."

Relative importance of variables in multiple regression: standardized regression coefficients

Standardization of variables prior to multiple regression analysis is sometimes used as an aid to interpretation. One textbook (p. 95) states the following.

"The standardized regression slope is the slope in the regression equation if X and Y are standardized … Standardization of X and Y is done by subtracting the respective means from each set of observations and dividing by the respective standard deviations … In multiple regression, where several X variables are used, the standardized regression coefficients quantify the relative contribution of each X variable."

However, Kutner et al. (p 278) give the following caveat: "… one must be cautious about interpreting any regression coefficients, whether standardized or not. The reason is that when the predictor variables are correlated among themselves, … the regression coefficients are affected by the other predictor variables in the model … The magnitudes of the standardized regression coefficients are affected not only by the presence of correlations among the predictor variables but also by the spacings of the observations on each of these variables. Sometimes these spacings may be quite arbitrary. Hence, it is ordinarily not wise to interpret the magnitudes of standardized regression coefficients as reflecting the comparative importance of the predictor variables."

Standardizing in mathematical statistics

In mathematical statistics, a random variable X is standardized by subtracting its expected value E[X] and dividing the difference by its standard deviation σ(X) = √Var(X):

Z = (X − E[X]) / σ(X)

If the random variable under consideration is the sample mean of a random sample X₁, …, Xₙ of X:

X̄ = (1/n)(X₁ + ⋯ + Xₙ)

then the standardized version is

Z = (X̄ − E[X]) / (σ(X)/√n)

where the standardized sample mean's variance was calculated as follows:

Var(X̄) = Var((1/n) Σᵢ Xᵢ) = (1/n²) Σᵢ Var(Xᵢ) = (1/n²) · n Var(X) = Var(X)/n, so σ(X̄) = σ(X)/√n.
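The Var(X)/n variance of the sample mean can be checked empirically; a minimal simulation with a seeded generator (all parameter values are illustrative):

```python
import random
import statistics

random.seed(42)
mu, sigma, n = 10.0, 2.0, 25
trials = 20_000

# Draw many samples of size n and record each sample mean
means = [statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
         for _ in range(trials)]

var_of_mean = statistics.pvariance(means)
print(round(var_of_mean, 3))  # close to sigma**2 / n = 0.16
```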


T-score

In educational assessment, T-score is a standard score Z shifted and scaled to have a mean of 50 and a standard deviation of 10. It is also known as hensachi in Japanese, where the concept is much more widely known and used in the context of high school and university admissions.

In bone density measurements, the T-score is the standard score of the measurement compared to the population of healthy 30-year-old adults, and has the usual mean of 0 and standard deviation of 1.
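The educational-assessment convention is an affine rescaling of z to mean 50 and standard deviation 10; a minimal sketch:

```python
def t_score(z):
    """Educational-assessment T-score: z rescaled to mean 50, SD 10."""
    return 50 + 10 * z

print(t_score(0))    # 50 -- an exactly average performance
print(t_score(1.5))  # 65.0 -- 1.5 standard deviations above the mean
print(t_score(-2))   # 30 -- 2 standard deviations below the mean
```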

Panic buying

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Panic_buying

Panic buying (alternatively hyphenated as panic-buying; also known as panic purchasing) occurs when consumers buy unusually large amounts of a product in anticipation of, or after, a disaster or perceived disaster, or in anticipation of a large price increase, or shortage.

Panic buying during various health crises is influenced by "(1) individuals' perception of the threat of a health crisis and scarcity of products; (2) fear of the unknown, which is caused by emotional pressure and uncertainty; (3) coping behaviour, which views panic buying as a venue to relieve anxiety and regain control over the crisis; and (4) social psychological factors, which account for the influence of the social network of an individual".

Panic buying is a type of herd behavior. It is of interest in consumer behavior theory, the broad field of economic study dealing with explanations for "collective action such as fads and fashions, stock market movements, runs on nondurable goods, buying sprees, hoarding, and banking panics".

Fishing-rod panic buying in Corpus Christi, Texas, during the COVID-19 pandemic

Panic buying can lead to genuine shortages regardless of whether the risk of a shortage is real or perceived; the latter scenario is an example of self-fulfilling prophecy.

Examples

Panic buying occurred before, during, or following:

COVID-19 pandemic

Panic buying became a major international phenomenon between February and March 2020 during the early onset of the COVID-19 pandemic, and continued in smaller, more localized waves throughout during sporadic lockdowns across the world. Stores around the world were depleted of items such as face masks, food, bottled water, milk, toilet paper, hand sanitizer, rubbing alcohol, antibacterial wipes and painkillers. As a result, many retailers rationed the sale of these items.

Online retailers such as eBay and Amazon began to pull certain items listed for sale by third parties such as toilet paper, face masks, pasta, canned vegetables, hand sanitizer and antibacterial wipes over price gouging concerns. As a result, Amazon restricted the sale of these items and others (such as thermometers and ventilators) to healthcare professionals and government agencies. Additionally, panic renting of self-storage units took place during the onset of the pandemic.

The massive buyouts of toilet paper caused bewilderment and confusion among the public. Images of empty toilet paper shelves were shared on social media in many countries around the world, e.g. Australia, the United States, the United Kingdom, Canada, Singapore, Hong Kong and Japan. In Australia, two women were charged over a physical altercation over toilet paper at a supermarket. The severity of the panic buying drew criticism, particularly from Australian Prime Minister Scott Morrison, who called on Australians to "stop it".

Research on this specific social phenomenon of toilet paper hoarding suggested that social media played a crucial role in stimulating mass anxiety and panic. Social media research found that many people posting about toilet paper panic buying were negative, expressing either anger or frustration over the frantic situation. This volume of negative viral posts could act as an emotional trigger of anxiety and panic, spontaneously spreading fear and fuelling psychological reactions in the midst of the crisis. It may have triggered a snowball effect in the public, encouraged by the images and videos of empty shelves and people fighting over toilet rolls.

Bank run

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Bank_run
American Union Bank, New York City, April 26, 1932

A bank run or run on the bank occurs when many clients withdraw their money from a bank because they believe the bank may fail in the near future. In other words, it is when, in a fractional-reserve banking system (where banks normally only keep a small proportion of their assets as cash), numerous customers withdraw cash from deposit accounts with a financial institution at the same time because they believe that the financial institution is, or might become, insolvent. When they transfer funds to another institution, it may be characterized as a capital flight. As a bank run progresses, it may become a self-fulfilling prophecy: as more people withdraw cash, the likelihood of default increases, triggering further withdrawals. This can destabilize the bank to the point where it runs out of cash and thus faces sudden bankruptcy. To combat a bank run, a bank may acquire more cash from other banks or from the central bank, or limit the amount of cash customers may withdraw, whether by imposing a hard limit, by scheduling quick deliveries of cash, by encouraging high-return term deposits to reduce on-demand withdrawals, or by suspending withdrawals altogether.

A banking panic or bank panic is a financial crisis that occurs when many banks suffer runs at the same time, as people suddenly try to convert their threatened deposits into cash or try to get out of their domestic banking system altogether. A systemic banking crisis is one where all or almost all of the banking capital in a country is wiped out. The resulting chain of bankruptcies can cause a long economic recession as domestic businesses and consumers are starved of capital as the domestic banking system shuts down. According to former U.S. Federal Reserve chairman Ben Bernanke, the Great Depression was caused by the failure of the Federal Reserve System to prevent deflation, and much of the economic damage was caused directly by bank runs. The cost of cleaning up a systemic banking crisis can be huge, with fiscal costs averaging 13% of GDP and economic output losses averaging 20% of GDP for important crises from 1970 to 2007.

Several techniques have been used to try to prevent bank runs or mitigate their effects. They have included a higher reserve requirement (requiring banks to keep more of their reserves as cash), government bailouts of banks, supervision and regulation of commercial banks, the organization of central banks that act as a lender of last resort, the protection of deposit insurance systems such as the U.S. Federal Deposit Insurance Corporation, and after a run has started, a temporary suspension of withdrawals. These techniques do not always work: for example, even with deposit insurance, depositors may still be motivated by beliefs they may lack immediate access to deposits during a bank reorganization.

History

10 livres tournois banknote issued by Banque Royale, France, 1720. In 1720, shareholders demanded cash payment, leading to a run on the bank and financial chaos in France. On display at the British Museum.
The run on the Montreal City and District Savings Bank, with the mayor addressing the crowd. Printed in 1872 in the Canadian Illustrated News.

Bank runs first appeared as a part of cycles of credit expansion and its subsequent contraction. From the 16th century onwards, English goldsmiths issuing promissory notes suffered severe failures due to bad harvests, plunging parts of the country into famine and unrest. Other examples are the Dutch tulip manias (1634–37), the British South Sea Bubble (1717–19), the French Mississippi Company (1717–20), the post-Napoleonic depression (1815–30), and the Great Depression (1929–39).

Bank runs have also been used to blackmail individuals and governments. In 1832, for example, the British government under the Duke of Wellington overturned a majority government on the orders of the king, William IV, to prevent reform (the later Reform Act 1832 (2 & 3 Will. 4. c. 45)). Wellington's actions angered reformers, and they threatened a run on the banks under the rallying cry "Stop the Duke, go for gold!".

Many of the recessions in the United States were caused by banking panics. The Great Depression contained several banking crises consisting of runs on multiple banks from 1929 to 1933; some of these were specific to regions of the U.S. Bank runs were most common in states whose laws allowed banks to operate only a single branch, dramatically increasing risk compared to banks with multiple branches particularly when single-branch banks were located in areas economically dependent on a single industry.

Banking panics began in the Southern United States in November 1930, one year after the stock market crash, triggered by the collapse of a string of banks in Tennessee and Kentucky, which brought down their correspondent networks. In December, New York City experienced massive bank runs that were contained to the many branches of a single bank. Philadelphia was hit a week later by bank runs that affected several banks, but were successfully contained by quick action by the leading city banks and the Federal Reserve Bank. Withdrawals became worse after financial conglomerates in New York and Los Angeles failed in prominently-covered scandals. Much of the US Depression's economic damage was caused directly by bank runs, though Canada had no bank runs during this same era due to different banking regulations.

Money supply decreased substantially between Black Tuesday and the Bank Holiday in March 1933 when there were massive bank runs across the United States.

Milton Friedman and Anna Schwartz argued that steady withdrawals from banks by nervous depositors ("hoarding") were inspired by news of the fall 1930 bank runs and forced banks to liquidate loans, which directly caused a decrease in the money supply, shrinking the economy. Bank runs continued to plague the United States for the next several years. Citywide runs hit Boston (December 1931), Chicago (June 1931 and June 1932), Toledo (June 1931), and St. Louis (January 1933), among others. Institutions put into place during the Depression have prevented runs on U.S. commercial banks since the 1930s, even under conditions such as the U.S. savings and loan crisis of the 1980s and 1990s.

The global financial crisis that began in 2007 was centered around market-liquidity failures that were comparable to a bank run. The crisis contained a wave of bank nationalizations, including those associated with Northern Rock of the UK and IndyMac of the U.S. This crisis was caused by low real interest rates stimulating an asset price bubble fuelled by new financial products that were not stress tested and that failed in the downturn.

Theory

A poster for the 1896 Broadway melodrama The War of Wealth depicts a 19th-century bank run in the U.S.

Under fractional-reserve banking, the type of banking currently used in most developed countries, banks retain only a fraction of their demand deposits as cash. The remainder is invested in securities and loans, whose terms are typically longer than the demand deposits, resulting in an asset–liability mismatch. No bank has enough reserves on hand to cope with all deposits being taken out at once.

Diamond and Dybvig developed an influential model to explain why bank runs occur and why banks issue deposits that are more liquid than their assets. According to the model, the bank acts as an intermediary between borrowers who prefer long-maturity loans and depositors who prefer liquid accounts. The Diamond–Dybvig model provides an example of an economic game with more than one Nash equilibrium, where it is logical for individual depositors to engage in a bank run once they suspect one might start, even though that run will cause the bank to collapse.

In the model, business investment requires expenditures in the present to obtain returns that take time in coming, for example, spending on machines and buildings now for production in future years. A business or entrepreneur that needs to borrow to finance investment will want to give their investments a long time to generate returns before full repayment, and will prefer long maturity loans, which offer little liquidity to the lender. The same principle applies to individuals and households seeking financing to purchase large-ticket items such as housing or automobiles. The households and firms who have the money to lend to these businesses may have sudden, unpredictable needs for cash, so they are often willing to lend only on the condition of being guaranteed immediate access to their money in the form of liquid demand deposit accounts, that is, accounts with shortest possible maturity. Since borrowers need money and depositors fear to make these loans individually, banks provide a valuable service by aggregating funds from many individual deposits, portioning them into loans for borrowers, and spreading the risks both of default and sudden demands for cash. Banks can charge much higher interest on their long-term loans than they pay out on demand deposits, allowing them to earn a profit.
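The multiple-equilibria point can be illustrated with a deliberately tiny two-depositor payoff table (all numbers are hypothetical, chosen for illustration rather than taken from the Diamond–Dybvig calibration): each depositor has placed 1 unit in the bank; waiting pays interest only if the bank survives, while running recovers the deposit unless everyone runs at once.

```python
# Payoffs to depositor 1 for (own_action, other_action); depositor 2 is symmetric.
# "wait" pays 1.5 only if the other depositor also waits (bank survives);
# if the other runs, the bank liquidates and the waiter recovers only 0.5.
# "run" recovers the deposit (1.0) unless both run, in which case the
# liquidated assets are split at a loss (0.7 each). All numbers hypothetical.
payoff = {
    ("wait", "wait"): 1.5,
    ("wait", "run"): 0.5,
    ("run", "wait"): 1.0,
    ("run", "run"): 0.7,
}

def best_reply(other_action):
    return max(("wait", "run"), key=lambda a: payoff[(a, other_action)])

# Both "everyone waits" and "everyone runs" are Nash equilibria:
print(best_reply("wait"))  # wait -> (wait, wait) is an equilibrium
print(best_reply("run"))   # run  -> (run, run) is also an equilibrium
```

Once a depositor expects others to run, running becomes their own best reply, which is exactly why a suspected run can become self-fulfilling.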

Depositors clamor to withdraw their savings from a bank in Berlin, 13 July 1931

If only a few depositors withdraw at any given time, this arrangement works well. Barring some major emergency on a scale matching or exceeding the bank's geographical area of operation, depositors' unpredictable needs for cash are unlikely to occur at the same time; that is, by the law of large numbers, banks can expect only a small percentage of accounts withdrawn on any one day because individual expenditure needs are largely uncorrelated. A bank can make loans over a long horizon, while keeping only relatively small amounts of cash on hand to pay any depositors who may demand withdrawals.
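The law-of-large-numbers argument can be sketched numerically: with many independent depositors, the fraction withdrawing on a given day stays close to the individual withdrawal probability (the figures below are hypothetical):

```python
import random

random.seed(7)
depositors = 10_000
p_withdraw = 0.02  # each account independently withdrawn with 2% chance per day

# Fraction of accounts withdrawn on each of 100 simulated days
days = [sum(random.random() < p_withdraw for _ in range(depositors)) / depositors
        for _ in range(100)]

print(min(days), max(days))  # every simulated day stays near 0.02
```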

However, if many depositors withdraw all at once, the bank itself (as opposed to individual investors) may run short of liquidity, and depositors will rush to withdraw their money, forcing the bank to liquidate many of its assets at a loss, and eventually to fail. If such a bank were to attempt to call in its loans early, businesses might be forced to disrupt their production while individuals might need to sell their homes and/or vehicles, causing further losses to the larger economy. Even so, many, if not most, debtors would be unable to pay the bank in full on demand and would be forced to declare bankruptcy, possibly affecting other creditors in the process.

A bank run can occur even when started by a false story. Even depositors who know the story is false will have an incentive to withdraw, if they suspect other depositors will believe the story. The story becomes a self-fulfilling prophecy. Indeed, Robert K. Merton, who coined the term self-fulfilling prophecy, mentioned bank runs as a prime example of the concept in his book Social Theory and Social Structure. Mervyn King, governor of the Bank of England, once noted that it may not be rational to start a bank run, but it is rational to participate in one once it had started.

Systemic banking crisis

Bank run during the Great Depression in the United States, February 1933

A bank run is the sudden withdrawal of deposits of just one bank. A banking panic or bank panic is a financial crisis that occurs when many banks suffer runs at the same time, as a cascading failure. In a systemic banking crisis, all or almost all of the banking capital in a country is wiped out; this can result when regulators ignore systemic risks and spillover effects.

Systemic banking crises are associated with substantial fiscal costs and large output losses. Frequently, emergency liquidity support and blanket guarantees have been used to contain these crises, not always successfully. Although fiscal tightening may help contain market pressures if a crisis is triggered by unsustainable fiscal policies, expansionary fiscal policies are typically used. In crises of liquidity and solvency, central banks can provide liquidity to support illiquid banks. Depositor protection can help restore confidence, although it tends to be costly and does not necessarily speed up economic recovery. Intervention is often delayed in the hope that recovery will occur, and this delay increases the stress on the economy.

Some measures are more effective than others in containing economic fallout and restoring the banking system after a systemic crisis. These include establishing the scale of the problem, targeted debt relief programs for distressed borrowers, corporate restructuring programs, recognizing bank losses, and adequately capitalizing banks. Speed of intervention appears to be crucial; intervention is often delayed in the hope that insolvent banks will recover if given liquidity support and relaxation of regulations, and in the end this delay increases stress on the economy. Programs that are targeted, that specify clear quantifiable rules limiting access to preferred assistance, and that contain meaningful standards for capital regulation appear to be more successful. According to the IMF, government-owned asset management companies (bad banks) are largely ineffective due to political constraints.

A silent run occurs when the implicit fiscal deficit from a government's unbooked loss exposure to zombie banks is large enough to deter depositors of those banks. As more depositors and investors begin to doubt whether a government can support a country's banking system, the silent run on the system can gather steam, causing the zombie banks' funding costs to increase. If a zombie bank sells some assets at market value, its remaining assets contain a larger fraction of unbooked losses; if it rolls over its liabilities at increased interest rates, it squeezes its profits along with the profits of healthier competitors. The longer the silent run goes on, the more benefits are transferred from healthy banks and taxpayers to the zombie banks. The term is also used when many depositors in countries with deposit insurance draw down their balances below the limit for deposit insurance.

The cost of cleaning up after a crisis can be huge. In systemically important banking crises in the world from 1970 to 2007, the average net recapitalization cost to the government was 6% of GDP, fiscal costs associated with crisis management averaged 13% of GDP (16% of GDP if expense recoveries are ignored), and economic output losses averaged about 20% of GDP during the first four years of the crisis.

Prevention and mitigation

2007 run on Northern Rock, a UK bank, during the late-2000s financial crisis
A run on a Bank of East Asia branch in Hong Kong, caused by "malicious rumours" in 2008

Several techniques have been used to help prevent or mitigate bank runs.

Individual banks

Some prevention techniques apply to individual banks, independently of the rest of the economy.

  • Banks often project an appearance of stability, with solid architecture and conservative dress.
  • A bank may try to hide information that might spark a run. For example, in the days before deposit insurance, it made sense for a bank to have a large lobby and fast service, to prevent the formation of a line of depositors extending out into the street which might cause passers-by to infer a bank run.
  • A bank may try to slow down the bank run by artificially slowing the process. One technique is to get a large number of friends and relatives of bank employees to stand in line and make many small, slow transactions.
  • Scheduling prominent deliveries of cash can convince participants in a bank run that there is no need to withdraw deposits hastily.
  • Banks can encourage customers to make term deposits that cannot be withdrawn on demand. If term deposits form a high enough percentage of a bank's liabilities, its vulnerability to bank runs will be reduced considerably. The drawback is that banks have to pay a higher interest rate on term deposits.
  • A bank can temporarily suspend withdrawals to stop a run; this is called suspension of convertibility. In many cases, the threat of suspension prevents the run, which means the threat need not be carried out.
  • Emergency acquisition of a vulnerable bank by another institution with stronger capital reserves. This technique is commonly used by the U.S. Federal Deposit Insurance Corporation to dispose of insolvent banks, rather than paying depositors directly from its own funds.
  • If there is no immediate prospective buyer for a failing institution, a regulator or deposit insurer may set up a bridge bank which operates temporarily until the business can be liquidated or sold.
  • To clean up after a bank failure, the government may set up a "bad bank", which is a new government-run asset management corporation that buys individual nonperforming assets from one or more private banks, reducing the proportion of junk bonds in their asset pools, and then acts as the creditor in the insolvency cases that follow. This, however, creates a moral hazard problem, essentially subsidizing bankruptcy: temporarily underperforming debtors can be forced to file for bankruptcy in order to make them eligible to be sold to the bad bank.

Systemic techniques

Some prevention techniques apply across the whole economy, though they may still allow individual institutions to fail.

  • Deposit insurance systems insure each depositor up to a certain amount, so that depositors' savings are protected even if the bank fails. This removes the incentive to withdraw one's deposits simply because others are withdrawing theirs. However, depositors may still be motivated by fears they may lack immediate access to deposits during a bank reorganization. To avoid such fears triggering a run, the U.S. FDIC keeps its takeover operations secret, and re-opens branches under new ownership on the next business day. Government deposit insurance programs can be ineffective if the government itself is perceived to be running short of cash.
  • Bank capital requirements reduce the possibility that a bank becomes insolvent. The Basel III agreement strengthens bank capital requirements and introduces new regulatory requirements on bank liquidity and bank leverage.
    • Full-reserve banking is the hypothetical case where the reserve ratio is set to 100%, and funds deposited are not lent out by the bank as long as the depositor retains the legal right to withdraw the funds on demand. Under this approach, banks would be forced to match maturities of loans and deposits, thus greatly reducing the risk of bank runs.
    • A less severe alternative to full-reserve banking is a reserve ratio requirement, which limits the proportion of deposits which a bank can lend out, making it less likely for a bank run to start, as more reserves will be available to satisfy the demands of depositors. This practice sets a limit on the fraction in fractional-reserve banking.
  • Transparency may help prevent crises from spreading through the banking system. In the context of the 2007-2010 subprime mortgage crisis, the extreme complexity of certain types of assets made it difficult for market participants to assess which financial institutions would survive, which amplified the crisis by making most institutions very reluctant to lend to one another.
  • Central banks act as a lender of last resort. To prevent a bank run, the central bank guarantees that it will make short-term loans to banks, to ensure that, if they remain economically viable, they will always have enough liquidity to honor their deposits. Walter Bagehot's book Lombard Street provides an influential early analysis of the role of the lender of last resort.

The role of the lender of last resort, and the existence of deposit insurance, both create moral hazard, since they reduce banks' incentive to avoid making risky loans. They are nonetheless standard practice, as the benefits of collective prevention are commonly believed to outweigh the costs of excessive risk-taking.

Techniques to deal with a banking panic when prevention has failed:

  • Declaring an emergency bank holiday
  • Government or central bank announcements of increased lines of credit, loans, or bailouts for vulnerable banks

Depictions in fiction

The bank panic of 1933 is the setting of Archibald MacLeish's 1935 play, Panic. Other fictional depictions of bank runs include those in American Madness (1932), It's a Wonderful Life (1946, set in 1932 U.S.), Silver River (1948), Mary Poppins (1964, set in 1910 London), Rollover (1981), Noble House (1988) and The Pope Must Die (1991).

Arthur Hailey's novel The Moneychangers includes a potentially fatal run on a fictitious US bank.

A run on a bank is one of the many causes of the characters' suffering in Upton Sinclair's The Jungle.

In The Simpsons season 6 episode 21, "The PTA Disbands", Bart Simpson causes a bank run at the First Bank of Springfield as a prank.

Tuesday, June 18, 2024

Monetarism

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Monetarism

Monetarism is a school of thought in monetary economics that emphasizes the role of policy-makers in controlling the amount of money in circulation. It gained prominence in the 1970s, but was mostly abandoned as direct guidance for monetary policy during the following decade because of the rise of inflation targeting through movements of the official interest rate.

The monetarist theory states that variations in the money supply have major influences on national output in the short run and on price levels over longer periods. Monetarists assert that the objectives of monetary policy are best met by targeting the growth rate of the money supply rather than by engaging in discretionary monetary policy. Monetarism is commonly associated with neoliberalism.

Monetarism is mainly associated with the work of Milton Friedman, who was an influential opponent of Keynesian economics, criticising Keynes's theory of fighting economic downturns using fiscal policy (government spending). Friedman and Anna Schwartz wrote an influential book, A Monetary History of the United States, 1867–1960, and argued "that inflation is always and everywhere a monetary phenomenon".

Though Friedman opposed the existence of the Federal Reserve, he advocated, given its existence, a central bank policy aimed at keeping the growth of the money supply at a rate commensurate with the growth in productivity and demand for goods. Money growth targeting was mostly abandoned by the central banks who tried it, however. Contrary to monetarist thinking, the relation between money growth and inflation proved to be far from tight. Instead, starting in the early 1990s, most major central banks turned to direct inflation targeting, relying on steering short-run interest rates as their main policy instrument. Afterwards, monetarism was subsumed into the new neoclassical synthesis which appeared in macroeconomics around 2000.

Description

Monetarism is an economic theory that focuses on the macroeconomic effects of the supply of money and central banking. Formulated by Milton Friedman, it argues that excessive expansion of the money supply is inherently inflationary, and that monetary authorities should focus solely on maintaining price stability.

Monetarist theory draws its roots from the quantity theory of money, a centuries-old economic theory which had been put forward by various economists, among them Irving Fisher and Alfred Marshall, before Friedman restated it in 1956.
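The quantity theory is usually summarized by the equation of exchange, MV = PY: money supply times velocity equals the price level times real output. The sketch below (with invented figures, purely for illustration) shows the core monetarist implication that, holding velocity and real output fixed, the price level moves in proportion to the money supply:

```python
# Illustration of the equation of exchange, MV = PY, underlying the
# quantity theory of money. All figures below are invented.

def price_level(money_supply: float, velocity: float, real_output: float) -> float:
    """Solve the identity MV = PY for the price level P."""
    return money_supply * velocity / real_output

# If velocity V and real output Y are held fixed, doubling M doubles P:
p1 = price_level(1000, 4.0, 2000)  # P = 2.0
p2 = price_level(2000, 4.0, 2000)  # P = 4.0
print(p2 / p1)                     # 2.0
```

Friedman's 1956 restatement kept this identity but treated money demand (and hence velocity) as a stable function of a few variables, which is what turns the identity into a theory of inflation.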

Monetary history of the United States

Money supply decreased significantly between Black Tuesday and the Bank Holiday in March 1933 in the wake of massive bank runs across the United States.

Monetarists argued that central banks sometimes caused major unexpected fluctuations in the money supply. Friedman asserted that actively trying to stabilize demand through monetary policy changes can have negative unintended consequences. In part he based this view on the historical analysis of monetary policy, A Monetary History of the United States, 1867–1960, which he coauthored with Anna Schwartz in 1963. The book attributed inflation to excess money supply generated by a central bank. It attributed deflationary spirals to the reverse effect of a failure of a central bank to support the money supply during a liquidity crunch. In particular, the authors argued that the Great Depression of the 1930s was caused by a massive contraction of the money supply (they deemed it "the Great Contraction"), and not by the lack of investment that Keynes had argued. They also maintained that post-war inflation was caused by an over-expansion of the money supply. They made famous the assertion of monetarism that "inflation is always and everywhere a monetary phenomenon."

Fixed monetary rule

Friedman proposed a fixed monetary rule, called Friedman's k-percent rule, where the money supply would be automatically increased by a fixed percentage per year. The rate should equal the growth rate of real GDP, leaving the price level unchanged. For instance, if the economy is expected to grow at 2 percent in a given year, the Fed should allow the money supply to increase by 2 percent. Because discretionary monetary policy would be as likely to destabilise as to stabilise the economy, Friedman advocated that the Fed be bound to fixed rules in conducting its policy.

Opposition to the gold standard

Most monetarists oppose the gold standard. Friedman viewed a pure gold standard as impractical. For example, while one benefit of the gold standard is that the intrinsic limit gold places on the growth of the money supply would prevent inflation, if population growth or an increase in trade outpaced the money supply, there would be no way to counteract the resulting deflation and reduced liquidity (and any attendant recession) except by mining more gold. But he also admitted that if a government was willing to surrender control over its monetary policy and not interfere with economic activities, a gold-based economy would be possible.

Rise

Clark Warburton is credited with making the first solid empirical case for the monetarist interpretation of business fluctuations in a series of papers from 1945.[1]p. 493 Within mainstream economics, the rise of monetarism started with Milton Friedman's 1956 restatement of the quantity theory of money. Friedman argued that the demand for money could be described as depending on a small number of economic variables.

Thus, according to Friedman, when the money supply expanded, people would not simply wish to hold the extra money in idle money balances; i.e., if they were in equilibrium before the increase, they were already holding money balances to suit their requirements, and thus after the increase they would have money balances surplus to their requirements. These excess money balances would therefore be spent and hence aggregate demand would rise. Similarly, if the money supply were reduced people would want to replenish their holdings of money by reducing their spending. In this, Friedman challenged a simplification attributed to Keynes suggesting that "money does not matter." Thus the word 'monetarist' was coined.

The popularity of monetarism picked up in political circles when the prevailing view of neo-Keynesian economics seemed unable to explain the contradictory problems of rising unemployment and inflation in response to the Nixon shock in 1971 and the oil shocks of 1973. On one hand, higher unemployment seemed to call for reflation, but on the other hand rising inflation seemed to call for disinflation. The social-democratic post-war consensus that had prevailed in first world countries was thus called into question by the rising neoliberal political forces.

Monetarism in the US and the UK

In 1979, United States President Jimmy Carter appointed as Federal Reserve chief Paul Volcker, who made fighting inflation his primary objective and restricted the money supply (in accordance with the Friedman rule) to tame inflation in the economy. The result was a major rise in interest rates, not only in the United States but worldwide. The "Volcker shock" continued from 1979 to the summer of 1982, decreasing inflation and increasing unemployment.

By the time Margaret Thatcher, leader of the Conservative Party in the United Kingdom, won the 1979 general election, defeating the sitting Labour government led by James Callaghan, the UK had endured several years of severe inflation, which was rarely below 10% and stood at 10.3% at the time of the May 1979 election. Thatcher implemented monetarism as the weapon in her battle against inflation and succeeded in reducing it to 4.6% by 1983. However, unemployment in the United Kingdom rose from 5.7% in 1979 to a peak of 13.0% in 1982, still standing at 12.2% in 1983; starting with the first quarter of 1980, the UK economy contracted in terms of real gross domestic product for six straight quarters.

Decline

Monetarist ascendancy was brief, however. The period when major central banks focused on targeting the growth of money supply, reflecting monetarist theory, lasted only for a few years, in the US from 1979 to 1982.

The money supply is useful as a policy target only if the relationship between money and nominal GDP, and therefore inflation, is stable and predictable. This implies that the velocity of money must be predictable. In the 1970s velocity had seemed to increase at a fairly constant rate, but in the 1980s and 1990s velocity became highly unstable, experiencing unpredictable periods of increases and declines. Consequently, the stable correlation between the money supply and nominal GDP broke down, and the usefulness of the monetarist approach came into question. Many economists who had been convinced by monetarism in the 1970s abandoned the approach after this experience.

The changing velocity originated in shifts in the demand for money and created serious problems for central banks. This provoked a thorough rethinking of monetary policy. In the early 1990s, central banks started targeting inflation directly, using the short-run interest rate as their central policy variable and abandoning the earlier emphasis on money growth. The new strategy proved successful, and today most major central banks practice flexible inflation targeting.

Legacy

Even though monetarism failed in practical policy, and the close attention to money growth at the heart of monetarist analysis is rejected by most economists today, some aspects of monetarism have found their way into modern mainstream economic thinking. Among them is the belief that controlling inflation should be a primary responsibility of the central bank. It is also widely recognized that monetary policy, as well as fiscal policy, can affect output in the short run. In this way, important monetarist ideas have been subsumed into the new neoclassical synthesis, or consensus view, of macroeconomics that emerged in the 2000s.

Identity formation

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Identity_formation
...