
Friday, July 10, 2020

Forgetting curve

From Wikipedia, the free encyclopedia
 
Typical Representation of the Forgetting Curve.
 
The forgetting curve hypothesizes the decline of memory retention over time. This curve shows how information is lost over time when there is no attempt to retain it. A related concept is memory strength, which refers to the durability of memory traces in the brain. The stronger the memory, the longer a person is able to recall it. A typical graph of the forgetting curve purports to show that humans tend to halve their memory of newly learned knowledge in a matter of days or weeks unless they consciously review the learned material.

The forgetting curve supports one of the seven kinds of memory failures: transience, which is the process of forgetting that occurs with the passage of time.

History

From 1880 to 1885, Hermann Ebbinghaus ran a limited, incomplete study on himself and published his hypothesis in 1885 as Über das Gedächtnis (later translated into English as Memory: A Contribution to Experimental Psychology). Ebbinghaus studied the memorisation of nonsense syllables, such as "WID" and "ZOF" (CVCs or Consonant–Vowel–Consonant) by repeatedly testing himself after various time periods and recording the results. He plotted these results on a graph creating what is now known as the "forgetting curve". Ebbinghaus investigated the rate of forgetting, but not the effect of spaced repetition on the increase in retrievability of memories.

Ebbinghaus's publication also included an equation to approximate his forgetting curve:

b = 100k / ((log t)^c + k)

Here, b represents 'Savings' expressed as a percentage, and t represents time in minutes counting from one minute before the end of learning; c and k are constants that Ebbinghaus fitted at roughly 1.25 and 1.84. Savings is defined as the relative amount of time saved on the second learning trial as a result of having had the first. A savings of 100% would indicate that all items were still known from the first trial. A 75% savings would mean that relearning missed items required 25% as long as the original learning session (to learn all items). 'Savings' is thus analogous to retention rate.
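As a quick illustration, the savings function above can be evaluated directly. The following minimal Python sketch assumes the fitted constants quoted above (k of about 1.84, c of about 1.25) and a base-10 logarithm, which is not stated explicitly in this excerpt:

import math

def ebbinghaus_savings(t_minutes, k=1.84, c=1.25):
    # Savings b = 100k / ((log10 t)^c + k), as a percentage.
    # k and c are the constants quoted above; log base 10 is an assumption.
    return 100 * k / (math.log10(t_minutes) ** c + k)

# Savings shortly after learning vs. a day and a month later:
for t in (20, 60, 60 * 24, 60 * 24 * 31):
    print(f"t = {t:6d} min -> savings = {ebbinghaus_savings(t):.1f}%")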
 
In 2015, an attempt to replicate the forgetting curve with one study subject produced experimental results similar to Ebbinghaus's original data.

Ebbinghaus's experiment contributed substantially to experimental psychology. He was the first to carry out a series of well-designed experiments on the subject of forgetting, and he was one of the first to choose artificial stimuli in experimental psychology research. Since his introduction of nonsense syllables, a large number of experiments in experimental psychology have been based on highly controlled artificial stimuli.

Increasing rate of learning

Hermann Ebbinghaus hypothesized that the speed of forgetting depends on a number of factors such as the difficulty of the learned material (e.g. how meaningful it is), its representation and other physiological factors such as stress and sleep. He further hypothesized that the basal forgetting rate differs little between individuals. He concluded that the difference in performance can be explained by mnemonic representation skills.

He went on to hypothesize that basic training in mnemonic techniques can help overcome those differences in part. He asserted that the best methods for increasing the strength of memory are:
  1. better memory representation (e.g. with mnemonic techniques)
  2. repetition based on active recall (especially spaced repetition).
Forgetting Curve with Spaced Repetition

His premise was that each repetition in learning increases the optimum interval before the next repetition is needed (for near-perfect retention, initial repetitions may need to be made within days, but later they can be made after years). He discovered that information is easier to recall when it is built upon things one already knows, and that the forgetting curve is flattened by every repetition. It appeared that frequent training through repeated recall solidified the information, as sketched below.
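To make the idea of expanding intervals concrete, here is a small illustrative Python sketch; the initial interval and growth factor are invented for illustration and are not values Ebbinghaus proposed:

def review_schedule(first_interval_days=1.0, growth=2.5, reviews=6):
    # Each successful review multiplies the next interval by a fixed
    # growth factor (an assumed value; real spaced-repetition systems
    # adapt it per item and per learner).
    day, interval, schedule = 0.0, first_interval_days, []
    for _ in range(reviews):
        day += interval
        schedule.append(round(day, 1))
        interval *= growth
    return schedule

print(review_schedule())  # [1.0, 3.5, 9.8, 25.4, 64.4, 162.1] days after learning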

Later research also suggested that, other than the two factors Ebbinghaus proposed, higher original learning would also produce slower forgetting. The more information was originally learned, the slower the forgetting rate would be.

Spending time each day to remember information will greatly decrease the effects of the forgetting curve. Some learning consultants claim that reviewing material in the first 24 hours after learning it is the optimum time to re-read notes and reduce the amount of knowledge forgotten. Evidence suggests that waiting 10–20% of the time until the information will be needed is the optimum spacing for a single review; for example, material needed in 60 days would be best reviewed once roughly 6 to 12 days after learning.

However, some memories remain free from the detrimental effects of interference and do not necessarily follow the typical forgetting curve, as various noise and outside factors influence what information is remembered. There is debate among supporters of the hypothesis about the shape of the curve for events and facts that are more significant to the subject. Some supporters, for example, suggest that memories of shocking events such as the Kennedy assassination or 9/11 are vividly imprinted in memory (flashbulb memory). Others have compared contemporaneous written recollections with recollections recorded years later, and found considerable variations as the subject's memory incorporates after-acquired information. There is considerable research in this area as it relates to eyewitness identification testimony, and eyewitness accounts have been found to be demonstrably unreliable.

Equations

Many equations have since been proposed to approximate forgetting, perhaps the simplest being an exponential curve described by the equation:

R = e^(-t/S)

where R is retrievability (a measure of how easy it is to retrieve a piece of information from memory), S is the stability of memory (which determines how fast R falls over time in the absence of training, testing or other recall), and t is time.

Simple equations such as this one were found by Rubin, Hinton, and Wenzel (1999) to provide a good fit to the available data.
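A minimal Python sketch of this exponential curve; the units for S and t are arbitrary but must match, and the values below are purely illustrative:

import math

def retrievability(t, stability):
    # R = exp(-t / S): retrievability R decays with time t,
    # more slowly the larger the stability S.
    return math.exp(-t / stability)

# Larger S flattens the curve: recall after one day at several stabilities.
for s in (1.0, 2.0, 4.0):  # stability, in days (illustrative)
    print(f"S = {s}: R(1 day) = {retrievability(1.0, s):.2f}")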

Hermann Ebbinghaus

From Wikipedia, the free encyclopedia
 
Hermann Ebbinghaus
Born: January 24, 1850
Died: February 26, 1909 (aged 59)
Citizenship: German
Known for: Serial position effect, Über das Gedächtnis
Scientific career
Fields: Psychology
Institutions: University of Berlin, University of Breslau, University of Halle
Influences: Gustav Fechner
Influenced: Lev Vygotsky, Lewis Terman, Charlotte Bühler, William Stern

Hermann Ebbinghaus (January 24, 1850 – February 26, 1909) was a German psychologist who pioneered the experimental study of memory, and is known for his discovery of the forgetting curve and the spacing effect. He was also the first person to describe the learning curve. He was the father of the neo-Kantian philosopher Julius Ebbinghaus.

Early life

Ebbinghaus was born in Barmen, in the Rhine Province of the Kingdom of Prussia, as the son of a wealthy merchant, Carl Ebbinghaus. Little is known about his infancy except that he was brought up in the Lutheran faith and was a pupil at the town Gymnasium. At the age of 17 (1867), he began attending the University of Bonn, where he had planned to study history and philology. However, during his time there he developed an interest in philosophy. In 1870, his studies were interrupted when he served with the Prussian Army in the Franco-Prussian War. Following this short stint in the military, Ebbinghaus finished his dissertation on Eduard von Hartmann's Philosophie des Unbewussten (philosophy of the unconscious) and received his doctorate on August 16, 1873, when he was 23 years old. During the next three years, he spent time at Halle and Berlin.

Professional career

After acquiring his PhD, Ebbinghaus moved around England and France, tutoring students to support himself. In England, he may have taught in two small schools in the south of the country (Gorfein, 1885). In London, in a used bookstore, he came across Gustav Fechner's book Elemente der Psychophysik (Elements of Psychophysics), which spurred him to conduct his famous memory experiments. After beginning his studies at the University of Berlin, he founded the third psychological testing lab in Germany (after those of Wilhelm Wundt and Georg Elias Müller). He began his memory studies here in 1879. In 1885 — the same year that he published his monumental work, Über das Gedächtnis. Untersuchungen zur experimentellen Psychologie, later published in English under the title Memory: A Contribution to Experimental Psychology — he was made a professor at the University of Berlin, most likely in recognition of this publication. In 1890, along with Arthur König, he founded the psychological journal Zeitschrift für Psychologie und Physiologie der Sinnesorgane ("The Psychology and Physiology of the Sense Organs").

In 1894, he was passed over for promotion to head of the philosophy department at Berlin, most likely due to his lack of publications. Instead, Carl Stumpf received the promotion. As a result of this, Ebbinghaus left to join the University of Breslau (now Wrocław, Poland), in a chair left open by Theodor Lipps (who took over Stumpf's position when he moved to Berlin). While in Breslau, he worked on a commission that studied how children's mental ability declined during the school day. While the specifics on how these mental abilities were measured have been lost, the successes achieved by the commission laid the groundwork for future intelligence testing. At Breslau, he again founded a psychological testing laboratory.

In 1902, Ebbinghaus published his next piece of writing, entitled Die Grundzüge der Psychologie (Fundamentals of Psychology). It was an instant success and continued to be one long after his death. In 1904, he moved to Halle, where he spent the last few years of his life. His last published work, Abriss der Psychologie (Outline of Psychology), appeared in 1908. This, too, continued to be a success, being re-released in eight different editions. Shortly after this publication, on February 26, 1909, Ebbinghaus died from pneumonia at the age of 59.

Research on memory

Ebbinghaus was determined to show that higher mental processes could actually be studied using experimentation, which was in opposition to the popularly held thought of the time. To control for most potentially confounding variables, Ebbinghaus wanted to use simple acoustic encoding and maintenance rehearsal for which a list of words could have been used. As learning would be affected by prior knowledge and understanding, he needed something that could be easily memorized but which had no prior cognitive associations. Easily formable associations with regular words would interfere with his results, so he used items that would later be called "nonsense syllables" (also known as the CVC trigram). A nonsense syllable is a consonant-vowel-consonant combination, where the consonant does not repeat and the syllable does not have prior meaning. BOL (sounds like "Ball") and DOT (already a word) would then not be allowed. However, syllables such as DAX, BOK, and YAT would all be acceptable (though Ebbinghaus left no examples). After eliminating the meaning-laden syllables, Ebbinghaus ended up with 2,300 resultant syllables. Once he had created his collection of syllables, he would pull out a number of random syllables from a box and then write them down in a notebook. Then, to the regular sound of a metronome, and with the same voice inflection, he would read out the syllables, and attempt to recall them at the end of the procedure. One investigation alone required 15,000 recitations.

It was later determined that humans impose meaning even on nonsense syllables to make them more meaningful. The nonsense syllable PED (which is the first three letters of the word "pedal") turns out to be less nonsensical than a syllable such as KOJ; the syllables are said to differ in association value. It appears that Ebbinghaus recognized this, and only referred to the strings of syllables as "nonsense" in that the syllables might be less likely to have a specific meaning and he should make no attempt to make associations with them for easier retrieval.

Limitations to memory research

There are several limitations to his work on memory. The most important one was that Ebbinghaus was the only subject in his study. This limited the study's generalizability to the population. Although he attempted to regulate his daily routine to maintain more control over his results, his decision to avoid the use of participants sacrificed the external validity of the study despite sound internal validity. In addition, although he tried to account for his personal influences, there is an inherent bias when someone serves as researcher as well as participant. Also, Ebbinghaus's memory research halted research in other, more complex matters of memory such as semantic and procedural memory and mnemonics.

Contributions to memory

In 1885, he published his groundbreaking Über das Gedächtnis ("On Memory", later translated to English as Memory. A Contribution to Experimental Psychology) in which he described experiments he conducted on himself to describe the processes of learning and forgetting.

Ebbinghaus made several findings that are still relevant and supported to this day. First, Ebbinghaus made a set of 2,300 three-letter syllables to measure mental associations, which helped him find that memory is orderly. Second, and arguably his most famous finding, was the forgetting curve. The forgetting curve describes the exponential loss of information that one has learned. The sharpest decline occurs in the first twenty minutes and the decay is significant through the first hour. The curve levels off after about one day.

A typical representation of the forgetting curve
 
The learning curve described by Ebbinghaus refers to how fast one learns information. The sharpest increase occurs after the first try and then gradually evens out, meaning that less and less new information is retained after each repetition. Like the forgetting curve, the learning curve is exponential. Ebbinghaus had also documented the serial position effect, which describes how the position of an item affects recall. The two main concepts in the serial position effect are recency and primacy. The recency effect describes the increased recall of the most recent information because it is still in the short-term memory. The primacy effect causes better memory of the first items in a list due to increased rehearsal and commitment to long-term memory.

Another important discovery is that of savings. This refers to the amount of information retained in the subconscious even after this information cannot be consciously accessed. Ebbinghaus would memorize a list of items until perfect recall and then would not access the list until he could no longer recall any of its items. He then would relearn the list, and compare the new learning curve to the learning curve of his previous memorization of the list. The second list was generally memorized faster, and this difference between the two learning curves is what Ebbinghaus called "savings". Ebbinghaus also described the difference between involuntary and voluntary memory, the former occurring "with apparent spontaneity and without any act of the will" and the latter being brought "into consciousness by an exertion of the will".

Prior to Ebbinghaus, most contributions to the study of memory were undertaken by philosophers and centered on observational description and speculation. For example, Immanuel Kant used pure description to discuss recognition and its components and Sir Francis Bacon claimed that the simple observation of the rote recollection of a previously learned list was "no use to the art" of memory. This dichotomy between descriptive and experimental study of memory would resonate later in Ebbinghaus's life, particularly in his public argument with former colleague Wilhelm Dilthey. However, more than a century before Ebbinghaus, Johann Andreas Segner invented the "Segner-wheel" to see the length of after-images by seeing how fast a wheel with a hot coal attached had to move for the red ember circle from the coal to appear complete.

Ebbinghaus's effect on memory research was almost immediate. With very few works published on memory in the previous two millennia, Ebbinghaus's works spurred memory research in the United States in the 1890s, with 32 papers published in 1894 alone. This research was coupled with the growing development of mechanized mnemometers, or devices that aided in the recording and study of memory.

The reaction to his work in his day was mostly positive. Noted psychologist William James called the studies "heroic" and said that they were "the single most brilliant investigation in the history of psychology". Edward B. Titchener also mentioned that the studies were the greatest undertaking in the topic of memory since Aristotle.

Other contributions

Ebbinghaus can also be credited with pioneering sentence completion exercises, which he developed in studying the abilities of schoolchildren. It was these same exercises that Alfred Binet borrowed and incorporated into the Binet-Simon intelligence scale. Sentence completion has since been used extensively in memory research, especially in tapping into measures of implicit memory, and has also been used in psychotherapy as a tool to help tap into the motivations and drives of the patient. He also influenced Charlotte Bühler, who along with Lev Vygotsky and others went on to study language meaning and society.

The Ebbinghaus Illusion. Note that the orange circles appear to be different sizes, despite being equal.

Ebbinghaus is also credited with discovering an optical illusion now known after its discoverer—the Ebbinghaus illusion, which is an illusion of relative size perception. In the best-known version of this illusion, two circles of identical size are placed near to each other and one is surrounded by large circles while the other is surrounded by small circles; the first central circle then appears smaller than the second central circle. This illusion is now used extensively in research in cognitive psychology, to find out more about the various perception pathways in our brain.




Ebbinghaus is also largely credited with drafting the first standard research report. In his paper on memory, Ebbinghaus arranged his research into four sections: the introduction, the methods, the results, and a discussion section. The clarity and organization of this format was so impressive to contemporaries that it has now become standard in the discipline, and all research reports follow the same standards laid out by Ebbinghaus.


After his work on memory, Ebbinghaus also contributed to the study of color vision. In 1890, he came up with a double-pyramid design for representing colors, with the corners rounded off.




Unlike notable contemporaries like Titchener and James, Ebbinghaus did not promote any specific school of psychology, nor was he known for extensive lifetime research, having produced only three major works. He never attempted to bestow upon himself the title of pioneer of experimental psychology, did not seek to have any "disciples", and left the exploitation of the new field to others.

Discourse on the nature of psychology

In addition to pioneering experimental psychology, Ebbinghaus was also a strong defender of this direction of the new science, as is illustrated by his public dispute with University of Berlin colleague, Wilhelm Dilthey. Shortly after Ebbinghaus left Berlin in 1893, Dilthey published a paper extolling the virtues of descriptive psychology, and condemning experimental psychology as boring, claiming that the mind was too complex, and that introspection was the desired method of studying the mind. The debate at the time had been primarily whether psychology should aim to explain or understand the mind and whether it belonged to the natural or human sciences. Many had seen Dilthey's work as an outright attack on experimental psychology, Ebbinghaus included, and he responded to Dilthey with a personal letter and also a long scathing public article. Amongst his counterarguments against Dilthey he mentioned that it is inevitable for psychology to do hypothetical work and that the kind of psychology that Dilthey was attacking was the one that existed before Ebbinghaus's "experimental revolution". Charlotte Bühler echoed his words some forty years later, stating that people like Ebbinghaus "buried the old psychology in the 1890s". Ebbinghaus explained his scathing review by saying that he could not believe that Dilthey was advocating the status quo of structuralists like Wilhelm Wundt and Titchener and attempting to stifle psychology's progress.

Some contemporary texts still describe Ebbinghaus as a philosopher rather than a psychologist and he had also spent his life as a professor of philosophy. However, Ebbinghaus himself would probably describe himself as a psychologist considering that he fought to have psychology viewed as a separate discipline from philosophy.

Influences

There has been some speculation as to what influenced Ebbinghaus in his undertakings. None of his professors seem to have influenced him, nor are there suggestions that his colleagues affected him. Von Hartmann's work, on which Ebbinghaus based his doctorate, did suggest that higher mental processes were hidden from view, which may have spurred Ebbinghaus to attempt to prove otherwise. The one influence that has always been cited as having inspired Ebbinghaus was Gustav Fechner's two-volume Elemente der Psychophysik ("Elements of Psychophysics", 1860), a book which he purchased second-hand in England. It is said that the meticulous mathematical procedures impressed Ebbinghaus so much that he wanted to do for psychology what Fechner had done for psychophysics. This inspiration is also evident in that Ebbinghaus dedicated his second work, Die Grundzüge der Psychologie (Fundamentals of Psychology), to Fechner, signing it "I owe everything to you."

Experience curve effects

From Wikipedia, the free encyclopedia

In industry, models of the learning or experience curve effect express the relationship between experience in producing a good and the efficiency of production, i.e., efficiency gains that follow investment in the effort. The effect has large implications for market share, which can confer an increasing competitive advantage over time due to experience curve effects.

History: from psychological learning curves to the learning curve effect

An early empirical demonstration of learning curves was produced in 1885 by the German psychologist Hermann Ebbinghaus. Ebbinghaus was investigating the difficulty of memorizing verbal stimuli. He found that performance increased in proportion to experience (practice and testing) in memorising the word set (more detail about the complex processes of learning is discussed in the Learning curve article).

Wright's law and the discovery of the learning curve effect

This work on human memory was later found to generalize: the more times a task has been performed, the less time is required on each subsequent iteration. This relationship was probably first quantified in the industrial setting in 1936 by Theodore Paul Wright, an engineer at Curtiss-Wright in the United States. Wright found that every time total aircraft production doubled, the required labour time for a new aircraft fell by 20%. Subsequent studies in other industries have yielded different values, ranging from only a couple of percent up to 30%, but in most cases the value in each industry was a constant percentage and did not vary at different scales of operation. The learning curve model posits that for each doubling of the total quantity of items produced, costs decrease by a fixed proportion. Generally, the production of any good or service shows the learning curve or experience curve effect. Each time cumulative volume doubles, value-added costs (including administration, marketing, distribution, and manufacturing) fall by a constant percentage.

The phrase experience curve was proposed by Bruce D. Henderson and the Boston Consulting Group (BCG) based on analyses of overall cost behavior in the 1960s. While accepting that the learning curve formed an attractive explanation, he used the name experience curve, suggesting that "the two are related, but quite different." In 1968, Henderson and BCG began to emphasize the implications of the experience curve for strategy. Research by BCG in the 1960s and 70s observed experience curve effects for various industries that ranged from 10 to 25%.

Unit curve

Mathematically, Wright's law takes the form of a power function. Empirical research has validated the following mathematical form for the unit cost of producing unit-number x (Px), starting with unit P1, for a wide variety of different products and services:

Px = P1 * x^(log2 b),

where 1-b is the proportional reduction in the unit cost with each doubling of cumulative production. To see this, note that P2x / Px = (2x / x)^(log2 b) = 2^(log2 b) = b: each doubling of cumulative output multiplies the unit cost by b, a reduction of 1-b.
The exponent b is a statistical parameter and thus does not exactly predict the unit cost of producing any future unit. However, it has been found to be useful in many contexts. Across numerous industries (see below), estimates of b range from 0.75 to 0.9 (i.e., 1-b ranges from 0.1 to 0.25).
The unit curve was expressed in slightly different nomenclature by Henderson:

Cn = C1 * n^(-a)

where:
  • C1 is the cost of the first unit of production
  • Cn is the cost of the n-th unit of production
  • n is the cumulative volume of production
  • a is the elasticity of cost with regard to output
These effects are often expressed graphically. The curve is plotted with the cumulative units produced on the horizontal axis and unit cost on the vertical axis. The BCG group used the value of b to name a given industry curve. Thus a curve showing a 15% cost reduction for every doubling of output was called an “85% experience curve”.
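A short Python sketch of the unit curve as given above; the 85% curve value is just the example used in the text, and the first-unit cost is an invented figure:

import math

def unit_cost(x, p1, b=0.85):
    # Wright/Henderson unit curve: Px = P1 * x**log2(b), where each
    # doubling of cumulative output multiplies unit cost by b
    # (b = 0.85 is the '85% experience curve' of the example above).
    return p1 * x ** math.log2(b)

p1 = 100.0  # assumed cost of the first unit
for x in (1, 2, 4, 8, 100):
    print(f"unit {x:3d}: cost = {unit_cost(x, p1):6.2f}")
# Units 2 and 4 cost 85.00 and 72.25: a 15% drop per doubling.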

Reasons for the effect

Examples
NASA quotes the following experience curves:
  • Aerospace: 85%
  • Shipbuilding: 80-85%
  • Complex machine tools for new models: 75–85%
  • Repetitive electronics manufacturing: 90–95%
  • Repetitive machining or punch-press operations: 90–95%
  • Repetitive electrical operations: 75–85%
  • Repetitive welding operations: 90%
  • Raw materials: 93–96%
  • Purchased parts: 85–88%
The primary reason why experience and learning curve effects apply is the complex process of learning involved. As discussed in the main article, learning generally begins with making successively larger finds and then successively smaller ones. The equations for these effects come from the usefulness of mathematical models for certain somewhat predictable aspects of those generally non-deterministic processes.
They include:
  • Labour efficiency: Workers become physically more dexterous. They become mentally more confident and spend less time hesitating, learning, experimenting, or making mistakes. Over time they learn short-cuts and improvements. This applies to all employees and managers, not just those directly involved in production.
  • Standardization, specialization, and methods improvements: As processes, parts, and products become more standardized, efficiency tends to increase. When employees specialize in a limited set of tasks, they gain more experience with these tasks and operate at a faster rate.
  • Technology-driven learning: Automated production technology and information technology can introduce efficiencies as they are implemented and people learn how to use them efficiently and effectively.
  • Better use of equipment: As total production has increased, manufacturing equipment will have been more fully exploited, lowering fully accounted unit costs. In addition, purchase of more productive equipment can be justifiable.
  • Changes in the resource mix: As a company acquires experience, it can alter its mix of inputs and thereby become more efficient.
  • Product redesign: As the manufacturers and consumers have more experience with the product, they can usually find improvements. This filters through to the manufacturing process. A good example of this is Cadillac's testing of various "bells and whistles" specialty accessories. The ones that did not break became mass-produced in other General Motors products; the ones that didn't stand the test of user "beatings" were discontinued, saving the car company money. As General Motors produced more cars, they learned how to best produce products that work for the least money.
  • Network-building and use-cost reductions (network effects): As a product enters more widespread use, the consumer uses it more efficiently because they're familiar with it. One fax machine in the world can do nothing, but if everyone has one, they build an increasingly efficient network of communications. Another example is email accounts; the more there are, the more efficient the network is, the lower everyone's cost per utility of using it.
  • Shared experience effects: Experience curve effects are reinforced when two or more products share a common activity or resource. Any efficiency learned from one product can be applied to the other products.

Experience curve discontinuities

The experience curve effect can on occasion come to an abrupt stop. Graphically, the curve is truncated. Existing processes become obsolete and the firm must upgrade to remain competitive. The upgrade will mean the old experience curve will be replaced by a new one. This occurs when:
  • Competitors introduce new products or processes that you must respond to
  • Key suppliers have much bigger customers that determine the price of products and services, and that becomes the main cost driver for the product
  • Technological change requires that you or your suppliers change processes
When any of these discontinuities occurs, experience curve strategies must be re-evaluated.

Strategic consequences of the effect

BCG founder Henderson published on the development of the experience curve. According to Henderson, its first "attempt to explain cost behavior over time in a process industry" began in 1966. The datum he focussed on was the striking correlation between competitive profitability and market share. Using price data in the semiconductor industry supplied by the Electronic Industries Association, he suggested that not one but two patterns emerged.
"In one pattern, prices, in current dollars, remained constant for long periods and then began a relatively steep and long continued decline in constant dollars. In the other pattern, prices, in constant dollars, declined steadily at a constant rate of about 25 percent each time accumulated experience doubled. That was the experience curve."
The suggestion was that failure of production to show the learning curve effect was a risk indicator. The BCG strategists examined the consequences of the experience effect for businesses. They concluded that because relatively low cost of operations is a very powerful strategic advantage, firms should invest in maximising these learning and experience effects, and that market share is underestimated as an enabler of this investment. The reasoning is that increased activity leads to increased learning, which leads to lower costs, which can lead to lower prices, which can lead to increased market share, which can lead to increased profitability and market dominance. This was particularly true when a firm had an early leadership in market share. It was suggested that if a company cannot get enough market share to be competitive, it should exit that business and concentrate resources where it was possible to take advantage of experience effects and gain (preferably dominant) market share. The BCG strategists developed product portfolio techniques like the BCG Matrix (in part) to manage this strategy.

One consequence of experience curve strategy is that it predicts that cost savings should be passed on as price decreases rather than kept as profit margin increases. The BCG strategists felt that maintaining a relatively high price, although very profitable in the short run, spelled disaster for the strategy in the long run. High profits would encourage competitors to enter the market, triggering a steep price decline and a competitive shakeout. If prices were reduced as unit costs fell (due to experience curve effects), then competitive entry would be discouraged while market share increases should increase overall profitability.

Criticisms

Ernst R. Berndt claims that in most organizations, experience effects are so closely intertwined with economies of scale (efficiencies arising from an increased scale of production) that it is impossible to separate the two. In practice, this view suggests, economies of scale coincide with experience effects (efficiencies arising from the learning and experience gained over repeated activities). The approach, however, accepts the existence of both as underlying causes. Economies of scale afford experience and experience may afford economies of scale.

Approaches such as Porter's generic strategies based on product differentiation and focused market segmentation have been proposed as alternative strategies for leadership that do not rely on lower unit costs.

Attempts to use the learning curve effect to improve competitive advantage, for instance by pre-emptively expanding production, have been criticised, with factors such as bounded rationality and durable products cited as reasons.

The well travelled road effect may lead people to overestimate the effect of the experience curve.

Economies of scale

From Wikipedia, the free encyclopedia
 
As quantity of production increases from Q to Q2, the average cost of each unit decreases from C to C1. LRAC is the long-run average cost

In microeconomics, economies of scale are the cost advantages that enterprises obtain due to their scale of operation (typically measured by the amount of output produced), with cost per unit of output decreasing with increasing scale. Economies of scale may rest on technical, statistical, or organizational factors, or on factors related to the degree of market control.

Economies of scale apply to a variety of organizational and business situations and at various levels, such as a production unit, a plant, or an entire enterprise. When average costs start falling as output increases, economies of scale occur. Some economies of scale, such as the capital cost of manufacturing facilities and friction loss of transportation and industrial equipment, have a physical or engineering basis.

Another source of scale economies is the possibility of purchasing inputs at a lower per-unit cost when they are purchased in large quantities.

The economic concept dates back to Adam Smith and the idea of obtaining larger production returns through the use of division of labor. Diseconomies of scale are the opposite.

Economies of scale often have limits, such as passing the optimum design point, where costs per additional unit begin to increase. Common limits include exceeding the nearby raw material supply, such as wood in the lumber, pulp and paper industry. A common limit for low-cost-per-unit-weight commodities is saturating the regional market, thus having to ship the product uneconomic distances. Other limits include using energy less efficiently or having a higher defect rate.

Large producers are usually efficient at long runs of a product grade (a commodity) and find it costly to switch grades frequently. They will, therefore, avoid specialty grades even though they have higher margins. Often smaller (usually older) manufacturing facilities remain viable by changing from commodity-grade production to specialty products.

Economies of scale must be distinguished from economies stemming from an increase in the production of a given plant. When a plant is used below its optimal production capacity, increases in its degree of utilization bring about decreases in the total average cost of production. As noted by, among others, Nicholas Georgescu-Roegen (1966) and Nicholas Kaldor (1972), these economies are not economies of scale.

Overview

The simple meaning of economies of scale is doing things more efficiently with increasing size. Common sources of economies of scale are purchasing (bulk buying of materials through long-term contracts), managerial (increasing the specialization of managers), financial (obtaining lower-interest charges when borrowing from banks and having access to a greater range of financial instruments), marketing (spreading the cost of advertising over a greater range of output in media markets), and technological (taking advantage of returns to scale in the production function). Each of these factors reduces the long run average costs (LRAC) of production by shifting the short-run average total cost (SRATC) curve down and to the right.

Economies of scale is a concept that may explain real-world phenomena such as patterns of international trade or the number of firms in a market. The exploitation of economies of scale helps explain why companies grow large in some industries. It is also a justification for free trade policies, since some economies of scale may require a larger market than is possible within a particular country—for example, it would not be efficient for Liechtenstein to have its own carmaker if they only sold to their local market. A lone carmaker may be profitable, but even more so if they exported cars to global markets in addition to selling to the local market. Economies of scale also play a role in a "natural monopoly". There is a distinction between two types of economies of scale: internal and external. An industry that exhibits an internal economy of scale is one where the costs of production fall when the number of firms in the industry drops, but the remaining firms increase their production to match previous levels. Conversely, an industry exhibits an external economy of scale when costs drop due to the introduction of more firms, thus allowing for more efficient use of specialized services and machinery.

The determinants of economies of scale

Physical and engineering basis: economies of increased dimension

Some of the economies of scale recognized in engineering have a physical basis, such as the square-cube law, by which the surface of a vessel increases by the square of the dimensions while the volume increases by the cube. This law has a direct effect on the capital cost of such things as buildings, factories, pipelines, ships and airplanes.
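A worked illustration of the square-cube relationship in Python (pure geometry, with surface area standing in loosely for material cost):

# Scaling a vessel's linear dimension by s multiplies surface area
# (a rough proxy for material cost) by s**2 and volume (capacity)
# by s**3, so surface per unit of volume falls as 1/s.
for s in (1, 2, 4):
    surface, volume = s ** 2, s ** 3
    print(f"scale x{s}: surface x{surface}, volume x{volume}, "
          f"surface/volume x{surface / volume:.2f}")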

In structural engineering, the strength of beams increases with the cube of the thickness. 

Drag loss of vehicles like aircraft or ships generally increases less than proportionally with increasing cargo volume, although the physical details can be quite complicated. Therefore, making them larger usually results in less fuel consumption per ton of cargo at a given speed.

Heat losses from industrial processes vary per unit of volume for pipes, tanks and other vessels in a relationship somewhat similar to the square-cube law. In some productions, an increase in the size of the plant reduces the average variable cost, thanks to the energy savings resulting from the lower dispersion of heat.

Economies of increased dimension are often misinterpreted because of the confusion between indivisibility and the three-dimensionality of space. This confusion arises from the fact that three-dimensional production elements, such as pipes and ovens, once installed and operating, are always technically indivisible. However, the economies of scale due to the increase in size do not depend on indivisibility but exclusively on the three-dimensionality of space. Indeed, indivisibility only entails the existence of economies of scale produced by the balancing of productive capacities, considered above, or of increasing returns in the utilisation of a single plant, due to its more efficient use as the quantity produced increases. However, this latter phenomenon has nothing to do with the economies of scale which, by definition, are linked to the use of a larger plant.

Economies in holding stocks and reserves

At the base of economies of scale there are also returns to scale linked to statistical factors. In fact, the greater the number of resources involved, the smaller, in proportion, is the quantity of reserves necessary to cope with unforeseen contingencies (for instance, machine spare parts, inventories, circulating capital, etc.).

Transaction economies

A larger scale generally confers greater bargaining power over input prices, and therefore benefits from pecuniary economies in purchasing raw materials and intermediate goods compared to companies that place smaller orders. In this case we speak of pecuniary economies to highlight the fact that nothing changes from the "physical" point of view of the returns to scale. Furthermore, supply contracts entail fixed costs which lead to decreasing average costs as the scale of production increases.

Economies deriving from the balancing of production capacity

Economies of productive capacity balancing derive from the possibility that a larger scale of production involves a more efficient use of the production capacities of the individual phases of the production process. If the inputs are indivisible and complementary, a small scale may be subject to idle time or to the underutilization of the productive capacity of some sub-processes. A higher production scale can make the different production capacities compatible. The reduction in machinery idle time is crucial where machinery is costly.

Economies resulting from the division of labour and the use of superior techniques

A larger scale allows for a more efficient division of labour. The economies of division of labour derive from the increase in production speed, from the possibility of using specialized personnel and adopting more efficient techniques. An increase in the division of labour inevitably leads to changes in the quality of inputs and outputs.

Managerial economies

Many administrative and organizational activities are mostly cognitive and, therefore, largely independent of the scale of production. When the size of the company and the division of labour increase, there are a number of advantages due to the possibility of making organizational management more effective and perfecting accounting and control techniques. Furthermore, the procedures and routines that turned out to be the best can be reproduced by managers at different times and places.

Learning and growth economies

Learning and growth economies are at the base of dynamic economies of scale, associated with the process of growth of the scale dimension and not with the dimension of scale per se. Learning by doing implies improvements in the ability to perform and promotes the introduction of incremental innovations with a progressive lowering of average costs. Learning economies are directly proportional to cumulative production (the experience curve). Growth economies occur when a company acquires an advantage by increasing its size. These economies are due to the presence of some resource or competence that is not fully utilized, or to the existence of specific market positions that create a differential advantage in expanding the size of the firm. Growth economies disappear once the scale expansion process is completed. For example, a company that owns a supermarket chain benefits from an economy of growth if, in opening a new supermarket, it gets an increase in the price of the land it owns around the new supermarket. Selling this land to economic operators who wish to open shops near the supermarket allows the company to profit from the revaluation of the building land.

Capital and operating cost

Overall costs of capital projects are known to be subject to economies of scale. A crude estimate is that if the capital cost for a given sized piece of equipment is known, changing the size will change the capital cost by the 0.6 power of the capacity ratio (the "point-six power rule").
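A minimal Python sketch of this rule of thumb; the equipment cost below is an invented figure:

def scaled_capital_cost(known_cost, capacity_ratio, exponent=0.6):
    # 'Point-six power rule': cost scales with the capacity ratio
    # raised to ~0.6. A crude engineering heuristic, not an exact law.
    return known_cost * capacity_ratio ** exponent

# Doubling capacity raises the estimated capital cost by only ~52%:
print(round(scaled_capital_cost(1_000_000, 2.0)))  # ~1515717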

In estimating capital cost, it typically requires an insignificant amount of labor, and possibly not much more in materials, to install electrical wire or pipe of significantly greater capacity.

The cost of a unit of capacity of many types of equipment, such as electric motors, centrifugal pumps, diesel and gasoline engines, decreases as size increases. Also, the efficiency increases with size.

Crew size and other operating costs for ships, trains and airplanes

Operating crew size for ships, airplanes, trains, etc., does not increase in direct proportion to capacity. (Operating crew consists of pilots, co-pilots, navigators, etc. and does not include passenger service personnel.) Many aircraft models were significantly lengthened or "stretched" to increase payload.

Many manufacturing facilities, especially those making bulk materials like chemicals, refined petroleum products, cement and paper, have labor requirements that are not greatly influenced by changes in plant capacity. This is because labor requirements of automated processes tend to be based on the complexity of the operation rather than production rate, and many manufacturing facilities have nearly the same basic number of processing steps and pieces of equipment, regardless of production capacity.

Economical use of byproducts

Karl Marx noted that large-scale manufacturing allowed economical use of products that would otherwise be waste. Marx cited the chemical industry as an example; today, along with petrochemicals, it remains highly dependent on turning various residual reactant streams into salable products. In the pulp and paper industry it is economical to burn bark and fine wood particles to produce process steam and to recover the spent pulping chemicals for conversion back to a usable form.

Economies of scale and returns to scale

Economies of scale is related to and can easily be confused with the theoretical economic notion of returns to scale. Where economies of scale refer to a firm's costs, returns to scale describe the relationship between inputs and outputs in a long-run (all inputs variable) production function. A production function has constant returns to scale if increasing all inputs by some proportion results in output increasing by that same proportion. Returns are decreasing if, say, doubling inputs results in less than double the output, and increasing if more than double the output. If a mathematical function is used to represent the production function, and if that production function is homogeneous, returns to scale are represented by the degree of homogeneity of the function. Homogeneous production functions with constant returns to scale are first degree homogeneous, increasing returns to scale are represented by degrees of homogeneity greater than one, and decreasing returns to scale by degrees of homogeneity less than one.
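A small numeric check in Python, using an assumed Cobb-Douglas production function (not taken from the article) whose degree of homogeneity is the sum of its exponents:

def cobb_douglas(k, l, alpha=0.6, beta=0.5):
    # f(K, L) = K**alpha * L**beta is homogeneous of degree alpha + beta,
    # so scaling both inputs by s scales output by s**(alpha + beta).
    return k ** alpha * l ** beta

k, l, s = 10.0, 20.0, 2.0
ratio = cobb_douglas(s * k, s * l) / cobb_douglas(k, l)
print(ratio)  # = 2**1.1, about 2.14 > 2: increasing returns to scale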

If the firm is a perfect competitor in all input markets, and thus the per-unit prices of all its inputs are unaffected by how much of the inputs the firm purchases, then it can be shown that at a particular level of output, the firm has economies of scale if and only if it has increasing returns to scale, has diseconomies of scale if and only if it has decreasing returns to scale, and has neither economies nor diseconomies of scale if it has constant returns to scale. In this case, with perfect competition in the output market the long-run equilibrium will involve all firms operating at the minimum point of their long-run average cost curves (i.e., at the borderline between economies and diseconomies of scale).

If, however, the firm is not a perfect competitor in the input markets, then the above conclusions are modified. For example, if there are increasing returns to scale in some range of output levels, but the firm is so big in one or more input markets that increasing its purchases of an input drives up the input's per-unit cost, then the firm could have diseconomies of scale in that range of output levels. Conversely, if the firm is able to get bulk discounts of an input, then it could have economies of scale in some range of output levels even if it has decreasing returns in production in that output range.

In essence, returns to scale refer to the variation in the relationship between inputs and output. This relationship is therefore expressed in "physical" terms. But when talking about economies of scale, the relation taken into consideration is that between the average production cost and the dimension of scale. Economies of scale therefore are affected by variations in input prices. If input prices remain the same as their quantities purchased by the firm increase, the notions of increasing returns to scale and economies of scale can be considered equivalent. However, if input prices vary in relation to their quantities purchased by the company, it is necessary to distinguish between returns to scale and economies of scale. The concept of economies of scale is more general than that of returns to scale since it includes the possibility of changes in the price of inputs when the quantity purchased of inputs varies with changes in the scale of production.

The literature assumed that due to the competitive nature of reverse auctions, and in order to compensate for lower prices and lower margins, suppliers seek higher volumes to maintain or increase the total revenue. Buyers, in turn, benefit from the lower transaction costs and economies of scale that result from larger volumes. In part as a result, numerous studies have indicated that the procurement volume must be sufficiently high to provide sufficient profits to attract enough suppliers, and provide buyers with enough savings to cover their additional costs.

However, surprisingly enough, Shalev and Asbjornse found, in their research based on 139 reverse auctions conducted in the public sector by public sector buyers, that higher auction volume, or economies of scale, did not lead to better success of the auction. They found that auction volume did not correlate with competition, nor with the number of bidders, suggesting that auction volume does not promote additional competition. They noted, however, that their data included a wide range of products and that the degree of competition in each market varied significantly, and they suggest that further research should be conducted to determine whether these findings remain the same when purchasing the same product in both small and high volumes. Keeping competitive factors constant, increasing auction volume may further increase competition.

Economies of scale in the history of economic analysis

Economies of scale in classical economists

The first systematic analysis of the advantages of the division of labour capable of generating economies of scale, both in a static and dynamic sense, was that contained in the famous First Book of Wealth of Nations (1776) by Adam Smith, generally considered the founder of political economy as an autonomous discipline. 

John Stuart Mill, in Chapter IX of the First Book of his Principles, referring to the work of Charles Babbage (On the Economy of Machinery and Manufactures), extensively analyses the relationships between increasing returns and scale of production, all inside the production unit.

Economies of scale in Marx and distributional consequences

In Das Kapital (1867), Karl Marx, referring to Charles Babbage, extensively analyses economies of scale and concludes that they are one of the factors underlying the ever-increasing concentration of capital. Marx observes that in the capitalist system the technical conditions of the work process are continuously revolutionized in order to increase the surplus by improving the productive force of work. According to Marx, the cooperation of many workers brings about an economy in the use of the means of production and an increase in productivity due to the increase in the division of labour. Furthermore, the increase in the size of the machinery allows significant savings in construction, installation and operation costs. The tendency to exploit economies of scale entails a continuous increase in the volume of production which, in turn, requires a constant expansion of the size of the market. However, if the market does not expand at the same rate as production increases, overproduction crises can occur. According to Marx, the capitalist system is therefore characterized by two tendencies, connected to economies of scale: towards growing concentration and towards economic crises due to overproduction.

In his 1844 Economic and Philosophic Manuscripts, Karl Marx observes that economies of scale have historically been associated with an increasing concentration of private wealth and have been used to justify such concentration. Marx points out that concentrated private ownership of large-scale economic enterprises is a historically contingent fact, and not essential to the nature of such enterprises. In the case of agriculture, for example, Marx calls attention to the sophistical nature of the arguments used to justify the system of concentrated ownership of land:
As for large landed property, its defenders have always sophistically identified the economic advantages offered by large-scale agriculture with large-scale landed property, as if it were not precisely as a result of the abolition of property that this advantage, for one thing, received its greatest possible extension, and, for another, only then would be of social benefit.
Instead of concentrated private ownership of land, Marx recommends that economies of scale should instead be realized by associations:
Association, applied to land, shares the economic advantage of large-scale landed property, and first brings to realization the original tendency inherent in land-division, namely, equality. In the same way association re-establishes, now on a rational basis, no longer mediated by serfdom, overlordship and the silly mysticism of property, the intimate ties of man with the earth, for the earth ceases to be an object of huckstering, and through free labor and free enjoyment becomes once more a true personal property of man.

Economies of scale in Marshall

Alfred Marshall notes that "some, among whom Cournot himself", have considered "the internal economies [...] apparently without noticing that their premises lead inevitably to the conclusion that, whatever firm first gets a good start will obtain a monopoly of the whole business of its trade … ". Marshall believes that there are factors that limit this trend toward monopoly, and in particular:
  • the death of the founder of the firm and the difficulty that the successors may have in inheriting his/her entrepreneurial skills;
  • the difficulty of reaching new markets for one's goods;
  • the growing difficulty of being able to adapt to changes in demand and to new techniques of production;
  • the effects of external economies, that is, the particular type of economies of scale connected not to the production scale of an individual production unit, but to that of an entire sector.

Sraffa’s critique

Piero Sraffa observes that Marshall, in order to justify the operation of the law of increasing returns without it coming into conflict with the hypothesis of free competition, tended to highlight the advantages of external economies linked to an increase in the production of an entire sector of activity. However, "those economies which are external from the point of view of the individual firm, but internal as regards the industry in its aggregate, constitute precisely the class which is most seldom to be met with". "In any case", Sraffa notes, "in so far as external economies of the kind in question exist, they are not likely to be called forth by small increases in production", as required by the marginalist theory of price. Sraffa points out that, in the equilibrium theory of individual industries, the presence of external economies cannot play an important role because this theory is based on marginal changes in the quantities produced.

Sraffa concludes that, if the hypothesis of perfect competition is maintained, economies of scale should be excluded. He then suggests the possibility of abandoning the assumption of free competition to address the study of firms that have their own particular market. This stimulated a whole series of studies on cases of imperfect competition in Cambridge. However, in the succeeding years Sraffa followed a different path of research that brought him to write and publish his main work, Production of Commodities by Means of Commodities (Sraffa, 1960). In this book, Sraffa determines relative prices assuming no changes in output, so that no question arises as to the variation or constancy of returns.

Economies of scale and the tendency towards monopoly: ‘Cournot's dilemma’

It has been noted that in many industrial sectors there are numerous companies with different sizes and organizational structures, despite the presence of significant economies of scale. This contradiction, between the empirical evidence and the logical incompatibility between economies of scale and competition, has been called ‘Cournot's dilemma’. As Mario Morroni observes, Cournot's dilemma appears to be unsolvable if we only consider the effects of economies of scale on the dimension of scale. If, on the other hand, the analysis is expanded to include the aspects concerning the development of knowledge and the organization of transactions, it is possible to conclude that economies of scale do not always lead to monopoly. In fact, the competitive advantages deriving from the development of the firm's capabilities and from the management of transactions with suppliers and customers can counterbalance those provided by scale, thus counteracting the tendency towards monopoly inherent in economies of scale.

In other words, the heterogeneity of the organizational forms and sizes of the companies operating in a sector of activity can be determined by factors regarding the quality of the products, production flexibility, contractual methods, learning opportunities, the heterogeneity of preferences of customers who express a differentiated demand with respect to the quality of the product, and assistance before and after the sale. Very different organizational forms can therefore co-exist in the same sector of activity, even in the presence of economies of scale: for example, flexible production on a large scale, small-scale flexible production, mass production, industrial production based on rigid technologies associated with flexible organizational systems, and traditional artisan production. The considerations regarding economies of scale are therefore important, but not sufficient to explain the size of the company and the market structure. It is also necessary to take into account the factors linked to the development of capabilities and the management of transaction costs.

Cetacean intelligence

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Cet...