
Friday, July 10, 2020

Hermann Ebbinghaus

From Wikipedia, the free encyclopedia
 
Hermann Ebbinghaus
Born: January 24, 1850
Died: February 26, 1909 (aged 59)
Citizenship: German
Known for: Serial position effect, Über das Gedächtnis
Fields: Psychology
Institutions: University of Berlin, University of Breslau, University of Halle
Influences: Gustav Fechner
Influenced: Lev Vygotsky, Lewis Terman, Charlotte Bühler, William Stern

Hermann Ebbinghaus (January 24, 1850 – February 26, 1909) was a German psychologist who pioneered the experimental study of memory, and is known for his discovery of the forgetting curve and the spacing effect. He was also the first person to describe the learning curve. He was the father of the neo-Kantian philosopher Julius Ebbinghaus.

Early life

Ebbinghaus was born in Barmen, in the Rhine Province of the Kingdom of Prussia, as the son of a wealthy merchant, Carl Ebbinghaus. Little is known about his infancy except that he was brought up in the Lutheran faith and was a pupil at the town Gymnasium. At the age of 17 (1867), he began attending the University of Bonn, where he had planned to study history and philology. However, during his time there he developed an interest in philosophy. In 1870, his studies were interrupted when he served with the Prussian Army in the Franco-Prussian War. Following this short stint in the military, Ebbinghaus finished his dissertation on Eduard von Hartmann's Philosophie des Unbewussten (philosophy of the unconscious) and received his doctorate on August 16, 1873, when he was 23 years old. During the next three years, he spent time at Halle and Berlin.

Professional career

After acquiring his PhD, Ebbinghaus moved around England and France, tutoring students to support himself. In England, he may have taught in two small schools in the south of the country (Gorfein, 1885). In London, in a used bookstore, he came across Gustav Fechner's book Elemente der Psychophysik (Elements of Psychophysics), which spurred him to conduct his famous memory experiments. After beginning his studies at the University of Berlin, he founded the third psychological testing laboratory in Germany (after those of Wilhelm Wundt and Georg Elias Müller). He began his memory studies there in 1879. In 1885 — the same year that he published his monumental work, Über das Gedächtnis. Untersuchungen zur experimentellen Psychologie, later published in English under the title Memory: A Contribution to Experimental Psychology — he was made a professor at the University of Berlin, most likely in recognition of this publication. In 1890, along with Arthur König, he founded the psychological journal Zeitschrift für Psychologie und Physiologie der Sinnesorgane ("The Psychology and Physiology of the Sense Organs").

In 1894, he was passed over for promotion to head of the philosophy department at Berlin, most likely due to his lack of publications. Instead, Carl Stumpf received the promotion. As a result of this, Ebbinghaus left to join the University of Breslau (now Wrocław, Poland), in a chair left open by Theodor Lipps (who took over Stumpf's position when he moved to Berlin). While in Breslau, he worked on a commission that studied how children's mental ability declined during the school day. While the specifics on how these mental abilities were measured have been lost, the successes achieved by the commission laid the groundwork for future intelligence testing. At Breslau, he again founded a psychological testing laboratory.

In 1902, Ebbinghaus published his next piece of writing, Die Grundzüge der Psychologie (Fundamentals of Psychology). It was an instant success and remained one long after his death. In 1904, he moved to Halle, where he spent the last few years of his life. His last published work, Abriss der Psychologie (Outline of Psychology), appeared in 1908. This, too, continued to be a success, being re-released in eight different editions. Shortly after this publication, on February 26, 1909, Ebbinghaus died of pneumonia at the age of 59.

Research on memory

Ebbinghaus was determined to show that higher mental processes could actually be studied using experimentation, which was in opposition to the popularly held thought of the time. To control for most potentially confounding variables, Ebbinghaus wanted to use simple acoustic encoding and maintenance rehearsal, for which a list of words could have been used. As learning would be affected by prior knowledge and understanding, he needed something that could be easily memorized but which had no prior cognitive associations. Easily formable associations with regular words would interfere with his results, so he used items that would later be called "nonsense syllables" (also known as CVC trigrams). A nonsense syllable is a consonant-vowel-consonant combination in which the two consonants differ and the syllable has no prior meaning. BOL (which sounds like "ball") and DOT (already a word) would therefore not be allowed, whereas syllables such as DAX, BOK, and YAT would all be acceptable (though Ebbinghaus left no examples). After eliminating the meaning-laden syllables, Ebbinghaus ended up with 2,300 syllables. Once he had created his collection of syllables, he would pull a number of random syllables from a box and write them down in a notebook. Then, to the regular sound of a metronome, and with the same voice inflection, he would read out the syllables and attempt to recall them at the end of the procedure. One investigation alone required 15,000 recitations.
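As a rough illustration of the procedure (not Ebbinghaus's own method: he worked with German syllables, and the letter inventory, exclusion list, and list length below are assumptions), a minimal sketch of how such a CVC pool could be generated and sampled:

```python
import itertools
import random

# Illustrative inventories: the letters and the tiny exclusion list are assumptions.
CONSONANTS = list("bcdfghjklmnprstvwz")
VOWELS = list("aeiou")
REAL_WORDS = {"dot", "bat", "cat", "dog", "pen"}  # stand-in list of meaningful words


def cvc_pool():
    """Build consonant-vowel-consonant trigrams with two different consonants
    and drop any that are already meaningful words."""
    pool = []
    for c1, v, c2 in itertools.product(CONSONANTS, VOWELS, CONSONANTS):
        if c1 == c2:
            continue  # the consonant must not repeat
        syllable = c1 + v + c2
        if syllable in REAL_WORDS:
            continue  # drop meaning-laden items such as DOT
        pool.append(syllable)
    return pool


pool = cvc_pool()                     # Ebbinghaus's own German inventory yielded about 2,300
study_list = random.sample(pool, 16)  # draw a random study list, as from the box
print(len(pool), study_list[:5])
```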

It was later determined that humans impose meaning even on nonsense syllables to make them more meaningful. The nonsense syllable PED (which is the first three letters of the word "pedal") turns out to be less nonsensical than a syllable such as KOJ; the syllables are said to differ in association value. It appears that Ebbinghaus recognized this, and only referred to the strings of syllables as "nonsense" in that the syllables might be less likely to have a specific meaning and he should make no attempt to make associations with them for easier retrieval.

Limitations to memory research

There are several limitations to his work on memory. The most important one was that Ebbinghaus was the only subject in his study. This limited the study's generalizability to the population. Although he attempted to regulate his daily routine to maintain more control over his results, his decision to avoid the use of participants sacrificed the external validity of the study despite sound internal validity. In addition, although he tried to account for his personal influences, there is an inherent bias when someone serves as researcher as well as participant. Also, Ebbinghaus's memory research halted research in other, more complex matters of memory such as semantic and procedural memory and mnemonics.

Contributions to memory

In 1885, he published his groundbreaking Über das Gedächtnis ("On Memory", later translated to English as Memory. A Contribution to Experimental Psychology) in which he described experiments he conducted on himself to describe the processes of learning and forgetting.

Ebbinghaus made several findings that are still relevant and supported to this day. First, Ebbinghaus created a set of 2,300 three-letter syllables to measure mental associations, which helped him find that memory is orderly. Second, and arguably his most famous finding, was the forgetting curve. The forgetting curve describes the exponential loss of information that one has learned. The sharpest decline occurs in the first twenty minutes and the decay is significant through the first hour. The curve levels off after about one day.

A typical representation of the forgetting curve
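The exponential decline described above is often summarized with the approximation R = exp(-t/S), where R is retention, t the elapsed time, and S a stability parameter. A small sketch; both the formula and the stability value are common illustrative choices, not figures from Ebbinghaus's 1885 data:

```python
import math

def retention(t_hours, stability_hours=1.2):
    """Exponential approximation of the forgetting curve: R = exp(-t / S).
    The stability value S is an illustrative assumption, not a fitted figure."""
    return math.exp(-t_hours / stability_hours)

# Checkpoints: immediately, 20 minutes, 1 hour, 1 day, 2 days
for t in (0, 1 / 3, 1, 24, 48):
    print(f"after {t:6.2f} h retention is roughly {retention(t):.2f}")
```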
 
The learning curve described by Ebbinghaus refers to how fast one learns information. The sharpest increase occurs after the first try and then gradually evens out, meaning that less and less new information is retained after each repetition. Like the forgetting curve, the learning curve is exponential. Ebbinghaus had also documented the serial position effect, which describes how the position of an item affects recall. The two main concepts in the serial position effect are recency and primacy. The recency effect describes the increased recall of the most recent information because it is still in the short-term memory. The primacy effect causes better memory of the first items in a list due to increased rehearsal and commitment to long-term memory.

Another important discovery is that of savings. This refers to the amount of information retained in the subconscious even after this information cannot be consciously accessed. Ebbinghaus would memorize a list of items until perfect recall and then would not access the list until he could no longer recall any of its items. He then would relearn the list, and compare the new learning curve to the learning curve of his previous memorization of the list. The second list was generally memorized faster, and this difference between the two learning curves is what Ebbinghaus called "savings". Ebbinghaus also described the difference between involuntary and voluntary memory, the former occurring "with apparent spontaneity and without any act of the will" and the latter being brought "into consciousness by an exertion of the will".
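A savings score is conventionally computed as the percentage of the original learning effort spared on relearning. The trial counts in this sketch are hypothetical, purely to show the arithmetic:

```python
def savings_score(original_trials, relearning_trials):
    """Ebbinghaus-style savings: the share of the original learning effort
    that is spared when the list is relearned, as a percentage."""
    return 100 * (original_trials - relearning_trials) / original_trials

# Hypothetical figures: 20 recitations to first mastery, 13 to relearn later.
print(f"savings = {savings_score(20, 13):.0f}%")  # -> savings = 35%
```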

Prior to Ebbinghaus, most contributions to the study of memory were undertaken by philosophers and centered on observational description and speculation. For example, Immanuel Kant used pure description to discuss recognition and its components, and Sir Francis Bacon claimed that the simple observation of the rote recollection of a previously learned list was "no use to the art" of memory. This dichotomy between descriptive and experimental study of memory would resonate later in Ebbinghaus's life, particularly in his public argument with former colleague Wilhelm Dilthey. However, more than a century before Ebbinghaus, Johann Andreas Segner invented the "Segner-wheel" to measure the duration of after-images by determining how fast a wheel with a hot coal attached had to spin for the red trail of the ember to appear as a complete circle.

Ebbinghaus's effect on memory research was almost immediate. With very few works published on memory in the previous two millennia, Ebbinghaus's works spurred memory research in the United States in the 1890s, with 32 papers published in 1894 alone. This research was coupled with the growing development of mechanized mnemometers, or devices that aided in the recording and study of memory.

The reaction to his work in his day was mostly positive. Noted psychologist William James called the studies "heroic" and said that they were "the single most brilliant investigation in the history of psychology". Edward B. Titchener also mentioned that the studies were the greatest undertaking in the topic of memory since Aristotle.

Other contributions

Ebbinghaus can also be credited with pioneering sentence completion exercises, which he developed in studying the abilities of schoolchildren. It was these same exercises that Alfred Binet borrowed and incorporated into the Binet-Simon intelligence scale. Sentence completion has since been used extensively in memory research, especially in tapping into measures of implicit memory, and has also been used in psychotherapy as a tool to help tap into the motivations and drives of the patient. He also influenced Charlotte Bühler, who along with Lev Vygotsky and others went on to study language meaning and society.

The Ebbinghaus Illusion. Note that the two orange circles appear to be different sizes, even though they are equal.

Ebbinghaus is also credited with discovering an optical illusion now known after its discoverer—the Ebbinghaus illusion, which is an illusion of relative size perception. In the best-known version of this illusion, two circles of identical size are placed near to each other and one is surrounded by large circles while the other is surrounded by small circles; the first central circle then appears smaller than the second central circle. This illusion is now used extensively in research in cognitive psychology, to find out more about the various perception pathways in our brain.

Ebbinghaus is also largely credited with drafting the first standard research report. In his paper on memory, Ebbinghaus arranged his research into four sections: the introduction, the methods, the results, and a discussion section. The clarity and organization of this format were so impressive to contemporaries that it has become standard in the discipline, and research reports still follow the same structure laid out by Ebbinghaus.


After his work on memory, Ebbinghaus also made a contribution to color vision: in 1890 he proposed a double-pyramid design with the corners rounded off.

Unlike notable contemporaries such as Titchener and James, Ebbinghaus did not promote any specific school of psychology, nor was he known for extensive lifetime research, having produced only three major works. He never attempted to bestow upon himself the title of the pioneer of experimental psychology, did not seek to have any "disciples", and left the exploitation of the new field to others.

Discourse on the nature of psychology

In addition to pioneering experimental psychology, Ebbinghaus was also a strong defender of this direction of the new science, as is illustrated by his public dispute with University of Berlin colleague, Wilhelm Dilthey. Shortly after Ebbinghaus left Berlin in 1893, Dilthey published a paper extolling the virtues of descriptive psychology, and condemning experimental psychology as boring, claiming that the mind was too complex, and that introspection was the desired method of studying the mind. The debate at the time had been primarily whether psychology should aim to explain or understand the mind and whether it belonged to the natural or human sciences. Many had seen Dilthey's work as an outright attack on experimental psychology, Ebbinghaus included, and he responded to Dilthey with a personal letter and also a long scathing public article. Amongst his counterarguments against Dilthey he mentioned that it is inevitable for psychology to do hypothetical work and that the kind of psychology that Dilthey was attacking was the one that existed before Ebbinghaus's "experimental revolution". Charlotte Bühler echoed his words some forty years later, stating that people like Ebbinghaus "buried the old psychology in the 1890s". Ebbinghaus explained his scathing review by saying that he could not believe that Dilthey was advocating the status quo of structuralists like Wilhelm Wundt and Titchener and attempting to stifle psychology's progress.

Some contemporary texts still describe Ebbinghaus as a philosopher rather than a psychologist, and he spent his entire career as a professor of philosophy. However, Ebbinghaus would probably have described himself as a psychologist, considering that he fought to have psychology viewed as a discipline separate from philosophy.

Influences

There has been some speculation as to what influenced Ebbinghaus in his undertakings. None of his professors seem to have influenced him, nor are there suggestions that his colleagues affected him. Von Hartmann's work, on which Ebbinghaus based his doctorate, did suggest that higher mental processes were hidden from view, which may have spurred Ebbinghaus to attempt to prove otherwise. The one influence that has always been cited as having inspired Ebbinghaus was Gustav Fechner's two-volume Elemente der Psychophysik ("Elements of Psychophysics", 1860), a book which he purchased second-hand in England. It is said that the meticulous mathematical procedures impressed Ebbinghaus so much that he wanted to do for psychology what Fechner had done for psychophysics. This inspiration is also evident in that Ebbinghaus dedicated his second work, Die Grundzüge der Psychologie, to Fechner, signing it "I owe everything to you."

Experience curve effects

From Wikipedia, the free encyclopedia

In industry, models of the learning or experience curve effect express the relationship between experience in producing a good and the efficiency of production, i.e., efficiency gains that follow investment in the effort. The effect has large implications for market share, which can increase competitive advantage over time due to experience curve effects.

History: from psychological learning curves to the learning curve effect

An early empirical demonstration of learning curves was produced in 1885 by the German psychologist Hermann Ebbinghaus. Ebbinghaus was investigating the difficulty of memorizing verbal stimuli. He found that performance increased in proportion to experience (practice and testing) in memorizing the word set (more detail about the complex processes of learning is discussed in the Learning curve article).

Wright's law and the discovery of the learning curve effect

Work on human memory was later found to generalize: the more times a task has been performed, the less time is required on each subsequent iteration. This relationship was probably first quantified in an industrial setting in 1936 by Theodore Paul Wright, an engineer at Curtiss-Wright in the United States. Wright found that every time total aircraft production doubled, the required labour time for a new aircraft fell by 20%. Subsequent studies in other industries have yielded different values, ranging from only a couple of percent up to 30%, but in most cases the value in each industry was a constant percentage and did not vary at different scales of operation. The learning curve model posits that for each doubling of the total quantity of items produced, costs decrease by a fixed proportion. Generally, the production of any good or service shows the learning curve or experience curve effect. Each time cumulative volume doubles, value-added costs (including administration, marketing, distribution, and manufacturing) fall by a constant percentage.

The phrase experience curve was proposed by Bruce D. Henderson and the Boston Consulting Group (BCG) based on analyses of overall cost behavior in the 1960s. While accepting that the learning curve formed an attractive explanation, he used the name experience curve, suggesting that "the two are related, but quite different." In 1968, Henderson and BCG began to emphasize the implications of the experience curve for strategy. Research by BCG in the 1960s and 70s observed experience curve effects for various industries that ranged from 10 to 25%.

Unit curve

Mathematically, Wright's law takes the form of a power function. Empirical research has validated the following form for the unit cost of producing unit number x (Px), starting with unit cost P1, for a wide variety of different products and services:

Px = P1 · x^(log2 b),

where b is the learning rate and 1 - b is the proportional reduction in the unit cost with each doubling of cumulative production. To see this, note that doubling cumulative output from x to 2x multiplies the unit cost by (2x)^(log2 b) / x^(log2 b) = 2^(log2 b) = b.

The parameter b is estimated statistically and thus does not exactly predict the unit cost of producing any future unit. However, it has been found to be useful in many contexts. Across numerous industries (see below), estimates of b range from 0.75 to 0.9 (i.e., 1 - b ranges from 0.1 to 0.25).
The unit curve was expressed in slightly different nomenclature by Henderson:

Cn = C1 · n^(-a),

where:
  • C1 is the cost of the first unit of production
  • Cn is the cost of the n-th unit of production
  • n is the cumulative volume of production
  • a is the elasticity of cost with regard to output
These effects are often expressed graphically. The curve is plotted with the cumulative units produced on the horizontal axis and unit cost on the vertical axis. The BCG group used the value of b to name a given industry curve. Thus a curve showing a 15% cost reduction for every doubling of output was called an “85% experience curve”.
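As a sketch of the arithmetic, the unit-cost formula above can be evaluated directly. The 80% curve (b = 0.8) corresponds to Wright's observation of a 20% labour reduction per doubling; the first-unit cost of 100 is an arbitrary illustrative number:

```python
import math

def unit_cost(x, first_unit_cost=100.0, b=0.8):
    """Wright's-law unit cost: P_x = P_1 * x ** log2(b).
    b = 0.8 means cost falls to 80% of its previous value each time
    cumulative output doubles (an "80% experience curve")."""
    return first_unit_cost * x ** math.log2(b)

for x in (1, 2, 4, 8, 16):
    print(x, round(unit_cost(x), 1))  # 100.0, 80.0, 64.0, 51.2, 41.0
```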

Reasons for the effect

Examples
NASA quotes the following experience curves:
  • Aerospace: 85%
  • Shipbuilding: 80-85%
  • Complex machine tools for new models: 75–85%
  • Repetitive electronics manufacturing: 90–95%
  • Repetitive machining or punch-press operations: 90–95%
  • Repetitive electrical operations: 75–85%
  • Repetitive welding operations: 90%
  • Raw materials: 93–96%
  • Purchased parts: 85–88%
The primary reason why experience and learning curve effects apply is the complex processes of learning involved. As discussed in the main article, learning generally begins with making successively larger finds and then successively smaller ones. The equations for these effects come from the usefulness of mathematical models for certain somewhat predictable aspects of those generally non-deterministic processes.
They include:
  • Labour efficiency: Workers become physically more dexterous. They become mentally more confident and spend less time hesitating, learning, experimenting, or making mistakes. Over time they learn short-cuts and improvements. This applies to all employees and managers, not just those directly involved in production.
  • Standardization, specialization, and methods improvements: As processes, parts, and products become more standardized, efficiency tends to increase. When employees specialize in a limited set of tasks, they gain more experience with these tasks and operate at a faster rate.
  • Technology-driven learning: Automated production technology and information technology can introduce efficiencies as they are implemented and people learn how to use them efficiently and effectively.
  • Better use of equipment: As total production has increased, manufacturing equipment will have been more fully exploited, lowering fully accounted unit costs. In addition, purchase of more productive equipment can be justifiable.
  • Changes in the resource mix: As a company acquires experience, it can alter its mix of inputs and thereby become more efficient.
  • Product redesign: As the manufacturers and consumers have more experience with the product, they can usually find improvements. This filters through to the manufacturing process. A good example of this is Cadillac's testing of various "bells and whistles" specialty accessories. The ones that did not break became mass-produced in other General Motors products; the ones that didn't stand the test of user "beatings" were discontinued, saving the car company money. As General Motors produced more cars, they learned how to best produce products that work for the least money.
  • Network-building and use-cost reductions (network effects): As a product enters more widespread use, the consumer uses it more efficiently because they're familiar with it. One fax machine in the world can do nothing, but if everyone has one, they build an increasingly efficient network of communications. Another example is email accounts; the more there are, the more efficient the network is, the lower everyone's cost per utility of using it.
  • Shared experience effects: Experience curve effects are reinforced when two or more products share a common activity or resource. Any efficiency learned from one product can be applied to the other products.

Experience curve discontinuities

The experience curve effect can on occasion come to an abrupt stop. Graphically, the curve is truncated. Existing processes become obsolete and the firm must upgrade to remain competitive. The upgrade will mean the old experience curve will be replaced by a new one. This occurs when:
  • Competitors introduce new products or processes that you must respond to
  • Key suppliers have much bigger customers that determine the price of products and services, and that becomes the main cost driver for the product
  • Technological change requires that you or your suppliers change processes
In any of these cases, existing experience curve strategies must be re-evaluated.

Strategic consequences of the effect

BCG founder Henderson published on the development of the experience curve. According to Henderson, its first "attempt to explain cost behavior over time in a process industry" began in 1966. The datum he focussed on was the striking correlation between competitive profitability and market share. Using price data in the semiconductor industry supplied by the Electronic Industries Association, he suggested that not one but two patterns emerged.
"In one pattern, prices, in current dollars, remained constant for long periods and then began a relatively steep and long continued decline in constant dollars. In the other pattern, prices, in constant dollars, declined steadily at a constant rate of about 25 percent each time accumulated experience doubled. That was the experience curve."
The suggestion was that failure of production to show the learning curve effect was a risk indicator. The BCG strategists examined the consequences of the experience effect for businesses. They concluded that because relatively low cost of operations is a very powerful strategic advantage, firms should invest in maximising these learning and experience effects and that market share is underestimated as an enabler of this investment. The reasoning is that increased activity leads to increased learning, which leads to lower costs, which can lead to lower prices, which can lead to increased market share, which can lead to increased profitability and market dominance. This was particularly true when a firm had an early leadership in market share. It was suggested that if a company cannot get enough market share to be competitive, it should exit that business and concentrate resources where it is possible to take advantage of experience effects and gain (preferably dominant) market share. The BCG strategists developed product portfolio techniques like the BCG Matrix (in part) to manage this strategy.

One consequence of experience curve strategy is that it predicts that cost savings should be passed on as price decreases rather than kept as profit margin increases. The BCG strategists felt that maintaining a relatively high price, although very profitable in the short run, spelled disaster for the strategy in the long run. High profits would encourage competitors to enter the market, triggering a steep price decline and a competitive shakeout. If prices were reduced as unit costs fell (due to experience curve effects), then competitive entry would be discouraged while market share increases should increase overall profitability.

Criticisms

Ernst R. Berndt claims that in most organizations, experience effects are so closely intertwined with economies of scale (efficiencies arising from an increased scale of production) that it is impossible to separate the two. In practice, this view suggests, economies of scale coincide with experience effects (efficiencies arising from the learning and experience gained over repeated activities). The approach, however, accepts the existence of both as underlying causes. Economies of scale afford experience and experience may afford economies of scale.

Approaches such as Porter's generic strategies based on product differentiation and focused market segmentation have been proposed as alternative strategies for leadership that do not rely on lower unit costs.

Attempts to use the learning curve effect to improve competitive advantage, for instance by pre-emptively expanding production, have been criticised, with factors such as bounded rationality and durable products cited as reasons.

The well travelled road effect may lead people to overestimate the effect of the experience curve.

Economies of scale

From Wikipedia, the free encyclopedia
 
As quantity of production increases from Q to Q2, the average cost of each unit decreases from C to C1. LRAC is the long-run average cost

In microeconomics, economies of scale are the cost advantages that enterprises obtain due to their scale of operation (typically measured by the amount of output produced), with cost per unit of output decreasing with increasing scale. At the basis of economies of scale there may be technical, statistical or organizational factors, or factors related to the degree of market control.

Economies of scale apply to a variety of organizational and business situations and at various levels, such as a production unit, a plant or an entire enterprise. When average costs start falling as output increases, economies of scale occur. Some economies of scale, such as the capital cost of manufacturing facilities and friction loss of transportation and industrial equipment, have a physical or engineering basis.

Another source of scale economies is the possibility of purchasing inputs at a lower per-unit cost when they are purchased in large quantities.

The economic concept dates back to Adam Smith and the idea of obtaining larger production returns through the use of division of labor. Diseconomies of scale are the opposite.

Economies of scale often have limits, such as passing the optimum design point, beyond which costs per additional unit begin to increase. Common limits include exceeding the nearby raw material supply, such as wood in the lumber, pulp and paper industry. A common limit for low-cost-per-unit-weight commodities is saturating the regional market, thus having to ship product uneconomic distances. Other limits include using energy less efficiently or having a higher defect rate.

Large producers are usually efficient at long runs of a product grade (a commodity) and find it costly to switch grades frequently. They will, therefore, avoid specialty grades even though they have higher margins. Often smaller (usually older) manufacturing facilities remain viable by changing from commodity-grade production to specialty products.

Economies of scale must be distinguished from economies stemming from an increase in the production of a given plant. When a plant is used below its optimal production capacity, increases in its degree of utilization bring about decreases in the total average cost of production. As noted by, among others, Nicholas Georgescu-Roegen (1966) and Nicholas Kaldor (1972), these economies are not economies of scale.

Overview

The simple meaning of economies of scale is doing things more efficiently with increasing size. Common sources of economies of scale are purchasing (bulk buying of materials through long-term contracts), managerial (increasing the specialization of managers), financial (obtaining lower-interest charges when borrowing from banks and having access to a greater range of financial instruments), marketing (spreading the cost of advertising over a greater range of output in media markets), and technological (taking advantage of returns to scale in the production function). Each of these factors reduces the long run average costs (LRAC) of production by shifting the short-run average total cost (SRATC) curve down and to the right.

Economies of scale is a concept that may explain real-world phenomena such as patterns of international trade or the number of firms in a market. The exploitation of economies of scale helps explain why companies grow large in some industries. It is also a justification for free trade policies, since some economies of scale may require a larger market than is possible within a particular country—for example, it would not be efficient for Liechtenstein to have its own carmaker if they only sold to their local market. A lone carmaker may be profitable, but even more so if they exported cars to global markets in addition to selling to the local market. Economies of scale also play a role in a "natural monopoly". There is a distinction between two types of economies of scale: internal and external. An industry that exhibits an internal economy of scale is one where the costs of production fall when the number of firms in the industry drops, but the remaining firms increase their production to match previous levels. Conversely, an industry exhibits an external economy of scale when costs drop due to the introduction of more firms, thus allowing for more efficient use of specialized services and machinery.

The determinants of economies of scale

Physical and engineering basis: economies of increased dimension

Some of the economies of scale recognized in engineering have a physical basis, such as the square-cube law, by which the surface of a vessel increases by the square of the dimensions while the volume increases by the cube. This law has a direct effect on the capital cost of such things as buildings, factories, pipelines, ships and airplanes.
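As a concrete illustration of the square-cube law, doubling a vessel's linear dimension multiplies its surface (and hence, roughly, its material and capital cost) by four, while its capacity grows by eight. The spherical tank and the specific numbers in this sketch are illustrative assumptions, not taken from the text:

```python
import math

def sphere_surface_and_volume(radius):
    """Surface area grows with the square of the linear dimension and volume
    with the cube, so capacity per unit of vessel material rises with size."""
    surface = 4 * math.pi * radius ** 2
    volume = (4 / 3) * math.pi * radius ** 3
    return surface, volume

for r in (1.0, 2.0, 4.0):
    s, v = sphere_surface_and_volume(r)
    print(f"radius x{r:.0f}: surface x{s / (4 * math.pi):.0f}, "
          f"volume x{v / (4 * math.pi / 3):.0f}, volume per unit surface {v / s:.2f}")
```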

In structural engineering, the strength of beams increases with the cube of the thickness. 

Drag loss of vehicles such as aircraft or ships generally increases less than proportionally with increasing cargo volume, although the physical details can be quite complicated. Therefore, making them larger usually results in less fuel consumption per ton of cargo at a given speed.

Heat loss from industrial processes varies per unit of volume for pipes, tanks and other vessels in a relationship somewhat similar to the square-cube law. In some productions, an increase in the size of the plant reduces the average variable cost, thanks to the energy savings resulting from the lower dispersion of heat.

Economies of increased dimension are often misinterpreted because of the confusion between indivisibility and the three-dimensionality of space. This confusion arises from the fact that three-dimensional production elements, such as pipes and ovens, once installed and operating, are always technically indivisible. However, the economies of scale due to the increase in size do not depend on indivisibility but exclusively on the three-dimensionality of space. Indeed, indivisibility only entails the existence of economies of scale produced by the balancing of productive capacities, considered above; or of increasing returns in the utilisation of a single plant, due to its more efficient use as the quantity produced increases. However, this latter phenomenon has nothing to do with the economies of scale which, by definition, are linked to the use of a larger plant.

Economies in holding stocks and reserves

At the base of economies of scale there are also returns to scale linked to statistical factors. In fact, the greater the number of resources involved, the smaller, in proportion, is the quantity of reserves necessary to cope with unforeseen contingencies (for instance, machine spare parts, inventories, circulating capital, etc.).
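One common way to make this statistical point concrete is the square-root pooling effect for reserves: if the demands placed on n independent, identical units are pooled, the variability of the total grows only with the square root of n, so the reserve needed per unit shrinks as scale increases. The figures below are illustrative assumptions, not from the text:

```python
import math

def safety_stock(n_units, demand_std=10.0, service_factor=1.65):
    """Reserve needed to cover demand variability across n independent,
    identical units: pooled variability grows with sqrt(n), not with n."""
    return service_factor * demand_std * math.sqrt(n_units)

for n in (1, 4, 16, 64):
    total = safety_stock(n)
    print(f"{n:>3} units: total reserve {total:6.1f}, reserve per unit {total / n:5.1f}")
```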

Transaction economies

A larger scale generally determines greater bargaining power over input prices and therefore benefits from pecuniary economies in terms of purchasing raw materials and intermediate goods compared to companies that make orders for smaller amounts. In this case we speak of pecuniary economies, to highlight the fact that nothing changes from the "physical" point of view of the returns to scale. Furthermore, supply contracts entail fixed costs which lead to decreasing average costs if the scale of production increases.

Economies deriving from the balancing of production capacity

Economies of productive capacity balancing derive from the possibility that a larger scale of production involves a more efficient use of the production capacities of the individual phases of the production process. If the inputs are indivisible and complementary, a small scale may be subject to idle times or to the underutilization of the productive capacity of some sub-processes. A higher production scale can make the different production capacities compatible. The reduction in machinery idle times is crucial in the case of a high cost of machinery.

Economies resulting from the division of labour and the use of superior techniques

A larger scale allows for a more efficient division of labour. The economies of division of labour derive from the increase in production speed, from the possibility of using specialized personnel and adopting more efficient techniques. An increase in the division of labour inevitably leads to changes in the quality of inputs and outputs.

Managerial economies

Many administrative and organizational activities are mostly cognitive and, therefore, largely independent of the scale of production. When the size of the company and the division of labour increase, there are a number of advantages due to the possibility of making organizational management more effective and perfecting accounting and control techniques. Furthermore, the procedures and routines that turned out to be the best can be reproduced by managers at different times and places.

Learning and growth economies

Learning and growth economies are at the base of dynamic economies of scale, associated with the process of growth of the scale dimension and not with the dimension of scale per se. Learning by doing implies improvements in the ability to perform and promotes the introduction of incremental innovations with a progressive lowering of average costs. Learning economies are directly proportional to cumulative production (experience curve). Growth economies occur when a company acquires an advantage by increasing its size. These economies are due to the presence of some resource or competence that is not fully utilized, or to the existence of specific market positions that create a differential advantage in expanding the size of the firms. Growth economies disappear once the scale expansion process is completed. For example, a company that owns a supermarket chain benefits from an economy of growth if, in opening a new supermarket, it gets an increase in the price of the land it owns around the new supermarket. The sale of this land to economic operators who wish to open shops near the supermarket allows the company in question to profit from the revaluation of the building land.

Capital and operating cost

Overall costs of capital projects are known to be subject to economies of scale. A crude estimate is that if the capital cost for a given sized piece of equipment is known, changing the size will change the capital cost by the 0.6 power of the capacity ratio (the "six-tenths rule").
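Stated as a formula, the rule is Cost_new = Cost_known × (Capacity_new / Capacity_known)^0.6. A short sketch with made-up figures:

```python
def scaled_capital_cost(known_cost, known_capacity, new_capacity, exponent=0.6):
    """Six-tenths rule: capital cost scales with the capacity ratio
    raised to roughly the 0.6 power."""
    return known_cost * (new_capacity / known_capacity) ** exponent

# Hypothetical: a 100 t/day plant costs $10M; estimate a 200 t/day plant.
estimate = scaled_capital_cost(10e6, 100, 200)
print(f"${estimate / 1e6:.1f}M")  # about $15.2M: double the capacity for ~52% more capital
```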

In estimating capital cost, it typically requires an insignificant amount of labor, and possibly not much more in materials, to install an electrical wire or pipe of significantly greater capacity.

The cost of a unit of capacity of many types of equipment, such as electric motors, centrifugal pumps, diesel and gasoline engines, decreases as size increases. Also, the efficiency increases with size.

Crew size and other operating costs for ships, trains and airplanes

Operating crew size for ships, airplanes, trains, etc., does not increase in direct proportion to capacity. (Operating crew consists of pilots, co-pilots, navigators, etc. and does not include passenger service personnel.) Many aircraft models were significantly lengthened or "stretched" to increase payload.

Many manufacturing facilities, especially those making bulk materials like chemicals, refined petroleum products, cement and paper, have labor requirements that are not greatly influenced by changes in plant capacity. This is because labor requirements of automated processes tend to be based on the complexity of the operation rather than production rate, and many manufacturing facilities have nearly the same basic number of processing steps and pieces of equipment, regardless of production capacity.

Economical use of byproducts

Karl Marx noted that large-scale manufacturing allowed economical use of products that would otherwise be waste. Marx cited the chemical industry as an example, which today, along with petrochemicals, remains highly dependent on turning various residual reactant streams into salable products. In the pulp and paper industry it is economical to burn bark and fine wood particles to produce process steam and to recover the spent pulping chemicals for conversion back to a usable form.

Economies of scale and returns to scale

Economies of scale is related to and can easily be confused with the theoretical economic notion of returns to scale. Where economies of scale refer to a firm's costs, returns to scale describe the relationship between inputs and outputs in a long-run (all inputs variable) production function. A production function has constant returns to scale if increasing all inputs by some proportion results in output increasing by that same proportion. Returns are decreasing if, say, doubling inputs results in less than double the output, and increasing if more than double the output. If a mathematical function is used to represent the production function, and if that production function is homogeneous, returns to scale are represented by the degree of homogeneity of the function. Homogeneous production functions with constant returns to scale are first degree homogeneous, increasing returns to scale are represented by degrees of homogeneity greater than one, and decreasing returns to scale by degrees of homogeneity less than one.
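A standard way to see the link between the degree of homogeneity and returns to scale is a Cobb-Douglas production function f(K, L) = K^alpha · L^beta, whose degree of homogeneity is alpha + beta. The exponent values in this sketch are arbitrary examples, not from the text:

```python
def cobb_douglas(K, L, alpha, beta):
    """f(K, L) = K**alpha * L**beta; the degree of homogeneity is alpha + beta."""
    return K ** alpha * L ** beta

for alpha, beta in ((0.5, 0.5), (0.6, 0.6), (0.3, 0.5)):
    ratio = cobb_douglas(2.0, 2.0, alpha, beta) / cobb_douglas(1.0, 1.0, alpha, beta)
    kind = ("constant" if abs(ratio - 2.0) < 1e-9
            else "increasing" if ratio > 2.0 else "decreasing")
    print(f"alpha + beta = {alpha + beta:.1f}: doubling inputs scales output "
          f"by {ratio:.2f} ({kind} returns to scale)")
```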

If the firm is a perfect competitor in all input markets, and thus the per-unit prices of all its inputs are unaffected by how much of the inputs the firm purchases, then it can be shown that at a particular level of output, the firm has economies of scale if and only if it has increasing returns to scale, has diseconomies of scale if and only if it has decreasing returns to scale, and has neither economies nor diseconomies of scale if it has constant returns to scale. In this case, with perfect competition in the output market the long-run equilibrium will involve all firms operating at the minimum point of their long-run average cost curves (i.e., at the borderline between economies and diseconomies of scale).

If, however, the firm is not a perfect competitor in the input markets, then the above conclusions are modified. For example, if there are increasing returns to scale in some range of output levels, but the firm is so big in one or more input markets that increasing its purchases of an input drives up the input's per-unit cost, then the firm could have diseconomies of scale in that range of output levels. Conversely, if the firm is able to get bulk discounts of an input, then it could have economies of scale in some range of output levels even if it has decreasing returns in production in that output range.

In essence, returns to scale refer to the variation in the relationship between inputs and output. This relationship is therefore expressed in "physical" terms. But when talking about economies of scale, the relation taken into consideration is that between the average production cost and the dimension of scale. Economies of scale therefore are affected by variations in input prices. If input prices remain the same as their quantities purchased by the firm increase, the notions of increasing returns to scale and economies of scale can be considered equivalent. However, if input prices vary in relation to their quantities purchased by the company, it is necessary to distinguish between returns to scale and economies of scale. The concept of economies of scale is more general than that of returns to scale since it includes the possibility of changes in the price of inputs when the quantity purchased of inputs varies with changes in the scale of production.

The literature assumed that due to the competitive nature of reverse auctions, and in order to compensate for lower prices and lower margins, suppliers seek higher volumes to maintain or increase the total revenue. Buyers, in turn, benefit from the lower transaction costs and economies of scale that result from larger volumes. In part as a result, numerous studies have indicated that the procurement volume must be sufficiently high to provide sufficient profits to attract enough suppliers, and provide buyers with enough savings to cover their additional costs.

However, surprisingly enough, Shalev and Asbjornse found, in their research based on 139 reverse auctions conducted in the public sector by public sector buyers, that the higher auction volume, or economies of scale, did not lead to better success of the auction. They found that auction volume did not correlate with competition, nor with the number of bidders, suggesting that auction volume does not promote additional competition. They noted, however, that their data included a wide range of products, and the degree of competition in each market varied significantly, and offer that further research on this issue should be conducted to determine whether these findings remain the same when purchasing the same product for both small and high volumes. Keeping competitive factors constant, increasing auction volume may further increase competition.

Economies of scale in the history of economic analysis

Economies of scale in classical economists

The first systematic analysis of the advantages of the division of labour capable of generating economies of scale, both in a static and dynamic sense, was that contained in the famous First Book of Wealth of Nations (1776) by Adam Smith, generally considered the founder of political economy as an autonomous discipline. 

John Stuart Mill, in Chapter IX of the First Book of his Principles, referring to the work of Charles Babbage (On the Economy of Machinery and Manufactures), widely analyses the relationships between increasing returns and the scale of production, all inside the production unit.

The economies of scale in Marx and Distributional consequences

In “Das Kapital” (1867), Karl Marx, referring to Charles Babbage, extensively analyses economies of scale and concludes that they are one of the factors underlying the ever-increasing concentration of capital. Marx observes that in the capitalist system the technical conditions of the work process are continuously revolutionized in order to increase the surplus by improving the productive force of work. According to Marx, the cooperation of many workers brings about an economy in the use of the means of production and an increase in productivity due to the increase in the division of labour. Furthermore, the increase in the size of the machinery allows significant savings in construction, installation and operation costs. The tendency to exploit economies of scale entails a continuous increase in the volume of production which, in turn, requires a constant expansion of the size of the market. However, if the market does not expand at the same rate as production increases, overproduction crises can occur. According to Marx, the capitalist system is therefore characterized by two tendencies, connected to economies of scale: towards growing concentration and towards economic crises due to overproduction.

In his 1844 Economic and Philosophic Manuscripts, Karl Marx observes that economies of scale have historically been associated with an increasing concentration of private wealth and have been used to justify such concentration. Marx points out that concentrated private ownership of large-scale economic enterprises is a historically contingent fact, and not essential to the nature of such enterprises. In the case of agriculture, for example, Marx calls attention to the sophistical nature of the arguments used to justify the system of concentrated ownership of land:
As for large landed property, its defenders have always sophistically identified the economic advantages offered by large-scale agriculture with large-scale landed property, as if it were not precisely as a result of the abolition of property that this advantage, for one thing, received its greatest possible extension, and, for another, only then would be of social benefit.
Instead of concentrated private ownership of land, Marx recommends that economies of scale should instead be realized by associations:
Association, applied to land, shares the economic advantage of large-scale landed property, and first brings to realization the original tendency inherent in land-division, namely, equality. In the same way association re-establishes, now on a rational basis, no longer mediated by serfdom, overlordship and the silly mysticism of property, the intimate ties of man with the earth, for the earth ceases to be an object of huckstering, and through free labor and free enjoyment becomes once more a true personal property of man.

Economies of scale in Marshall

Alfred Marshall notes that "some, among whom Cournot himself", have considered "the internal economies [...] apparently without noticing that their premises lead inevitably to the conclusion that, whatever firm first gets a good start will obtain a monopoly of the whole business of its trade … ". Marshall believes that there are factors that limit this trend toward monopoly, and in particular:
  • the death of the founder of the firm and the difficulty that the successors may have in inheriting his or her entrepreneurial skills;
  • the difficulty of reaching new markets for one's goods;
  • the growing difficulty of being able to adapt to changes in demand and to new techniques of production;
  • The effects of external economies, that is the particular type of economies of scale connected not to the production scale of an individual production unit, but to that of an entire sector.

Sraffa’s critique

Piero Sraffa observes that Marshall, in order to justify the operation of the law of increasing returns without it coming into conflict with the hypothesis of free competition, tended to highlight the advantages of external economies linked to an increase in the production of an entire sector of activity. However, “those economies which are external from the point of view of the individual firm, but internal as regards the industry in its aggregate, constitute precisely the class which is most seldom to be met with”. “In any case”, Sraffa notes, “in so far as external economies of the kind in question exist, they are not likely to be called forth by small increases in production”, as required by the marginalist theory of price. Sraffa points out that, in the equilibrium theory of individual industries, the presence of external economies cannot play an important role because this theory is based on marginal changes in the quantities produced.

Sraffa concludes that, if the hypothesis of perfect competition is maintained, economies of scale should be excluded. He then suggests the possibility of abandoning the assumption of free competition to address the study of firms that have their own particular market. This stimulated a whole series of studies on cases of imperfect competition in Cambridge. However, in the succeeding years Sraffa followed a different path of research that led him to write and publish his main work, Production of Commodities by Means of Commodities (Sraffa, 1960). In this book Sraffa determines relative prices assuming no changes in output, so that no question arises as to the variation or constancy of returns.

Economies of scale and the tendency towards monopoly: ‘Cournot's dilemma’

It has been noted that in many industrial sectors there are numerous companies with different sizes and organizational structures, despite the presence of significant economies of scale. This contradiction, between the empirical evidence and the logical incompatibility between economies of scale and competition, has been called the ‘Cournot dilemma’. As Mario Morroni observes, Cournot's dilemma appears to be unsolvable if we only consider the effects of economies of scale on the dimension of scale. If, on the other hand, the analysis is expanded, including the aspects concerning the development of knowledge and the organization of transactions, it is possible to conclude that economies of scale do not always lead to monopoly. In fact, the competitive advantages deriving from the development of the firm's capabilities and from the management of transactions with suppliers and customers can counterbalance those provided by the scale, thus counteracting the tendency towards a monopoly inherent in economies of scale. In other words, the heterogeneity of the organizational forms and of the size of the companies operating in a sector of activity can be determined by factors regarding the quality of the products, the production flexibility, the contractual methods, the learning opportunities, the heterogeneity of preferences of customers who express a differentiated demand with respect to the quality of the product, and assistance before and after the sale. Very different organizational forms can therefore co-exist in the same sector of activity, even in the presence of economies of scale, such as, for example, flexible production on a large scale, small-scale flexible production, mass production, industrial production based on rigid technologies associated with flexible organizational systems and traditional artisan production. The considerations regarding economies of scale are therefore important, but not sufficient to explain the size of the company and the market structure. It is also necessary to take into account the factors linked to the development of capabilities and the management of transaction costs.

Diseconomies of scale

From Wikipedia, the free encyclopedia
 
In microeconomics, diseconomies of scale are the cost disadvantages that economic actors accrue due to an increase in organizational size or in output, resulting in production of goods and services at increased per-unit costs. The concept of diseconomies of scale is the opposite of economies of scale. In business, diseconomies of scale are the features that lead to an increase in average costs as a business grows beyond a certain size.

The rising part of the long-run average cost curve illustrates the effect of diseconomies of scale. The Long Run Average Cost (LRAC) curve plots the average cost of production using the lowest-cost method for each output level. The Long Run Marginal Cost (LRMC) is the change in total cost attributable to a change in output of one unit after the plant size has been adjusted to produce that rate of output at minimum LRAC.

Causes

Communication costs

Ideally, all employees of a firm would have one-on-one communication with each other so they know exactly what the other workers are doing. A firm with a single worker does not require any communication between employees. A firm with two workers requires one communication channel, directly between those two workers. A firm with three workers requires three communication channels between employees (between employees A & B, B & C, and A & C). Here is a chart of one-on-one communication channels required:

Workers   Communication channels
1         0
2         1
3         3
4         6
5         10
n         n(n - 1)/2

The graph of all one-on-one channels is a complete graph.

The number of one-on-one channels of communication grows more rapidly than the number of workers, thus increasing the time and costs of communication. At some point one-on-one communications between all workers becomes impractical; therefore only certain groups of employees will communicate with one another (e.g. within departments or within geographical locations). This reduces, but does not stop, the increase in unit costs; and also the organisation will incur some inefficiencies due to the reduced level of communication.
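The counts in the table above are simply the number of edges in a complete graph on n vertices, n(n - 1)/2. A minimal sketch:

```python
def channels(n_workers):
    """One-on-one communication channels among n workers: the number of
    edges in a complete graph, n * (n - 1) / 2."""
    return n_workers * (n_workers - 1) // 2

for n in (1, 2, 3, 4, 5, 10, 100):
    print(f"{n:>4} workers -> {channels(n):>5} channels")
```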

Duplication of effort

An organisation with just one person cannot have any duplication of effort between employees. If there are two employees, there could be some duplication of efforts, but this is likely to be minor, as each of the two will generally know what the other is working on. When organisations grow to thousands of workers, it is inevitable that someone, or even a team, will take on a function that is already being handled by another person or team. In colloquial terms, this is described as "one hand not knowing what the other hand is doing". General Motors, for example, developed two in-house CAD/CAM systems: CADANCE was designed by the GM Design Staff, while Fisher Graphics was created by the former Fisher Body division. These similar systems later needed to be combined into a single Corporate Graphics System, CGS, at great expense. A smaller firm would have had neither the money to allow such expensive parallel developments, nor the lack of communication and cooperation which precipitated this event. In addition to CGS, GM also used CADAM, UNIGRAPHICS, CATIA and other off-the-shelf CAD/CAM systems, thus increasing the cost of translating designs from one system to another. This endeavor eventually became so unmanageable that they acquired (and then eventually sold off) Electronic Data Systems (EDS) in an effort to control the situation. Smaller firms typically choose a single off-the-shelf CAD/CAM system, with no need to combine or translate between systems.

Office politics

"Office politics" is management behavior which a manager knows is counter to the best interest of the company, but is in his personal best interest. For example, a manager might intentionally promote an incompetent worker, knowing that the worker will never be able to compete for the manager's job. This type of behavior only makes sense in a company with multiple levels of management. The more levels there are, the more opportunity for this behavior. In a small company, such behavior could cause the company to go bankrupt, and thus cost the manager his job, so he would not make such a decision. In a large company, one manager would not have much effect on the overall health of the company, so such "office politics" are in the interest of individual managers.

Top-heavy companies

As an organisation increases in size, it becomes costly to keep control of a sprawling corporate empire, and this often results in bureaucracy as executives implement more and more levels of management. As firms increase in size, managers initially provide a net benefit and raise productivity; however, as a firm grows, covers a larger geographical area and/or employs more people, a principal–agent problem arises, leading to lower productivity. To counter this, executives introduce standards and controls to maintain productivity, which necessitates hiring more managers to apply them, so the ratio of managers to productive workers tilts towards managers and the company becomes "top-heavy". These additional managers do not provide additional output: they spend their time implementing standards and carrying out supervision that is unnecessary in smaller firms, so the cost per unit rises.
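A back-of-the-envelope calculation (my own illustration, with made-up wage and output figures) shows how the per-unit cost climbs when supervisory headcount grows without adding output:

# Sketch with hypothetical numbers: extra managers add payroll but no output,
# so the cost per unit rises as the firm becomes "top-heavy".
def cost_per_unit(workers: int, managers: int,
                  wage: float = 50_000.0, units_per_worker: int = 1_000) -> float:
    payroll = (workers + managers) * wage   # managers paid like workers, for simplicity
    output = workers * units_per_worker     # only the workers produce units
    return payroll / output

for managers in (5, 20, 50, 100):
    print(f"100 workers, {managers:>3} managers -> ${cost_per_unit(100, managers):.2f} per unit")

Payroll grows with every extra manager while output stays fixed, so the unit cost in this sketch rises from $52.50 to $100.00 across these cases.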

Supply-chain disruption

Global emergencies, such as COVID-19 in 2020, can easily disrupt supply chains. This disruption is more likely to affect large organizations, especially when there are only a few large suppliers. Smaller organizations with robust, local supply networks can absorb supply-chain shocks because any localized shock has a smaller effect on the overall ecosystem.

Other effects which reduce competitiveness of large firms

These do not always increase the cost-per-unit, but do reduce the ability of a large firm to compete.

Cannibalization

A small firm competes only with other firms, but larger firms frequently find their own products competing with each other. A Buick was just as likely to steal customers from another GM make, such as Oldsmobile, as it was to steal customers from other companies. This may help to explain why the Oldsmobile brand was discontinued in 2004. This self-competition wastes resources that could be used to compete with other firms.

Isolation of decision-makers from the results of their decisions

If a single person makes and sells donuts and decides to try jalapeño flavoring, they would likely know on the same day whether their decision was good or not, based on the reaction of customers. A decision-maker at a huge company that makes donuts may not know for many months if such a decision is embraced by consumers or if it is rejected, especially if their research or marketing team fails to respond in a timely manner. By that time, the decision-makers may very well have moved on to another division or company and thus see no consequence from their decision. This lack of consequences can lead to poor decisions and cause an upward-sloping average cost curve.

Slow response time

In a reverse example, the smaller firm will know immediately if people begin to request other products, and be able to respond the next day. A large company would need to do research, create an assembly line, determine which distribution chains to use, plan an advertising campaign, etc., before any changes could be made. By this time, the smaller competitors may well have grabbed that market niche.

Inertia (Unwillingness to change)

This will be defined as the "we've always done it that way, so there's no need to ever change" attitude (see appeal to tradition). An old, successful company is far more likely to have this attitude than a new, struggling one. While "change for change's sake" is counter-productive, refusal to consider change, even when indicated, is likewise toxic to a company, as changes in the industry and market conditions will inevitably demand changes in the firm in order to remain successful. An example is Polaroid Corporation's delay in moving into digital imaging, which adversely affected the company, ultimately leading to bankruptcy.

Public and government opposition

Such opposition is largely a function of the size of the firm. Behavior from Microsoft that would have been ignored had it come from a smaller firm was instead seen as an anti-competitive and monopolistic threat because of Microsoft's size, bringing about government lawsuits.

Large market share

A small company with only a 1% market share could relatively easily double its market share, and hence its revenues, within a year: doubling from 1% to 2% requires winning just one additional percentage point of the market. A large company with 50% market share will find this far more difficult, since doubling would mean capturing the entire remainder of the market.

Large market portfolio

A small investment fund can potentially yield a higher return because it can concentrate its investments in a small number of good opportunities without driving up the purchase price as it buys in, and later sell them without driving down the sale price as it sells off. Conversely, a large investment fund must spread its investments among so many securities that its results tend to track those of the market as a whole. As the share of the market a fund controls grows, its results move closer to the market average.
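The way diversification pulls a large fund toward the market average can be illustrated with a small simulation (my own sketch with randomly generated, hypothetical returns; it ignores the price-impact effect described above):

# Sketch: as a fund is forced to hold more securities, its return converges
# toward the market average (hypothetical, randomly generated returns).
import random
import statistics

random.seed(0)
market_returns = [random.gauss(0.07, 0.20) for _ in range(500)]  # 500 securities
market_average = statistics.mean(market_returns)

for holdings in (5, 20, 100, 400):
    deviations = []
    for _ in range(2000):  # many random equally weighted portfolios of this size
        portfolio = random.sample(market_returns, holdings)
        deviations.append(abs(statistics.mean(portfolio) - market_average))
    print(f"{holdings:>3} holdings: average deviation from the market "
          f"{statistics.mean(deviations):.3%}")

The average deviation from the market shrinks as the number of holdings grows, leaving a very large fund little room to beat (or trail) the market.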

Inelasticity of supply

A company which is heavily dependent on a resource supply of a fixed or relatively-fixed size will have trouble increasing production. For instance, a timber company cannot increase production above the sustainable harvest rate of its land (although it can still increase production by acquiring more land). Similarly, service companies are limited by available labor (and thus tend to concentrate in large, densely-populated metropolitan areas); STEM (science, technology, engineering, and mathematics) professions are often-cited examples.

Reputation

Larger firms have a reputation to uphold and as a result may place more restrictions on employees, limiting their efficiency. This effect is amplified in regulated industries, where losing a license would be an extremely serious event for the company.

Other effects related to size

Large firms also tend to be old and in mature markets. Both of these have negative implications for future growth. Old firms tend to have a large retiree base, with high associated pension and health costs, and also tend to be unionized, with associated higher labor costs and lower productivity. Mature markets tend to only offer the potential for small, incremental growth. (Everybody might go out and buy a new invention next year, but it is unlikely they will all buy cars next year, since most people already have them.)

Impact on smaller firms

While diseconomies of scale are typically associated with large mature firms, similar problems have been observed in the growth phase of small and medium-sized manufacturing companies. Mclean has observed that this can occur once the workforce exceeds around 20 employees. At this point business complexity grows more rapidly than revenue. The business experiences falling productivity, leading to rising variable costs along with rapidly rising overheads.

Solutions

Solutions to the diseconomies of scale for large firms may involve splitting the company into smaller organisations. This can happen by default, when a company in financial difficulty sells off its profitable divisions and shuts down the rest, or proactively, if management is willing.

To avoid the negative effects of diseconomies of scale, a firm must keep producing at its lowest average cost and try to recognise any external diseconomies of scale. On reaching the lowest average cost, a firm can expand to other countries to increase demand for its products, or seek new markets or develop new products that do not compete with its original ones. However, neither of these actions will necessarily eliminate the communication and management problems often associated with large organisations.

A systematic analysis and redesign of business processes, in order to reduce complexity, can counter diseconomies of scale and lead to increased productivity. (Of course, this phase of analysis and revamping can itself be, and usually is, a diseconomy, leading to the hiring of new personnel and investment in new, competing systems.) Improved management systems and more effective control of labor and operations can also lower overhead.

Example

Returning to the example of the large donut firm, each retail location could be allowed to operate relatively autonomously from the company headquarters.

For instance, the local management may decide on the following factors instead of relying on the central management:
  1. Employee decisions such as hiring, firing, promotions and wage scales, where the local management is directly involved and likely to have better understanding of each employee. For instance, employers may choose to offer higher wages and charge higher prices if they are in an affluent area.
  2. Purchasing decisions, with each location allowed to choose its own suppliers, which may or may not be owned by the corporation (wherever they find the best quality and prices).
  3. Research and marketing decisions. Each location may decide to develop its own recipes or use a signature flavour unique to its locale. For instance, when fresh apple cider is available at bargain prices from local farmers in October, a location may choose to market a cinnamon donut and hot apple cider combo.
While a single, large, centrally controlled firm may be better able to innovate and to develop or market new products than one whose resources are divided, it may lack the flexibility to offer individual customizations. Allowing the different retail locations to make decisions independently of the central management may allow them to meet local consumers' demands more efficiently.

In addition, if the employees own a portion of the local business, employees will also have a more vested interest in its success. 

Note that all these changes will likely result in a substantial reduction in corporate headquarters staff and other support staff. For this reason, many businesses delay such a reorganization until it is too late to be effective. At the same time, the whole company still incurs the reputational and legal risks arising from each unit.

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...