
Tuesday, May 19, 2015

Exponential growth


From Wikipedia, the free encyclopedia


The graph illustrates how exponential growth (green) surpasses both linear (red) and cubic (blue) growth.

Exponential growth occurs when the growth rate of the value of a mathematical function is proportional to the function's current value. Exponential decay occurs in the same way when the growth rate is negative. In the case of a discrete domain of definition with equal intervals, it is also called geometric growth or geometric decay (the function values form a geometric progression).

The formula for exponential growth of a variable x at the (positive or negative) growth rate r, as time t goes on in discrete intervals (that is, at integer times 0, 1, 2, 3, ...), is
x_t = x_0(1+r)^t
where x0 is the value of x at time 0. For example, with a growth rate of r = 5% = 0.05, going from any integer value of time to the next integer causes x at the second time to be 1.05 times (i.e., 5% larger than) what it was at the previous time.

Examples


Bacteria exhibit exponential growth under optimal conditions.
  • Biology
    • The number of microorganisms in a culture will increase exponentially until an essential nutrient is exhausted. Typically the first organism splits into two daughter organisms, who then each split to form four, who split to form eight, and so on.
    • A virus (for example SARS, or smallpox) typically will spread exponentially at first, if no artificial immunization is available. Each infected person can infect multiple new people.
    • Human population, if the number of births and deaths per person per year were to remain at current levels (but also see logistic growth). For example, according to the United States Census Bureau, over the last 100 years (1910 to 2010), the population of the United States of America increased exponentially at an average rate of about one and a half percent (1.5%) a year. This means that the doubling time of the American population (at that rate of yearly growth) is approximately 50 years.[1]
    • Many responses of living beings to stimuli, including human perception, are logarithmic responses, which are the inverse of exponential responses; the loudness and frequency of sound are perceived logarithmically, even with very faint stimulus, within the limits of perception. This is the reason that exponentially increasing the brightness of visual stimuli is perceived by humans as a linear increase, rather than an exponential increase. This has survival value. Generally it is important for the organisms to respond to stimuli in a wide range of levels, from very low levels, to very high levels, while the accuracy of the estimation of differences at high levels of stimulus is much less important for survival.
    • Genetic complexity of life on Earth has doubled every 376 million years. Extrapolating this exponential growth backwards indicates life began 9.7 billion years ago, potentially predating the Earth by 5.2 billion years.[2][3]
  • Physics
    • Avalanche breakdown within a dielectric material. A free electron becomes sufficiently accelerated by an externally applied electrical field that it frees up additional electrons as it collides with atoms or molecules of the dielectric media. These secondary electrons also are accelerated, creating larger numbers of free electrons. The resulting exponential growth of electrons and ions may rapidly lead to complete dielectric breakdown of the material.
    • Nuclear chain reaction (the concept behind nuclear reactors and nuclear weapons). Each uranium nucleus that undergoes fission produces multiple neutrons, each of which can be absorbed by adjacent uranium atoms, causing them to fission in turn. If the probability of neutron absorption exceeds the probability of neutron escape (a function of the shape and mass of the uranium), the neutron multiplication factor k exceeds 1 and so the production rate of neutrons and induced uranium fissions increases exponentially, in an uncontrolled reaction. "Due to the exponential rate of increase, at any point in the chain reaction 99% of the energy will have been released in the last 4.6 generations. It is a reasonable approximation to think of the first 53 generations as a latency period leading up to the actual explosion, which only takes 3–4 generations."[4]
    • Positive feedback within the linear range of electrical or electroacoustic amplification can result in the exponential growth of the amplified signal, although resonance effects may favor some component frequencies of the signal over others.
    • Heat transfer experiments yield results whose best-fit curves are exponential decay curves.
  • Economics
    • Economic growth is expressed in percentage terms, implying exponential growth. For example, U.S. GDP per capita has grown at an exponential rate of approximately two percent per year since World War II.[citation needed]
  • Finance
    • Compound interest at a constant interest rate provides exponential growth of the capital.
  • Computer technology
    • Processing power of computers. See also Moore's law and technological singularity. (Under exponential growth, there are no singularities. The singularity here is a metaphor.)
    • In computational complexity theory, computer algorithms of exponential complexity require an exponentially increasing amount of resources (e.g. time, computer memory) for only a constant increase in problem size. So for an algorithm of time complexity 2^x, if a problem of size x = 10 requires 10 seconds to complete, and a problem of size x = 11 requires 20 seconds, then a problem of size x = 12 will require 40 seconds. This kind of algorithm typically becomes unusable at relatively small problem sizes, often between 30 and 100 items (most computer algorithms need to be able to solve much larger problems, up to tens of thousands or even millions of items, in reasonable times, something that would be physically impossible with an exponential algorithm). The effects of Moore's law do not help the situation much, because doubling processor speed merely allows the problem size to increase by a constant: if a slow processor can solve problems of size x in time t, then a processor twice as fast can only solve problems of size x + constant in the same time t. So exponentially complex algorithms are most often impractical, and the search for more efficient algorithms is one of the central goals of computer science today. (A small numeric sketch of this appears after this list.)
    • Internet traffic growth.[citation needed]
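As a rough illustration of the complexity figures mentioned above (a sketch, not part of the article; the 10-second baseline at x = 10 is the assumption used in that example), the following Python snippet tabulates the running time of a hypothetical 2^x-time algorithm and shows that doubling the machine speed only buys one extra unit of problem size:

```python
# Hypothetical algorithm whose running time is proportional to 2**x.
# The constant is chosen so that x = 10 takes 10 seconds, as in the text.
BASE_SECONDS = 10 / 2**10  # seconds per unit of work

def runtime_seconds(x, speedup=1.0):
    """Running time on a machine that is `speedup` times faster."""
    return BASE_SECONDS * 2**x / speedup

for x in (10, 11, 12, 30, 100):
    print(f"x = {x:>3}: {runtime_seconds(x):.3e} s")

# Doubling the processor speed shifts the feasible size by only a constant (+1 here):
print(runtime_seconds(11, speedup=2) == runtime_seconds(10))  # True
```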

Basic formula

A quantity x depends exponentially on time t if
x(t)=a\cdot b^{t/\tau}\,
where the constant a is the initial value of x,
x(0)=a\, ,
the constant b is a positive growth factor, and τ is the time constant—the time required for x to increase by a factor of b:
x(t+\tau)=a \cdot b^{\frac{t+\tau}{\tau}} = a \cdot b^{\frac{t}{\tau}} \cdot b^{\frac{\tau}{\tau}} = x(t)\cdot b\, .
If τ > 0 and b > 1, then x has exponential growth. If τ < 0 and b > 1, or τ > 0 and 0 < b < 1, then x has exponential decay.

Example: If a species of bacteria doubles every ten minutes, starting out with only one bacterium, how many bacteria would be present after one hour? The question implies a = 1, b = 2 and τ = 10 min.
x(t)=a\cdot b^{t/\tau}=1\cdot 2^{(60\text{ min})/(10\text{ min})}
x(1\text{ hr})= 1 \cdot 2^6 =64.
After one hour, or six ten-minute intervals, there would be sixty-four bacteria.
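The same computation can be written directly from the basic formula x(t) = a·b^{t/τ}; the following short Python sketch (an illustration, not part of the article) reproduces the result:

```python
def exponential_growth(t, a=1, b=2, tau=10):
    """x(t) = a * b**(t / tau); here t and tau are in minutes."""
    return a * b ** (t / tau)

print(exponential_growth(60))  # 64.0 -- six doublings in one hour
```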

Many pairs (b, τ) of a dimensionless non-negative number b and an amount of time τ (a physical quantity which can be expressed as the product of a number of units and a unit of time) represent the same growth rate, with τ proportional to log b. For any fixed b not equal to 1 (e.g. e or 2), the growth rate is given by the non-zero time τ. For any non-zero time τ the growth rate is given by the dimensionless positive number b.

Thus the law of exponential growth can be written in different but mathematically equivalent forms, by using a different base. The most common forms are the following:
x(t) = x_0\cdot e^{kt} = x_0\cdot e^{t/\tau} = x_0 \cdot 2^{t/T} = x_0\cdot \left( 1 + \frac{r}{100} \right)^{t/p},
where x0 expresses the initial quantity x(0).

Parameters (negative in the case of exponential decay):
  • the growth constant k, the frequency (number of times per unit time) of growing by a factor e;
  • the e-folding time τ, the time it takes to grow by a factor e;
  • the doubling time T, the time it takes to double;
  • the percentage increase r (a dimensionless number) per period p.
The quantities k, τ, and T, and for a given p also r, have a one-to-one connection given by the following equation (which can be derived by taking the natural logarithm of the above):
k = \frac{1}{\tau} = \frac{\ln 2}{T} = \frac{\ln \left( 1 + \frac{r}{100} \right)}{p}\,
where k = 0 corresponds to r = 0 and to τ and T being infinite.

If p is the unit of time the quotient t/p is simply the number of units of time. Using the notation t for the (dimensionless) number of units of time rather than the time itself, t/p can be replaced by t, but for uniformity this has been avoided here. In this case the division by p in the last formula is not a numerical division either, but converts a dimensionless number to the correct quantity including unit.

A popular approximated method for calculating the doubling time from the growth rate is the rule of 70, i.e. T \simeq 70 / r.
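To make the relation between these parameters concrete, here is a small Python sketch (an illustration with p = 1 year assumed) that converts a percentage growth rate into the growth constant k, the e-folding time τ and the doubling time T, and compares the exact doubling time with the rule-of-70 approximation:

```python
import math

def growth_parameters(r_percent, p=1.0):
    """Return (k, tau, T) for a growth rate of r_percent % per period p."""
    k = math.log(1 + r_percent / 100) / p  # growth constant
    tau = 1 / k                            # e-folding time
    T = math.log(2) / k                    # doubling time
    return k, tau, T

k, tau, T = growth_parameters(5)  # 5% per year
print(f"k = {k:.4f}/yr, tau = {tau:.1f} yr, T = {T:.1f} yr")  # T = 14.2 yr
print(f"rule of 70: T ~ {70 / 5:.1f} yr")                     # 14.0 yr
```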

Reformulation as log-linear growth

If a variable x exhibits exponential growth according to x(t)=x_0(1+r)^t, then the log (to any base) of x grows linearly over time, as can be seen by taking logarithms of both sides of the exponential growth equation:
\log x(t) = \log x_0 + t \cdot \log (1+r).
This allows an exponentially growing variable to be modeled with a log-linear model. For example, if one wishes to empirically estimate the growth rate from intertemporal data on x, one can linearly regress log x on t.
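As an illustration of this estimation procedure (a sketch assuming NumPy is available; the data are simulated, not empirical), one can generate noisy exponential data and recover the growth rate r by fitting a straight line to log x against t:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(30)
r_true = 0.05
x = 100 * (1 + r_true) ** t * rng.lognormal(sigma=0.02, size=t.size)  # noisy exponential data

slope, intercept = np.polyfit(t, np.log(x), 1)  # regress log x on t
r_hat = np.exp(slope) - 1                       # recover the growth rate
print(f"estimated growth rate: {r_hat:.4f}")    # close to 0.05
```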

Differential equation

The exponential function x(t)=x(0) e^{kt} satisfies the linear differential equation:
\frac{dx}{dt} = kx
saying that the growth rate of x at time t is proportional to the value of x(t), and it has the initial value
x(0).\,
The differential equation is solved by direct integration:
\frac{dx}{dt} = kx
\Rightarrow \frac{dx}{x} = k\, dt
\Rightarrow \int_{x(0)}^{x(t)} \frac{dx}{x} = k \int_0^t  \, dt
\Rightarrow \ln \frac{x(t)}{x(0)} = kt,
so that
x(t) = x(0) e^{kt}.

For a nonlinear variation of this growth model see logistic function.
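A minimal symbolic check of the solution above (a sketch assuming SymPy is available; x0 is simply the name used here for the initial value) recovers x(t) = x(0)e^{kt} from the differential equation:

```python
import sympy as sp

t, k, x0 = sp.symbols('t k x0')
x = sp.Function('x')

ode = sp.Eq(x(t).diff(t), k * x(t))             # dx/dt = k*x
solution = sp.dsolve(ode, x(t), ics={x(0): x0})
print(solution)                                 # Eq(x(t), x0*exp(k*t))
```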

Difference equation

The difference equation
x_t = a \cdot x_{t-1}
has solution
x_t = x_0 \cdot a^t,
showing that x experiences exponential growth.

Other growth rates

In the long run, exponential growth of any kind will overtake linear growth of any kind (the basis of the Malthusian catastrophe) as well as any polynomial growth, i.e., for all α:
\lim_{t\rightarrow\infty} {t^\alpha \over ae^t} =0.
There is a whole hierarchy of conceivable growth rates that are slower than exponential and faster than linear (in the long run). See Degree of a polynomial#The degree computed from the function values.
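A throwaway numerical check (illustrative only; α = 10 is an arbitrary choice) shows the ratio t^α / e^t collapsing towards zero:

```python
import math

alpha = 10
for t in (10, 50, 100, 200):
    print(t, t**alpha / math.exp(t))
# The ratio peaks near t = alpha and then falls rapidly towards zero.
```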

Growth rates may also be faster than exponential.

In the above differential equation, if k < 0, then the quantity experiences exponential decay.

Limitations of models

Exponential growth models of physical phenomena only apply within limited regions, as unbounded growth is not physically realistic. Although growth may initially be exponential, the modelled phenomena will eventually enter a region in which previously ignored negative feedback factors become significant (leading to a logistic growth model) or other underlying assumptions of the exponential growth model, such as continuity or instantaneous feedback, break down.

Exponential stories

Rice on a chessboard

According to an old legend, vizier Sissa Ben Dahir presented an Indian king, Sharim, with a beautiful, hand-made chessboard. The king asked what he would like in return for his gift and the courtier surprised the king by asking for one grain of rice on the first square, two grains on the second, four grains on the third, etc. The king readily agreed and asked for the rice to be brought. All went well at first, but the requirement for 2^(n−1) grains on the nth square demanded over a million grains on the 21st square, more than a million million (a trillion) on the 41st, and there simply was not enough rice in the whole world for the final squares. (From Swirski, 2006)[5]
For variation of this see second half of the chessboard in reference to the point where an exponentially growing factor begins to have a significant economic impact on an organization's overall business strategy.
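The numbers in the legend are easy to verify (an illustrative sketch): the nth square holds 2^(n−1) grains, and the whole board holds 2^64 − 1 grains, about 1.8 × 10^19:

```python
def grains_on_square(n):
    """Grains on the nth square of the chessboard (1-indexed)."""
    return 2 ** (n - 1)

print(grains_on_square(21))   # 1048576 -- over a million
print(grains_on_square(41))   # 1099511627776 -- over a million million
print(sum(grains_on_square(n) for n in range(1, 65)))  # 18446744073709551615 = 2**64 - 1
```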

Water lily

French children are told a story in which they imagine having a pond with water lily leaves floating on the surface. The lily population doubles in size every day and if left unchecked will smother the pond in 30 days, killing all the other living things in the water. Day after day the plant seems small and so it is decided to leave it to grow until it half-covers the pond, before cutting it back. They are then asked on what day half-coverage will occur. This is revealed to be the 29th day, and then there will be just one day to save the pond. (From Meadows et al. 1972)[5]

Monday, May 18, 2015

Thomas Robert Malthus


From Wikipedia, the free encyclopedia

Thomas Robert Malthus
Born: 13 February 1766, Westcott, Surrey, Great Britain
Died: 29 December 1834 (aged 68), Bath, Somerset, United Kingdom
Field: demography, macroeconomics
School or tradition: Classical economics
Influences: David Ricardo, Jean Charles Léonard de Sismondi
Influenced: Charles Darwin, Paul R. Ehrlich, Francis Place, Raynold Kaufgetz, Garrett Hardin, John Maynard Keynes, Pierre François Verhulst, Alfred Russel Wallace, William Thompson, Karl Marx, Mao Zedong
Contributions: Malthusian growth model

The Reverend Thomas Robert Malthus FRS (13 February 1766 – 29 December 1834[1]) was an English cleric and scholar, influential in the fields of political economy and demography.[2] Malthus himself used only his middle name Robert.[3]

His An Essay on the Principle of Population observed that sooner or later population will be checked by famine and disease, leading to what is known as a Malthusian catastrophe. He wrote in opposition to the popular view in 18th-century Europe that saw society as improving and in principle as perfectible.[4] He thought that the dangers of population growth precluded progress towards a utopian society: "The power of population is indefinitely greater than the power in the earth to produce subsistence for man".[5] As a cleric, Malthus saw this situation as divinely imposed to teach virtuous behaviour.[6] Malthus wrote:
That the increase of population is necessarily limited by the means of subsistence,
That population does invariably increase when the means of subsistence increase, and,
That the superior power of population is repressed, and the actual population kept equal to the means of subsistence, by misery and vice.[7]
Malthus placed the longer-term stability of the economy above short-term expediency. He criticized the Poor Laws,[8] and (alone among important contemporary economists) supported the Corn Laws, which introduced a system of taxes on British imports of wheat.[9] His views became influential, and controversial, across economic, political, social and scientific thought. Pioneers of evolutionary biology read him, notably Charles Darwin and Alfred Russel Wallace.[10][11] He remains a much-debated writer.

Early life and education

The seventh child of Henrietta Catherine (Graham) and Daniel Malthus,[12][13] Robert Malthus grew up in The Rookery, a country house in Westcott, near Dorking in Surrey. Petersen describes Daniel Malthus as "a gentleman of good family and independent means... [and] a friend of David Hume and Jean-Jacques Rousseau".[14] The young Malthus received his education at home in Bramcote, Nottinghamshire, and then at the Warrington Academy from 1782. Warrington was a dissenting academy, then at the end of its existence; it closed in 1783. Malthus continued for a period to be tutored by Gilbert Wakefield, who had taught him there.[15]

Malthus entered Jesus College, Cambridge in 1784. There he took prizes in English declamation, Latin and Greek, and graduated with honours, Ninth Wrangler in mathematics. His tutor was William Frend.[15][16] He took the MA degree in 1791, and was elected a Fellow of Jesus College two years later.[3] In 1789, he took orders in the Church of England, and became a curate at Oakwood Chapel (also Okewood) in the parish of Wotton, Surrey.[17]

Population growth

Malthus came to prominence for his 1798 essay on population growth. In it he argued that population multiplies geometrically and food arithmetically; therefore, the population will eventually outstrip the food supply. Between 1798 and 1826 he published six editions of An Essay on the Principle of Population, updating each edition to incorporate new material, to address criticism, and to convey changes in his own perspectives on the subject. He wrote the original text in reaction to the optimism of his father and his father's associates (notably Rousseau) regarding the future improvement of society. Malthus also constructed his case as a specific response to writings of William Godwin (1756–1836) and of the Marquis de Condorcet (1743–1794).
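Malthus's contrast between geometric (exponential) and arithmetic (linear) growth is easy to see numerically; the doubling ratio and unit increment below are purely illustrative, not Malthus's own figures:

```python
# Illustrative comparison: a population doubling each period (geometric)
# versus a food supply growing by a fixed increment (arithmetic).
population, food = 1, 1
for period in range(10):
    print(period, population, food)
    population *= 2  # geometric progression: 1, 2, 4, 8, ...
    food += 1        # arithmetic progression: 1, 2, 3, 4, ...
```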
The Essay gave rise to the Malthusian controversy during the next decades. Its content placed an emphasis on the birth rate and on marriage rates. The neo-Malthusian controversy, comprising related debates of many years later, has seen a similar central role assigned to the numbers of children born.[18]

In 1799 Malthus made a European tour with William Otter, a close college friend, travelling part of the way with Edward Daniel Clarke and John Marten Cripps, visiting Germany, Scandinavia and Russia. Malthus used the trip to gather population data. Otter later wrote a Memoir of Malthus for the second (1836) edition of his Principles of Political Economy.[19][20] During the Peace of Amiens of 1802 he travelled to France and Switzerland, in a party that included his relation and future wife Harriet.[21] In 1803 he became rector of Walesby, Lincolnshire.[3]

Academic

In 1805 Malthus became Professor of History and Political Economy at the East India Company College in Hertfordshire.[22] His students affectionately referred to him as "Pop" or "Population" Malthus.

At the end of 1816 the proposed appointment of Graves Champney Haughton to the College was made a pretext by Randle Jackson and Joseph Hume to launch an attempt to close it down. Malthus wrote a pamphlet defending the College, which was reprieved by the East India Company in 1817.[23] In 1818 Malthus became a Fellow of the Royal Society.

Malthus–Ricardo debate on political economy

During the 1820s there took place a set-piece intellectual discussion among the proponents of political economy, often called the "Malthus–Ricardo debate", after the leading figures of Malthus and David Ricardo, a theorist of free trade, both of whom had written books with the title Principles of Political Economy. Under examination were the nature and methods of political economy itself, while it was simultaneously under attack from others.[24] The roots of the debate were in the previous decade. In The Nature of Rent (1815), Malthus had dealt with economic rent, a major concept in classical economics. Ricardo defined a theory of rent in his Principles of Political Economy and Taxation (1817): he regarded rent as value in excess of real production—something caused by ownership rather than by free trade. Rent therefore represented a kind of negative money that landlords could pull out of the production of the land, by means of its scarcity.[25] Contrary to this concept, Malthus proposed rent to be a kind of economic surplus.

The debate developed over the economic concept of a general glut, and the possibility of failure of Say's Law.
Malthus laid importance on economic development and the persistence of disequilibrium.[26] The context was the post-war depression; Malthus had a supporter in William Blake, in denying that capital accumulation (saving) was always good in such circumstances, and John Stuart Mill attacked Blake on the fringes of the debate.[27]

Ricardo corresponded with Malthus from 1817 about his Principles. He was drawn into considering political economy in a less restricted sense, which might be adapted to legislation and its multiple objectives, by the thought of Malthus. In his own work Principles of Political Economy (1820), and elsewhere, Malthus addressed the tension, amounting to conflict, he saw between a narrow view of political economy and the broader moral and political plane.[28] Leslie Stephen wrote:
If Malthus and Ricardo differed, it was a difference of men who accepted the same first principles. They both professed to interpret Adam Smith as the true prophet, and represented different shades of opinion rather than diverging sects.[29]
It is now considered that the different purposes seen by Malthus and Ricardo for political economy affected their technical discussion, and contributed to the lack of compatible definitions.[26] For example, Jean-Baptiste Say used a definition of production based on goods and services and so queried the restriction of Malthus to "goods" alone.[30]

In terms of public policy, Malthus, having changed his mind after 1814, supported the protectionist Corn Laws from the end of the Napoleonic Wars, and emerged as the only economist of note to support duties on imported grain.[31] By encouraging domestic production, Malthus argued, the Corn Laws would guarantee British self-sufficiency in food.[32] He also wished to abolish poor relief for paupers, a lifelong aim.[33]

Later life

Malthus was a founding member of the Political Economy Club in 1821; there John Cazenove tended to be his ally, against Ricardo and Mill.[34] He was elected in the beginning of 1824 as one of the ten royal associates of the Royal Society of Literature. He was also one of the first fellows of the Statistical Society, founded in March 1834. In 1827 he gave evidence to a committee of the House of Commons on emigration.[35]

After Ricardo's death in 1823, Malthus became isolated among the younger British political economists, who tended to think he had lost the debate. In Definitions in Political Economy (1827) he attacked John Ramsay McCulloch. McCulloch replied cuttingly, implying that he wanted to dictate to others, and the reputation of Malthus as economist dropped away, for the rest of his life.[36]

Malthus died suddenly of heart disease on 29 December 1834, at his father-in-law's house. He was buried in Bath Abbey.[35] His portrait,[37] and descriptions by contemporaries, present him as tall and good-looking, but with a cleft lip and palate.[38] The cleft palate affected his speech: such birth defects had occurred before amongst his relatives.[39]

Family

On 13 March 1804, Malthus married Harriet, daughter of John Eckersall of Claverton House, St. Catherine's, near Bath, Somerset. They had a son and two daughters. His firstborn, son Henry, became vicar of Effingham, Surrey, in 1835, and of Donnington, West Sussex, in 1837; he married Sofia Otter (1807–1889), daughter of William Otter; Henry died in August 1882, aged 76. His middle child, Emily, died in 1885, outliving both her parents and her siblings. His youngest, daughter Lucille, died unmarried and childless in 1825, months before her 18th birthday.[35]

An Essay on the Principle of Population

Malthus argued in his Essay (1798) that population growth generally expanded in times and in regions of plenty until the size of the population relative to the primary resources caused distress:
"Yet in all societies, even those that are most vicious, the tendency to a virtuous attachment is so strong that there is a constant effort towards an increase of population. This constant effort as constantly tends to subject the lower classes of the society to distress and to prevent any great permanent amelioration of their condition".
—Malthus T.R. 1798. An Essay on the Principle of Population. Chapter II, p 18 in Oxford World's Classics reprint.
Malthus argued that two types of checks hold population within resource limits: positive checks, which raise the death rate; and preventive ones, which lower the birth rate. The positive checks include hunger, disease and war; the preventive checks, abortion, birth control, prostitution, postponement of marriage and celibacy.[40] In later editions of his essay, Malthus clarified his view that if society relied on human misery to limit population growth, then sources of misery (e.g., hunger, disease, and war) would inevitably afflict society, as would volatile economic cycles. On the other hand, "preventive checks" to population that limited birthrates, such as later marriages, could ensure a higher standard of living for all, while also increasing economic stability.[41] Regarding possibilities for freeing man from these limits, Malthus argued against a variety of imaginable solutions, such as the notion that agricultural improvements could expand without limit.

Of the relationship between population and economics, Malthus wrote that when the population of laborers grows faster than the production of food, real wages fall because the growing population causes the cost of living (i.e., the cost of food) to go up. Difficulties of raising a family eventually reduce the rate of population growth, until the falling population again leads to higher real wages.

In the second and subsequent editions Malthus put more emphasis on moral restraint as the best means of easing the poverty of the lower classes.[42]

Editions and versions

  • 1798: An Essay on the Principle of Population, as it affects the future improvement of society with remarks on the speculations of Mr. Godwin, M. Condorcet, and other writers. Published anonymously.
  • 1803: Second and much enlarged edition: An Essay on the Principle of Population; or, a view of its past and present effects on human happiness; with an enquiry into our prospects respecting the future removal or mitigation of the evils which it occasions. Authorship acknowledged.
  • 1806, 1807, 1817 and 1826: editions 3–6, with relatively minor changes from the second edition.
  • 1823: Malthus contributed the article on Population to the supplement of the Encyclopædia Britannica.
  • 1830: Malthus had a long extract from the 1823 article reprinted as A summary view of the Principle of Population.[43]

Other works

1800: The present high price of provisions

In this work, his first published pamphlet, Malthus argues against the notion prevailing in his locale that the greed of intermediaries caused the high price of provisions. Instead, Malthus says that the high price stems from the Poor Laws which "increase the parish allowances in proportion to the price of corn". Thus, given a limited supply, the Poor Laws force up the price of daily necessities. But he concludes by saying that in time of scarcity such Poor Laws, by raising the price of corn more evenly, actually produce a beneficial effect.[44]

1814: Observations on the effects of the Corn Laws

Although government in Britain had regulated the prices of grain, the Corn Laws originated in 1815. At the end of the Napoleonic Wars that year, Parliament passed legislation banning the importation of foreign corn into Britain until domestic corn cost 80 shillings per quarter. The high price caused the cost of food to increase and caused distress among the working classes in the towns. It led to serious rioting in London and to the "Peterloo Massacre" (1819) in Manchester.[45][46]

In this pamphlet, printed during the parliamentary discussion, Malthus tentatively supported the free-traders. He argued that given the increasing expense of raising British corn, advantages accrued from supplementing it from cheaper foreign sources. This view he changed the following year.

1820: Principles of political economy

1836: Second edition, posthumously published.

Malthus intended this work to rival Ricardo's Principles (1817).[47] It, and his 1827 Definitions in political economy, defended Sismondi's views on "general glut" rather than Say's Law, which in effect states "there can be no general glut".

Other publications

  • 1807. A letter to Samuel Whitbread, Esq. M.P. on his proposed Bill for the Amendment of the Poor Laws. Johnson and Hatchard, London.
  • 1808. Spence on Commerce. Edinburgh Review 11, January, 429–448.
  • 1808. Newneham and others on the state of Ireland. Edinburgh Review 12, July, 336–355.
  • 1809. Newneham on the state of Ireland, Edinburgh Review 14 April, 151–170.
  • 1811. Depreciation of paper currency. Edinburgh Review 17, February, 340–372.
  • 1812. Pamphlets on the bullion question. Edinburgh Review 18, August, 448–470.
  • 1813. A letter to the Rt. Hon. Lord Grenville. Johnson, London.
  • 1817. Statement respecting the East-India College. Murray, London.
  • 1821. Godwin on Malthus. Edinburgh Review 35, July, 362–377.
  • 1823. The Measure of Value, stated and illustrated
  • 1823. Tooke – On high and low prices. Quarterly Review, 29 (57), April, 214–239.
  • 1824. Political economy. Quarterly Review 30 (60), January, 297–334.
  • 1829. On the measure of the conditions necessary to the supply of commodities. Transactions of the Royal Society of Literature of the United Kingdom. 1, 171–180. John Murray, London.
  • 1829. On the meaning which is most usually and most correctly attached to the term Value of a Commodity. Transactions of the Royal Society of Literature of the United Kingdom. 2, 74–81. John Murray.

Reception and influence

Malthus developed the theory of demand-supply mismatches that he called gluts. Discounted at the time, this theory foreshadowed later works of an admirer, John Maynard Keynes.[48]
The vast bulk of continuing commentary on Malthus, however, extends and expands on the "Malthusian controversy" of the early 19th century.

References in popular culture

  • Ebenezer Scrooge from A Christmas Carol by Charles Dickens represents the perceived ideas of Malthus,[49] famously illustrated by his explanation as to why he refuses to donate to the poor and destitute: "If they would rather die they had better do it, and decrease the surplus population". In general, Dickens had some Malthusian concerns (evident in Hard Times and other novels), and he concentrated his attacks on Utilitarianism and many of its proponents, like Smith and Bentham, whom he thought of, along with Malthus, as unjust and inhumane people.[50]
  • In Aldous Huxley's novel, Brave New World, people generally regard fertility as a nuisance, as in vitro breeding has enabled the society to maintain its population at precisely the level the controllers want. The women, therefore, carry contraceptives with them at all times in a "Malthusian belt".
  • In the popular television show Wiseguy, Kevin Spacey played Mel Proffitt, a self-professed "Malthusian" who quotes Thomas Malthus and keeps a bust of his likeness on display.
  • Officer Lockstock in the Broadway musical, Urinetown, a show in which the world's water resources have become sparse, cries, "Hail Malthus!", at the end of the show in reference to Malthus' theories regarding natural resource scarcity.
  • The Professor on Sliders references Malthus in season 1 episode 10 "Luck of the Draw". The Sliders arrive in a Utopian world that has been implementing a lottery to keep the population low.
  • Chapter 33 in Dan Brown's novel Inferno mentions 'The mathematics of Malthus'.
  • The video game Hydrophobia tells about some eco-terrorists who name themselves "Malthusians" because their ideology is based on Malthus' theories.
  • In the Green Lantern comic books published by DC Comics, the Guardians of the Universe began on a planet named "Maltus," which later became "an extremely overpopulated planet which serves as home to a poverty-stricken humanoid race of unknown origin. While advanced in technology, Maltus suffers from severe problems due to their overpopulation."

Epitaph


The epitaph of Rev. Thomas Robert Malthus, just inside the entrance to Bath Abbey.

The epitaph of Malthus in Bath Abbey reads:
Sacred to the memory of the Rev Thomas Robert Malthus, long known to the lettered world by his admirable writings on the social branches of political economy, particularly by his essay on population.
One of the best men and truest philosophers of any age or country, raised by native dignity of mind above the misrepresentation of the ignorant and the neglect of the great, he lived a serene and happy life devoted to the pursuit and communication of truth.
Supported by a calm but firm conviction of the usefulness of his labours.
Content with the approbation of the wise and good.
His writings will be a lasting monument of the extent and correctness of his understanding.
The spotless integrity of his principles, the equity and candour of his nature, his sweetness of temper, urbanity of manners and tenderness of heart, his benevolence and his piety are still dearer recollections of his family and friends.
Born February 14, 1766 Died 29 December 1834.

Differential equation


From Wikipedia, the free encyclopedia


Visualization of heat transfer in a pump casing, created by solving the heat equation. Heat is being generated internally in the casing and being cooled at the boundary, providing a steady state temperature distribution.

A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. Because such relations are extremely common, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology.

In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions—the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas; however, some properties of solutions of a given differential equation may be determined without finding their exact form.

If a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.

History

Differential equations first came into existence with the invention of calculus by Newton and Leibniz. In Chapter 2 of his 1671 work "Methodus fluxionum et Serierum Infinitarum",[1] Isaac Newton listed three kinds of differential equations: those involving two derivatives (or fluxions) \dot{x},\dot{y} and only one undifferentiated quantity x; those involving \dot{x},\dot{y},x and y; and those involving more than two derivatives. As examples of the three cases, he solves the equations:
  • \dot{y}\dot{y}=\dot{x}\dot{y}+\dot{x}\dot{x}xx,
  • \dot{y}ax-\dot{x}xy-aa\dot{x}=0, and
  • 2\dot{x}-\dot{z}+\dot{y}x=0, respectively.
He solves these examples and others using infinite series and discusses the non-uniqueness of solutions.

Jacob Bernoulli solved the Bernoulli differential equation in 1695.[2] This is an ordinary differential equation of the form
y'+ P(x)y = Q(x)y^n\,
for which he obtained exact solutions.[3]

Historically, the problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler, Daniel Bernoulli, and Joseph-Louis Lagrange.[4][5][6][7] In 1746, d’Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation.[8]

The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point.

Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics.

Fourier published his work on heat flow in Théorie analytique de la chaleur (The Analytic Theory of Heat),[9] in which he based his reasoning on Newton's law of cooling, namely, that the flow of heat between two adjacent molecules is proportional to the extremely small difference of their temperatures. Contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat. This partial differential equation is now taught to every student of mathematical physics.

Example

For example, in classical mechanics, the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow one (given the position, velocity, acceleration and the various forces acting on the body) to express these variables dynamically as a differential equation for the unknown position of the body as a function of time.

In some cases, this differential equation (called an equation of motion) may be solved explicitly.

An example of modelling a real world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. The ball's acceleration towards the ground is the acceleration due to gravity minus the acceleration due to air resistance.

Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity. This means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity (and the velocity depends on time). Finding the velocity as a function of time involves solving a differential equation and verifying its validity.
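A minimal sketch of that model (the values of g and the drag coefficient below are illustrative, not from the article): the velocity obeys dv/dt = g − kv, which can be stepped forward numerically and approaches the terminal velocity g/k:

```python
def falling_ball_velocity(t_end, g=9.81, k=0.5, dt=1e-3):
    """Integrate dv/dt = g - k*v (drag proportional to velocity), starting from rest."""
    v = 0.0
    for _ in range(int(t_end / dt)):
        v += (g - k * v) * dt
    return v

print(falling_ball_velocity(2.0))   # still accelerating
print(falling_ball_velocity(30.0))  # ~19.62 m/s, the terminal velocity g/k
```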

Main topics

Ordinary differential equations

An ordinary differential equation (ODE) is an equation containing a function of one independent variable and its derivatives. The term "ordinary" is used in contrast with the term partial differential equation which may be with respect to more than one independent variable.
Linear differential equations, which have solutions that can be added and multiplied by coefficients, are well-defined and understood, and exact closed-form solutions are obtained. By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form: Instead, exact and analytic solutions of ODEs are in series or integral form. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and perhaps yield useful information, often sufficing in the absence of exact, analytic solutions.
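As a concrete instance of such a numerical approximation (a sketch assuming SciPy is available), solve_ivp integrates the harmonic-oscillator equation d²u/dt² + ω²u = 0, which also appears with x as the independent variable in the examples below, rewritten as a first-order system:

```python
import numpy as np
from scipy.integrate import solve_ivp

omega = 2.0

def harmonic(t, y):
    """u'' + omega**2 * u = 0 rewritten as the system u' = v, v' = -omega**2 * u."""
    u, v = y
    return [v, -omega**2 * u]

sol = solve_ivp(harmonic, (0, 10), [1.0, 0.0], t_eval=np.linspace(0, 10, 5))
print(sol.y[0])               # numerical approximation of u(t)
print(np.cos(omega * sol.t))  # exact solution cos(omega*t) for comparison
```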

Partial differential equations

A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved by hand, or used to create a relevant computer model.
PDEs can be used to describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations.

Linear and non-linear

Both ordinary and partial differential equations are broadly classified as linear and nonlinear.
  • A differential equation is linear if the unknown function and its derivatives appear to the power 1 (products of the unknown function and its derivatives are not allowed) and nonlinear otherwise. The characteristic property of linear equations is that their solutions form an affine subspace of an appropriate function space, which results in a much more developed theory of linear differential equations. Homogeneous linear differential equations are a further subclass for which the space of solutions is a linear subspace, i.e. the sum of any set of solutions or multiples of solutions is also a solution. The coefficients of the unknown function and its derivatives in a linear differential equation are allowed to be (known) functions of the independent variable or variables; if these coefficients are constants then one speaks of a constant coefficient linear differential equation.
  • There are very few methods of solving nonlinear differential equations exactly; those that are known typically depend on the equation having particular symmetries. Nonlinear differential equations can exhibit very complicated behavior over extended time intervals, characteristic of chaos. Even the fundamental questions of existence, uniqueness, and extendability of solutions for nonlinear differential equations, and well-posedness of initial and boundary value problems for nonlinear PDEs are hard problems and their resolution in special cases is considered to be a significant advance in the mathematical theory (cf. Navier–Stokes existence and smoothness). However, if the differential equation is a correctly formulated representation of a meaningful physical process, then one expects it to have a solution.[10]
Linear differential equations frequently appear as approximations to nonlinear equations. These approximations are only valid under restricted conditions. For example, the harmonic oscillator equation is an approximation to the nonlinear pendulum equation that is valid for small amplitude oscillations (see below).

Examples

In the first group of examples, let u be an unknown function of x, and let c and ω be known constants.
  • Inhomogeneous first-order linear constant coefficient ordinary differential equation:
 \frac{du}{dx} = cu+x^2.
  • Homogeneous second-order linear ordinary differential equation:
 \frac{d^2u}{dx^2} - x\frac{du}{dx} + u = 0.
  • Homogeneous second-order linear constant coefficient ordinary differential equation describing the harmonic oscillator:
 \frac{d^2u}{dx^2} + \omega^2u = 0.
  • Inhomogeneous first-order nonlinear ordinary differential equation:
 \frac{du}{dx} = u^2 + 4.
  • Second-order nonlinear (due to sine function) ordinary differential equation describing the motion of a pendulum of length L:
 L\frac{d^2u}{dx^2} + g\sin u = 0.
In the next group of examples, the unknown function u depends on two variables x and t or x and y.
  • Homogeneous first-order linear partial differential equation:
 \frac{\partial u}{\partial t} + t\frac{\partial u}{\partial x} = 0.
  • Homogeneous second-order linear constant coefficient partial differential equation of elliptic type, the Laplace equation:
 \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0.
  • Third-order nonlinear partial differential equation, the Korteweg–de Vries equation:
 \frac{\partial u}{\partial t} = 6u\frac{\partial u}{\partial x} - \frac{\partial^3 u}{\partial x^3}.

Existence of solutions

Solving differential equations is not like solving algebraic equations. Not only are their solutions often unclear, but whether solutions are unique or exist at all is also a notable subject of interest.
For first order initial value problems, it is easy to tell whether a unique solution exists. Given any point (a,b) in the xy-plane, define some rectangular region Z, such that Z = [l,m]\times[n,p] and (a,b) is in the interior of Z. If we are given a differential equation \frac{\mathrm{d}y}{\mathrm{d}x} = g(x,y) and the initial condition y(a) = b, then there is a unique solution to this initial value problem if g(x,y) and \frac{\partial g}{\partial y} are both continuous on Z. This unique solution exists on some interval with its center at a.

However, this only helps us with first order initial value problems. Suppose we had a linear initial value problem of the nth order:

f_{n}(x)\frac{\mathrm{d}^n y}{\mathrm{d}x^n} + \cdots + f_{1}(x)\frac{\mathrm{d} y}{\mathrm{d}x} + f_{0}(x)y = h(x)
such that

y(x_{0})=y_{0}, y'(x_{0}) = y'_{0}, y''(x_{0}) = y''_{0}, \cdots
For any nonzero f_{n}(x), if \{f_{0},f_{1},\cdots,f_{n}\} and h are continuous on some interval containing x_{0}, then y exists and is unique.[11]

 

Related concepts

  • A delay differential equation (DDE) is an equation for a function of a single variable, usually called time, in which the derivative of the function at a certain time is given in terms of the values of the function at earlier times.

Connection to difference equations

The theory of differential equations is closely related to the theory of difference equations, in which the coordinates assume only discrete values, and the relationship involves values of the unknown function or functions and values at nearby coordinates. Many methods to compute numerical solutions of differential equations or study the properties of differential equations involve approximation of the solution of a differential equation by the solution of a corresponding difference equation.
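The Euler method is the simplest instance of this correspondence (an illustrative sketch): replacing dx/dt = kx by the difference equation x_{n+1} = (1 + kΔt)·x_n gives a geometric progression whose value at a fixed time converges to the exponential solution as Δt shrinks:

```python
import math

def euler_difference(k=1.0, t_end=1.0, steps=10):
    """Value at t_end of the difference equation x_{n+1} = (1 + k*dt) * x_n, with x_0 = 1."""
    dt = t_end / steps
    return (1 + k * dt) ** steps  # closed form of the geometric progression

for steps in (10, 100, 1000, 10_000):
    print(steps, euler_difference(steps=steps))
print("exact:", math.exp(1.0))    # the difference-equation solution converges to e
```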

Applications and connections to other areas

In general

The study of differential equations is a wide field in pure and applied mathematics, physics, and engineering. All of these disciplines are concerned with the properties of differential equations of various types. Pure mathematics focuses on the existence and uniqueness of solutions, while applied mathematics emphasizes the rigorous justification of the methods for approximating solutions. Differential equations play an important role in modelling virtually every physical, technical, or biological process, from celestial motion, to bridge design, to interactions between neurons. Differential equations such as those used to solve real-life problems may not necessarily be directly solvable, i.e. do not have closed form solutions. Instead, solutions can be approximated using numerical methods.

Many fundamental laws of physics and chemistry can be formulated as differential equations. In biology and economics, differential equations are used to model the behavior of complex systems. The mathematical theory of differential equations first developed together with the sciences where the equations had originated and where the results found application. However, diverse problems, sometimes originating in quite distinct scientific fields, may give rise to identical differential equations. Whenever this happens, mathematical theory behind the equations can be viewed as a unifying principle behind diverse phenomena. As an example, consider propagation of light and sound in the atmosphere, and of waves on the surface of a pond. All of them may be described by the same second-order partial differential equation, the wave equation, which allows us to think of light and sound as forms of waves, much like familiar waves in the water. Conduction of heat, the theory of which was developed by Joseph Fourier, is governed by another second-order partial differential equation, the heat equation. It turns out that many diffusion processes, while seemingly different, are described by the same equation; the Black–Scholes equation in finance is, for instance, related to the heat equation.

In physics

Classical mechanics
So long as the force acting on a particle is known, Newton's second law is sufficient to describe the motion of a particle. Once independent relations for each force acting on a particle are available, they can be substituted into Newton's second law to obtain an ordinary differential equation, which is called the equation of motion.
Electrodynamics
Maxwell's equations are a set of partial differential equations that, together with the Lorentz force law, form the foundation of classical electrodynamics, classical optics, and electric circuits. These fields in turn underlie modern electrical and communications technologies. Maxwell's equations describe how electric and magnetic fields are generated and altered by each other and by charges and currents. They are named after the Scottish physicist and mathematician James Clerk Maxwell, who published an early form of those equations between 1861 and 1862.
General relativity
The Einstein field equations (EFE; also known as "Einstein's equations") are a set of ten partial differential equations in Albert Einstein's general theory of relativity which describe the fundamental interaction of gravitation as a result of spacetime being curved by matter and energy.[12] First published by Einstein in 1915[13] as a tensor equation, the EFE equate local spacetime curvature (expressed by the Einstein tensor) with the local energy and momentum within that spacetime (expressed by the stress–energy tensor).[14]
Quantum mechanics
In quantum mechanics, the analogue of Newton's law is Schrödinger's equation (a partial differential equation) for a quantum system (usually atoms, molecules, and subatomic particles whether free, bound, or localized). It is not a simple algebraic equation, but in general a linear partial differential equation, describing the time-evolution of the system's wave function (also called a "state function").[15]:1–2

In biology

Predator-prey equations
The Lotka–Volterra equations, also known as the predator–prey equations, are a pair of first-order, non-linear, differential equations frequently used to describe the dynamics of biological systems in which two species interact, one as a predator and the other as prey.

In chemistry

Rate equation
The rate law or rate equation for a chemical reaction is a differential equation that links the reaction rate with concentrations or pressures of reactants and constant parameters (normally rate coefficients and partial reaction orders).[16] To determine the rate equation for a particular system one combines the reaction rate with a mass balance for the system.[17]
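For a simple first-order reaction A → products the rate law reduces to d[A]/dt = −k[A], whose solution is again an exponential decay; a minimal illustrative sketch (the rate constant and initial concentration are made-up values):

```python
import math

def concentration(t, a0=1.0, k=0.2):
    """[A](t) = [A]0 * exp(-k*t), the solution of the first-order rate law d[A]/dt = -k[A]."""
    return a0 * math.exp(-k * t)

for t in (0, 1, 5, 10):
    print(t, round(concentration(t), 4))
```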


An Estimate of The Centennial Variability of Global Temperatures


There has been widespread investigation of the drivers of changes in global temperatures. However, there has been remarkably little consideration of the magnitude of the changes to be expected over a period of a few decades or even a century. To address this question, the Holocene records up to 8000 years before present, from several ice cores, were examined. The differences in temperature between points in the records approximately a century apart were determined, after any trends in the data had been removed. The differences were close to normally distributed. The average standard deviation of temperature was 0.98 ± 0.27 °C. This suggests that while some portion of the temperature change observed in the 20th century was probably caused by greenhouse gases, there is a strong likelihood that the major portion was due to natural variations.

Year On

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Year_On T...