Economics

The supply and demand model describes how prices vary as a result of a balance between product availability and demand.
 
Economics (/ˌiːkəˈnɒmɪks, ˌɛkə-/) is the social science that studies how people interact with value; in particular, the production, distribution, and consumption of goods and services.

Economics focuses on the behaviour and interactions of economic agents and how economies work. Microeconomics analyzes basic elements in the economy, including individual agents and markets, their interactions, and the outcomes of interactions. Individual agents may include, for example, households, firms, buyers, and sellers. Macroeconomics analyzes the economy as a system where production, consumption, saving, and investment interact, and factors affecting it: employment of the resources of labour, capital, and land, currency inflation, economic growth, and public policies that have impact on these elements.

Other broad distinctions within economics include those between positive economics, describing "what is", and normative economics, advocating "what ought to be"; between economic theory and applied economics; between rational and behavioural economics; and between mainstream economics and heterodox economics.

Economic analysis can be applied throughout society, in real estate, business, finance, health care, engineering and government. Economic analysis is also applied to such diverse subjects as crime, education, the family, law, politics, religion, social institutions, war, science, and the environment.

The multiple aspects of economics

The earlier term for the discipline was 'political economy'. In the late 19th century, primarily due to Alfred Marshall, it was renamed 'economics', as a shorter term for 'economic science'. At that time, it was becoming more open to rigorous thinking, including increased use of mathematics, which helped support efforts to have it accepted as a science separate from political science and other social sciences.

There are a variety of modern definitions of economics; some reflect evolving views of the subject or different views among economists. Scottish philosopher Adam Smith (1776) defined what was then called political economy as "an inquiry into the nature and causes of the wealth of nations", in particular as:

a branch of the science of a statesman or legislator [with the twofold objectives of providing] a plentiful revenue or subsistence for the people ... [and] to supply the state or commonwealth with a revenue for the publick services.

Jean-Baptiste Say (1803), distinguishing the subject from its public-policy uses, defines it as the science of production, distribution, and consumption of wealth. On the satirical side, Thomas Carlyle (1849) coined "the dismal science" as an epithet for classical economics, in this context, commonly linked to the pessimistic analysis of Malthus (1798). John Stuart Mill (1844) defines the subject in a social context as:

The science which traces the laws of such of the phenomena of society as arise from the combined operations of mankind for the production of wealth, in so far as those phenomena are not modified by the pursuit of any other object.

Alfred Marshall provides a still widely cited definition in his textbook Principles of Economics (1890) that extends analysis beyond wealth and from the societal to the microeconomic level:

Economics is a study of man in the ordinary business of life. It enquires how he gets his income and how he uses it. Thus, it is on the one side, the study of wealth and on the other and more important side, a part of the study of man.

Lionel Robbins (1932) developed implications of what has been termed "[p]erhaps the most commonly accepted current definition of the subject":

Economics is a science which studies human behaviour as a relationship between ends and scarce means which have alternative uses.

Robbins describes the definition as not classificatory in "pick[ing] out certain kinds of behaviour" but rather analytical in "focus[ing] attention on a particular aspect of behaviour, the form imposed by the influence of scarcity." He affirmed that previous economists had usually centred their studies on the analysis of wealth: how wealth is created (production), distributed, and consumed; and how wealth can grow. But he said that economics can be used to study other things, such as war, that are outside its usual focus. This is because war has winning as a sought-after end, generates both costs and benefits, and uses resources (human life and other costs) to attain that end. If the war is not winnable, or if the expected costs outweigh the benefits, the deciding actors (assuming they are rational) may never go to war (a decision) but rather explore other alternatives. Economics, then, cannot be defined as the science that studies wealth, war, crime, education, and every other field to which economic analysis can be applied; it is the science that studies a particular common aspect of each of those subjects: they all use scarce resources to attain a sought-after end.

Some subsequent comments criticized the definition as overly broad in failing to limit its subject matter to analysis of markets. From the 1960s, however, such comments abated as the economic theory of maximizing behaviour and rational-choice modelling expanded the domain of the subject to areas previously treated in other fields. There are other criticisms as well, such as in scarcity not accounting for the macroeconomics of high unemployment.

Gary Becker, a contributor to the expansion of economics into new areas, describes the approach he favours as "combin[ing the] assumptions of maximizing behaviour, stable preferences, and market equilibrium, used relentlessly and unflinchingly." One commentary characterizes the remark as making economics an approach rather than a subject matter but with great specificity as to the "choice process and the type of social interaction that [such] analysis involves." The same source reviews a range of definitions included in principles of economics textbooks and concludes that the lack of agreement need not affect the subject-matter that the texts treat. Among economists more generally, it argues that a particular definition presented may reflect the direction toward which the author believes economics is evolving, or should evolve.

History

Economic writings date from earlier Mesopotamian, Greek, Roman, Indian subcontinent, Chinese, Persian, and Arab civilizations. Economic precepts occur throughout the writings of the Boeotian poet Hesiod, and several economic historians have described Hesiod himself as the "first economist". Other notable writers from Antiquity through to the Renaissance include Aristotle, Xenophon, Chanakya (also known as Kautilya), Qin Shi Huang, Thomas Aquinas, and Ibn Khaldun. Joseph Schumpeter described Aquinas as "coming nearer than any other group to being the 'founders' of scientific economics" as to monetary, interest, and value theory within a natural-law perspective.

A 1638 painting of a French seaport during the heyday of mercantilism

Two groups, who later were called "mercantilists" and "physiocrats", more directly influenced the subsequent development of the subject. Both groups were associated with the rise of economic nationalism and modern capitalism in Europe. Mercantilism was an economic doctrine that flourished from the 16th to 18th century in a prolific pamphlet literature, whether of merchants or statesmen. It held that a nation's wealth depended on its accumulation of gold and silver. Nations without access to mines could obtain gold and silver from trade only by selling goods abroad and restricting imports other than of gold and silver. The doctrine called for importing cheap raw materials to be used in manufacturing goods, which could be exported, and for state regulation to impose protective tariffs on foreign manufactured goods and prohibit manufacturing in the colonies.

Physiocrats, a group of 18th-century French thinkers and writers, developed the idea of the economy as a circular flow of income and output. Physiocrats believed that only agricultural production generated a clear surplus over cost, so that agriculture was the basis of all wealth. Thus, they opposed the mercantilist policy of promoting manufacturing and trade at the expense of agriculture, including import tariffs. Physiocrats advocated replacing administratively costly tax collections with a single tax on income of land owners. In reaction against copious mercantilist trade regulations, the physiocrats advocated a policy of laissez-faire, which called for minimal government intervention in the economy.

Adam Smith (1723–1790) was an early economic theorist. Smith was harshly critical of the mercantilists but described the physiocratic system "with all its imperfections" as "perhaps the purest approximation to the truth that has yet been published" on the subject.

Classical political economy

The publication of Adam Smith's The Wealth of Nations in 1776 is considered to be the first formalisation of economic thought.

The publication of Adam Smith's The Wealth of Nations in 1776 has been described as "the effective birth of economics as a separate discipline." The book identified land, labour, and capital as the three factors of production and the major contributors to a nation's wealth, as distinct from the physiocratic idea that only agriculture was productive.

Smith discusses potential benefits of specialization by division of labour, including increased labour productivity and gains from trade, whether between town and country or across countries. His "theorem" that "the division of labor is limited by the extent of the market" has been described as the "core of a theory of the functions of firm and industry" and a "fundamental principle of economic organization." To Smith has also been ascribed "the most important substantive proposition in all of economics" and foundation of resource-allocation theory – that, under competition, resource owners (of labour, land, and capital) seek their most profitable uses, resulting in an equal rate of return for all uses in equilibrium (adjusted for apparent differences arising from such factors as training and unemployment).

In an argument that includes "one of the most famous passages in all economics," Smith represents every individual as trying to employ any capital they might command for their own advantage, not that of the society, and for the sake of profit, which is necessary at some level for employing capital in domestic industry, and positively related to the value of produce. In this:

He generally, indeed, neither intends to promote the public interest, nor knows how much he is promoting it. By preferring the support of domestic to that of foreign industry, he intends only his own security; and by directing that industry in such a manner as its produce may be of the greatest value, he intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention. Nor is it always the worse for the society that it was no part of it. By pursuing his own interest he frequently promotes that of the society more effectually than when he really intends to promote it.

The Rev. Thomas Robert Malthus (1798) used the concept of diminishing returns to explain low living standards. Human population, he argued, tended to increase geometrically, outstripping the production of food, which increased arithmetically. The force of a rapidly growing population against a limited amount of land meant diminishing returns to labour. The result, he claimed, was chronically low wages, which prevented the standard of living for most of the population from rising above the subsistence level. Economist Julian Lincoln Simon has criticized Malthus's conclusions.
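
Malthus's arithmetic can be made concrete with a small sketch (a minimal Python illustration with invented growth rates, not figures taken from Malthus):

```python
# Hypothetical illustration of Malthus's contrast: population grows
# geometrically (a fixed ratio per generation) while food supply grows
# arithmetically (a fixed increment per generation).

population = 1.0       # starting population, arbitrary units
food = 1.0             # starting food supply, arbitrary units
growth_ratio = 1.5     # assumed geometric ratio per generation
food_increment = 0.5   # assumed arithmetic increment per generation

for generation in range(1, 11):
    population *= growth_ratio
    food += food_increment
    print(f"gen {generation:2d}: population {population:7.2f}, "
          f"food {food:5.2f}, food per head {food / population:.3f}")

# Food per head falls generation after generation: the diminishing returns
# that, on Malthus's argument, hold living standards at subsistence.
```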

While Adam Smith emphasized the production of income, David Ricardo (1817) focused on the distribution of income among landowners, workers, and capitalists. Ricardo saw an inherent conflict between landowners on the one hand and labour and capital on the other. He posited that the growth of population and capital, pressing against a fixed supply of land, pushes up rents and holds down wages and profits. Ricardo was the first to state and prove the principle of comparative advantage, according to which each country should specialize in producing and exporting goods in which it has a lower relative cost of production, rather than relying only on its own production. It has been termed a "fundamental analytical explanation" for gains from trade.

Coming at the end of the classical tradition, John Stuart Mill (1848) parted company with the earlier classical economists on the inevitability of the distribution of income produced by the market system. Mill pointed to a distinct difference between the market's two roles: allocation of resources and distribution of income. The market might be efficient in allocating resources but not in distributing income, he wrote, making it necessary for society to intervene.

Value theory was important in classical theory. Smith wrote that the "real price of every thing ... is the toil and trouble of acquiring it". Smith maintained that, with rent and profit, other costs besides wages also enter the price of a commodity. Other classical economists presented variations on Smith, termed the 'labour theory of value'. Classical economics focused on the tendency of any market economy to settle in a final stationary state made up of a constant stock of physical wealth (capital) and a constant population size.

Marxian economics

The Marxist school of economic thought comes from the work of German economist Karl Marx.

Marxist (later, Marxian) economics descends from classical economics and it derives from the work of Karl Marx. The first volume of Marx's major work, Das Kapital, was published in German in 1867. In it, Marx focused on the labour theory of value and the theory of surplus value which, he believed, explained the exploitation of labour by capital. The labour theory of value held that the value of an exchanged commodity was determined by the labour that went into its production and the theory of surplus value demonstrated how the workers only got paid a proportion of the value their work had created.

Neoclassical economics

At its dawn as a social science, economics was defined and discussed at length as the study of the production, distribution, and consumption of wealth by Jean-Baptiste Say in his Treatise on Political Economy or, The Production, Distribution, and Consumption of Wealth (1803). These three items were considered by the science only in relation to the increase or diminution of wealth, and not in reference to their processes of execution. Say's definition has prevailed up to our time, save that "goods and services" has replaced "wealth", meaning that wealth may include non-material objects as well.

One hundred and thirty years later, Lionel Robbins noticed that this definition no longer sufficed, because many economists were making theoretical and philosophical inroads into other areas of human activity. In his Essay on the Nature and Significance of Economic Science, he proposed a definition of economics as the study of a particular aspect of human behaviour: that which falls under the influence of scarcity, which forces people to choose, to allocate scarce resources to competing ends, and to economize (seeking the greatest welfare while avoiding the waste of scarce resources). For Robbins, the insufficiency was solved, and his definition allows us to regard education economics, safety and security economics, health economics, war economics, and of course production, distribution and consumption economics as valid subjects of economic science. Citing Robbins: "Economics is the science which studies human behavior as a relationship between ends and scarce means which have alternative uses". After decades of discussion, Robbins' definition became widely accepted by mainstream economists and has made its way into current textbooks. Although far from unanimous, most mainstream economists would accept some version of it, even though many have raised serious objections to the scope and method of economics that emanate from that definition. Because of that lack of strong consensus, and because the production, distribution and consumption of goods and services remains the prime area of study, the old definition still stands in many quarters.

A body of theory later termed "neoclassical economics" or "marginalism" formed from about 1870 to 1910. The term "economics" was popularized by such neoclassical economists as Alfred Marshall as a concise synonym for "economic science" and a substitute for the earlier "political economy". This corresponded to the influence on the subject of mathematical methods used in the natural sciences.

Neoclassical economics systematized supply and demand as joint determinants of price and quantity in market equilibrium, affecting both the allocation of output and the distribution of income. It dispensed with the labour theory of value inherited from classical economics in favour of a marginal utility theory of value on the demand side and a more general theory of costs on the supply side. In the 20th century, neoclassical theorists moved away from an earlier notion suggesting that total utility for a society could be measured in favour of ordinal utility, which hypothesizes merely behaviour-based relations across persons.

In microeconomics, neoclassical economics represents incentives and costs as playing a pervasive role in shaping decision making. An immediate example of this is the consumer theory of individual demand, which isolates how prices (as costs) and income affect quantity demanded. In macroeconomics it is reflected in an early and lasting neoclassical synthesis with Keynesian macroeconomics.

Neoclassical economics is occasionally referred to as orthodox economics, whether by its critics or sympathizers. Modern mainstream economics builds on neoclassical economics but with many refinements that either supplement or generalize earlier analysis, such as econometrics, game theory, analysis of market failure and imperfect competition, and the neoclassical model of economic growth for analysing long-run variables affecting national income.

Neoclassical economics studies the behaviour of individuals, households, and organizations (called economic actors, players, or agents), when they manage or use scarce resources, which have alternative uses, to achieve desired ends. Agents are assumed to act rationally, have multiple desirable ends in sight, limited resources to obtain these ends, a set of stable preferences, a definite overall guiding objective, and the capability of making a choice. There exists an economic problem, subject to study by economic science, when a decision (choice) is made by one or more resource-controlling players to attain the best possible outcome under bounded rational conditions. In other words, resource-controlling agents maximize value subject to the constraints imposed by the information the agents have, their cognitive limitations, and the finite amount of time they have to make and execute a decision. Economic science centres on the activities of the economic agents that comprise society. They are the focus of economic analysis.

An approach to understanding these processes, through the study of agent behaviour under scarcity, may go as follows:

The continuous interplay (exchange or trade) done by economic actors in all markets sets the prices for all goods and services which, in turn, make the rational managing of scarce resources possible. At the same time, the decisions (choices) made by the same actors, while they are pursuing their own interest, determine the level of output (production), consumption, savings, and investment, in an economy, as well as the remuneration (distribution) paid to the owners of labour (in the form of wages), capital (in the form of profits) and land (in the form of rent). Each period, as if they were in a giant feedback system, economic players influence the pricing processes and the economy, and are in turn influenced by them until a steady state (equilibrium) of all variables involved is reached or until an external shock throws the system toward a new equilibrium point. Because of the autonomous actions of rational interacting agents, the economy is a complex adaptive system.
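
The feedback loop sketched in this passage can be illustrated with a toy price-adjustment (tâtonnement) simulation; the linear demand and supply curves and the adjustment speed below are invented assumptions for the illustration:

```python
# A toy price-adjustment loop: excess demand pushes the price up,
# excess supply pushes it down, until the market clears (a steady state).

def demand(p):  # assumed linear demand curve
    return 100 - 2 * p

def supply(p):  # assumed linear supply curve
    return 10 + 4 * p

price = 5.0    # arbitrary starting price
speed = 0.05   # assumed adjustment speed

for step in range(100):
    excess = demand(price) - supply(price)
    if abs(excess) < 1e-6:      # equilibrium reached
        break
    price += speed * excess     # feedback: shortages raise the price

print(f"equilibrium price ~ {price:.2f}, quantity ~ {demand(price):.2f}")
# Analytically: 100 - 2p = 10 + 4p  =>  p = 15, q = 70.
```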

Keynesian economics

John Maynard Keynes (right), a key theorist in economics, greeting Harry Dexter White, then a senior official in the U.S. Treasury Department

Keynesian economics derives from John Maynard Keynes, in particular his book The General Theory of Employment, Interest and Money (1936), which ushered in contemporary macroeconomics as a distinct field. The book focused on determinants of national income in the short run when prices are relatively inflexible. Keynes attempted to explain in broad theoretical detail why high labour-market unemployment might not be self-correcting due to low "effective demand" and why even price flexibility and monetary policy might be unavailing. The term "revolutionary" has been applied to the book in its impact on economic analysis.

Keynesian economics has two successors. Post-Keynesian economics also concentrates on macroeconomic rigidities and adjustment processes. Research on micro foundations for their models is represented as based on real-life practices rather than simple optimizing models. It is generally associated with the University of Cambridge and the work of Joan Robinson.

New-Keynesian economics is also associated with developments in the Keynesian fashion. Within this group researchers tend to share with other economists the emphasis on models employing micro foundations and optimizing behaviour but with a narrower focus on standard Keynesian themes such as price and wage rigidity. These are usually made to be endogenous features of the models, rather than simply assumed as in older Keynesian-style ones.

Chicago school of economics

The Chicago School of economics is best known for its free market advocacy and monetarist ideas. According to Milton Friedman and monetarists, market economies are inherently stable if the money supply does not greatly expand or contract. Ben Bernanke, former Chairman of the Federal Reserve, is among the economists today generally accepting Friedman's analysis of the causes of the Great Depression.

Milton Friedman effectively took many of the basic principles set forth by Adam Smith and the classical economists and modernized them. One example of this is his article in the 13 September 1970 issue of The New York Times Magazine, in which he claims that the social responsibility of business should be "to use its resources and engage in activities designed to increase its profits ... (through) open and free competition without deception or fraud."

Other schools and approaches

Other well-known schools or trends of thought, each referring to a particular style of economics practised at and disseminated from well-defined groups of academics that have become known worldwide, include the Austrian School, the Freiburg School, the School of Lausanne, post-Keynesian economics and the Stockholm school. Contemporary mainstream economics is sometimes separated into the Saltwater approach of those universities along the Eastern and Western coasts of the US, and the Freshwater, or Chicago-school, approach.

Within macroeconomics there are, in general order of their historical appearance in the literature: classical economics, neoclassical economics, Keynesian economics, the neoclassical synthesis, monetarism, new classical economics, New Keynesian economics and the new neoclassical synthesis. In general, alternative developments include ecological economics, constitutional economics, institutional economics, evolutionary economics, dependency theory, structuralist economics, world systems theory, econophysics, feminist economics and biophysical economics.

Methodology

Theoretical research

Mainstream economic theory relies upon a priori quantitative economic models, which employ a variety of concepts. Theory typically proceeds with an assumption of ceteris paribus, which means holding constant explanatory variables other than the one under consideration. When creating theories, the objective is to find ones which are at least as simple in information requirements, more precise in predictions, and more fruitful in generating additional research than prior theories. While neoclassical economic theory constitutes the dominant or orthodox theoretical and methodological framework, economic theory can also take the form of other schools of thought, such as heterodox economic theories.

In microeconomics, principal concepts include supply and demand, marginalism, rational choice theory, opportunity cost, budget constraints, utility, and the theory of the firm. Early macroeconomic models focused on modelling the relationships between aggregate variables, but as these relationships appeared to change over time, macroeconomists, including new Keynesians, reformulated their models with microfoundations.

The aforementioned microeconomic concepts play a major part in macroeconomic models – for instance, in monetary theory, the quantity theory of money predicts that increases in the growth rate of the money supply increase inflation, and inflation is assumed to be influenced by rational expectations. In development economics, slower growth in developed nations has sometimes been predicted because of the declining marginal returns of investment and capital, and this has been observed in the Four Asian Tigers. Sometimes an economic hypothesis is only qualitative, not quantitative.
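
As an illustrative sketch of the quantity theory mentioned above, the equation of exchange MV = PQ implies that, if velocity V and real output Q are held fixed (a strong simplifying assumption), the price level P moves proportionally with the money supply M. The numbers below are invented:

```python
# Equation of exchange: M * V = P * Q.
# With V and Q fixed, the price level is proportional to the money supply,
# so growth in the money supply shows up as inflation.

V = 2.0      # assumed velocity of money
Q = 1000.0   # assumed real output, held constant for the illustration

def price_level(M):
    return M * V / Q

M0, M1 = 500.0, 550.0   # money supply grows by 10%
inflation = price_level(M1) / price_level(M0) - 1
print(f"money growth 10% -> predicted inflation {inflation:.1%}")  # 10.0%
```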

Expositions of economic reasoning often use two-dimensional graphs to illustrate theoretical relationships. At a higher level of generality, Paul Samuelson's treatise Foundations of Economic Analysis (1947) used mathematical methods beyond graphs to represent the theory, particularly as to maximizing behavioural relations of agents reaching equilibrium. The book focused on examining the class of statements called operationally meaningful theorems in economics, which are theorems that can conceivably be refuted by empirical data.

Empirical research

Economic theories are frequently tested empirically, largely through the use of econometrics using economic data. The controlled experiments common to the physical sciences are difficult and uncommon in economics, and instead broad data is observationally studied; this type of testing is typically regarded as less rigorous than controlled experimentation, and the conclusions typically more tentative. However, the field of experimental economics is growing, and increasing use is being made of natural experiments.

Statistical methods such as regression analysis are common. Practitioners use such methods to estimate the size, economic significance, and statistical significance ("signal strength") of the hypothesized relation(s) and to adjust for noise from other variables. By such means, a hypothesis may gain acceptance, although in a probabilistic, rather than certain, sense. Acceptance is dependent upon the falsifiable hypothesis surviving tests. Use of commonly accepted methods need not produce a final conclusion or even a consensus on a particular question, given different tests, data sets, and prior beliefs.
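
A minimal sketch of such a regression, using only numpy on made-up data (the variables, sample size, and coefficients are invented for the example):

```python
import numpy as np

# Invented data: does "education" (x) predict "earnings" (y)?
rng = np.random.default_rng(0)
x = rng.uniform(8, 20, size=200)      # years of schooling
noise = rng.normal(0, 5, size=200)    # "noise from other variables"
y = 10 + 1.8 * x + noise              # true relation, unknown in practice

# Ordinary least squares estimate of intercept and slope.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated intercept {beta[0]:.2f}, slope {beta[1]:.2f}")

# The slope (the size of the hypothesized relation) is a probabilistic
# estimate: a different sample would give a slightly different value.
```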

Criticisms based on professional standards and non-replicability of results serve as further checks against bias, errors, and over-generalization, although much economic research has been accused of being non-replicable, and prestigious journals have been accused of not facilitating replication through the provision of the code and data. Like theories, uses of test statistics are themselves open to critical analysis, although critical commentary on papers in economics in prestigious journals such as the American Economic Review has declined precipitously in the past 40 years. This has been attributed to journals' incentives to maximize citations in order to rank higher on the Social Science Citation Index (SSCI).

In applied economics, input–output models employing linear programming methods are quite common. Large amounts of data are run through computer programs to analyse the impact of certain policies; IMPLAN is one well-known example.

Experimental economics has promoted the use of scientifically controlled experiments. This has reduced the long-noted distinction of economics from natural sciences because it allows direct tests of what were previously taken as axioms. In some cases these have found that the axioms are not entirely correct; for example, the ultimatum game has revealed that people reject unequal offers.
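
A toy rendering of the ultimatum-game finding, under the assumed behavioural rule that responders reject offers below a fairness threshold (the threshold and stakes are invented):

```python
# Ultimatum game: a proposer splits 100; if the responder rejects,
# both get nothing. Pure payoff-maximizers would accept any positive
# offer; experiments find that low offers are often rejected.

def responder_accepts(offer, threshold=30):  # assumed fairness threshold
    return offer >= threshold

for offer in (1, 10, 30, 50):
    accepted = responder_accepts(offer)
    proposer, responder = (100 - offer, offer) if accepted else (0, 0)
    print(f"offer {offer:3d}: accepted={accepted}, "
          f"payoffs proposer={proposer}, responder={responder}")
```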

In behavioural economics, psychologist Daniel Kahneman won the Nobel Prize in economics in 2002 for his and Amos Tversky's empirical discovery of several cognitive biases and heuristics. Similar empirical testing occurs in neuroeconomics. Another example is the assumption of narrowly selfish preferences versus a model that tests for selfish, altruistic, and cooperative preferences. These techniques have led some to argue that economics is a "genuine science".

Branches of economics

Microeconomics

Economists study trade, production and consumption decisions, such as those that occur in a traditional marketplace.

Electronic trading brings together buyers and sellers through an electronic trading platform and network to create virtual market places. Pictured: São Paulo Stock Exchange, Brazil.

Microeconomics examines how entities, forming a market structure, interact within a market to create a market system. These entities include private and public players with various classifications, typically operating under scarcity of tradable units and light government regulation. The item traded may be a tangible product such as apples or a service such as repair services, legal counsel, or entertainment.

In theory, in a free market the aggregate quantity demanded by buyers and the aggregate quantity supplied by sellers may reach economic equilibrium over time in reaction to price changes; in practice, various issues may prevent equilibrium, and any equilibrium reached may not necessarily be morally equitable. For example, if the supply of healthcare services is limited by external factors, the equilibrium price may be unaffordable for many who desire it but cannot pay for it.

Various market structures exist. In perfectly competitive markets, no participants are large enough to have the market power to set the price of a homogeneous product. In other words, every participant is a "price taker" as no participant influences the price of a product. In the real world, markets often experience imperfect competition.

Forms include monopoly (in which there is only one seller of a good), duopoly (in which there are only two sellers of a good), oligopoly (in which there are few sellers of a good), monopolistic competition (in which there are many sellers producing highly differentiated goods), monopsony (in which there is only one buyer of a good), and oligopsony (in which there are few buyers of a good). Unlike perfect competition, imperfect competition invariably means market power is unequally distributed. Firms under imperfect competition have the potential to be "price makers", which means that, by holding a disproportionately high share of market power, they can influence the prices of their products.

Microeconomics studies individual markets by simplifying the economic system, assuming that activity in the market being analysed does not affect other markets. This method of analysis, known as partial-equilibrium analysis (supply and demand), aggregates activity in only one market. General-equilibrium theory, by contrast, studies various markets and their behaviour, aggregating across all markets; it examines both changes in markets and their interactions leading towards equilibrium.

Production, cost, and efficiency

In microeconomics, production is the conversion of inputs into outputs. It is an economic process that uses inputs to create a commodity or a service for exchange or direct use. Production is a flow and thus a rate of output per period of time. Distinctions include such production alternatives as for consumption (food, haircuts, etc.) vs. investment goods (new tractors, buildings, roads, etc.), public goods (national defence, smallpox vaccinations, etc.) or private goods (new computers, bananas, etc.), and "guns" vs "butter".

Opportunity cost is the economic cost of production: the value of the next best opportunity foregone. Choices must be made between desirable yet mutually exclusive actions. It has been described as expressing "the basic relationship between scarcity and choice". For example, if a baker uses a sack of flour to make pretzels one morning, then the baker cannot use either the flour or the morning to make bagels instead. Part of the cost of making pretzels is that neither the flour nor the morning are available any longer, for use in some other way. The opportunity cost of an activity is an element in ensuring that scarce resources are used efficiently, such that the cost is weighed against the value of that activity in deciding on more or less of it. Opportunity costs are not restricted to monetary or financial costs but could be measured by the real cost of output forgone, leisure, or anything else that provides the alternative benefit (utility).
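
The baker example can be made concrete with invented numbers; a minimal sketch comparing the value of the chosen activity against the next best alternative forgone:

```python
# Opportunity cost: using the flour and the morning for pretzels means
# forgoing the bagels. The values below are invented for the illustration.

value_of_pretzels = 40.0   # assumed revenue from a morning of pretzels
value_of_bagels = 35.0     # assumed revenue from the same inputs as bagels

opportunity_cost_of_pretzels = value_of_bagels  # next best use forgone
net_gain = value_of_pretzels - opportunity_cost_of_pretzels
print(f"opportunity cost of pretzels: {opportunity_cost_of_pretzels}")
print(f"pretzels worth making: {net_gain > 0} (net gain {net_gain})")
```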

Inputs used in the production process include such primary factors of production as labour services, capital (durable produced goods used in production, such as an existing factory), and land (including natural resources). Other inputs may include intermediate goods used in production of final goods, such as the steel in a new car.

Economic efficiency measures how well a system generates desired output with a given set of inputs and available technology. Efficiency is improved if more output is generated without changing inputs, or in other words, the amount of "waste" is reduced. A widely accepted general standard is Pareto efficiency, which is reached when no further change can make someone better off without making someone else worse off.

An example production–possibility frontier with illustrative points marked.

The production–possibility frontier (PPF) is an expository figure for representing scarcity, cost, and efficiency. In the simplest case an economy can produce just two goods (say "guns" and "butter"). The PPF is a table or graph (as at the right) showing the different quantity combinations of the two goods producible with a given technology and total factor inputs, which limit feasible total output. Each point on the curve shows potential total output for the economy, which is the maximum feasible output of one good, given a feasible output quantity of the other good.

Scarcity is represented in the figure by people being willing but unable in the aggregate to consume beyond the PPF (such as at X) and by the negative slope of the curve. If production of one good increases along the curve, production of the other good decreases, an inverse relationship. This is because increasing output of one good requires transferring inputs to it from production of the other good, decreasing the latter.

The slope of the curve at a point on it gives the trade-off between the two goods. It measures what an additional unit of one good costs in units forgone of the other good, an example of a real opportunity cost. Thus, if one more gun costs 100 units of butter, the opportunity cost of one gun is 100 butter. Along the PPF, scarcity implies that choosing more of one good in the aggregate entails doing with less of the other good. Still, in a market economy, movement along the curve may indicate that the choice of the increased output is anticipated to be worth the cost to the agents.

By construction, each point on the curve shows productive efficiency in maximizing output for given total inputs. A point inside the curve (as at A), is feasible but represents production inefficiency (wasteful use of inputs), in that output of one or both goods could increase by moving in a northeast direction to a point on the curve. Examples cited of such inefficiency include high unemployment during a business-cycle recession or economic organization of a country that discourages full use of resources. Being on the curve might still not fully satisfy allocative efficiency (also called Pareto efficiency) if it does not produce a mix of goods that consumers prefer over other points.
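
A small sketch of a "guns versus butter" PPF with an assumed concave functional form, illustrating rising opportunity cost along the curve and infeasibility beyond it (the curve and all numbers are invented):

```python
# A toy production-possibility frontier (PPF) for guns and butter.
# The concave frontier below is an assumed functional form, chosen only
# to illustrate scarcity, opportunity cost, and productive efficiency.

def max_butter(guns, guns_max=50.0, butter_max=100.0):
    """Maximum feasible butter output for a given output of guns."""
    if not 0 <= guns <= guns_max:
        raise ValueError("infeasible in the aggregate: beyond the PPF")
    return butter_max * (1 - (guns / guns_max) ** 2) ** 0.5

# Opportunity cost of one more gun = butter forgone (the slope of the
# PPF), approximated here by a finite difference; it rises along the curve.
for g in (10, 25, 40):
    cost = max_butter(g) - max_butter(g + 1)
    print(f"at {g} guns: the next gun costs ~{cost:.2f} butter")

# A point strictly inside the curve is feasible but productively
# inefficient: at 25 guns, only 50 butter when ~86.6 is feasible.
print("inside the curve:", 50 < max_butter(25))  # True
```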

Much applied economics in public policy is concerned with determining how the efficiency of an economy can be improved. Recognizing the reality of scarcity and then figuring out how to organize society for the most efficient use of resources has been described as the "essence of economics", where the subject "makes its unique contribution."

Specialization

A map showing the main trade routes for goods within late medieval Europe

Specialization is considered key to economic efficiency based on theoretical and empirical considerations. Different individuals or nations may have different real opportunity costs of production, say from differences in stocks of human capital per worker or capital/labour ratios. According to theory, this may give a comparative advantage in production of goods that make more intensive use of the relatively more abundant, thus relatively cheaper, input.

Even if one region has an absolute advantage as to the ratio of its outputs to inputs in every type of output, it may still specialize in the output in which it has a comparative advantage and thereby gain from trading with a region that lacks any absolute advantage but has a comparative advantage in producing something else.
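
A worked sketch of this point with invented labour costs: region A holds an absolute advantage in both goods, yet comparative advantage still assigns each region a different specialization:

```python
# Hours of labour needed per unit of output (invented numbers).
# Region A is absolutely better at both goods, but relatively better
# at cloth; region B is relatively better at wine.
hours = {
    "A": {"cloth": 1.0, "wine": 2.0},
    "B": {"cloth": 6.0, "wine": 3.0},
}

for region, h in hours.items():
    # Opportunity cost of one cloth, measured in wine forgone.
    print(f"{region}: 1 cloth costs {h['cloth'] / h['wine']:.2f} wine")

# A: 0.50 wine per cloth; B: 2.00 wine per cloth. A's comparative
# advantage is cloth, B's is wine, so A exports cloth and imports wine;
# at any price between 0.5 and 2 wine per cloth, both regions gain.
```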

It has been observed that a high volume of trade occurs among regions even with access to a similar technology and mix of factor inputs, including high-income countries. This has led to investigation of economies of scale and agglomeration to explain specialization in similar but differentiated product lines, to the overall benefit of respective trading parties or regions.

The general theory of specialization applies to trade among individuals, farms, manufacturers, service providers, and economies. Among each of these production systems, there may be a corresponding division of labour with different work groups specializing, or correspondingly different types of capital equipment and differentiated land uses.

An example that combines features above is a country that specializes in the production of high-tech knowledge products, as developed countries do, and trades with developing nations for goods produced in factories where labour is relatively cheap and plentiful, resulting in differences in opportunity costs of production. More total output and utility thereby results from specializing in production and trading than if each country produced its own high-tech and low-tech products.

Theory and observation set out the conditions such that market prices of outputs and productive inputs select an allocation of factor inputs by comparative advantage, so that (relatively) low-cost inputs go to producing low-cost outputs. In the process, aggregate output may increase as a by-product or by design. Such specialization of production creates opportunities for gains from trade whereby resource owners benefit from trade in the sale of one type of output for other, more highly valued goods. A measure of gains from trade is the increased income levels that trade may facilitate.

Supply and demand

The supply and demand model describes how prices vary as a result of a balance between product availability and demand. The graph depicts an increase (that is, a right-shift) in demand from D1 to D2, along with the consequent increase in price and quantity required to reach a new equilibrium point on the supply curve (S).

Prices and quantities have been described as the most directly observable attributes of goods produced and exchanged in a market economy. The theory of supply and demand is an organizing principle for explaining how prices coordinate the amounts produced and consumed. In microeconomics, it applies to price and output determination for a market with perfect competition, which includes the condition of no buyers or sellers large enough to have price-setting power.

For a given market of a commodity, demand is the relation between the quantity that all buyers would be prepared to purchase and each possible unit price of the good. Demand is often represented by a table or a graph showing price and quantity demanded (as in the figure). Demand theory describes individual consumers as rationally choosing the most preferred quantity of each good, given income, prices, tastes, etc. A term for this is "constrained utility maximization" (with income and wealth as the constraints on demand). Here, utility refers to the hypothesized ability of each individual consumer to rank different commodity bundles as more or less preferred.

The law of demand states that, in general, price and quantity demanded in a given market are inversely related. That is, the higher the price of a product, the less of it people would be prepared to buy (other things unchanged). As the price of a commodity falls, consumers move toward it from relatively more expensive goods (the substitution effect). In addition, purchasing power from the price decline increases ability to buy (the income effect). Other factors can change demand; for example an increase in income will shift the demand curve for a normal good outward relative to the origin, as in the figure. All determinants are predominantly taken as constant factors of demand and supply.

Supply is the relation between the price of a good and the quantity available for sale at that price. It may be represented as a table or graph relating price and quantity supplied. Producers, for example business firms, are hypothesized to be profit maximizers, meaning that they attempt to produce and supply the amount of goods that will bring them the highest profit. Supply is typically represented as a function relating price and quantity, if other factors are unchanged.

That is, the higher the price at which the good can be sold, the more of it producers will supply, as in the figure. The higher price makes it profitable to increase production. Just as on the demand side, the position of the supply curve can shift, say from a change in the price of a productive input or a technical improvement. The "law of supply" states that, in general, a rise in price leads to an expansion in supply and a fall in price leads to a contraction in supply. Here as well, the determinants of supply, such as the price of substitutes, the cost of production, the technology applied, and the various factor inputs of production, are all taken to be constant for a specific time period of evaluation of supply.

Market equilibrium occurs where quantity supplied equals quantity demanded, the intersection of the supply and demand curves in the figure above. At a price below equilibrium, there is a shortage of quantity supplied compared to quantity demanded. This is posited to bid the price up. At a price above equilibrium, there is a surplus of quantity supplied compared to quantity demanded. This pushes the price down. The model of supply and demand predicts that for given supply and demand curves, price and quantity will stabilize at the price that makes quantity supplied equal to quantity demanded. Similarly, demand-and-supply theory predicts a new price-quantity combination from a shift in demand (as to the figure), or in supply.
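
The predictions in this paragraph can be sketched with assumed linear curves: solve for the equilibrium, then shift demand outward (as from D1 to D2 in the figure) and re-solve; both price and quantity rise. The coefficients are invented:

```python
# Linear demand q = a - b*p and supply q = c + d*p (coefficients invented).

def equilibrium(a, b, c, d):
    """Price and quantity where quantity demanded equals quantity supplied."""
    p = (a - c) / (b + d)
    return p, c + d * p

p1, q1 = equilibrium(a=100, b=2, c=10, d=4)      # demand curve D1
p2, q2 = equilibrium(a=130, b=2, c=10, d=4)      # D2: demand shifted right
print(f"D1: price {p1:.1f}, quantity {q1:.1f}")  # 15.0, 70.0
print(f"D2: price {p2:.1f}, quantity {q2:.1f}")  # 20.0, 90.0 -- both rise
```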

Firms

People frequently do not trade directly on markets. Instead, on the supply side, they may work in and produce through firms. The most obvious kinds of firms are corporations, partnerships and trusts. According to Ronald Coase, people begin to organize their production in firms when the cost of doing business within a firm becomes lower than the cost of doing it on the market. Firms combine labour and capital, and can achieve far greater economies of scale (when the average cost per unit declines as more units are produced) than individual market trading.
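
Economies of scale, as parenthetically defined above, can be sketched with an assumed cost structure in which a fixed cost is spread over more units (all numbers invented):

```python
# Average cost per unit falls as output rises when a fixed cost is
# spread over more units (invented numbers for the illustration).

FIXED_COST = 1000.0   # assumed plant/setup cost
UNIT_COST = 2.0       # assumed variable cost per unit

def average_cost(q):
    return (FIXED_COST + UNIT_COST * q) / q

for q in (10, 100, 1000, 10000):
    print(f"q={q:6d}: average cost {average_cost(q):8.2f}")
# 102.00, 12.00, 3.00, 2.10 -- economies of scale.
```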

In perfectly competitive markets studied in the theory of supply and demand, there are many producers, none of which significantly influence price. Industrial organization generalizes from that special case to study the strategic behaviour of firms that do have significant control of price. It considers the structure of such markets and their interactions. Common market structures studied besides perfect competition include monopolistic competition, various forms of oligopoly, and monopoly.

Managerial economics applies microeconomic analysis to specific decisions in business firms or other management units. It draws heavily from quantitative methods such as operations research and programming and from statistical methods such as regression analysis in the absence of certainty and perfect knowledge. A unifying theme is the attempt to optimize business decisions, including unit-cost minimization and profit maximization, given the firm's objectives and constraints imposed by technology and market conditions.

Uncertainty and game theory

Uncertainty in economics is an unknown prospect of gain or loss, whether quantifiable as risk or not. Without it, household behaviour would be unaffected by uncertain employment and income prospects, financial and capital markets would reduce to exchange of a single instrument in each market period, and there would be no communications industry. Given its different forms, there are various ways of representing uncertainty and modelling economic agents' responses to it.

Game theory is a branch of applied mathematics that considers strategic interactions between agents, one kind of uncertainty. It provides a mathematical foundation of industrial organization, discussed above, to model different types of firm behaviour, for example in an oligopolistic industry (few sellers), but equally applicable to wage negotiations, bargaining, contract design, and any situation where individual agents are few enough to have perceptible effects on each other. In behavioural economics, it has been used to model the strategies agents choose when interacting with others whose interests are at least partially adverse to their own.

In this, it generalizes maximization approaches developed to analyse market actors such as in the supply and demand model and allows for incomplete information of actors. The field dates from the 1944 classic Theory of Games and Economic Behavior by John von Neumann and Oskar Morgenstern. It has significant applications seemingly outside of economics in such diverse subjects as formulation of nuclear strategies, ethics, political science, and evolutionary biology.

Risk aversion may stimulate activity that in well-functioning markets smooths out risk and communicates information about risk, as in markets for insurance, commodity futures contracts, and financial instruments. Financial economics or simply finance describes the allocation of financial resources. It also analyses the pricing of financial instruments, the financial structure of companies, the efficiency and fragility of financial markets, financial crises, and related government policy or regulation.

Some market organizations may give rise to inefficiencies associated with uncertainty. Based on George Akerlof's "Market for Lemons" article, the paradigm example is of a dodgy second-hand car market. Customers without knowledge of whether a car is a "lemon" depress its price below what a quality second-hand car would be. Information asymmetry arises here, if the seller has more relevant information than the buyer but no incentive to disclose it. Related problems in insurance are adverse selection, such that those at most risk are most likely to insure (say reckless drivers), and moral hazard, such that insurance results in riskier behaviour (say more reckless driving).
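
A toy unravelling sketch of the "lemons" logic under invented valuations: buyers who cannot observe quality will pay only the expected value of the cars actually offered, which drives better cars off the market round after round:

```python
# "Market for Lemons" sketch: car quality is uniform on [0, 1]; sellers
# know quality, buyers do not. Sellers only sell if the price beats
# their reservation value (= quality), so only cars with quality <= price
# are offered and average quality on the market is price / 2. Buyers
# value a car at 1.5x its quality. All numbers are illustrative.

price = 0.75   # arbitrary starting offer
for round_ in range(200):
    avg_quality = price / 2
    new_price = 1.5 * avg_quality   # buyers pay their expected value
    if abs(new_price - price) < 1e-9:
        break
    price = new_price

print(f"price converges to {price:.4f}")  # -> 0: the market unravels
```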

Both problems may raise insurance costs and reduce efficiency by driving otherwise willing transactors from the market ("incomplete markets"). Moreover, attempting to reduce one problem, say adverse selection by mandating insurance, may add to another, say moral hazard. Information economics, which studies such problems, has relevance in subjects such as insurance, contract law, mechanism design, monetary economics, and health care. Applied subjects include market and legal remedies to spread or reduce risk, such as warranties, government-mandated partial insurance, restructuring or bankruptcy law, inspection, and regulation for quality and information disclosure.

Market failure

Pollution can be a simple example of market failure. If costs of production are not borne by producers but are instead borne by the environment, accident victims, or others, then prices are distorted.

The term "market failure" encompasses several problems which may undermine standard economic assumptions. Although economists categorize market failures differently, the following categories emerge in the main texts.

Information asymmetries and incomplete markets may result in economic inefficiency but also a possibility of improving efficiency through market, legal, and regulatory remedies, as discussed above.

Natural monopoly, or the overlapping concepts of "practical" and "technical" monopoly, is an extreme case of failure of competition as a restraint on producers. Extreme economies of scale are one possible cause.

Public goods are goods which are under-supplied in a typical market. The defining features are that people can consume public goods without having to pay for them and that more than one person can consume the good at the same time.

Externalities occur where there are significant social costs or benefits from production or consumption that are not reflected in market prices. For example, air pollution may generate a negative externality, and education may generate a positive externality (less crime, etc.). Governments often tax and otherwise restrict the sale of goods that have negative externalities and subsidize or otherwise promote the purchase of goods that have positive externalities in an effort to correct the price distortions caused by these externalities.

Elementary demand-and-supply theory predicts equilibrium but not the speed of adjustment for changes of equilibrium due to a shift in demand or supply. In many areas, some form of price stickiness is postulated to account for quantities, rather than prices, adjusting in the short run to changes on the demand side or the supply side. This includes standard analysis of the business cycle in macroeconomics. Analysis often revolves around causes of such price stickiness and their implications for reaching a hypothesized long-run equilibrium. Examples of such price stickiness in particular markets include wage rates in labour markets and posted prices in markets deviating from perfect competition.

Some specialized fields of economics deal in market failure more than others. The economics of the public sector is one example. Much environmental economics concerns externalities or "public bads".

Policy options include regulations that reflect cost-benefit analysis or market solutions that change incentives, such as emission fees or redefinition of property rights.

Welfare

Welfare economics uses microeconomic techniques to evaluate well-being from allocation of productive factors as to desirability and economic efficiency within an economy, often relative to competitive general equilibrium. It analyzes social welfare, however measured, in terms of economic activities of the individuals that compose the theoretical society considered. Accordingly, individuals, with associated economic activities, are the basic units for aggregating to social welfare, whether of a group, a community, or a society, and there is no "social welfare" apart from the "welfare" associated with its individual units.

Macroeconomics

Macroeconomics examines the economy as a whole to explain broad aggregates and their interactions "top down", that is, using a simplified form of general-equilibrium theory. Such aggregates include national income and output, the unemployment rate, and price inflation and subaggregates like total consumption and investment spending and their components. It also studies effects of monetary policy and fiscal policy.

Since at least the 1960s, macroeconomics has been characterized by further integration as to micro-based modelling of sectors, including rationality of players, efficient use of market information, and imperfect competition. This has addressed a long-standing concern about inconsistent developments of the same subject.

Macroeconomic analysis also considers factors affecting the long-term level and growth of national income. Such factors include capital accumulation, technological change and labour force growth.

Growth

Growth economics studies factors that explain economic growth – the increase in output per capita of a country over a long period of time. The same factors are used to explain differences in the level of output per capita between countries, in particular why some countries grow faster than others, and whether countries converge at the same rates of growth.

Much-studied factors include the rate of investment, population growth, and technological change. These are represented in theoretical and empirical forms (as in the neoclassical and endogenous growth models) and in growth accounting.

Business cycle

A basic illustration of economic/business cycles

The economics of depressions was the spur for the creation of "macroeconomics" as a separate discipline. During the Great Depression of the 1930s, John Maynard Keynes authored a book entitled The General Theory of Employment, Interest and Money, outlining the key theories of Keynesian economics. Keynes contended that aggregate demand for goods might be insufficient during economic downturns, leading to unnecessarily high unemployment and losses of potential output.

He therefore advocated active policy responses by the public sector, including monetary policy actions by the central bank and fiscal policy actions by the government to stabilize output over the business cycle. Thus, a central conclusion of Keynesian economics is that, in some situations, no strong automatic mechanism moves output and employment towards full employment levels. John Hicks' IS/LM model has been the most influential interpretation of The General Theory.

Over the years, understanding of the business cycle has branched into various research programmes, mostly related to or distinct from Keynesianism. The neoclassical synthesis refers to the reconciliation of Keynesian economics with neoclassical economics, stating that Keynesianism is correct in the short run but qualified by neoclassical-like considerations in the intermediate and long run.

New classical macroeconomics, as distinct from the Keynesian view of the business cycle, posits market clearing with imperfect information. It includes Friedman's permanent income hypothesis on consumption and "rational expectations" theory, led by Robert Lucas, and real business cycle theory.

In contrast, the new Keynesian approach retains the rational expectations assumption; however, it assumes a variety of market failures. In particular, New Keynesians assume that prices and wages are "sticky", which means they do not adjust instantaneously to changes in economic conditions.

Thus, the new classicals assume that prices and wages adjust automatically to attain full employment, whereas the new Keynesians see full employment as being automatically achieved only in the long run, and hence government and central-bank policies are needed because the "long run" may be very long.

Unemployment

US unemployment rate, 1990–2021

The amount of unemployment in an economy is measured by the unemployment rate, the percentage of workers without jobs in the labour force. The labour force only includes workers actively looking for jobs. People who are retired, pursuing education, or discouraged from seeking work by a lack of job prospects are excluded from the labour force. Unemployment can be generally broken down into several types that are related to different causes.
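Writing U for the number of unemployed workers and E for the number of employed workers (notation introduced here for illustration), the definition above amounts to

    u = \frac{U}{E + U} \times 100\%

where the denominator E + U is the labour force, which excludes the retired, students, and the discouraged workers described above.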

In classical models, unemployment occurs when wages are too high for employers to be willing to hire more workers. Consistent with classical unemployment, frictional unemployment occurs when appropriate job vacancies exist for a worker, but the length of time needed to search for and find the job leads to a period of unemployment.

Structural unemployment covers a variety of possible causes of unemployment, including a mismatch between workers' skills and the skills required for open jobs. Large amounts of structural unemployment can occur when an economy is transitioning industries and workers find their previous set of skills is no longer in demand. Structural unemployment is similar to frictional unemployment in that both reflect the problem of matching workers with job vacancies, but structural unemployment covers the time needed to acquire new skills, not just the short-term search process.

While some types of unemployment may occur regardless of the condition of the economy, cyclical unemployment occurs when growth stagnates. Okun's law represents the empirical relationship between unemployment and economic growth. The original version of Okun's law states that a 3% increase in output would lead to a 1% decrease in unemployment.
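Stated as a rough empirical rule (a sketch of the 3-to-1 relationship above; the coefficient is an estimate that varies by country and period, and fuller specifications measure output growth relative to trend):

    \Delta u \approx -\frac{1}{3} \left( \frac{\Delta Y}{Y} \times 100 \right)

so 3% output growth corresponds to roughly a one-percentage-point fall in the unemployment rate u.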

Inflation and monetary policy

Money is a means of final payment for goods in most price system economies, and is the unit of account in which prices are typically stated. Money has general acceptability, relative consistency in value, divisibility, durability, portability, elasticity in supply, and longevity with mass public confidence. It includes currency held by the nonbank public and checkable deposits. It has been described as a social convention, like language, useful to one largely because it is useful to others. In the words of Francis Amasa Walker, a well-known 19th-century economist, "Money is what money does" ("Money is that money does" in the original).

As a medium of exchange, money facilitates trade. It is a measure of value and, more importantly, a store of value, which makes it a basis for credit creation. Its economic function can be contrasted with barter (non-monetary exchange). Given a diverse array of produced goods and specialized producers, barter may entail a hard-to-locate double coincidence of wants as to what is exchanged, say apples and a book. Money can reduce the transaction cost of exchange because of its ready acceptability. It is then less costly for the seller to accept money in exchange, rather than what the buyer produces.

At the level of an economy, theory and evidence are consistent with a positive relationship running from the total money supply to the nominal value of total output and to the general price level. For this reason, management of the money supply is a key aspect of monetary policy.
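One standard formalization of this relationship (not derived in the text above, but classical in monetary economics) is the equation of exchange:

    M V = P Y

where M is the money supply, V the velocity of money, P the price level, and Y real output. Read as an identity, it says nominal spending equals nominal output; if V and Y are roughly stable, sustained growth in M shows up in nominal output P Y and ultimately in the price level.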

Fiscal policy

Governments implement fiscal policy to influence macroeconomic conditions by adjusting spending and taxation policies to alter aggregate demand. When aggregate demand falls below the potential output of the economy, there is an output gap where some productive capacity is left unemployed. Governments increase spending and cut taxes to boost aggregate demand. Resources that have been idled can be used by the government.

For example, unemployed home builders can be hired to expand highways. Tax cuts allow consumers to increase their spending, which boosts aggregate demand. Both tax cuts and spending have multiplier effects where the initial increase in demand from the policy percolates through the economy and generates additional economic activity.
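A minimal textbook sketch of the multiplier (assuming a closed economy in which households spend a constant fraction c of each additional unit of income, an assumption introduced here for illustration): an initial increase in government spending \Delta G raises total demand by

    \Delta Y = \Delta G \,(1 + c + c^2 + \cdots) = \frac{\Delta G}{1 - c}

so with a marginal propensity to consume of c = 0.8, one unit of new spending would generate roughly five units of total demand under these simplifying assumptions.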

The effects of fiscal policy can be limited by crowding out. When there is no output gap, the economy is producing at full capacity and there are no excess productive resources. If the government increases spending in this situation, the government uses resources that otherwise would have been used by the private sector, so there is no increase in overall output. Some economists think that crowding out is always an issue while others do not think it is a major issue when output is depressed.

Sceptics of fiscal policy also make the argument of Ricardian equivalence. They argue that an increase in debt will have to be paid for with future tax increases, which will cause people to reduce their consumption and save money to pay for the future tax increase. Under Ricardian equivalence, any boost in demand from tax cuts will be offset by the increased saving intended to pay for future higher taxes.

Public economics

Public economics is the field of economics that deals with economic activities of a public sector, usually government. The subject addresses such matters as tax incidence (who really pays a particular tax), cost-benefit analysis of government programmes, effects on economic efficiency and income distribution of different kinds of spending and taxes, and fiscal politics. The latter, an aspect of public choice theory, models public-sector behaviour analogously to microeconomics, involving interactions of self-interested voters, politicians, and bureaucrats.

Much of economics is positive, seeking to describe and predict economic phenomena. Normative economics seeks to identify what economies ought to be like.

Welfare economics is a normative branch of economics that uses microeconomic techniques to simultaneously determine the allocative efficiency within an economy and the income distribution associated with it. It attempts to measure social welfare by examining the economic activities of the individuals that comprise society.

International economics

List of countries by GDP (PPP) per capita in 2014

International trade studies determinants of goods-and-services flows across international boundaries. It also concerns the size and distribution of gains from trade. Policy applications include estimating the effects of changing tariff rates and trade quotas. International finance is a macroeconomic field which examines the flow of capital across international borders, and the effects of these movements on exchange rates. Increased trade in goods, services and capital between countries is a major effect of contemporary globalization.

Labor economics

Labor economics seeks to understand the functioning and dynamics of the markets for wage labor. Labor markets function through the interaction of workers and employers. Labor economics looks at the suppliers of labor services (workers) and the demanders of labor services (employers), and attempts to understand the resulting pattern of wages, employment, and income. In economics, labor is a measure of the work done by human beings. It is conventionally contrasted with such other factors of production as land and capital. There are theories which have developed a concept called human capital (referring to the skills that workers possess, not necessarily their actual work), although there are also counterposing macroeconomic theories that regard human capital as a contradiction in terms.

Development economics

Development economics examines the economic aspects of the development process in relatively low-income countries, focusing on structural change, poverty, and economic growth. Approaches in development economics frequently incorporate social and political factors.

Agreements

According to various random and anonymous surveys of members of the American Economic Association, economists agree with the following propositions by the percentages shown:

  1. A ceiling on rents reduces the quantity and quality of housing available. (93% agree)
  2. Tariffs and import quotas usually reduce general economic welfare. (93% agree)
  3. Flexible and floating exchange rates offer an effective international monetary arrangement. (90% agree)
  4. Fiscal policy (e.g., tax cut and/or government expenditure increase) has a significant stimulative impact on a less than fully employed economy. (90% agree)
  5. The United States should not restrict employers from outsourcing work to foreign countries. (90% agree)
  6. Economic growth in developed countries like the United States leads to greater levels of well-being. (88% agree)
  7. The United States should eliminate agricultural subsidies. (85% agree)
  8. An appropriately designed fiscal policy can increase the long-run rate of capital formation. (85% agree)
  9. Local and state governments should eliminate subsidies to professional sports franchises. (85% agree)
  10. If the federal budget is to be balanced, it should be done over the business cycle rather than yearly. (85% agree)
  11. The gap between Social Security funds and expenditures will become unsustainably large within the next fifty years if current policies remain unchanged. (85% agree)
  12. Cash payments increase the welfare of recipients to a greater degree than do transfers-in-kind of equal cash value. (84% agree)
  13. A large federal budget deficit has an adverse effect on the economy. (83% agree)
  14. The redistribution of income in the United States is a legitimate role for the government. (83% agree)
  15. Inflation is caused primarily by too much growth in the money supply. (83% agree)
  16. The United States should not ban genetically modified crops. (82% agree)
  17. A minimum wage increases unemployment among young and unskilled workers. (79% agree)
  18. The government should restructure the welfare system along the lines of a "negative income tax". (79% agree)
  19. Effluent taxes and marketable pollution permits represent a better approach to pollution control than imposition of pollution ceilings. (78% agree)
  20. Government subsidies on ethanol in the United States should be reduced or eliminated. (78% agree)

Criticisms

General criticisms

"The dismal science" is a derogatory alternative name for economics devised by the Victorian historian Thomas Carlyle in the 19th century. It is often stated that Carlyle gave economics the nickname "the dismal science" as a response to the late 18th century writings of The Reverend Thomas Robert Malthus, who grimly predicted that starvation would result, as projected population growth exceeded the rate of increase in the food supply. However, the actual phrase was coined by Carlyle in the context of a debate with John Stuart Mill on slavery, in which Carlyle argued for slavery, while Mill opposed it.

In The Wealth of Nations, Adam Smith addressed many issues that are currently also the subject of debate and dispute. Smith repeatedly attacks groups of politically aligned individuals who attempt to use their collective influence to manipulate a government into doing their bidding. In Smith's day, these were referred to as factions, but are now more commonly called special interests, a term which can comprise international bankers, corporate conglomerations, outright oligopolies, monopolies, trade unions and other groups.

Economics per se, as a social science, is independent of the political acts of any government or other decision-making organization; however, many policymakers or individuals holding highly ranked positions that can influence other people's lives are known for arbitrarily using a plethora of economic concepts and rhetoric as vehicles to legitimize agendas and value systems, and do not limit their remarks to matters relevant to their responsibilities. The close relation of economic theory and practice with politics is a focus of contention that may shade or distort the most unpretentious original tenets of economics, and is often confused with specific social agendas and value systems.

Notwithstanding, economics legitimately has a role in informing government policy. It is, indeed, in some ways an outgrowth of the older field of political economy. Some academic economic journals have increased their efforts to gauge the consensus of economists regarding certain policy issues in hopes of effecting a more informed political environment. Often there exists a low approval rate from professional economists regarding many public policies. Policy issues featured in one survey of American Economic Association economists include trade restrictions, social insurance for those put out of work by international competition, genetically modified foods, curbside recycling, health insurance (several questions), medical malpractice, barriers to entering the medical profession, organ donations, unhealthy foods, mortgage deductions, taxing internet sales, Wal-Mart, casinos, ethanol subsidies, and inflation targeting.

Issues like central bank independence, central bank policies and rhetoric in central bank governors' discourse, or the premises of macroeconomic policies (monetary and fiscal policy) of the state, are a focus of contention and criticism.

Deirdre McCloskey has argued that many empirical economic studies are poorly reported, and she and Stephen Ziliak argue that although her critique has been well-received, practice has not improved. This latter contention is controversial.

Criticisms of assumptions

Economics has historically been subject to criticism that it relies on unrealistic, unverifiable, or highly simplified assumptions, in some cases because these assumptions simplify the proofs of desired conclusions. Examples of such assumptions include perfect information, profit maximization and rational choices, axioms of neoclassical economics. Such criticisms often conflate neoclassical economics with all of contemporary economics. The field of information economics includes both mathematical-economical research and also behavioural economics, akin to studies in behavioural psychology, and confounding factors to the neoclassical assumptions are the subject of substantial study in many areas of economics.

Prominent historical mainstream economists such as Keynes and Joskow observed that much of the economics of their time was conceptual rather than quantitative, and difficult to model and formalize quantitatively. In a discussion on oligopoly research, Paul Joskow pointed out in 1975 that in practice, serious students of actual economies tended to use "informal models" based upon qualitative factors specific to particular industries. Joskow had a strong feeling that the important work in oligopoly was done through informal observations while formal models were "trotted out ex post". He argued that formal models were largely not important in the empirical work, either, and that the fundamental factor behind the theory of the firm, behaviour, was neglected. Woodford noted in 2009 that this was no longer the case, and that modelling had improved significantly in both theoretical rigour and empiricism, with a strong focus on testable quantitative work.

In the 1990s, feminist critiques of neoclassical economic models gained prominence, leading to the formation of feminist economics. Feminist economists call attention to the social construction of economics and claim to highlight the ways in which its models and methods reflect masculine preferences. Primary criticisms focus on alleged failures to account for: the selfish nature of actors (homo economicus); exogenous tastes; the impossibility of utility comparisons; the exclusion of unpaid work; and the exclusion of class and gender considerations.

Related subjects

Economics is one social science among several and has fields bordering on other areas, including economic geography, economic history, public choice, energy economics, cultural economics, family economics and institutional economics.

Law and economics, or economic analysis of law, is an approach to legal theory that applies methods of economics to law. It includes the use of economic concepts to explain the effects of legal rules, to assess which legal rules are economically efficient, and to predict what the legal rules will be. A seminal article by Ronald Coase published in 1961 suggested that well-defined property rights could overcome the problems of externalities.

Political economy is the interdisciplinary study that combines economics, law, and political science in explaining how political institutions, the political environment, and the economic system (capitalist, socialist, mixed) influence each other. It studies questions such as how monopoly, rent-seeking behaviour, and externalities should impact government policy. Historians have employed political economy to explore the ways in which, in the past, persons and groups with common economic interests have used politics to effect changes beneficial to their interests.

Energy economics is a broad scientific subject area which includes topics related to energy supply and energy demand. Georgescu-Roegen reintroduced the concept of entropy in relation to economics and energy from thermodynamics, as distinguished from what he viewed as the mechanistic foundation of neoclassical economics drawn from Newtonian physics. His work contributed significantly to thermoeconomics and to ecological economics. He also did foundational work which later developed into evolutionary economics.

The sociological subfield of economic sociology arose, primarily through the work of Émile Durkheim, Max Weber and Georg Simmel, as an approach to analysing the effects of economic phenomena in relation to the overarching social paradigm (i.e. modernity). Classic works include Max Weber's The Protestant Ethic and the Spirit of Capitalism (1905) and Georg Simmel's The Philosophy of Money (1900). More recently, the works of Mark Granovetter, Peter Hedstrom and Richard Swedberg have been influential in this field.

Profession

The professionalization of economics, reflected in the growth of graduate programmes on the subject, has been described as "the main change in economics since around 1900". Most major universities and many colleges have a major, school, or department in which academic degrees are awarded in the subject, whether in the liberal arts, business, or for professional study. See Bachelor of Economics and Master of Economics.

In the private sector, professional economists are employed as consultants and in industry, including banking and finance. Economists also work for various government departments and agencies, for example, the national treasury, central bank or National Bureau of Statistics.

There are dozens of prizes awarded to economists each year for outstanding intellectual contributions to the field, the most prominent of which is the Nobel Memorial Prize in Economic Sciences, though it is not a Nobel Prize.

Contemporary economics uses mathematics. Economists draw on the tools of calculus, linear algebra, statistics, game theory, and computer science. Professional economists are expected to be familiar with these tools, while a minority specialize in econometrics and mathematical methods.


Basket of deplorables

From Wikipedia, the free encyclopedia
 
"Basket of deplorables" is a phrase from a 2016 presidential election campaign speech delivered by Democratic nominee Hillary Clinton on September 9, 2016, at a campaign fundraising event, which she used to describe half of the supporters of her opponent, Republican nominee Donald Trump saying "They're racist, sexist, homophobic, xenophobic, Islamophobic". The next day, she expressed regret for "saying half", while insisting that Trump had deplorably amplified "hateful views and voices".

The Trump campaign repeatedly used the phrase against Clinton during and after the 2016 presidential election. Many Trump supporters adopted the "deplorable" moniker for themselves in reappropriation. Some journalists and political analysts questioned whether this incident played a role in the election's outcome. Clinton admitted in her 2017 book What Happened that it was one of the factors for her loss.

Background

Throughout her presidential campaign, Hillary Clinton expressed her concerns regarding Donald Trump and his supporters. The New York Times and CNN cited Clinton's earlier articulation of similar ideas to the phrase in her August 25, 2016 campaign speech at a rally in Reno, Nevada. In that speech, Clinton had criticized Trump's campaign for using "racist lies" and allowing the alt-right to gain prominence, claiming that Trump was "taking hate groups mainstream and helping a radical fringe take over the Republican Party". Clinton also criticized Trump for choosing Steve Bannon as his chief executive officer, especially given Bannon's role as the executive chair of the far-right news website Breitbart News. Clinton read various headlines from the site, including "Would You Rather Your Child Had Feminism or Cancer?" and "Hoist It High and Proud: The Confederate Flag Proclaims a Glorious Heritage". On that same day, Clinton posted a video on Twitter depicting white supremacists supporting Donald Trump. Within the video is a CNN interview wherein Trump initially declined to disavow white nationalist David Duke.

During campaign fundraisers in August 2016, Clinton reportedly explained her divide and conquer approach to courting Republican voters by putting Trump supporters into two "baskets": everyday Republicans whom she would target and the alt-right crowd. During a September 8, 2016, interview on Israel's Channel 2, Clinton said: "You can take Trump supporters and put them in two big baskets. There are what I would call the deplorables—you know, the racists and the haters, and the people who are drawn because they think somehow he's going to restore an America that no longer exists."

Speech

At an LGBT campaign fundraising event in New York City on September 9, Clinton gave a speech and said the following:

I know there are only 60 days left to make our case – and don't get complacent; don't see the latest outrageous, offensive, inappropriate comment and think, "Well, he's done this time." We are living in a volatile political environment.

You know, to just be grossly generalistic, you could put half of Trump's supporters into what I call the basket of deplorables. (Laughter/applause) Right? (Laughter/applause) They're racist, sexist, homophobic, xenophobic, Islamophobic – you name it. And unfortunately, there are people like that. And he has lifted them up. He has given voice to their websites that used to only have 11,000 people – now have 11 million. He tweets and retweets their offensive hateful mean-spirited rhetoric. Now, some of those folks – they are irredeemable, but thankfully, they are not America.

But the "other" basket – the other basket – and I know because I look at this crowd I see friends from all over America here: I see friends from Florida and Georgia and South Carolina and Texas and – as well as, you know, New York and California – but that "other" basket of people are people who feel the government has let them down, the economy has let them down, nobody cares about them, nobody worries about what happens to their lives and their futures; and they're just desperate for change. It doesn't really even matter where it comes from. They don't buy everything he says, but – he seems to hold out some hope that their lives will be different. They won't wake up and see their jobs disappear, lose a kid to heroin, feel like they're in a dead-end. Those are people we have to understand and empathize with as well.

— Hillary Clinton, CBS News

Clinton response

The following day Clinton expressed regret for "saying half", while insisting that Trump had deplorably amplified "hateful views and voices". At the second presidential debate in October 2016, after Trump mentioned the speech in a response to James Carter, debate moderator Anderson Cooper asked Clinton: "How can you unite a country if you've written off tens of millions of Americans?" Clinton responded to Cooper's question by saying: "My argument is not with his supporters, it's with him and the hateful, divisive campaign he has run".

On October 20, 2016, during the 71st Alfred E. Smith Memorial Foundation Dinner, Clinton joked about the phrase, telling the guests: "I just want to put you all in a basket of adorables".

Clinton campaign

Clinton's campaign pointed to a series of polls that showed that some of Trump's supporters held negative views toward Latinos, African Americans, and Muslims. Clinton's campaign used the incident to try to force Trump's campaign members to denounce its extreme supporters. For example, after Mike Pence refused to call David Duke "deplorable", Clinton's running mate Tim Kaine accused Pence of enabling racism and xenophobia.

Trump response

Donald Trump criticized Clinton's remark as insulting his supporters. At a rally in Des Moines, Iowa, Trump stated: "While my opponent slanders you as deplorable and irredeemable, I call you hardworking American patriots who love your country". During the rest of the election, Trump invited "deplorable Americans" on stage. For example, at a rally in Miami, Florida, on September 16, 2016, Trump parodied the musical Les Misérables under the title Les Déplorables, to the song "Do You Hear the People Sing?". Trump also used the label against Clinton in an advertisement, which claimed that Clinton herself is deplorable because she "viciously demoniz[es] hard working people like you". On November 8, 2017, one year after the election, Trump thanked his "deplorables" for his victory.

Trump campaign

Mike Pence responding to Clinton's comments

Others in Trump's campaign responded negatively to the statement. Trump's running mate Mike Pence stated in a Capitol Hill meeting: "For Hillary Clinton to express such disdain for millions of Americans is one more reason that disqualifies her to serve in the highest office." Kellyanne Conway, Trump's campaign manager, issued a statement on Twitter, claiming: "One day after promising to be aspirational & uplifting, Hillary insults millions of Americans."

Meanwhile, Roger Stone and Donald Trump Jr. posted a parody movie poster of The Expendables on Twitter and Instagram titled "The Deplorables", which included Pepe the Frog's face among those of members of the Trump family and other right-wing figures.

In the final months of the election, the Trump campaign released official merchandise with the word "deplorable".

Trump supporters

Trump supporter at the Conservative Political Action Conference in 2018, wearing a hat with the phrase "Proud to be deplorable"

During and after the election, the "deplorables" nickname was reappropriated by many Trump supporters. Weeks before Trump's inauguration, various celebrations were held using the word "deplorable". One notable celebration was the DeploraBall, held by Trump supporters and several members of the right at the National Press Building from January 19 to January 20, 2017.

Analysis

The day after Hillary Clinton's speech, some political analysts compared the statement to Mitt Romney's 47% gaffe in 2012.

After the election, Diane Hessan, who had been hired by the Clinton campaign to track undecided voters, wrote in The Boston Globe that "all hell broke loose" after the "basket of deplorables" comment, which prompted what she saw as the largest shift of undecided voters towards Trump. Political scientist Charles Murray, endorsing a similar judgment attributed to Jonathan Haidt, said in a post-election interview with Sam Harris that because the comment helped get Donald Trump elected, it had "changed the history of the world, and he [Haidt] may very well be right. That one comment by itself may have swung enough votes, it certainly was emblematic of the disdain with which the New Upper Class looks at mainstream Americans."

In an interview with CNN on December 4, 2016, Hillary Clinton's campaign manager Robby Mook said that the statement "definitely could have alienated" her voters. Meanwhile, Courtney Weaver of Financial Times believed that Clinton's comment had no effect on the election, stating: "To argue that one word cost Mrs Clinton the election is foolish." However, Weaver acknowledged that the statement "did not hurt her opponent". James Taranto of The Wall Street Journal wrote that Clinton only stated that she regretted saying half without indicating whether she underestimated or overestimated.

Spectator columnist Charles Moore compared the impact of the statement to that of British left-winger Nye Bevan's comments disparaging British Conservative Party members as "lower than vermin" in 1948. Moore noted that these remarks sparked a similar outrage and led to the formation of the "Vermin Club".

Some Trump opponents turned the phrase against the Trump administration; for example, Time writer Darlena Cunha opined that several members nominated for Trump's cabinet were a "basket of deplorables" spreading racism, Islamophobia, and antisemitism.

In her 2017 book What Happened, Clinton said that her comments on the "basket of deplorables" were a factor in her electoral loss.

Politico's Rich Lowry describes conservatives' interpretation of Clinton's use of the term as "an unfair, disparaging term for people who believe reasonable but politically incorrect things (immigration should be restricted, NFL players should stand during the national anthem, All Lives Matter, etc.)".

The Structure of Scientific Revolutions

From Wikipedia, the free encyclopedia
 
Cover of the first edition
Author: Thomas S. Kuhn
Cover artist: Ted Lacey
Country: United States
Language: English
Subject: History of science
Publisher: University of Chicago Press
Publication date: 1962
Media type: Print (hardcover and paperback)
Pages: 264
ISBN: 9780226458113
Dewey Decimal: 501
LC Class: Q175.K95

The Structure of Scientific Revolutions (1962; second edition 1970; third edition 1996; fourth edition 2012) is a book about the history of science by the philosopher Thomas S. Kuhn. Its publication was a landmark event in the history, philosophy, and sociology of science. Kuhn challenged the then prevailing view of progress in science, in which scientific progress was viewed as "development-by-accumulation" of accepted facts and theories. Kuhn argued instead for an episodic model in which periods of conceptual continuity and cumulative progress, which he referred to as periods of "normal science", are interrupted by periods of revolutionary science. The accumulation of "anomalies", observations that the reigning paradigm cannot accommodate, precipitates these revolutions and leads to new paradigms. New paradigms then ask new questions of old data, move beyond the mere "puzzle-solving" of the previous paradigm, and change the rules of the game and the "map" directing new research.

For example, Kuhn's analysis of the Copernican Revolution emphasized that, in its beginning, it did not offer more accurate predictions of celestial events, such as planetary positions, than the Ptolemaic system, but instead appealed to some practitioners based on a promise of better, simpler solutions that might be developed at some point in the future. Kuhn called the core concepts of an ascendant revolution its "paradigms" and thereby launched this word into widespread analogical use in the second half of the 20th century. Kuhn's insistence that a paradigm shift was a mélange of sociology, enthusiasm and scientific promise, but not a logically determinate procedure, caused an uproar in reaction to his work. Kuhn addressed concerns in the 1969 postscript to the second edition. For some commentators The Structure of Scientific Revolutions introduced a realistic humanism into the core of science, while for others the nobility of science was tarnished by Kuhn's introduction of an irrational element into the heart of its greatest achievements.

History

The Structure of Scientific Revolutions was first published as a monograph in the International Encyclopedia of Unified Science, then as a book by University of Chicago Press in 1962. In 1969, Kuhn added a postscript to the book in which he replied to critical responses to the first edition. A 50th Anniversary Edition (with an introductory essay by Ian Hacking) was published by the University of Chicago Press in April 2012.

Kuhn dated the genesis of his book to 1947, when he was a graduate student at Harvard University and had been asked to teach a science class for humanities undergraduates with a focus on historical case studies. Kuhn later commented that until then, "I'd never read an old document in science." Aristotle's Physics was astonishingly unlike Isaac Newton's work in its concepts of matter and motion. Kuhn wrote "... as I was reading him, Aristotle appeared not only ignorant of mechanics, but a dreadfully bad physical scientist as well. About motion, in particular, his writings seemed to me full of egregious errors, both of logic and of observation." This was in apparent contradiction with the fact that Aristotle was a brilliant mind. While perusing Aristotle's Physics, Kuhn formed the view that in order to properly appreciate Aristotle's reasoning, one must be aware of the scientific conventions of the time. Kuhn concluded that Aristotle's concepts were not "bad Newton," just different. This insight was the foundation of The Structure of Scientific Revolutions.

Prior to the publication of Kuhn's book, a number of ideas regarding the process of scientific investigation and discovery had already been proposed. Ludwik Fleck developed the first system of the sociology of scientific knowledge in his book The Genesis and Development of a Scientific Fact (1935). He claimed that the exchange of ideas led to the establishment of a thought collective, which, when developed sufficiently, served to separate the field into esoteric (professional) and exoteric (laymen) circles. Kuhn wrote the foreword to the 1979 edition of Fleck's book, noting that he read it in 1950 and was reassured that someone "saw in the history of science what I myself was finding there."

Kuhn was not confident about how his book would be received. Harvard University had denied him tenure a few years prior. However, by the mid-1980s, his book had achieved blockbuster status. When Kuhn's book came out in the early 1960s, "structure" was an intellectually popular word in many fields in the humanities and social sciences, including linguistics and anthropology, appealing in its idea that complex phenomena could reveal or be studied through basic, simpler structures. Kuhn's book contributed to that idea.

One theory to which Kuhn replies directly is Karl Popper's “falsificationism,” which stresses falsifiability as the most important criterion for distinguishing between that which is scientific and that which is unscientific. Kuhn also addresses verificationism, a philosophical movement that emerged in the 1920s among logical positivists. The verifiability principle claims that meaningful statements must be supported by empirical evidence or logical requirements.

Synopsis

Basic approach

Kuhn's approach to the history and philosophy of science focuses on conceptual issues like the practice of normal science, influence of historical events, emergence of scientific discoveries, nature of scientific revolutions and progress through scientific revolutions. What sorts of intellectual options and strategies were available to people during a given period? What types of lexicons and terminology were known and employed during certain epochs? Stressing the importance of not attributing traditional thought to earlier investigators, Kuhn's book argues that the evolution of scientific theory does not emerge from the straightforward accumulation of facts, but rather from a set of changing intellectual circumstances and possibilities. Such an approach is largely commensurate with the general historical school of non-linear history.

Kuhn did not see scientific theory as proceeding linearly from an objective, unbiased accumulation of all available data, but rather as paradigm-driven. “The operations and measurements that a scientist undertakes in the laboratory are not ‘the given’ of experience but rather ‘the collected with difficulty.’ They are not what the scientist sees—at least not before his research is well advanced and his attention focused. Rather, they are concrete indices to the content of more elementary perceptions, and as such they are selected for the close scrutiny of normal research only because they promise opportunity for the fruitful elaboration of an accepted paradigm. Far more clearly than the immediate experience from which they in part derive, operations and measurements are paradigm-determined. Science does not deal in all possible laboratory manipulations. Instead, it selects those relevant to the juxtaposition of a paradigm with the immediate experience that that paradigm has partially determined. As a result, scientists with different paradigms engage in different concrete laboratory manipulations.” 

Historical examples of chemistry

Kuhn explains his ideas using examples taken from the history of science. For instance, eighteenth-century scientists believed that homogeneous solutions were chemical compounds. Therefore, a combination of water and alcohol was generally classified as a compound. Nowadays it is considered to be a solution, but there was no reason then to suspect that it was not a compound. Water and alcohol would not separate spontaneously, nor would they separate completely upon distillation (they form an azeotrope). Water and alcohol can be combined in any proportion.

Under this paradigm, scientists believed that chemical reactions (such as the combination of water and alcohol) did not necessarily occur in fixed proportion. This belief was ultimately overturned by Dalton's atomic theory, which asserted that atoms can only combine in simple, whole-number ratios. Under this new paradigm, any reaction which did not occur in fixed proportion could not be a chemical process. This type of world-view transition among the scientific community exemplifies Kuhn's paradigm shift.

Copernican Revolution

A famous example of a revolution in scientific thought is the Copernican Revolution. In Ptolemy's school of thought, cycles and epicycles (with some additional concepts) were used for modeling the movements of the planets in a cosmos that had a stationary Earth at its center. As the accuracy of celestial observations increased, the complexity of the Ptolemaic cyclical and epicyclical mechanisms had to increase to keep the calculated planetary positions close to the observed positions. Copernicus proposed a cosmology in which the Sun was at the center and the Earth was one of the planets revolving around it. For modeling the planetary motions, Copernicus used the tools he was familiar with, namely the cycles and epicycles of the Ptolemaic toolbox. Yet Copernicus' model needed more cycles and epicycles than existed in the then-current Ptolemaic model, and due to a lack of accuracy in calculations, his model did not appear to provide more accurate predictions than the Ptolemaic model. Copernicus' contemporaries rejected his cosmology, and Kuhn asserts that they were quite right to do so: Copernicus' cosmology lacked credibility.

Kuhn illustrates how a paradigm shift later became possible when Galileo Galilei introduced his new ideas concerning motion. Intuitively, when an object is set in motion, it soon comes to a halt. A well-made cart may travel a long distance before it stops, but unless something keeps pushing it, it will eventually stop moving. Aristotle had argued that this was presumably a fundamental property of nature: for the motion of an object to be sustained, it must continue to be pushed. Given the knowledge available at the time, this represented sensible, reasonable thinking.

Galileo put forward a bold alternative conjecture: suppose, he said, that we always observe objects coming to a halt simply because some friction is always occurring. Galileo had no equipment with which to objectively confirm his conjecture, but he suggested that without any friction to slow down an object in motion, its inherent tendency is to maintain its speed without the application of any additional force.

The Ptolemaic approach of using cycles and epicycles was becoming strained: there seemed to be no end to the mushrooming growth in complexity required to account for the observable phenomena. Johannes Kepler was the first person to abandon the tools of the Ptolemaic paradigm. He started to explore the possibility that the planet Mars might have an elliptical orbit rather than a circular one. Clearly, the angular velocity could not be constant, but it proved very difficult to find the formula describing the rate of change of the planet's angular velocity. After many years of calculations, Kepler arrived at what we now know as the law of equal areas.
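In modern notation (not Kepler's own; stated here only to make the result concrete), the law of equal areas says that the radius vector from the Sun to the planet sweeps out area at a constant rate:

    \frac{dA}{dt} = \text{constant}

Since dA/dt = \tfrac{1}{2} r^2 \dot{\theta}, this is equivalent to saying that the planet's angular velocity \dot{\theta} varies inversely with the square of its distance r from the Sun, which is the rate-of-change problem described above.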

Galileo's conjecture was merely that — a conjecture. So was Kepler's cosmology. But each conjecture increased the credibility of the other, and together, they changed the prevailing perceptions of the scientific community. Later, Newton showed that Kepler's three laws could all be derived from a single theory of motion and universal gravitation. Newton solidified and unified the paradigm shift that Galileo and Kepler had initiated.

Coherence

One of the aims of science is to find models that will account for as many observations as possible within a coherent framework. Together, Galileo's rethinking of the nature of motion and Keplerian cosmology represented a coherent framework that was capable of rivaling the Aristotelian/Ptolemaic framework.

Once a paradigm shift has taken place, the textbooks are rewritten. Often the history of science too is rewritten, being presented as an inevitable process leading up to the current, established framework of thought. There is a prevalent belief that all hitherto-unexplained phenomena will in due course be accounted for in terms of this established framework. Kuhn states that scientists spend most (if not all) of their careers in a process of puzzle-solving. Their puzzle-solving is pursued with great tenacity, because the previous successes of the established paradigm tend to generate great confidence that the approach being taken guarantees that a solution to the puzzle exists, even though it may be very hard to find. Kuhn calls this process normal science.

As a paradigm is stretched to its limits, anomalies — failures of the current paradigm to take into account observed phenomena — accumulate. Their significance is judged by the practitioners of the discipline. Some anomalies may be dismissed as errors in observation, others as merely requiring small adjustments to the current paradigm that will be clarified in due course. Some anomalies resolve themselves spontaneously, having increased the available depth of insight along the way. But no matter how great or numerous the anomalies that persist, Kuhn observes, the practicing scientists will not lose faith in the established paradigm until a credible alternative is available; to lose faith in the solvability of the problems would in effect mean ceasing to be a scientist.

In any community of scientists, Kuhn states, there are some individuals who are bolder than most. These scientists, judging that a crisis exists, embark on what Kuhn calls revolutionary science, exploring alternatives to long-held, obvious-seeming assumptions. Occasionally this generates a rival to the established framework of thought. The new candidate paradigm will appear to be accompanied by numerous anomalies, partly because it is still so new and incomplete. The majority of the scientific community will oppose any conceptual change, and, Kuhn emphasizes, so they should. To fulfill its potential, a scientific community needs to contain both individuals who are bold and individuals who are conservative. There are many examples in the history of science in which confidence in the established frame of thought was eventually vindicated. It is almost impossible to predict whether the anomalies in a candidate for a new paradigm will eventually be resolved. Those scientists who possess an exceptional ability to recognize a theory's potential will be the first whose preference is likely to shift in favour of the challenging paradigm. There typically follows a period in which there are adherents of both paradigms. In time, if the challenging paradigm is solidified and unified, it will replace the old paradigm, and a paradigm shift will have occurred.

Phases

Kuhn explains the process of scientific change as the result of various phases of paradigm change.

  • Phase 1 – The pre-paradigm phase, which exists only once, in which there is no consensus on any particular theory. This phase is characterized by several incompatible and incomplete theories. Consequently, most scientific inquiry takes the form of lengthy books, as there is no common body of facts that may be taken for granted. The phase ends when the actors in the pre-paradigm community eventually gravitate to one of these conceptual frameworks and ultimately to a widespread consensus on the appropriate choice of methods, terminology, and the kinds of experiment that are likely to contribute to increased insights.
  • Phase 2 – Normal science begins, in which puzzles are solved within the context of the dominant paradigm. As long as there is consensus within the discipline, normal science continues. Over time, progress in normal science may reveal anomalies, facts that are difficult to explain within the context of the existing paradigm. While usually these anomalies are resolved, in some cases they may accumulate to the point where normal science becomes difficult and where weaknesses in the old paradigm are revealed.
  • Phase 3 – If the paradigm proves chronically unable to account for anomalies, the community enters a crisis period. Crises are often resolved within the context of normal science. However, after significant efforts of normal science within a paradigm fail, science may enter the next phase.
  • Phase 4 – Paradigm shift, or scientific revolution, is the phase in which the underlying assumptions of the field are reexamined and a new paradigm is established.
  • Phase 5 – Post-Revolution, the new paradigm's dominance is established and so scientists return to normal science, solving puzzles within the new paradigm.

A science may go through these cycles repeatedly, though Kuhn notes that it is a good thing for science that such shifts do not occur often or easily.

Incommensurability

According to Kuhn, the scientific paradigms preceding and succeeding a paradigm shift are so different that their theories are incommensurable — the new paradigm cannot be proven or disproven by the rules of the old paradigm, and vice versa. (A later interpretation by Kuhn of 'commensurable' versus 'incommensurable' was as a distinction between languages, namely, that statements in commensurable languages were translatable fully from one to the other, while in incommensurable languages, strict translation is not possible.) The paradigm shift does not merely involve the revision or transformation of an individual theory, it changes the way terminology is defined, how the scientists in that field view their subject, and, perhaps most significantly, what questions are regarded as valid, and what rules are used to determine the truth of a particular theory. The new theories were not, as the scientists had previously thought, just extensions of old theories, but were instead completely new world views. Such incommensurability exists not just before and after a paradigm shift, but in the periods in between conflicting paradigms. It is simply not possible, according to Kuhn, to construct an impartial language that can be used to perform a neutral comparison between conflicting paradigms, because the very terms used are integral to the respective paradigms, and therefore have different connotations in each paradigm. The advocates of mutually exclusive paradigms are in a difficult position: "Though each may hope to convert the other to his way of seeing science and its problems, neither may hope to prove his case. The competition between paradigms is not the sort of battle that can be resolved by proofs. (p. 148)" Scientists subscribing to different paradigms end up talking past one another.

Kuhn states that the probabilistic tools used by verificationists are inherently inadequate for the task of deciding between conflicting theories, since they belong to the very paradigms they seek to compare. Similarly, observations that are intended to falsify a statement will fall under one of the paradigms they are supposed to help compare, and will therefore also be inadequate for the task. According to Kuhn, the concept of falsifiability is unhelpful for understanding why and how science has developed as it has. In the practice of science, scientists will only consider the possibility that a theory has been falsified if an alternative theory is available that they judge credible. If there is not, scientists will continue to adhere to the established conceptual framework. If a paradigm shift has occurred, the textbooks will be rewritten to state that the previous theory has been falsified.

Kuhn further developed his ideas regarding incommensurability in the 1980s and 1990s. In his unpublished manuscript The Plurality of Worlds, Kuhn introduces the theory of kind concepts: sets of interrelated concepts that are characteristic of a time period in a science and differ in structure from the modern analogous kind concepts. These different structures imply different “taxonomies” of things and processes, and this difference in taxonomies constitutes incommensurability. This theory is strongly naturalistic and draws on developmental psychology to “found a quasi-transcendental theory of experience and of reality.”

Exemplar

Kuhn introduced the concept of an exemplar in a postscript to the second edition of The Structure of Scientific Revolutions (1970). He noted that he was substituting the term 'exemplars' for 'paradigm', meaning the problems and solutions that students of a subject learn from the beginning of their education. For example, physicists might have as exemplars the inclined plane, Kepler's laws of planetary motion, or instruments like the calorimeter.

According to Kuhn, scientific practice alternates between periods of normal science and revolutionary science. During periods of normalcy, scientists tend to subscribe to a large body of interconnecting knowledge, methods, and assumptions which make up the reigning paradigm. Normal science presents a series of problems that are solved as scientists explore their field. The solutions to some of these problems become well known and are the exemplars of the field.

Those who study a scientific discipline are expected to know its exemplars. There is no fixed set of exemplars, but for a physicist today it would probably include the harmonic oscillator from mechanics and the hydrogen atom from quantum mechanics.

Kuhn on scientific progress

The first edition of The Structure of Scientific Revolutions ended with a chapter titled "Progress through Revolutions", in which Kuhn spelled out his views on the nature of scientific progress. Since he considered problem solving to be a central element of science, Kuhn saw that for a new candidate paradigm to be accepted by a scientific community, "First, the new candidate must seem to resolve some outstanding and generally recognized problem that can be met in no other way. Second, the new paradigm must promise to preserve a relatively large part of the concrete problem solving ability that has accrued to science through its predecessors." While the new paradigm is rarely as expansive as the old paradigm in its initial stages, it must nevertheless have significant promise for future problem-solving. As a result, though new paradigms seldom or never possess all the capabilities of their predecessors, they usually preserve a great deal of the most concrete parts of past achievement and they always permit additional concrete problem-solutions besides.

In the second edition, Kuhn added a postscript in which he elaborated his ideas on the nature of scientific progress. He described a thought experiment involving an observer who has the opportunity to inspect an assortment of theories, each corresponding to a single stage in a succession of theories. What if the observer is presented with these theories without any explicit indication of their chronological order? Kuhn anticipates that it will be possible to reconstruct their chronology on the basis of the theories' scope and content, because the more recent a theory is, the better it will be as an instrument for solving the kinds of puzzle that scientists aim to solve. Kuhn remarked: "That is not a relativist's position, and it displays the sense in which I am a convinced believer in scientific progress."

Influence and reception

The Structure of Scientific Revolutions has been credited with producing the kind of "paradigm shift" Kuhn discussed. Since the book's publication, over one million copies have been sold, including translations into sixteen different languages. In 1987, it was reported to be the twentieth-century book most frequently cited in the period 1976–1983 in the arts and the humanities.

Philosophy

The first extensive review of The Structure of Scientific Revolutions was authored by Dudley Shapere, a philosopher who interpreted Kuhn's work as a continuation of the anti-positivist sentiment of other philosophers of science, including Paul Feyerabend and Norwood Russell Hanson. Shapere noted the book's influence on the philosophical landscape of the time, calling it “a sustained attack on the prevailing image of scientific change as a linear process of ever-increasing knowledge.” According to the philosopher Michael Ruse, Kuhn discredited the ahistorical and prescriptive approach to the philosophy of science of Ernest Nagel's The Structure of Science (1961). Kuhn's book sparked a historicist "revolt against positivism" (the so-called "historical turn in philosophy of science" which looked to the history of science as a source of data for developing a philosophy of science), although this may not have been Kuhn's intention; in fact, he had already approached the prominent positivist Rudolf Carnap about having his work published in the International Encyclopedia of Unified Science. The philosopher Robert C. Solomon noted that Kuhn's views have often been suggested to have an affinity to those of Georg Wilhelm Friedrich Hegel. Kuhn's view of scientific knowledge, as expounded in The Structure of Scientific Revolutions, has been compared to the views of the philosopher Michel Foucault.

Sociology

The first field to claim descent from Kuhn's ideas was the sociology of scientific knowledge. Sociologists working within this new field, including Harry Collins and Steven Shapin, used Kuhn's emphasis on the role of non-evidential community factors in scientific development to argue against logical empiricism, which discouraged inquiry into the social aspects of scientific communities. These sociologists expanded upon Kuhn's ideas, arguing that scientific judgment is determined by social factors, such as professional interests and political ideologies.

Barry Barnes detailed the connection between the sociology of scientific knowledge and Kuhn in his book T. S. Kuhn and Social Science. In particular, Kuhn's ideas regarding science occurring within an established framework informed Barnes's own ideas regarding finitism, a theory wherein meaning is continuously changed (even during periods of normal science) by its usage within the social framework.

The Structure of Scientific Revolutions elicited a number of reactions from the broader sociological community. Following the book's publication, some sociologists expressed the belief that the field of sociology had not yet developed a unifying paradigm, and should therefore strive towards homogenization. Others argued that the field was in the midst of normal science, and speculated that a new revolution would soon emerge. Some sociologists, including John Urry, doubted that Kuhn's theory, which addressed the development of natural science, was necessarily relevant to sociological development.

Economics

Developments in the field of economics are often expressed and legitimized in Kuhnian terms. For instance, neoclassical economists have claimed “to be at the second stage [normal science], and to have been there for a very long time – since Adam Smith, according to some accounts (Hollander, 1987), or Jevons according to others (Hutchison, 1978).” In the 1970s, Post Keynesian economists denied the coherence of the neoclassical paradigm, claiming that their own paradigm would ultimately become dominant.

While perhaps less explicit, Kuhn's influence remains apparent in recent economics. For instance, the abstract of Olivier Blanchard's paper "The State of Macro" (2008) begins:

For a long while after the explosion of macroeconomics in the 1970s, the field looked like a battlefield. Over time however, largely because facts do not go away, a largely shared vision both of fluctuations and of methodology has emerged. Not everything is fine. Like all revolutions, this one has come with the destruction of some knowledge, and suffers from extremism and herding.

Political science

In 1974, The Structure of Scientific Revolutions was ranked as the second most frequently used book in political science courses focused on scope and methods. In particular, Kuhn's theory has been used by political scientists to critique behavioralism, which claims that accurate political statements must be both testable and falsifiable. The book also proved popular with political scientists embroiled in debates about whether a set of formulations put forth by a political scientist constituted a theory, or something else.

The changes that occur in politics, society and business are often expressed in Kuhnian terms, however poor their parallel with the practice of science may seem to scientists and historians of science. The terms "paradigm" and "paradigm shift" have become such notorious clichés and buzzwords that they are sometimes viewed as effectively devoid of content.

Criticisms

Front cover of Imre Lakatos and Alan Musgrave, ed., Criticism and the Growth of Knowledge

The Structure of Scientific Revolutions was soon criticized by Kuhn's colleagues in the history and philosophy of science. In 1965, a special symposium on the book was held at an International Colloquium on the Philosophy of Science at Bedford College, London, chaired by Karl Popper. The symposium's presentations, together with other essays, most of them critical, were eventually published as the influential volume Criticism and the Growth of Knowledge. Kuhn expressed the opinion that his critics' readings of his book were so inconsistent with his own understanding of it that he was "...tempted to posit the existence of two Thomas Kuhns," one the author of his book, the other the individual criticized in the symposium by "Professors Popper, Feyerabend, Lakatos, Toulmin and Watkins."

A number of the included essays question the existence of normal science. In his essay, Feyerabend suggests that Kuhn's conception of normal science fits organized crime as well as it does science. Popper expresses distaste for the entire premise of Kuhn's book, writing, "the idea of turning for enlightenment concerning the aims of science, and its possible progress, to sociology or to psychology (or ... to the history of science) is surprising and disappointing."

Concept of paradigm

In his 1972 work Human Understanding, Stephen Toulmin argued that a more realistic picture of science than the one presented in The Structure of Scientific Revolutions would admit that revisions in science take place far more frequently, and are far less dramatic, than the revolution/normal-science model can explain. In Toulmin's view, such revisions occur quite often during periods of what Kuhn would call "normal science." For Kuhn to explain such revisions in terms of the non-paradigmatic puzzle solutions of normal science, he would need to draw what is perhaps an implausibly sharp distinction between paradigmatic and non-paradigmatic science.

Incommensurability of paradigms

In a series of texts published in the early 1970s, Carl R. Kordig asserted a position somewhere between that of Kuhn and the older philosophy of science. His criticism of the Kuhnian position was that the incommensurability thesis was too radical, and that this made it impossible to explain the confrontation of scientific theories that actually occurs. According to Kordig, it is in fact possible to admit the existence of revolutions and paradigm shifts in science while still recognizing that theories belonging to different paradigms can be compared and confronted on the plane of observation. Those who accept the incommensurability thesis do not do so because they admit the discontinuity of paradigms, but because they attribute a radical change in meanings to such shifts.

Kordig maintains that there is a common observational plane. For example, when Kepler and Tycho Brahe are trying to explain the relative variation of the distance of the sun from the horizon at sunrise, both see the same thing (the same configuration is focused on the retina of each individual). This is just one example of the fact that "rival scientific theories share some observations, and therefore some meanings." Kordig suggests that with this approach, he is not reintroducing the distinction between observations and theory in which the former is assigned a privileged and neutral status, but that it is possible to affirm more simply the fact that, even if no sharp distinction exists between theory and observations, this does not imply that there are no comprehensible differences at the two extremes of this polarity.

At a secondary level, for Kordig there is a common plane of inter-paradigmatic standards or shared norms that permit the effective confrontation of rival theories.

In 1973, Hartry Field published an article that also sharply criticized Kuhn's idea of incommensurability. In particular, he took issue with this passage from Kuhn:

Newtonian mass is immutably conserved; that of Einstein is convertible into energy. Only at very low relative velocities can the two masses be measured in the same way, and even then they must not be conceived as if they were the same thing. (Kuhn 1970).

Field takes this idea of incommensurability between the same terms in different theories one step further. Instead of attempting to identify a persistence of the reference of terms in different theories, Field's analysis emphasizes the indeterminacy of reference within individual theories. Field takes the example of the term "mass", and asks what exactly "mass" means in modern post-relativistic physics. He finds that there are at least two different definitions:

  1. Relativistic mass: the mass of a particle is equal to the total energy of the particle divided by the speed of light squared. Since the total energy of a particle in relation to one system of reference differs from the total energy in relation to other systems of reference, while the speed of light remains constant in all systems, it follows that the mass of a particle has different values in different systems of reference.
  2. "Real" mass: the mass of a particle is equal to the non-kinetic energy of a particle divided by the speed of light squared. Since non-kinetic energy is the same in all systems of reference, and the same is true of light, it follows that the mass of a particle has the same value in all systems of reference.

Projecting this distinction backwards in time onto Newtonian dynamics, we can formulate the following two hypotheses:

  • HR: the term "mass" in Newtonian theory denotes relativistic mass.
  • Hp: the term "mass" in Newtonian theory denotes "real" mass.

According to Field, it is impossible to decide which of these two affirmations is true. Prior to the theory of relativity, the term "mass" was referentially indeterminate. This is not to deny that "mass" may have meant something different then than it means now; the problem is one of reference, not of meaning. The reference of such terms as "mass" is only partially determined: we do not really know how Newton intended his use of the term to be applied. As a consequence, "mass" fully denotes (refers to) neither. It follows that it is improper to maintain that a term has changed its reference during a scientific revolution; it is more appropriate to describe terms such as "mass" as "having undergone a denotational refinement."

In 1974, Donald Davidson objected that the concept of incommensurable scientific paradigms competing with each other is logically inconsistent. In his article, Davidson goes well beyond the semantic version of the incommensurability thesis: to make sense of the idea of a language independent of translation requires a distinction between conceptual schemes and the content organized by such schemes. But, Davidson argues, no coherent sense can be made of the idea of a conceptual scheme, and therefore no sense may be attached to the idea of an untranslatable language.

Incommensurability and perception

The close connection between the interpretationalist hypothesis and a holistic conception of beliefs is at the root of the notion that perception depends on theory, a central concept in The Structure of Scientific Revolutions. Kuhn maintained that the perception of the world depends on how the percipient conceives the world: two scientists who witness the same phenomenon while steeped in two radically different theories will see two different things. On this view, our interpretation of the world determines what we see.

Jerry Fodor attempts to establish that this theoretical paradigm is fallacious and misleading by demonstrating the impenetrability of perception to the background knowledge of subjects. The strongest case can be based on evidence from experimental cognitive psychology, namely the persistence of perceptual illusions. Knowing that the lines in the Müller-Lyer illusion are equal does not prevent one from continuing to see one line as being longer than the other. This impenetrability of the information elaborated by the mental modules limits the scope of interpretationalism.

In epistemology, for example, this criticism of the interpretationalist hypothesis accounts for the common-sense intuition, on which naïve physics is based, that reality is independent of the conceptual categories of the experimenter. If the processes of elaboration carried out by the mental modules are indeed independent of background theories, then it is possible to maintain the realist view that two scientists who embrace radically different theories see the world in exactly the same manner, even if they interpret it differently. The point is that one must distinguish between observation and the perceptual fixation of belief: while the second process undoubtedly involves a holistic relationship among beliefs, the first is largely independent of the background beliefs of individuals.

Other critics, such as Israel Scheffler, Hilary Putnam and Saul Kripke, have focused on the Fregean distinction between sense and reference in order to defend scientific realism. Scheffler contends that Kuhn confuses the meanings of terms such as "mass" with their referents. While their meanings may very well differ, their referents (the objects or entities to which they correspond in the external world) remain fixed.

Subsequent commentary by Kuhn

In 1995 Kuhn argued that the Darwinian metaphor in the book should have been taken more seriously than it had been.

