Saturday, December 23, 2023

Macroeconomic model

From Wikipedia, the free encyclopedia
 
A macroeconomic model is an analytical tool designed to describe the operation of the economy of a country or a region. These models are usually designed to examine the comparative statics and dynamics of aggregate quantities such as the total amount of goods and services produced, total income earned, the level of employment of productive resources, and the level of prices.

Macroeconomic models may be logical, mathematical, and/or computational; the different types of macroeconomic models serve different purposes and have different advantages and disadvantages. Macroeconomic models may be used to clarify and illustrate basic theoretical principles; they may be used to test, compare, and quantify different macroeconomic theories; they may be used to produce "what if" scenarios (usually to predict the effects of changes in monetary, fiscal, or other macroeconomic policies); and they may be used to generate economic forecasts. Thus, macroeconomic models are widely used in academia in teaching and research, and are also widely used by international organizations, national governments and larger corporations, as well as by economic consultants and think tanks.

Types

Simple theoretical models

Simple textbook descriptions of the macroeconomy involving a small number of equations or diagrams are often called ‘models’. Examples include the IS-LM model and Mundell–Fleming model of Keynesian macroeconomics, and the Solow model of neoclassical growth theory. These models share several features. They are based on a few equations involving a few variables, which can often be explained with simple diagrams. Many of these models are static, but some are dynamic, describing the economy over many time periods. The variables that appear in these models often represent macroeconomic aggregates (such as GDP or total employment) rather than individual choice variables, and while the equations relating these variables are intended to describe economic decisions, they are not usually derived directly by aggregating models of individual choices. They are simple enough to be used as illustrations of theoretical points in introductory explanations of macroeconomic ideas; but this very simplicity usually makes quantitative application to forecasting, testing, or policy evaluation impossible without substantially augmenting the model's structure.

Empirical forecasting models

In the 1940s and 1950s, as governments began accumulating national income and product accounting data, economists set out to construct quantitative models to describe the dynamics observed in the data. These models estimated the relations between different macroeconomic variables using (mostly linear) time series analysis. Like the simpler theoretical models, these empirical models described relations between aggregate quantities, but many addressed a much finer level of detail (for example, studying the relations between output, employment, investment, and other variables in many different industries). Thus, these models grew to include hundreds or thousands of equations describing the evolution of hundreds or thousands of prices and quantities over time, making computers essential for their solution. While the choice of which variables to include in each equation was partly guided by economic theory (for example, including past income as a determinant of consumption, as suggested by the theory of adaptive expectations), variable inclusion was mostly determined on purely empirical grounds.

Dutch economist Jan Tinbergen developed the first comprehensive national model, which he built for the Netherlands in 1936. He later applied the same modeling structure to the economies of the United States and the United Kingdom. The first global macroeconomic model, Wharton Econometric Forecasting Associates' LINK project, was initiated by Lawrence Klein. The model was cited in 1980 when Klein, like Tinbergen before him, won the Nobel Prize. Large-scale empirical models of this type, including the Wharton model, are still in use today, especially for forecasting purposes.

The Lucas critique of empirical forecasting models

Econometric studies in the first part of the 20th century showed a negative correlation between inflation and unemployment called the Phillips curve. Empirical macroeconomic forecasting models, being based on roughly the same data, had similar implications: they suggested that unemployment could be permanently lowered by permanently increasing inflation. However, in 1968, Milton Friedman and Edmund Phelps argued that this apparent tradeoff was illusory. They claimed that the historical relation between inflation and unemployment was due to the fact that past inflationary episodes had been largely unexpected. They argued that if monetary authorities permanently raised the inflation rate, workers and firms would eventually come to understand this, at which point the economy would return to its previous, higher level of unemployment, but now with higher inflation too. The stagflation of the 1970s appeared to bear out their prediction.
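The Friedman–Phelps argument is commonly summarized by the expectations-augmented Phillips curve; the notation below is the standard textbook one, not drawn from this article:

```latex
% Expectations-augmented Phillips curve (standard textbook notation)
% u_t: unemployment, u^{n}: natural rate, \pi_t: inflation,
% \pi_t^{e}: expected inflation, \alpha > 0
u_t = u^{n} - \alpha \left( \pi_t - \pi_t^{e} \right)
```

Only an inflation surprise (actual inflation above expected inflation) pushes unemployment below the natural rate; once expectations catch up, unemployment returns to the natural rate at the new, higher inflation rate.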

In 1976, Robert Lucas Jr. published an influential paper arguing that the failure of the Phillips curve in the 1970s was just one example of a general problem with empirical forecasting models. He pointed out that such models are derived from observed relationships between various macroeconomic quantities over time, and that these relations differ depending on what macroeconomic policy regime is in place. In the context of the Phillips curve, this means that the relation between inflation and unemployment observed in an economy where inflation has usually been low in the past would differ from the relation observed in an economy where inflation has been high. Furthermore, this means one cannot predict the effects of a new policy regime using an empirical forecasting model based on data from previous periods when that policy regime was not in place. Lucas argued that economists would remain unable to predict the effects of new policies unless they built models based on economic fundamentals (like preferences, technology, and budget constraints) that should be unaffected by policy changes.

Dynamic stochastic general equilibrium models

Partly as a response to the Lucas critique, economists of the 1980s and 1990s began to construct microfounded macroeconomic models based on rational choice, which have come to be called dynamic stochastic general equilibrium (DSGE) models. These models begin by specifying the set of agents active in the economy, such as households, firms, and governments in one or more countries, as well as the preferences, technology, and budget constraint of each one. Each agent is assumed to make an optimal choice, taking into account prices and the strategies of other agents, both in the current period and in the future. Summing up the decisions of the different types of agents, it is possible to find the prices that equate supply with demand in every market. Thus these models embody a type of equilibrium self-consistency: agents choose optimally given the prices, while prices must be consistent with agents’ supplies and demands.

DSGE models often assume that all agents of a given type are identical (i.e. there is a ‘representative household’ and a ‘representative firm’) and can perform perfect calculations that forecast the future correctly on average (which is called rational expectations). However, these are only simplifying assumptions, and are not essential for the DSGE methodology; many DSGE studies aim for greater realism by considering heterogeneous agents or various types of adaptive expectations. Compared with empirical forecasting models, DSGE models typically have fewer variables and equations, mainly because DSGE models are harder to solve, even with the help of computers. Simple theoretical DSGE models, involving only a few variables, have been used to analyze the forces that drive business cycles; this empirical work has given rise to two main competing frameworks called the real business cycle model and the New Keynesian DSGE model. More elaborate DSGE models are used to predict the effects of changes in economic policy and evaluate their impact on social welfare. However, economic forecasting is still largely based on more traditional empirical models, which are still widely believed to achieve greater accuracy in predicting the impact of economic disturbances over time.
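As a sketch of the structure these models share, a minimal real business cycle model can be written as a social planner's problem (a standard textbook formulation, not taken from this article):

```latex
\max_{\{c_t,\; k_{t+1}\}} \; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^{t}\, u(c_t)
\qquad \text{subject to} \qquad
c_t + k_{t+1} = z_t\, k_t^{\alpha} + (1-\delta)\, k_t
```

Here $c_t$ is consumption, $k_t$ the capital stock, $z_t$ a stochastic productivity shock, $\beta$ the discount factor, and $\delta$ the depreciation rate. New Keynesian DSGE models add nominal rigidities such as sticky prices to this core.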

DSGE versus CGE models

A closely related methodology that pre-dates DSGE modeling is computable general equilibrium (CGE) modeling. Like DSGE models, CGE models are often microfounded on assumptions about preferences, technology, and budget constraints. However, CGE models focus mostly on long-run relationships, making them most suited to studying the long-run impact of permanent policies like the tax system or the openness of the economy to international trade. DSGE models instead emphasize the dynamics of the economy over time (often at a quarterly frequency), making them suited for studying business cycles and the cyclical effects of monetary and fiscal policy.

Agent-based computational macroeconomic models

Another modeling methodology that has developed at the same time as DSGE models is agent-based computational economics (ACE), a variety of agent-based modeling. Like the DSGE methodology, ACE seeks to break down aggregate macroeconomic relationships into the microeconomic decisions of individual agents. ACE models also begin by defining the set of agents that make up the economy, and specify the types of interactions individual agents can have with each other or with the market as a whole. Instead of defining the preferences of those agents, ACE models often jump directly to specifying their strategies. Sometimes preferences are specified instead, together with an initial strategy and a learning rule whereby the strategy is adjusted according to its past success. Given these strategies, the interaction of large numbers of individual agents (who may be very heterogeneous) can be simulated on a computer, and the aggregate, macroeconomic relationships that arise from those individual actions can then be studied.

Strengths and weaknesses of DSGE and ACE models

DSGE and ACE models have different advantages and disadvantages due to their different underlying structures. DSGE models may exaggerate individual rationality and foresight, and understate the importance of heterogeneity, since the rational expectations, representative agent case remains the simplest and thus the most common type of DSGE model to solve. Also, unlike ACE models, it may be difficult to study local interactions between individual agents in DSGE models, which instead focus mostly on the way agents interact through aggregate prices. On the other hand, ACE models may exaggerate errors in individual decision-making, since the strategies assumed in ACE models may be very far from optimal choices unless the modeler is very careful. A related issue is that ACE models which start from strategies instead of preferences may remain vulnerable to the Lucas critique: a changed policy regime should generally give rise to changed strategies.

Computable general equilibrium

Computable general equilibrium (CGE) models are a class of economic models that use actual economic data to estimate how an economy might react to changes in policy, technology or other external factors. CGE models are also referred to as AGE (applied general equilibrium) models.

Overview

A CGE model consists of equations describing model variables and a database (usually very detailed) consistent with these model equations. The equations tend to be neoclassical in spirit, often assuming cost-minimizing behaviour by producers, average-cost pricing, and household demands based on optimizing behaviour. However, most CGE models conform only loosely to the theoretical general equilibrium paradigm. For example, they may allow for:

  1. non-market clearing, especially for labour (unemployment) or for commodities (inventories)
  2. imperfect competition (e.g., monopoly pricing)
  3. demands not influenced by price (e.g., government demands)

A CGE model database consists of:

  1. tables of transaction values, showing, for example, the value of coal used by the iron industry. Usually the database is presented as an input-output table or as a social accounting matrix (SAM). In either case, it covers the whole economy of a country (or even the whole world), and distinguishes a number of sectors, commodities, primary factors and perhaps types of households. Sectoral coverage ranges from relatively simple representations of capital, labor and intermediates to highly detailed representations of specific sub-sectors (e.g., the electricity sector in GTAP-Power).
  2. elasticities: dimensionless parameters that capture behavioural response. For example, export demand elasticities specify by how much export volumes might fall if export prices went up. Other elasticities may belong to the constant elasticity of substitution class. Amongst these are Armington elasticities, which show whether products of different countries are close substitutes, and elasticities measuring how easily inputs to production may be substituted for one another. Income elasticity of demand shows how household demands respond to income changes.
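To illustrate how an Armington elasticity works, the sketch below (with made-up numbers; the function name and parameters are illustrative, not from any CGE package) computes how the relative demand for two countries' products responds to a relative price change under CES preferences:

```python
def relative_demand(p1, p2, sigma, share1=0.5, share2=0.5):
    """CES relative demand: q1/q2 = (share1/share2) * (p2/p1)**sigma.

    sigma is the Armington elasticity of substitution: the larger it is,
    the more easily buyers switch between the two countries' products.
    """
    return (share1 / share2) * (p2 / p1) ** sigma

base = relative_demand(1.0, 1.0, sigma=2.0)   # equal prices
after = relative_demand(2.0, 1.0, sigma=2.0)  # country 1 doubles its price
print(after / base)  # relative demand falls to (1/2)**2 = 0.25
```

With a low sigma (products are poor substitutes), the same price change would barely shift demand shares; this is exactly the behavioural response the elasticity parameter is meant to capture.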

CGE models are descended from the input-output models pioneered by Wassily Leontief, but assign a more important role to prices. Thus, where Leontief assumed that, say, a fixed amount of labour was required to produce a ton of iron, a CGE model would normally allow wage levels to (negatively) affect labour demands.

CGE models also derive from the models for planning the economies of poorer countries constructed (usually by a foreign expert) from 1960 onwards. Compared to the Leontief model, development planning models focused more on constraints or shortages of skilled labour, capital, or foreign exchange.

CGE modelling of richer economies descends from Leif Johansen's 1960 MSG model of Norway, and the static model developed by the Cambridge Growth Project in the UK. Both models were pragmatic in flavour, and traced variables through time. The Australian MONASH model is a modern representative of this class. Perhaps the first CGE model similar to those of today was that of Taylor and Black (1974).

CGE models are useful whenever we wish to estimate the effect of changes in one part of the economy upon the rest. For example, a tax on flour might affect bread prices, the CPI, and hence perhaps wages and employment. They have been used widely to analyse trade policy. More recently, CGE has been a popular way to estimate the economic effects of measures to reduce greenhouse gas emissions.

CGE models always contain more variables than equations—so some variables must be set outside the model. These variables are termed exogenous; the remaining variables, determined by the model, are termed endogenous. The choice of which variables are to be exogenous is called the model closure, and may give rise to controversy. For example, some modelers hold employment and the trade balance fixed; others allow these to vary. Variables defining technology, consumer tastes, and government instruments (such as tax rates) are usually exogenous.

Today there are many CGE models of different countries. One of the most well-known CGE models is global: the GTAP model of world trade.

CGE models are useful to model the economies of countries for which time series data are scarce or not relevant (perhaps because of disturbances such as regime changes). Here, strong but reasonable assumptions embedded in the model must replace historical evidence. Thus developing economies are often analysed using CGE models, such as those based on the IFPRI template model.

Comparative-static and dynamic CGE models

Many CGE models are comparative static: they model the reactions of the economy at only one point in time. For policy analysis, results from such a model are often interpreted as showing the reaction of the economy in some future period to one or a few external shocks or policy changes. That is, the results show the difference (usually reported in percent change form) between two alternative future states (with and without the policy shock). The process of adjustment to the new equilibrium, in particular the reallocation of labor and capital across sectors, usually is not explicitly represented in such a model.

In contrast, long-run models focus on adjustments to the underlying resource base when modeling policy changes. This can include dynamic adjustment to the labor supply, adjustments in installed and overall capital stocks, and even adjustment to overall productivity and market structure. There are two broad approaches followed in the policy literature to such long-run adjustment. One involves what is called "comparative steady state" analysis. Under such an approach, long-run or steady-state closure rules are used, under either forward-looking or recursive dynamic behavior, to solve for long-run adjustments.

The alternative approach involves explicit modeling of dynamic adjustment paths. These models can seem more realistic, but are more challenging to construct and solve. They require, for instance, that future changes be predicted for all exogenous variables, not just those affected by a possible policy change. The dynamic elements may arise from partial adjustment processes or from stock/flow accumulation relations: between capital stocks and investment, and between foreign debt and trade deficits. However, there is a potential consistency problem, because the variables that change from one equilibrium solution to the next are not necessarily consistent with each other during the period of change. The modeling of the path of adjustment may involve forward-looking expectations, where agents' expectations depend on the future state of the economy and it is necessary to solve for all periods simultaneously, leading to full multi-period dynamic CGE models. An alternative is recursive dynamics. Recursive-dynamic CGE models are those that can be solved sequentially (one period at a time); they assume that behaviour depends only on current and past states of the economy. Comparative steady-state analysis, in which only a single period is solved for, is a special case of recursive-dynamic modeling over multiple periods.
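A toy illustration of the recursive-dynamic idea (all numbers and functional forms below are hypothetical, chosen only to show the period-by-period structure): each period is solved on its own, and a stock-accumulation rule carries the state forward to the next period.

```python
def simulate(K0=1.0, alpha=0.3, A=1.0, s=0.2, delta=0.05, periods=5):
    """Solve a toy model one period at a time (recursive dynamics).

    Within each period, output Y is determined by the current capital
    stock; investment I = s*Y then updates the stock for the next period.
    Behaviour depends only on current and past states, never on the future,
    so no simultaneous multi-period solve is needed.
    """
    K = [K0]
    for _ in range(periods):
        Y = A * K[-1] ** alpha             # within-period outcome (toy)
        I = s * Y                          # fixed saving rule
        K.append((1 - delta) * K[-1] + I)  # stock/flow accumulation
    return K

path = simulate()
print(path[1])  # capital after one period: 0.95*1 + 0.2*1 ≈ 1.15
```

A forward-looking model would instead make today's investment depend on expected future returns, forcing all periods to be solved simultaneously.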

Techniques

Early CGE models were often solved by a program custom-written for that particular model. Models were expensive to construct and sometimes appeared as a 'black box' to outsiders. Now, most CGE models are formulated and solved using one of the GAMS or GEMPACK software systems. AMPL, Excel and MATLAB are also used. Use of such systems has lowered the cost of entry to CGE modelling; allowed model simulations to be independently replicated; and increased the transparency of the models.

Input–output model

In economics, an input–output model is a quantitative economic model that represents the interdependencies between different sectors of a national economy or different regional economies. Wassily Leontief (1906–1999) is credited with developing this type of analysis and earned the Nobel Prize in Economics for his development of this model.

Origins

François Quesnay had developed a cruder version of this technique, called the Tableau économique, and Léon Walras's work on general equilibrium theory in his Elements of Pure Economics was also a forerunner of Leontief's seminal concept.

Alexander Bogdanov has been credited with originating the concept in a report delivered to the All-Russia Conference on the Scientific Organisation of Labour and Production Processes in January 1921. The approach was also developed by L. N. Kritsman; T. F. Remington has argued that their work provided a link between Quesnay's tableau économique and the subsequent contributions of Vladimir Groman and Vladimir Bazarov to Gosplan's method of material balance planning.

Wassily Leontief's work in the input–output model was influenced by the works of the classical economists Karl Marx and Jean Charles Léonard de Sismondi. Karl Marx's economics provided an early outline involving a set of tables where the economy consisted of two interlinked departments.

Leontief was the first to use a matrix representation of a national (or regional) economy.

Basic derivation

The model depicts inter-industry relationships within an economy, showing how output from one industrial sector may become an input to another industrial sector. In the inter-industry matrix, column entries typically represent inputs to an industrial sector, while row entries represent outputs from a given sector. This format, therefore, shows how dependent each sector is on every other sector, both as a customer of outputs from other sectors and as a supplier of inputs. Sectors may also depend internally on a portion of their own production as delineated by the entries of the matrix diagonal. Each column of the input–output matrix shows the monetary value of inputs to each sector and each row represents the value of each sector's outputs.

Say that we have an economy with n sectors. Each sector produces x_i units of a single homogeneous good. Assume that the jth sector, in order to produce one unit, must use a_ij units from sector i. Furthermore, assume that each sector sells some of its output to other sectors (intermediate output) and some of its output to consumers (final output, or final demand). Call final demand in the ith sector d_i. Then we might write

    x_i = a_i1 x_1 + a_i2 x_2 + … + a_in x_n + d_i,

or total output equals intermediate output plus final output. If we let A be the matrix of coefficients a_ij, x be the vector of total output, and d be the vector of final demand, then our expression for the economy becomes

    x = A x + d,

which after re-writing becomes (I − A) x = d. If the matrix I − A is invertible then this is a linear system of equations with a unique solution x = (I − A)^(−1) d, and so given some final demand vector the required output can be found. Furthermore, if the principal minors of the matrix I − A are all positive (known as the Hawkins–Simon condition), the required output vector is non-negative.

Example

Consider an economy with two goods, A and B. The matrix of coefficients and the final demand is given by

Intuitively, this corresponds to finding the amount of output each sector should produce given that we want 7 units of good A and 4 units of good B. Then solving the system of linear equations derived above gives us
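The coefficient matrix and the solution values do not survive in this copy, so here is a self-contained sketch with hypothetical coefficients (the matrix below is made up; only the final demand of 7 units of A and 4 units of B comes from the text) showing how x = (I − A)^(−1) d is computed for two sectors:

```python
# Hypothetical input-output coefficients: a[i][j] = units of good i
# needed to produce one unit of good j. (Illustrative values only.)
a = [[0.3, 0.2],
     [0.1, 0.4]]
d = [7.0, 4.0]  # final demand: 7 units of good A, 4 units of good B

# Form I - A and invert it explicitly (2x2 case).
m = [[1 - a[0][0], -a[0][1]],
     [-a[1][0], 1 - a[1][1]]]
det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
assert det > 0  # leading principal minors positive: Hawkins-Simon (2x2)

inv = [[m[1][1] / det, -m[0][1] / det],
       [-m[1][0] / det, m[0][0] / det]]

# Required gross outputs x = (I - A)^(-1) d
x = [inv[0][0] * d[0] + inv[0][1] * d[1],
     inv[1][0] * d[0] + inv[1][1] * d[1]]
print(x)  # ≈ [12.5, 8.75] with these illustrative coefficients
```

Note that gross output exceeds final demand in both sectors, because each sector must also produce the intermediate inputs that the other sector (and it itself) consumes.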

Further research

There is extensive literature on these models. The model has been extended to work with non-linear relationships between sectors.[7] There is the Hawkins–Simon condition on producibility. There has been research on disaggregation to clustered inter-industry flows, and on the study of constellations of industries. A great deal of empirical work has been done to identify coefficients, and data has been published for the national economy as well as for regions. The Leontief system can be extended to a model of general equilibrium; it offers a method of decomposing work done at a macro level.

Regional multipliers

While national input–output tables are commonly created by countries' statistics agencies, officially published regional input–output tables are rare. Therefore, economists often use location quotients to create regional multipliers starting from national data. This technique has been criticized because there are several location quotient regionalization techniques, and none are universally superior across all use-cases.
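One common regionalization technique is the simple location quotient; the sketch below uses illustrative numbers and hypothetical function names, not any particular agency's method:

```python
def simple_lq(regional_emp, regional_total, national_emp, national_total):
    """Simple location quotient: the region's employment share in an
    industry relative to the nation's share in that industry."""
    return (regional_emp / regional_total) / (national_emp / national_total)

def regionalize(a_national, lq):
    """Scale a national input coefficient down when the region is
    under-represented in the supplying industry (LQ < 1); never scale up."""
    return a_national * min(1.0, lq)

# Industry X: over-represented regionally -> coefficient kept as-is.
lq_x = simple_lq(20, 100, 300, 3000)   # 0.2 / 0.1 = 2.0
# Industry Y: under-represented -> coefficient halved.
lq_y = simple_lq(5, 100, 300, 3000)    # 0.05 / 0.1 = 0.5

print(regionalize(0.3, lq_x), regionalize(0.3, lq_y))  # 0.3 0.15
```

The asymmetry (coefficients are only scaled down, never up) reflects the assumption that an under-represented industry cannot supply all local demand, so the shortfall is imported; this is one of the modelling choices the criticism above is aimed at.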

Introducing transportation

Transportation is implicit in the notion of inter-industry flows. It is explicitly recognized when transportation is identified as an industry – how much each sector purchases from transportation in order to produce its output. But this is not very satisfactory, because transportation requirements differ depending on industry locations and capacity constraints on regional production. Also, the receiver of goods generally pays the freight cost, and transportation data are often lost because transportation costs are treated as part of the cost of the goods.

Walter Isard and his student, Leon Moses, were quick to see the spatial economy and transportation implications of input–output, and began work in this area in the 1950s, developing the concept of interregional input–output analysis. Take the case of one region versus the rest of the world. If we wish to know something about inter-regional commodity flows, we introduce into the table a column headed "exports" and an "imports" row.

Table: Adding export and import transactions. Rows list economic activities 1 through Z plus an "Imports" row; columns list activities 1 through Z plus "Exports", "Final Demand", and "Total Outputs" columns. (The individual cell values are not reproduced in this copy.)
A more satisfactory way to proceed would be to tie regions together at the industry level. That is, we could identify both intra-region inter-industry transactions and inter-region inter-industry transactions. The problem here is that the table grows quickly.

Input–output is conceptually simple. Its extension to a model of equilibrium in the national economy has been done successfully using high-quality data. One who wishes to work with input–output systems must deal with industry classification, data estimation, and inverting very large, often ill-conditioned matrices. The quality of an input–output model's data and matrices can be improved by modelling activities with digital twins and by optimizing management decisions. Moreover, changes in relative prices are not readily handled by this modelling approach alone. Input–output accounts are part and parcel of a more flexible form of modelling: computable general equilibrium models.

Two additional difficulties are of interest in transportation work. There is the question of substituting one input for another, and there is the question about the stability of coefficients as production increases or decreases. These are intertwined questions. They have to do with the nature of regional production functions.

Technology assumptions

To construct input–output tables from supply and use tables, four principal assumptions can be applied. The choice depends on whether product-by-product or industry-by-industry input–output tables are to be established.

Usefulness

Because the input–output model is fundamentally linear in nature, it lends itself to rapid computation as well as flexibility in computing the effects of changes in demand. Input–output models for different regions can also be linked together to investigate the effects of inter-regional trade, and additional columns can be added to the table to perform environmentally extended input–output analysis (EEIOA). For example, information on fossil fuel inputs to each sector can be used to investigate flows of embodied carbon within and between different economies.
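A minimal sketch of the embodied-emissions calculation (the coefficients, intensities, and demands below are all made-up illustrative numbers): total emissions embodied in final demand are f · (I − A)^(−1) · d, where f holds each sector's direct emissions per unit of gross output.

```python
# Hypothetical 2-sector economy (illustrative values only).
a = [[0.3, 0.2],   # input-output coefficients
     [0.1, 0.4]]
d = [7.0, 4.0]     # final demand by sector
f = [0.5, 0.2]     # direct CO2 per unit of gross output, by sector

# Gross output x solves (I - A) x = d; use Cramer's rule for the 2x2 case.
m = [[1 - a[0][0], -a[0][1]], [-a[1][0], 1 - a[1][1]]]
det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
x = [(m[1][1] * d[0] - m[0][1] * d[1]) / det,
     (m[0][0] * d[1] - m[1][0] * d[0]) / det]

# Emissions embodied in final demand = direct intensity times gross output.
footprint = f[0] * x[0] + f[1] * x[1]
print(footprint)  # ≈ 0.5*12.5 + 0.2*8.75 = 8.0 with these numbers
```

Because gross output includes all intermediate production, this total attributes to final consumers the emissions of the entire upstream supply chain, which is the point of the environmentally extended approach.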

The structure of the input–output model has been incorporated into national accounting in many developed countries, and as such can be used to calculate important measures such as national GDP. Input–output economics has been used to study regional economies within a nation, and as a tool for national and regional economic planning. A main use of input–output analysis is to measure the economic impacts of events as well as public investments or programs, as shown by IMPLAN and the Regional Input–Output Modeling System. It is also used to identify economically related industry clusters and so-called "key" or "target" industries (industries that are most likely to enhance the internal coherence of a specified economy). By linking industrial output to satellite accounts articulating energy use, effluent production, space needs, and so on, input–output analysts have extended the approach's application to a wide variety of uses.

Input–output and socialist planning

The input–output model is one of the major conceptual models for a socialist planned economy. This model involves the direct determination of physical quantities to be produced in each industry, which are used to formulate a consistent economic plan of resource allocation. This method of planning is contrasted with price-directed Lange-model socialism and Soviet-style material balance planning.

In the economy of the Soviet Union, planning was conducted using the method of material balances up until the country's dissolution. The method of material balances was first developed in the 1930s during the Soviet Union's rapid industrialization drive. Input–output planning was never adopted because the material balance system had become entrenched in the Soviet economy, and input–output planning was shunned for ideological reasons. As a result, the benefits of consistent and detailed planning through input–output analysis were never realized in the Soviet-type economies.

Measuring input–output tables

The mathematics of input–output economics is straightforward, but the data requirements are enormous because the expenditures and revenues of each branch of economic activity have to be represented. As a result, not all countries collect the required data and data quality varies, even though a set of standards for the data's collection has been set out by the United Nations through its System of National Accounts (SNA): the most recent standard is the 2008 SNA. Because the data collection and preparation process for the input–output accounts is necessarily labor and computer intensive, input–output tables are often published long after the year in which the data were collected—typically as much as 5–7 years after. Moreover, the economic "snapshot" that the benchmark version of the tables provides of the economy's cross-section is typically taken only once every few years, at best.

However, many developed countries estimate input–output accounts annually and with much greater recency. This is because, while most uses of input–output analysis focus on the matrix of inter-industry exchanges, the actual focus of the analysis from the perspective of most national statistical agencies is the benchmarking of gross domestic product. Input–output tables are therefore an instrumental part of national accounts. As suggested above, the core input–output table reports only intermediate goods and services that are exchanged among industries. But an array of row vectors, typically aligned at the bottom of this matrix, records non-industrial inputs by industry such as payments for labor; indirect business taxes; dividends, interest, and rents; capital consumption allowances (depreciation); other property-type income (such as profits); and purchases from foreign suppliers (imports). At the national level, when these rows are summed (excluding imports), the total is called "gross product originating" or "gross domestic product by industry." Another array of column vectors is called "final demand" or "gross product consumed." This displays columns of spending by households, governments, changes in industry stocks, and industries on investment, as well as net exports. (See also Gross domestic product.) In any case, by employing the results of an economic census, which asks for the sales, payrolls, and material/equipment/service inputs of each establishment, statistical agencies work backward to estimates of industry-level profits and investments, using the input–output matrix as a sort of double-entry accounting framework.

Input–output analysis versus consistency analysis

Despite the clear ability of the input–output model to depict and analyze the dependence of one industry or sector on another, Leontief and others never managed to introduce the full spectrum of dependency relations in a market economy. In 2003, Mohammad Gani, a pupil of Leontief, introduced consistency analysis in his book Foundations of Economic Science, which formally looks exactly like the input–output table but explores the dependency relations in terms of payments and intermediation relations. Consistency analysis explores the consistency of plans of buyers and sellers by decomposing the input–output table into four matrices, each for a different kind of means of payment. It integrates micro and macroeconomics into one model and deals with money in a value-free manner. It deals with the flow of funds via the movement of goods.

Socialist calculation debate


The socialist calculation debate, sometimes known as the economic calculation debate, was a discourse on the subject of how a socialist economy would perform economic calculation given the absence of the law of value, money, financial prices for capital goods and private ownership of the means of production. More specifically, the debate was centered on the application of economic planning for the allocation of the means of production as a substitute for capital markets and whether or not such an arrangement would be superior to capitalism in terms of efficiency and productivity.

The historical debate was cast between the Austrian School, represented by Ludwig von Mises and Friedrich Hayek, who argued against the feasibility of socialism, and neoclassical and Marxian economists, most notably Cläre Tisch (as a forerunner), Oskar R. Lange, Abba P. Lerner, Fred M. Taylor, Henry Douglas Dickinson and Maurice Dobb, who took the position that socialism was both feasible and superior to capitalism. A central aspect of the debate concerned the role and scope of the law of value in a socialist economy. Although contributions to the question of economic coordination and calculation under socialism existed within the socialist movement prior to the 20th century, the phrase socialist calculation debate emerged in the 1920s, beginning with Mises' critique of socialism.

While the debate was popularly viewed as a debate between proponents of capitalism and proponents of socialism, in reality a significant portion of the debate was between socialists who held differing views regarding the utilization of markets and money in a socialist system and to what degree the law of value would continue to operate in a hypothetical socialist economy. Socialists generally held one of three major positions regarding the unit of calculation, including the view that money would continue to be the unit of calculation under socialism; that labor time would be a unit of calculation; or that socialism would be based on calculation in natura or calculation performed in-kind.

Debate among socialists has existed since the emergence of the broader socialist movement between those advocating market socialism, centrally planned economies and decentralized planning. Recent contributions to the debate in the late 20th century and early 21st century involve proposals for market socialism and the use of information technology and distributed networking as a basis for decentralized economic planning.

Foundations and early contributions

Karl Marx and Friedrich Engels held a broad characterization of socialism: some form of public or common ownership of the means of production and workers' self-management within economic enterprises, in which production of economic value for profit would be replaced by ex ante production directly for use. This implied some form of economic planning and planned growth in place of the dynamic of capital accumulation, and therefore the substitution of conscious planning for commodity-based production and market-based allocation of the factors of production.

Although Marx and Engels never elaborated on the specific institutions that would exist in socialism or on processes for conducting planning in a socialist system, their broad characterizations laid the foundation for the general conception of socialism as an economic system devoid of the law of value and law of accumulation and principally where the category of value was replaced by calculation in terms of natural or physical units so that resource allocation, production and distribution would be considered technical affairs to be undertaken by engineers and technical specialists.

An alternative view of socialism prefiguring the neoclassical models of market socialism consisted of conceptions of market socialism based on classical economic theory and Ricardian socialism, where markets were utilized to allocate capital goods among worker-owned cooperatives in a free-market economy. The key characteristics of this system involved direct worker ownership of the means of production through producer and consumer cooperatives and the achievement of genuinely free markets by removing the distorting effects of private property, inequality arising from private appropriation of profits and interest to a rentier class, regulatory capture, and economic exploitation. This view was expounded by mutualism and was severely criticized by Marxists for failing to address the fundamental issues of capitalism involving instability arising from the operation of the law of value, crises caused by overaccumulation of capital and lack of conscious control over the surplus product. This perspective played little to no role during the socialist calculation debate in the early 20th century.

Early arguments against the utilization of central economic planning for a socialist economy were advanced by proponents of decentralized economic planning or market socialism, including Pierre-Joseph Proudhon, Peter Kropotkin and Leon Trotsky. In general, they argued that centralized forms of economic planning that excluded participation by the workers involved in the industries would not be able to capture enough information to coordinate an economy effectively, while also undermining socialism and the concepts of workers' self-management and democratic decision-making central to it. However, none of these thinkers proposed detailed outlines for decentralized economic planning at the time. Socialist market abolitionists in favour of decentralized planning also argue that although advocates of capitalism, and the Austrian School in particular, recognize that equilibrium prices do not exist in reality, they nonetheless claim that market prices can serve as a rational basis for calculation; since this is not the case, markets are not efficient. Other market abolitionist socialists, such as Robin Cox of the Socialist Party of Great Britain, argue that decentralized planning allows a spontaneously self-regulating system of stock control (relying solely on calculation in kind) to come about, which in turn decisively overcomes the objection, raised by the economic calculation argument, that any large-scale economy must necessarily resort to a system of market prices.

Early neoclassical contributions

In the early 20th century, Enrico Barone provided a comprehensive theoretical framework for a planned socialist economy. In his model, assuming perfect computation techniques, simultaneous equations relating inputs and outputs to ratios of equivalence would provide appropriate valuations in order to balance supply and demand.
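Barone's idea of simultaneous equations balancing supply and demand can be sketched in miniature. The two-good economy below is an illustration, not Barone's actual system: with linear demand D(p) = a + Bp and supply S(p) = c + Ep (all coefficients hypothetical), the balancing condition D(p) = S(p) is a linear system in the price vector p:

```python
import numpy as np

# Hypothetical linear demand and supply for two goods:
#   D(p) = a + B @ p,   S(p) = c + E @ p
# Equilibrium D(p) = S(p) reduces to (B - E) @ p = c - a.
a = np.array([100., 80.])
B = np.array([[-2.0,  0.5],
              [ 0.5, -1.5]])
c = np.array([10., 5.])
E = np.eye(2)

# Solve the simultaneous equations for the balancing valuations.
p = np.linalg.solve(B - E, c - a)
print(p)
```

With perfect knowledge of the coefficients and "perfect computation techniques," the solver stands in for Barone's planner; the difficulty the later debate stressed is obtaining those coefficients at the scale of a real economy.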

Proposed units for accounting and calculation

Calculation in kind

Calculation in kind, or calculation in-natura, was often assumed to be the standard form of accounting that would take place in a socialist system where the economy was mobilized in terms of physical or natural units instead of money and financial calculation.

Otto Neurath was adamant that a socialist economy must be moneyless, because measures of money failed to capture adequate information regarding the material well-being of consumers and failed to factor in all the costs and benefits of performing a particular action. He argued that relying on any single unit, whether labor-hours or kilowatt-hours, would be inadequate, and that calculation should instead be performed in the relevant disaggregated natural units, i.e. kilowatts, tons, meters and so on.

In the 1930s, Soviet mathematician Leonid Kantorovich demonstrated how an economy specified purely in physical terms could use a determinate mathematical procedure to determine which combination of techniques should be used to achieve given output or plan targets.
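A minimal sketch of a Kantorovich-style assortment problem, stated entirely in physical units, can make this concrete. The two machines, their output rates, and the plan target below are hypothetical; the exhaustive grid search stands in for Kantorovich's linear-programming procedure:

```python
import numpy as np

# Two machines, each with 1 unit of time, can make part A or part B at
# different rates. The plan target is matched A+B pairs, so the plan
# maximizes min(total A, total B) over the time allocations. No prices
# appear anywhere: the problem is posed and solved in physical units.
rate_A = np.array([10.0, 4.0])   # parts A per unit time, machines 1 and 2
rate_B = np.array([ 5.0, 8.0])   # parts B per unit time

best_pairs, best_t = -1.0, None
grid = np.linspace(0.0, 1.0, 101)   # fraction of each machine's time on A
for t1 in grid:
    for t2 in grid:
        A = rate_A[0] * t1 + rate_A[1] * t2
        B = rate_B[0] * (1 - t1) + rate_B[1] * (1 - t2)
        if min(A, B) > best_pairs:
            best_pairs, best_t = min(A, B), (t1, t2)

print(best_pairs, best_t)
```

The optimum splits machine 1 between the two parts while machine 2 specializes in B, beating naive full specialization; it is exactly this kind of non-obvious mixing of techniques that Kantorovich's method computes.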

Debate on the use of money

In contrast to Neurath, Karl Kautsky argued that money would have to be utilized in a socialist economy. Kautsky held that the fundamental difference between socialism and capitalism is not the absence of money in the former; rather, the important difference is the ability of money to become capital under capitalism. Since in a socialist economy there would be no incentive to use money as financial capital, money would play a somewhat different role under socialism.

Labor-time calculation

Jan Appel drafted a contribution to the socialist calculation debate which then went through a discussion process before being published as Foundations of Communist Production and Distribution by the General Workers' Union of Germany in 1930. An English translation by Mike Baker was published in 1990.

Interwar debate

Economic calculation problem

Ludwig von Mises believed that private ownership of the means of production was essential for a functional economy, arguing:

Every step that takes us away from private ownership of the means of production and from the use of money also takes us away from rational economics.

His argument against socialism was in response to Otto Neurath arguing for the feasibility of central planning. Mises argued that money and market-determined prices for the means of production were essential in order to make rational decisions regarding their allocation and use.

Criticism of the calculation problem

Bryan Caplan, a libertarian economist, has criticized the version of the calculation problem advanced by Mises, which holds that the lack of economic calculation makes socialism impossible and not merely inefficient. While conceding that socialism makes economic calculation impossible, Caplan argues that this problem may not be severe enough to place socialism "beyond the realm of possibility". Caplan also points out that the fall of the Soviet Union does not prove that calculation was the main issue there. He suggests that the problems more likely resulted from bad incentives arising out of the one-party political system and the degree of power granted to the party elite.

Knowledge problem

Proponents of decentralized economic planning have also criticized central economic planning. Leon Trotsky believed that central planners, regardless of their intellectual capacity, operated without the input and participation of the millions of people who participate in the economy and so they would be unable to respond to local conditions quickly enough to effectively coordinate all economic activity. Trotsky argued:

If a universal mind existed, of the kind that projected itself into the scientific fancy of Laplace – a mind that could register simultaneously all the processes of nature and society, that could measure the dynamics of their motion, that could forecast the results of their inter-reactions – such a mind, of course, could a priori draw up a faultless and exhaustive economic plan, beginning with the number of acres of wheat down to the last button for a vest. The bureaucracy often imagines that just such a mind is at its disposal; that is why it so easily frees itself from the control of the market and of Soviet democracy. But, in reality, the bureaucracy errs frightfully in its estimate of its spiritual resources. [...] The innumerable living participants in the economy, state and private, collective and individual, must serve notice of their needs and of their relative strength not only through the statistical determinations of plan commissions but by the direct pressure of supply and demand.

— Leon Trotsky, The Soviet Economy in Danger

Lange model

Oskar Lange responded to Mises' assertion that socialism and social ownership of the means of production implied that rational calculation was impossible by outlining a model of socialism based on neoclassical economics. Lange conceded that calculations would have to be done in value terms rather than using purely natural or engineering criteria, but he asserted that these values could be attained without capital markets and private ownership of the means of production. In Lange's view, this model qualified as socialist because the means of production would be publicly owned with returns to the public enterprises accruing to society as a whole in a social dividend while workers' self-management could be introduced in the public enterprises.

This model came to be referred to as the Lange model. In this model, a Central Planning Board (CPB) would be responsible for setting prices through a trial-and-error approach to establish equilibrium prices, effectively running a Walrasian auction. Managers of the state-owned firms would be instructed to set prices equal to marginal cost (P = MC) so that economic equilibrium and Pareto efficiency would be achieved. The Lange model was expanded upon by Abba Lerner and became known as the Lange–Lerner theorem.
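The CPB's trial-and-error rule can be simulated for a single good. The demand and supply curves below are hypothetical linear functions; the only mechanism taken from the Lange model is the adjustment rule itself, which raises a price under shortage and lowers it under glut:

```python
# Minimal sketch of the Lange model's Walrasian trial-and-error pricing.
def demand(p):
    return 100.0 - 4.0 * p   # hypothetical demand curve

def supply(p):
    return 10.0 + 2.0 * p    # hypothetical supply curve

p, step = 1.0, 0.05          # initial trial price, adjustment speed
for _ in range(1000):
    excess = demand(p) - supply(p)
    p += step * excess       # raise price under shortage, cut under glut

print(round(p, 4))           # settles at the market-clearing price, 15
```

The iteration converges here because the adjustment step is small relative to the slopes of the curves; Hayek's knowledge-problem objection was precisely that a real CPB would not observe excess demand quickly or accurately enough for such a process to track a changing economy.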

Paul Auerbach and Dimitris Sotiropoulos have criticized the Lange model for degrading the definition of socialism to a form of "capitalism without capital markets" attempting to replicate capitalism's efficiency achievements through economic planning. Auerbach and Sotiropoulos argue that Friedrich Hayek provided an analysis of the dynamics of capitalism that is more consistent with Marxian economics' analysis because Hayek viewed finance as a fundamental aspect of capitalism and any move through collective ownership or policy reform to undermine the role of capital markets would threaten the integrity of the capitalist system. According to Auerbach and Sotiropoulos, Hayek gave an unexpected endorsement to socialism that is more sophisticated than Lange's superficial defense of socialism.

Contemporary contributions

Networked digital feedback

Peter Joseph argues for a transition from fragmented economic data relay to fully integrated, sensor-based digital systems, or an Internet of things. In his view, an internet of sensory instruments that measures, tracks and feeds back information could unify numerous disparate elements and systems, greatly advancing awareness and efficiency potentials.

In an economic context, this approach could relay and connect data regarding how best to manage resources, production processes, distribution, consumption, recycling, waste disposal behavior, consumer demand and so on. Such a process of networked economic feedback would work on the same principle as modern systems of inventory and distribution found in major commercial warehouses. Many companies today use a range of sensors and sophisticated tracking means to understand rates of demands, exactly what they have, where it is or where it may be moving and when it is gone. It is ultimately an issue of detail and scalability to extend this kind of awareness to all sectors of the economy, macro and micro.

On this view, not only is price no longer needed to gain critical economic feedback, but the information price communicates is long delayed and incomplete with respect to the economic measures required to dramatically increase efficiency. Mechanisms related to networked digital feedback systems make it possible to efficiently monitor shifting consumer preference, demand, supply and labor value virtually in real time. They can also be used to observe other technical processes that price cannot, such as shifts in production protocols, allocation, recycling methods, and so on. As of February 2018, it is possible to track trillions of economic interactions related to the supply chain and consumer behavior by way of sensors and digital relay, as seen with the advent of Amazon Go.

Cybernetic coordination

Paul Cockshott, Allin Cottrell, and Andy Pollack have proposed new forms of coordination based on modern information technology for non-market socialism. They argue that economic planning in terms of physical units without any reference to money or prices is computationally tractable given the high-performance computers available for particle physics and weather forecasting. Cybernetic planning would involve an a priori simulation of the equilibration process that idealized markets are intended to achieve.
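The kind of computation Cockshott and Cottrell have in mind can be sketched with a Leontief-style plan in physical units. The 3-product technology matrix and net-output targets below are hypothetical; the simple fixed-point iteration is the sort of highly parallelizable arithmetic they argue scales to millions of products:

```python
import numpy as np

# A[i, j] = physical units of product i consumed per unit of product j.
# d = planned net output (final consumption), also in physical units.
# The gross-output plan x must satisfy x = A @ x + d.
A = np.array([[0.10, 0.30, 0.05],
              [0.20, 0.05, 0.10],
              [0.05, 0.20, 0.05]])
d = np.array([50.0, 30.0, 80.0])

# Iterate x <- A x + d; this converges because every product uses
# less than one unit of inputs per unit of output (row sums < 1).
x = d.copy()
for _ in range(200):
    x = A @ x + d

print(x)   # gross outputs needed to deliver the net-output targets
```

The iteration reproduces the direct solution of (I − A)x = d without prices or markets entering the computation; whether the matrix A itself could be kept accurate and current is, of course, the substance of the knowledge-problem objection.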

Participatory economics

Proposals for decentralized economic planning emerged in the late 20th century in the form of participatory economics and negotiated coordination.

Decentralized pricing without markets

David McMullen argues that social ownership of the means of production, and the absence of markets for them, is fully compatible with a decentralized price system. In a post-capitalist society, transactions between enterprises would entail transfers of social property between custodians rather than exchanges of ownership. Individuals would be motivated by the satisfaction of work and the desire to contribute to good economic outcomes rather than by material reward. Bids and offer prices would aim to minimize costs and ensure that output is guided by expected final demand for private and collective consumption. Enterprises and startups would receive their investment funding from project assessment agencies. The required change in human behavior would take a number of generations and would have to overcome considerable resistance. However, McMullen believes that economic and cultural development increasingly favors the transition.

Market socialism

James Yunker argues that public ownership of the means of production can be achieved the same way private ownership is achieved in modern capitalism through the shareholder system that separates management from ownership. Yunker posits that social ownership can be achieved by having a public body, designated the Bureau of Public Ownership (BPO), owning the shares of publicly-listed firms without affecting market-based allocation of capital inputs. Yunker termed this model pragmatic market socialism and argued that it would be at least as efficient as modern-day capitalism while providing superior social outcomes as public ownership of large and established enterprises would enable profits to be distributed among the entire population rather than going largely to a class of inheriting rentiers.

Mechanism design

Beginning in the 1970s, new insights into the socialist calculation debate emerged from mechanism design theory. According to mechanism design theorists, the debate between Hayek and Lange became a stalemate that lasted for forty years because neither side was speaking the same language as the other, partially because the appropriate language for discussing socialist calculation had not yet been invented. According to these theorists, what was needed was a better understanding of the informational problems that prevent coordination between people. By fusing game theory with information economics, mechanism design provided the language and framework in which both socialists and advocates of capitalism could compare the merits of their arguments. As Palda (2013) writes in his summary of the contributions of mechanism design to the socialist calculation debate, "[i]t seemed that socialism and capitalism were good at different things. Socialism suffered from cheating, or 'moral hazard', more than capitalism because it did not allow company managers to own shares in their own companies. [...] The flip side of the cheating problem in socialism is the lying or 'adverse selection' problem in capitalism. If potential firm managers are either good or bad, but telling them apart is difficult, bad prospects will lie to become a part of the firm".

Relation to neoclassical economics

In his book Whither Socialism?, Joseph Stiglitz criticized models of market socialism from the era of the socialist calculation debate in the 1930s as part of a more general criticism of neoclassical general equilibrium theory, proposing that market models be augmented with insights from information economics. Alec Nove and János Kornai held similar positions regarding economic equilibrium. Both Nove and Kornai argued that because perfect equilibrium does not exist, a comprehensive economic plan for production cannot be formulated, making planning ineffective just as real-world market economies do not conform to the hypothetical state of perfect competition. In his book The Economics of Feasible Socialism, Nove also outlined a solution involving a socialist economy consisting of a mixture of macro-economic planning with market-based coordination for enterprises where large industries would be publicly owned and small- to medium-sized concerns would be organized as cooperatively-owned enterprises.
