Thursday, November 24, 2022

Post-Keynesian economics

From Wikipedia, the free encyclopedia

Post-Keynesian economics is a school of economic thought with its origins in The General Theory of John Maynard Keynes, with subsequent development influenced to a large degree by Michał Kalecki, Joan Robinson, Nicholas Kaldor, Sidney Weintraub, Paul Davidson, Piero Sraffa and Jan Kregel. Historian Robert Skidelsky argues that the post-Keynesian school has remained closest to the spirit of Keynes' original work. It is a heterodox approach to economics.

Introduction

The term "post-Keynesian" was first used to refer to a distinct school of economic thought by Eichner and Kregel (1975) and by the establishment of the Journal of Post Keynesian Economics in 1978. Prior to 1975, and occasionally in more recent work, post-Keynesian could simply mean economics carried out after 1936, the date of Keynes's General Theory.

Post-Keynesian economists are united in maintaining that Keynes' theory is seriously misrepresented by the two other principal Keynesian schools: neo-Keynesian economics, which was orthodox in the 1950s and 60s, and new Keynesian economics, which together with various strands of neoclassical economics has been dominant in mainstream macroeconomics since the 1980s. Post-Keynesian economics can be seen as an attempt to rebuild economic theory in the light of Keynes' ideas and insights. However, even in the early years, post-Keynesians such as Joan Robinson sought to distance themselves from Keynes, and much current post-Keynesian thought cannot be found in Keynes. Some post-Keynesians took a more progressive view than Keynes himself, with greater emphases on worker-friendly policies and redistribution. Robinson, Paul Davidson and Hyman Minsky emphasized the effects on the economy of practical differences between different types of investments, in contrast to Keynes' more abstract treatment.

The theoretical foundation of post-Keynesian economics is the principle of effective demand: demand matters in the long run as well as the short run, so that a competitive market economy has no natural or automatic tendency towards full employment. Contrary to the views of new Keynesian economists working in the neoclassical tradition, post-Keynesians do not accept that the theoretical basis of the market's failure to provide full employment is rigid or sticky prices or wages. Post-Keynesians typically reject the IS–LM model of John Hicks, which is very influential in neo-Keynesian economics, because they argue that endogenous bank lending matters more than the central bank's money supply in determining the interest rate.

The contribution of post-Keynesian economics has extended beyond the theory of aggregate employment to theories of income distribution, growth, trade and development in which money demand plays a key role, whereas in neoclassical economics these are determined by the forces of technology, preferences and endowment. In the field of monetary theory, post-Keynesian economists were among the first to emphasise that money supply responds to the demand for bank credit, so that a central bank cannot control the quantity of money, but only manage the interest rate by managing the quantity of monetary reserves.

This view has largely been incorporated into mainstream economics and monetary policy, which now targets the interest rate as an instrument, rather than attempting to accurately control the quantity of money. In the field of finance, Hyman Minsky put forward a theory of financial crisis based on financial fragility, which has received renewed attention.

Main features

In 2009 Marc Lavoie listed the main features of post-Keynesian economics:

  • Effective demand
  • Historical and dynamic time

He also lists five auxiliary features:

  • The possible negative impact of flexible prices
  • The monetary production of the economy
  • Fundamental uncertainty
  • Relevant and contemporary microeconomics
  • Pluralism of theories and methods

Strands

There are a number of strands to post-Keynesian theory with different emphases. Joan Robinson regarded Michał Kalecki's theory of effective demand to be superior to Keynes' theories. Kalecki's theory is based on a class division between workers and capitalists and imperfect competition. Robinson also led the critique of the use of aggregate production functions based on homogeneous capital – the Cambridge capital controversy – winning the argument but not the battle. The writings of Piero Sraffa were a significant influence on the post-Keynesian position in this debate, though Sraffa and his neo-Ricardian followers drew more inspiration from David Ricardo than Keynes. Much of Nicholas Kaldor's work was based on the ideas of increasing returns to scale, path dependence, and the key differences between the primary and industrial sectors.

Paul Davidson follows Keynes closely in placing time and uncertainty at the centre of theory, from which flow the nature of money and of a monetary economy. Monetary circuit theory, originally developed in continental Europe, places particular emphasis on the distinctive role of money as means of payment. Each of these strands continues to see further development by later generations of economists.

Modern Monetary Theory is a relatively recent offshoot influenced by the macroeconomic modelling of Wynne Godley and Hyman Minsky's ideas on the labour market, as well as chartalism and functional finance.

Recent work in post-Keynesian economics has attempted to provide micro-foundations for capacity underutilization as a coordination failure, justifying government intervention in the form of aggregate demand stimulus.

Current work

Journals

Much post-Keynesian research is published in the Review of Keynesian Economics (ROKE), the Journal of Post Keynesian Economics (founded by Sidney Weintraub and Paul Davidson), the Cambridge Journal of Economics, the Review of Political Economy, and the Journal of Economic Issues (JEI).

United Kingdom

There is also a United Kingdom academic association, the Post Keynesian Economics Society (PKES). This was previously called the Post Keynesian Economics Study Group (PKSG) but changed its name in 2018. In the UK, post-Keynesian economists can be found at a number of universities.

United States

In the United States, there are several universities with a post-Keynesian bent.

Netherlands

France

Canada

In Canada, post-Keynesians can be found at the University of Ottawa and Laurentian University.

Germany

In Germany, post-Keynesianism is very strong at the Berlin School of Economics and Law and its master's degree course: International Economics [M.A.]. Many German Post-Keynesians are organized in the Forum Macroeconomics and Macroeconomic Policies.

Australia

University of Newcastle

The University of Newcastle in New South Wales, Australia, houses the post-Keynesian think-tank the Centre of Full Employment and Equity (CofFEE).

Disequilibrium macroeconomics

Disequilibrium macroeconomics is a tradition of research centered on the role of disequilibrium in economics. This approach is also known as non-Walrasian theory, equilibrium with rationing, the non-market clearing approach, and non-tâtonnement theory. Early work in the area was done by Don Patinkin, Robert W. Clower, and Axel Leijonhufvud. Their work was formalized into general disequilibrium models, which were very influential in the 1970s. American economists had mostly abandoned these models by the late 1970s, but French economists continued work in the tradition and developed fixprice models.

Macroeconomic disequilibria

In the neoclassical synthesis, equilibrium models were the rule. In these models, unemployment at equilibrium was explained by rigid wages. These models were challenged by Don Patinkin and later disequilibrium theorists. Patinkin argued that unemployment resulted from disequilibrium. Patinkin, Robert W. Clower, and Axel Leijonhufvud focused on the role of disequilibrium, and Clower and Leijonhufvud argued that disequilibrium formed a fundamental part of Keynes's theory and deserved greater attention.

Robert Barro and Herschel Grossman formulated general disequilibrium models, in which individual markets were locked into prices before there was a general equilibrium. These markets produced "false prices", resulting in disequilibrium. Soon after the work of Barro and Grossman, disequilibrium models fell out of favor in the United States, and Barro abandoned Keynesianism and adopted new classical, market-clearing hypotheses. Nevertheless, some leading American economists continued to work with disequilibrium models, for example Franklin M. Fisher at MIT, Richard E. Quandt at Princeton University, and John Roberts at Stanford University.

Disequilibrium and unemployment

Diagram based on Malinvaud's typology of unemployment: curves show equilibrium in the goods and labor markets at given wage and price levels, with Walrasian equilibrium where both markets clear, surrounded by regions of Keynesian unemployment, classical unemployment, repressed inflation, and underconsumption. According to Malinvaud, the economy is usually in a state of either Keynesian unemployment, with excess supply of both goods and labor, or classical unemployment, with excess supply of labor and excess demand for goods.

While disequilibrium economics played only a supporting role in the US, it had a major role in European economics, and indeed a leading role in French-speaking Europe. In France, Jean-Pascal Bénassy (1975) and Yves Younès (1975) studied macroeconomic models with fixed prices. Disequilibrium economics attracted more research as mass unemployment returned to Western Europe in the 1970s. It also influenced European policy discussions, particularly in France and Belgium. European economists such as Edmond Malinvaud and Jacques Drèze expanded on the disequilibrium tradition and worked to explain price rigidity instead of simply assuming it.

Malinvaud used disequilibrium analysis to develop a theory of unemployment. He argued that disequilibrium in the labor and goods markets could lead to rationing of goods and labor, and hence to unemployment. Malinvaud adopted a fixprice framework and argued that pricing would be rigid in modern, industrial economies compared to the relatively flexible pricing of the raw goods that dominate agricultural economies. In Malinvaud's framework, prices are fixed and only quantities adjust. Malinvaud considers an equilibrium state of classical or Keynesian unemployment as most likely. He pays less attention to the case of repressed inflation and considers underconsumption/unemployment a theoretical curiosity. Work in the neoclassical tradition is treated as a special case of Malinvaud's typology, the Walrasian equilibrium, which in Malinvaud's theory is almost impossible to achieve given the nature of industrial pricing. Malinvaud's work provided different policy prescriptions depending on the state of the economy. Under Keynesian unemployment, fiscal policy could shift both the labor and goods curves upwards, leading to higher wages and prices and moving the economy closer to the Walrasian equilibrium. Under classical unemployment, on the other hand, fiscal stimulus would only make matters worse; a policy leading to higher prices and lower wages would be recommended instead.
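The typology above can be sketched as a small classifier: with wages and prices fixed, only quantities adjust, and the regime is read off from the signs of excess demand in the two markets. This is a minimal illustration only; the demand and supply numbers are made up, and boundary cases where exactly one market clears are lumped with equilibrium here.

```python
def regime(goods_demand, goods_supply, labor_demand, labor_supply):
    """Classify the fixprice regime from excess demand in each market."""
    excess_goods = goods_demand - goods_supply
    excess_labor = labor_demand - labor_supply
    if excess_goods < 0 and excess_labor < 0:
        return "Keynesian unemployment"  # excess supply of goods and labor
    if excess_goods > 0 and excess_labor < 0:
        return "classical unemployment"  # excess labor supply, excess goods demand
    if excess_goods > 0 and excess_labor > 0:
        return "repressed inflation"     # excess demand in both markets
    if excess_goods < 0 and excess_labor > 0:
        return "underconsumption"        # the case Malinvaud treats as a curiosity
    return "Walrasian equilibrium"       # both markets clear

print(regime(90, 100, 80, 100))   # → Keynesian unemployment
```

In this sketch, the two unemployment regimes differ only in the sign of excess demand for goods, which is what drives Malinvaud's opposite policy prescriptions for them.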

"Disequilibrium macroeconometrics" was developed by Drèze, Henri Sneessens (1981) and Jean-Paul Lambert (1988). A joint paper by Drèze and Sneessens inspired Drèze and Richard Layard to lead the European Unemployment Program, which estimated a common disequilibrium model in ten countries. The results of that successful effort inspired policy recommendations in Europe for several years.

Disequilibrium extensions of Arrow–Debreu general equilibrium theory

In Belgium, Jacques Drèze defined equilibria with price rigidities and quantity constraints and studied their properties, extending the Arrow–Debreu model of general equilibrium theory in mathematical economics. Introduced in his 1975 paper, a "Drèze equilibrium" occurs when supply (demand) is constrained only when prices are downward (upward) rigid, whereas a preselected commodity (e.g. money) is never rationed. Existence is proved for arbitrary bounds on prices. A joint paper with Pierre Dehez established the existence of Drèze equilibria with no rationing of the demand side. Stanford's John Roberts studied supply-constrained equilibria at competitive prices; similar results were obtained by Jean-Jacques Herings at Tilburg (1987, 1996). Roberts and Herings proved the existence of a continuum of Drèze equilibria. Then Drèze (113) proved existence of equilibria with arbitrarily severe rationing of supply. Next, in a joint paper with Herings and others (132), the generic existence of a continuum of Pareto-ranked supply-constrained equilibria was established for a standard economy with some fixed prices. The multiplicity of equilibria thus formalises a trade-off between inflation and unemployment, comparable to a Phillips curve. Drèze viewed his approach to macroeconomics as examining the macroeconomic consequences of Arrow–Debreu general equilibrium theory with rationing, an approach complementing the often-announced program of providing microfoundations for macroeconomics.

Specific economic sectors

Credit markets

Disequilibrium credit rationing can occur for one of two reasons. In the presence of usury laws, if the equilibrium interest rate on loans is above the legally allowable rate, the market cannot clear and at the maximum allowable rate the quantity of credit demanded will exceed the quantity of credit supplied.

A more subtle source of credit rationing is that higher interest rates can increase the risk of default by the borrower, making the potential lender reluctant to lend at otherwise attractively high interest rates.
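The usury-law case above can be made concrete with a toy linear credit market. The demand and supply schedules and the 5% ceiling below are illustrative assumptions, not empirical values: when the market-clearing rate exceeds the legal maximum, the quantity of credit demanded at the capped rate exceeds the quantity supplied, and the gap is the amount of rationing.

```python
def credit_market(rate, a=100.0, b=400.0, c=20.0, d=600.0):
    demand = a - b * rate    # borrowers demand less credit at higher rates
    supply = c + d * rate    # lenders supply more credit at higher rates
    return demand, supply

equilibrium_rate = (100.0 - 20.0) / (400.0 + 600.0)  # demand = supply at 8%
ceiling = 0.05                                       # legal maximum rate
rate = min(equilibrium_rate, ceiling)                # market is capped at 5%
demand, supply = credit_market(rate)
rationing = max(demand - supply, 0.0)                # unmet demand at the cap
print(demand, supply, rationing)   # → 80.0 50.0 30.0
```

Without the ceiling, the rate would rise to 8% and the rationing term would be zero; the cap prevents the price from doing the clearing.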

Labour markets

Labour markets are prone to particular sources of price rigidity because the item being transacted is people, and laws or social constraints designed to protect those people may hinder market adjustments. Such constraints include restrictions on who or how many people can be laid off and when (which can affect both the number of layoffs and the number of people hired by firms that are concerned by the restrictions), restrictions on the lowering of wages when a firm experiences a decline in the demand for its product, and long-term labor contracts that pre-specify wages.

Spillovers between markets

Disequilibrium in one market can affect demand or supply in other markets. Specifically, if an economic agent is constrained in one market, his supply or demand in another market may be changed from its unconstrained form, termed the notional demand, into a modified form known as effective demand. If this occurs systematically for a large number of market participants, market outcomes in the latter market for prices and quantities transacted (themselves either equilibrium or disequilibrium outcomes) will be affected.

Examples include:

  • If the supply of mortgage credit to potential homebuyers is rationed, this will decrease the demand for newly built houses.
  • If labourers cannot supply all the labor they wish to, they will have constrained income and their demand in the goods market will be lower.
  • If employers cannot hire all the labor they wish to, they cannot produce as much output as they wish to, and supply in the market for their good will be diminished.
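The second example in the list can be sketched numerically: a household's notional goods demand assumes it can sell all the labor it wishes, but if employment is rationed, income is constrained and effective demand in the goods market is lower. The wage, price, and propensity-to-consume figures below are illustrative assumptions.

```python
def goods_demand(hours_worked, wage=20.0, price=5.0, propensity=0.8):
    """Units of the good demanded out of wage income."""
    income = wage * hours_worked
    return propensity * income / price

notional = goods_demand(hours_worked=40)    # desired (unconstrained) labor supply
effective = goods_demand(hours_worked=30)   # employer rations hours to 30
print(notional, effective)   # → 128.0 96.0
```

The spillover is the difference between the two numbers: a constraint originating in the labor market shows up as reduced demand in the goods market.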

Evolutionary economics

From Wikipedia, the free encyclopedia

Evolutionary economics is part of mainstream economics as well as a heterodox school of economic thought that is inspired by evolutionary biology. Much like mainstream economics, it stresses complex interdependencies, competition, growth, structural change, and resource constraints but differs in the approaches which are used to analyze these phenomena.

Evolutionary economics studies the processes that transform the economy, including its firms, institutions, industries, employment, production, trade and growth, from within, through the actions of diverse agents drawing on experience and interactions, using an evolutionary methodology. It analyzes the unleashing of a process of technological and institutional innovation by generating and testing a diversity of ideas that discover and accumulate more survival value, for the costs incurred, than competing alternatives. The evidence suggests that it may be adaptive efficiency that defines economic efficiency. Mainstream economic reasoning begins with the postulates of scarcity and rational agents (that is, agents modeled as maximizing their individual welfare), with the "rational choice" for any agent being a straightforward exercise in mathematical optimization. There has been renewed interest in treating economic systems as evolutionary systems in the developing field of complexity economics.

Evolutionary economics does not take the characteristics of either the objects of choice or of the decision-maker as fixed. Rather, its focus is on the non-equilibrium processes that transform the economy from within and their implications. The processes in turn emerge from actions of diverse agents with bounded rationality who may learn from experience and interactions and whose differences contribute to the change. The subject draws more recently on evolutionary game theory and on the evolutionary methodology of Charles Darwin and the non-equilibrium economics principle of circular and cumulative causation. It is naturalistic in purging earlier notions of economic change as teleological or necessarily improving the human condition.

A different approach is to apply evolutionary psychology principles to economics which is argued to explain problems such as inconsistencies and biases in rational choice theory. Basic economic concepts such as utility may be better viewed as due to preferences that maximized evolutionary fitness in the ancestral environment but not necessarily in the current one.

Predecessors

In the mid-19th century, Karl Marx presented a schema of stages of historical development, by introducing the notion that human nature was not constant and was not determinative of the nature of the social system; on the contrary, he made it a principle that human behavior was a function of the social and economic system in which it occurred.

Marx based his theory of economic development on the premise of developing economic systems; specifically, over the course of history superior economic systems would replace inferior ones. Inferior systems were beset by internal contradictions and inefficiencies that made it impossible for them to survive over the long term. In Marx's scheme, feudalism was replaced by capitalism, which would eventually be superseded by socialism.

At approximately the same time, Charles Darwin developed a general framework for comprehending any process whereby small, random variations could accumulate and predominate over time into large-scale changes that resulted in the emergence of wholly novel forms ("speciation").

This was followed shortly after by the work of the American pragmatic philosophers (Peirce, James, Dewey) and the founding of two new disciplines, psychology and anthropology, both of which were oriented toward cataloging and developing explanatory frameworks for the variety of behavior patterns (both individual and collective) that were becoming increasingly obvious to all systematic observers. The state of the world converged with the state of the evidence to make almost inevitable the development of a more "modern" framework for the analysis of substantive economic issues.

Veblen (1898)

Thorstein Veblen (1898) coined the term "evolutionary economics" in English. He began his career in the midst of this period of intellectual ferment, and as a young scholar came into direct contact with some of the leading figures of the various movements that were to shape the style and substance of the social sciences into the next century and beyond. Veblen saw the need for taking account of cultural variation in his approach; no universal "human nature" could possibly be invoked to explain the variety of norms and behaviors that the new science of anthropology showed to be the rule, rather than the exception. He emphasized the conflict between "industrial" and "pecuniary" or ceremonial values, and this Veblenian dichotomy was interpreted in the hands of later writers as the "ceremonial/instrumental dichotomy" (Hodgson 2004).

Veblen saw that every culture is materially based and dependent on tools and skills to support the "life process", while at the same time, every culture appeared to have a stratified structure of status ("invidious distinctions") that ran entirely contrary to the imperatives of the "instrumental" (read: "technological") aspects of group life. The "ceremonial" was related to the past, and conformed to and supported the tribal legends; "instrumental" was oriented toward the technological imperative to judge value by the ability to control future consequences. The "Veblenian dichotomy" was a specialized variant of the "instrumental theory of value" due to John Dewey, with whom Veblen was to make contact briefly at the University of Chicago.

Arguably the most important works by Veblen include, but are not restricted to, his most famous works (The Theory of the Leisure Class; The Theory of Business Enterprise), but his monograph Imperial Germany and the Industrial Revolution and the 1898 essay "Why is Economics Not an Evolutionary Science?" have both been influential in shaping the research agenda for following generations of social scientists. The Theory of the Leisure Class and The Theory of Business Enterprise together constitute an alternative construction to the neoclassical marginalist theories of consumption and production, respectively.

Both are founded on his dichotomy, which is at its core a valuational principle. The ceremonial patterns of activity are not bound to just any past, but to one that generated a specific set of advantages and prejudices underlying the current institutions. "Instrumental" judgments create benefits according to a new criterion, and are therefore inherently subversive. This line of analysis was more fully and explicitly developed by Clarence E. Ayres of the University of Texas at Austin from the 1920s.

Schumpeter

Joseph A. Schumpeter, who lived in the first half of the 20th century, was the author of the book The Theory of Economic Development (1911, transl. 1934). It is important to note that for the word "development" he used, in his native language, the German word "Entwicklung", which can be translated as development or evolution. The translators of the day used "development", from the French "développement", as opposed to "evolution", as the latter was associated with Darwin. (Schumpeter, in his later writings in English as a professor at Harvard, used the word "evolution".) The current term in common use is economic development.

In this book Schumpeter proposed an idea radical for its time: the evolutionary perspective. He based his theory on the assumption of a usual macroeconomic equilibrium, something like "the normal mode of economic affairs". This equilibrium is perpetually being destroyed by entrepreneurs who try to introduce innovations. A successful introduction of an innovation (i.e. a disruptive technology) disturbs the normal flow of economic life, because it forces some of the already existing technologies and means of production to lose their positions within the economy. His vision and economics inspired many economists who wanted to study how the economy develops, and led to the now powerful International Joseph A. Schumpeter Society.

Later development

A seminal article by Armen Alchian (1950) argued for the adaptive success of firms faced with uncertainty and incomplete information, replacing profit maximization as an appropriate modeling assumption. Milton Friedman proposed that markets act as major selection vehicles: as firms compete, unsuccessful rivals fail to capture an appropriate market share, go bankrupt, and have to exit. The variety of competing firms lies both in their products and in their practices, which are matched against markets. Both products and practices are determined by the routines that firms use: standardized patterns of action implemented constantly. By imitating these routines, firms propagate them and thus establish an inheritance of successful practices. Kenneth Boulding was one of the advocates of evolutionary methods in social science, as is evident from Kenneth Boulding's Evolutionary Perspective. Kenneth Arrow, Ronald Coase and Douglass North are some of the winners of the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel who are known for their sympathy to the field.

More narrowly, the works of Jack Downie and Edith Penrose offer many insights for those thinking about evolution at the level of the firm in an industry.

Nelson and Winter (1982) and after

Richard R. Nelson and Sidney G. Winter's book An Evolutionary Theory of Economic Change (1982, paperback 1985) was a seminal work that marked a renaissance of evolutionary economics. It led to the dissemination of evolutionary ideas among wide strands of economists and was followed by the founding of the International Joseph A. Schumpeter Society, the European Association for Evolutionary Political Economy, the Japan Association for Evolutionary Economics, and the Korean Society for Innovation Management and Economics.

Nelson and Winter focused mostly on the issue of changes in technology and routines, suggesting a framework for their analysis. Evolution and change must be distinguished: prices and quantities change constantly, but that is not evolution; for evolution to take place, there must be something that evolves. Their approach can be compared and contrasted with the population ecology or organizational ecology approach in sociology: see Douma & Schreuder (2013, chapter 11). More recently, Nelson, Dosi, Pyka, Malerba, Winter and other scholars have proposed an update of the state of the art in evolutionary economics.

Pier P. Saviotti identified three key concepts of evolution: variation, selection, and reproduction. The concept of reproduction is often replaced by replication or retention; retention is preferred in evolutionary organization theory. Other related concepts are fitness, adaptation, population, interactions, and environment, relating respectively to selection, learning, population dynamics, economic transactions, and boundary conditions. Nelson and Winter raised two major examples of evolving entities: technologies and organizational routines. Yoshinori Shiozawa listed four entities that evolve: economic behaviors, commodities, technologies, and institutions. The mechanisms that provide selection, generate variation, and establish self-replication must then be identified. A general theory of this evolutionary process has been proposed by Kurt Dopfer, John Foster and Jason Potts as the micro-meso-macro framework.
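The variation, selection, and replication loop described above can be sketched as a toy simulation over firm routines. The routine names, fitness values, and mutation rate below are illustrative assumptions, not taken from Nelson and Winter: routines reproduce in proportion to their fitness (selection and replication), while a small mutation probability reintroduces variation.

```python
import random

def evolve(population, fitness, generations=50, mutation_rate=0.05, seed=0):
    rng = random.Random(seed)          # fixed seed for a reproducible run
    routines = list(fitness)
    for _ in range(generations):
        # selection + replication: routines reproduce in proportion to fitness
        weights = [fitness[r] for r in population]
        population = rng.choices(population, weights=weights, k=len(population))
        # variation: occasionally mutate to a randomly chosen routine
        population = [rng.choice(routines) if rng.random() < mutation_rate else r
                      for r in population]
    return population

fitness = {"routine_A": 1.0, "routine_B": 1.5}       # B is the better-adapted routine
pop = evolve(["routine_A"] * 50 + ["routine_B"] * 50, fitness)
print(pop.count("routine_B"), pop.count("routine_A"))
```

Starting from a 50/50 population, the fitter routine comes to dominate, while mutation keeps a residual diversity in place, which is the sense in which something "evolves" here rather than merely changing.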

If the change occurs constantly in the economy, then some kind of evolutionary process must be in action, and there has been a proposal that this process is Darwinian in nature. Other economists claimed that evolution of human behaviors can be Lamarckian.

Evolutionary economics has developed and ramified into various fields and topics, including technology and economic growth, institutional economics, organization studies, innovation studies, management and policy, and the criticism of mainstream economics.

Economic processes, as part of life processes, are intrinsically evolutionary. From the evolutionary equation that describes life processes, an analytical formula for the main factors of economic processes, such as fixed cost and variable cost, can be derived. The economic return, or competitiveness, of economic entities with different characteristics under different kinds of environment can then be calculated. A change of environment changes the competitiveness of different economic entities and systems; this is the process of evolution of economic systems.

In recent years, evolutionary models have been used to assist decision making in applied settings and find solutions to problems such as optimal product design and service portfolio diversification.

Why does evolution matter in economics?

Evolutionary economics emerged from dissatisfaction with mainstream (neoclassical) economics. Mainstream economics mainly assumes agents that optimize their objective functions, such as the utility function for consumers and profit for firms. Optimization under a budget constraint has a solution if the function is continuous and prices are positive. However, when the number of goods is large, it is often difficult to obtain the bundle of goods that maximizes utility. This is the question of bounded rationality. Herbert A. Simon once stated in Administrative Behavior that the whole content of management science could be reduced to two lines. The same contention applies to economics: most economic behaviors, apart from deliberate plans, are routines that follow the satisficing principle. Evolutionary economics is conceived as an economics of large, complex systems.
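The satisficing principle can be contrasted with full optimization in a few lines. In this sketch the bundles, the utility index, and the aspiration level are illustrative assumptions, not drawn from Simon's text: the satisficer stops at the first option meeting its aspiration, without searching the whole set as an optimizer would.

```python
def satisfice(options, utility, aspiration):
    """Return the first option whose utility meets the aspiration level,
    falling back to the best option seen if none qualifies."""
    for option in options:
        if utility(option) >= aspiration:
            return option
    return max(options, key=utility)

bundles = [(1, 4), (3, 3), (2, 5), (4, 4)]     # (quantity of good 1, good 2)
utility = lambda b: b[0] * b[1]                # a simple multiplicative index

# The satisficer stops at (3, 3) (utility 9 >= 8) without ever evaluating
# the true optimum (4, 4) (utility 16).
print(satisfice(bundles, utility, aspiration=8))   # → (3, 3)
```

With many goods the optimizer's search grows combinatorially, while the satisficer's stopping rule keeps the computation bounded, which is the point of the bounded-rationality argument above.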

Evolutionary psychology

A different approach is to apply evolutionary psychology principles to economics which is argued to explain problems such as inconsistencies and biases in rational choice theory. A basic economic concept such as utility may be better explained in terms of a set of biological preferences that maximized evolutionary fitness in the ancestral environment but not necessarily in the current one. In other words, the preferences for actions/decisions that promise "utility" (e.g. reaching for a piece of cake) were formed in the ancestral environment because of the adaptive advantages of such decisions (e.g. maximizing calorie intake). Loss aversion may be explained as being rational when living at subsistence level where a reduction of resources may have meant death and it thus may have been rational to place a greater value on losses than on gains.

People are sometimes more cooperative and altruistic than predicted by economic theory which may be explained by mechanisms such as reciprocal altruism and group selection for cooperative behavior. An evolutionary approach may also explain differences between groups such as males being less risk-averse than females since males have more variable reproductive success than females. While unsuccessful risk-seeking may limit reproductive success for both sexes, males may potentially increase their reproductive success much more than females from successful risk-seeking. Frequency-dependent selection may explain why people differ in characteristics such as cooperative behavior with cheating becoming an increasingly less successful strategy as the numbers of cheaters increase.

Economic theory is at present characterized by strong disagreements on the correct theory of value, distribution and growth. This also influences attempts to find evolutionary explanations for modern tastes and preferences. For example, an acceptance of the neoclassical theory of value and distribution lies behind the argument that humans have a poor intuitive grasp of the economics of the current environment, which is very different from the ancestral environment. The argument is that the ancestral environment likely had relatively little trade, division of labor, and capital goods; technological change was very slow, wealth differences were much smaller, and possession of many available resources was likely a zero-sum game in which large inequalities were caused by various forms of exploitation. Humans, therefore, may have a poor intuitive understanding of the benefits of free trade (causing calls for protectionism) and of the value of capital goods (making the labor theory of value appealing), and may intuitively undervalue the benefits of technological development. The same acceptance of the neoclassical thesis that demand for labour is a decreasing function of the real wage, and that income differences reflect the different marginal productivities of individual contributions (in labour or savings), lies behind the argument that the persistence of pre-capitalist modes of thinking may explain a tendency to see the number of available jobs as a zero-sum game, with the total number of jobs fixed. This would cause people not to realize that minimum wage laws reduce the number of jobs, or to believe that an increased number of jobs in other nations necessarily decreases the number of jobs in their own nation, as well as to view large income inequality as due to exploitation rather than to individual differences in productivity.
This, it is accordingly argued, may easily lead to poor economic policies, especially since an individual's vote counts for so little that voters have little incentive to study societal economics rather than rely on their intuitions, and since politicians may be reluctant to take a stand against intuitive views that are incorrect but widely held. Most non-neoclassical schools of thought, however, would not judge calls for protectionism to be necessarily mistaken, would not agree that minimum wage laws reduce the number of jobs, and would not reject the basic intuition imperfectly expressed by the labour theory of value and now argued more rigorously by modern Marxian-Sraffian theory (namely, that exploitation is present under capitalism too). They would therefore judge this particular evolutionary argument to depend strictly on a questionable theory of the working of market economies.

Evolution after Unified Growth Theory

The role of evolutionary forces in the process of economic development over the course of human history has been explored in the past few decades. Oded Galor and Omer Moav advanced the hypothesis that evolutionary forces had a significant role in the transition of the world economy from stagnation to growth, highlighting the persistent effects that historical and prehistorical conditions have had on the evolution of the composition of human characteristics during the development process.

Galor and Moav argued that Malthusian pressure determined the size and the composition of the human population. Lineages whose traits were complementary to the economic environment had higher income and therefore higher reproductive success, and the inevitable propagation of these traits fostered the growth process and ultimately contributed to the take-off from an epoch of stagnation to the modern era of sustained growth.

Evolution of predisposition towards child quality

Galor and Moav hypothesize that during the Malthusian epoch, natural selection amplified the prevalence of traits associated with a predisposition towards child quality in the human population, triggering human capital formation, technological progress, the onset of the demographic transition, and the emergence of sustained economic growth.

The testable predictions of this evolutionary theory and its underlying mechanisms have been confirmed empirically and quantitatively. Specifically, a genealogical record of half a million people in Quebec during the period 1608-1800 suggests that moderate fecundity, and hence a tendency towards investment in child quality, was beneficial for long-run reproductive success. This finding reflects the adverse effect of higher fecundity on the marital age of children, their level of education, and the likelihood that they would survive to reproductive age.

Evolution of time preference

Oded Galor and Omer Ozak examine the evolution of time preference in the course of human history. They hypothesize, and establish empirically, that agricultural characteristics favorable to a higher return on agricultural investment in the Malthusian era triggered a process of selection, adaptation, and learning that increased the prevalence of long-term orientation among individuals in society. They further establish that variations in these agricultural characteristics across the globe are associated with contemporary differences in economic and human behavior, such as technological adoption, education, saving, and smoking.

Evolution of loss aversion

Oded Galor and Viacheslav Savitskiy explore the evolutionary foundation of the phenomenon of loss aversion. They theorize, and confirm empirically, that loss aversion reflects an evolutionary process in which humans gradually adapted to climatic shocks and to their asymmetric effects on reproductive success in a period when available resources were very close to subsistence consumption. In particular, they establish that individuals and ethnic groups descended from regions characterized by greater climatic volatility tend to be loss-neutral, whereas those originating in regions where climatic conditions are more spatially correlated tend to be more loss-averse.

Evolution of risk aversion

Oded Galor and Stelios Michalopoulos examine the coevolution of entrepreneurial spirit and the process of long-run economic development. Specifically, they argue that in the early stages of development, risk-tolerant entrepreneurial traits generated an evolutionary advantage, and the rise in the prevalence of this trait amplified the pace of the growth process. However, in advanced stages of development, risk-aversion gained an evolutionary advantage, and contributed to convergence across countries.

Wednesday, November 23, 2022

Cognitive model

From Wikipedia, the free encyclopedia

A cognitive model is an approximation of one or more cognitive processes in humans or other animals for the purposes of comprehension and prediction. There are many types of cognitive models, and they can range from box-and-arrow diagrams to a set of equations to software programs that interact with the same tools that humans use to complete tasks (e.g., computer mouse and keyboard).

Relationship to cognitive architectures

Cognitive models can be developed within or without a cognitive architecture, though the two are not always easily distinguishable. In contrast to cognitive architectures, cognitive models tend to be focused on a single cognitive phenomenon or process (e.g., list learning), how two or more processes interact (e.g., visual search and decision making), or making behavioral predictions for a specific task or tool (e.g., how instituting a new software package will affect productivity). Cognitive architectures tend to be focused on the structural properties of the modeled system, and help constrain the development of cognitive models within the architecture. Likewise, model development helps to inform limitations and shortcomings of the architecture. Some of the most popular architectures for cognitive modeling include ACT-R, Clarion, LIDA, and Soar.

History

Cognitive modeling historically developed within cognitive psychology/cognitive science (including human factors), and has received contributions from the fields of machine learning and artificial intelligence among others.

Box-and-arrow models

A number of key terms are used to describe the processes involved in the perception, storage, and production of speech. Typically, they are used by speech pathologists while treating a child patient. The input signal is the speech signal heard by the child, usually assumed to come from an adult speaker. The output signal is the utterance produced by the child. The unseen psychological events that occur between the arrival of an input signal and the production of speech are the focus of psycholinguistic models. Events that process the input signal are referred to as input processes, whereas events that process the production of speech are referred to as output processes. Some aspects of speech processing are thought to happen online—that is, they occur during the actual perception or production of speech and thus require a share of the attentional resources dedicated to the speech task. Other processes, thought to happen offline, take place as part of the child's background mental processing rather than during the time dedicated to the speech task. In this sense, online processing is sometimes defined as occurring in real-time, whereas offline processing is said to be time-free (Hewlett, 1990).

In box-and-arrow psycholinguistic models, each hypothesized level of representation or processing can be represented in a diagram by a “box,” and the relationships between them by “arrows,” hence the name. Sometimes (as in the models of Smith, 1973, and Menn, 1978, described later in this paper) the arrows represent processes additional to those shown in boxes. Such models make explicit the hypothesized information-processing activities carried out in a particular cognitive function (such as language), in a manner analogous to computer flowcharts that depict the processes and decisions carried out by a computer program. Box-and-arrow models differ widely in the number of unseen psychological processes they describe and thus in the number of boxes they contain.

Some have only one or two boxes between the input and output signals (e.g., Menn, 1978; Smith, 1973), whereas others have multiple boxes representing complex relationships between a number of different information-processing events (e.g., Hewlett, 1990; Hewlett, Gibbon, & Cohen-McKenzie, 1998; Stackhouse & Wells, 1997). The most important box, however, and the source of much ongoing debate, is that representing the underlying representation (or UR). In essence, an underlying representation captures information stored in a child's mind about a word he or she knows and uses. As the following description of several models will illustrate, the nature of this information and thus the type(s) of representation present in the child's knowledge base have captured the attention of researchers for some time. (Elise Baker et al. Psycholinguistic Models of Speech Development and Their Application to Clinical Practice. Journal of Speech, Language, and Hearing Research. June 2001. 44. p 685–702.)

Computational models

A computational model is a mathematical model in computational science that requires extensive computational resources to study the behavior of a complex system by computer simulation. The system under study is often a complex nonlinear system for which simple, intuitive analytical solutions are not readily available. Rather than deriving a mathematical analytical solution to the problem, experimentation with the model is done by changing the parameters of the system in the computer, and studying the differences in the outcome of the experiments. Theories of operation of the model can be derived/deduced from these computational experiments. Examples of common computational models are weather forecasting models, earth simulator models, flight simulator models, molecular protein folding models, and neural network models.

Symbolic

A symbolic model is expressed in characters, usually non-numeric ones, that require translation before they can be used.

Subsymbolic

A cognitive model is subsymbolic if it is made of constituent entities that are not representations in their turn, e.g., pixels, sound images as perceived by the ear, or signal samples; subsymbolic units in neural networks can be considered particular cases of this category.

Hybrid

Hybrid computers are computers that exhibit features of analog computers and digital computers. The digital component normally serves as the controller and provides logical operations, while the analog component normally serves as a solver of differential equations. See more details at hybrid intelligent system.

Dynamical systems

In the traditional computational approach, representations are viewed as static structures of discrete symbols. Cognition takes place by transforming static symbol structures in discrete, sequential steps. Sensory information is transformed into symbolic inputs, which produce symbolic outputs that get transformed into motor outputs. The entire system operates in an ongoing cycle.

What is missing from this traditional view is that human cognition happens continuously and in real time. Breaking down the processes into discrete time steps may not fully capture this behavior. An alternative approach is to define a system with (1) a state of the system at any given time, (2) a behavior, defined as the change over time in overall state, and (3) a state set or state space, representing the totality of overall states the system could be in. The system is distinguished by the fact that a change in any aspect of the system state depends on other aspects of the same or other system states.

A typical dynamical model is formalized by several differential equations that describe how the system's state changes over time. On this account, the explanatory force is carried by the shape of the space of possible trajectories and by the internal and external forces that shape the specific trajectory unfolding over time, rather than by the physical nature of the underlying mechanisms that manifest these dynamics. On this dynamical view, parametric inputs alter the system's intrinsic dynamics, rather than specifying an internal state that describes some external state of affairs.
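A minimal sketch of what such a model looks like in practice, using a two-variable linear system invented for illustration (it stands for no particular cognitive model): the differential equations define a state space, and the trajectory spirals into a point attractor regardless of where it starts.

```python
# Illustrative dynamical model: two coupled differential equations
# integrated with Euler steps. The explanatory object is the trajectory
# through state space, not the mechanism generating it.

def derivatives(x, y):
    # Damped rotation: dx/dt = -0.5*x + y,  dy/dt = -x - 0.5*y
    return -0.5 * x + y, -x - 0.5 * y

x, y, dt = 1.0, 0.0, 0.01
for _ in range(5000):            # integrate 50 time units
    dx, dy = derivatives(x, y)
    x, y = x + dt * dx, y + dt * dy

# By now the state has been pulled into the attractor at the origin.
print(abs(x) < 1e-6 and abs(y) < 1e-6)
```

Changing a parameter (say, the damping coefficient) reshapes the whole field of trajectories, which is what the text means by parametric inputs altering the system's intrinsic dynamics.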

Early dynamical systems

Associative memory

Early work in the application of dynamical systems to cognition can be found in the model of Hopfield networks. These networks were proposed as a model for associative memory. They represent the neural level of memory, modeling systems of around 30 neurons which can be in either an on or off state. By letting the network learn on its own, structure and computational properties naturally arise. Unlike previous models, “memories” can be formed and recalled by inputting a small portion of the entire memory. Time ordering of memories can also be encoded. The behavior of the system is modeled with vectors which can change values, representing different states of the system. This early model was a major step toward a dynamical systems view of human cognition, though many details had yet to be added and more phenomena accounted for.
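The core recall property can be demonstrated in a few lines. The sketch below is a simplified Hopfield-style network (one stored pattern, synchronous updates, illustrative sizes rather than Hopfield's exact setup): Hebbian weights store a ±1 pattern, and the network recovers it from a partially corrupted cue.

```python
import numpy as np

# Minimal associative-memory sketch in the style of a Hopfield network.
rng = np.random.default_rng(0)
n = 30                                   # number of on/off neurons
pattern = rng.choice([-1, 1], size=n)    # one stored "memory"

# Hebbian learning rule: W = p p^T with the self-connections zeroed.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Corrupt a small portion of the memory, then let the network settle.
cue = pattern.copy()
cue[:6] *= -1                            # flip 6 of the 30 units
state = cue
for _ in range(5):                       # synchronous updates until stable
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))    # full memory recalled from a fragment
```

With a single stored pattern the corrupted cue snaps back in one update; with several stored patterns the same rule works until the patterns begin to interfere.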

Language acquisition

By taking into account the evolutionary development of the human nervous system and the similarity of the brain to other organs, Elman proposed that language and cognition should be treated as a dynamical system rather than a digital symbol processor. Neural networks of the type Elman implemented have come to be known as Elman networks. Instead of treating language as a collection of static lexical items and grammar rules that are learned and then used according to fixed rules, the dynamical systems view defines the lexicon as regions of state space within a dynamical system. Grammar is made up of attractors and repellers that constrain movement in the state space. This means that representations are sensitive to context, with mental representations viewed as trajectories through mental space instead of objects that are constructed and remain static. Elman networks were trained with simple sentences to represent grammar as a dynamical system. Once a basic grammar had been learned, the networks could then parse complex sentences by predicting which words would appear next according to the dynamical model.
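The context sensitivity described above is visible even in an untrained Elman-style network. In the toy forward pass below (random illustrative weights, not a trained grammar), the hidden state is fed back as context on the next step, so the network's response to the same final word differs depending on the words that preceded it.

```python
import numpy as np

# Toy Elman-style recurrent network: hidden activations are copied to
# context units and fed back, making representations trajectory-dependent.
rng = np.random.default_rng(1)
vocab, hidden = 4, 5
W_xh = rng.normal(size=(hidden, vocab)) * 0.5   # input  -> hidden
W_hh = rng.normal(size=(hidden, hidden)) * 0.5  # context -> hidden
W_hy = rng.normal(size=(vocab, hidden)) * 0.5   # hidden -> output

def run(sentence):
    """Return the network's output after a sequence of word indices."""
    h = np.zeros(hidden)                        # context starts empty
    for word in sentence:
        x = np.eye(vocab)[word]                 # one-hot input
        h = np.tanh(W_xh @ x + W_hh @ h)        # context units feed back
    return W_hy @ h

# The same final words yield different outputs in different contexts:
out_a = run([0, 1, 2])
out_b = run([3, 1, 2])
print(np.allclose(out_a, out_b))
```

The two outputs differ because the representation of each word is a point along a trajectory through state space, not a fixed lexical object.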

Cognitive development

A classic developmental error has been investigated in the context of dynamical systems: The A-not-B error is proposed to be not a distinct error occurring at a specific age (8 to 10 months), but a feature of a dynamic learning process that is also present in older children. Children 2 years old were found to make an error similar to the A-not-B error when searching for toys hidden in a sandbox. After observing the toy being hidden in location A and repeatedly searching for it there, the 2-year-olds were shown a toy hidden in a new location B. When they looked for the toy, they searched in locations that were biased toward location A. This suggests that there is an ongoing representation of the toy's location that changes over time. The child's past behavior influences its model of locations of the sandbox, and so an account of behavior and learning must take into account how the system of the sandbox and the child's past actions is changing over time.

Locomotion

One proposed mechanism of a dynamical system comes from analysis of continuous-time recurrent neural networks (CTRNNs). By focusing on the output of the neural networks rather than their states and examining fully interconnected networks, a three-neuron central pattern generator (CPG) can be used to represent systems such as leg movements during walking. This CPG contains three motor neurons to control the foot, backward swing, and forward swing effectors of the leg. Outputs of the network represent whether the foot is up or down and how much force is being applied to generate torque in the leg joint. One feature of this pattern is that neuron outputs are either off or on most of the time. Another feature is that the states are quasi-stable, meaning that they will eventually transition to other states. A simple pattern generator circuit like this is proposed to be a building block for a dynamical system. Sets of neurons that simultaneously transition from one quasi-stable state to another are defined as a dynamic module. These modules can in theory be combined to create larger circuits that comprise a complete dynamical system. However, the details of how this combination could occur are not fully worked out.
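A single CTRNN neuron already shows the on/off, quasi-stable behavior the text describes. The sketch below uses the standard CTRNN equation with made-up parameters (one self-connected neuron rather than the full three-neuron CPG): under excitatory drive its output saturates near 1, and under inhibitory drive near 0.

```python
import math

# One self-connected CTRNN neuron with illustrative parameters.
# Dynamics: tau * dy/dt = -y + w * sigma(y + theta) + I

def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))

def settle(I, w=5.0, theta=-2.5, tau=1.0, dt=0.05, steps=2000):
    """Euler-integrate the neuron under constant external input I."""
    y = 0.0
    for _ in range(steps):
        y += (dt / tau) * (-y + w * sigma(y + theta) + I)
    return sigma(y + theta)      # the neuron's firing-rate output

on = settle(I=2.0)    # excitatory drive: output saturates near 1 ("foot down")
off = settle(I=-2.0)  # inhibitory drive: output saturates near 0 ("foot up")
print(round(on, 2), round(off, 2))
```

Coupling three such neurons with suitable weights yields the alternating quasi-stable states of the walking CPG; the single-neuron version just isolates the saturating mechanism.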

Modern dynamical systems

Behavioral dynamics

Modern formalizations of dynamical systems applied to the study of cognition vary. One such formalization, referred to as “behavioral dynamics”, treats the agent and the environment as a pair of coupled dynamical systems based on classical dynamical systems theory. In this formalization, information from the environment informs the agent's behavior, and the agent's actions modify the environment. In the specific case of perception-action cycles, the coupling of the environment and the agent is formalized by two functions. The first transforms the representation of the agent's actions into specific patterns of muscle activation that in turn produce forces in the environment. The second function transforms the information from the environment (i.e., patterns of stimulation at the agent's receptors that reflect the environment's current state) into a representation that is useful for controlling the agent's actions. Other similar dynamical systems have been proposed (although not developed into a formal framework) in which the agent's nervous system, the agent's body, and the environment are coupled together.

Adaptive behaviors

Behavioral dynamics have been applied to locomotive behavior. Modeling locomotion with behavioral dynamics demonstrates that adaptive behaviors can arise from the interactions of an agent and its environment. According to this framework, adaptive behaviors can be captured at two levels of analysis. At the first level, that of perception and action, an agent and an environment can be conceptualized as a pair of dynamical systems coupled together by the forces the agent applies to the environment and by the structured information the environment provides. Thus, behavioral dynamics emerge from the agent-environment interaction. At the second level, that of time evolution, behavior can be expressed as a dynamical system represented as a vector field. In this vector field, attractors reflect stable behavioral solutions, whereas bifurcations reflect changes in behavior. In contrast to previous work on central pattern generators, this framework suggests that stable behavioral patterns are an emergent, self-organizing property of the agent-environment system rather than being determined by the structure of either the agent or the environment.

Open dynamical systems

In an extension of classical dynamical systems theory, rather than coupling the environment's and the agent's dynamical systems to each other, an “open dynamical system” defines a “total system”, an “agent system”, and a mechanism to relate these two systems. The total system is a dynamical system that models an agent in an environment, whereas the agent system is a dynamical system that models an agent's intrinsic dynamics (i.e., the agent's dynamics in the absence of an environment). Importantly, the relation mechanism does not couple the two systems together, but rather continuously modifies the total system into the decoupled agent system. By distinguishing between total and agent systems, it is possible to investigate an agent's behavior both when it is isolated from the environment and when it is embedded within an environment. This formalization can be seen as a generalization of the classical formalization: the isolated agent corresponds to the agent system of an open dynamical system, and the agent coupled to its environment corresponds to the total system.

Embodied cognition

In the context of dynamical systems and embodied cognition, representations can be conceptualized as indicators or mediators. In the indicator view, internal states carry information about the existence of an object in the environment, where the state of a system during exposure to an object is the representation of that object. In the mediator view, internal states carry information about the environment which is used by the system in obtaining its goals. In this more complex account, the states of the system carry information that mediates between the information the agent takes in from the environment and the force exerted on the environment by the agent's behavior. The application of open dynamical systems has been discussed for four types of classical embodied cognition examples:

  1. Instances where the environment and agent must work together to achieve a goal, referred to as "intimacy". A classic example of intimacy is the behavior of simple agents working to achieve a goal (e.g., insects traversing the environment). The successful completion of the goal relies fully on the coupling of the agent to the environment.
  2. Instances where the use of external artifacts improves the performance of tasks relative to performance without these artifacts. The process is referred to as "offloading". A classic example of offloading is the behavior of Scrabble players; people are able to create more words when playing Scrabble if they have the tiles in front of them and are allowed to physically manipulate their arrangement. In this example, the Scrabble tiles allow the agent to offload working memory demands on to the tiles themselves.
  3. Instances where a functionally equivalent external artifact replaces functions that are normally performed internally by the agent, which is a special case of offloading. One famous example is that of human (specifically the agents Otto and Inga) navigation in a complex environment with or without assistance of an artifact.
  4. Instances where there is not a single agent. The individual agent is part of larger system that contains multiple agents and multiple artifacts. One famous example, formulated by Ed Hutchins in his book Cognition in the Wild, is that of navigating a naval ship.

The interpretations of these examples rely on the following logic: (1) the total system captures embodiment; (2) one or more agent systems capture the intrinsic dynamics of individual agents; (3) the complete behavior of an agent can be understood as a change to the agent's intrinsic dynamics in relation to its situation in the environment; and (4) the paths of an open dynamical system can be interpreted as representational processes. These embodied cognition examples show the importance of studying the emergent dynamics of agent-environment systems, as well as the intrinsic dynamics of agent systems. Rather than being at odds with traditional cognitive science approaches, dynamical systems are a natural extension of these methods and should be studied in parallel rather than in competition.

Hypothesis

From Wikipedia, the free encyclopedia
 
The hypothesis of Andreas Cellarius, showing the planetary motions in eccentric and epicyclical orbits.

A hypothesis (plural hypotheses) is a proposed explanation for a phenomenon. For a hypothesis to be a scientific hypothesis, the scientific method requires that one can test it. Scientists generally base scientific hypotheses on previous observations that cannot satisfactorily be explained with the available scientific theories. Even though the words "hypothesis" and "theory" are often used interchangeably, a scientific hypothesis is not the same as a scientific theory. A working hypothesis is a provisionally accepted hypothesis proposed for further research in a process beginning with an educated guess or thought.

A different meaning of the term hypothesis is used in formal logic, to denote the antecedent of a proposition; thus in the proposition "If P, then Q", P denotes the hypothesis (or antecedent); Q can be called a consequent. P is the assumption in a (possibly counterfactual) What If question.

The adjective hypothetical, meaning "having the nature of a hypothesis", or "being assumed to exist as an immediate consequence of a hypothesis", can refer to any of these meanings of the term "hypothesis".

Uses

In its ancient usage, hypothesis referred to a summary of the plot of a classical drama. The English word hypothesis comes from the ancient Greek word ὑπόθεσις hypothesis whose literal or etymological sense is "putting or placing under" and hence in extended use has many other meanings including "supposition".

In Plato's Meno (86e–87b), Socrates dissects virtue with a method used by mathematicians, that of "investigating from a hypothesis." In this sense, 'hypothesis' refers to a clever idea or to a convenient mathematical approach that simplifies cumbersome calculations. Cardinal Bellarmine gave a famous example of this usage in the warning issued to Galileo in the early 17th century: that he must not treat the motion of the Earth as a reality, but merely as a hypothesis.

In common usage in the 21st century, a hypothesis refers to a provisional idea whose merit requires evaluation. For proper evaluation, the framer of a hypothesis needs to define specifics in operational terms. A hypothesis requires more work by the researcher in order to either confirm or disprove it. In due course, a confirmed hypothesis may become part of a theory or occasionally may grow to become a theory itself. Normally, scientific hypotheses have the form of a mathematical model. Sometimes, but not always, one can also formulate them as existential statements, stating that some particular instance of the phenomenon under examination has some characteristic, or as causal explanations, which have the general form of universal statements, stating that every instance of the phenomenon has a particular characteristic.

In entrepreneurial science, a hypothesis is used to formulate provisional ideas within a business setting. The formulated hypothesis is then evaluated where either the hypothesis is proven to be "true" or "false" through a verifiability- or falsifiability-oriented experiment.

Any useful hypothesis will enable predictions by reasoning (including deductive reasoning). It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction may also invoke statistics and only talk about probabilities. Karl Popper, following others, has argued that a hypothesis must be falsifiable, and that one cannot regard a proposition or theory as scientific if it does not admit the possibility of being shown false. Other philosophers of science have rejected the criterion of falsifiability or supplemented it with other criteria, such as verifiability (e.g., verificationism) or coherence (e.g., confirmation holism). The scientific method involves experimentation to test the ability of some hypothesis to adequately answer the question under investigation. In contrast, unfettered observation is not as likely to raise unexplained issues or open questions in science, as would the formulation of a crucial experiment to test the hypothesis. A thought experiment might also be used to test the hypothesis as well.

In framing a hypothesis, the investigator must not currently know the outcome of a test, and the outcome must remain reasonably open under continuing investigation. Only in such cases does the experiment, test or study potentially increase the probability of showing the truth of a hypothesis. If the researcher already knows the outcome, it counts as a "consequence" and should have already been considered while formulating the hypothesis. If the predictions cannot be assessed by observation or by experience, the hypothesis needs to be tested by others providing observations. For example, a new technology or theory might make the necessary experiments feasible.

Scientific hypothesis

People refer to a trial solution to a problem as a hypothesis, often called an "educated guess" because it provides a suggested outcome based on the evidence. However, some scientists reject the term "educated guess" as incorrect. Experimenters may test and reject several hypotheses before solving the problem.

According to Schick and Vaughn, researchers weighing up alternative hypotheses may take into consideration:

  • Testability (compare falsifiability as discussed above)
  • Parsimony (as in the application of "Occam's razor", discouraging the postulation of excessive numbers of entities)
  • Scope – the apparent application of the hypothesis to multiple cases of phenomena
  • Fruitfulness – the prospect that a hypothesis may explain further phenomena in the future
  • Conservatism – the degree of "fit" with existing recognized knowledge-systems.

Working hypothesis

A working hypothesis is a hypothesis that is provisionally accepted as a basis for further research in the hope that a tenable theory will be produced, even if the hypothesis ultimately fails. Like all hypotheses, a working hypothesis is constructed as a statement of expectations, which can be linked to the exploratory research purpose in empirical investigation. Working hypotheses are often used as a conceptual framework in qualitative research.

The provisional nature of working hypotheses makes them useful as an organizing device in applied research. Here they act like a useful guide to address problems that are still in a formative phase.

In recent years, philosophers of science have tried to integrate the various approaches to evaluating hypotheses, and the scientific method in general, to form a more complete system that integrates the individual concerns of each approach. Notably, Imre Lakatos and Paul Feyerabend, Karl Popper's colleague and student, respectively, have produced novel attempts at such a synthesis.

Hypotheses, concepts and measurement

Concepts in Hempel's deductive-nomological model play a key role in the development and testing of hypotheses. Most formal hypotheses connect concepts by specifying the expected relationships between propositions. When a set of hypotheses are grouped together, they become a type of conceptual framework. When a conceptual framework is complex and incorporates causality or explanation, it is generally referred to as a theory. According to noted philosopher of science Carl Gustav Hempel, "An adequate empirical interpretation turns a theoretical system into a testable theory: The hypothesis whose constituent terms have been interpreted become capable of test by reference to observable phenomena. Frequently the interpreted hypothesis will be derivative hypotheses of the theory; but their confirmation or disconfirmation by empirical data will then immediately strengthen or weaken also the primitive hypotheses from which they were derived."

Hempel provides a useful metaphor that describes the relationship between a conceptual framework and the framework as it is observed and perhaps tested (interpreted framework). "The whole system floats, as it were, above the plane of observation and is anchored to it by rules of interpretation. These might be viewed as strings which are not part of the network but link certain points of the latter with specific places in the plane of observation. By virtue of those interpretative connections, the network can function as a scientific theory." Hypotheses with concepts anchored in the plane of observation are ready to be tested. In "actual scientific practice the process of framing a theoretical structure and of interpreting it are not always sharply separated, since the intended interpretation usually guides the construction of the theoretician." It is, however, "possible and indeed desirable, for the purposes of logical clarification, to separate the two steps conceptually."

Statistical hypothesis testing

When a possible correlation or similar relation between phenomena is investigated, such as whether a proposed remedy is effective in treating a disease, the hypothesis that a relation exists cannot be examined the same way one might examine a proposed new law of nature. In such an investigation, if the tested remedy shows no effect in a few cases, these do not necessarily falsify the hypothesis. Instead, statistical tests are used to determine how likely it is that the overall effect would be observed if the hypothesized relation does not exist. If that likelihood is sufficiently small (e.g., less than 1%), the existence of a relation may be assumed. Otherwise, any observed effect may be due to pure chance.
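The logic of such a test can be sketched with a small, self-contained example (the numbers are hypothetical, not from the article): under the null hypothesis that a remedy has no effect beyond an assumed 50% baseline recovery rate, the exact binomial tail probability gives the chance of seeing a result at least as extreme by pure chance.

```python
from math import comb

def binomial_p_value(successes: int, n: int, p_null: float = 0.5) -> float:
    """Probability, under the null hypothesis, of observing at least
    `successes` successes in `n` independent trials with success rate `p_null`."""
    return sum(comb(n, k) * p_null**k * (1 - p_null)**(n - k)
               for k in range(successes, n + 1))

# Suppose 16 of 20 treated patients improve, against an assumed 50% baseline:
p = binomial_p_value(16, 20, 0.5)
print(round(p, 4))  # → 0.0059
```

A probability this small (well under 1%) is what licenses assuming a relation exists; a few non-responding cases alone would not falsify the hypothesis.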

In statistical hypothesis testing, two hypotheses are compared. These are called the null hypothesis and the alternative hypothesis. The null hypothesis is the hypothesis that states that there is no relation between the phenomena whose relation is under investigation, or at least not of the form given by the alternative hypothesis. The alternative hypothesis, as the name suggests, is the alternative to the null hypothesis: it states that there is some kind of relation. The alternative hypothesis may take several forms, depending on the nature of the hypothesized relation; in particular, it can be two-sided (for example: there is some effect, in a yet unknown direction) or one-sided (the direction of the hypothesized relation, positive or negative, is fixed in advance).
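The one-sided versus two-sided distinction can be illustrated with a sign test against the null hypothesis that the success rate is 0.5 (a hypothetical sketch; the function names and numbers are illustrative):

```python
from math import comb

def tail_prob(k_obs: int, n: int, p: float = 0.5) -> float:
    """P(X >= k_obs) for X ~ Binomial(n, p): the upper-tail probability."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_obs, n + 1))

def sign_test(k_obs: int, n: int, alternative: str = "two-sided") -> float:
    """Sign test against H0: p = 0.5.
    'greater'   — one-sided, H1: p > 0.5 (direction fixed in advance)
    'two-sided' — H1: p != 0.5 (doubled smaller tail, capped at 1;
                  valid here because Binomial(n, 0.5) is symmetric)"""
    upper = tail_prob(k_obs, n)           # P(X >= k_obs)
    lower = 1.0 - tail_prob(k_obs + 1, n)  # P(X <= k_obs)
    if alternative == "greater":
        return upper
    return min(1.0, 2 * min(upper, lower))

# 16 of 20 positive signs:
# sign_test(16, 20, "greater") ≈ 0.0059; sign_test(16, 20) ≈ 0.0118
```

As expected, the two-sided p-value is twice the one-sided one here, since the alternative must cover an effect in either direction.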

Conventional significance levels for testing hypotheses (acceptable probabilities of wrongly rejecting a true null hypothesis) are .10, .05, and .01. The significance level for deciding whether the null hypothesis is rejected and the alternative hypothesis is accepted must be determined in advance, before the observations are collected or inspected. If these criteria are determined later, when the data to be tested are already known, the test is invalid.

The outcome of this procedure also depends on the number of participants (units or sample size) included in the study. For instance, to avoid having a sample size that is too small to reject a false null hypothesis, it is recommended that one specify a sufficient sample size from the beginning. It is also advisable to define small, medium and large effect sizes for each of the important statistical tests used to test the hypotheses.
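The dependence on sample size can be illustrated by estimating statistical power (the probability of rejecting a false null hypothesis) by simulation. This is a hypothetical sketch; the function and parameters are illustrative, not from the article:

```python
import random
from math import comb

def power(n: int, true_rate: float, alpha: float = 0.05,
          p_null: float = 0.5, sims: int = 2000, seed: int = 42) -> float:
    """Estimate by simulation the probability of rejecting H0: rate = p_null
    (one-sided exact binomial test at level `alpha`) when the true success
    rate is `true_rate` and each simulated study has `n` participants."""
    def p_value(k: int) -> float:
        # P(X >= k) under the null Binomial(n, p_null) distribution
        return sum(comb(n, j) * p_null**j * (1 - p_null)**(n - j)
                   for j in range(k, n + 1))
    rng = random.Random(seed)
    rejections = sum(
        p_value(sum(rng.random() < true_rate for _ in range(n))) < alpha
        for _ in range(sims)
    )
    return rejections / sims

# With the same true effect, a larger study rejects the null far more often:
# power(20, 0.7) is modest, while power(100, 0.7) is close to 1.
```

This is why the sample size should be fixed in advance: a study too small for the anticipated effect size is likely to miss a real relation even when one exists.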
