
Tuesday, January 15, 2019

Mechanism design

From Wikipedia, the free encyclopedia

The Stanley Reiter diagram illustrates a game of mechanism design. The upper-left space $\Theta$ depicts the type space and the upper-right space $X$ the space of outcomes. The social choice function $f(\theta)$ maps a type profile to an outcome. In games of mechanism design, agents send messages $m$ in a game environment $g$. The equilibrium of the resulting message game can be designed to implement some social choice function $f(\theta)$.
 
Mechanism design is a field in economics and game theory that takes an engineering approach to designing economic mechanisms or incentives toward desired objectives in strategic settings where players act rationally. Because it starts at the end of the game and then works backwards, it is also called reverse game theory. It has broad applications, from economics and politics (markets, auctions, voting procedures) to networked systems (internet interdomain routing, sponsored search auctions). 

Mechanism design studies solution concepts for a class of private-information games. Leonid Hurwicz explains that 'in a design problem, the goal function is the main "given", while the mechanism is the unknown. Therefore, the design problem is the "inverse" of traditional economic theory, which is typically devoted to the analysis of the performance of a given mechanism.' So, two distinguishing features of these games are:
  • that a game "designer" chooses the game structure rather than inheriting one,
  • that the designer is interested in the game's outcome.
The 2007 Nobel Memorial Prize in Economic Sciences was awarded to Leonid Hurwicz, Eric Maskin, and Roger Myerson "for having laid the foundations of mechanism design theory".

Intuition

In an interesting class of Bayesian games, one player, called the "principal", would like to condition his behavior on information privately known to other players. For example, the principal would like to know the true quality of a used car a salesman is pitching. He cannot learn anything simply by asking the salesman, because it is in the salesman's interest to distort the truth. However, in mechanism design the principal does have one advantage: he may design a game whose rules can influence others to act the way he would like.

Without mechanism design theory, the principal's problem would be difficult to solve. He would have to consider all the possible games and choose the one that best influences other players' tactics. In addition, the principal would have to draw conclusions from agents who may lie to him. Thanks to mechanism design, and particularly the revelation principle, the principal only needs to consider games in which agents truthfully report their private information.

Foundations

Mechanism

A game of mechanism design is a game of private information in which one of the agents, called the principal, chooses the payoff structure. Following Harsanyi (1967), the agents receive secret "messages" from nature containing information relevant to payoffs. For example, a message may contain information about their preferences or the quality of a good for sale. We call this information the agent's "type" (usually noted $\theta$, and accordingly the space of types $\Theta$). Agents then report a type to the principal (usually noted with a hat, $\hat\theta$) that can be a strategic lie. After the report, the principal and the agents are paid according to the payoff structure the principal chose. 

The timing of the game is:
  1. The principal commits to a mechanism $y(\hat\theta)$ that grants an outcome as a function of the reported type;
  2. The agents report, possibly dishonestly, a type profile $\hat\theta$;
  3. The mechanism is executed (agents receive the outcome $y(\hat\theta)$).
In order to understand who gets what, it is common to divide the outcome $y$ into a goods allocation and a money transfer, $y(\theta) = \{ x(\theta), t(\theta) \}$, where $x$ stands for an allocation of goods rendered or received as a function of type, and $t$ stands for a monetary transfer as a function of type.

As a benchmark the designer often defines what would happen under full information. Define a social choice function $f(\theta)$ mapping the (true) type profile directly to the allocation of goods received or rendered,
$$f(\theta) : \Theta \to X.$$
In contrast a mechanism maps the reported type profile to an outcome (again, both a goods allocation $x(\hat\theta)$ and a money transfer $t(\hat\theta)$),
$$y(\hat\theta) : \Theta \to X.$$

Revelation principle

A proposed mechanism constitutes a Bayesian game (a game of private information), and if it is well-behaved the game has a Bayesian Nash equilibrium. At equilibrium agents choose their reports strategically as a function of type,
$$\hat\theta(\theta).$$
It is difficult to solve for Bayesian equilibria in such a setting because it involves solving for agents' best-response strategies and for the best inference from a possible strategic lie. Thanks to a sweeping result called the revelation principle, no matter the mechanism a designer can confine attention to equilibria in which agents truthfully report type. The revelation principle states: "To every Bayesian Nash equilibrium there corresponds a Bayesian game with the same equilibrium outcome but in which players truthfully report type." 

This is extremely useful. The principle allows one to solve for a Bayesian equilibrium by assuming all players truthfully report type (subject to an incentive compatibility constraint). In one blow it eliminates the need to consider either strategic behavior or lying.

Its proof is quite direct. Assume a Bayesian game in which the agent's strategy and payoff are functions of its type and what others do, $u_i\left(s_i(\theta_i), s_{-i}(\theta_{-i}), \theta_i\right)$. By definition agent $i$'s equilibrium strategy $s_i(\theta_i)$ is Nash in expected utility:
$$s_i(\theta_i) \in \arg\max_{s_i' \in S_i} \sum_{\theta_{-i}} p(\theta_{-i} \mid \theta_i)\, u_i\left(s_i', s_{-i}(\theta_{-i}), \theta_i\right).$$
Simply define a mechanism that would induce agents to choose the same equilibrium. The easiest one to define is for the mechanism to commit to playing the agents' equilibrium strategies for them:
$$y(\hat\theta) = g\left(s_1(\hat\theta_1), s_2(\hat\theta_2), \ldots, s_n(\hat\theta_n)\right).$$
Under such a mechanism the agents of course find it optimal to reveal type, since the mechanism plays the strategies they found optimal anyway. Formally, choose $y(\hat\theta)$ such that
$$\theta_i \in \arg\max_{\theta_i' \in \Theta} \sum_{\theta_{-i}} p(\theta_{-i} \mid \theta_i)\, u_i\left(y(\theta_i', \theta_{-i}), \theta_i\right).$$

Implementability

The designer of a mechanism generally hopes either
  • to design a mechanism that "implements" a social choice function;
  • to find the mechanism that maximizes some value criterion (e.g. profit).
To implement a social choice function $f(\theta)$ is to find some transfer function $t(\theta)$ that motivates agents to pick the outcome $f(\theta)$. Formally, if the equilibrium strategy profile under the mechanism maps to the same goods allocation as the social choice function,
$$f(\theta) = x\left(\hat\theta(\theta)\right),$$
we say the mechanism implements the social choice function.

Thanks to the revelation principle, the designer can usually find a transfer function $t(\theta)$ to implement a social choice by solving an associated truth-telling game. If agents find it optimal to truthfully report type,
$$\hat\theta(\theta) = \theta,$$
we say such a mechanism is truthfully implementable (or just "implementable"). The task is then to solve for a truthfully implementable $t(\theta)$ and impute this transfer function to the original game. An allocation $x(\theta)$ is truthfully implementable if there exists a transfer function $t(\theta)$ such that
$$u\left(x(\theta), t(\theta), \theta\right) \geq u\left(x(\hat\theta), t(\hat\theta), \theta\right) \quad \forall\, \hat\theta \in \Theta,$$
which is also called the incentive compatibility (IC) constraint. 

In applications, the IC condition is the key to describing the shape of $t(\theta)$ in any useful way. Under certain conditions it can even isolate the transfer function analytically. Additionally, a participation (individual rationality) constraint is sometimes added if agents have the option of not playing.
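As a concrete illustration of the IC and IR constraints, here is a minimal Python sketch for a discrete type space and quasilinear utility; the types, allocation schedule, and transfer schedule are invented for the example and are not taken from the text.

```python
# Minimal sketch: checking incentive compatibility (IC) and individual
# rationality (IR) for a direct mechanism with a discrete type space.
# All numbers are illustrative, and utility is assumed quasilinear:
# u(x, t, theta) = theta * x - t.

types = [1.0, 2.0, 3.0]                         # possible agent types (per-unit valuations)
allocation = {1.0: 0.0, 2.0: 0.5, 3.0: 1.0}     # x(theta_hat)
transfer   = {1.0: 0.0, 2.0: 0.6, 3.0: 1.8}     # t(theta_hat)

def utility(theta, report):
    """Quasilinear payoff of a type-theta agent who reports `report`."""
    return theta * allocation[report] - transfer[report]

def is_truthfully_implementable():
    for theta in types:
        # IR: truthful participation must be at least as good as opting out (payoff 0).
        if utility(theta, theta) < 0:
            return False
        # IC: no misreport may beat truth-telling.
        for report in types:
            if utility(theta, report) > utility(theta, theta) + 1e-12:
                return False
    return True

print("Truthfully implementable:", is_truthfully_implementable())
```

In this toy schedule, lowering the transfer charged to the highest type below roughly 1.6 would let the middle type profitably imitate it, and the check would then fail.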

Necessity

Consider a setting in which all agents have a type-contingent utility function $u(x, t, \theta)$. Consider also a goods allocation $x(\theta)$ that is vector-valued and of size $k$ (which permits $k$ goods) and assume it is piecewise continuous with respect to its arguments. 

The function $x(\theta)$ is implementable only if
$$\frac{\partial}{\partial \theta} \left( \frac{\partial u / \partial x_k}{\left|\partial u / \partial t\right|} \right) \frac{\partial x}{\partial \theta} \geq 0$$
whenever $x = x(\theta)$ and $t = t(\theta)$ and $x$ is continuous at $\theta$. This is a necessary condition and is derived from the first- and second-order conditions of the agent's optimization problem assuming truth-telling. 

Its meaning can be understood in two pieces. The first piece says the agent's marginal rate of substitution (MRS) increases as a function of the type,
$$\frac{\partial}{\partial \theta}\, \mathrm{MRS}_{x,t} = \frac{\partial}{\partial \theta} \left( \frac{\partial u / \partial x_k}{\left|\partial u / \partial t\right|} \right).$$
In short, agents will not tell the truth if the mechanism does not offer higher agent types a better deal. Otherwise, higher types facing any mechanism that punishes high types for reporting will lie and declare they are lower types, violating the truth-telling IC constraint. The second piece is a monotonicity condition waiting to happen,
$$\frac{\partial x}{\partial \theta},$$
which, to be positive, means higher types must be given more of the good. 

There is potential for the two pieces to interact. If for some type range the contract offered less quantity to higher types ($\partial x / \partial \theta < 0$), it is possible the mechanism could compensate by giving higher types a discount. But such a contract already exists for low-type agents, so this solution is pathological. Such a solution sometimes occurs in the process of solving for a mechanism. In these cases it must be "ironed." In a multiple-good environment it is also possible for the designer to reward the agent with more of one good to substitute for less of another (e.g. butter for margarine). Multiple-good mechanisms are an ongoing problem in mechanism design theory.

Sufficiency

Mechanism design papers usually make two assumptions to ensure implementability:
  1. $\dfrac{\partial}{\partial \theta} \dfrac{\partial u / \partial x_k}{\left|\partial u / \partial t\right|} > 0 \quad \forall\, k$
This is known by several names: the single-crossing condition, the sorting condition and the Spence–Mirrlees condition. It means the utility function is of such a shape that the agent's MRS is increasing in type.
  2. $\exists\, K_0, K_1$ such that $\left| \dfrac{\partial u / \partial x_k}{\partial u / \partial t} \right| \leq K_0 + K_1 |t|$
This is a technical condition bounding the rate of growth of the MRS. 

These assumptions are sufficient to provide that any monotonic $x(\theta)$ is implementable (a $t(\theta)$ exists that can implement it). In addition, in the single-good setting the single-crossing condition is sufficient to provide that only a monotonic $x(\theta)$ is implementable, so the designer can confine his search to a monotonic $x(\theta)$.

Highlighted results

Revenue equivalence theorem

Vickrey (1961) gives a celebrated result that any member of a large class of auctions assures the seller of the same expected revenue and that the expected revenue is the best the seller can do. This is the case if
  • The buyers have identical valuation functions (which may be a function of type)
  • The buyers' types are independently distributed
  • The buyers' types are drawn from a continuous distribution
  • The type distribution bears the monotone hazard rate property
  • The mechanism sells the good to the buyer with the highest valuation
The last condition is crucial to the theorem. An implication is that for the seller to achieve higher revenue he must take a chance on giving the item to an agent with a lower valuation. Usually this means he must risk not selling the item at all.
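A quick Monte Carlo sketch can illustrate the revenue equivalence claim under standard textbook assumptions: n risk-neutral bidders with independent private values drawn uniformly on [0, 1], comparing a second-price auction (truthful bidding) with a first-price auction using the symmetric equilibrium bid b(v) = (n − 1)/n · v. The numbers and setup are illustrative, not part of Vickrey's original exposition.

```python
# Monte Carlo sketch of revenue equivalence: first-price vs. second-price
# sealed-bid auctions with independent private values, uniform on [0, 1].
# Both should yield expected revenue (n - 1)/(n + 1) under these assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_bidders, n_auctions = 4, 200_000
values = rng.uniform(0.0, 1.0, size=(n_auctions, n_bidders))

# Second-price auction: bidding one's value is a dominant strategy;
# the seller collects the second-highest value.
second_price_revenue = np.sort(values, axis=1)[:, -2].mean()

# First-price auction: with uniform values the symmetric equilibrium bid
# is b(v) = (n - 1)/n * v; the seller collects the highest bid.
first_price_revenue = ((n_bidders - 1) / n_bidders * values.max(axis=1)).mean()

print(f"second-price: {second_price_revenue:.4f}")
print(f"first-price : {first_price_revenue:.4f}")
print(f"theory      : {(n_bidders - 1) / (n_bidders + 1):.4f}")
```

Both estimates should come out close to 0.6 for four bidders, consistent with the theorem.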

Vickrey–Clarke–Groves mechanisms

The Vickrey (1961) auction model was later expanded by Clarke (1971) and Groves to treat a public choice problem in which a public project's cost is borne by all agents, e.g. whether to build a municipal bridge. The resulting "Vickrey–Clarke–Groves" mechanism can motivate agents to choose the socially efficient allocation of the public good even if agents have privately known valuations. In other words, it can solve the "tragedy of the commons"—under certain conditions, in particular quasilinear utility or if budget balance is not required. 

Consider a setting in which $n$ agents have quasilinear utility with private valuations $v(\theta_i, x)$, where the currency $t$ is valued linearly. The VCG designer designs an incentive compatible (hence truthfully implementable) mechanism to obtain the true type profile, from which the designer implements the socially optimal allocation
$$x^*(\theta) \in \arg\max_{x \in X} \sum_{i} v(\theta_i, x).$$
The cleverness of the VCG mechanism is the way it motivates truthful revelation. It eliminates incentives to misreport by penalizing any agent by the cost of the distortion he causes. Among the reports the agent may make, the VCG mechanism permits a "null" report saying he is indifferent to the public good and cares only about the money transfer. This effectively removes the agent from the game. If an agent does choose to report a type, the VCG mechanism charges the agent a fee if his report is pivotal, that is if his report changes the optimal allocation x so as to harm other agents. The payment is calculated
$$t_i(\hat\theta) = \sum_{j \neq i} v_j\left(\hat\theta_j, x^*_{-i}(\hat\theta_{-i})\right) - \sum_{j \neq i} v_j\left(\hat\theta_j, x^*(\hat\theta)\right),$$
which sums the distortion in the utilities of the other agents (and not his own) caused by one agent reporting.
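The pivot logic can be sketched in a few lines for a binary public project with equal cost shares; the reports, the cost, and the equal-share assumption are illustrative choices for this sketch rather than part of the general VCG definition.

```python
# Hedged sketch of a VCG (Clarke pivot) mechanism for a binary public project
# (e.g. build a bridge or not). Reported valuations and the project cost are
# illustrative; each agent carries a fixed equal cost share.

def vcg_public_project(reports, cost):
    """Return the build decision and each agent's pivot payment."""
    n = len(reports)
    share = cost / n

    def best_outcome(vals):
        # Build iff reported surplus (net of the fixed cost shares) is non-negative.
        surplus = sum(v - share for v in vals)
        return (1 if surplus >= 0 else 0), surplus

    decision, _ = best_outcome(reports)
    payments = []
    for i, _ in enumerate(reports):
        others = [v for j, v in enumerate(reports) if j != i]
        # Welfare of the others under the chosen decision...
        welfare_with_i = sum(v - share for v in others) * decision
        # ...versus their welfare if agent i's report were absent
        # (cost shares kept fixed for simplicity).
        decision_without_i, _ = best_outcome(others)
        welfare_without_i = sum(v - share for v in others) * decision_without_i
        # The Clarke payment charges i the harm his report imposes on others.
        payments.append(max(welfare_without_i - welfare_with_i, 0.0))
    return decision, payments

decision, payments = vcg_public_project(reports=[40.0, 25.0, 5.0], cost=60.0)
print("build:", decision, "pivot payments:", payments)
```

In this run only the first agent is pivotal: without his report the others would prefer not to build, so he alone pays a fee, equal to the harm (10) his report imposes on them.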

Gibbard–Satterthwaite theorem

Gibbard (1973) and Satterthwaite (1975) give an impossibility result similar in spirit to Arrow's impossibility theorem. For a very general class of games, only "dictatorial" social choice functions can be implemented. 

A social choice function $f(\cdot)$ is dictatorial if one agent $i$ always receives his most-favored goods allocation: for every type profile $\theta$,
$$u_i\left(f(\theta)\right) \geq u_i(x) \quad \forall\, x \in X.$$
The theorem states that under general conditions any truthfully implementable social choice function must be dictatorial if
  • $X$ is finite and contains at least three elements,
  • preferences are rational,
  • $f(\Theta) = X$.

Myerson–Satterthwaite theorem

Myerson and Satterthwaite (1983) show there is no efficient way for two parties to trade a good when they each have secret and probabilistically varying valuations for it, without the risk of forcing one party to trade at a loss. It is among the most remarkable negative results in economics—a kind of negative mirror to the fundamental theorems of welfare economics.

Examples

Price discrimination

Mirrlees (1971) introduces a setting in which the transfer function $t(\theta)$ is easy to solve for. Due to its relevance and tractability it is a common setting in the literature. Consider a single-good, single-agent setting in which the agent has quasilinear utility with an unknown type parameter $\theta$,
$$u(x, t, \theta) = \theta\, v(x) - t,$$
and in which the principal has a prior CDF over the agent's type, $P(\theta)$. The principal can produce goods at a convex marginal cost $c(x)$ and wants to maximize the expected profit from the transaction
$$\max_{x(\theta),\, t(\theta)} \; \mathbb{E}_\theta \left[ t(\theta) - c\left(x(\theta)\right) \right],$$
subject to IC and IR conditions
$$u\left(x(\theta), t(\theta), \theta\right) \geq u\left(x(\theta'), t(\theta'), \theta\right) \quad \forall\, \theta, \theta' \quad \text{(IC)},$$
$$u\left(x(\theta), t(\theta), \theta\right) \geq 0 \quad \forall\, \theta \quad \text{(IR, with the outside option normalized to zero)}.$$
The principal here is a monopolist trying to set a profit-maximizing price scheme in which it cannot identify the type of the customer. A common example is an airline setting fares for business, leisure and student travelers. Due to the IR condition it has to give every type a good enough deal to induce participation. Due to the IC condition it has to give every type a good enough deal that the type prefers its deal to that of any other. 

A trick given by Mirrlees (1971) is to use the envelope theorem to eliminate the transfer function from the expectation to be maximized. Let $U(\theta) = \max_{\theta'} u\left(x(\theta'), t(\theta'), \theta\right)$; then along the truth-telling solution
$$\frac{dU}{d\theta} = \frac{\partial u}{\partial \theta} = v\left(x(\theta)\right).$$
Integrating,
$$U(\theta) = U(\theta_0) + \int_{\theta_0}^{\theta} v\left(x(\tilde\theta)\right) d\tilde\theta,$$
where $\theta_0$ is some index type. Replacing the incentive-compatible $t(\theta) = \theta\, v\left(x(\theta)\right) - U(\theta)$ in the maximand,
$$\mathbb{E}_\theta \left[ \theta\, v\left(x(\theta)\right) - U(\theta_0) - \int_{\theta_0}^{\theta} v\left(x(\tilde\theta)\right) d\tilde\theta - c\left(x(\theta)\right) \right] = \mathbb{E}_\theta \left[ \left( \theta - \frac{1 - P(\theta)}{p(\theta)} \right) v\left(x(\theta)\right) - U(\theta_0) - c\left(x(\theta)\right) \right]$$
after an integration by parts. This function can be maximized pointwise. 

Because $t(\theta)$ is incentive-compatible already, the designer can drop the IC constraint. If the utility function satisfies the Spence–Mirrlees condition then a monotonic $x(\theta)$ exists. The IR constraint can be checked at equilibrium and the fee schedule raised or lowered accordingly. Additionally, note the presence of a hazard rate in the expression. If the type distribution bears the monotone hazard ratio property, the FOC is sufficient to solve for $t(\theta)$. If not, then it is necessary to check whether the monotonicity constraint (see sufficiency, above) is satisfied everywhere along the allocation and fee schedules. If not, then the designer must use Myerson ironing.
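To make the pointwise maximization concrete, here is a hedged numerical sketch with $\theta$ uniform on [0, 1], $v(x) = \sqrt{x}$ and $c(x) = x$; these functional forms are assumptions for illustration only. With a uniform prior the inverse hazard term $(1 - P(\theta))/p(\theta)$ equals $1 - \theta$, so the virtual type is $2\theta - 1$ and the grid search should recover $x(\theta) = ((2\theta - 1)/2)^2$ for $\theta \geq 1/2$ and zero below.

```python
# Hedged numerical sketch of the monopolist's pointwise problem after the
# Mirrlees substitution: maximize the "virtual surplus"
#   (theta - (1 - P(theta)) / p(theta)) * v(x) - c(x)
# separately for each type. Assumes theta ~ Uniform[0, 1], v(x) = sqrt(x),
# c(x) = x; all functional forms are illustrative.
import numpy as np

thetas = np.linspace(0.0, 1.0, 11)
x_grid = np.linspace(0.0, 1.0, 2001)

def virtual_type(theta):
    # Uniform[0, 1]: P(theta) = theta and p(theta) = 1, so the inverse
    # hazard rate (1 - P)/p equals 1 - theta.
    return theta - (1.0 - theta)

for theta in thetas:
    surplus = virtual_type(theta) * np.sqrt(x_grid) - x_grid
    x_star = x_grid[np.argmax(surplus)]
    print(f"theta = {theta:.1f}  ->  x(theta) = {x_star:.3f}")
```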

Myerson ironing

It is possible to solve for a goods or price schedule that satisfies the first-order conditions yet is not monotonic. If so it is necessary to "iron" the schedule by choosing some value at which to flatten the function.
 
In some applications the designer may solve the first-order conditions for the price and allocation schedules yet find they are not monotonic. For example, in the quasilinear setting this often happens when the hazard ratio is itself not monotone. By the Spence–Mirrlees condition the optimal price and allocation schedules must be monotonic, so the designer must eliminate any interval over which the schedule changes direction by flattening it. 

Intuitively, what is going on is that the designer finds it optimal to bunch certain types together and give them the same contract. Normally the designer motivates higher types to distinguish themselves by giving them a better deal. If there are too few higher types on the margin, the designer does not find it worthwhile to grant lower types a concession (called their information rent) in order to charge higher types a type-specific contract. 

Consider a monopolist principal selling to agents with quasilinear utility, as in the example above. Suppose the allocation schedule $x(\theta)$ satisfying the first-order conditions has a single interior peak at $\theta_1$ and a single interior trough at $\theta_2 > \theta_1$.
  • Following Myerson (1981), flatten it by choosing a value $x$ satisfying
$$\int_{\phi_1(x)}^{\phi_2(x)} \left( \left( \theta - \frac{1 - P(\theta)}{p(\theta)} \right) \frac{dv}{dx}(x) - \frac{dc}{dx}(x) \right) p(\theta)\, d\theta = 0,$$
where $\phi_1(x)$ is the inverse function of $x(\theta)$ mapping to types below the interior peak and $\phi_2(x)$ is the inverse function mapping to types above the interior trough. That is, $\phi_1$ returns a $\theta$ before the interior peak and $\phi_2$ returns a $\theta$ after the interior trough.
  • If the nonmonotonic region of $x(\theta)$ borders the edge of the type space, simply set the appropriate function (or both) to the boundary type. If there are multiple regions, see a textbook for an iterative procedure; it may be that more than one trough should be ironed together. (A numerical sketch of the flattening step follows this list.)
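One common computational route to this flattening, sketched below under the assumption of a uniform grid of type quantiles, is to replace the non-monotone virtual value by the slope of the greatest convex minorant of its running integral; on a uniform grid this reduces to the pool-adjacent-violators step used in isotonic regression. The virtual value series in the sketch is invented for illustration.

```python
# Hedged sketch of Myerson ironing: given a (possibly non-monotone) virtual
# value evaluated on a uniform grid of quantiles, replace it with the slope of
# the greatest convex minorant of its running integral. With uniform weights
# this is the classic pool-adjacent-violators (isotonic regression) step.
import numpy as np

def iron(psi):
    """Smallest block-averaging change that makes psi non-decreasing."""
    blocks = []                                   # list of (block mean, block weight)
    for value in map(float, psi):
        blocks.append((value, 1.0))
        # Merge backwards while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            (v2, w2), (v1, w1) = blocks.pop(), blocks.pop()
            blocks.append(((v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2))
    ironed = []
    for v, w in blocks:
        ironed.extend([v] * int(w))
    return np.array(ironed)

psi = np.array([-1.0, 0.2, 0.8, 0.5, 0.1, 0.4, 0.9, 1.3, 1.8])  # non-monotone in the middle
print("raw   :", psi)
print("ironed:", iron(psi))
```

The dip between the interior peak and trough is replaced by a flat segment at the average level, which is exactly the bunching of types described above.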

Proof

The proof uses the theory of optimal control. It considers the set of intervals in the nonmonotonic region of $x(\theta)$ over which it might flatten the schedule. It then writes a Hamiltonian to obtain necessary conditions for an $x(\theta)$ within the intervals
  1. that does satisfy monotonicity,
  2. for which the monotonicity constraint is not binding on the boundaries of the interval.
Condition two ensures that the $x(\theta)$ satisfying the optimal control problem reconnects to the schedule in the original problem at the interval boundaries (no jumps). Any $x(\theta)$ satisfying the necessary conditions must be flat because it must be monotonic and yet reconnect at the boundaries. 

As before, maximize the principal's expected payoff, but this time subject to the monotonicity constraint
$$\frac{\partial x}{\partial \theta} \geq 0,$$
and use a Hamiltonian to do it, with shadow price $\nu(\theta)$,
$$H = \left( \left( \theta - \frac{1 - P(\theta)}{p(\theta)} \right) v(x) - c(x) \right) p(\theta) + \nu(\theta) \frac{\partial x}{\partial \theta},$$
where $x$ is a state variable and $\partial x / \partial \theta$ the control. As usual in optimal control the costate evolution equation must satisfy
$$\frac{d\nu}{d\theta} = -\frac{\partial H}{\partial x} = -\left( \left( \theta - \frac{1 - P(\theta)}{p(\theta)} \right) \frac{dv}{dx}(x) - \frac{dc}{dx}(x) \right) p(\theta).$$
Taking advantage of condition 2, note the monotonicity constraint is not binding at the boundaries of the $\theta$ interval,
$$\nu(\theta_1) = \nu(\theta_2) = 0,$$
meaning the costate condition can be integrated and also equals 0:
$$\int_{\theta_1}^{\theta_2} \left( \left( \theta - \frac{1 - P(\theta)}{p(\theta)} \right) \frac{dv}{dx}(x) - \frac{dc}{dx}(x) \right) p(\theta)\, d\theta = 0.$$
The average distortion of the principal's surplus must be 0. To flatten the schedule, find an $x$ such that its inverse image maps to a $\theta$ interval satisfying the condition above.

General equilibrium theory

From Wikipedia, the free encyclopedia

In economics, general equilibrium theory attempts to explain the behavior of supply, demand, and prices in a whole economy with several or many interacting markets, by seeking to prove that the interaction of demand and supply will result in an overall general equilibrium. General equilibrium theory contrasts with the theory of partial equilibrium, which analyzes single markets in isolation. 
 
General equilibrium theory both studies economies using the model of equilibrium pricing and seeks to determine in which circumstances the assumptions of general equilibrium will hold. The theory dates to the 1870s, particularly the work of French economist Léon Walras in his pioneering 1874 work Elements of Pure Economics.

Overview

It is often assumed that agents are price takers, and under that assumption two common notions of equilibrium exist: Walrasian, or competitive equilibrium, and its generalization: a price equilibrium with transfers.

Broadly speaking, general equilibrium tries to give an understanding of the whole economy using a "bottom-up" approach, starting with individual markets and agents. (Macroeconomics, as developed by the Keynesian economists, focused on a "top-down" approach, where the analysis starts with larger aggregates, the "big picture".) Therefore, general equilibrium theory has traditionally been classified as part of microeconomics.

The difference is not as clear as it used to be, since much of modern macroeconomics has emphasized microeconomic foundations, and has constructed general equilibrium models of macroeconomic fluctuations. General equilibrium macroeconomic models usually have a simplified structure that only incorporates a few markets, like a "goods market" and a "financial market". In contrast, general equilibrium models in the microeconomic tradition typically involve a multitude of different goods markets. They are usually complex and require computers to help with numerical solutions.

In a market system the prices and production of all goods, including the price of money and interest, are interrelated. A change in the price of one good, say bread, may affect another price, such as bakers' wages. If bakers don't differ in tastes from others, the demand for bread might be affected by a change in bakers' wages, with a consequent effect on the price of bread. Calculating the equilibrium price of just one good, in theory, requires an analysis that accounts for all of the millions of different goods that are available.

The first attempt in neoclassical economics to model prices for a whole economy was made by Léon Walras. Walras' Elements of Pure Economics provides a succession of models, each taking into account more aspects of a real economy (two commodities, many commodities, production, growth, money). Some think Walras was unsuccessful and that the later models in this series are inconsistent.

In particular, Walras's model was a long-run model in which prices of capital goods are the same whether they appear as inputs or outputs and in which the same rate of profits is earned in all lines of industry. This is inconsistent with the quantities of capital goods being taken as data. But when Walras introduced capital goods in his later models, he took their quantities as given, in arbitrary ratios. (In contrast, Kenneth Arrow and Gérard Debreu continued to take the initial quantities of capital goods as given, but adopted a short run model in which the prices of capital goods vary with time and the own rate of interest varies across capital goods.)

Walras was the first to lay down a research program much followed by 20th-century economists. In particular, the Walrasian agenda included the investigation of when equilibria are unique and stable. (Walras' Lesson 7 shows that neither uniqueness, nor stability, nor even existence of an equilibrium is guaranteed.)

Walras also proposed a dynamic process by which general equilibrium might be reached, that of the tâtonnement or groping process.

The tâtonnement process is a model for investigating stability of equilibria. Prices are announced (perhaps by an "auctioneer"), and agents state how much of each good they would like to offer (supply) or purchase (demand). No transactions and no production take place at disequilibrium prices. Instead, prices are lowered for goods with positive prices and excess supply. Prices are raised for goods with excess demand. The question for the mathematician is under what conditions such a process will terminate in equilibrium where demand equates to supply for goods with positive prices and demand does not exceed supply for goods with a price of zero. Walras was not able to provide a definitive answer to this question.
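A minimal simulation of this groping process, under illustrative assumptions (two goods, two Cobb-Douglas consumers, good 2 as numéraire, and a simple proportional price-adjustment rule), looks like this:

```python
# Hedged sketch of a tatonnement ("groping") process in a two-good exchange
# economy with two Cobb-Douglas consumers. Good 2 is the numeraire (p2 = 1);
# the price of good 1 is raised under excess demand and lowered under excess
# supply. Preferences, endowments, and the step size are illustrative.
import numpy as np

alphas = np.array([0.3, 0.7])          # Cobb-Douglas expenditure shares on good 1
endow1 = np.array([1.0, 0.0])          # endowments of good 1
endow2 = np.array([0.0, 1.0])          # endowments of good 2

def excess_demand_good1(p1):
    wealth = p1 * endow1 + 1.0 * endow2
    demand1 = alphas * wealth / p1
    return demand1.sum() - endow1.sum()

p1, step = 2.0, 0.5                    # arbitrary starting price and adjustment speed
for it in range(200):
    z = excess_demand_good1(p1)
    if abs(z) < 1e-10:
        break
    p1 += step * z                     # no trade occurs at these disequilibrium prices
print(f"tatonnement price of good 1: {p1:.4f} after {it} iterations")
```

With these preferences the excess demand for good 1 is 0.7/p1 − 0.7, so the process converges to the equilibrium price p1 = 1; with other preferences or adjustment rules convergence is not guaranteed, which is exactly the stability question posed above.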

In partial equilibrium analysis, the determination of the price of a good is simplified by just looking at the price of one good, and assuming that the prices of all other goods remain constant. The Marshallian theory of supply and demand is an example of partial equilibrium analysis. Partial equilibrium analysis is adequate when the first-order effects of a shift in the demand curve do not shift the supply curve. Anglo-American economists became more interested in general equilibrium in the late 1920s and 1930s after Piero Sraffa's demonstration that Marshallian economists cannot account for the forces thought to account for the upward slope of the supply curve for a consumer good.

If an industry uses little of a factor of production, a small increase in the output of that industry will not bid the price of that factor up. To a first-order approximation, firms in the industry will experience constant costs, and the industry supply curves will not slope up. If an industry uses an appreciable amount of that factor of production, an increase in the output of that industry will exhibit increasing costs. But such a factor is likely to be used in substitutes for the industry's product, and an increased price of that factor will have effects on the supply of those substitutes. Consequently, Sraffa argued, the first-order effects of a shift in the demand curve of the original industry under these assumptions includes a shift in the supply curve of substitutes for that industry's product, and consequent shifts in the original industry's supply curve. General equilibrium is designed to investigate such interactions between markets.

Continental European economists made important advances in the 1930s. Walras' proofs of the existence of general equilibrium often were based on the counting of equations and variables. Such arguments are inadequate for non-linear systems of equations and do not imply that equilibrium prices and quantities cannot be negative, a meaningless solution for his models. The replacement of certain equations by inequalities and the use of more rigorous mathematics improved general equilibrium modeling.

Modern concept of general equilibrium in economics

The modern conception of general equilibrium is provided by a model developed jointly by Kenneth Arrow, Gérard Debreu, and Lionel W. McKenzie in the 1950s. Debreu presents this model in Theory of Value (1959) as an axiomatic model, following the style of mathematics promoted by Nicolas Bourbaki. In such an approach, the interpretation of the terms in the theory (e.g., goods, prices) are not fixed by the axioms.

Three important interpretations of the terms of the theory have been often cited. First, suppose commodities are distinguished by the location where they are delivered. Then the Arrow-Debreu model is a spatial model of, for example, international trade.

Second, suppose commodities are distinguished by when they are delivered. That is, suppose all markets equilibrate at some initial instant of time. Agents in the model purchase and sell contracts, where a contract specifies, for example, a good to be delivered and the date at which it is to be delivered. The Arrow–Debreu model of intertemporal equilibrium contains forward markets for all goods at all dates. No markets exist at any future dates.

Third, suppose contracts specify states of nature which affect whether a commodity is to be delivered: "A contract for the transfer of a commodity now specifies, in addition to its physical properties, its location and its date, an event on the occurrence of which the transfer is conditional. This new definition of a commodity allows one to obtain a theory of [risk] free from any probability concept..."

These interpretations can be combined. So the complete Arrow–Debreu model can be said to apply when goods are identified by when they are to be delivered, where they are to be delivered and under what circumstances they are to be delivered, as well as their intrinsic nature. So there would be a complete set of prices for contracts such as "1 ton of Winter red wheat, delivered on 3rd of January in Minneapolis, if there is a hurricane in Florida during December". A general equilibrium model with complete markets of this sort seems to be a long way from describing the workings of real economies, however its proponents argue that it is still useful as a simplified guide as to how real economies function. 

Some of the recent work in general equilibrium has in fact explored the implications of incomplete markets, which is to say an intertemporal economy with uncertainty, where there do not exist sufficiently detailed contracts that would allow agents to fully allocate their consumption and resources through time. While it has been shown that such economies will generally still have an equilibrium, the outcome may no longer be Pareto optimal. The basic intuition for this result is that if consumers lack adequate means to transfer their wealth from one time period to another and the future is risky, there is nothing to necessarily tie any price ratio down to the relevant marginal rate of substitution, which is the standard requirement for Pareto optimality. Under some conditions the economy may still be constrained Pareto optimal, meaning that a central authority limited to the same type and number of contracts as the individual agents may not be able to improve upon the outcome; what would be needed is the introduction of a full set of possible contracts. Hence, one implication of the theory of incomplete markets is that inefficiency may be a result of underdeveloped financial institutions or credit constraints faced by some members of the public. Research still continues in this area.

Properties and characterization of general equilibrium

Basic questions in general equilibrium analysis are concerned with the conditions under which an equilibrium will be efficient, which efficient equilibria can be achieved, when an equilibrium is guaranteed to exist and when the equilibrium will be unique and stable.

First Fundamental Theorem of Welfare Economics

The First Fundamental Welfare Theorem asserts that market equilibria are Pareto efficient. In a pure exchange economy, a sufficient condition for the first welfare theorem to hold is that preferences be locally nonsatiated. The first welfare theorem also holds for economies with production regardless of the properties of the production function. Implicitly, the theorem assumes complete markets and perfect information. In an economy with externalities, for example, it is possible for equilibria to arise that are not efficient.

The first welfare theorem is informative in the sense that it points to the sources of inefficiency in markets. Under the assumptions above, any market equilibrium is tautologically efficient. Therefore, when equilibria arise that are not efficient, the market system itself is not to blame, but rather some sort of market failure.

Second Fundamental Theorem of Welfare Economics

Even if every equilibrium is efficient, it may not be that every efficient allocation of resources can be part of an equilibrium. However, the second theorem states that every Pareto efficient allocation can be supported as an equilibrium by some set of prices. In other words, all that is required to reach a particular Pareto efficient outcome is a redistribution of initial endowments of the agents after which the market can be left alone to do its work. This suggests that the issues of efficiency and equity can be separated and need not involve a trade-off. The conditions for the second theorem are stronger than those for the first, as consumers' preferences and production sets now need to be convex (convexity roughly corresponds to the idea of diminishing marginal rates of substitution i.e. "the average of two equally good bundles is better than either of the two bundles").

Existence

Even though every equilibrium is efficient, neither of the above two theorems say anything about the equilibrium existing in the first place. To guarantee that an equilibrium exists, it suffices that consumer preferences be strictly convex. With enough consumers, the convexity assumption can be relaxed both for existence and the second welfare theorem. Similarly, but less plausibly, convex feasible production sets suffice for existence; convexity excludes economies of scale.

Proofs of the existence of equilibrium traditionally rely on fixed-point theorems such as the Brouwer fixed-point theorem for functions (or, more generally, the Kakutani fixed-point theorem for set-valued functions). The proof was first due to Lionel McKenzie, and to Kenneth Arrow and Gérard Debreu. In fact, the converse also holds, according to Uzawa's derivation of Brouwer's fixed-point theorem from Walras's law. Following Uzawa's theorem, many mathematical economists consider proving existence a deeper result than proving the two Fundamental Theorems. 

Another method of proof of existence, global analysis, uses Sard's lemma and the Baire category theorem; this method was pioneered by Gérard Debreu and Stephen Smale.

Nonconvexities in large economies

Starr (1969) applied the Shapley–Folkman–Starr theorem to prove that even without convex preferences there exists an approximate equilibrium. The Shapley–Folkman–Starr results bound the distance from an "approximate" economic equilibrium to an equilibrium of a "convexified" economy, when the number of agents exceeds the dimension of the goods. Following Starr's paper, the Shapley–Folkman–Starr results were "much exploited in the theoretical literature", according to Guesnerie, who wrote the following:
...some key results obtained under the convexity assumption remain (approximately) relevant in circumstances where convexity fails. For example, in economies with a large consumption side, nonconvexities in preferences do not destroy the standard results of, say Debreu's theory of value. In the same way, if indivisibilities in the production sector are small with respect to the size of the economy, [ . . . ] then standard results are affected in only a minor way.
To this text, Guesnerie appended the following footnote:
The derivation of these results in general form has been one of the major achievements of postwar economic theory.
In particular, the Shapley-Folkman-Starr results were incorporated in the theory of general economic equilibria and in the theory of market failures and of public economics.

Uniqueness

Although generally (assuming convexity) an equilibrium will exist and will be efficient, the conditions under which it will be unique are much stronger. While the issues are fairly technical the basic intuition is that the presence of wealth effects (which is the feature that most clearly delineates general equilibrium analysis from partial equilibrium) generates the possibility of multiple equilibria. When a price of a particular good changes there are two effects. First, the relative attractiveness of various commodities changes; and second, the wealth distribution of individual agents is altered. These two effects can offset or reinforce each other in ways that make it possible for more than one set of prices to constitute an equilibrium. 

A result known as the Sonnenschein–Mantel–Debreu theorem states that the aggregate excess demand function inherits only certain properties of individuals' demand functions, and that these (continuity, homogeneity of degree zero, Walras' law and boundary behavior when prices are near zero) are the only real restrictions one can expect from an aggregate excess demand function: any such function can be rationalized as the excess demand of an economy. In particular, uniqueness of equilibrium should not be expected.

There has been much research on conditions when the equilibrium will be unique, or which at least will limit the number of equilibria. One result states that under mild assumptions the number of equilibria will be finite and odd. Furthermore, if an economy as a whole, as characterized by an aggregate excess demand function, has the revealed preference property (which is a much stronger condition than revealed preferences for a single individual) or the gross substitute property then likewise the equilibrium will be unique. All methods of establishing uniqueness can be thought of as establishing that each equilibrium has the same positive local index, in which case by the index theorem there can be but one such equilibrium.

Determinacy

Given that equilibria may not be unique, it is of some interest to ask whether any particular equilibrium is at least locally unique. If so, then comparative statics can be applied as long as the shocks to the system are not too large. As stated above, in a regular economy equilibria will be finite, hence locally unique. One reassuring result, due to Debreu, is that "most" economies are regular.

Work by Michael Mandler (1999) has challenged this claim. The Arrow–Debreu–McKenzie model is neutral between models of production functions as continuously differentiable and as formed from (linear combinations of) fixed coefficient processes. Mandler accepts that, under either model of production, the initial endowments will not be consistent with a continuum of equilibria, except for a set of Lebesgue measure zero. However, endowments change with time in the model and this evolution of endowments is determined by the decisions of agents (e.g., firms) in the model. Agents in the model have an interest in equilibria being indeterminate:
Indeterminacy, moreover, is not just a technical nuisance; it undermines the price-taking assumption of competitive models. Since arbitrary small manipulations of factor supplies can dramatically increase a factor's price, factor owners will not take prices to be parametric.
When technology is modeled by (linear combinations) of fixed coefficient processes, optimizing agents will drive endowments to be such that a continuum of equilibria exist:
The endowments where indeterminacy occurs systematically arise through time and therefore cannot be dismissed; the Arrow-Debreu-McKenzie model is thus fully subject to the dilemmas of factor price theory.
Some have questioned the practical applicability of the general equilibrium approach based on the possibility of non-uniqueness of equilibria.

Stability

In a typical general equilibrium model the prices that prevail "when the dust settles" are simply those that coordinate the demands of various consumers for various goods. But this raises the question of how these prices and allocations have been arrived at, and whether any (temporary) shock to the economy will cause it to converge back to the same outcome that prevailed before the shock. This is the question of stability of the equilibrium, and it can be readily seen that it is related to the question of uniqueness. If there are multiple equilibria, then some of them will be unstable. Then, if an equilibrium is unstable and there is a shock, the economy will wind up at a different set of allocations and prices once the convergence process terminates. However stability depends not only on the number of equilibria but also on the type of the process that guides price changes (for a specific type of price adjustment process see Walrasian auction). Consequently, some researchers have focused on plausible adjustment processes that guarantee system stability, i.e., that guarantee convergence of prices and allocations to some equilibrium. When more than one stable equilibrium exists, where one ends up will depend on where one begins.

Unresolved problems in general equilibrium

Research building on the Arrow–Debreu–McKenzie model has revealed some problems with the model. The Sonnenschein–Mantel–Debreu results show that, essentially, any restrictions on the shape of excess demand functions are stringent. Some think this implies that the Arrow–Debreu model lacks empirical content. At any rate, Arrow–Debreu–McKenzie equilibria cannot be expected to be unique, or stable. 

A model organized around the tâtonnement process has been said to be a model of a centrally planned economy, not a decentralized market economy. Some research has tried to develop general equilibrium models with other processes. In particular, some economists have developed models in which agents can trade at out-of-equilibrium prices and such trades can affect the equilibria to which the economy tends. Particularly noteworthy are the Hahn process, the Edgeworth process and the Fisher process.

The data determining Arrow-Debreu equilibria include initial endowments of capital goods. If production and trade occur out of equilibrium, these endowments will be changed further complicating the picture.
In a real economy, however, trading, as well as production and consumption, goes on out of equilibrium. It follows that, in the course of convergence to equilibrium (assuming that occurs), endowments change. In turn this changes the set of equilibria. Put more succinctly, the set of equilibria is path dependent... [This path dependence] makes the calculation of equilibria corresponding to the initial state of the system essentially irrelevant. What matters is the equilibrium that the economy will reach from given initial endowments, not the equilibrium that it would have been in, given initial endowments, had prices happened to be just right.
(Franklin Fisher).
The Arrow–Debreu model in which all trade occurs in futures contracts at time zero requires a very large number of markets to exist. It is equivalent under complete markets to a sequential equilibrium concept in which spot markets for goods and assets open at each date-state event (they are not equivalent under incomplete markets); market clearing then requires that the entire sequence of prices clears all markets at all times. A generalization of the sequential market arrangement is the temporary equilibrium structure, where market clearing at a point in time is conditional on expectations of future prices which need not be market clearing ones.

Although the Arrow–Debreu–McKenzie model is set out in terms of some arbitrary numéraire, the model does not encompass money. Frank Hahn, for example, has investigated whether general equilibrium models can be developed in which money enters in some essential way. One of the essential questions he introduces, often referred to as Hahn's problem, is: "Can one construct an equilibrium where money has value?" The goal is to find models in which the existence of money can alter the equilibrium solutions, perhaps because the initial position of agents depends on monetary prices. 

Some critics of general equilibrium modeling contend that much research in these models constitutes exercises in pure mathematics with no connection to actual economies. In a 1979 article, Nicholas Georgescu-Roegen complains: "There are endeavors that now pass for the most desirable kind of economic contributions although they are just plain mathematical exercises, not only without any economic substance but also without any mathematical value." He cites as an example a paper that assumes more traders in existence than there are points in the set of real numbers.

Although modern models in general equilibrium theory demonstrate that under certain circumstances prices will indeed converge to equilibria, critics hold that the assumptions necessary for these results are extremely strong. As well as stringent restrictions on excess demand functions, the necessary assumptions include perfect rationality of individuals; complete information about all prices both now and in the future; and the conditions necessary for perfect competition. However some results from experimental economics suggest that even in circumstances where there are few, imperfectly informed agents, the resulting prices and allocations may wind up resembling those of a perfectly competitive market (although certainly not a stable general equilibrium in all markets).

Frank Hahn defends general equilibrium modeling on the grounds that it provides a negative function. General equilibrium models show what the economy would have to be like for an unregulated economy to be Pareto efficient.

Computing general equilibrium

Until the 1970s general equilibrium analysis remained theoretical. With advances in computing power and the development of input–output tables, it became possible to model national economies, or even the world economy, and attempts were made to solve for general equilibrium prices and quantities empirically. 
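As a toy version of such an exercise, the sketch below solves a three-good exchange economy with Cobb-Douglas consumers by root-finding on the excess-demand system; the parameters are invented, and a generic root finder stands in for the specialized fixed-point methods discussed next.

```python
# Hedged sketch of solving for general equilibrium prices numerically: a
# three-good exchange economy with Cobb-Douglas consumers, solved by
# root-finding on the excess-demand system (good 3 is the numeraire).
# All parameters are illustrative; this uses a generic root finder, not
# Scarf's fixed-point algorithm discussed below.
import numpy as np
from scipy.optimize import fsolve

alphas = np.array([[0.2, 0.3, 0.5],        # consumers' Cobb-Douglas expenditure shares
                   [0.5, 0.4, 0.1],
                   [0.3, 0.3, 0.4]])
endowments = np.array([[1.0, 0.0, 0.0],    # each consumer owns one unit of one good
                       [0.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0]])

def excess_demand(log_p_free):
    prices = np.append(np.exp(log_p_free), 1.0)   # numeraire: p3 = 1; log form keeps prices positive
    wealth = endowments @ prices
    demand = alphas * wealth[:, None] / prices    # Cobb-Douglas demand system
    # By Walras' law only two markets need to clear independently.
    return (demand.sum(axis=0) - endowments.sum(axis=0))[:2]

log_p = fsolve(excess_demand, x0=np.zeros(2))
print("equilibrium prices (p3 = 1):", np.append(np.exp(log_p), 1.0))
```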

Applied general equilibrium (AGE) models were pioneered by Herbert Scarf in 1967, and offered a method for solving the Arrow–Debreu general equilibrium system numerically. This was first implemented by John Shoven and John Whalley (students of Scarf at Yale) in 1972 and 1973, and AGE models were a popular method up through the 1970s. In the 1980s, however, AGE models faded from popularity due to their inability to provide a precise solution and their high cost of computation.

Computable general equilibrium (CGE) models surpassed and replaced AGE models in the mid-1980s, as the CGE model was able to provide relatively quick and large computable models for a whole economy, and was the preferred method of governments and the World Bank. CGE models are heavily used today, and while 'AGE' and 'CGE' are used interchangeably in the literature, Scarf-type AGE models have not been constructed since the mid-1980s, and the current CGE literature is not based on Arrow-Debreu and general equilibrium theory as discussed in this article. CGE models, and what are today referred to as AGE models, are based on static, simultaneously solved, macro balancing equations (from the standard Keynesian macro model), giving a precise and explicitly computable result.

Other schools

General equilibrium theory is a central point of contention and influence between the neoclassical school and other schools of economic thought, and different schools have varied views on general equilibrium theory. Some, such as the Keynesian and Post-Keynesian schools, strongly reject general equilibrium theory as "misleading" and "useless". Other schools, such as new classical macroeconomics, developed from general equilibrium theory.

Keynesian and Post-Keynesian

Keynesian and Post-Keynesian economists, and their underconsumptionist predecessors, criticize general equilibrium theory specifically, and as part of broader criticisms of neoclassical economics. They argue that general equilibrium theory is neither accurate nor useful: economies are not in equilibrium, equilibrium may be slow and painful to achieve, modeling by equilibrium is "misleading", and the resulting theory is not a useful guide, particularly for understanding economic crises.
Let us beware of this dangerous theory of equilibrium which is supposed to be automatically established. A certain kind of equilibrium, it is true, is reestablished in the long run, but it is after a frightful amount of suffering.
— Simonde de Sismondi, New Principles of Political Economy, vol. 1, 1819, pp. 20-21.
The long run is a misleading guide to current affairs. In the long run we are all dead. Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is past the ocean is flat again.
— John Maynard Keynes, A Tract on Monetary Reform, 1923, ch. 3
It is as absurd to assume that, for any long period of time, the variables in the economic organization, or any part of them, will "stay put," in perfect equilibrium, as to assume that the Atlantic Ocean can ever be without a wave.
— Irving Fisher, The Debt-Deflation Theory of Great Depressions, 1933, p. 339
Robert Clower and others have argued for a reformulation of the theory toward disequilibrium analysis, to incorporate how monetary exchange fundamentally alters the representation of an economy from that of a barter system.

New classical macroeconomics

While general equilibrium theory and neoclassical economics generally were originally microeconomic theories, new classical macroeconomics builds a macroeconomic theory on these bases. In new classical models, the macroeconomy is assumed to be at its unique equilibrium, with full employment and potential output, and this equilibrium is assumed to have always been achieved via price and wage adjustment (market clearing). The best-known such model is Real Business Cycle Theory, in which business cycles are considered to be largely due to changes in the real economy; unemployment is not attributed to the failure of the market to achieve potential output, but to equilibrium potential output having fallen and equilibrium unemployment having risen.

Socialist economics

Within socialist economics, a sustained critique of general equilibrium theory (and neoclassical economics generally) is given in Anti-Equilibrium, based on the experiences of János Kornai with the failures of Communist central planning, although Michael Albert and Robin Hahnel later based their Parecon model on the same theory.

Social network analysis

From Wikipedia, the free encyclopedia

A social network diagram displaying friendship ties among a set of Facebook users.

Social network analysis (SNA) is the process of investigating social structures through the use of networks and graph theory. It characterizes networked structures in terms of nodes (individual actors, people, or things within the network) and the ties, edges, or links (relationships or interactions) that connect them. Examples of social structures commonly visualized through social network analysis include social media networks, the spread of memes, information circulation, friendship and acquaintance networks, business networks, social networks, collaboration graphs, kinship, disease transmission, and sexual relationships. These networks are often visualized through sociograms in which nodes are represented as points and ties are represented as lines. 

Social network analysis has emerged as a key technique in modern sociology. It has also gained a significant following in anthropology, biology, demography, communication studies, economics, geography, history, information science, organizational studies, political science, social psychology, development studies, sociolinguistics, and computer science and is now commonly available as a consumer tool.

History

Social network analysis has its theoretical roots in the work of early sociologists such as Georg Simmel and Émile Durkheim, who wrote about the importance of studying patterns of relationships that connect social actors. Social scientists have used the concept of "social networks" since early in the 20th century to connote complex sets of relationships between members of social systems at all scales, from interpersonal to international. In the 1930s Jacob Moreno and Helen Jennings introduced basic analytical methods. In 1954, John Arundel Barnes started using the term systematically to denote patterns of ties, encompassing concepts traditionally used by the public and those used by social scientists: bounded groups (e.g., tribes, families) and social categories (e.g., gender, ethnicity). Scholars such as Ronald Burt, Kathleen Carley, Mark Granovetter, David Krackhardt, Edward Laumann, Anatol Rapoport, Barry Wellman, Douglas R. White, and Harrison White expanded the use of systematic social network analysis. Even in the study of literature, network analysis has been applied by Anheier, Gerhards and Romo, Wouter De Nooy, and Burgert Senekal. Indeed, social network analysis has found applications in various academic disciplines, as well as practical applications such as countering money laundering and terrorism.

Metrics

Hue (from red=0 to blue=max) indicates each node's betweenness centrality.

Connections

  • Homophily: The extent to which actors form ties with similar versus dissimilar others. Similarity can be defined by gender, race, age, occupation, educational achievement, status, values or any other salient characteristic. Homophily is also referred to as assortativity. (A few of the measures in this list are computed in the sketch after the list.)
  • Multiplexity: The number of content-forms contained in a tie. For example, two people who are friends and also work together would have a multiplexity of 2. Multiplexity has been associated with relationship strength.
  • Mutuality/Reciprocity: The extent to which two actors reciprocate each other's friendship or other interaction.
  • Network Closure: A measure of the completeness of relational triads. An individual's assumption of network closure (i.e. that their friends are also friends) is called transitivity. Transitivity is an outcome of the individual or situational trait of Need for Cognitive Closure.
  • Propinquity: The tendency for actors to have more ties with geographically close others.
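A few of these connection-level measures can be computed directly with the networkx library; the small directed graph and the occupation attribute below are invented for illustration.

```python
# Hedged sketch of some connection-level measures on a small invented graph:
# attribute assortativity (homophily), reciprocity, and transitivity
# (network closure), using networkx.
import networkx as nx

D = nx.DiGraph([("Ann", "Bob"), ("Bob", "Ann"), ("Bob", "Cat"),
                ("Cat", "Dan"), ("Dan", "Bob")])
nx.set_node_attributes(D, {"Ann": "law", "Bob": "law",
                           "Cat": "medicine", "Dan": "medicine"}, name="occupation")

print("homophily (assortativity):", nx.attribute_assortativity_coefficient(D, "occupation"))
print("reciprocity              :", nx.reciprocity(D))
print("transitivity (closure)   :", nx.transitivity(D.to_undirected()))
```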

Distributions

  • Bridge: An individual whose weak ties fill a structural hole, providing the only link between two individuals or clusters. A bridge also provides the shortest route when a longer one is unfeasible due to a high risk of message distortion or delivery failure.
  • Centrality: Centrality refers to a group of metrics that aim to quantify the "importance" or "influence" (in a variety of senses) of a particular node (or group) within a network. Examples of common methods of measuring "centrality" include betweenness centrality, closeness centrality, eigenvector centrality, alpha centrality, and degree centrality; several of these are computed in the sketch after this list.
  • Density: The proportion of direct ties in a network relative to the total number possible.
  • Distance: The minimum number of ties required to connect two particular actors, as popularized by Stanley Milgram's small world experiment and the idea of 'six degrees of separation'.
  • Structural holes: The absence of ties between two parts of a network. Finding and exploiting a structural hole can give an entrepreneur a competitive advantage. This concept was developed by sociologist Ronald Burt, and is sometimes referred to as an alternate conception of social capital.
  • Tie Strength: Defined by the linear combination of time, emotional intensity, intimacy and reciprocity (i.e. mutuality). Strong ties are associated with homophily, propinquity and transitivity, while weak ties are associated with bridges.
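Several of the distribution measures above (density, distance, and the common centrality indices) can be computed with networkx on a small invented friendship graph:

```python
# Hedged sketch of density, distance and centrality measures on a small
# illustrative friendship graph, using the networkx library.
import networkx as nx

G = nx.Graph([("Ann", "Bob"), ("Bob", "Cat"), ("Cat", "Ann"),
              ("Cat", "Dan"), ("Dan", "Eve"), ("Eve", "Fay"), ("Fay", "Dan")])

print("density            :", nx.density(G))
print("distance Ann-Eve   :", nx.shortest_path_length(G, "Ann", "Eve"))
print("degree centrality  :", nx.degree_centrality(G))
print("closeness          :", nx.closeness_centrality(G))
print("betweenness        :", nx.betweenness_centrality(G))
print("eigenvector        :", nx.eigenvector_centrality(G))
```

Here Cat and Dan sit on every path between the two triangles, so they score highest on betweenness, matching the bridge intuition above.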

Segmentation

Groups are identified as 'cliques' if every individual is directly tied to every other individual, 'social circles' if there is less stringency of direct contact, which is imprecise, or as structurally cohesive blocks if precision is wanted.
  • Clustering coefficient: A measure of the likelihood that two associates of a node are associates. A higher clustering coefficient indicates a greater 'cliquishness'. (See the sketch after this list.)
  • Cohesion: The degree to which actors are connected directly to each other by cohesive bonds. Structural cohesion refers to the minimum number of members who, if removed from a group, would disconnect the group.
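The segmentation measures can be sketched the same way; here structural cohesion is computed as vertex connectivity (the minimum number of nodes whose removal disconnects the graph), again on an invented graph.

```python
# Hedged sketch: clustering coefficients, structural cohesion (vertex
# connectivity) and maximal cliques on a small invented graph, using networkx.
import networkx as nx

G = nx.Graph([("Ann", "Bob"), ("Bob", "Cat"), ("Cat", "Ann"),
              ("Cat", "Dan"), ("Dan", "Eve"), ("Eve", "Cat")])

print("clustering per node :", nx.clustering(G))
print("average clustering  :", nx.average_clustering(G))
print("structural cohesion :", nx.node_connectivity(G))   # removing Cat disconnects the graph
print("maximal cliques     :", list(nx.find_cliques(G)))
```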

Modelling and visualization of networks

Visual representation of social networks is important to understand the network data and convey the result of the analysis. Numerous methods of visualization for data produced by social network analysis have been presented. Many of the analytic software have modules for network visualization. Exploration of the data is done through displaying nodes and ties in various layouts, and attributing colors, size and other advanced properties to nodes. Visual representations of networks may be a powerful method for conveying complex information, but care should be taken in interpreting node and graph properties from visual displays alone, as they may misrepresent structural properties better captured through quantitative analyses.

Signed graphs can be used to illustrate good and bad relationships between humans. A positive edge between two nodes denotes a positive relationship (friendship, alliance, dating) and a negative edge between two nodes denotes a negative relationship (hatred, anger). Signed social network graphs can be used to predict the future evolution of the graph. In signed social networks, there is the concept of "balanced" and "unbalanced" cycles. A balanced cycle is defined as a cycle where the product of all the signs is positive. According to balance theory, balanced graphs represent a group of people who are unlikely to change their opinions of the other people in the group. Unbalanced graphs represent a group of people who are very likely to change their opinions of the people in their group. For example, a group of 3 people (A, B, and C) where A and B have a positive relationship, B and C have a positive relationship, but C and A have a negative relationship is an unbalanced cycle. This group is very likely to morph into a balanced cycle, such as one where B only has a good relationship with A, and both A and B have a negative relationship with C. By using the concept of balanced and unbalanced cycles, the evolution of signed social network graphs can be predicted.
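Checking balance amounts to multiplying edge signs around a cycle; the sketch below encodes the three-person example from this paragraph.

```python
# Hedged sketch: a cycle in a signed graph is balanced if the product of its
# edge signs is positive. The edges encode the A/B/C example above.
signs = {("A", "B"): +1, ("B", "C"): +1, ("C", "A"): -1}

def cycle_sign(cycle):
    product = 1
    for u, v in zip(cycle, cycle[1:] + cycle[:1]):
        product *= signs.get((u, v), signs.get((v, u)))   # edges are undirected
    return product

cycle = ["A", "B", "C"]
print("balanced" if cycle_sign(cycle) > 0 else "unbalanced")   # -> unbalanced
```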

Especially when using social network analysis as a tool for facilitating change, different approaches of participatory network mapping have proven useful. Here, participants/interviewees provide network data by actually mapping out the network (with pen and paper or digitally) during the data collection session. An example of a pen-and-paper network mapping approach, which also includes the collection of some actor attributes (perceived influence and goals of actors), is the Net-map toolbox. One benefit of this approach is that it allows researchers to collect qualitative data and ask clarifying questions while the network data is collected.

Social networking potential

Social Networking Potential (SNP) is a numeric coefficient, derived through algorithms to represent both the size of an individual's social network and their ability to influence that network. SNP coefficients were first defined and used by Bob Gerstley in 2002. A closely related term is Alpha User, defined as a person with a high SNP. 

SNP coefficients have two primary functions:
  1. The classification of individuals based on their social networking potential, and
  2. The weighting of respondents in quantitative marketing research studies.
By calculating the SNP of respondents and by targeting high-SNP respondents, the strength and relevance of quantitative marketing research used to drive viral marketing strategies are enhanced.

Variables used to calculate an individual's SNP include, but are not limited to: participation in social networking activities, group memberships, leadership roles, recognition, publication/editing/contributing to non-electronic media, publication/editing/contributing to electronic media (websites, blogs), and frequency of past distribution of information within their network. The acronym "SNP" and some of the first algorithms developed to quantify an individual's social networking potential were described in the white paper "Advertising Research is Changing" (Gerstley, 2003). See viral marketing.
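Gerstley's actual algorithms are not given in the text; the Python sketch below only illustrates the general idea of folding such variables into a single coefficient, with made-up variable names and weights.

# Hypothetical sketch only: the published SNP algorithms are not reproduced here.
# Each indicator is assumed to be pre-scaled to [0, 1]; weights are invented.
SNP_WEIGHTS = {
    "participation": 0.25,   # participation in social networking activities
    "memberships": 0.15,     # group memberships
    "leadership": 0.20,      # leadership roles and recognition
    "publication": 0.15,     # contributions to electronic / non-electronic media
    "distribution": 0.25,    # frequency of past information distribution
}

def snp_score(indicators):
    # indicators: dict with the same keys as SNP_WEIGHTS, values in [0, 1]
    return sum(SNP_WEIGHTS[k] * indicators.get(k, 0.0) for k in SNP_WEIGHTS)

print(snp_score({"participation": 0.8, "memberships": 0.5, "leadership": 0.3,
                 "publication": 0.1, "distribution": 0.9}))  # 0.575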

The first book to discuss the commercial use of Alpha Users among mobile telecoms audiences was 3G Marketing by Ahonen, Kasper and Melkko in 2004. The first book to discuss Alpha Users more generally in the context of social marketing intelligence was Communities Dominate Brands by Ahonen & Moore in 2005. In 2012, Nicola Greco (UCL) presented the Social Networking Potential at TEDx as an analogue of the potential energy that users generate and that companies should use, stating that "SNP is the new asset that every company should aim to have".

Practical applications

Social network analysis is used extensively in a wide range of applications and disciplines. Some common network analysis applications include data aggregation and mining, network propagation modeling, network modeling and sampling, user attribute and behavior analysis, community-maintained resource support, location-based interaction analysis, social sharing and filtering, recommender systems development, and link prediction and entity resolution. In the private sector, businesses use social network analysis to support activities such as customer interaction and analysis, information system development analysis, marketing, and business intelligence needs. Some public sector uses include development of leader engagement strategies, analysis of individual and group engagement and media use, and community-based problem solving.

Security applications

Social network analysis is also used in intelligence, counter-intelligence and law enforcement activities. This technique allows analysts to map covert organizations such as an espionage ring, an organized crime family or a street gang. The National Security Agency (NSA) uses its clandestine mass electronic surveillance programs to generate the data needed to perform this type of analysis on terrorist cells and other networks deemed relevant to national security. The NSA traces connections up to three nodes deep during this network analysis. After the initial mapping of the social network is complete, analysis is performed to determine the structure of the network and to identify, for example, the leaders within the network. This allows military or law enforcement assets to launch capture-or-kill decapitation attacks on the high-value targets in leadership positions to disrupt the functioning of the network. The NSA has been performing social network analysis on call detail records (CDRs), also known as metadata, since shortly after the September 11 attacks.

Textual analysis applications

Large textual corpora can be turned into networks and then analyzed with the methods of social network analysis. In these networks, the nodes are social actors and the links are actions. The extraction of these networks can be automated by using parsers. The resulting networks, which can contain thousands of nodes, are then analyzed by using tools from network theory to identify the key actors, the key communities or parties, and general properties such as robustness or structural stability of the overall network, or centrality of certain nodes. This automates the approach introduced by Quantitative Narrative Analysis, whereby subject-verb-object triplets are identified with pairs of actors linked by an action, or pairs formed by actor-object.
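A minimal sketch of the triplet-to-network step in Python (assuming NetworkX; the actor names and verbs are invented examples): each subject-verb-object triple becomes a directed, action-labelled edge between two actors.

# Sketch (assuming NetworkX): turn subject-verb-object triples, e.g. as produced
# by a parser, into a directed network of actors linked by actions.
import networkx as nx

triples = [("ActorA", "criticizes", "ActorB"),   # invented example triples
           ("ActorB", "attacks", "ActorA"),
           ("ActorA", "addresses", "ActorC")]

G = nx.MultiDiGraph()                  # allows several actions between the same pair
for subj, verb, obj in triples:
    G.add_edge(subj, obj, action=verb)

# Key actors can then be ranked with standard centrality measures.
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]))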

Figure: Narrative network of the 2012 US elections.

Internet applications

Social network analysis has also been applied to understanding the online behavior of individuals and organizations, and the relationships between websites. Hyperlink analysis can be used to analyze the connections between websites or webpages to examine how information flows as individuals navigate the web. The connections between organizations have also been analyzed via hyperlink analysis to examine which organizations within an issue community link to one another.

Social media internet applications

Social network analysis has been applied to social media as a tool to understand behavior between individuals or organizations through their linkages on social media websites such as Twitter and Facebook.

In computer-supported collaborative learning

One of the more recent applications of SNA is the study of computer-supported collaborative learning (CSCL). When applied to CSCL, SNA is used to help understand how learners collaborate in terms of amount, frequency, and length, as well as the quality, topic, and strategies of communication. Additionally, SNA can focus on specific aspects of the network connections, or on the network as a whole. It uses graphical representations, written representations, and data representations to help examine the connections within a CSCL network. When applying SNA to a CSCL environment, the interactions of the participants are treated as a social network. The focus of the analysis is on the "connections" made among the participants – how they interact and communicate – as opposed to how each participant behaved on his or her own.

Key terms

There are several key terms associated with social network analysis research in computer-supported collaborative learning, such as density, centrality, in-degree, out-degree, and sociogram.
  • Density refers to the "connections" between participants. Density is defined as the number of connections a participant has, divided by the total possible connections a participant could have. For example, if there are 20 people participating, each person could potentially connect to 19 other people. A density of 100% (19/19) is the greatest density in the system, while a density of about 5% (1/19) indicates that only one of the 19 possible connections is present (see the sketch after this list).
  • Centrality focuses on the behavior of individual participants within a network. It measures the extent to which an individual interacts with other individuals in the network. The more an individual connects to others in a network, the greater their centrality in the network.
In-degree and out-degree variables are related to centrality.
  • In-degree centrality concentrates on a specific individual as the point of focus; it is based on the interactions that all other individuals direct toward that focal individual.
  • Out-degree centrality also focuses on a single individual, but is concerned with that individual's outgoing interactions; the measure of out-degree centrality is how many times the focal individual interacts with others.
  • A sociogram is a visualization with defined boundaries of connections in the network. For example, a sociogram which shows out-degree centrality points for Participant A would illustrate all outgoing connections Participant A made in the studied network.
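These quantities can be computed directly; the Python sketch below (assuming NetworkX, which the text does not mention) reproduces the 20-participant density example and the in-/out-degree measures on a toy directed network.

# Sketch (assuming NetworkX) of the key terms above on a toy directed network.
import networkx as nx

G = nx.DiGraph()
G.add_nodes_from(range(20))                 # 20 participants, as in the density example
G.add_edges_from([(0, 1), (0, 2), (3, 0)])  # a few directed interactions

# Per-participant density as defined above: connections held / 19 possible.
# NetworkX exposes exactly this ratio as degree centrality.
print(nx.degree_centrality(G)[0])           # 3/19 -> participant 0 touches 3 of the 19 others

# In-degree / out-degree centrality: incoming vs. outgoing interactions,
# again normalized by the 19 other participants.
print(nx.in_degree_centrality(G)[0])        # 1/19 -> one incoming tie (from participant 3)
print(nx.out_degree_centrality(G)[0])       # 2/19 -> two outgoing ties (to participants 1 and 2)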

Unique capabilities

Researchers employ social network analysis in the study of computer-supported collaborative learning in part due to the unique capabilities it offers. This particular method allows the study of interaction patterns within a networked learning community and can help illustrate the extent of the participants' interactions with the other members of the group. The graphics created using SNA tools provide visualizations of the connections among participants and the strategies used to communicate within the group. Some authors also suggest that SNA provides a method of easily analyzing changes in participatory patterns of members over time.

A number of research studies have applied SNA to CSCL across a variety of contexts. The findings include the correlation between a network's density and the teacher's presence, a greater regard for the recommendations of "central" participants, infrequency of cross-gender interaction in a network, and the relatively small role played by an instructor in an asynchronous learning network.

Other methods used alongside SNA

Although many studies have demonstrated the value of social network analysis within the computer-supported collaborative learning field, researchers have suggested that SNA by itself is not enough for achieving a full understanding of CSCL. The complexity of the interaction processes and the myriad sources of data make it difficult for SNA to provide an in-depth analysis of CSCL. Researchers indicate that SNA needs to be complemented with other methods of analysis to form a more accurate picture of collaborative learning experiences.

A number of research studies have combined other types of analysis with SNA in the study of CSCL. This can be referred to as a multi-method approach or data triangulation, which increases the reliability of evaluation in CSCL studies.
  • Qualitative method – The principles of qualitative case study research constitute a solid framework for the integration of SNA methods in the study of CSCL experiences.
    • Ethnographic data such as student questionnaires, interviews, and non-participant classroom observations
    • Case studies: comprehensively study particular CSCL situations and relate findings to general schemes
    • Content analysis: offers information about the content of the communication among members
  • Quantitative method – This includes simple descriptive statistical analyses of occurrences to identify attitudes of group members that cannot be tracked via SNA and to detect general tendencies.
    • Computer log files: provide automatic data on how collaborative tools are used by learners
    • Multidimensional scaling (MDS): charts similarities among actors, so that more similar input data is closer together
    • Software tools: QUEST, SAMSA (System for Adjacency Matrix and Sociogram-based Analysis), and NUD*IST

Inequality (mathematics)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Inequality...