
Tuesday, December 24, 2024

Streaming algorithm

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Streaming_algorithm

In computer science, streaming algorithms are algorithms for processing data streams in which the input is presented as a sequence of items and can be examined in only a few passes, typically just one. These algorithms are designed to operate with limited memory, generally logarithmic in the size of the stream and/or in the maximum value in the stream, and may also have limited processing time per item.

As a result of these constraints, streaming algorithms often produce approximate answers based on a summary or "sketch" of the data stream.

History

Though streaming algorithms had already been studied by Munro and Paterson as early as 1978, as well as Philippe Flajolet and G. Nigel Martin in 1982/83, the field of streaming algorithms was first formalized and popularized in a 1996 paper by Noga Alon, Yossi Matias, and Mario Szegedy. For this paper, the authors later won the Gödel Prize in 2005 "for their foundational contribution to streaming algorithms." There has since been a large body of work centered around data streaming algorithms that spans a diverse spectrum of computer science fields such as theory, databases, networking, and natural language processing.

Semi-streaming algorithms were introduced in 2005 as a relaxation of streaming algorithms for graphs, in which the space allowed is linear in the number of vertices n, but only logarithmic in the number of edges m. This relaxation is still meaningful for dense graphs, and can solve interesting problems (such as connectivity) that are insoluble in o(n) space.

Models

Data stream model

In the data stream model, some or all of the input is represented as a finite sequence of integers (from some finite domain) which is generally not available for random access, but instead arrives one at a time in a "stream". If the stream has length n and the domain has size m, algorithms are generally constrained to use space that is logarithmic in m and n. They can generally make only some small constant number of passes over the stream, sometimes just one.

Turnstile and cash register models

Much of the streaming literature is concerned with computing statistics on frequency distributions that are too large to be stored. For this class of problems, there is a vector a = (a1, ..., an) (initialized to the zero vector 0) that has updates presented to it in a stream. The goal of these algorithms is to compute functions of a using considerably less space than it would take to represent a precisely. There are two common models for updating such streams, called the "cash register" and "turnstile" models.

In the cash register model, each update is of the form ⟨i, c⟩, so that ai is incremented by some positive integer c. A notable special case is when c = 1 (only unit insertions are permitted).

In the turnstile model, each update is of the form ⟨i, c⟩, so that ai is incremented by some (possibly negative) integer c. In the "strict turnstile" model, no ai at any time may be less than zero.
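
To make the two update models concrete, here is a minimal Python sketch of their semantics on an explicit frequency vector (in practice the whole point is that the vector a is too large to store explicitly; this only illustrates how updates are interpreted):

    n = 5
    a = [0] * n                      # frequency vector, initialized to the zero vector

    def cash_register_update(a, i, c):
        """Cash register model: c must be a positive increment."""
        assert c > 0
        a[i] += c

    def turnstile_update(a, i, c, strict=False):
        """Turnstile model: c may be negative; the strict variant forbids negative entries."""
        a[i] += c
        if strict:
            assert a[i] >= 0, "strict turnstile violated"

    cash_register_update(a, 2, 3)    # stream update <2, 3>
    turnstile_update(a, 2, -1)       # stream update <2, -1>
    print(a)                         # [0, 0, 2, 0, 0]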

Sliding window model

Several papers also consider the "sliding window" model. In this model, the function of interest is computed over a fixed-size window in the stream. As the stream progresses, items from the end of the window are removed from consideration while new items from the stream take their place.
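
The following is a minimal Python sketch of maintaining a statistic (here, a running sum) over a fixed-size window, under the simplifying assumption that the whole window fits in memory; the sliding-window literature studies the harder case where even the window itself cannot be stored exactly:

    from collections import deque

    def sliding_window_sums(stream, window_size):
        """Yield the sum of the last `window_size` items after each arrival."""
        window = deque()
        total = 0
        for x in stream:
            window.append(x)
            total += x
            if len(window) > window_size:   # the oldest item leaves the window
                total -= window.popleft()
            yield total

    print(list(sliding_window_sums([1, 2, 3, 4, 5], window_size=3)))   # [1, 3, 6, 9, 12]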

Besides the above frequency-based problems, some other types of problems have also been studied. Many graph problems are solved in the setting where the adjacency matrix or the adjacency list of the graph is streamed in some unknown order. There are also some problems that are very dependent on the order of the stream (i.e., asymmetric functions), such as counting the number of inversions in a stream and finding the longest increasing subsequence.

Evaluation

The performance of an algorithm that operates on data streams is measured by three basic factors:

  • The number of passes the algorithm must make over the stream.
  • The available memory.
  • The running time of the algorithm.

These algorithms have many similarities with online algorithms since they both require decisions to be made before all data are available, but they are not identical. Data stream algorithms only have limited memory available but they may be able to defer action until a group of points arrive, while online algorithms are required to take action as soon as each point arrives.

If the algorithm is an approximation algorithm then the accuracy of the answer is another key factor. The accuracy is often stated as an (ε, δ) approximation, meaning that the algorithm achieves an error of less than ε with probability 1 − δ.

Applications

Streaming algorithms have several applications in networking such as monitoring network links for elephant flows, counting the number of distinct flows, estimating the distribution of flow sizes, and so on. They also have applications in databases, such as estimating the size of a join.

Some streaming problems

Frequency moments

The kth frequency moment of a set of frequencies a is defined as Fk(a) = Σi ai^k (the sum of the kth powers of the frequencies).

The first moment F1 is simply the sum of the frequencies (i.e., the total count). The second moment F2 is useful for computing statistical properties of the data, such as the Gini coefficient of variation. F∞ is defined as the frequency of the most frequent item(s).

The seminal paper of Alon, Matias, and Szegedy dealt with the problem of estimating the frequency moments.

Calculating frequency moments

A direct approach to finding the frequency moments requires maintaining a register mi for every distinct element ai ∈ (1, 2, 3, 4, ..., N), which requires memory of order at least Ω(N). But we have space limitations and require an algorithm that computes Fk in much lower memory. This can be achieved by using approximations instead of exact values: an algorithm computes an (ε,δ)-approximation F'k of Fk, where ε is the approximation parameter and δ is the confidence parameter.

Calculating F0 (distinct elements in a data stream)
FM-Sketch algorithm

Flajolet et al. introduced a probabilistic method of counting which was inspired by a paper by Robert Morris. Morris showed that if the requirement of accuracy is dropped, a counter n can be replaced by a counter log n, which can be stored in log log n bits. Flajolet et al. improved this method by using a hash function h which is assumed to uniformly distribute the elements in the hash space (a binary string of length L).

Let bit(y, k) represent the kth bit in the binary representation of y, so that y = Σk≥0 bit(y, k)·2^k.

Let ρ(y) represent the position of the least significant 1-bit in the binary representation of y, with a suitable convention for ρ(0) (for example, ρ(0) = L).

Let A be the data stream of length M whose cardinality needs to be determined, and let BITMAP[0...L − 1] be the hash space where the ρ(hashed values) are recorded. The algorithm below then determines the approximate cardinality of A.

Procedure FM-Sketch:

    for i in 0 to L − 1 do
        BITMAP[i] := 0
    end for
    for x in A do
        index := ρ(hash(x))
        if BITMAP[index] = 0 then
            BITMAP[index] := 1
        end if
    end for
    B := position of the leftmost 0 bit of BITMAP[]
    return 2 ^ B

If there are N distinct elements in a data stream:

  • For i ≫ log(N), BITMAP[i] is certainly 0.
  • For i ≪ log(N), BITMAP[i] is certainly 1.
  • For i ≈ log(N), BITMAP[i] is a fringe of 0's and 1's.
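
The following Python sketch mirrors the FM-Sketch pseudocode above. It is an illustration, not the original implementation: a salted use of Python's built-in hash stands in for the uniform hash function h, and the raw 2^B estimate is returned without the constant correction factor and bitmap averaging used in Flajolet and Martin's paper.

    import random

    def rho(y, L):
        """Position of the least significant 1-bit of y (with rho(0) := L)."""
        if y == 0:
            return L
        pos = 0
        while y & 1 == 0:
            y >>= 1
            pos += 1
        return pos

    def fm_sketch(stream, L=32, seed=0):
        """Approximate the number of distinct elements in `stream`."""
        salt = random.Random(seed).getrandbits(64)   # stand-in for a random hash function
        bitmap = [0] * L
        for x in stream:
            h = hash((salt, x)) & ((1 << L) - 1)     # hash x to an L-bit value
            bitmap[rho(h, L)] = 1
        B = next((i for i, b in enumerate(bitmap) if b == 0), L)   # leftmost 0 bit
        return 2 ** B

    stream = [i % 1000 for i in range(5000)]         # 1000 distinct values
    print(fm_sketch(stream))                         # rough estimate, up to a small constant factor
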
K-minimum value algorithm

The previous algorithm describes the first attempt to approximate F0 in the data stream by Flajolet and Martin. Their algorithm picks a random hash function which they assume to uniformly distribute the hash values in hash space.

Bar-Yossef et al. introduced the k-minimum value (KMV) algorithm for determining the number of distinct elements in a data stream. They used a similar hash function h, which can be normalized to [0, 1] as h : [m] → [0, 1], but fixed a limit t on the number of values kept from the hash space. The value of t is assumed to be of the order O(1/ε²) (i.e. a smaller approximation error ε requires a larger t). The KMV algorithm keeps only the t smallest hash values seen in the hash space. After all the m values of the stream have arrived, υ = Max(h(ai)) over the retained values is used to calculate F'0 = t/υ. That is, in a close-to-uniform hash space, they expect at least t elements to be smaller than O(t/F0).

Procedure 2: K-Minimum Value

    Initialize first t values of KMV
    for a in a1 to an do
        if h(a) < Max(KMV) then
            Remove Max(KMV) from KMV set
            Insert h(a) to KMV
        end if
    end for
    return t/Max(KMV)
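
A minimal Python sketch of the KMV idea follows; it assumes that a salted built-in hash normalized to [0, 1) is close enough to uniform for illustration, and uses a heap to keep the t smallest hash values seen so far.

    import heapq
    import random

    def kmv_distinct(stream, t=256, seed=0):
        """Estimate the number of distinct elements from the t smallest hash values."""
        salt = random.Random(seed).getrandbits(64)
        heap = []                                # max-heap via negated values
        kept = set()                             # hash values currently retained
        for a in stream:
            h = (hash((salt, a)) & 0xFFFFFFFF) / 2**32   # normalize to [0, 1)
            if h in kept:
                continue
            if len(heap) < t:
                heapq.heappush(heap, -h)
                kept.add(h)
            elif h < -heap[0]:                   # smaller than the current Max(KMV)
                kept.discard(-heapq.heappop(heap))
                heapq.heappush(heap, -h)
                kept.add(h)
        if len(heap) < t:
            return len(heap)                     # fewer than t distinct hashes: exact count
        return t / -heap[0]                      # F0 estimate: t / Max(KMV)

    stream = [i % 10000 for i in range(50000)]
    print(round(kmv_distinct(stream)))           # roughly 10000
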
Complexity analysis of KMV

The KMV algorithm can be implemented in O((1/ε²)·log(m)) memory bits. Each hash value requires space of order O(log(m)) memory bits, and there are hash values of the order O(1/ε²). The access time can be reduced if we store the t hash values in a binary tree, so the time complexity is reduced to O(log(1/ε)·log(m)).

Calculating Fk

Alon et al. estimate Fk by defining random variables that can be computed within the given space and time. The expected value of these random variables gives the approximate value of Fk.

Assume the length of the sequence m is known in advance. Then construct a random variable X as follows:

  • Select ap, a random member of the sequence A at index p, with ap = l ∈ (1, 2, 3, ..., n).
  • Let r = |{q : q ≥ p, aq = l}| represent the number of occurrences of l among the members of the sequence A at or after ap.
  • Define the random variable X = m(r^k − (r − 1)^k).

Assume S1 is of the order O(n^(1 − 1/k)/ε²) and S2 is of the order O(log(1/δ)). The algorithm takes S2 random variables Y1, Y2, ..., YS2 and outputs their median Y, where Yi is the average of Xij for 1 ≤ j ≤ S1.

Now calculate the expectation of the random variable, E(X); it works out to E(X) = Fk, so X is an unbiased estimator of Fk.
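
The following Python sketch illustrates this construction (a simplified single-pass illustration, not the authors' exact algorithm): each basic estimator picks a random position p in advance (the length m is assumed known), counts the occurrences r of ap from position p onward, and reports X = m(r^k − (r − 1)^k); the estimators are averaged in groups of S1 and the median of the S2 group averages is returned.

    import random
    import statistics

    def ams_fk(stream, k, s1=64, s2=9, seed=0):
        """Single-pass AMS-style estimate of F_k; len(stream) is assumed known."""
        rnd = random.Random(seed)
        m = len(stream)
        n_est = s1 * s2
        start = {}                                  # position -> estimators starting there
        for idx in range(n_est):
            start.setdefault(rnd.randrange(m), []).append(idx)
        count = [0] * n_est                         # r = occurrences of a_p at or after p
        active = {}                                 # stream value -> estimators tracking it
        for q, a in enumerate(stream):
            for idx in start.get(q, []):            # estimator idx starts at position q
                active.setdefault(a, []).append(idx)
            for idx in active.get(a, []):           # count this occurrence for its trackers
                count[idx] += 1
        xs = [m * (r ** k - (r - 1) ** k) for r in count]
        averages = [sum(xs[i * s1:(i + 1) * s1]) / s1 for i in range(s2)]
        return statistics.median(averages)          # median of the S2 group averages

    rng = random.Random(1)
    stream = [rng.randrange(50) for _ in range(10000)]
    print(ams_fk(stream, k=2))                      # estimate of F_2 (sum of squared frequencies)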

Complexity of Fk

From the algorithm to calculate Fk discussed above, we can see that each random variable X stores the values of ap and r. So, to compute X we need to maintain only log(n) bits for storing ap and log(n) bits for storing r. The total number of random variables X will be S1 · S2.

Hence the total space complexity the algorithm takes is of the order of O((k·log(1/δ)/ε²)·n^(1 − 1/k)·(log n + log m)).

Simpler approach to calculate F2

The previous algorithm calculates F2 in order of O(√n·(log m + log n)) memory bits. Alon et al. simplified this algorithm using a four-wise independent random variable with values mapped to {−1, 1}.

This further reduces the complexity of calculating F2 to O((log(1/δ)/ε²)·(log n + log m)).
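
A minimal Python illustration of this simpler F2 estimator follows (Python's hash with random salts stands in for a truly four-wise independent family, an assumption made purely for illustration): each basic estimator keeps a single counter Z = Σi s(ai) with random signs s(·) ∈ {−1, 1}, and E[Z²] = F2.

    import random
    import statistics

    def ams_f2(stream, s1=64, s2=9, seed=0):
        """Tug-of-war style estimate of F2 using random +/-1 signs per element value."""
        rnd = random.Random(seed)
        estimates = []
        for _ in range(s2):
            salts = [rnd.getrandbits(64) for _ in range(s1)]   # one sign function per estimator
            z = [0] * s1
            for a in stream:
                for j in range(s1):
                    sign = 1 if hash((salts[j], a)) & 1 else -1
                    z[j] += sign                               # Z accumulates the signs of arriving items
            estimates.append(sum(v * v for v in z) / s1)       # average of Z^2 over the group
        return statistics.median(estimates)

    rng = random.Random(1)
    stream = [rng.randrange(50) for _ in range(5000)]
    print(ams_f2(stream))        # close to the sum of squared frequencies of the 50 values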

Frequent elements

In the data stream model, the frequent elements problem is to output a set of elements that constitute more than some fixed fraction of the stream. A special case is the majority problem, which is to determine whether or not any value constitutes a majority of the stream.

More formally, fix some positive constant c > 1, let the length of the stream be m, and let fi denote the frequency of value i in the stream. The frequent elements problem is to output the set { i | fi > m/c }.

Some notable algorithms are:

  • Boyer–Moore majority vote algorithm
  • Misra–Gries heavy hitters algorithm
  • Lossy counting
  • Count-min sketch
  • Multi-stage Bloom filters
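
As one example, here is a minimal Python sketch of the Misra–Gries approach: with c − 1 counters, every element whose true frequency exceeds m/c is guaranteed to survive as a candidate (a second pass over the stream is needed to verify the exact counts).

    def misra_gries(stream, c):
        """Return candidate elements that may occur more than m/c times."""
        counters = {}                          # at most c - 1 tracked elements
        for x in stream:
            if x in counters:
                counters[x] += 1
            elif len(counters) < c - 1:
                counters[x] = 1
            else:
                for key in list(counters):     # decrement all; drop counters that hit zero
                    counters[key] -= 1
                    if counters[key] == 0:
                        del counters[key]
        return set(counters)

    stream = list("aababcabdaaeafaga")         # 'a' occurs 9 times out of 17
    print(misra_gries(stream, c=2))            # majority candidate: {'a'}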

Event detection

Detecting events in data streams is often done using a heavy hitters algorithm as listed above: the most frequent items and their frequency are determined using one of these algorithms, then the largest increase over the previous time point is reported as trend. This approach can be refined by using exponentially weighted moving averages and variance for normalization.

Counting distinct elements

Counting the number of distinct elements in a stream (sometimes called the F0 moment) is another problem that has been well studied. The first algorithm for it was proposed by Flajolet and Martin. In 2010, Daniel Kane, Jelani Nelson and David Woodruff found an asymptotically optimal algorithm for this problem. It uses O(1/ε² + log d) space, with O(1) worst-case update and reporting times, as well as universal hash functions and an r-wise independent hash family where r = Ω(log(1/ε) / log log(1/ε)).

Entropy

The (empirical) entropy of a set of frequencies a is defined as H(a) = −Σi (ai/m)·log(ai/m), where m = Σi ai.

Online learning

Learn a model (e.g. a classifier) by a single pass over a training set.
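
For instance, a single-pass online learner can be as simple as a perceptron that is updated once per arriving example (a generic illustration, not tied to any particular library):

    def train_perceptron(stream, n_features, lr=1.0):
        """Single pass over (features, label) pairs; labels are +1/-1."""
        w = [0.0] * n_features
        b = 0.0
        for x, y in stream:                    # each example is seen exactly once
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:                 # misclassified: update the model
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
        return w, b

    # Toy stream: points with large coordinates are labeled +1
    data = [([0.2, 0.1], -1), ([0.9, 0.8], 1), ([0.4, 0.3], -1), ([1.0, 0.9], 1)]
    print(train_perceptron(iter(data), n_features=2))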

Lower bounds

Lower bounds have been computed for many of the data streaming problems that have been studied. By far, the most common technique for computing these lower bounds has been using communication complexity.

Probabilistic programming

From Wikipedia, the free encyclopedia

Probabilistic programming (PP) is a programming paradigm in which probabilistic models are specified and inference for these models is performed automatically. It represents an attempt to unify probabilistic modeling and traditional general purpose programming in order to make the former easier and more widely applicable. It can be used to create systems that help make decisions in the face of uncertainty.

Programming languages used for probabilistic programming are referred to as "probabilistic programming languages" (PPLs).

Applications

Probabilistic reasoning has been used for a wide variety of tasks such as predicting stock prices, recommending movies, diagnosing computers, detecting cyber intrusions and image detection. However, until recently (partially due to limited computing power), probabilistic programming was limited in scope, and most inference algorithms had to be written manually for each task.

Nevertheless, in 2015, a 50-line probabilistic computer vision program was used to generate 3D models of human faces based on 2D images of those faces. The program used inverse graphics as the basis of its inference method, and was built using the Picture package in Julia. This made possible "in 50 lines of code what used to take thousands".

The Gen probabilistic programming library (also written in Julia) has been applied to vision and robotics tasks.

More recently, the probabilistic programming system Turing.jl has been applied in various pharmaceutical and economics applications.

Probabilistic programming in Julia has also been combined with differentiable programming by combining the Julia package Zygote.jl with Turing.jl. 

Probabilistic programming languages are also commonly used in Bayesian cognitive science to develop and evaluate models of cognition. 

Probabilistic programming languages

PPLs often extend from a basic language. For instance, Turing.jl is based on Julia, Infer.NET is based on .NET Framework, while PRISM extends from Prolog. However, some PPLs, such as WinBUGS, offer a self-contained language that maps closely to the mathematical representation of the statistical models, with no obvious origin in another programming language.

The language for WinBUGS was implemented to perform Bayesian computation using Gibbs Sampling and related algorithms. Although implemented in a relatively unknown programming language (Component Pascal), this language permits Bayesian inference for a wide variety of statistical models using a flexible computational approach. The same BUGS language may be used to specify Bayesian models for inference via different computational choices ("samplers") and conventions or defaults, using a standalone program WinBUGS (or related R packages, rbugs and r2winbugs) and JAGS (Just Another Gibbs Sampler, another standalone program with related R packages including rjags, R2jags, and runjags). More recently, other languages to support Bayesian model specification and inference allow different or more efficient choices for the underlying Bayesian computation, and are accessible from the R data analysis and programming environment, e.g.: Stan, NIMBLE and NUTS. The influence of the BUGS language is evident in these later languages, which even use the same syntax for some aspects of model specification.

Several PPLs are in active development, including some in beta test. Two popular tools are Stan and PyMC.
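
As a small illustration of the paradigm, here is a toy coin-bias model written against the PyMC API (assuming PyMC v5; the model itself is an example of ours, not one from the article): the model is specified declaratively and inference is then performed automatically by the library's samplers.

    import pymc as pm

    with pm.Model() as coin_model:
        # Prior: the unknown bias of a coin
        p = pm.Beta("p", alpha=1.0, beta=1.0)
        # Likelihood: 7 heads observed in 10 flips (toy data)
        heads = pm.Binomial("heads", n=10, p=p, observed=7)
        # Automatic posterior inference
        idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

    print(idata.posterior["p"].mean())   # posterior mean of the bias, roughly 0.67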

Relational

A probabilistic relational programming language (PRPL) is a PPL specially designed to describe and infer with probabilistic relational models (PRMs).

A PRM is usually developed with a set of algorithms for reducing, inference about and discovery of concerned distributions, which are embedded into the corresponding PRPL.

Probabilistic logic programming

Probabilistic logic programming is a programming paradigm that extends logic programming with probabilities.

Most approaches to probabilistic logic programming are based on the distribution semantics, which splits a program into a set of probabilistic facts and a logic program. It defines a probability distribution on interpretations of the Herbrand universe of the program.

List of probabilistic programming languages

This list summarises the variety of PPLs that are currently available, and clarifies their origins.

Name | Extends from | Host language
Analytica | | C++
bayesloop | Python | Python
Bean Machine | PyTorch | Python
Venture | Scheme | C++
BayesDB | SQLite, Python |
PRISM | B-Prolog |
Infer.NET | .NET Framework | .NET Framework
diff-SAT | Answer set programming, SAT (DIMACS CNF) |
PSQL | SQL |
BUGS | | Component Pascal
Dyna | Prolog |
Figaro | Scala | Scala
ProbLog | Prolog | Python
ProBT | | C++, Python
Stan | BUGS | C++
Hakaru | Haskell | Haskell
BAli-Phy (software) | Haskell | C++
ProbCog | | Java, Python
PyMC | Python | Python
Rainier | Scala | Scala
greta | TensorFlow | R
pomegranate | Python | Python
Lea | Python | Python
WebPPL | JavaScript | JavaScript
Picture | Julia | Julia
Turing.jl | Julia | Julia
Gen | Julia | Julia
Edward | TensorFlow | Python
TensorFlow Probability | TensorFlow | Python
Edward2 | TensorFlow Probability | Python
Pyro | PyTorch | Python
NumPyro | JAX | Python
Birch | | C++
PSI | | D
Blang | |
MultiVerse | Python | Python
Anglican | Clojure | Clojure

Difficulty

  • Reasoning about variables as probability distributions causes difficulties for novice programmers, but these difficulties can be addressed through use of Bayesian network visualizations and graphs of variable distributions embedded within the source code editor.
  • As many PPLs rely on the specification of priors on the variables of interest, specifying informed priors is often difficult for novices. In some cases, libraries such as PyMC provide automated methods to find the parameterization of informed priors.
Bounded rationality

    From Wikipedia, the free encyclopedia
    https://en.wikipedia.org/wiki/Bounded_rationality

    Bounded rationality is the idea that rationality is limited when individuals make decisions, and under these limitations, rational individuals will select a decision that is satisfactory rather than optimal.

    Limitations include the difficulty of the problem requiring a decision, the cognitive capability of the mind, and the time available to make the decision. Decision-makers, in this view, act as satisficers, seeking a satisfactory solution, with everything that they have at the moment rather than an optimal solution. Therefore, humans do not undertake a full cost-benefit analysis to determine the optimal decision, but rather, choose an option that fulfills their adequacy criteria.

    Some models of human behavior in the social sciences assume that humans can be reasonably approximated or described as rational entities, as in rational choice theory or Downs' political agency model. The concept of bounded rationality complements the idea of rationality as optimization, which views decision-making as a fully rational process of finding an optimal choice given the information available. Therefore, bounded rationality can be said to address the discrepancy between the assumed perfect rationality of human behaviour (which is utilised by other economics theories), and the reality of human cognition. In short, bounded rationality revises notions of perfect rationality to account for the fact that perfectly rational decisions are often not feasible in practice because of the intractability of natural decision problems and the finite computational resources available for making them. The concept of bounded rationality continues to influence (and be debated in) different disciplines, including political science, economics, psychology, law, philosophy, and cognitive science.

    Background and motivation

    The term bounded rationality was coined by Herbert A. Simon, who proposed it as an alternative basis for the mathematical and neoclassical economic modelling of decision-making, as used in economics, political science, and related disciplines. Many economics models assume that agents are on average rational and can, in large enough quantities, be approximated to act according to their preferences in order to maximise utility. With bounded rationality, Simon's goal was "to replace the global rationality of economic man with a kind of rational behavior that is compatible with the access to information and the computational capacities that are actually possessed by organisms, including man, in the kinds of environments in which such organisms exist." Soon after the term bounded rationality appeared, studies in the topic area began examining the issue in depth. A study completed by Allais in 1953 began to generate ideas of the irrationality of decision making: he found that, given preferences, individuals will not always choose the most rational decision, and therefore the concept of rationality was not always reliable in economic predictions.

    In Models of Man, Simon argues that most people are only partly rational, and are irrational in the remaining part of their actions. In another work, he states "boundedly rational agents experience limits in formulating and solving complex problems and in processing (receiving, storing, retrieving, transmitting) information". Simon used the analogy of a pair of scissors, where one blade represents "cognitive limitations" of actual humans and the other the "structures of the environment", illustrating how minds compensate for limited resources by exploiting known structural regularity in the environment.

    Simon describes a number of dimensions along which classical models of rationality can be made somewhat more realistic, while remaining within the vein of fairly rigorous formalization. These include:

    • limiting the types of utility functions
    • recognizing the costs of gathering and processing information
    • the possibility of having a vector or multi-valued utility function

    Simon suggests that economic agents use heuristics to make decisions rather than a strict rigid rule of optimization. They do this because of the complexity of the situation. An example of behaviour inhibited by heuristics can be seen when comparing the cognitive strategies utilised in simple situations (e.g. tic-tac-toe) with the strategies utilised in difficult situations (e.g. chess). Both games, as defined by game theory economics, are finite games with perfect information, and therefore equivalent. However, within chess, mental capacities and abilities are a binding constraint, so optimal choices are not a possibility. Thus, in order to test the mental limits of agents, complex problems, such as those within chess, should be studied to test how individuals work around their cognitive limits, and what behaviours or heuristics are used to form solutions.

    Anchoring and adjustment are types of heuristics that give some explanation for bounded rationality and for why decision makers do not make rational decisions. A study undertaken by Zenko et al. showed that the amount of physical activity completed by decision makers could be influenced by anchoring and adjustment: most decision makers would typically be considered irrational and would be unlikely to do the amount of physical activity instructed, and it was shown that these decision makers use anchoring and adjustment to decide how much exercise they will complete.

    Other heuristics that are closely related to the concept of bounded rationality include the availability heuristic and representativeness heuristic. The availability heuristic refers to how people tend to overestimate the likelihood of events that are easily brought to mind, such as vivid or recent experiences. This can lead to biased judgments based on incomplete or unrepresentative information. The representativeness heuristic states that people often judge the probability of an event based on how closely it resembles a typical or representative case, ignoring other relevant factors like base rates or sample size. These mental shortcuts and systematic errors in thinking demonstrate how people's decision-making abilities are limited and often deviate from perfect rationality.  

    Example

    An example of bounded rationality in individuals would be a customer who makes a suboptimal decision to order some food at a restaurant because they feel rushed by the waiter standing beside the table. Another example is a trader who makes a moderate and risky decision to trade their stock due to time pressure and imperfect information about the market at that time.

    In an organisational context, a CEO cannot make fully rational decisions in an ad-hoc situation because their cognition is overwhelmed by the amount of information in that tense situation. The CEO also needs time to process all the information given to them, but due to the limited time and the fast decision-making required, they will disregard some of the information in determining the decision.

    Bounded rationality can have significant effects on political decision-making, voter behavior, and policy outcomes. A prominent example of this is heuristic-based voting. According to the theory of bounded rationality, individuals have limited time, information, and cognitive resources to make decisions. In the context of voting, this means that most voters cannot realistically gather and process all available information about candidates, issues, and policies. Even if such information were available, the time and effort required to analyze it would be prohibitively high for many voters. As a result, voters often resort to heuristics, which allow voters to make decisions based on cues like party affiliation, candidate appearance, or single-issue positions, rather than engaging in a comprehensive evaluation of all relevant factors. For example, a voter who relies on the heuristic of party affiliation may vote for a candidate whose policies do not actually align with their interests, simply because the candidate belongs to their preferred party.  

    Model extensions

    As decision-makers have to make decisions about how and when to decide, Ariel Rubinstein proposed to model bounded rationality by explicitly specifying decision-making procedures, since decision-makers with the same information are not necessarily able to analyse the situation equally and thus reach the same rational decision. Rubinstein argues that consistency in reaching a final decision for the same level of information must factor in the decision-making procedure itself. This puts the study of decision procedures on the research agenda.

    Gerd Gigerenzer stated that decision theorists, to some extent, have not adhered to Simon's original ideas. Rather, they have considered how decisions may be crippled by limitations to rationality, or have modeled how people might cope with their inability to optimize. Gigerenzer proposes and shows that simple heuristics often lead to better decisions than theoretically optimal procedures. Moreover, Gigerenzer claimed, agents react relative to their environment and use their cognitive processes to adapt accordingly.

    Huw Dixon later argued that it may not be necessary to analyze in detail the process of reasoning underlying bounded rationality. If we believe that agents will choose an action that gets them close to the optimum, then we can use the notion of epsilon-optimization, which means we choose our actions so that the payoff is within epsilon of the optimum. If we define the optimum (best possible) payoff as U*, then the set of epsilon-optimizing options S(ε) can be defined as all those options s such that U(s) ≥ U* − ε.

    The notion of strict rationality is then a special case (ε=0). The advantage of this approach is that it avoids having to specify in detail the process of reasoning, but rather simply assumes that whatever the process is, it is good enough to get near to the optimum.
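
    As a toy illustration of epsilon-optimization (the payoffs below are hypothetical, not from the source), an agent accepts any option whose payoff is within ε of the best available:

        def epsilon_optimal(options, payoff, epsilon):
            """Return all options whose payoff is within epsilon of the optimum U*."""
            u_star = max(payoff(s) for s in options)      # U* = best possible payoff
            return [s for s in options if payoff(s) >= u_star - epsilon]

        payoffs = {"A": 10.0, "B": 9.6, "C": 7.0, "D": 9.9}         # hypothetical payoffs
        print(epsilon_optimal(payoffs, payoffs.get, epsilon=0.5))   # ['A', 'B', 'D']
        print(epsilon_optimal(payoffs, payoffs.get, epsilon=0.0))   # strict rationality: ['A']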

    From a computational point of view, decision procedures can be encoded in algorithms and heuristics. Edward Tsang argues that the effective rationality of an agent is determined by its computational intelligence. Everything else being equal, an agent that has better algorithms and heuristics could make more rational (closer to optimal) decisions than one that has poorer heuristics and algorithms.

    Tshilidzi Marwala and Evan Hurwitz in their study on bounded rationality observed that advances in technology (e.g. computer processing power because of Moore's law, artificial intelligence, and big data analytics) expand the bounds that define the feasible rationality space. Because of this expansion of the bounds of rationality, machine automated decision making makes markets more efficient.

    The model of bounded rationality also extends to bounded self-interest, in which humans are sometimes willing to forsake their own self-interests for the benefits of others due to incomplete information that the individuals have at the time being. This is something that had not been considered in earlier economic models.

    The theory of rational inattention, an extension of bounded rationality, studied by Christopher Sims, found that decisions may be chosen with incomplete information as opposed to affording the cost to receive complete information. This shows that decision makers choose to endure bounded rationality.

    On the other hand, another extension came from the notion of bounded rationality and was explained by Ulrich Hoffrage and Torsten Reimer in their studies of a "fast and frugal heuristic approach". The studies explained that complete information sometimes is not needed, as there are easier and simpler ways to reach the same optimal outcome. However, this approach, usually known as the gaze heuristic, was explained to be a theory for non-complex decision making only.

    Behavioral Economics

    Bounded rationality attempts to address assumption points discussed within neoclassical economics theory during the 1950s. This theory assumes that the complex problem, the way in which the problem is presented, all alternative choices, and a utility function are all provided to decision-makers in advance, which may not be realistic. This was widely used and accepted for a number of decades; however, economists realised some disadvantages exist in utilising this theory. This theory did not consider how problems are initially discovered by decision-makers, which could have an impact on the overall decision. Additionally, personal values, the way in which alternatives are discovered and created, and the environment surrounding the decision-making process are also not considered when using this theory. Alternatively, bounded rationality focuses on the cognitive ability of the decision-maker and the factors which may inhibit optimal decision-making. Additionally, by placing a focus on organisations rather than on markets as neoclassical economics theory does, bounded rationality is also the basis for many other economics theories (e.g. organisational theory), as it emphasises that the "...performance and success of an organisation is governed primarily by the psychological limitations of its members...", as stated by John D.W. Morecroft (1981).[27]

    One concept closely related to the idea of bounded rationality is nudging. The connection between nudging and bounded rationality lies in the fact that nudges are designed to help people overcome the cognitive limitations and biases that arise from their bounded rationality. Nudging involves designing choice architectures that guide people towards making better decisions without limiting their freedom of choice. The concept was popularized by Richard Thaler and Cass Sunstein in their 2008 book "Nudge: Improving Decisions About Health, Wealth, and Happiness." As nudging has become more popular in the last decade, governments around the world and nongovernmental organizations like the United Nations have established behavioral insights teams or incorporated nudging into their policy-making processes.

    One way nudges are used is with the aim of simplifying complex decisions by presenting information in a clear and easily understandable format, reducing the cognitive burden on individuals. Nudges can also be designed to counteract common heuristics and biases, such as the default bias (people's tendency to stick with the default option). For example, with adequate other policies in place, making posthumous organ donation the default option with an opt-out provision has been shown to increase actual donation rates. Moreover, in cases where the information needed to make an informed decision is incomplete, nudges can provide the relevant information. For instance, displaying the calorie content of menu items can help people make healthier food choices. Nudges can also guide people towards satisfactory options when they are unable or unwilling to invest the time and effort to find the optimal choice. For example, providing a limited set of well-designed investment options in a retirement plan can help people make better financial decisions.

    In economic models based on behavioral economics, implementing bounded rationality implies finding replacements for utility maximization and profit maximization as used in conventional general equilibrium models. Stock-flow consistent models (SFC) and agent-based models (ABM) often assume that agents follow a sequence of simple rule-of-thumb behaviors instead of an optimization procedure. Other dynamic models interpret bounded rationality as "looking for the direction of improvement", such that agents use a gradient-climbing approach to increase their utility.

    Principles of Boundedness

    In addition to bounded rationality, bounded willpower and bounded selfishness are two other key concepts in behavioral economics that challenge the traditional neoclassical economic assumption of perfectly rational, self-interested, and self-disciplined individuals. 

    Bounded willpower refers to the idea that people often have difficulty following through on their long-term plans and intentions due to limited self-control and the tendency to prioritize short-term desires. This can lead to problems like procrastination, impulsive spending, and unhealthy lifestyle choices. The concept of bounded willpower is closely related to the idea of hyperbolic discounting, which describes how people tend to value immediate rewards more highly than future ones, leading to inconsistent preferences over time.

    While traditional economic models assume that people are primarily motivated by self-interest, bounded selfishness suggests that people also have social preferences and care about factors such as fairness, reciprocity, and the well-being of others. This concept helps explain phenomena like charitable giving, cooperation in social dilemmas, and the existence of social norms. However, people's concern for others is often bounded in the sense that it is limited in scope and can be influenced by factors such as in-group favoritism and emotional distance.

    Together, these three concepts form the core of behavioral economics and have been used to develop more realistic models of human decision-making and behavior. By recognizing the limitations and biases that people face in their daily lives, behavioral economists aim to design policies, institutions, and choice architectures that can help people make better decisions and achieve their long-term goals.

    In psychology

    The collaborative works of Daniel Kahneman and Amos Tversky expand upon Herbert A. Simon's ideas in the attempt to create a map of bounded rationality. The research attempted to explore the choices made by what were assumed to be rational agents compared to the choices made by individuals' optimal beliefs and their satisficing behaviour. Kahneman notes that the research contributes mainly to the school of psychology, owing to the imprecision of psychological research when fitted to formal economic models; however, the theories are useful to economic theory as a way to expand simple and precise models and cover diverse psychological phenomena. Three major topics covered by the works of Daniel Kahneman and Amos Tversky are heuristics of judgement, risky choice, and the framing effect, which were a culmination of research that fits under what Herbert A. Simon defined as the psychology of bounded rationality. In contrast to the work of Simon, Kahneman and Tversky aimed to focus on the effects bounded rationality has on simple tasks, which therefore placed more emphasis on errors in cognitive mechanisms irrespective of the situation. The study undertaken by Kahneman found that emotions and the psychology of economic decisions play a larger role in the economics field than originally thought. The study focused on the emotions behind decision making, such as fear and personal likes and dislikes, and found these to be significant factors in economic decision making.

    Bounded rationality has also been shown to be useful in negotiation techniques, as shown in research undertaken by Dehai et al.: negotiations between labourers and companies over a higher wage for workers, carried out using bounded rationality techniques, were able to find an equitable solution for both parties.

    Influence on social network structure

    Recent research has shown that bounded rationality of individuals may influence the topology of the social networks that evolve among them. In particular, Kasthurirathna and Piraveenan have shown that in socio-ecological systems, the drive towards improved rationality on average might be an evolutionary reason for the emergence of scale-free properties. They did this by simulating a number of strategic games on an initially random network with distributed bounded rationality, then re-wiring the network so that the network on average converged towards Nash equilibria, despite the bounded rationality of nodes. They observed that this re-wiring process results in scale-free networks. Since scale-free networks are ubiquitous in social systems, the link between bounded rationality distributions and social structure is an important one in explaining social phenomena.

    Interactome

    From Wikipedia, the free encyclopedia
    https://en.wikipedia.org/wiki/Interactome

    In molecular biology, an ...