Wednesday, January 23, 2019

Agent-based model in biology

From Wikipedia, the free encyclopedia
 
Agent-based models have many applications in biology, primarily due to the characteristics of the modeling method. Agent-based modeling is a rule-based, computational modeling methodology that focuses on rules and interactions among the individual components or the agents of the system. The goal of this modeling method is to generate populations of the system components of interest and simulate their interactions in a virtual world. Agent-based models start with rules for behavior and seek to reconstruct, through computational instantiation of those behavioral rules, the observed patterns of behavior. Several of the characteristics of agent-based models important to biological studies include:
  1. Modular structure: The behavior of an agent-based model is defined by the rules of its agents. Existing agent rules can be modified or new agents can be added without having to modify the entire model.
  2. Emergent properties: Through individual agents that interact locally according to rules of behavior, agent-based models produce a synergy that leads to a higher-level whole with much more intricate behavior than that of any individual agent.
  3. Abstraction: Either by excluding non-essential details or when details are not available, agent-based models can be constructed in the absence of complete knowledge of the system under study. This allows the model to be as simple and verifiable as possible.
  4. Stochasticity: Biological systems exhibit behavior that appears to be random. The probability of a particular behavior can be determined for the system as a whole and then translated into rules for the individual agents (a brief sketch of this translation follows the list).
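
To make point 4 concrete, here is a minimal Python sketch of how a system-level probability might be translated into a per-agent stochastic rule. The 30% division rate and the CellAgent class are purely illustrative assumptions, not drawn from any of the studies summarized below.

```python
import random

# Hypothetical example: suppose 30% of cells in a population divide per day.
# That system-level rate is translated into a per-agent Bernoulli rule.
DIVISION_PROBABILITY = 0.30  # assumed, illustrative value

class CellAgent:
    def step(self):
        # Each agent independently "rolls the dice" each time step.
        return random.random() < DIVISION_PROBABILITY

population = [CellAgent() for _ in range(10_000)]
divisions = sum(agent.step() for agent in population)
print(f"{divisions} of {len(population)} agents divided this step "
      f"(expected ~{DIVISION_PROBABILITY:.0%}).")
```
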
Before the agent-based model can be developed, one must choose the appropriate software or modeling toolkit to be used. Madey and Nikolai provide an extensive list of toolkits in their paper "Tools of the Trade: A Survey of Various Agent Based Modeling Platforms". The paper seeks to provide users with a method of choosing a suitable toolkit by examining five characteristics across the spectrum of toolkits: the programming language required to create the model, the required operating system, availability of user support, the software license type, and the intended toolkit domain. Some of the more commonly used toolkits include Swarm, NetLogo, RePast, and Mason. Listed below are summaries of several articles describing agent-based models that have been employed in biological studies. The summaries will provide a description of the problem space, an overview of the agent-based model and the agents involved, and a brief discussion of the model results.

Forest insect infestations

In the paper titled "Exploring Forest Management Practices Using an Agent-Based Model of Forest Insect Infestations", an agent-based model was developed to simulate attack behavior of the mountain pine beetle, Dendroctonus ponderosae, (MPB) in order to evaluate how different harvesting policies influence spatial characteristics of the forest and spatial propagation of the MPB infestation over time. About two-thirds of the land in British Columbia, Canada is covered by forests that are constantly being modified by natural disturbances such as fire, disease, and insect infestation. Forest resources make up approximately 15% of the province’s economy, so infestations caused by insects such as the MPB can have significant impacts on the economy. The MPB outbreaks are considered a major natural disturbance that can result in widespread mortality of the lodgepole pine tree, one of the most abundant commercial tree species in British Columbia. Insect outbreaks have resulted in the death of trees over areas of several thousand square kilometers.

The agent-based model developed for this study was designed to simulate the MPB attack behavior in order to evaluate how management practices influence the spatial distribution and patterns of the insect population and their preferences for attacked and killed trees. Three management strategies were considered by the model: 1) no management, 2) sanitation harvest, and 3) salvage harvest. In the model, the Beetle Agent represented the MPB behavior; the Pine Agent represented the forest environment and tree health evolution; the Forest Management Agent represented the different management strategies. The Beetle Agent follows a series of rules to decide where to fly within the forest and to select a healthy tree to attack, feed, and breed. The MPB typically kills host trees in its natural environment in order to successfully reproduce. The beetle larvae feed on the inner bark of mature host trees, eventually killing them. In order for the beetles to reproduce, the host tree must be sufficiently large and have thick inner bark. The MPB outbreaks end when the food supply decreases to the point that there is not enough to sustain the population or when climatic conditions become unfavorable for the beetle. The Pine Agent simulates the resistance of the host tree, specifically the Lodgepole pine tree, and monitors the state and attributes of each stand of trees. At some point in the MPB attack, the number of beetles per tree reaches the host tree capacity. When this point is reached, the beetles release a chemical signal that directs other beetles to attack other trees. The Pine Agent models this behavior by calculating the beetle population density per stand and passing the information to the Beetle Agents. The Forest Management Agent was used, at the stand level, to simulate two common silviculture practices (sanitation and salvage) as well as the strategy where no management practice was employed. With the sanitation harvest strategy, if a stand has an infestation rate greater than a set threshold, the stand is removed, as is any healthy neighbor stand whose average tree size exceeds a set threshold. For the salvage harvest strategy, a stand is removed even if it is not under MPB attack when a predetermined number of neighboring stands are under MPB attack.
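
As a rough illustration of the kind of stand-level rules just described, the following Python sketch mimics stochastic beetle attacks and a sanitation-style harvest threshold. The class names, attack probability, harvest threshold, and counts are illustrative assumptions, not the rules or parameters of the published model.

```python
import random

# Highly simplified sketch of the stand-level dynamics described above.

class PineStand:
    def __init__(self):
        self.healthy_trees = 100
        self.infested_trees = 0

    @property
    def infestation_rate(self):
        total = self.healthy_trees + self.infested_trees
        return self.infested_trees / total if total else 0.0

class BeetleAgent:
    def attack(self, stand):
        # A beetle attacks a healthy tree; successful colonization is stochastic.
        if stand.healthy_trees > 0 and random.random() < 0.4:   # assumed rate
            stand.healthy_trees -= 1
            stand.infested_trees += 1

def sanitation_harvest(stands, threshold=0.3):
    # Remove any stand whose infestation rate exceeds the threshold
    # (removal of healthy neighbor stands is omitted in this sketch).
    return [s for s in stands if s.infestation_rate <= threshold]

stands = [PineStand() for _ in range(50)]
beetles = [BeetleAgent() for _ in range(1500)]
for year in range(5):                          # five one-year time steps
    if not stands:
        break
    for beetle in beetles:
        beetle.attack(random.choice(stands))
    stands = sanitation_harvest(stands)
print(f"{len(stands)} stands remain after the simulated period.")
```
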

The study considered a forested area in the North-Central Interior of British Columbia of approximately 560 hectares. The area consisted primarily of Lodgepole pine with smaller proportions of Douglas fir and White spruce. The model was executed for five time steps, each step representing a single year. Thirty simulation runs were conducted for each forest management strategy considered. The results of the simulation showed that the highest overall MPB infestation occurred when no management strategy was employed. The results also showed that the salvage forest management technique resulted in a 25% reduction in the number of forest stands killed by the MPB, as opposed to a 19% reduction by the sanitation forest management strategy. In summary, the results show that the model can be used as a tool to build forest management policies.

Invasive species

Invasive species refers to "non-native" plants and animals that adversely affect the environments they invade. The introduction of invasive species may have environmental, economic, and ecological implications. In the paper titled "An Agent-Based Model of Border Enforcement for Invasive Species Management", an agent-based model is presented that was developed to evaluate the impacts of port-specific and importer-specific enforcement regimes for a given agricultural commodity that presents invasive species risk. Ultimately, the goal of the study was to improve the allocation of enforcement resources and to provide a tool to policy makers to answer further questions concerning border enforcement and invasive species risk.

The agent-based model developed for the study considered three types of agents: invasive species, importers, and border enforcement agents. In the model, the invasive species can only react to their surroundings, while the importers and border enforcement agents are able to make their own decisions based on their own goals and objectives. The invasive species has the ability to determine if it has been released in an area containing the target crop, and to spread to adjacent plots of the target crop. The model incorporates spatial probability maps that are used to determine if an invasive species becomes established. The study focused on shipments of broccoli from Mexico into California through the ports of entry Calexico, California and Otay Mesa, California. The selected invasive species of concern was the crucifer flea beetle (Phyllotreta cruciferae). California is by far the largest producer of broccoli in the United States and so the concern and potential impact of an invasive species introduction through the chosen ports of entry is significant. The model also incorporated a spatially explicit damage function that was used to model invasive species damage in a realistic manner. Agent-based modeling provides the ability to analyze the behavior of heterogeneous actors, so three different types of importers were considered that differed in terms of commodity infection rates (high, medium, and low), pretreatment choice, and cost of transportation to the ports. The model gave predictions on inspection rates for each port of entry and importer and determined the success rate of border agent inspection, not only for each port and importer but also for each potential level of pretreatment (no pretreatment, level one, level two, and level three).
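
The importer behavior described above, a trade-off between pretreatment cost and the expected loss from inspection, can be sketched as a simple expected-cost rule. The Python fragment below is a hypothetical illustration; the cost figures, infestation rates, and decision rule are assumptions, not values from the study.

```python
# Illustrative sketch of an importer's pretreatment decision: pick the
# pretreatment level that minimizes expected cost given the inspection rate.
# All numbers are made-up assumptions for illustration only.

PRETREATMENT_COST = {0: 0.0, 1: 200.0, 2: 400.0, 3: 600.0}   # per shipment
INFESTATION_RATE = {0: 0.20, 1: 0.10, 2: 0.04, 3: 0.01}      # after treatment
REJECTION_LOSS = 5000.0                                       # lost shipment value

def choose_pretreatment(inspection_rate):
    """Pick the pretreatment level with the lowest expected total cost."""
    def expected_cost(level):
        p_caught = inspection_rate * INFESTATION_RATE[level]
        return PRETREATMENT_COST[level] + p_caught * REJECTION_LOSS
    return min(PRETREATMENT_COST, key=expected_cost)

for rate in (0.1, 0.3, 0.6, 0.9):
    print(f"inspection rate {rate:.0%} -> pretreatment level "
          f"{choose_pretreatment(rate)}")
```

As the inspection rate rises, the cheapest choice shifts toward higher pretreatment levels, which mirrors the qualitative behavior reported for the model.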

The model was implemented and run in NetLogo, version 3.1.5. Spatial information on the location of the ports of entry, major highways, and transportation routes was included in the analysis, as well as a map of California broccoli crops layered with invasive species establishment probability maps. BehaviorSpace, a software tool integrated with NetLogo, was used to test the effects of different parameters (e.g. shipment value, pretreatment cost) in the model. On average, 100 iterations were calculated at each level of the parameter being varied, where an iteration represented a one-year run.

The results of the model showed that as inspection effort increases, importers exercise greater due care, or pre-treat shipments, and the total monetary loss to California crops decreases. The model showed that importers respond to an increase in inspection effort in different ways: some responded to an increased inspection rate by increasing pre-treatment effort, while others chose to avoid shipping to a specific port or shopped for another port. An important result is that the model can provide recommendations to policy makers about the point at which importers may start to shop for ports, such as the inspection rate at which port shopping begins and which importers, given their level of pest risk or transportation cost, are likely to make these changes. Another interesting outcome of the model is that when inspectors were not able to learn to respond to an importer with previously infested shipments, damage to California broccoli crops was estimated to be $150 million. However, when inspectors were able to increase inspection rates of importers with previous violations, damage to the California broccoli crops was reduced by approximately 12%. The model provides a mechanism to predict the introduction of invasive species from agricultural imports and their likely damage. Equally important, the model provides policy makers and border control agencies with a tool that can be used to determine the best allocation of inspection resources.

Aphid population dynamics

In the article titled "Aphid Population Dynamics in Agricultural Landscapes: An Agent-based Simulation Model", an agent-based model is presented to study the population dynamics of the bird cherry-oat aphid, Rhopalosiphum padi (L.). The study was conducted in a five square kilometer region of North Yorkshire, a county located in the Yorkshire and the Humber region of England. The agent-based modeling method was chosen because of its focus on the behavior of the individual agents rather than the population as a whole. The authors propose that traditional models that focus on populations as a whole do not take into account the complexity of the concurrent interactions in ecosystems, such as reproduction and competition for resources which may have significant impacts on population trends. The agent-based modeling approach also allows modelers to create more generic and modular models that are more flexible and easier to maintain than modeling approaches that focus on the population as a whole. Other proposed advantages of agent-based models include realistic representation of a phenomenon of interest due to the interactions of a group of autonomous agents, and the capability to integrate quantitative variables, differential equations, and rule based behavior into the same model.

The model was implemented in the modeling toolkit Repast using the Java programming language. The model was run in daily time steps and focused on the autumn and winter seasons. Input data for the model included habitat data; daily minimum, maximum, and mean temperatures; and wind speed and direction. For the Aphid agents, age, position, and morphology (alate or apterous) were considered. Age ranged from 0.00 to 2.00, with 1.00 being the point at which the agent becomes an adult. Reproduction by the Aphid agents is dependent on age, morphology, and daily minimum, maximum, and mean temperatures. Once nymphs hatch, they remain in the same location as their parents. The morphology of the nymphs is related to population density and the nutrient quality of the aphid's food source. The model also considered mortality among the Aphid agents, which is dependent on age, temperature, and quality of habitat. The speed at which an Aphid agent ages is determined by the daily minimum, maximum, and mean temperatures. The model considered movement of the Aphid agents to occur in two separate phases, a migratory phase and a foraging phase, both of which affect the overall population distribution.
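
A minimal Python sketch of an aphid agent step along the lines described above (age scale 0.0–2.0, temperature-dependent ageing and mortality, adult-only reproduction) is shown below. The rate constants and functional forms are illustrative assumptions, not the published model's parameters.

```python
import random

# Minimal sketch of an aphid agent; constants are illustrative assumptions.

class AphidAgent:
    def __init__(self, morphology="apterous", age=0.0):
        self.age = age                # 1.0 = adulthood, 2.0 = end of life
        self.morphology = morphology  # "alate" (winged) or "apterous"
        self.alive = True

    def step(self, mean_temp_c):
        # Ageing speed rises with temperature (assumed linear relation).
        self.age += max(0.0, 0.01 + 0.002 * mean_temp_c)
        # Mortality depends on age and temperature (assumed probabilities).
        if self.age > 2.0 or (mean_temp_c < 0 and random.random() < 0.05):
            self.alive = False
            return []
        # Only adults reproduce; offspring stay in the parent's location.
        if self.age >= 1.0 and random.random() < 0.3:
            return [AphidAgent(morphology="apterous")]
        return []

# Found the population with adult alate immigrants (illustrative).
population = [AphidAgent("alate", age=1.0) for _ in range(100)]
for day, temp in enumerate([12, 10, 8, 5, 2, -1, -2]):   # daily mean temps
    offspring = [child for a in population if a.alive for child in a.step(temp)]
    population = [a for a in population if a.alive] + offspring
print(f"Population after {day + 1} days: {len(population)}")
```
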

The study started the simulation run with an initial population of 10,000 alate aphids distributed across a grid of 25 meter cells. The simulation results showed that there were two major population peaks, the first in early autumn due to an influx of alate immigrants and the second due to lower temperatures later in the year and a lack of immigrants. Ultimately, it is the goal of the researchers to adapt this model to simulate broader ecosystems and animal types.

Aquatic population dynamics

In the article titled "Exploring Multi-Agent Systems In Aquatic Population Dynamics Modeling", a model is proposed to study the population dynamics of two species of macrophytes. Aquatic plants play a vital role in the ecosystems in which they live as they may provide shelter and food for other aquatic organisms. However, they may also have harmful impacts such as the excessive growth of non-native plants or eutrophication of the lakes in which they live leading to anoxic conditions. Given these possibilities, it is important to understand how the environment and other organisms affect the growth of these aquatic plants to allow mitigation or prevention of these harmful impacts.

Potamogeton pectinatus is one of the aquatic plant agents in the model. It is an annual growth plant that absorbs nutrients from the soil and reproduces through root tubers and rhizomes. Reproduction of the plant is not impacted by water flow, but can be influenced by animals, other plants, and humans. The plant can grow up to two meters tall, which is a limiting condition because it can only grow in certain water depths, and most of its biomass is found at the top of the plant in order to capture the most sunlight possible. The second plant agent in the model is Chara aspera, also a rooted aquatic plant. One major difference between the two plants is that the latter reproduces through very small seeds called oospores and bulbils, which are spread via the flow of water. Chara aspera only grows up to 20 cm and requires very good light conditions as well as good water quality, all of which are limiting factors on the growth of the plant. Chara aspera has a higher growth rate than Potamogeton pectinatus but has a much shorter life span. The model also considered environmental and animal agents. Environmental agents considered included water flow, light penetration, and water depth. Flow conditions, although not of high importance to Potamogeton pectinatus, directly impact the seed dispersal of Chara aspera. Flow conditions affect the direction as well as the distance the seeds will be distributed. Light penetration strongly influences Chara aspera as it requires high water quality. Extinction coefficient (EC) is a measure of light penetration in water. As EC increases, the growth rate of Chara aspera decreases. Finally, depth is important to both species of plants. As water depth increases, the light penetration decreases, making it difficult for either species to survive beyond certain depths.
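
One way to encode the light dependence described above is Beer–Lambert attenuation of surface light with depth, combined with species-specific light requirements. The sketch below is a hypothetical illustration; the minimum-light thresholds and growth rates are assumptions, not parameters of the published model.

```python
import math

# Available light decays exponentially with depth via the extinction
# coefficient; Chara aspera is assumed to need more light than
# Potamogeton pectinatus. All parameter values are illustrative.

def light_at_depth(surface_light, extinction_coefficient, depth_m):
    return surface_light * math.exp(-extinction_coefficient * depth_m)

def growth_rate(species, surface_light, ec, depth_m):
    light = light_at_depth(surface_light, ec, depth_m)
    min_light = {"chara_aspera": 0.5, "potamogeton_pectinatus": 0.2}  # assumed
    max_rate = {"chara_aspera": 0.12, "potamogeton_pectinatus": 0.08}  # assumed
    if light < min_light[species]:
        return 0.0
    return max_rate[species] * (light / surface_light)

for ec in (0.2, 0.8, 2.0):
    chara = growth_rate("chara_aspera", 1.0, ec, 1.0)
    pota = growth_rate("potamogeton_pectinatus", 1.0, ec, 1.0)
    print(f"EC={ec}: Chara aspera {chara:.3f}, Potamogeton pectinatus {pota:.3f}")
```

The output shows the qualitative pattern described in the text: growth of Chara aspera collapses first as the extinction coefficient increases.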

The area of interest in the model was a lake in the Netherlands named Lake Veluwe. It is a relatively shallow lake with an average depth of 1.55 meters and covers about 30 square kilometers. The lake is under eutrophication stress, which means that nutrients are not a limiting factor for either of the plant agents in the model. The initial position of the plant agents in the model was randomly determined. The model was implemented using the Repast software package and was executed to simulate the growth and decay of the two different plant agents, taking into account the environmental agents previously discussed as well as interactions with other plant agents. The results of the model execution show that the population distribution of Chara aspera has a spatial pattern very similar to the GIS maps of observed distributions. The authors of the study conclude that the agent rules developed in the study are reasonable to simulate the spatial pattern of macrophyte growth in this particular lake.

Bacteria aggregation leading to biofilm formation

In the article titled "iDynoMiCS: next-generation individual-based modelling of biofilms", an agent-based model is presented that models the colonisation of bacteria onto a surface, leading to the formation of biofilms. The purpose of iDynoMiCS (standing for individual-based Dynamics of Microbial Communities Simulator) is to simulate the growth of populations and communities of individual microbes (small unicellular organisms such as bacteria, archaea and protists) that compete for space and resources in biofilms immersed in aquatic environments. iDynoMiCS can be used to seek to understand how individual microbial dynamics lead to emergent population- or biofilm-level properties and behaviours. Examining such formations is important in soil and river studies, dental hygiene studies, infectious disease and medical implant related infection research, and for understanding biocorrosion. An agent-based modelling paradigm was employed to make it possible to explore how each individual bacterium, of a particular species, contributes to the development of the biofilm. The initial illustration of iDynoMiCS considered how environmentally fluctuating oxygen availability affects the diversity and composition of a community of denitrifying bacteria that induce the denitrification pathway under anoxic or low oxygen conditions. The study explores the hypothesis that the existence of diverse strategies of denitrification in an environment can be explained by solely assuming that faster response incurs a higher cost. The agent-based model suggests that if metabolic pathways can be switched without cost the faster the switching the better. However, where faster switching incurs a higher cost, there is a strategy with optimal response time for any frequency of environmental fluctuations. This suggests that different types of denitrifying strategies win in different biological environments. Since this introduction the applications of iDynoMiCS continues to increase: a recent exploration of the plasmid invasion in biofilms being one example. This study explored the hypothesis that poor plasmid spread in biofilms is caused by a dependence of conjugation on the growth rate of the plasmid donor agent. Through simulation, the paper suggests that plasmid invasion into a resident biofilm is only limited when plasmid transfer depends on growth. Sensitivity analysis techniques were employed that suggests parameters relating to timing (lag before plasmid transfer between agents) and spatial reach are more important for plasmid invasion into a biofilm than the receiving agents growth rate or probability of segregational loss. Further examples that use iDynoMiCS continue to be published, including use of iDynoMiCS in modelling of a Pseudomonas aeruginosa biofilm with glucose substrate.

iDynoMiCS has been developed by an international team of researchers in order to provide a common platform for further development of all individual-based models of microbial biofilms and the like. The model was originally the result of years of work by Laurent Lardon, Brian Merkey, and Jan-Ulrich Kreft, with code contributions from Joao Xavier. With additional funding from the National Centre for Replacement, Refinement, and Reduction of Animals in Research (NC3Rs) in 2013, the development of iDynoMiCS as a tool for biological exploration continues apace, with new features being added when appropriate. From its inception, the team have committed to releasing iDynoMiCS as an open source platform, encouraging collaborators to develop additional functionality that can then be merged into the next stable release. iDynoMiCS has been implemented in the Java programming language, with MATLAB and R scripts provided to analyse results. Biofilm structures that are formed in simulation can be viewed as a movie using POV-Ray files that are generated as the simulation is run.

Mammary stem cell enrichment following irradiation during puberty

Experiments have shown that exposure to ionizing irradiation of pubertal mammary glands results in an increase in the ratio of mammary stem cells in the gland. This is important because stem cells are thought to be key targets for cancer initiation by ionizing radiation because they have the greatest long-term proliferative potential and mutagenic events persist in multiple daughter cells. Additionally, epidemiology data show that children exposed to ionizing radiation have a substantially greater breast cancer risk than adults. These experiments thus prompted questions about the underlying mechanism for the increase in mammary stem cells following radiation. In this research article titled "Irradiation of Juvenile, but not Adult, Mammary Gland Increases Stem Cell Self-Renewal and Estrogen Receptor Negative Tumors", two agent-based models were developed and were used in parallel with in vivo and in vitro experiments to evaluate cell inactivation, dedifferentiation via epithelial-mesenchymal transition (EMT), and self-renewal (symmetric division) as mechanisms by which radiation could increase stem cells. 

The first agent-based model is a multiscale model of mammary gland development, starting with a rudimentary mammary ductal tree at the onset of puberty (during active proliferation) all the way to a full mammary gland at adulthood (when there is little proliferation). The model consists of millions of agents, with each agent representing a mammary stem cell, a progenitor cell, or a differentiated cell in the breast. Simulations were first run on the Lawrence Berkeley National Laboratory Lawrencium supercomputer to parameterize and benchmark the model against a variety of in vivo mammary gland measurements. The model was then used to test the three different mechanisms to determine which one led to simulation results that best matched the in vivo experiments. Surprisingly, radiation-induced cell inactivation by death did not contribute to increased stem cell frequency in the model, independently of the dose delivered. Instead, the model revealed that the combination of increased self-renewal and cell proliferation during puberty led to stem cell enrichment. In contrast, epithelial-mesenchymal transition in the model was shown to increase stem cell frequency not only in pubertal mammary glands but also in adult glands. This latter prediction, however, contradicted the in vivo data: irradiation of adult mammary glands did not lead to increased stem cell frequency. These simulations therefore suggested self-renewal as the primary mechanism behind the pubertal stem cell increase.
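
The self-renewal mechanism can be caricatured with a small Python sketch in which a dividing stem cell either self-renews symmetrically (producing two stem cells) or divides asymmetrically (producing one stem cell and one progenitor). The probabilities, cell counts, and number of divisions are illustrative assumptions, not parameters of the published multiscale model.

```python
import random

# Minimal sketch: stem cell enrichment requires both elevated self-renewal
# and ongoing proliferation. All numbers are illustrative assumptions.

def simulate(p_self_renew, divisions_per_cell=5, n_stem=100, n_other=900):
    stem, other = n_stem, n_other
    for _ in range(divisions_per_cell):
        symmetric = sum(1 for _ in range(stem) if random.random() < p_self_renew)
        asymmetric = stem - symmetric
        stem += symmetric          # symmetric division: +1 stem cell
        other += asymmetric        # asymmetric division: +1 progenitor cell
    return stem / (stem + other)

baseline = simulate(p_self_renew=0.10)
elevated = simulate(p_self_renew=0.25)   # assumed radiation-induced increase
print(f"stem cell fraction: baseline {baseline:.2%}, "
      f"elevated self-renewal {elevated:.2%}")
```

With no proliferation (zero divisions) the fraction cannot change, which echoes the model's finding that enrichment appears only during the actively proliferating pubertal gland.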

To further evaluate self-renewal as the mechanism, a second agent-based model was created to simulate the growth dynamics of human mammary epithelial cells (containing stem/progenitor and differentiated cell subpopulations) in vitro after irradiation. By comparing the simulation results with data from the in vitro experiments, the second agent-based model further confirmed that cells must extensively proliferate to observe a self-renewal dependent increase in stem/progenitor cell numbers after irradiation.

The combination of the two agent-based models and the in vitro/in vivo experiments provide insight into why children exposed to ionizing radiation have a substantially greater breast cancer risk than adults. Together, they support the hypothesis that the breast is susceptible to a transient increase in stem cell self-renewal when exposed to radiation during puberty, which primes the adult tissue to develop cancer decades later.

Agent-based model

From Wikipedia, the free encyclopedia
 
An agent-based model (ABM) is a class of computational models for simulating the actions and interactions of autonomous agents (both individual and collective entities, such as organizations or groups) with a view to assessing their effects on the system as a whole. It combines elements of game theory, complex systems, emergence, computational sociology, multi-agent systems, and evolutionary programming. Monte Carlo methods are used to introduce randomness. Particularly within ecology, ABMs are also called individual-based models (IBMs), and individuals within IBMs may be simpler than fully autonomous agents within ABMs. A review of recent literature on individual-based models, agent-based models, and multiagent systems shows that ABMs are used in non-computing-related scientific domains including biology, ecology and social science. Agent-based modeling is related to, but distinct from, the concept of multi-agent systems or multi-agent simulation in that the goal of ABM is to search for explanatory insight into the collective behavior of agents obeying simple rules, typically in natural systems, rather than in designing agents or solving specific practical or engineering problems.
 
Agent-based models are a kind of microscale model that simulate the simultaneous operations and interactions of multiple agents in an attempt to re-create and predict the appearance of complex phenomena. The process is one of emergence from the lower (micro) level of systems to a higher (macro) level. As such, a key notion is that simple behavioral rules generate complex behavior. This principle, known as K.I.S.S. ("Keep it simple, stupid"), is extensively adopted in the modeling community. Another central tenet is that the whole is greater than the sum of the parts. Individual agents are typically characterized as boundedly rational, presumed to be acting in what they perceive as their own interests, such as reproduction, economic benefit, or social status, using heuristics or simple decision-making rules. ABM agents may experience "learning", adaptation, and reproduction.

Most agent-based models are composed of: (1) numerous agents specified at various scales (typically referred to as agent-granularity); (2) decision-making heuristics; (3) learning rules or adaptive processes; (4) an interaction topology; and (5) an environment. ABMs are typically implemented as computer simulations, either as custom software, or via ABM toolkits, and this software can be then used to test how changes in individual behaviors will affect the system's emerging overall behavior.
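
A minimal, generic skeleton showing how those five ingredients might fit together in code is sketched below; every name, rule, and parameter is an illustrative placeholder rather than a reference implementation.

```python
import random

# (1) agents, (2) decision heuristic, (3) learning rule,
# (4) interaction topology, (5) environment -- all placeholders.

class Agent:
    def __init__(self, idx):
        self.idx = idx
        self.cooperativeness = random.random()   # internal state

    def decide(self, neighbor):
        # (2) decision heuristic: cooperate if the neighbor looks cooperative.
        return neighbor.cooperativeness > 0.5

    def learn(self, payoff):
        # (3) simple adaptive rule: nudge state toward what paid off.
        self.cooperativeness += 0.01 * (payoff - 0.5)
        self.cooperativeness = min(1.0, max(0.0, self.cooperativeness))

class Environment:
    # (5) environment: here just a payoff rule; could hold space or resources.
    @staticmethod
    def payoff(cooperated):
        return 1.0 if cooperated else 0.0

# (1) many agents arranged on (4) a ring-network interaction topology.
agents = [Agent(i) for i in range(100)]
env = Environment()
for step in range(50):
    for i, agent in enumerate(agents):
        neighbor = agents[(i + 1) % len(agents)]
        acted = agent.decide(neighbor)
        agent.learn(env.payoff(acted))
print("mean cooperativeness:",
      sum(a.cooperativeness for a in agents) / len(agents))
```

Running the skeleton and observing how the aggregate statistic changes is exactly the kind of "change individual behavior, watch the system-level outcome" experiment described above.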

History

The idea of agent-based modeling was developed as a relatively simple concept in the late 1940s. Since it requires computation-intensive procedures, it did not become widespread until the 1990s.

Early developments

The history of the agent-based model can be traced back to the Von Neumann machine, a theoretical machine capable of reproduction. The device von Neumann proposed would follow precisely detailed instructions to fashion a copy of itself. The concept was then built upon by von Neumann's friend Stanislaw Ulam, also a mathematician; Ulam suggested that the machine be built on paper, as a collection of cells on a grid. The idea intrigued von Neumann, who drew it up—creating the first of the devices later termed cellular automata. Another advance was introduced by the mathematician John Conway. He constructed the well-known Game of Life. Unlike von Neumann's machine, Conway's Game of Life operated by tremendously simple rules in a virtual world in the form of a 2-dimensional checkerboard.

1970s and 1980s: the first models

One of the earliest agent-based models in concept was Thomas Schelling's segregation model, which was discussed in his paper "Dynamic Models of Segregation" in 1971. Though Schelling originally used coins and graph paper rather than computers, his models embodied the basic concept of agent-based models as autonomous agents interacting in a shared environment with an observed aggregate, emergent outcome. 
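
Schelling's rule is simple enough to sketch directly: an agent is unhappy when fewer than a threshold fraction of its neighbors share its type, and unhappy agents relocate to empty cells. The grid size, threshold, and vacancy rate below are illustrative assumptions.

```python
import random

# Minimal sketch of a Schelling-style segregation model on a toroidal grid.

SIZE, THRESHOLD, EMPTY = 20, 0.3, 0.1

def neighbors(grid, x, y):
    return [grid[(x + dx) % SIZE][(y + dy) % SIZE]
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def unhappy(grid, x, y):
    me = grid[x][y]
    if me is None:
        return False
    occupied = [n for n in neighbors(grid, x, y) if n is not None]
    if not occupied:
        return False
    return sum(n == me for n in occupied) / len(occupied) < THRESHOLD

grid = [[None if random.random() < EMPTY else random.choice("AB")
         for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(20):                                 # rounds of relocation
    movers = [(x, y) for x in range(SIZE) for y in range(SIZE)
              if unhappy(grid, x, y)]
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE)
               if grid[x][y] is None]
    random.shuffle(empties)
    for (x, y), (ex, ey) in zip(movers, empties):
        grid[ex][ey], grid[x][y] = grid[x][y], None
print("unhappy agents remaining:",
      sum(unhappy(grid, x, y) for x in range(SIZE) for y in range(SIZE)))
```

Even with a modest individual tolerance threshold, the grid tends toward visibly clustered neighborhoods, which is the emergent aggregate outcome Schelling demonstrated with coins and graph paper.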

In the early 1980s, Robert Axelrod hosted a tournament of Prisoner's Dilemma strategies and had them interact in an agent-based manner to determine a winner. Axelrod would go on to develop many other agent-based models in the field of political science that examine phenomena from ethnocentrism to the dissemination of culture. By the late 1980s, Craig Reynolds' work on flocking models contributed to the development of some of the first biological agent-based models that contained social characteristics. He tried to model the reality of lively biological agents, known as artificial life, a term coined by Christopher Langton.
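
The structure of such a tournament can be sketched as a round-robin of iterated Prisoner's Dilemma strategies using the standard payoff matrix. The particular entrants and match length below are illustrative, and this tiny field does not reproduce the outcome of Axelrod's actual tournaments.

```python
import itertools

# Round-robin iterated Prisoner's Dilemma with the standard payoff matrix.
# Entrants and round count are illustrative choices.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_hist, their_hist): return "D"
def always_cooperate(my_hist, their_hist): return "C"
def tit_for_tat(my_hist, their_hist): return their_hist[-1] if their_hist else "C"

def play(s1, s2, rounds=200):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

strategies = {"always_defect": always_defect,
              "always_cooperate": always_cooperate,
              "tit_for_tat": tit_for_tat}
totals = {name: 0 for name in strategies}
for (n1, s1), (n2, s2) in itertools.combinations(strategies.items(), 2):
    a, b = play(s1, s2)
    totals[n1] += a; totals[n2] += b
print(sorted(totals.items(), key=lambda kv: -kv[1]))
```
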

The first use of the word "agent" and a definition as it is currently used today is hard to track down. One candidate appears to be John Holland and John H. Miller's 1991 paper "Artificial Adaptive Agents in Economic Theory", based on an earlier conference presentation of theirs.

At the same time, during the 1980s, social scientists, mathematicians, operations researchers, and a scattering of people from other disciplines developed Computational and Mathematical Organization Theory (CMOT). This field grew as a special interest group of The Institute of Management Sciences (TIMS) and its sister society, the Operations Research Society of America (ORSA).

1990s: expansion

With the appearance of StarLogo in 1990, Swarm and NetLogo in the mid-1990s, RePast and AnyLogic in 2000, and GAMA in 2007, as well as some custom-designed code, modelling software became widely available, and the range of domains to which ABM was applied grew. Bonabeau (2002) provides a good survey of the potential of agent-based modeling as of that time.

The 1990s were especially notable for the expansion of ABM within the social sciences. One notable effort was the large-scale ABM Sugarscape, developed by Joshua M. Epstein and Robert Axtell to simulate and explore the role of social phenomena such as seasonal migrations, pollution, sexual reproduction, combat, transmission of disease, and even culture. Other notable 1990s developments included an ABM by Carnegie Mellon University's Kathleen Carley to explore the co-evolution of social networks and culture. During this time frame, Nigel Gilbert published the first textbook on social simulation, Simulation for the Social Scientist (1999), and established a journal from the perspective of the social sciences: the Journal of Artificial Societies and Social Simulation (JASSS). Other than JASSS, agent-based models of any discipline are within the scope of the SpringerOpen journal Complex Adaptive Systems Modeling (CASM).

Through the mid-1990s, the social sciences thread of ABM began to focus on such issues as designing effective teams, understanding the communication required for organizational effectiveness, and the behavior of social networks. CMOT—later renamed Computational Analysis of Social and Organizational Systems (CASOS)—incorporated more and more agent-based modeling. Samuelson (2000) is a good brief overview of the early history, and Samuelson (2005) and Samuelson and Macal (2006) trace the more recent developments.

In the late 1990s, the merger of TIMS and ORSA to form INFORMS, and the move by INFORMS from two meetings each year to one, helped to spur the CMOT group to form a separate society, the North American Association for Computational Social and Organizational Sciences (NAACSOS). Kathleen Carley was a major contributor, especially to models of social networks, obtaining National Science Foundation funding for the annual conference and serving as the first President of NAACSOS. She was succeeded by David Sallach of the University of Chicago and Argonne National Laboratory, and then by Michael Prietula of Emory University. At about the same time NAACSOS began, the European Social Simulation Association (ESSA) and the Pacific Asian Association for Agent-Based Approach in Social Systems Science (PAAA), counterparts of NAACSOS, were organized. As of 2013, these three organizations collaborate internationally. The First World Congress on Social Simulation was held under their joint sponsorship in Kyoto, Japan, in August 2006. The Second World Congress was held in the northern Virginia suburbs of Washington, D.C., in July 2008, with George Mason University taking the lead role in local arrangements.

2000s and later

More recently, Ron Sun developed methods for basing agent-based simulation on models of human cognition, known as cognitive social simulation. Bill McKelvey, Suzanne Lohmann, Dario Nardi, Dwight Read and others at UCLA have also made significant contributions in organizational behavior and decision-making. Since 2001, UCLA has arranged a conference at Lake Arrowhead, California, that has become another major gathering point for practitioners in this field. In 2014, Sadegh Asgari of Columbia University and his colleagues developed an agent-based model of competitive bidding in construction. While the model was used to analyze low-bid, lump-sum construction bids, it could be applied to other bidding methods with little modification.

Theory

Most computational modeling research describes systems in equilibrium or as moving between equilibria. Agent-based modeling, however, using simple rules, can result in different sorts of complex and interesting behavior. The three ideas central to agent-based models are agents as objects, emergence, and complexity.

Agent-based models consist of dynamically interacting rule-based agents. The systems within which they interact can create real-world-like complexity. Typically agents are situated in space and time and reside in networks or in lattice-like neighborhoods. The location of the agents and their responsive behavior are encoded in algorithmic form in computer programs. In some cases, though not always, the agents may be considered as intelligent and purposeful. In ecological ABM (often referred to as "individual-based models" in ecology), agents may, for example, be trees in a forest, and would not be considered intelligent, although they may be "purposeful" in the sense of optimizing access to a resource (such as water). The modeling process is best described as inductive. The modeler makes those assumptions thought most relevant to the situation at hand and then watches phenomena emerge from the agents' interactions. Sometimes that result is an equilibrium. Sometimes it is an emergent pattern. Sometimes, however, it is an unintelligible mangle.

In some ways, agent-based models complement traditional analytic methods. Where analytic methods enable humans to characterize the equilibria of a system, agent-based models allow the possibility of generating those equilibria. This generative contribution may be the most mainstream of the potential benefits of agent-based modeling. Agent-based models can explain the emergence of higher-order patterns—network structures of terrorist organizations and the Internet, power-law distributions in the sizes of traffic jams, wars, and stock-market crashes, and social segregation that persists despite populations of tolerant people. Agent-based models also can be used to identify lever points, defined as moments in time in which interventions have extreme consequences, and to distinguish among types of path dependency.

Rather than focusing on stable states, many models consider a system's robustness—the ways that complex systems adapt to internal and external pressures so as to maintain their functionalities. The task of harnessing that complexity requires consideration of the agents themselves—their diversity, connectedness, and level of interactions.

Framework

Recent work on the modeling and simulation of complex adaptive systems has demonstrated the need for combining agent-based and complex network-based models. One proposed framework consists of four levels for developing models of complex adaptive systems, described using several example multidisciplinary case studies:
  1. Complex Network Modeling Level for developing models using interaction data of various system components.
  2. Exploratory Agent-based Modeling Level for developing agent-based models for assessing the feasibility of further research. This can be useful, for example, for developing proof-of-concept models such as for funding applications, without requiring an extensive learning curve for the researchers.
  3. Descriptive Agent-based Modeling (DREAM) for developing descriptions of agent-based models by means of using templates and complex network-based models. Building DREAM models allows model comparison across scientific disciplines.
  4. Validated agent-based modeling using Virtual Overlay Multiagent system (VOMAS) for the development of verified and validated models in a formal manner.
Other methods of describing agent-based models include code templates and text-based methods such as the ODD (Overview, Design concepts, and Design Details) protocol.

The role of the environment where agents live, both macro and micro, is also becoming an important factor in agent-based modelling and simulation work. A simple environment affords simple agents, but complex environments generate diverse behavior.

Applications

In biology

Agent-based modeling has been used extensively in biology, including the analysis of the spread of epidemics, and the threat of biowarfare, biological applications including population dynamics, vegetation ecology, landscape diversity, the growth and decline of ancient civilizations, evolution of ethnocentric behavior, forced displacement/migration, language choice dynamics, cognitive modeling, and biomedical applications including modeling 3D breast tissue formation/morphogenesis, the effects of ionizing radiation on mammary stem cell sub-population dynamics, inflammation, and the human immune system. Agent-based models have also been used for developing decision support systems such as for breast cancer. Agent-based models are increasingly being used to model pharmacological systems in early stage and pre-clinical research to aid in drug development and gain insights into biological systems that would not be possible a priori. Military applications have also been evaluated. Moreover, agent-based models have been recently employed to study molecular-level biological systems.

In business, technology and network theory

Agent-based models have been used since the mid-1990s to solve a variety of business and technology problems. Examples of applications include the modeling of organizational behavior and cognition, team working, supply chain optimization and logistics, modeling of consumer behavior, including word of mouth, social network effects, distributed computing, workforce management, and portfolio management. They have also been used to analyze traffic congestion.

Recently, agent-based modelling and simulation has been applied to various domains such as studying the impact of publication venues by researchers in the computer science domain (journals versus conferences). In addition, ABMs have been used to simulate information delivery in ambient assisted environments. A November 2016 article on arXiv analyzed an agent-based simulation of posts spreading in the Facebook online social network. In the domain of peer-to-peer, ad hoc and other self-organizing and complex networks, the usefulness of agent-based modeling and simulation has been shown. The use of a computer science-based formal specification framework coupled with wireless sensor networks and an agent-based simulation has recently been demonstrated.

Agent-based evolutionary search algorithms are a newer research topic for solving complex optimization problems.

In economics and social sciences

Screenshot of the graphical user interface of an agent-based modeling tool.
 
Prior to, and in the wake of the financial crisis, interest has grown in ABMs as possible tools for economic analysis. ABMs do not assume the economy can achieve equilibrium and "representative agents" are replaced by agents with diverse, dynamic, and interdependent behavior including herding. ABMs take a "bottom-up" approach and can generate extremely complex and volatile simulated economies. ABMs can represent unstable systems with crashes and booms that develop out of non-linear (disproportionate) responses to proportionally small changes. A July 2010 article in The Economist looked at ABMs as alternatives to DSGE models. The journal Nature also encouraged agent-based modeling with an editorial that suggested ABMs can do a better job of representing financial markets and other economic complexities than standard models along with an essay by J. Doyne Farmer and Duncan Foley that argued ABMs could fulfill both the desires of Keynes to represent a complex economy and of Robert Lucas to construct models based on microfoundations. Farmer and Foley pointed to progress that has been made using ABMs to model parts of an economy, but argued for the creation of a very large model that incorporates low level models. By modeling a complex system of analysts based on three distinct behavioral profiles – imitating, anti-imitating, and indifferent – financial markets were simulated to high accuracy. Results showed a correlation between network morphology and the stock market index.

Since the beginning of the 21st century ABMs have been deployed in architecture and urban planning to evaluate design and to simulate pedestrian flow in the urban environment. There is also a growing field of socio-economic analysis of infrastructure investment impact using ABM's ability to discern systemic impacts upon a socio-economic network.

Organizational ABM: agent-directed simulation

The agent-directed simulation (ADS) metaphor distinguishes between two categories, namely "Systems for Agents" and "Agents for Systems." Systems for Agents (sometimes referred to as agents systems) are systems implementing agents for the use in engineering, human and social dynamics, military applications, and others. Agents for Systems are divided in two subcategories. Agent-supported systems deal with the use of agents as a support facility to enable computer assistance in problem solving or enhancing cognitive capabilities. Agent-based systems focus on the use of agents for the generation of model behavior in a system evaluation (system studies and analyses).

Implementation

Much agent-based modeling software is designed for serial von Neumann computer architectures. This limits the speed and scalability of these systems. A recent development is the use of data-parallel algorithms on graphics processing units (GPUs) for ABM simulation. The extreme memory bandwidth combined with the sheer number-crunching power of multi-processor GPUs has enabled simulation of millions of agents at tens of frames per second.

Verification and validation

Verification and validation (V&V) of simulation models is extremely important. Verification involves the model being debugged to ensure it works correctly, whereas validation ensures that the right model has been built. Face validation, sensitivity analysis, calibration and statistical validation have also been demonstrated. A discrete-event simulation framework approach for the validation of agent-based systems has been proposed. A comprehensive resource on empirical validation of agent-based models can be found here.

As an example of V&V technique, consider VOMAS (virtual overlay multi-agent system), a software engineering based approach, where a virtual overlay multi-agent system is developed alongside the agent-based model. The agents in the multi-agent system are able to gather data by generation of logs as well as provide run-time validation and verification support by watch agents and also agents to check any violation of invariants at run-time. These are set by the Simulation Specialist with help from the SME (subject-matter expert). Muazi et al. also provide an example of using VOMAS for verification and validation of a forest fire simulation model.

VOMAS provides a formal way of validation and verification. To develop a VOMAS, one must design VOMAS agents along with the agents in the actual simulation, preferably from the start. In essence, by the time the simulation model is complete, it can be considered to be one model containing two models:
  1. An agent-based model of the intended system
  2. An agent-based model of the VOMAS
Unlike all previous work on verification and validation, VOMAS agents ensure that the simulations are validated in-simulation, i.e., during execution. In case of any exceptional situations, which are programmed on the directive of the simulation specialist (SS), the VOMAS agents can report them. In addition, the VOMAS agents can be used to log key events for the sake of debugging and subsequent analysis of simulations. In other words, VOMAS allows for flexible use of any given technique for the verification and validation of an agent-based model in any domain.
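
A minimal sketch of the watch-agent idea, assuming a toy model and a single invariant chosen purely for illustration, is shown below; the class names and logging format are not part of any VOMAS specification.

```python
# Sketch of an overlay watch agent that observes a simulation each step,
# logs key events, and reports violations of an invariant set by the
# simulation specialist. Everything here is an illustrative placeholder.

class WatchAgent:
    def __init__(self, invariant, description):
        self.invariant = invariant          # callable: model -> bool
        self.description = description
        self.log = []

    def observe(self, model, step):
        self.log.append((step, model.population))   # event logging
        if not self.invariant(model):
            self.log.append((step, f"VIOLATION: {self.description}"))

class ToyModel:
    def __init__(self):
        self.population = 100

    def step(self):
        self.population -= 7                 # some simulated dynamics

model = ToyModel()
watcher = WatchAgent(lambda m: m.population >= 0,
                     "population must never be negative")
for t in range(20):
    model.step()
    watcher.observe(model, t)
violations = [entry for entry in watcher.log if isinstance(entry[1], str)]
print(f"{len(violations)} invariant violation(s) recorded; first: "
      f"{violations[0] if violations else None}")
```
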

Details of validated agent-based modeling using VOMAS, along with several case studies, are given in a thesis, which also details "exploratory agent-based modeling", "descriptive agent-based modeling" and "validated agent-based modeling", using several worked case study examples.

Complex systems modelling

Mathematical models of complex systems are of three types: black-box (phenomenological), white-box (mechanistic, based on first principles) and grey-box (mixtures of phenomenological and mechanistic models).

In black-box models, the individual-based (mechanistic) mechanisms of a complex dynamic system remain hidden.

Mathematical models for complex systems
 
Black-box models are completely non-mechanistic. They are phenomenological and ignore the composition and internal structure of a complex system; we cannot investigate the interactions of the subsystems of such a non-transparent model. A white-box model of a complex dynamic system has 'transparent walls' and directly shows the underlying mechanisms. All events at the micro-, meso- and macro-levels of a dynamic system are directly visible at all stages of its white-box model's evolution. In most cases, mathematical modelers use heavy black-box mathematical methods, which cannot produce mechanistic models of complex dynamic systems. Grey-box models are intermediate and combine black-box and white-box approaches.

Logical deterministic individual-based cellular automata model of single species population growth
 
Creating a white-box model of a complex system requires a priori basic knowledge of the subject being modeled. Deterministic logical cellular automata are a necessary but not sufficient condition for a white-box model. The second necessary prerequisite of a white-box model is the presence of a physical ontology of the object under study. White-box modeling represents an automatic hyper-logical inference from first principles because it is based entirely on deterministic logic and the axiomatic theory of the subject. The purpose of white-box modeling is to derive from the basic axioms more detailed, more concrete mechanistic knowledge about the dynamics of the object under study. The necessity of formulating an intrinsic axiomatic system of the subject before creating its white-box model distinguishes cellular automata models of the white-box type from cellular automata models based on arbitrary logical rules. If cellular automata rules have not been formulated from the first principles of the subject, then such a model may have weak relevance to the real problem.
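
A minimal sketch of a deterministic, logical cellular automaton for single-species population growth, in the spirit described above and in the caption before this section, is shown below; the colonization rule, neighborhood, and grid size are illustrative assumptions.

```python
# Deterministic, logical cellular automaton sketch: each cell is empty (0)
# or occupied (1), and an empty cell becomes occupied if at least one of its
# four von Neumann neighbors is occupied. Rule and grid are illustrative.

SIZE = 21

def step(grid):
    new = [row[:] for row in grid]
    for x in range(SIZE):
        for y in range(SIZE):
            if grid[x][y] == 0:
                neighbors = [grid[(x - 1) % SIZE][y], grid[(x + 1) % SIZE][y],
                             grid[x][(y - 1) % SIZE], grid[x][(y + 1) % SIZE]]
                if any(neighbors):            # colonization from a neighbor
                    new[x][y] = 1
    return new

grid = [[0] * SIZE for _ in range(SIZE)]
grid[SIZE // 2][SIZE // 2] = 1                 # a single founding individual
for generation in range(10):
    grid = step(grid)
    print(f"generation {generation + 1}: "
          f"{sum(map(sum, grid))} occupied cells")
```

Because every rule is deterministic and stated explicitly, the mechanism behind the resulting growth curve is fully visible at every step, which is what distinguishes this kind of model from a black-box fit to the same curve.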

Money

From Wikipedia, the free encyclopedia

A sample picture of a fictional ATM card. The largest part of the world's money exists only as accounting numbers which are transferred between financial computers. Various plastic cards and other devices give individual consumers the power to electronically transfer such money to and from their bank accounts, without the use of currency.
 
Money is any item or verifiable record that is generally accepted as payment for goods and services and repayment of debts, such as taxes, in a particular country or socio-economic context. The main functions of money are distinguished as: a medium of exchange, a unit of account, a store of value and sometimes, a standard of deferred payment. Any item or verifiable record that fulfils these functions can be considered as money. 

Money is historically an emergent market phenomenon establishing a commodity money, but nearly all contemporary money systems are based on fiat money. Fiat money, like any check or note of debt, is without use value as a physical commodity. It derives its value by being declared by a government to be legal tender; that is, it must be accepted as a form of payment within the boundaries of the country, for "all debts, public and private". Counterfeit money can cause good money to lose its value. 

The money supply of a country consists of currency (banknotes and coins) and, depending on the particular definition used, one or more types of bank money (the balances held in checking accounts, savings accounts, and other types of bank accounts). Bank money, which consists only of records (mostly computerized in modern banking), forms by far the largest part of broad money in developed countries.

Etymology

The word "money" is believed to originate from a temple of Juno, on Capitoline, one of Rome's seven hills. In the ancient world Juno was often associated with money. The temple of Juno Moneta at Rome was the place where the mint of Ancient Rome was located. The name "Juno" may derive from the Etruscan goddess Uni (which means "the one", "unique", "unit", "union", "united") and "Moneta" either from the Latin word "monere" (remind, warn, or instruct) or the Greek word "moneres" (alone, unique). 

In the Western world, a prevalent term for coin-money has been specie, stemming from Latin in specie, meaning 'in kind'.

History

A 640 BC one-third stater electrum coin from Lydia
 
The use of barter-like methods may date back to at least 100,000 years ago, though there is no evidence of a society or economy that relied primarily on barter. Instead, non-monetary societies operated largely along the principles of gift economy and debt. When barter did in fact occur, it was usually between either complete strangers or potential enemies.

Many cultures around the world eventually developed the use of commodity money. The Mesopotamian shekel was a unit of weight, and relied on the mass of something like 160 grains of barley. The first usage of the term came from Mesopotamia circa 3000 BC. Societies in the Americas, Asia, Africa and Australia used shell money – often, the shells of the cowry (Cypraea moneta L. or C. annulus L.). According to Herodotus, the Lydians were the first people to introduce the use of gold and silver coins. It is thought by modern scholars that these first stamped coins were minted around 650–600 BC.

Song Dynasty Jiaozi, the world's earliest paper money
 
The system of commodity money eventually evolved into a system of representative money. This occurred because gold and silver merchants or banks would issue receipts to their depositors – redeemable for the commodity money deposited. Eventually, these receipts became generally accepted as a means of payment and were used as money. Paper money or banknotes were first used in China during the Song dynasty. These banknotes, known as "jiaozi", evolved from promissory notes that had been used since the 7th century. However, they did not displace commodity money, and were used alongside coins. In the 13th century, paper money became known in Europe through the accounts of travelers, such as Marco Polo and William of Rubruck. Marco Polo's account of paper money during the Yuan dynasty is the subject of a chapter of his book, The Travels of Marco Polo, titled "How the Great Kaan Causeth the Bark of Trees, Made Into Something Like Paper, to Pass for Money All Over his Country." Banknotes were first issued in Europe by Stockholms Banco in 1661, and were again also used alongside coins. The gold standard, a monetary system where the medium of exchange are paper notes that are convertible into pre-set, fixed quantities of gold, replaced the use of gold coins as currency in the 17th–19th centuries in Europe. These gold standard notes were made legal tender, and redemption into gold coins was discouraged. By the beginning of the 20th century almost all countries had adopted the gold standard, backing their legal tender notes with fixed amounts of gold. 

After World War II and the Bretton Woods Conference, most countries adopted fiat currencies that were fixed to the U.S. dollar. The U.S. dollar was in turn fixed to gold. In 1971 the U.S. government suspended the convertibility of the U.S. dollar to gold. After this many countries de-pegged their currencies from the U.S. dollar, and most of the world's currencies became unbacked by anything except the governments' fiat of legal tender and the ability to convert the money into goods via payment. According to proponents of modern money theory, fiat money is also backed by taxes. By imposing taxes, states create demand for the currency they issue.

Functions

In Money and the Mechanism of Exchange (1875), William Stanley Jevons famously analyzed money in terms of four functions: a medium of exchange, a common measure of value (or unit of account), a standard of value (or standard of deferred payment), and a store of value. By 1919, Jevons's four functions of money were summarized in the couplet:
Money's a matter of functions four,
A Medium, a Measure, a Standard, a Store.
This couplet would later become widely popular in macroeconomics textbooks. Most modern textbooks now list only three functions, that of medium of exchange, unit of account, and store of value, not considering a standard of deferred payment as a distinguished function, but rather subsuming it in the others.

There have been many historical disputes regarding the combination of money's functions, some arguing that they need more separation and that a single unit is insufficient to deal with them all. One of these arguments is that the role of money as a medium of exchange is in conflict with its role as a store of value: its role as a store of value requires holding it without spending, whereas its role as a medium of exchange requires it to circulate. Others argue that storing of value is just deferral of the exchange, but does not diminish the fact that money is a medium of exchange that can be transported both across space and time. The term "financial capital" is a more general and inclusive term for all liquid instruments, whether or not they are a uniformly recognized tender.

Medium of exchange

When money is used to intermediate the exchange of goods and services, it is performing a function as a medium of exchange. It thereby avoids the inefficiencies of a barter system, such as the "coincidence of wants" problem. Money's most important usage is as a method for comparing the values of dissimilar objects.

Measure of value

A unit of account (in economics) is a standard numerical monetary unit of measurement of the market value of goods, services, and other transactions. Also known as a "measure" or "standard" of relative worth and deferred payment, a unit of account is a necessary prerequisite for the formulation of commercial agreements that involve debt.

Money acts as a standard measure and common denomination of trade. It is thus a basis for quoting and bargaining of prices. It is necessary for developing efficient accounting systems.

Standard of deferred payment

While a standard of deferred payment is distinguished by some texts, particularly older ones, other texts subsume this under other functions. A "standard of deferred payment" is an accepted way to settle a debt – a unit in which debts are denominated – and the status of money as legal tender, in those jurisdictions which have this concept, means that it may be used to discharge debts. When debts are denominated in money, the real value of debts may change due to inflation and deflation, and for sovereign and international debts via debasement and devaluation.
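
To make that last point concrete, here is a minimal sketch in Python, using invented figures rather than anything from the text, of how inflation or deflation changes the real burden of a debt that is denominated in money:

```python
# Hypothetical illustration: the real value of a fixed nominal debt after
# several years of inflation or deflation. All figures are invented.

def real_value(nominal_debt: float, annual_inflation: float, years: int) -> float:
    """Deflate a fixed nominal debt by the cumulative change in the price level."""
    price_level = (1 + annual_inflation) ** years
    return nominal_debt / price_level

debt = 10_000.0  # fixed nominal debt

# Ten years of 5% inflation erode the real burden on the debtor...
print(round(real_value(debt, 0.05, 10), 2))   # ≈ 6139.13
# ...while ten years of 2% deflation increase it.
print(round(real_value(debt, -0.02, 10), 2))  # ≈ 12238.81
```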

Store of value

To act as a store of value, a money must be able to be reliably saved, stored, and retrieved – and be predictably usable as a medium of exchange when it is retrieved. The value of the money must also remain stable over time. Some have argued that inflation, by reducing the value of money, diminishes the ability of the money to function as a store of value.

Properties

To fulfill its various functions, money must have certain properties:
  • Fungibility: its individual units must be capable of mutual substitution (i.e., interchangeability).
  • Durability: able to withstand repeated use.
  • Portability: easily carried and transported.
  • Cognizability: its value must be easily identified.
  • Stability of value: its value should not fluctuate.

Money supply

[Image: Money Base, M1 and M2 in the U.S. from 1981 to 2012]
[Image: Printing paper money at a printing press in Perm]

In economics, money is any financial instrument that can fulfill the functions of money (detailed above). These financial instruments together are collectively referred to as the money supply of an economy. In other words, the money supply is the total amount of financial instruments within a specific economy available for purchasing goods or services. Since the money supply consists of various financial instruments (usually currency, demand deposits, and various other types of deposits), the amount of money in an economy is measured by adding together these financial instruments, creating a monetary aggregate.

Modern monetary theory distinguishes among different ways to measure the stock of money or money supply, reflected in different types of monetary aggregates, using a categorization system that focuses on the liquidity of the financial instrument used as money. The most commonly used monetary aggregates (or types of money) are conventionally designated M1, M2 and M3. These are successively larger aggregate categories: M1 is currency (coins and bills) plus demand deposits (such as checking accounts); M2 is M1 plus savings accounts and time deposits under $100,000; and M3 is M2 plus larger time deposits and similar institutional accounts. M1 includes only the most liquid financial instruments, and M3 relatively illiquid instruments. The precise definition of M1, M2 etc. may be different in different countries.

Another measure of money, M0, is also used; unlike the other measures, it does not represent actual purchasing power by firms and households in the economy. M0 is base money, or the amount of money actually issued by the central bank of a country. It is measured as currency plus deposits of banks and other institutions at the central bank. M0 is also the only money that can satisfy the reserve requirements of commercial banks.
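
A minimal sketch of how these nested aggregates relate, using hypothetical component values and the simplified M0–M3 categorization described above (the exact definitions vary by country):

```python
# Hypothetical component values (in billions); illustration only.
currency_in_circulation = 1_000
bank_reserves_at_central_bank = 500
demand_deposits = 2_000
savings_and_small_time_deposits = 6_000
large_time_deposits = 1_500

M0 = currency_in_circulation + bank_reserves_at_central_bank  # base money
M1 = currency_in_circulation + demand_deposits                # most liquid
M2 = M1 + savings_and_small_time_deposits
M3 = M2 + large_time_deposits                                 # least liquid

print(M0, M1, M2, M3)  # 1500 3000 9000 10500
```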

Creation of money

In current economic systems, money is created by two procedures: 

  • Legal tender, or narrow money (M0), is the cash money created by a central bank by minting coins and printing banknotes.
  • Bank money, or broad money (M1/M2), is the money created by private banks through the recording of loans as deposits of borrowing clients, with partial support indicated by the cash ratio. Currently, bank money is created as electronic money.

In most countries, most money is created as M1/M2 by commercial banks making loans. Contrary to some popular misconceptions, banks do not act simply as intermediaries, lending out deposits that savers place with them, and they do not depend on central bank money (M0) to create new loans and deposits.

Market liquidity

"Market liquidity" describes how easily an item can be traded for another item, or into the common currency within an economy. Money is the most liquid asset because it is universally recognised and accepted as the common currency. In this way, money gives consumers the freedom to trade goods and services easily without having to barter.

Liquid financial instruments are easily tradable and have low transaction costs. There should be no (or minimal) spread between the prices to buy and sell the instrument being used as money.

Types

Currently, most modern monetary systems are based on fiat money. However, for most of history, almost all money was commodity money, such as gold and silver coins. As economies developed, commodity money was eventually replaced by representative money, such as the gold standard, as traders found the physical transportation of gold and silver burdensome. Fiat currencies gradually took over in the last hundred years, especially since the breakup of the Bretton Woods system in the early 1970s.

Commodity

[Image: A 1914 British gold sovereign]

Many items have been used as commodity money such as naturally scarce precious metals, conch shells, barley, beads etc., as well as many other things that are thought of as having value. Commodity money value comes from the commodity out of which it is made. The commodity itself constitutes the money, and the money is the commodity. Examples of commodities that have been used as media of exchange include gold, silver, copper, rice, Wampum, salt, peppercorns, large stones, decorated belts, shells, alcohol, cigarettes, cannabis, candy, etc. These items were sometimes used in a metric of perceived value in conjunction with one another, in various commodity valuation or price system economies. Use of commodity money is similar to barter, but a commodity money provides a simple and automatic unit of account for the commodity which is being used as money. Although some gold coins such as the Krugerrand are considered legal tender, there is no record of their face value on either side of the coin. The rationale for this is that emphasis is laid on their direct link to the prevailing value of their fine gold content. American Eagles are imprinted with their gold content and legal tender face value.

Representative

In 1875, the British economist William Stanley Jevons described the money used at the time as "representative money". Representative money is money that consists of token coins, paper money or other physical tokens such as certificates, that can be reliably exchanged for a fixed quantity of a commodity such as gold or silver. The value of representative money stands in direct and fixed relation to the commodity that backs it, while not itself being composed of that commodity.

Fiat

[Image: Gold coins are an example of legal tender that are traded for their intrinsic value, rather than their face value.]

Fiat money or fiat currency is money whose value is not derived from any intrinsic value or guarantee that it can be converted into a valuable commodity (such as gold). Instead, it has value only by government order (fiat). Usually, the government declares the fiat currency (typically notes and coins from a central bank, such as the Federal Reserve System in the U.S.) to be legal tender, making it unlawful not to accept the fiat currency as a means of repayment for all debts, public and private.

Some bullion coins such as the Australian Gold Nugget and American Eagle are legal tender, however, they trade based on the market price of the metal content as a commodity, rather than their legal tender face value (which is usually only a small fraction of their bullion value).

Fiat money, if physically represented in the form of currency (paper or coins) can be accidentally damaged or destroyed. However, fiat money has an advantage over representative or commodity money, in that the same laws that created the money can also define rules for its replacement in case of damage or destruction. For example, the U.S. government will replace mutilated Federal Reserve Notes (U.S. fiat money) if at least half of the physical note can be reconstructed, or if it can be otherwise proven to have been destroyed. By contrast, commodity money which has been lost or destroyed cannot be recovered.

Coinage

Over time, the store of value shifted to the metal itself: at first silver, then both silver and gold, and at one point bronze as well. Today copper coins and other non-precious metals are also used as coins. Metals were mined, weighed, and stamped into coins. This was to assure the individual taking the coin that he was getting a certain known weight of precious metal. Coins could be counterfeited, but they also created a new unit of account, which helped lead to banking. Archimedes' principle provided the next link: coins could now be easily tested for their fine weight of metal, and thus the value of a coin could be determined, even if it had been shaved, debased or otherwise tampered with.

In most major economies using coinage, copper, silver and gold formed three tiers of coins. Gold coins were used for large purchases, payment of the military and backing of state activities. Silver coins were used for midsized transactions, and as a unit of account for taxes, dues, contracts and fealty, while copper coins represented the coinage of common transaction. This system had been used in ancient India since the time of the Mahajanapadas. In Europe, this system worked through the medieval period because there was virtually no new gold, silver or copper introduced through mining or conquest. Thus the overall ratios of the three coinages remained roughly equivalent.

Paper

[Image: Huizi currency, issued in 1160]

In pre-modern China, the need for credit and for circulating a medium that was less of a burden than exchanging thousands of copper coins led to the introduction of paper money, commonly known today as banknotes. This economic phenomenon was a slow and gradual process that took place from the late Tang dynasty (618–907) into the Song dynasty (960–1279). It began as a means for merchants to exchange heavy coinage for receipts of deposit issued as promissory notes from shops of wholesalers, notes that were valid for temporary use in a small regional territory. In the 10th century, the Song dynasty government began circulating these notes among the traders in their monopolized salt industry. The Song government granted several shops the sole right to issue banknotes, and in the early 12th century the government finally took over these shops to produce state-issued currency. Yet the banknotes issued were still regionally valid and temporary; it was not until the mid 13th century that a standard and uniform government issue of paper money was made into an acceptable nationwide currency. The already widespread methods of woodblock printing and then Pi Sheng's movable type printing by the 11th century were the impetus for the massive production of paper money in premodern China.

[Image: Paper money from different countries]

At around the same time in the medieval Islamic world, a vigorous monetary economy was created during the 7th–12th centuries on the basis of the expanding levels of circulation of a stable high-value currency (the dinar). Innovations introduced by Muslim economists, traders and merchants include the earliest uses of credit, checks, promissory notes, savings accounts, transactional accounts, loaning, trusts, exchange rates, the transfer of credit and debt, and banking institutions for loans and deposits.

In Europe, paper money was first introduced in Sweden in 1661. Sweden was rich in copper; thus, because of copper's low value, extraordinarily large coins (often weighing several kilograms) had to be made. The advantages of paper currency were numerous: it reduced the transport of gold and silver and thus lowered the risks; it made loaning gold or silver at interest easier, since the specie (gold or silver) never left the possession of the lender until someone else redeemed the note; and it allowed for a division of currency into credit- and specie-backed forms. It enabled the sale of stock in joint stock companies and the redemption of those shares in paper.

However, these advantages held within them disadvantages. First, since a note has no intrinsic value, there was nothing to stop issuing authorities from printing more of it than they had specie to back it with. Second, because it increased the money supply, it increased inflationary pressures, a fact observed by David Hume in the 18th century. The result is that paper money would often lead to an inflationary bubble, which could collapse if people began demanding hard money, causing the demand for paper notes to fall to zero. The printing of paper money was also associated with wars, and financing of wars, and therefore regarded as part of maintaining a standing army. For these reasons, paper currency was held in suspicion and hostility in Europe and America. It was also addictive, since the speculative profits of trade and capital creation were quite large. Major nations established mints to print money and mint coins, and branches of their treasury to collect taxes and hold gold and silver stock.

At this time both silver and gold were considered legal tender and accepted by governments for taxes. However, the instability in the ratio between the two grew over the course of the 19th century, with the increase both in the supply of these metals, particularly silver, and in trade. The parallel use of both metals is called bimetallism, and the attempt to create a bimetallic standard, in which both gold- and silver-backed currency remained in circulation, occupied the efforts of inflationists. Governments at this point could use currency as an instrument of policy, printing paper currency such as the United States Greenback to pay for military expenditures. They could also set the terms at which they would redeem notes for specie, by limiting the amount of purchase, or the minimum amount that could be redeemed.

[Image: Banknotes with a face value of 5000 of different currencies]

By 1900, most of the industrializing nations were on some form of gold standard, with paper notes and silver coins constituting the circulating medium. Private banks and governments across the world followed Gresham's Law: keeping the gold and silver they received, but paying out in notes. This did not happen all around the world at the same time, but occurred sporadically, generally in times of war or financial crisis, beginning in the early part of the 20th century and continuing across the world until the late 20th century, when the regime of floating fiat currencies came into force. One of the last countries to break away from the gold standard was the United States in 1971.

No country anywhere in the world today has an enforceable gold standard or silver standard currency system.

Commercial bank

[Image: A check, used as a means of converting funds in a demand deposit to cash]

Commercial bank money or demand deposits are claims against financial institutions that can be used for the purchase of goods and services. A demand deposit account is an account from which funds can be withdrawn at any time by check or cash withdrawal without giving the bank or financial institution any prior notice. Banks have the legal obligation to return funds held in demand deposits immediately upon demand (or 'at call'). Demand deposit withdrawals can be performed in person, via checks or bank drafts, using automatic teller machines (ATMs), or through online banking.

Commercial bank money is created through fractional-reserve banking, the banking practice where banks keep only a fraction of their deposits in reserve (as cash and other highly liquid assets) and lend out the remainder, while maintaining the simultaneous obligation to redeem all these deposits upon demand. Commercial bank money differs from commodity and fiat money in two ways: firstly it is non-physical, as its existence is only reflected in the account ledgers of banks and other financial institutions, and secondly, there is some element of risk that the claim will not be fulfilled if the financial institution becomes insolvent. The process of fractional-reserve banking has a cumulative effect of money creation by commercial banks, as it expands money supply (cash and demand deposits) beyond what it would otherwise be. Because of the prevalence of fractional reserve banking, the broad money supply of most countries is a multiple larger than the amount of base money created by the country's central bank. That multiple (called the money multiplier) is determined by the reserve requirement or other financial ratio requirements imposed by financial regulators.
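
As a rough illustration of the money multiplier described above, here is a minimal Python sketch with a hypothetical 10% reserve ratio, in which every loan is assumed to be fully redeposited in the banking system (a textbook simplification, not a description of actual bank behaviour):

```python
# Hypothetical sketch of fractional-reserve deposit expansion.
# Assumes banks lend out everything beyond the required reserve and that
# every loan is redeposited in full -- a simplified textbook model.

reserve_ratio = 0.10       # 10% reserve requirement (hypothetical)
initial_deposit = 1_000.0  # new base money deposited into the banking system

total_deposits = 0.0
deposit = initial_deposit
for _ in range(100):                 # repeated lending/redeposit rounds
    total_deposits += deposit
    deposit *= (1 - reserve_ratio)   # the lent-out fraction is redeposited

print(round(total_deposits))                   # ≈ 10000 after many rounds
print(round(initial_deposit / reserve_ratio))  # 10000: the multiplier limit, 1/r
```

The simulated total converges on the closed-form limit (initial deposit divided by the reserve ratio), which is the "multiple larger than the amount of base money" mentioned above.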

The money supply of a country is usually held to be the total amount of currency in circulation plus the total value of checking and savings deposits in the commercial banks in the country. In modern economies, relatively little of the money supply is in physical currency. For example, in December 2010 in the U.S., of the $8853.4 billion in broad money supply (M2), only $915.7 billion (about 10%) consisted of physical coins and paper money.

Digital or electronic

The development of computer technology in the second part of the twentieth century allowed money to be represented digitally. By 1990, in the United States all money transferred between its central bank and commercial banks was in electronic form. By the 2000s most money existed as digital currency in bank databases. In 2012, between 20 and 58 percent of transactions by number were electronic, depending on the country.

Non-national digital currencies were developed in the early 2000s. In particular, Flooz and Beenz had gained momentum before the Dot-com bubble. Not much innovation occurred until the conception of Bitcoin in 2009, which introduced the concept of a cryptocurrency – a decentralized trustless currency.

Monetary policy

[Image: US dollar banknotes]

When gold and silver are used as money, the money supply can grow only if the supply of these metals is increased by mining. This rate of increase will accelerate during periods of gold rushes and discoveries, such as when Columbus discovered the New World and brought back gold and silver to Spain, or when gold was discovered in California in 1848. This causes inflation, as the value of gold goes down. However, if the rate of gold mining cannot keep up with the growth of the economy, gold becomes relatively more valuable, and prices (denominated in gold) will drop, causing deflation. Deflation was the more typical situation for over a century when gold and paper money backed by gold were used as money in the 18th and 19th centuries.
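
The intuition above can be made concrete with the quantity-theory identity M·V = P·Q (introduced here for illustration; it is not stated in the passage itself): if the money stock M grows faster than real output Q while velocity V is roughly stable, the price level P rises; if M grows more slowly, P falls. A minimal sketch with hypothetical growth rates:

```python
# Hypothetical sketch of the quantity-theory identity M*V = P*Q,
# i.e. P = M*V / Q, with velocity V held constant. Figures are invented.

def price_level(money_supply: float, output: float, velocity: float = 1.0) -> float:
    return money_supply * velocity / output

M, Q = 100.0, 100.0
print(price_level(M, Q))  # 1.0 baseline

# Gold rush: money grows 5% a year for a decade, real output only 2% -> inflation.
print(round(price_level(M * 1.05**10, Q * 1.02**10), 3))  # ≈ 1.336
# Mining lags growth: money grows 1%, output 3% -> deflation.
print(round(price_level(M * 1.01**10, Q * 1.03**10), 3))  # ≈ 0.822
```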

Modern day monetary systems are based on fiat money and are no longer tied to the value of gold. The control of the amount of money in the economy is known as monetary policy. Monetary policy is the process by which a government, central bank, or monetary authority manages the money supply to achieve specific goals. Usually the goal of monetary policy is to accommodate economic growth in an environment of stable prices. For example, it is clearly stated in the Federal Reserve Act that the Board of Governors and the Federal Open Market Committee should seek "to promote effectively the goals of maximum employment, stable prices, and moderate long-term interest rates."

A failed monetary policy can have significant detrimental effects on an economy and the society that depends on it. These include hyperinflation, stagflation, recession, high unemployment, shortages of imported goods, inability to export goods, and even total monetary collapse and the adoption of a much less efficient barter economy. This happened in Russia, for instance, after the fall of the Soviet Union.

Governments and central banks have taken both regulatory and free market approaches to monetary policy. Some of the tools used to control the money supply include:
  • changing the interest rate at which the central bank loans money to (or borrows money from) the commercial banks
  • currency purchases or sales
  • increasing or lowering government borrowing
  • increasing or lowering government spending
  • manipulation of exchange rates
  • raising or lowering bank reserve requirements
  • regulation or prohibition of private currencies
  • taxation or tax breaks on imports or exports of capital into a country
In the US, the Federal Reserve is responsible for controlling the money supply, while in the Euro area the respective institution is the European Central Bank. Other central banks with significant impact on global finances are the Bank of Japan, the People's Bank of China and the Bank of England.

For many years much of monetary policy was influenced by an economic theory known as monetarism. Monetarism is an economic theory which argues that management of the money supply should be the primary means of regulating economic activity. The stability of the demand for money prior to the 1980s was a key finding of Milton Friedman and Anna Schwartz, supported by the work of David Laidler and many others. The nature of the demand for money changed during the 1980s owing to technical, institutional, and legal factors, and the influence of monetarism has since decreased.

Counterfeit

Counterfeit money is imitation currency produced without the legal sanction of the state or government. Producing or using counterfeit money is a form of fraud or forgery. Counterfeiting is almost as old as money itself. Plated copies (known as Fourrées) have been found of Lydian coins which are thought to be among the first western coins. Before the introduction of paper money, the most prevalent method of counterfeiting involved mixing base metals with pure gold or silver. A form of counterfeiting is the production of documents by legitimate printers in response to fraudulent instructions. During World War II, the Nazis forged British pounds and American dollars. Today some of the finest counterfeit banknotes are called Superdollars because of their high quality and likeness to the real U.S. dollar. There has been significant counterfeiting of Euro banknotes and coins since the launch of the currency in 2002, but considerably less than for the U.S. dollar.

Laundering

Money laundering is the process in which the proceeds of crime are transformed into ostensibly legitimate money or other assets. However, in a number of legal and regulatory systems the term money laundering has become conflated with other forms of financial crime, and sometimes used more generally to include misuse of the financial system (involving things such as securities, digital currencies, credit cards, and traditional currency), including terrorism financing, tax evasion, and evading of international sanctions.
