
Thursday, December 9, 2021

Post-scarcity economy

From Wikipedia, the free encyclopedia

Post-scarcity is a theoretical economic situation in which most goods can be produced in great abundance with minimal human labor needed, so that they become available to all very cheaply or even freely.

Post-scarcity does not mean that scarcity has been eliminated for all goods and services, but that all people can easily have their basic survival needs met along with some significant proportion of their desires for goods and services. Writers on the topic often emphasize that some commodities will remain scarce in a post-scarcity society.

Models

Speculative technology

Futurists who speak of "post-scarcity" suggest economies based on advances in automated manufacturing technologies, often including the idea of self-replicating machines and an extensive division of labour, which in theory could produce nearly all goods in abundance, given adequate raw materials and energy.

More speculative forms of nanotechnology such as molecular assemblers or nanofactories, which do not currently exist, raise the possibility of devices that can automatically manufacture any specified goods given the correct instructions and the necessary raw materials and energy, and many nanotechnology enthusiasts have suggested it will usher in a post-scarcity world.

In the nearer term, the increasing automation of physical labor by robots is often discussed as a means of creating a post-scarcity economy.

Increasingly versatile forms of rapid prototyping machines, and a self-replicating version of such a machine known as the RepRap, have also been predicted to help create the abundance of goods needed for a post-scarcity economy. Advocates of self-replicating machines, such as Adrian Bowyer, the creator of the RepRap project, argue that once a self-replicating machine is designed, anyone who owns one can make more copies to sell (and would also be free to ask a lower price than other sellers). Market competition would then naturally drive the cost of such machines down to the bare minimum needed to make a profit, in this case just above the cost of the physical materials and energy that must be fed into the machine as input, and the same should hold for any other goods the machine can build.

Even with fully automated production, limitations on the quantity of goods produced would arise from the availability of raw materials and energy, as well as from the ecological damage associated with manufacturing technologies. Advocates of technological abundance often argue for more extensive use of renewable energy and greater recycling in order to prevent future drops in the availability of energy and raw materials, and to reduce ecological damage. Solar energy in particular is often emphasized, as the cost of solar panels continues to drop (and could drop far more with automated production by self-replicating machines), and advocates point out that the total solar power striking the Earth's surface exceeds our civilization's current power usage by a factor of thousands.
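The scale of that ratio can be checked with rough public estimates. The figures below are approximate assumptions used only for illustration, not values given in this article:

    # Rough order-of-magnitude check using approximate public estimates
    # (assumed figures for illustration, not values from this article).

    solar_power_at_surface_tw = 89_000   # ~89,000 TW of sunlight absorbed at Earth's surface
    human_primary_power_tw = 19          # ~19 TW average rate of human primary energy use

    print(solar_power_at_surface_tw / human_primary_power_tw)  # roughly 4,700: "a factor of thousands"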

Advocates also sometimes argue that the energy and raw materials available could be greatly expanded by looking to resources beyond the Earth. For example, asteroid mining is sometimes discussed as a way of greatly reducing scarcity for many useful metals such as nickel. While early asteroid mining might involve crewed missions, advocates hope that eventually humanity could have automated mining done by self-replicating machines. If this were done, the only capital expenditure would be a single self-replicating unit (whether robotic or nanotechnological), after which the machines could replicate at no further cost, limited only by the raw materials available to build more.

Marxism

Karl Marx, in a section of his Grundrisse that came to be known as the "Fragment on Machines", argued that the transition to a post-capitalist society combined with advances in automation would allow for significant reductions in labor needed to produce necessary goods, eventually reaching a point where all people would have significant amounts of leisure time to pursue science, the arts, and creative activities; a state some commentators later labeled as "post-scarcity". Marx argued that capitalism—the dynamic of economic growth based on capital accumulation—depends on exploiting the surplus labor of workers, but a post-capitalist society would allow for:

The free development of individualities, and hence not the reduction of necessary labour time so as to posit surplus labour, but rather the general reduction of the necessary labour of society to a minimum, which then corresponds to the artistic, scientific etc. development of the individuals in the time set free, and with the means created, for all of them.

Marx's concept of a post-capitalist communist society involves the free distribution of goods made possible by the abundance provided by automation. The fully developed communist economic system is postulated to develop from a preceding socialist system. Marx held the view that socialism—a system based on social ownership of the means of production—would enable progress toward the development of fully developed communism by further advancing productive technology. Under socialism, with its increasing levels of automation, an increasing proportion of goods would be distributed freely.

Marx did not believe in the elimination of most physical labor through technological advancements alone in a capitalist society, because he believed capitalism contained within it certain tendencies which countered increasing automation and prevented it from developing beyond a limited point, so that manual industrial labor could not be eliminated until the overthrow of capitalism. Some commentators on Marx have argued that at the time he wrote the Grundrisse, he thought that the collapse of capitalism due to advancing automation was inevitable despite these counter-tendencies, but that by the time of his major work Capital: Critique of Political Economy he had abandoned this view, and came to believe that capitalism could continually renew itself unless overthrown.

Post-Scarcity Anarchism

Murray Bookchin, in his 1971 essay collection Post-Scarcity Anarchism, outlines an economy based on social ecology, libertarian municipalism, and an abundance of fundamental resources, arguing that post-industrial societies have the potential to be developed into post-scarcity societies. For Bookchin, such development would enable "the fulfillment of the social and cultural potentialities latent in a technology of abundance".

Bookchin claims that the expanded production made possible by the technological advances of the twentieth century was pursued for market profit at the expense of the needs of humans and of ecological sustainability. He argues that the accumulation of capital can no longer be considered a prerequisite for liberation, and that the notion that obstructions such as the state, social hierarchy, and vanguard political parties are necessary in the struggle for freedom of the working classes can be dispelled as a myth.

Fiction

  • The Mars trilogy by Kim Stanley Robinson. Over three novels, Robinson charts the terraforming of Mars as a human colony and the establishment of a post-scarcity society.
  • The Culture novels by Iain M. Banks are centered on a post-scarcity economy where technology is advanced to such a degree that all production is automated, and there is no use for money or property (aside from personal possessions with sentimental value). People in the Culture are free to pursue their own interests in an open and socially-permissive society. The society has been described by some commentators as "communist-bloc" or "anarcho-communist". Banks' close friend and fellow science fiction writer Ken MacLeod has said that The Culture can be seen as a realization of Marx's communism, but adds that "however friendly he was to the radical left, Iain had little interest in relating the long-range possibility of utopia to radical politics in the here and now. As he saw it, what mattered was to keep the utopian possibility open by continuing technological progress, especially space development, and in the meantime to support whatever policies and politics in the real world were rational and humane."
  • The Rapture of the Nerds by Cory Doctorow and Charles Stross takes place in a post-scarcity society and involves "disruptive" technology. The title is a derogatory term for the technological singularity coined by SF author Ken MacLeod.
  • Con Blomberg's 1959 short story "Sales Talk" depicts a post-scarcity society in which society incentivizes consumption to reduce the burden of overproduction. To further reduce production, virtual reality is used to fulfill people's need to create.
  • The 24th-century human society of Star Trek: The Next Generation and Star Trek: Deep Space Nine has been labeled a post-scarcity society due to the ability of the fictional "replicator" technology to synthesize a wide variety of goods nearly instantaneously, along with dialogue such as Captain Picard's statement in the film Star Trek: First Contact that "The acquisition of wealth is no longer the driving force of our lives. We work to better ourselves and the rest of humanity." By the 22nd century, money had been rendered obsolete on Earth.
  • Cory Doctorow's novel Walkaway presents a modern take on the idea of post-scarcity. With the advent of 3D printing – and especially the ability to use these printers to fabricate even better fabricators – and with machines that can search for and reprocess waste or discarded materials, the protagonists no longer need regular society for the basic essentials of life, such as food, clothing and shelter.

Post-scarcity in academia

In the paper "The Post-Scarcity World of 2050–2075", the authors assert that the current age is one of scarcity resulting from the negligent behavior, as regards the future, of the 19th and 20th centuries. The period between 1975 and 2005 was characterized by a relative abundance of resources (oil, water, energy, food, and credit, among others), which boosted industrialization and development in the Western economies. Increased demand for resources combined with a rising population led to resource exhaustion. In part, the ideas developed about post-scarcity are motivated by analyses that posit that capitalism takes advantage of scarcity.

One of the main hallmarks of scarcity periods is the rise and fluctuation of prices. To deal with that situation, advances in technology come into play, driving such efficient use of resources that costs are considerably reduced (almost everything becomes free). Consequently, the authors claim that the period between 2050 and 2075 will be a post-scarcity age in which scarcity will no longer exist.

Economics of digitization

From Wikipedia, the free encyclopedia
 
The economics of digitization is the field of economics that studies how digitization, digitalization, and digital transformation affect markets and how digital data can be used to study economics. Digitization is the process by which technology lowers the costs of storing, sharing, and analyzing data. This has changed how consumers behave, how industrial activity is organized, and how governments operate. The economics of digitization exists as a distinct field of economics for two reasons. First, new economic models are needed because many traditional assumptions about information no longer hold in a digitized world. Second, the new types of data generated by digitization require new methods for their analysis.

Research in the economics of digitization touches on several fields of economics including industrial organization, labor economics, and intellectual property. Consequently, many of the contributions to the economics of digitization have also found an intellectual home in these fields. An underlying theme in much of the work in the field is that existing government regulation of copyright, security, and antitrust is inappropriate in the modern world. For example, information goods, such as news articles and movies, now have zero marginal costs of production and sharing. This has made redistribution without permission common and has increased competition between providers of information goods. Research in the economics of digitization studies how policy should adapt in response to these changes.

Information technology and access to networks

Technological standards

The Internet is a multi-layered network which is operated by a variety of participants. The Internet has come to mean a combination of standards, networks, and web applications (such as streaming and file-sharing), among other components, that have accumulated around networking technology. The emergence of the Internet coincided with the growth of a new type of organizational structure, the standards committee. Standards committees are responsible for designing critical standards for the Internet such as TCP/IP, HTML, and CSS. These committees are composed of representatives from firms, academia, and non-profit organizations. Their goal is to make decisions that advance technology while retaining interoperability between Internet components. Economists are interested in how these organizational structures make decisions and whether those decisions are optimal.

The supply of Internet access

The commercial supply of Internet access began when the National Science Foundation removed restrictions on using the Internet for commercial purposes. During the 1990s, Internet access was provided by numerous regional and national Internet service providers (ISPs). However, by 2014, the provision of high-speed broadband access had consolidated: about 80% of Americans could buy 25 Mbit/s service from only one provider, and a majority had a choice of only two providers for 10 Mbit/s service. Economists are particularly interested in competition and network effects within this industry. Furthermore, the availability of broadband may affect other economic outcomes such as the relative wages of skilled and unskilled workers.

Demand for the Internet

A key issue in the economics of digitization is the economic value of Internet-based services. The motivation for this question is two-fold. First, economists are interested in understanding digitization related policies such as network infrastructure investment and subsidies for Internet access. Second, economists want to measure the gains to consumers from the Internet. The revenues of Internet Service Providers provided one direct measure of the growth in the Internet economy. This is an important topic because many economists believe that traditional measures of economic growth, such as GDP, understate the true benefits of improving technology. The modern digital economy also tends to lead to reliance on inputs with zero price.

The effects of digitization on industrial organization

Platforms and online marketplaces

Digitization has coincided with the increased prominence of platforms and marketplaces that connect diverse agents in social and economic activity. A platform is defined by Bresnahan and Greenstein (1999) as "a reconfigurable base of compatible components on which users build applications". Platforms are most readily identified with their technical standards, i.e., engineering specifications for hardware and standards for software. The pricing and product strategies that platforms use differ from those of traditional firms because of the presence of network effects. Network effects arise within platforms because participation by one group affects the utility of another group. Many online platforms can replicate identical processes or algorithms at virtually no cost, allowing them to scale network effects without encountering diminishing returns. Large-scale network effects make the analysis of competition between platforms more complex than the analysis of competition between traditional firms. Much work in the economics of digitization studies how these firms should operate and how they compete with each other. A particularly important issue is whether markets for online platforms have a tendency towards "winner-takes-all" competitive outcomes and should therefore be subject to antitrust actions.
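A toy simulation in Python (with invented numbers, not drawn from the studies cited here) illustrates why cross-side network effects can tip such markets toward a single winner: each side gravitates to the platform that is larger on the other side, so a small early lead compounds.

    # Toy two-sided market: buyers follow sellers and sellers follow buyers,
    # so a modest initial lead tips the market toward one platform.
    # All numbers are invented for illustration.

    def tip(buyers, sellers, rounds=10, switch_rate=0.2):
        for _ in range(rounds):
            lead = 0 if sellers[0] >= sellers[1] else 1   # buyers join the seller-rich platform
            moved = int(switch_rate * buyers[1 - lead])
            buyers[lead] += moved
            buyers[1 - lead] -= moved

            lead = 0 if buyers[0] >= buyers[1] else 1     # sellers join the buyer-rich platform
            moved = int(switch_rate * sellers[1 - lead])
            sellers[lead] += moved
            sellers[1 - lead] -= moved
        return buyers, sellers

    # Platform 0 starts with a modest lead on both sides and ends with nearly everyone.
    print(tip(buyers=[550, 450], sellers=[55, 45]))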

Online platforms often drastically reduce transaction costs, especially in markets where the quality of a good or trading partner is uncertain. For example, eBay drastically increased the market for used consumer goods by offering a search engine, reputation system, and other services that make trade less risky. Other online marketplaces of this type include Airbnb for accommodations, Prosper for lending, and Odesk for labor. Economists are interested in quantifying the gains from these marketplaces and studying how they should be designed. For example, eBay, Odesk, and other marketplaces have adopted auctions as a selling mechanism. This has prompted a large literature on the comparative advantages of selling goods via auction versus at a fixed price.

User-generated content and open source production

Digitization has coincided with the production of software and content by users who are not directly compensated for their work. Furthermore, those goods are typically distributed for free on the Internet. Prominent examples of open-source software include the Apache HTTP Server, Mozilla Firefox, and the Linux operating system. Economists are interested in the incentives of users to produce this software and in how it substitutes for or complements existing production processes. Another area of study is estimating the degree to which GDP and other measures of economic activity are mis-measured due to open source software. For example, Greenstein and Nagle (2014) estimate that Apache alone accounts for a mis-measurement of between $2 billion and $12 billion.

In addition, open source production can be used for hardware, known as open hardware, normally by sharing digital designs such as CAD files. Sharing of open hardware designs can generate significant value because of the ability to digitally replicate products for approximately the cost of materials using technologies such as 3D printers.

Another active area of research studies the incentives to produce user-generated content such as Wikipedia articles, digital videos, blogs, podcasts, etc. For example, Zhang and Zhu (2011) show that Wikipedia contributors are motivated by the social interaction with other contributors. Greenstein and Zhu (2012) show that while many Wikipedia articles exhibit slant, the overall level of slant across articles on Wikipedia has diminished over time.

Advertising

Advertising is an important source of revenue for information goods, both online and offline. Given the prevalence of advertising-supported information goods online, it is important to understand how online advertising works. Economists have spent much effort in trying to quantify the returns to online advertising. One especially interesting aspect of online advertising is its ability to target customers using fine demographic and behavioral data. This ability potentially affects the ability of new and small firms to gain exposure to customers and to grow. Targeted advertising is controversial because it sometimes uses private data about individuals obtained through third-party sources. Quantifying the costs and benefits of using this type of data is an active research area in the field.

The effects of digitization on consumer choice

Search, search engines and recommendation systems

Perhaps the oldest and largest stream of research on the Internet and market frictions emphasizes reduced search costs. This literature builds on an older theory literature in economics that examines how search costs affect prices. Digitization of retail and marketing meant that consumers could easily compare prices across stores, so the empirical work on Internet pricing examined the impact on prices and price dispersion. Initially hypothesized by Bakos (1997), the first wave of this research empirically documented lower prices, but still substantial dispersion.
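A small Monte Carlo exercise in Python (stylized, with invented prices and sample sizes rather than anything estimated in this literature) shows the mechanism: when search is cheap enough for consumers to compare more stores, both the average price paid and the dispersion of prices paid fall.

    # Stylized search-cost simulation: shoppers buy at the lowest of the k prices
    # they sample, so cheaper search (larger k) lowers the mean and spread of
    # prices actually paid. All numbers are invented for illustration.
    import random
    import statistics

    random.seed(0)
    store_prices = [random.uniform(8.0, 12.0) for _ in range(50)]   # posted prices

    def prices_paid(k, shoppers=10_000):
        return [min(random.sample(store_prices, k)) for _ in range(shoppers)]

    for k in (1, 3, 10):   # from costly offline search to cheap online comparison
        paid = prices_paid(k)
        print(k, round(statistics.mean(paid), 2), round(statistics.stdev(paid), 2))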

The newest wave of this research collects data about online searches to examine the actual search process that consumers undertake when looking for a product online. This question also emphasizes that the final stage of purchase is often controlled by a more familiar retail environment, and it raises questions about the growing importance of standards and platforms in the distribution of creative content.

As noted earlier, near-zero marginal costs of distribution for information goods might change where and how information goods get consumed. Geographic boundaries might be less important if information can travel long distances for free. One open question concerns the incidence of the impact of low distribution costs. The benefits might vary by location, with locations with fewer offline options generating a larger benefit from digitization.

Furthermore, online retailers of digital goods can carry many more products and never worry about running out of inventory. Even if a song sells only a handful of times, it can still be profitable to offer it for sale on the Internet. At the same time, the zero marginal costs of distribution mean that top-selling (superstar) items never go out of stock and therefore can achieve even higher sales (Anderson, 2006). Several papers in the literature attempt to quantify the economic impact of increased product variety made available through electronic markets. Bar-Isaac et al. (2012) derive a theory of when lower search costs will result in 'superstar' and 'long-tail' effects.

Reputation systems

One particularly important aspect of digitization for consumers is the increased use of reputation systems on retail websites and online marketplaces. Sixty-eight percent of respondents in a 2013 Nielsen survey said that they trusted online reviews. Numerous papers have shown that these review systems affect consumer demand for restaurants, books, and hotels. A key area of research in digitization studies whether online reputations accurately reveal both the vertical and horizontal quality of a good. For example, Forman et al. (2008) show that local reviews have more effect than reviews from distant reviewers, suggesting that reviews provide information about both vertical and horizontal differentiation. On the other hand, several studies show that online reviews are biased because not everyone leaves reviews, because reviewers fear retaliation, and because sellers may promote their own products using the review system. Newer research proposes designs for reputation systems that more efficiently aggregate information about the experiences of users.

The effects of digitization on labor markets

Digitization has partially or fully replaced many tasks that were previously done by human laborers. At the same time, computers have made some workers much more productive. Economists are interested in understanding how these two forces interact in determining labor market outcomes. For example, a large literature studies the magnitude and causes of skill-biased technical change, the process by which technology improves wages for educated workers. Alternatively, Autor (2014) describes a framework for classifying jobs into those more or less prone to replacement by computers. Furthermore, the use of information technology only increases productivity when it is complemented by organizational changes. For example, Garicano and Heaton (2010) show that IT increases the productivity of police departments only when those departments also increase training and expand support personnel. Work by Bresnahan, Brynjolfsson, and Hitt (2002) found evidence of organizational complementarities with information technology that boosted the demand for skilled labor.

Another consequence of digitization is that it has drastically reduced the costs of communication between workers across different organizations and locations. This has led to a change in the geographic and contractual organization of production. Economists are interested in the magnitude of this change and its effect on local labor markets. A recent study found that the potential of manufacturing sector jobs to be offshored did not reduce wages in the US. However, survey evidence suggests that 25% of American jobs are potentially offshorable in the future.

Online labor market platforms like Odesk and Amazon Mechanical Turk represent a particularly interesting form of labor production arising out of digitization. Economists who study these platforms are interested in how they compete with or complement more traditional firms. Another active area of research is how to incentivize workers on these platforms to produce more efficiently. While workers engaged in routine, lower-skill tasks such as data entry are particularly susceptible to competition from online labor markets, creative professions are also exposed, as many online platforms now provide opportunities to crowdsource creative work.

Government policy and digitization

Intellectual property and digitization

One main area of policy interest related to digitization concerns intellectual property. The justification for granting copyrights and patents relies on the theory that the potential to gain these rights encourages the production and sharing of intellectual property. However, digitization and the ease of copying have made it difficult to defend intellectual property rights, especially in the case of copyright. Varian (2005) supplies a theoretical framework for thinking about this change from an economics perspective. Usually, the economic effect on copyright-holders in the context of free copying is considered to be negative. However, Varian suggests an important counter-argument. If the value a consumer puts on the right to copy is greater than the reduction in sales, a seller can increase profits by allowing that right. Varian also provides a detailed description of several business models which potentially address the greater difficulty of enforcing copyrights as digitization increases. Alternative business models for intellectual property holders include selling complementary goods, subscriptions, personalization, and advertising.
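Varian's counter-argument can be made concrete with a small numerical example. The figures below are invented for illustration and are not taken from Varian's paper:

    # Invented figures illustrating the counter-argument: allowing copying can
    # raise profit if it increases willingness to pay by more than the revenue
    # lost to displaced sales.

    buyers_no_copying = 100
    price_no_copying = 10.0          # willingness to pay without the right to copy

    buyers_with_copying = 85         # some would-be buyers copy instead of buying
    price_with_copying = 13.0        # the right to copy raises willingness to pay

    print(buyers_no_copying * price_no_copying)        # 1000.0 revenue when copying is forbidden
    print(buyers_with_copying * price_with_copying)    # 1105.0 revenue when copying is allowed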

Empirical research in this area studies the effects of Internet file-sharing on the supply of and demand for paid content. For example, Danaher et al. (2010) show that the removal of NBC content from iTunes increased the illicit copying of NBC shows by 11.4%. This result shows that licensed and unlicensed content are substitutes. Giorcelli and Moser (2014) show that the spread of copyright in Italy between 1770 and 1900 increased the production of new and better operas. Still, there is little work on how these empirical results should inform copyright rules and security practices.

Net neutrality

Privacy, security, and digitization

Privacy and data security is an area where digitization has substantially changed the costs and benefits to various economic actors. Traditional policies regarding privacy circumscribed the ability of government agencies to access individual data. However, the large-scale ability of firms to collect, parse, and analyze detailed micro-level data about consumers has shifted the policy focus. Now, the concern is whether firms' access to consumer data should be regulated and restricted. In the past decade, theoretical work on commercial privacy has tended to focus on behavioral price discrimination as a context in which researchers can model privacy concerns from an economics perspective.

Goldfarb and Tucker (2011a) wrote the first paper to empirically study the economic effects of privacy regulation for the advertising-supported Internet. The implementation of privacy regulation in Europe has made it more difficult for firms to collect and use consumer browsing data to target their ads more accurately; the field test data shows these policies are associated with a 65 percent reduction in the influence banner ads have on purchase intent. As well as this main effect, their research also suggests that privacy regulation might change the web landscape in unanticipated ways, with advertising becoming even more intrusive. It also might lead marketers to shift their media buys away from newspapers because of difficulties in finding relevant advertising to show.

Another related concern is what precautions firms should take to prevent data breaches such as those at Target and Staples. Arora et al. (2010) model the firm's effort to secure data from an economics perspective. They find that direct competition reduces the time that a firm takes to patch a vulnerability in its software. Other attempts at measuring the consequences of information security policy from an economics perspective are Miller and Tucker (2011), who look at policies mandating encryption, and Romanosky et al. (2011), who look at mandatory breach notification laws.

Other issues

There are many other policies related to digitization that are of interest to economists. For example, digitization may affect government effectiveness and accountability. Digitization also makes it easier for firms in one jurisdiction to supply consumers in another. This creates challenges for tax enforcement. Another issue is that companies with new, Internet based business models, such as Airbnb and Uber, pose challenges for regulation aimed at traditional service providers. Many safety and quality enforcement regulations may no longer be necessary with the advent of online reputation systems. Lastly, digitization is of great importance to health care policy. For example, electronic medical records have the potential to make healthcare more effective but pose challenges to privacy policy.

Books

In May 2015 the National Bureau of Economic Research published a book with University of Chicago Press entitled "Economic Analysis of the Digital Economy." The editors for the book are Avi Goldfarb, Shane Greenstein, and Catherine Tucker. The volume brings together leading scholars to explore this emerging area of research. This follows on a book that collected twenty-five important articles in the area, published by Edward Elgar Publishing, titled "Economics of Digitization."

Expert system

From Wikipedia, the free encyclopedia
 
A Symbolics Lisp Machine: an early platform for expert systems

In artificial intelligence, an expert system is a computer system emulating the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural code. The first expert systems were created in the 1970s and then proliferated in the 1980s. Expert systems were among the first truly successful forms of artificial intelligence (AI) software. An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.

History

Early development

Soon after the dawn of modern computers in the late 1940s and early 1950s, researchers started realizing the immense potential these machines had for modern society. One of the first challenges was to make such machines capable of "thinking" like humans; in particular, making them capable of making important decisions the way humans do. The medical/healthcare field presented the tantalizing challenge of enabling these machines to make medical diagnostic decisions.

Thus, in the late 1950s, right after the information age had fully arrived, researchers started experimenting with the prospect of using computer technology to emulate human decision-making. For example, biomedical researchers started creating computer-aided systems for diagnostic applications in medicine and biology. These early diagnostic systems used patients' symptoms and laboratory test results as inputs to generate a diagnostic outcome. These systems were often described as the early forms of expert systems. However, researchers realized that there were significant limitations when using traditional methods such as flow charts, statistical pattern-matching, or probability theory.

Formal introduction & later developments

This situation gradually led to the development of expert systems, which used knowledge-based approaches. Early expert systems in medicine included the MYCIN expert system, the INTERNIST-I expert system and, later, in the mid-1980s, CADUCEUS.

Expert systems were formally introduced around 1965 by the Stanford Heuristic Programming Project led by Edward Feigenbaum, who is sometimes termed the "father of expert systems"; other key early contributors were Bruce Buchanan and Randall Davis. The Stanford researchers tried to identify domains where expertise was highly valued and complex, such as diagnosing infectious diseases (Mycin) and identifying unknown organic molecules (Dendral). The idea that "intelligent systems derive their power from the knowledge they possess rather than from the specific formalisms and inference schemes they use" – as Feigenbaum said – was at the time a significant step forward, since past research had focused on heuristic computational methods, culminating in attempts to develop very general-purpose problem solvers (most notably the joint work of Allen Newell and Herbert Simon). Expert systems became some of the first truly successful forms of artificial intelligence (AI) software.

Research on expert systems was also active in France. While in the US the focus tended to be on rules-based systems, first on systems hard-coded on top of LISP programming environments and then on expert system shells developed by vendors such as Intellicorp, in France research focused more on systems developed in Prolog. The advantage of expert system shells was that they were somewhat easier for nonprogrammers to use. The advantage of Prolog environments was that they were not focused only on if-then rules; Prolog environments provided a much better realization of a complete first-order logic environment.

In the 1980s, expert systems proliferated. Universities offered expert system courses and two thirds of the Fortune 500 companies applied the technology in daily business activities. Interest was international with the Fifth Generation Computer Systems project in Japan and increased research funding in Europe.

In 1981, the first IBM PC, with the PC DOS operating system, was introduced. The contrast between the affordability of the relatively powerful chips in the PC and the much higher cost of processing power on the mainframes that dominated the corporate IT world at the time created a new type of architecture for corporate computing, termed the client–server model. Calculations and reasoning could be performed at a fraction of the price of a mainframe using a PC. This model also enabled business units to bypass corporate IT departments and directly build their own applications. As a result, client–server computing had a tremendous impact on the expert systems market. Expert systems were already outliers in much of the business world, requiring new skills that many IT departments did not have and were not eager to develop. They were a natural fit for new PC-based shells that promised to put application development into the hands of end users and experts. Until then, the main development environment for expert systems had been high-end Lisp machines from Xerox, Symbolics, and Texas Instruments. With the rise of the PC and client–server computing, vendors such as Intellicorp and Inference Corporation shifted their priorities to developing PC-based tools. New vendors, often financed by venture capital (such as Aion Corporation, Neuron Data, Exsys, and many others), also started appearing regularly.

The first expert system to be used in a design capacity for a large-scale product was the SID (Synthesis of Integral Design) software program, developed in 1982. Written in LISP, SID generated 93% of the VAX 9000 CPU logic gates. Input to the software was a set of rules created by several expert logic designers. SID expanded the rules and generated software logic synthesis routines many times the size of the rules themselves. Surprisingly, the combination of these rules resulted in an overall design that exceeded the capabilities of the experts themselves, and in many cases out-performed the human counterparts. While some rules contradicted others, top-level control parameters for speed and area provided the tie-breaker. The program was highly controversial, but used nevertheless due to project budget constraints. It was terminated by logic designers after the VAX 9000 project completion.

In the years before the mid-1970s, expectations of what expert systems could accomplish in many fields tended to be extremely optimistic. At the beginning of these early studies, researchers hoped to develop entirely automatic (i.e., completely computerized) expert systems. People's expectations of what computers could do were frequently too idealistic. This situation changed radically after Richard M. Karp published his breakthrough paper "Reducibility among Combinatorial Problems" in the early 1970s. Thanks to Karp's work, it became clear that there are certain limitations and possibilities when one designs computer algorithms: his findings describe what computers can and cannot do. Many of the computational problems related to this type of expert system have certain pragmatic limitations. These findings laid the groundwork for the next developments in the field.

In the 1990s and beyond, the term expert system and the idea of a standalone AI system mostly dropped from the IT lexicon. There are two interpretations of this. One is that "expert systems failed": the IT world moved on because expert systems did not deliver on their over-hyped promise. The other is the mirror opposite, that expert systems were simply victims of their success: as IT professionals grasped concepts such as rule engines, such tools migrated from being standalone tools for developing special purpose expert systems to being one of many standard tools. Many of the leading business application suite vendors (such as SAP, Siebel, and Oracle) integrated expert system abilities into their suite of products as a way of specifying business logic – rule engines are no longer simply for defining the rules an expert would use but for any type of complex, volatile, and critical business logic; they often go hand in hand with business process automation and integration environments.

Current approaches to expert systems

The limitations of the previous type of expert systems have prompted researchers to develop new approaches. They have developed more efficient, flexible, and powerful methods to simulate the human decision-making process. Some of the approaches that researchers have developed are based on new methods of artificial intelligence (AI), in particular machine learning and data mining approaches with a feedback mechanism. Recurrent neural networks often take advantage of such mechanisms. A related discussion appears in the disadvantages section below.

Modern systems can incorporate new knowledge more easily and thus update themselves easily. Such systems can generalize from existing knowledge better and deal with vast amounts of complex data. Big data is relevant here. Sometimes these types of expert systems are called "intelligent systems."

Software architecture

An illustrative example of backward chaining from a 1990 master's thesis

An expert system is an example of a knowledge-based system. Expert systems were the first commercial systems to use a knowledge-based architecture. A knowledge-based system is essentially composed of two sub-systems: the knowledge base and the inference engine.

The knowledge base represents facts about the world. In early expert systems such as Mycin and Dendral, these facts were represented mainly as flat assertions about variables. In later expert systems developed with commercial shells, the knowledge base took on more structure and used concepts from object-oriented programming. The world was represented as classes, subclasses, and instances and assertions were replaced by values of object instances. The rules worked by querying and asserting values of the objects.

The inference engine is an automated reasoning system that evaluates the current state of the knowledge-base, applies relevant rules, and then asserts new knowledge into the knowledge base. The inference engine may also include abilities for explanation, so that it can explain to a user the chain of reasoning used to arrive at a particular conclusion by tracing back over the firing of rules that resulted in the assertion.

There are mainly two modes for an inference engine: forward chaining and backward chaining. The different approaches are dictated by whether the inference engine is being driven by the antecedent (left hand side) or the consequent (right hand side) of the rule. In forward chaining an antecedent fires and asserts the consequent. For example, consider the following rule:

R1: Man(x) => Mortal(x)

A simple example of forward chaining would be to assert Man(Socrates) to the system and then trigger the inference engine. It would match R1 and assert Mortal(Socrates) into the knowledge base.
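As an illustration (a minimal sketch in Python, not the code of any historical expert system shell), forward chaining over the single rule R1 can be written as a loop that keeps firing rules until no new facts appear:

    # Minimal forward-chaining sketch: rules fire on their antecedents and assert
    # their consequents until no new facts can be derived.

    rules = [
        ("R1", "Man", "Mortal"),       # R1: Man(x) => Mortal(x)
    ]

    def forward_chain(facts, rules):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for name, antecedent, consequent in rules:
                for predicate, individual in list(facts):
                    new_fact = (consequent, individual)
                    if predicate == antecedent and new_fact not in facts:
                        facts.add(new_fact)        # assert the consequent
                        changed = True
        return facts

    # Asserting Man(Socrates) and triggering the engine derives Mortal(Socrates).
    print(forward_chain({("Man", "Socrates")}, rules))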

Backward chaining is a bit less straightforward. In backward chaining the system looks at possible conclusions and works backward to see if they might be true. So if the system were trying to determine if Mortal(Socrates) is true, it would find R1 and query the knowledge base to see if Man(Socrates) is true. One of the early innovations of expert system shells was to integrate inference engines with a user interface. This could be especially powerful with backward chaining. If the system needs to know a particular fact but does not, it can simply generate an input screen and ask the user. So in this example, it could use R1 to ask the user if Socrates was a man and then use that new information accordingly.

The use of rules to explicitly represent knowledge also enabled explanation abilities. In the simple example above, if the system had used R1 to assert that Socrates was mortal and a user wished to understand why, they could query the system, and the system would look back at the rules which fired to cause the assertion and present those rules to the user as an explanation. In English, if the user asked "Why is Socrates mortal?" the system would reply "Because all men are mortal and Socrates is a man". A significant area of research was the generation of explanations from the knowledge base in natural English rather than simply by showing the more formal but less intuitive rules.
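The same example can sketch backward chaining together with a simple "why" explanation (again only an illustration, not a real shell): the engine works from the goal back to the facts, asks the user for any base fact it cannot establish, and records the rules it fired so it can justify its conclusion.

    # Minimal backward-chaining sketch with a "why" explanation trace.
    # rules maps a goal predicate to (rule name, antecedent predicate).

    rules = {"Mortal": ("R1", "Man")}      # R1: Man(x) => Mortal(x)
    facts = set()                          # known facts, e.g. ("Man", "Socrates")
    trace = []                             # rules fired, kept for explanations

    def prove(goal, individual):
        if (goal, individual) in facts:
            return True
        if goal in rules:
            rule_name, subgoal = rules[goal]
            if prove(subgoal, individual):
                trace.append(f"{rule_name}: {subgoal}({individual}) => {goal}({individual})")
                facts.add((goal, individual))
                return True
            return False
        # No rule concludes this predicate, so ask the user, as early shells did.
        if input(f"Is {goal}({individual}) true? (y/n) ").strip().lower() == "y":
            facts.add((goal, individual))
            return True
        return False

    if prove("Mortal", "Socrates"):
        print("Why:", "; ".join(trace))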

As expert systems evolved, many new techniques were incorporated into various types of inference engines. Some of the most important of these were:

  • Truth maintenance. These systems record the dependencies in a knowledge-base so that when facts are altered, dependent knowledge can be altered accordingly. For example, if the system learns that Socrates is no longer known to be a man it will revoke the assertion that Socrates is mortal.
  • Hypothetical reasoning. In this, the knowledge base can be divided up into many possible views, a.k.a. worlds. This allows the inference engine to explore multiple possibilities in parallel. For example, the system may want to explore the consequences of both assertions, what will be true if Socrates is a Man and what will be true if he is not?
  • Uncertainty systems. One of the first extensions of simply using rules to represent knowledge was to associate a probability with each rule: not to assert that Socrates is mortal, but to assert that Socrates may be mortal with some probability value. Simple probabilities were extended in some systems with sophisticated mechanisms for uncertain reasoning, such as fuzzy logic and combinations of probabilities (a minimal sketch of combining certainty factors follows this list).
  • Ontology classification. With the addition of object classes to the knowledge base, a new type of reasoning was possible. Along with reasoning simply about object values, the system could also reason about object structures. In this simple example, Man can represent an object class and R1 can be redefined as a rule that defines the class of all men. These types of special purpose inference engines are termed classifiers. Although they were not highly used in expert systems, classifiers are very powerful for unstructured volatile domains, and are a key technology for the Internet and the emerging Semantic Web.
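For the uncertainty item above, a minimal sketch (with invented certainty values, in the spirit of MYCIN-style certainty factors rather than any particular system) shows how evidence from two rules supporting the same conclusion can be combined:

    # Combining MYCIN-style certainty factors (CFs) from two rules that support
    # the same conclusion. The CF values are invented for illustration.

    def combine_cf(cf1, cf2):
        # For two positive CFs, combined evidence grows but never exceeds 1.
        return cf1 + cf2 * (1 - cf1)

    cf_rule_a = 0.6    # "Socrates may be mortal" with CF 0.6 from one rule
    cf_rule_b = 0.5    # the same conclusion supported by a second rule with CF 0.5

    print(combine_cf(cf_rule_a, cf_rule_b))   # 0.8: stronger than either rule alone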

Advantages

The goal of knowledge-based systems is to make the critical information required for the system to work explicit rather than implicit. In a traditional computer program the logic is embedded in code that can typically only be reviewed by an IT specialist. With an expert system the goal was to specify the rules in a format that was intuitive and easily understood, reviewed, and even edited by domain experts rather than IT experts. The benefits of this explicit knowledge representation were rapid development and ease of maintenance.

Ease of maintenance is the most obvious benefit. This was achieved in two ways. First, by removing the need to write conventional code, many of the normal problems that can be caused by even small changes to a system could be avoided with expert systems. Essentially, the logical flow of the program (at least at the highest level) was simply given: just invoke the inference engine. This was also a reason for the second benefit: rapid prototyping. With an expert system shell it was possible to enter a few rules and have a prototype developed in days rather than the months or years typically associated with complex IT projects.

A claim for expert system shells that was often made was that they removed the need for trained programmers and that experts could develop systems themselves. In reality, this was seldom if ever true. While the rules for an expert system were more comprehensible than typical computer code, they still had a formal syntax where a misplaced comma or other character could cause havoc as with any other computer language. Also, as expert systems moved from prototypes in the lab to deployment in the business world, issues of integration and maintenance became far more critical. Inevitably demands to integrate with, and take advantage of, large legacy databases and systems arose. Accomplishing this integration required the same skills as any other type of system.

Disadvantages

The most common disadvantage cited for expert systems in the academic literature is the knowledge acquisition problem. Obtaining the time of domain experts for any software application is always difficult, but for expert systems it was especially difficult because the experts were by definition highly valued and in constant demand by the organization. As a result of this problem, a great deal of research in the later years of expert systems was focused on tools for knowledge acquisition, to help automate the process of designing, debugging, and maintaining rules defined by experts. However, when looking at the life-cycle of expert systems in actual use, other problems – essentially the same problems as those of any other large system – seem at least as critical as knowledge acquisition: integration, access to large databases, and performance.

Performance could be especially problematic because early expert systems were built using tools (such as earlier Lisp versions) that interpreted code expressions without first compiling them. This provided a powerful development environment, but with the drawback that it was virtually impossible to match the efficiency of the fastest compiled languages (such as C). System and database integration were difficult for early expert systems because the tools were mostly in languages and platforms that were neither familiar to nor welcome in most corporate IT environments – programming languages such as Lisp and Prolog, and hardware platforms such as Lisp machines and personal computers. As a result, much effort in the later stages of expert system tool development was focused on integrating with legacy environments such as COBOL and large database systems, and on porting to more standard platforms. These issues were resolved mainly by the client–server paradigm shift, as PCs were gradually accepted in the IT environment as a legitimate platform for serious business system development and as affordable minicomputer servers provided the processing power needed for AI applications.

Another major challenge of expert systems emerges when the size of the knowledge base increases. This causes the processing complexity to increase. For instance, when an expert system with 100 million rules was envisioned as the ultimate expert system, it became obvious that such a system would be too complex and would face too many computational problems. An inference engine would have to be able to process huge numbers of rules to reach a decision.

How to verify that decision rules are consistent with each other is also a challenge when there are too many rules. Usually such a problem leads to a satisfiability (SAT) formulation. This is the well-known NP-complete Boolean satisfiability problem. If we assume only binary variables, say n of them, then the corresponding search space is of size 2^n; that is, the search space can grow exponentially.
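A brute-force sketch (with toy clauses invented for illustration) makes the blow-up concrete: encoding each rule as a Boolean clause, a naive consistency check must in the worst case try all 2^n assignments of the n variables.

    # Brute-force consistency check over toy clauses (invented for illustration).
    # A positive literal i means "variable i is true"; -i means it is false.
    from itertools import product

    def consistent(clauses, n):
        for assignment in product([False, True], repeat=n):    # 2**n candidates
            if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return True
        return False

    # Three toy clauses over 3 variables: (x1 or not x2), (x2 or x3), (not x1 or not x3)
    clauses = [[1, -2], [2, 3], [-1, -3]]
    print(consistent(clauses, 3))   # True, but the candidate set doubles with every added variable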

There are also questions on how to prioritize the use of the rules in order to operate more efficiently, or how to resolve ambiguities (for instance, if there are too many else-if sub-structures within a single rule) and so on.

Other problems are related to the overfitting and overgeneralization effects when using known facts and trying to generalize to other cases not described explicitly in the knowledge base. Such problems exist with methods that employ machine learning approaches too.

Another problem related to the knowledge base is how to make updates to its knowledge quickly and effectively. Deciding how to add a new piece of knowledge (i.e., where to place it among many rules) is also challenging. Modern approaches that rely on machine learning methods are easier in this regard.

Because of the above challenges, it became clear that new approaches to AI were required instead of rule-based technologies. These new approaches are based on the use of machine learning techniques, along with the use of feedback mechanisms.

The key challenges that expert systems in medicine face (if one considers computer-aided diagnostic systems as modern expert systems), and perhaps in other application domains, include issues related to big data, existing regulations, healthcare practice, various algorithmic issues, and system assessment.

Applications

Hayes-Roth divides expert systems applications into 10 categories illustrated in the following table. The example applications were not in the original Hayes-Roth table, and some of them arose well afterward. Any application that is not footnoted is described in the Hayes-Roth book. Also, while these categories provide an intuitive framework to describe the space of expert systems applications, they are not rigid categories, and in some cases an application may show traits of more than one category.

Category | Problem addressed | Examples
Interpretation | Inferring situation descriptions from sensor data | Hearsay (speech recognition), PROSPECTOR
Prediction | Inferring likely consequences of given situations | Preterm Birth Risk Assessment
Diagnosis | Inferring system malfunctions from observables | CADUCEUS, MYCIN, PUFF, Mistral, Eydenet, Kaleidos
Design | Configuring objects under constraints | Dendral, Mortgage Loan Advisor, R1 (DEC VAX Configuration), SID (DEC VAX 9000 CPU)
Planning | Designing actions | Mission Planning for Autonomous Underwater Vehicle
Monitoring | Comparing observations to plan vulnerabilities | REACTOR
Debugging | Providing incremental solutions for complex problems | SAINT, MATHLAB, MACSYMA
Repair | Executing a plan to administer a prescribed remedy | Toxic Spill Crisis Management
Instruction | Diagnosing, assessing, and correcting student behaviour | SMH.PAL, Intelligent Clinical Training, STEAMER
Control | Interpreting, predicting, repairing, and monitoring system behaviors | Real Time Process Control, Space Shuttle Mission Control

Hearsay was an early attempt at solving voice recognition through an expert systems approach. For the most part this category of expert systems was not all that successful. Hearsay and all interpretation systems are essentially pattern recognition systems, looking for patterns in noisy data – in the case of Hearsay, recognizing phonemes in an audio stream. Other early examples included analyzing sonar data to detect Russian submarines. These kinds of systems proved much more amenable to a neural network AI solution than a rule-based approach.

CADUCEUS and MYCIN were medical diagnosis systems. The user describes their symptoms to the computer as they would to a doctor and the computer returns a medical diagnosis.

Dendral was a tool to study hypothesis formation in the identification of organic molecules. The general problem it solved – designing a solution given a set of constraints – was one of the most successful areas for early expert systems applied to business domains, such as salespeople configuring Digital Equipment Corporation (DEC) VAX computers and the development of mortgage loan applications.

SMH.PAL is an expert system for the assessment of students with multiple disabilities.

Mistral is an expert system for monitoring dam safety, developed in the 1990s by Ismes (Italy). It gets data from an automatic monitoring system and performs a diagnosis of the state of the dam. Its first copy, installed in 1992 on the Ridracoli Dam (Italy), is still operational 24/7/365. It has been installed on several dams in Italy and abroad (e.g., the Itaipu Dam in Brazil), as well as on landslide sites under the name Eydenet and on monuments under the name Kaleidos. Mistral is a registered trademark of CESI.

Government by algorithm

From Wikipedia, the free encyclopedia
 
Government by algorithm (also known as algorithmic regulation, regulation by algorithms, algorithmic governance, algocratic governance, algorithmic legal order or algocracy) is an alternative form of government or social ordering in which computer algorithms, especially artificial intelligence and blockchain, are applied to regulations, law enforcement, and generally any aspect of everyday life such as transportation or land registration. The term 'government by algorithm' appeared in academic literature as an alternative for 'algorithmic governance' in 2013. A related term, algorithmic regulation, is defined as setting standards and monitoring and modifying behaviour by means of computational algorithms; automation of the judiciary is within its scope.

Government by algorithm raises new challenges that are not captured in the e-government literature and the practice of public administration. Some sources equate cyberocracy, which is a hypothetical form of government that rules by the effective use of information, with algorithmic governance, although algorithms are not the only means of processing information. Nello Cristianini and Teresa Scantamburlo argued that the combination of a human society and certain regulation algorithms (such as reputation-based scoring) forms a social machine.

History

In 1962, the director of the Institute for Information Transmission Problems of the Russian Academy of Sciences in Moscow (later Kharkevich Institute), Alexander Kharkevich, published an article in the journal "Communist" about a computer network for processing information and control of the economy. In fact, he proposed to make a network like the modern Internet for the needs of algorithmic governance. This created a serious concern among CIA analysts. In particular, Arthur M. Schlesinger Jr. warned that "by 1970 the USSR may have a radically new production technology, involving total enterprises or complexes of industries, managed by closed-loop, feedback control employing self-teaching computers".

Between 1971 and 1973, the Chilean government carried out Project Cybersyn during the presidency of Salvador Allende. This project was aimed at constructing a distributed decision support system to improve the management of the national economy.

Also in the 1960s and 1970s, Herbert A. Simon championed expert systems as tools for the rationalization and evaluation of administrative behavior. The automation of rule-based processes has been an ambition of tax agencies over many decades, with varying success. Early work from this period includes Thorne McCarty's influential TAXMAN project in the US and Ronald Stamper's LEGOL project in the UK. In 1993, the computer scientist Paul Cockshott of the University of Glasgow and the economist Allin Cottrell of Wake Forest University published the book Towards a New Socialism, in which they claim to demonstrate the possibility of a democratically planned economy built on modern computer technology. The Honourable Justice Michael Kirby published a paper in 1998 in which he expressed optimism that the then-available computer technologies, such as legal expert systems, could evolve into computer systems that would strongly affect the practice of courts. In 2006, attorney Lawrence Lessig, known for the slogan "Code is law", wrote:

"[T]he invisible hand of cyberspace is building an architecture that is quite the opposite of its architecture at its birth. This invisible hand, pushed by government and by commerce, is constructing an architecture that will perfect control and make highly efficient regulation possible"

Since the 2000s, algorithms have been designed and used to automatically analyze surveillance videos.

Sociologist A. Aneesh used the idea of algorithmic governance in 2002 in his theory of algocracy. Aneesh differentiated algocratic systems from bureaucratic systems (legal-rational regulation) as well as market-based systems (price-based regulation).

In 2013, the term algorithmic regulation was coined by Tim O'Reilly, founder and CEO of O'Reilly Media Inc.:

Sometimes the "rules" aren't really even rules. Gordon Bruce, the former CIO of the city of Honolulu, explained to me that when he entered government from the private sector and tried to make changes, he was told, "That's against the law." His reply was "OK. Show me the law." "Well, it isn't really a law. It's a regulation." "OK. Show me the regulation." "Well, it isn't really a regulation. It's a policy that was put in place by Mr. Somebody twenty years ago." "Great. We can change that!"

[...] Laws should specify goals, rights, outcomes, authorities, and limits. If specified broadly, those laws can stand the test of time. Regulations, which specify how to execute those laws in much more detail, should be regarded in much the same way that programmers regard their code and algorithms, that is, as a constantly updated toolset to achieve the outcomes specified in the laws. [...] It's time for government to enter the age of big data. Algorithmic regulation is an idea whose time has come.

In 2017, the Justice Ministry of Ukraine ran experimental government auctions using blockchain technology to ensure transparency and hinder corruption in governmental transactions. "Government by Algorithm?" was the central theme introduced at the Data for Policy 2017 conference, held on 6-7 September 2017 in London, UK.

Examples

Smart cities

A smart city is an urban area where collected surveillance data is used to improve various operations. Increases in computational power allow more automated decision making and the replacement of public agencies by algorithmic governance. In particular, the combined use of artificial intelligence and blockchains for the IoT might lead to the creation of sustainable smart city ecosystems. Intelligent street lighting in Glasgow is an example of the benefits brought by the government application of AI algorithms.

In 2021, the cryptocurrency millionaire Jeffrey Berns proposed letting tech firms run local governments in Nevada. Berns had bought 67,000 acres (271 km²) in Nevada's rural Storey County for $170,000,000 (£121,000,000) in 2018 in order to develop a smart city with more than 36,000 residents generating an annual output of $4,600,000,000. Cryptocurrency would be allowed for payments.

Reputation systems

Tim O'Reilly suggested that data sources and reputation systems combined in algorithmic regulation can outperform traditional regulations. For instance, once taxi drivers are rated by passengers, the quality of their services will improve automatically and "drivers who provide poor service are eliminated". O'Reilly's suggestion is based on the control-theoretic concept of a feedback loop: improvements and deteriorations of reputation enforce desired behavior. The use of feedback loops for the management of social systems had already been suggested in management cybernetics by Stafford Beer.
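A minimal sketch of that feedback loop, under the assumption of an exponential-moving-average score update and a fixed cutoff (both invented for illustration), might look like this:

def update_score(score, rating, weight=0.1):
    """Nudge the running reputation score toward the latest rating
    (an exponential moving average, one simple choice of update rule)."""
    return (1 - weight) * score + weight * rating

def allowed_to_operate(score, threshold=3.5):
    """Drivers whose score falls below the threshold are removed from
    the platform, which closes the feedback loop."""
    return score >= threshold

score = 4.0
for rating in [5, 2, 1, 1, 2]:        # a run of mostly poor ratings
    score = update_score(score, rating)
    print(round(score, 2), allowed_to_operate(score))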

These connections are explored by Nello Cristianini and Teresa Scantamburlo, where the reputation-credit scoring system is modeled as an incentive given to citizens and computed by a social machine, so that rational agents would be motivated to increase their score by adapting their behaviour. Several ethical aspects of that technology are still being discussed.

China's Social Credit System is closely related to China's mass surveillance systems such as Skynet, which incorporates facial recognition, big data analysis technology and AI. This system provides assessments of the trustworthiness of individuals and businesses. Behavior the system considers misconduct includes jaywalking and failing to correctly sort personal waste; behavior listed as a positive factor in credit ratings includes donating blood, donating to charity, volunteering for community services, and so on. The Chinese Social Credit System enables punishments for "untrustworthy" citizens, such as denying the purchase of tickets, and rewards for "trustworthy" citizens, such as shorter waiting times in hospitals and government agencies.

Smart contracts

Smart contracts, cryptocurrencies, and decentralized autonomous organizations are mentioned as means to replace traditional ways of governance. Cryptocurrencies are currencies enabled by algorithms without a governmental central bank. Central bank digital currency often employs similar technology, but is differentiated by the fact that it is issued by a central bank; it is soon to be employed by major unions and governments such as the European Union and China. Smart contracts are self-executing contracts whose objectives are the reduction of the need for trusted governmental intermediaries, arbitration and enforcement costs. A decentralized autonomous organization is an organization represented by smart contracts that is transparent, controlled by shareholders and not influenced by a central government. Smart contracts have been discussed for use in such applications as (temporary) employment contracts and the automatic transfer of funds and property (e.g. inheritance, upon registration of a death certificate). Some countries such as Georgia and Sweden have already launched blockchain programs focusing on property (land titles and real estate ownership), and Ukraine is also looking at other areas such as state registers.
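As a toy illustration of the self-executing idea (in plain Python rather than code for any real blockchain, with an invented registry and release condition), an inheritance transfer triggered by the registration of a death certificate could be sketched as:

class Registry:
    """Stand-in for the ledger that actually moves funds."""
    def transfer(self, amount, to):
        print(f"Transferring {amount} to {to}")

class InheritanceContract:
    """Holds a balance and releases it to the heir automatically once a
    death certificate for the owner is registered, with no intermediary."""
    def __init__(self, owner, heir, amount):
        self.owner, self.heir, self.amount = owner, heir, amount
        self.released = False

    def on_death_certificate(self, registry, person):
        # The contract reacts to a registry event; if the condition is
        # met, the transfer executes without further approval.
        if person == self.owner and not self.released:
            registry.transfer(self.amount, to=self.heir)
            self.released = True

contract = InheritanceContract("Alice", "Bob", 1000)
contract.on_death_certificate(Registry(), "Alice")   # Transferring 1000 to Bob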

Algorithms in government agencies

According to a Stanford University study, 45% of the studied US federal agencies had experimented with AI and related machine learning (ML) tools by 2020. The following table lists the US federal agencies with the largest numbers of artificial intelligence use cases.

Agency Name Number of Use Cases
Office of Justice Programs 12
Securities and Exchange Commission 10
National Aeronautics and Space Administration 9
Food and Drug Administration 8
United States Geological Survey 8
United States Postal Service 8
Social Security Administration 7
United States Patent and Trademark Office 6
Bureau of Labor Statistics 5
U.S. Customs and Border Protection 4

53% of these applications were produced by in-house experts; commercial providers of the remaining applications include Palantir Technologies.

From 2012, the NOPD started a collaboration with Palantir Technologies in the field of predictive policing. Besides Palantir's Gotham software, other numerical analysis software used by police agencies (such as the NCRIC) includes SAS.

In the fight against money laundering, FinCEN employs the FinCEN Artificial Intelligence System (FAIS).

National health administration entities and organisations such as AHIMA (American Health Information Management Association) hold medical records. Medical records serve as the central repository for planning patient care and documenting communication among the patient, the health care provider, and the other professionals contributing to the patient's care. In the EU, work is ongoing on a European Health Data Space which supports the use of health data.

The US Department of Homeland Security has employed the software ATLAS, which runs on Amazon's cloud. It scanned more than 16.5 million records of naturalized Americans and flagged approximately 124,000 of them for manual analysis and review by USCIS officers regarding denaturalization. They were flagged due to potential fraud, public safety and national security issues. Some of the scanned data came from the Terrorist Screening Database and the National Crime Information Center.

In Estonia, artificial intelligence is used in its e-government to make it more automated and seamless. A virtual assistant will guide citizens through any interactions they have with the government. Automated and proactive services "push" services to citizens at key events of their lives (including births, bereavements, unemployment, ...). One example is the automated registering of babies when they are born. Estonia's X-Road system will also be rebuilt to include even more privacy control and accountability into the way the government uses citizens' data.

In Costa Rica, the possible digitalization of public procurement activities (i.e. tenders for public works, ...) has been investigated. The paper discussing this possibility mentions that the use of ICT in procurement has several benefits such as increasing transparency, facilitating digital access to public tenders, reducing direct interaction between procurement officials and companies at moments of high integrity risk, increasing outreach and competition, and easier detection of irregularities.

Besides using e-tenders for regular public works (construction of buildings, roads, ...), e-tenders can also be used for reforestation projects and other carbon sink restoration projects. Carbon sink restoration projects may be part of nationally determined contributions plans in order to reach national Paris Agreement goals.

Government procurement audit software can also be used. Audits are performed in some countries after subsidies have been received.

Some government agencies provide track and trace systems for services they offer. An example is track and trace for applications done by citizens (i.e. driving license procurement).

Some government services use issue tracking systems to keep track of ongoing issues.

Justice by algorithm

COMPAS software is used in the US to assess the risk of recidivism in courts.

According to the statement of Beijing Internet Court, China is the first country to create an internet court or cyber court. The Chinese AI judge is a virtual recreation of an actual female judge. She "will help the court's judges complete repetitive basic work, including litigation reception, thus enabling professional practitioners to focus better on their trial work".

Also Estonia plans to employ artificial intelligence to decide small-claim cases of less than €7,000.

Lawbots can perform tasks that are typically done by paralegals or young associates at law firms. One such technology used by US law firms to assist in legal research is from ROSS Intelligence; others vary in sophistication and dependence on scripted algorithms. Another legal technology chatbot application is DoNotPay.

AI in education

Due to the COVID-19 pandemic in 2020, in-person final exams were impossible for thousands of students. The public high school Westminster High employed algorithms to assign grades. The UK's Department for Education also employed a statistical model to assign final grades in A-levels, due to the pandemic.

Besides use in grading, software systems and AI are also optimizing coursework and are used in preparation for college entrance exams.

AI teaching assistants are being developed and used for education (e.g. Georgia Tech's Jill Watson), and there is also an ongoing debate on whether teachers can be entirely replaced by AI systems (e.g. in homeschooling).

AI politicians

In 2018, an activist named Michihito Matsuda ran for mayor in the Tama city area of Tokyo as a human proxy for an artificial intelligence program. While election posters and campaign material used the term robot, and displayed stock images of a feminine android, the "AI mayor" was in fact a machine learning algorithm trained using Tama city datasets. The project was backed by high-profile executives Tetsuzo Matsumoto of Softbank and Norio Murakami of Google. Michihito Matsuda came third in the election, being defeated by Hiroyuki Abe. Organisers claimed that the 'AI mayor' was programmed to analyze citizen petitions put forward to the city council in a more 'fair and balanced' way than human politicians.

In 2019, AI-powered messenger chatbot SAM participated in the discussions on social media connected to an electoral race in New Zealand. The creator of SAM, Nick Gerritsen, believes SAM will be advanced enough to run as a candidate by late 2020, when New Zealand has its next general election.

Management of infection

In February 2020, China launched a mobile app to deal with the Coronavirus outbreak called "close-contact-detector". Users are asked to enter their name and ID number. The app is able to detect "close contact" using surveillance data (i.e. using public transport records, including trains and flights) and therefore a potential risk of infection. Every user can also check the status of three other users. To make this inquiry users scan a Quick Response (QR) code on their smartphones using apps like Alipay or WeChat. The close contact detector can be accessed via popular mobile apps including Alipay. If a potential risk is detected, the app not only recommends self-quarantine, it also alerts local health officials.
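The core check such an app performs can be sketched as matching a user's travel records against those of confirmed cases; the record format below is an invented simplification of the surveillance data actually used.

# Trips of confirmed cases, as (mode, service number, date) tuples.
confirmed_case_trips = {
    ("train", "G101", "2020-02-01"),
    ("flight", "CA123", "2020-02-03"),
}

def is_close_contact(user_trips):
    """A user is flagged if any of their trips overlaps a confirmed case's trip."""
    return any(trip in confirmed_case_trips for trip in user_trips)

print(is_close_contact([("train", "G101", "2020-02-01")]))   # True
print(is_close_contact([("train", "G205", "2020-02-01")]))   # False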

Alipay also has the Alipay Health Code which is used to keep citizens safe. This system generates a QR code in one of three colors (green, yellow, or red) after users fill in a form on Alipay with personal details. A green code enables the holder to move around unrestricted. A yellow code requires the user to stay at home for seven days and red means a two-week quarantine. In some cities such as Hangzhou, it has become nearly impossible to get around without showing one's Alipay code.
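The actual criteria behind the three colors are not public, but a decision rule of this general shape (with entirely illustrative conditions) conveys how a submitted form maps to a code:

def health_code(confirmed_contact, visited_high_risk_area, symptoms_reported):
    """Map a submitted form to one of the three colors."""
    if confirmed_contact or symptoms_reported:
        return "red"      # two-week quarantine
    if visited_high_risk_area:
        return "yellow"   # seven days at home
    return "green"        # free to move around

print(health_code(confirmed_contact=False,
                  visited_high_risk_area=True,
                  symptoms_reported=False))   # yellow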

In Cannes, France, monitoring software has been used on footage shot by CCTV cameras, allowing authorities to monitor compliance with local social distancing and mask-wearing rules during the COVID-19 pandemic. The system does not store identifying data; rather, it alerts city authorities and police where breaches of the mask-wearing rules are spotted (allowing fines to be issued where needed). The algorithms used by the monitoring software can be incorporated into existing surveillance systems in public spaces (hospitals, stations, airports, shopping centres, ...).

Cellphone data is used to locate infected patients in South Korea, Taiwan, Singapore and other countries. In March 2020, the Israeli government enabled security agencies to track the mobile phone data of people presumed to have coronavirus. The measure was taken to enforce quarantine and protect those who might come into contact with infected citizens. Also in March 2020, Deutsche Telekom shared private cellphone data with the federal government agency, the Robert Koch Institute, in order to research and prevent the spread of the virus. Russia deployed facial recognition technology to detect quarantine breakers. Italian regional health commissioner Giulio Gallera said that "40% of people are continuing to move around anyway", as he had been informed by mobile phone operators. In the US, Europe and the UK, Palantir Technologies has been engaged to provide COVID-19 tracking services.

Prevention and management of environmental disasters

Tsunamis can be detected by tsunami warning systems, which can make use of AI. Floods can also be detected using AI systems. Locust breeding areas can be approximated using machine learning, which could help stop locust swarms in an early phase. Wildfires can be predicted using AI systems, and wildfire detection is also possible with AI systems (e.g. through satellite data, aerial imagery, and personnel position), which can additionally help in the evacuation of people during wildfires.
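As a hedged sketch of the kind of supervised model such prediction systems rest on, the example below fits a classifier to synthetic weather features labeled by whether a wildfire occurred; the features and data are placeholders, not a real dataset.

from sklearn.ensemble import RandomForestClassifier

# columns: temperature (C), relative humidity (%), wind speed (km/h)
X_train = [[35, 10, 40], [20, 60, 10], [34, 15, 30],
           [18, 70, 5], [36, 12, 35], [22, 55, 8]]
y_train = [1, 0, 1, 0, 1, 0]          # 1 = wildfire occurred, 0 = none

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

print(model.predict([[37, 8, 45]]))   # hot, dry, windy conditions -> likely [1]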

Reception

Benefits

Algorithmic regulation is supposed to be a system of governance where more exact data, collected from citizens via their smart devices and computers, is used to more efficiently organize human life as a collective. As Deloitte estimated in 2017, automation of US government work could save 96.7 million federal hours annually, with a potential savings of $3.3 billion; at the high end, this rises to 1.2 billion hours and potential annual savings of $41.1 billion.

Criticism

There are potential risks associated with the use of algorithms in government. These include algorithms becoming susceptible to bias, a lack of transparency in how an algorithm may make decisions, and questions of accountability for any such decisions.

There is also a serious concern that gaming by the regulated parties might occur: once more transparency is brought into decision making by algorithmic governance, regulated parties might try to manipulate the outcome in their own favor and might even use adversarial machine learning. According to Harari, the conflict between democracy and dictatorship is seen as a conflict of two different data-processing systems; AI and algorithms may swing the advantage toward the latter by processing enormous amounts of information centrally.

In 2018, the Netherlands employed the algorithmic system SyRI (Systeem Risico Indicatie) to detect citizens perceived to be at high risk of committing welfare fraud; it quietly flagged thousands of people to investigators. This caused a public protest. The District Court of The Hague shut down SyRI, referencing Article 8 of the European Convention on Human Rights (ECHR).

The contributors of the 2019 documentary iHuman expressed apprehension of "infinitely stable dictatorships" created by government AI.

In 2020, algorithms assigning exam grades to students in the UK sparked open protest under the banner "Fuck the algorithm." The protest was successful and the grades were withdrawn.

In 2020, the US government software ATLAS, which runs on Amazon's cloud, sparked an uproar from activists and Amazon's own employees.

In 2021, the Eticas Foundation launched a database of governmental algorithms called the Observatory of Algorithms with Social Impact (OASI).

Algorithmic bias and transparency

An initial approach towards transparency included the open-sourcing of algorithms: source code can be inspected and improvements can be proposed through source-code-hosting facilities.

Public acceptance

A 2019 poll conducted by IE University's Center for the Governance of Change in Spain found that 25% of citizens from selected European countries were somewhat or totally in favor of letting an artificial intelligence make important decisions about how their country is run. The following table lists the results by country:

Country Percentage
France 25%
Germany 31%
Ireland 29%
Italy 28%
Netherlands 43%
Portugal 19%
Spain 26%
UK 31%

Researchers found some evidence that when citizens perceive their political leaders or security providers to be untrustworthy, disappointing, or immoral, they prefer to replace them with artificial agents, which they consider more reliable. The evidence comes from survey experiments conducted on university students of all genders.

In popular culture

The novels Daemon and Freedom™ by Daniel Suarez describe a fictional scenario of global algorithmic regulation.

Butane

From Wikipedia, the free encyclopedia ...