Thursday, December 9, 2021

Expert system

From Wikipedia, the free encyclopedia
 
A Symbolics Lisp Machine: an early platform for expert systems

In artificial intelligence, an expert system is a computer system emulating the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural code. The first expert systems were created in the 1970s and then proliferated in the 1980s. Expert systems were among the first truly successful forms of artificial intelligence (AI) software. An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.

History

Early development

Soon after the dawn of modern computers in the late 1940s and early 1950s, researchers started to realize the immense potential these machines had for modern society. One of the first challenges was to make such machines capable of “thinking” like humans, and in particular of making important decisions the way humans do. The medical and healthcare field presented the tantalizing challenge of enabling these machines to make medical diagnostic decisions.

Thus, in the late 1950s, right after the information age had fully arrived, researchers started experimenting with the prospect of using computer technology to emulate human decision-making. For example, biomedical researchers started creating computer-aided systems for diagnostic applications in medicine and biology. These early diagnostic systems used patients’ symptoms and laboratory test results as inputs to generate a diagnostic outcome, and were often described as early forms of expert systems. However, researchers realized that there were significant limitations to traditional methods such as flow charts, statistical pattern matching, or probability theory.

Formal introduction & later developments

These limitations gradually led to the development of expert systems, which used knowledge-based approaches. Among these expert systems in medicine were the MYCIN and INTERNIST-I expert systems and later, in the mid-1980s, CADUCEUS.

Expert systems were formally introduced around 1965 by the Stanford Heuristic Programming Project led by Edward Feigenbaum, who is sometimes termed the "father of expert systems"; other key early contributors were Bruce Buchanan and Randall Davis. The Stanford researchers tried to identify domains where expertise was highly valued and complex, such as diagnosing infectious diseases (Mycin) and identifying unknown organic molecules (Dendral). The idea that "intelligent systems derive their power from the knowledge they possess rather than from the specific formalisms and inference schemes they use" – as Feigenbaum said – was at the time a significant step forward, since past research had focused on heuristic computational methods, culminating in attempts to develop very general-purpose problem solvers (foremost the joint work of Allen Newell and Herbert Simon). Expert systems became some of the first truly successful forms of artificial intelligence (AI) software.

Research on expert systems was also active in France. While in the US the focus tended to be on rule-based systems, first on systems hard-coded on top of LISP programming environments and then on expert system shells developed by vendors such as Intellicorp, in France research focused more on systems developed in Prolog. The advantage of expert system shells was that they were somewhat easier for nonprogrammers to use. The advantage of Prolog environments was that they were not focused only on if-then rules; Prolog environments provided a much better realization of a complete first-order logic environment.

In the 1980s, expert systems proliferated. Universities offered expert system courses and two thirds of the Fortune 500 companies applied the technology in daily business activities. Interest was international with the Fifth Generation Computer Systems project in Japan and increased research funding in Europe.

In 1981, the first IBM PC, with the PC DOS operating system, was introduced. The imbalance between the high affordability of the relatively powerful chips in the PC, compared to the much more expensive cost of processing power in the mainframes that dominated the corporate IT world at the time, created a new type of architecture for corporate computing, termed the client–server model. Calculations and reasoning could be performed at a fraction of the price of a mainframe using a PC. This model also enabled business units to bypass corporate IT departments and directly build their own applications. As a result, client server had a tremendous impact on the expert systems market. Expert systems were already outliers in much of the business world, requiring new skills that many IT departments did not have and were not eager to develop. They were a natural fit for new PC-based shells that promised to put application development into the hands of end users and experts. Until then, the main development environment for expert systems had been high end Lisp machines from Xerox, Symbolics, and Texas Instruments. With the rise of the PC and client server computing, vendors such as Intellicorp and Inference Corporation shifted their priorities to developing PC based tools. Also, new vendors, often financed by venture capital (such as Aion Corporation, Neuron Data, Exsys, and many others), started appearing regularly.

The first expert system to be used in a design capacity for a large-scale product was the SID (Synthesis of Integral Design) software program, developed in 1982. Written in LISP, SID generated 93% of the VAX 9000 CPU logic gates. Input to the software was a set of rules created by several expert logic designers. SID expanded the rules and generated software logic synthesis routines many times the size of the rules themselves. Surprisingly, the combination of these rules resulted in an overall design that exceeded the capabilities of the experts themselves, and in many cases out-performed the human counterparts. While some rules contradicted others, top-level control parameters for speed and area provided the tie-breaker. The program was highly controversial, but used nevertheless due to project budget constraints. It was terminated by logic designers after the VAX 9000 project completion.

Until the mid-1970s, expectations of what expert systems could accomplish in many fields tended to be extremely optimistic. At the beginning of these early studies, researchers hoped to develop entirely automatic (i.e., completely computerized) expert systems, and people's expectations of what computers could do were frequently too idealistic. This situation changed radically after Richard M. Karp published his breakthrough paper “Reducibility among Combinatorial Problems” in the early 1970s. Thanks to Karp's work it became clear that there are inherent limits on what computer algorithms can and cannot do, and that many of the computational problems underlying this type of expert system are subject to pragmatic limitations. These findings laid the groundwork for the next developments in the field.

In the 1990s and beyond, the term expert system and the idea of a standalone AI system mostly dropped from the IT lexicon. There are two interpretations of this. One is that "expert systems failed": the IT world moved on because expert systems did not deliver on their over hyped promise. The other is the mirror opposite, that expert systems were simply victims of their success: as IT professionals grasped concepts such as rule engines, such tools migrated from being standalone tools for developing special purpose expert systems, to being one of many standard tools. Many of the leading major business application suite vendors (such as SAP, Siebel, and Oracle) integrated expert system abilities into their suite of products as a way of specifying business logic – rule engines are no longer simply for defining the rules an expert would use but for any type of complex, volatile, and critical business logic; they often go hand in hand with business process automation and integration environments.

Current approaches to expert systems

The limitations of the previous type of expert system led researchers to develop new approaches: more efficient, flexible, and powerful ways to simulate the human decision-making process. Some of these approaches are based on new methods of artificial intelligence (AI), in particular on machine learning and data mining with a feedback mechanism; recurrent neural networks often take advantage of such mechanisms. (See also the Disadvantages section.)

Modern systems can incorporate new knowledge more easily and thus update themselves easily. Such systems can generalize better from existing knowledge and deal with vast amounts of complex data; the subject of big data is closely related here. These types of expert systems are sometimes called "intelligent systems."

Software architecture

An illustrative example of backward chaining, from a 1990 master's thesis

An expert system is an example of a knowledge-based system. Expert systems were the first commercial systems to use a knowledge-based architecture. A knowledge-based system is essentially composed of two sub-systems: the knowledge base and the inference engine.

The knowledge base represents facts about the world. In early expert systems such as Mycin and Dendral, these facts were represented mainly as flat assertions about variables. In later expert systems developed with commercial shells, the knowledge base took on more structure and used concepts from object-oriented programming. The world was represented as classes, subclasses, and instances and assertions were replaced by values of object instances. The rules worked by querying and asserting values of the objects.

The inference engine is an automated reasoning system that evaluates the current state of the knowledge-base, applies relevant rules, and then asserts new knowledge into the knowledge base. The inference engine may also include abilities for explanation, so that it can explain to a user the chain of reasoning used to arrive at a particular conclusion by tracing back over the firing of rules that resulted in the assertion.

There are mainly two modes for an inference engine: forward chaining and backward chaining. The different approaches are dictated by whether the inference engine is being driven by the antecedent (left-hand side) or the consequent (right-hand side) of the rule. In forward chaining an antecedent fires and asserts the consequent. For example, consider the following rule:

R1: Man(x) ⇒ Mortal(x)

A simple example of forward chaining would be to assert Man(Socrates) to the system and then trigger the inference engine. It would match R1 and assert Mortal(Socrates) into the knowledge base.
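As a minimal sketch of the idea (hypothetical Python, not any particular shell's API), forward chaining can be implemented as a loop that keeps firing rules until no new facts appear:

```python
# Forward chaining: derive new facts from known facts by firing rules.
# Facts are (predicate, argument) pairs; each rule maps an antecedent
# predicate to a consequent predicate, as in R1: Man(x) => Mortal(x).

facts = {("Man", "Socrates")}
rules = [("Man", "Mortal")]  # R1

def forward_chain(facts, rules):
    """Repeatedly fire rules until no new facts can be asserted."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            for pred, arg in list(derived):
                if pred == antecedent and (consequent, arg) not in derived:
                    derived.add((consequent, arg))
                    changed = True
    return derived

print(forward_chain(facts, rules))  # includes ("Mortal", "Socrates")
```

Asserting Man(Socrates) and triggering the engine thus matches R1 and adds Mortal(Socrates), exactly as described above.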

Backward chaining is a bit less straightforward. In backward chaining the system looks at possible conclusions and works backward to see whether they might be true. So if the system were trying to determine whether Mortal(Socrates) is true, it would find R1 and query the knowledge base to see whether Man(Socrates) is true. One of the early innovations of expert system shells was to integrate inference engines with a user interface. This could be especially powerful with backward chaining: if the system needs a particular fact but does not have it, it can simply generate an input screen and ask the user whether the information is known. In this example, it could use R1 to ask the user whether Socrates was a man and then use that new information accordingly.
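A corresponding backward-chaining sketch (again hypothetical; the `ask` callback stands in for the shell-generated input screen described above):

```python
# Backward chaining: start from a goal and work backward through rules.
# Facts are (predicate, argument) pairs; rules map antecedent -> consequent.

def prove(goal, facts, rules, ask=None):
    """Try to establish a goal; fall back to asking the user for unknown facts."""
    if goal in facts:
        return True
    pred, arg = goal
    for antecedent, consequent in rules:
        # To prove Mortal(x) via R1, try to prove Man(x) first.
        if consequent == pred and prove((antecedent, arg), facts, rules, ask):
            return True
    # Unknown and underivable: query the user, as early shells did.
    if ask is not None and ask(goal):
        facts.add(goal)
        return True
    return False

facts = {("Man", "Socrates")}
rules = [("Man", "Mortal")]  # R1
print(prove(("Mortal", "Socrates"), facts, rules))  # True
```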

The use of rules to explicitly represent knowledge also enabled explanation abilities. In the simple example above if the system had used R1 to assert that Socrates was Mortal and a user wished to understand why Socrates was mortal they could query the system and the system would look back at the rules which fired to cause the assertion and present those rules to the user as an explanation. In English, if the user asked "Why is Socrates Mortal?" the system would reply "Because all men are mortal and Socrates is a man". A significant area for research was the generation of explanations from the knowledge base in natural English rather than simply by showing the more formal but less intuitive rules.

As expert systems evolved, many new techniques were incorporated into various types of inference engines. Some of the most important of these were:

  • Truth maintenance. These systems record the dependencies in a knowledge-base so that when facts are altered, dependent knowledge can be altered accordingly. For example, if the system learns that Socrates is no longer known to be a man it will revoke the assertion that Socrates is mortal.
  • Hypothetical reasoning. In this, the knowledge base can be divided up into many possible views, a.k.a. worlds. This allows the inference engine to explore multiple possibilities in parallel. For example, the system may want to explore the consequences of both assertions, what will be true if Socrates is a Man and what will be true if he is not?
  • Uncertainty systems. One of the first extensions of simply using rules to represent knowledge was also to associate a probability with each rule. So, not to assert that Socrates is mortal, but to assert Socrates may be mortal with some probability value. Simple probabilities were extended in some systems with sophisticated mechanisms for uncertain reasoning, such as Fuzzy logic, and combination of probabilities.
  • Ontology classification. With the addition of object classes to the knowledge base, a new type of reasoning was possible. Along with reasoning simply about object values, the system could also reason about object structures. In this simple example, Man can represent an object class and R1 can be redefined as a rule that defines the class of all men. These types of special purpose inference engines are termed classifiers. Although they were not highly used in expert systems, classifiers are very powerful for unstructured volatile domains, and are a key technology for the Internet and the emerging Semantic Web.
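The first of these, truth maintenance, can be illustrated with a toy dependency-recording scheme (an assumed structure, far simpler than a real truth-maintenance system):

```python
# Toy truth maintenance: each derived fact records the facts that support it,
# so retracting a premise also retracts everything derived from it.

dependencies = {}   # derived fact -> set of supporting facts
facts = set()

def assert_fact(fact, supports=()):
    facts.add(fact)
    if supports:
        dependencies[fact] = set(supports)

def retract(fact):
    facts.discard(fact)
    # Cascade: retract any derived fact whose support included this one.
    for derived, support in list(dependencies.items()):
        if fact in support:
            del dependencies[derived]
            retract(derived)

assert_fact("Man(Socrates)")
assert_fact("Mortal(Socrates)", supports=["Man(Socrates)"])
retract("Man(Socrates)")
print("Mortal(Socrates)" in facts)  # False: the derived fact was revoked
```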

Advantages

The goal of knowledge-based systems is to make the critical information required for the system to work explicit rather than implicit. In a traditional computer program the logic is embedded in code that can typically only be reviewed by an IT specialist. With an expert system the goal was to specify the rules in a format that was intuitive and easily understood, reviewed, and even edited by domain experts rather than IT experts. The benefits of this explicit knowledge representation were rapid development and ease of maintenance.

Ease of maintenance is the most obvious benefit. This was achieved in two ways. First, by removing the need to write conventional code, expert systems avoided many of the problems that even small changes can cause in a conventional system. Essentially, the logical flow of the program (at least at the highest level) was simply a given: invoke the inference engine. This was also a reason for the second benefit: rapid prototyping. With an expert system shell it was possible to enter a few rules and have a prototype developed in days rather than the months or years typically associated with complex IT projects.

A claim for expert system shells that was often made was that they removed the need for trained programmers and that experts could develop systems themselves. In reality, this was seldom if ever true. While the rules for an expert system were more comprehensible than typical computer code, they still had a formal syntax where a misplaced comma or other character could cause havoc as with any other computer language. Also, as expert systems moved from prototypes in the lab to deployment in the business world, issues of integration and maintenance became far more critical. Inevitably demands to integrate with, and take advantage of, large legacy databases and systems arose. To accomplish this, integration required the same skills as any other type of system.

Disadvantages

The most common disadvantage cited for expert systems in the academic literature is the knowledge acquisition problem. Obtaining the time of domain experts for any software application is always difficult, but for expert systems it was especially difficult because the experts were by definition highly valued and in constant demand by the organization. As a result of this problem, a great deal of research in the later years of expert systems was focused on tools for knowledge acquisition, to help automate the process of designing, debugging, and maintaining rules defined by experts. However, when looking at the life-cycle of expert systems in actual use, other problems – essentially the same problems as those of any other large system – seem at least as critical as knowledge acquisition: integration, access to large databases, and performance.

Performance could be especially problematic because early expert systems were built using tools (such as earlier Lisp versions) that interpreted code expressions without first compiling them. This provided a powerful development environment, but with the drawback that it was virtually impossible to match the efficiency of the fastest compiled languages (such as C). System and database integration were difficult for early expert systems because the tools were mostly in languages and platforms that were neither familiar to nor welcome in most corporate IT environments – programming languages such as Lisp and Prolog, and hardware platforms such as Lisp machines and personal computers. As a result, much effort in the later stages of expert system tool development was focused on integrating with legacy environments such as COBOL and large database systems, and on porting to more standard platforms. These issues were resolved mainly by the client–server paradigm shift, as PCs were gradually accepted in the IT environment as a legitimate platform for serious business system development and as affordable minicomputer servers provided the processing power needed for AI applications.

Another major challenge of expert systems emerges as the size of the knowledge base increases, which increases the processing complexity. For instance, when an expert system with 100 million rules was envisioned as the ultimate expert system, it became obvious that such a system would be too complex and would face too many computational problems. An inference engine would have to be able to process huge numbers of rules to reach a decision.

Verifying that decision rules are consistent with one another is also a challenge when there are many rules. Such a problem usually leads to a satisfiability (SAT) formulation, the well-known NP-complete Boolean satisfiability problem. If we assume only binary variables, say n of them, then the corresponding search space is of size 2^n; thus, the search space grows exponentially with n.
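A brute-force sketch (illustrative only; the constraint representation is an assumption) makes the 2^n blow-up concrete:

```python
# Brute-force consistency check over n binary variables: the search space has
# 2**n assignments, which is why rule-consistency checking scales exponentially.
from itertools import product

def consistent(rule_constraints, n):
    """Return True if some assignment of n binary variables satisfies every
    constraint. Each constraint is a function from an assignment tuple to bool."""
    return any(all(c(assign) for c in rule_constraints)
               for assign in product([False, True], repeat=n))

# Two contradictory rules over variable 0: no assignment satisfies both.
contradictory = [lambda a: a[0], lambda a: not a[0]]
print(consistent(contradictory, 3))                 # False
print(consistent([lambda a: a[0] or a[1]], 3))      # True
```

Even this tiny checker enumerates 2**n assignments, so doubling the variable count squares the work, which is the practical face of the NP-completeness result.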

There are also questions about how to prioritize the use of rules in order to operate more efficiently, and how to resolve ambiguities (for instance, if there are too many else-if sub-structures within a single rule).

Other problems relate to overfitting and overgeneralization when using known facts to generalize to cases not described explicitly in the knowledge base. Such problems exist with methods that employ machine learning approaches as well.

Another problem related to the knowledge base is how to update its knowledge quickly and effectively. Deciding where among many rules to add a new piece of knowledge is also challenging. Modern approaches that rely on machine learning methods are easier in this regard.

Because of the above challenges, it became clear that new approaches to AI were required in place of rule-based technologies. These new approaches are based on machine learning techniques, along with feedback mechanisms.

The key challenges for expert systems in medicine (if one considers computer-aided diagnostic systems to be modern expert systems), and perhaps in other application domains, include big data, existing regulations, healthcare practice, various algorithmic issues, and system assessment.

Applications

Hayes-Roth divides expert systems applications into 10 categories illustrated in the following table. The example applications were not in the original Hayes-Roth table, and some of them arose well afterward. Any application that is not footnoted is described in the Hayes-Roth book. Also, while these categories provide an intuitive framework to describe the space of expert systems applications, they are not rigid categories, and in some cases an application may show traits of more than one category.

Category | Problem addressed | Examples
Interpretation | Inferring situation descriptions from sensor data | Hearsay (speech recognition), PROSPECTOR
Prediction | Inferring likely consequences of given situations | Preterm Birth Risk Assessment
Diagnosis | Inferring system malfunctions from observables | CADUCEUS, MYCIN, PUFF, Mistral, Eydenet, Kaleidos
Design | Configuring objects under constraints | Dendral, Mortgage Loan Advisor, R1 (DEC VAX Configuration), SID (DEC VAX 9000 CPU)
Planning | Designing actions | Mission Planning for Autonomous Underwater Vehicle
Monitoring | Comparing observations to plan vulnerabilities | REACTOR
Debugging | Providing incremental solutions for complex problems | SAINT, MATHLAB, MACSYMA
Repair | Executing a plan to administer a prescribed remedy | Toxic Spill Crisis Management
Instruction | Diagnosing, assessing, and correcting student behaviour | SMH.PAL, Intelligent Clinical Training, STEAMER
Control | Interpreting, predicting, repairing, and monitoring system behaviors | Real Time Process Control, Space Shuttle Mission Control

Hearsay was an early attempt at solving voice recognition through an expert systems approach. For the most part this category of expert systems was not very successful. Hearsay and all interpretation systems are essentially pattern recognition systems, looking for patterns in noisy data; in the case of Hearsay, that meant recognizing phonemes in an audio stream. Other early examples analyzed sonar data to detect Russian submarines. These kinds of systems proved much more amenable to a neural network AI solution than to a rule-based approach.

CADUCEUS and MYCIN were medical diagnosis systems. The user describes their symptoms to the computer as they would to a doctor and the computer returns a medical diagnosis.

Dendral was a tool to study hypothesis formation in the identification of organic molecules. The general problem it solved—designing a solution given a set of constraints—was one of the most successful areas for early expert systems applied to business domains such as salespeople configuring Digital Equipment Corporation (DEC) VAX computers and mortgage loan application development.

SMH.PAL is an expert system for the assessment of students with multiple disabilities.

Mistral is an expert system for monitoring dam safety, developed in the 1990s by Ismes (Italy). It gets data from an automatic monitoring system and performs a diagnosis of the state of the dam. Its first copy, installed in 1992 on the Ridracoli Dam (Italy), is still operational 24/7/365. It has been installed on several dams in Italy and abroad (e.g., the Itaipu Dam in Brazil), on landslide sites under the name Eydenet, and on monuments under the name Kaleidos. Mistral is a registered trademark of CESI.

Government by algorithm

From Wikipedia, the free encyclopedia
 
Government by algorithm (also known as algorithmic regulation, regulation by algorithms, algorithmic governance, algocratic governance, algorithmic legal order, or algocracy) is an alternative form of government or social ordering in which computer algorithms, especially artificial intelligence and blockchain, are applied to regulations, law enforcement, and generally any aspect of everyday life such as transportation or land registration. The term 'government by algorithm' appeared in academic literature as an alternative for 'algorithmic governance' in 2013. A related term, algorithmic regulation, is defined as setting standards and monitoring and modifying behaviour by means of computational algorithms; automation of the judiciary is within its scope.

Government by algorithm raises new challenges that are not captured in the e-government literature and the practice of public administration. Some sources equate cyberocracy, which is a hypothetical form of government that rules by the effective use of information, with algorithmic governance, although algorithms are not the only means of processing information. Nello Cristianini and Teresa Scantamburlo argued that the combination of a human society and certain regulation algorithms (such as reputation-based scoring) forms a social machine.

History

In 1962, the director of the Institute for Information Transmission Problems of the Russian Academy of Sciences in Moscow (later Kharkevich Institute), Alexander Kharkevich, published an article in the journal "Communist" about a computer network for processing information and control of the economy. In fact, he proposed to make a network like the modern Internet for the needs of algorithmic governance. This created a serious concern among CIA analysts. In particular, Arthur M. Schlesinger Jr. warned that "by 1970 the USSR may have a radically new production technology, involving total enterprises or complexes of industries, managed by closed-loop, feedback control employing self-teaching computers".

Between 1971 and 1973, the Chilean government carried out Project Cybersyn during the presidency of Salvador Allende. This project was aimed at constructing a distributed decision support system to improve the management of the national economy.

Also in the 1960s and 1970s, Herbert A. Simon championed expert systems as tools for the rationalization and evaluation of administrative behavior. The automation of rule-based processes was an ambition of tax agencies over many decades, with varying success. Early work from this period includes Thorne McCarty's influential TAXMAN project in the US and Ronald Stamper's LEGOL project in the UK. In 1993, the computer scientist Paul Cockshott from the University of Glasgow and the economist Allin Cottrell from Wake Forest University published the book Towards a New Socialism, in which they claim to demonstrate the possibility of a democratically planned economy built on modern computer technology. The Honourable Justice Michael Kirby published a paper in 1998 in which he expressed optimism that the computer technologies then available, such as legal expert systems, could evolve into computer systems that would strongly affect the practice of courts. In 2006, the attorney Lawrence Lessig, known for the slogan "Code is law", wrote:

"[T]he invisible hand of cyberspace is building an architecture that is quite the opposite of its architecture at its birth. This invisible hand, pushed by government and by commerce, is constructing an architecture that will perfect control and make highly efficient regulation possible"

Since the 2000s, algorithms have been designed and used to automatically analyze surveillance videos.

Sociologist A. Aneesh used the idea of algorithmic governance in 2002 in his theory of algocracy. Aneesh differentiated algocratic systems from bureaucratic systems (legal-rational regulation) as well as market-based systems (price-based regulation).

In 2013, algorithmic regulation was coined by Tim O'Reilly, founder and CEO of O'Reilly Media Inc.:

Sometimes the "rules" aren't really even rules. Gordon Bruce, the former CIO of the city of Honolulu, explained to me that when he entered government from the private sector and tried to make changes, he was told, "That's against the law." His reply was "OK. Show me the law." "Well, it isn't really a law. It's a regulation." "OK. Show me the regulation." "Well, it isn't really a regulation. It's a policy that was put in place by Mr. Somebody twenty years ago." "Great. We can change that!"

[...] Laws should specify goals, rights, outcomes, authorities, and limits. If specified broadly, those laws can stand the test of time. Regulations, which specify how to execute those laws in much more detail, should be regarded in much the same way that programmers regard their code and algorithms, that is, as a constantly updated toolset to achieve the outcomes specified in the laws. [...] It's time for government to enter the age of big data. Algorithmic regulation is an idea whose time has come.

In 2017, the Justice Ministry of Ukraine ran experimental government auctions using blockchain technology to ensure transparency and hinder corruption in governmental transactions. "Government by Algorithm?" was the central theme introduced at the Data for Policy 2017 conference, held on 6–7 September 2017 in London, UK.

Examples

Smart cities

A smart city is an urban area where collected surveillance data is used to improve various operations in that area. Increases in computational power allow more automated decision-making and the replacement of public agencies by algorithmic governance. In particular, the combined use of artificial intelligence and blockchains for IoT might lead to the creation of sustainable smart-city ecosystems. Intelligent street lighting in Glasgow is an example of the benefits brought by government application of AI algorithms.

In 2021, the cryptocurrency millionaire Jeffrey Berns proposed letting tech firms run local governments in Nevada. Mr. Berns bought 67,000 acres (271 km²) in Nevada's rural Storey County for $170,000,000 (£121,000,000) in 2018 in order to develop a smart city of more than 36,000 residents generating an annual output of $4,600,000,000. Cryptocurrency would be allowed for payments.

Reputation systems

Tim O'Reilly suggested that data sources and reputation systems combined in algorithmic regulation can outperform traditional regulations. For instance, once taxi drivers are rated by passengers, the quality of their services will improve automatically and "drivers who provide poor service are eliminated". O'Reilly's suggestion is based on the control-theoretic concept of a feedback loop: improvements and declines in reputation enforce desired behavior. The use of feedback loops for the management of social systems had already been suggested in management cybernetics by Stafford Beer.
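The feedback-loop idea can be sketched as follows (all names, numbers, and the score-update rule are hypothetical, chosen only to illustrate the mechanism):

```python
# Toy reputation feedback loop: passenger ratings continuously update each
# driver's score, and drivers whose score falls below a threshold are removed.

def update_score(score, rating, weight=0.1):
    """Exponentially weighted moving average of ratings (an assumed update rule)."""
    return (1 - weight) * score + weight * rating

def run_platform(drivers, ratings, threshold=3.0):
    """drivers: name -> score; ratings: list of (name, rating) events.
    Returns the drivers who survive the elimination threshold."""
    for name, rating in ratings:
        if name in drivers:
            drivers[name] = update_score(drivers[name], rating)
    return {name: s for name, s in drivers.items() if s >= threshold}

drivers = {"alice": 4.5, "bob": 3.1}
ratings = [("bob", 1.0)] * 20 + [("alice", 5.0)] * 5
print(run_platform(drivers, ratings))
# bob's repeated 1-star ratings drag his score below the threshold; alice remains.
```

The loop is closed because behavior changes the score and the score changes who may continue to behave, which is exactly the regulatory mechanism O'Reilly describes.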

These connections are explored by Nello Cristianini and Teresa Scantamburlo, where the reputation-credit scoring system is modeled as an incentive given to citizens and computed by a social machine, so that rational agents would be motivated to increase their score by adapting their behaviour. Several ethical aspects of this technology are still being discussed.

China's Social Credit System is closely related to China's mass surveillance systems, such as Skynet, which incorporates facial recognition, big data analysis, and AI. The system provides assessments of the trustworthiness of individuals and businesses. Behaviors the system considers misconduct include jaywalking and failing to sort personal waste correctly, while behaviors listed as positive factors for credit ratings include donating blood, donating to charity, volunteering for community services, and so on. The Chinese Social Credit System enables punishments for "untrustworthy" citizens, such as denying the purchase of tickets, and rewards for "trustworthy" citizens, such as shorter waiting times at hospitals and government agencies.

Smart contracts

Smart contracts, cryptocurrencies, and decentralized autonomous organizations are mentioned as means to replace traditional ways of governance. Cryptocurrencies are currencies enabled by algorithms without a governmental central bank. Central bank digital currency often employs similar technology, but is distinguished by the fact that it does use a central bank; it is soon to be employed by major unions and governments such as the European Union and China. Smart contracts are self-executing contracts whose objectives are to reduce the need for trusted governmental intermediaries, arbitration and enforcement costs. A decentralized autonomous organization is an organization represented by smart contracts that is transparent, controlled by shareholders and not influenced by a central government. Smart contracts have been discussed for use in such applications as (temporary) employment contracts and the automatic transfer of funds and property (e.g. inheritance, upon registration of a death certificate). Some countries such as Georgia and Sweden have already launched blockchain programs focusing on property (land titles and real estate ownership); Ukraine is also looking at other areas such as state registers.
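The inheritance example can be sketched in plain code. Real smart contracts run on a blockchain (commonly written in languages such as Solidity); the Python class below only models the self-executing logic, and all names and amounts are hypothetical.

```python
class InheritanceContract:
    """Toy model of a self-executing contract: funds transfer to the heir
    automatically once a death certificate is registered, with no
    intermediary deciding whether to pay out."""

    def __init__(self, owner, heir, amount):
        self.owner, self.heir, self.amount = owner, heir, amount
        self.executed = False

    def register_death_certificate(self, person, ledger):
        # The contract executes itself exactly once, when its
        # condition (the owner's registered death) is met.
        if person == self.owner and not self.executed:
            ledger[self.heir] = ledger.get(self.heir, 0) + self.amount
            self.executed = True
        return ledger

ledger = {}
contract = InheritanceContract(owner="alice", heir="bob", amount=50_000)
ledger = contract.register_death_certificate("alice", ledger)
```

The point of the design is that the payout rule is fixed in code up front, so neither party (nor a government intermediary) can intervene at execution time.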

Algorithms in government agencies

According to a Stanford University study, 45% of the studied US federal agencies had experimented with AI and related machine learning (ML) tools as of 2020. The US federal agencies counted the following numbers of artificial intelligence use cases:

Agency Name Number of Use Cases
Office of Justice Programs 12
Securities and Exchange Commission 10
National Aeronautics and Space Administration 9
Food and Drug Administration 8
United States Geological Survey 8
United States Postal Service 8
Social Security Administration 7
United States Patent and Trademark Office 6
Bureau of Labor Statistics 5
U.S. Customs and Border Protection 4

53% of these applications were produced by in-house experts; commercial providers of the remaining applications include Palantir Technologies.

From 2012, the NOPD collaborated with Palantir Technologies in the field of predictive policing. Besides Palantir's Gotham software, other similar numerical analysis software used by police agencies (such as the NCRIC) includes SAS.

In the fight against money laundering, FinCEN employs the FinCEN Artificial Intelligence System (FAIS).

National health administration entities and organisations such as AHIMA (American Health Information Management Association) hold medical records. Medical records serve as the central repository for planning patient care and documenting communication between the patient and the health care providers and professionals contributing to the patient's care. In the EU, work is ongoing on a European Health Data Space, which supports the use of health data.

The US Department of Homeland Security has employed the software ATLAS, which runs on Amazon's cloud. It scanned more than 16.5 million records of naturalized Americans and flagged approximately 124,000 of them for manual analysis and review by USCIS officers regarding denaturalization. They were flagged due to potential fraud, public safety and national security issues. Some of the scanned data came from the Terrorist Screening Database and the National Crime Information Center.

In Estonia, artificial intelligence is used in its e-government to make it more automated and seamless. A virtual assistant will guide citizens through any interactions they have with the government. Automated and proactive services "push" services to citizens at key events of their lives (including births, bereavements, unemployment, ...). One example is the automated registering of babies when they are born. Estonia's X-Road system will also be rebuilt to include even more privacy control and accountability into the way the government uses citizens' data.

In Costa Rica, the possible digitalization of public procurement activities (i.e. tenders for public works, ...) has been investigated. The paper discussing this possibility mentions that the use of ICT in procurement has several benefits such as increasing transparency, facilitating digital access to public tenders, reducing direct interaction between procurement officials and companies at moments of high integrity risk, increasing outreach and competition, and easier detection of irregularities.

Besides using e-tenders for regular public works (construction of buildings, roads, ...), e-tenders can also be used for reforestation projects and other carbon sink restoration projects. Carbon sink restoration projects may be part of a country's nationally determined contributions plan in order to reach its Paris Agreement goals.

Government procurement audit software can also be used. Audits are performed in some countries after subsidies have been received.

Some government agencies provide track and trace systems for services they offer. An example is track and trace for applications done by citizens (i.e. driving license procurement).

Some government services use issue tracking systems to keep track of ongoing issues.

Justice by algorithm

COMPAS software is used in the USA to assess the risk of recidivism in courts.

According to the statement of Beijing Internet Court, China is the first country to create an internet court or cyber court. The Chinese AI judge is a virtual recreation of an actual female judge. She "will help the court's judges complete repetitive basic work, including litigation reception, thus enabling professional practitioners to focus better on their trial work".

Estonia also plans to employ artificial intelligence to decide small-claim cases of less than €7,000.

Lawbots can perform tasks that are typically done by paralegals or young associates at law firms. One such technology used by US law firms to assist in legal research is from ROSS Intelligence; others vary in sophistication and dependence on scripted algorithms. Another legal technology chatbot application is DoNotPay.

AI in education

Due to the COVID-19 pandemic in 2020, in-person final exams were impossible for thousands of students. The public high school Westminster High employed algorithms to assign grades. UK's Department for Education also employed a statistical calculus to assign final grades in A-levels, due to the pandemic.

Besides use in grading, software systems and AI are also optimizing coursework and are used in preparation for college entrance exams.

AI teaching assistants are being developed and used for education (i.e. Georgia Tech's Jill Watson) and there is also an ongoing debate on whether perhaps teachers can be entirely replaced by AI systems (i.e. in homeschooling).

AI politicians

In 2018, an activist named Michihito Matsuda ran for mayor in the Tama city area of Tokyo as a human proxy for an artificial intelligence program. While election posters and campaign material used the term robot, and displayed stock images of a feminine android, the "AI mayor" was in fact a machine learning algorithm trained using Tama city datasets. The project was backed by high-profile executives Tetsuzo Matsumoto of Softbank and Norio Murakami of Google. Michihito Matsuda came third in the election, being defeated by Hiroyuki Abe. Organisers claimed that the 'AI mayor' was programmed to analyze citizen petitions put forward to the city council in a more 'fair and balanced' way than human politicians.

In 2019, AI-powered messenger chatbot SAM participated in the discussions on social media connected to an electoral race in New Zealand. The creator of SAM, Nick Gerritsen, believes SAM will be advanced enough to run as a candidate by late 2020, when New Zealand has its next general election.

Management of infection

In February 2020, China launched a mobile app to deal with the Coronavirus outbreak called "close-contact-detector". Users are asked to enter their name and ID number. The app is able to detect "close contact" using surveillance data (i.e. using public transport records, including trains and flights) and therefore a potential risk of infection. Every user can also check the status of three other users. To make this inquiry users scan a Quick Response (QR) code on their smartphones using apps like Alipay or WeChat. The close contact detector can be accessed via popular mobile apps including Alipay. If a potential risk is detected, the app not only recommends self-quarantine, it also alerts local health officials.

Alipay also has the Alipay Health Code which is used to keep citizens safe. This system generates a QR code in one of three colors (green, yellow, or red) after users fill in a form on Alipay with personal details. A green code enables the holder to move around unrestricted. A yellow code requires the user to stay at home for seven days and red means a two-week quarantine. In some cities such as Hangzhou, it has become nearly impossible to get around without showing one's Alipay code.
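The three-color logic described above can be illustrated with a small function. The real Alipay Health Code's scoring criteria are not public, so the input questions below are hypothetical examples of what such a form might ask; only the three output codes and their consequences come from the description above.

```python
def health_code(confirmed_case, close_contact, recent_travel_to_outbreak_area):
    """Illustrative mapping of (hypothetical) form answers onto the three
    color codes: red = two-week quarantine, yellow = seven days at home,
    green = unrestricted movement."""
    if confirmed_case:
        return ("red", "two-week quarantine")
    if close_contact or recent_travel_to_outbreak_area:
        return ("yellow", "stay at home for seven days")
    return ("green", "unrestricted movement")

code, restriction = health_code(confirmed_case=False,
                                close_contact=True,
                                recent_travel_to_outbreak_area=False)
```

The consequential part of the system is not this classification step but its enforcement: in cities like Hangzhou, the code must be shown to move around at all.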

In Cannes, France, monitoring software has been used on footage shot by CCTV cameras, allowing the city to monitor compliance with local social distancing and mask wearing rules during the COVID-19 pandemic. The system does not store identifying data, but rather alerts city authorities and police where breaches of the mask-wearing rules are spotted (allowing fines to be issued where needed). The algorithms used by the monitoring software can be incorporated into existing surveillance systems in public spaces (hospitals, stations, airports, shopping centres, ...).

Cellphone data is used to locate infected patients in South Korea, Taiwan, Singapore and other countries. In March 2020, the Israeli government enabled security agencies to track the mobile phone data of people suspected of having coronavirus. The measure was taken to enforce quarantine and protect those who may come into contact with infected citizens. Also in March 2020, Deutsche Telekom shared private cellphone data with the federal government agency, the Robert Koch Institute, in order to research and prevent the spread of the virus. Russia deployed facial recognition technology to detect quarantine breakers. Italian regional health commissioner Giulio Gallera said that "40% of people are continuing to move around anyway", as he had been informed by mobile phone operators. In the USA, Europe and the UK, Palantir Technologies has been engaged to provide COVID-19 tracking services.

Prevention and management of environmental disasters

Tsunamis can be detected by tsunami warning systems, which can make use of AI. Flooding can also be detected using AI systems. Locust breeding areas can be approximated using machine learning, which could help stop locust swarms at an early phase. Wildfires can be predicted using AI systems, and wildfire detection is also possible with AI (e.g. through satellite data, aerial imagery, and personnel positions); such systems can also help in the evacuation of people during wildfires.

Reception

Benefits

Algorithmic regulation is supposed to be a system of governance where more exact data, collected from citizens via their smart devices and computers, is used to more efficiently organize human life as a collective. As Deloitte estimated in 2017, automation of US government work could save 96.7 million federal hours annually, with a potential savings of $3.3 billion; at the high end, this rises to 1.2 billion hours and potential annual savings of $41.1 billion.
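The two Deloitte scenarios can be checked for internal consistency: both imply a cost of roughly $34 per federal labor hour.

```python
# Implied cost per federal labor hour in each Deloitte scenario.
low_rate = 3.3e9 / 96.7e6    # $3.3B savings / 96.7M hours
high_rate = 41.1e9 / 1.2e9   # $41.1B savings / 1.2B hours
```

Both ratios land near $34/hour, so the low and high estimates differ in how many hours automation could save, not in the assumed value of an hour.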

Criticism

There are potential risks associated with the use of algorithms in government. Those include algorithms becoming susceptible to bias, a lack of transparency in how an algorithm may make decisions, and the accountability for any such decisions.

There is also serious concern that regulated parties might game the system: once algorithmic governance brings more transparency into decision-making, regulated parties might try to manipulate outcomes in their own favor, and could even use adversarial machine learning. According to Harari, the conflict between democracy and dictatorship can be seen as a conflict of two different data-processing systems; AI and algorithms may swing the advantage toward the latter by processing enormous amounts of information centrally.

In 2018, the Netherlands employed an algorithmic system, SyRI (Systeem Risico Indicatie), to detect citizens perceived as being at high risk of committing welfare fraud; it quietly flagged thousands of people to investigators. This caused a public protest. The district court of The Hague shut down SyRI, referencing Article 8 of the European Convention on Human Rights (ECHR).

The contributors to the 2019 documentary iHuman expressed apprehension about "infinitely stable dictatorships" created by government AI.

In 2020, algorithms assigning exam grades to students in the UK sparked open protest under the banner "Fuck the algorithm." The protest was successful and the grades were withdrawn.

In 2020, the US government software ATLAS, which runs on Amazon's cloud, sparked an uproar from activists and Amazon's own employees.

In 2021, the Eticas Foundation launched a database of governmental algorithms called the Observatory of Algorithms with Social Impact (OASI).

Algorithmic bias and transparency

An initial approach towards transparency included the open-sourcing of algorithms. Software code can be looked into and improvements can be proposed through source-code-hosting facilities.

Public acceptance

A 2019 poll conducted by IE University's Center for the Governance of Change in Spain found that 25% of citizens from selected European countries were somewhat or totally in favor of letting an artificial intelligence make important decisions about how their country is run. The following table lists the results by country:

Country Percentage
France 25%
Germany 31%
Ireland 29%
Italy 28%
Netherlands 43%
Portugal 19%
Spain 26%
UK 31%

Researchers found some evidence that when citizens perceive their political leaders or security providers as untrustworthy, disappointing, or immoral, they prefer to replace them with artificial agents, which they consider more reliable. The evidence was established through survey experiments on university students of all genders.

In popular culture

The novels Daemon and Freedom™ by Daniel Suarez describe a fictional scenario of global algorithmic regulation.

Carbon dioxide removal

From Wikipedia, the free encyclopedia
Planting trees is a means of carbon dioxide removal.

Carbon dioxide removal (CDR), also known as negative CO2 emissions, is a process in which carbon dioxide gas (CO2) is removed from the atmosphere and sequestered for long periods of time. Similarly, greenhouse gas removal (GGR) or negative greenhouse gas emissions is the removal of greenhouse gases (GHGs) from the atmosphere by deliberate human activities, i.e., in addition to the removal that would occur via natural carbon cycle or atmospheric chemistry processes. In the context of net zero greenhouse gas emissions targets, CDR is increasingly integrated into climate policy as a new element of mitigation strategies. CDR and GGR methods are also known as negative emissions technologies (NETs), and may be cheaper than preventing some agricultural greenhouse gas emissions.

CDR methods include afforestation, agricultural practices that sequester carbon in soils, bio-energy with carbon capture and storage, ocean fertilization, enhanced weathering, and direct air capture when combined with storage. To assess whether net negative emissions are achieved by a particular process, comprehensive life cycle analysis of the process must be performed.

A 2019 consensus report by the US National Academies of Sciences, Engineering, and Medicine concluded that using existing CDR methods at scales that can be safely and economically deployed, there is potential to remove and sequester up to 10 gigatons of carbon dioxide per year. This would offset greenhouse gas emissions at about a fifth of the rate at which they are being produced.

In 2021 the IPCC said that emission pathways that limit globally averaged warming to 1.5 °C or 2 °C by the year 2100 assume the use of CDR approaches in combination with emission reductions.

Definitions

The Intergovernmental Panel on Climate Change defines CDR as:

Anthropogenic activities removing CO2 from the atmosphere and durably storing it in geological, terrestrial, or ocean reservoirs, or in products. It includes existing and potential anthropogenic enhancement of biological or geochemical sinks and direct air capture and storage, but excludes natural CO2 uptake not directly caused by human activities.

The U.S.-based National Academies of Sciences, Engineering, and Medicine (NASEM) uses the term "negative emissions technology" with a similar definition.

The concept of deliberately reducing the amount of CO2 in the atmosphere is often mistakenly classified with solar radiation management as a form of climate engineering and assumed to be intrinsically risky. In fact, CDR addresses the root cause of climate change and is part of strategies to reduce net emissions.

Concepts using similar terminology

CDR can be confused with carbon capture and storage (CCS), a process in which carbon dioxide is collected from point-sources such as gas-fired power plants, whose smokestacks emit CO2 in a concentrated stream. The CO2 is then compressed and sequestered or utilized. When used to sequester the carbon from a gas-fired power plant, CCS reduces emissions from continued use of the point source, but does not reduce the amount of carbon dioxide already in the atmosphere.

Potential for climate change mitigation

Using CDR in parallel with other efforts to reduce greenhouse gas emissions, such as deploying renewable energy, is likely to be less expensive and disruptive than using other efforts alone. A 2019 consensus study report by NASEM assessed the potential of all forms of CDR other than ocean fertilization that could be deployed safely and economically using current technologies, and estimated that they could remove up to 10 gigatons of CO2 per year if fully deployed worldwide. This is one-fifth of the 50 gigatons of CO2 emitted per year by human activities. In the IPCC's 2018 analysis of ways to limit climate change, all analyzed mitigation pathways that would prevent more than 1.5 °C of warming included CDR measures.

Some mitigation pathways propose achieving higher rates of CDR through massive deployment of one technology, however these pathways assume that hundreds of millions of hectares of cropland are converted to growing biofuel crops. Further research in the areas of direct air capture, geologic sequestration of carbon dioxide, and carbon mineralization could potentially yield technological advancements that make higher rates of CDR economically feasible.

The IPCC's 2018 report said that reliance on large-scale deployment of CDR would be a "major risk" to achieving the goal of less than 1.5 °C of warming, given the uncertainties in how quickly CDR can be deployed at scale. Strategies for mitigating climate change that rely less on CDR and more on sustainable use of energy carry less of this risk. The possibility of large-scale future CDR deployment has been described as a moral hazard, as it could lead to a reduction in near-term efforts to mitigate climate change. The 2019 NASEM report concludes:

Any argument to delay mitigation efforts because NETs will provide a backstop drastically misrepresents their current capacities and the likely pace of research progress.

Carbon sequestration

Forests, kelp beds, and other forms of plant life absorb carbon dioxide from the air as they grow, and bind it into biomass. As the use of plants as carbon sinks can be undone by events such as wildfires, the long-term reliability of these approaches has been questioned.

Carbon dioxide that has been removed from the atmosphere can also be stored in the Earth's crust by injecting it into the subsurface, or in the form of insoluble carbonate salts (mineral sequestration). These approaches are considered forms of CDR because they remove carbon from the atmosphere and sequester it indefinitely, presumably for a considerable duration (thousands to millions of years).

Methods

Afforestation, reforestation, and forestry management

According to the International Union for Conservation of Nature: "Halting the loss and degradation of natural systems and promoting their restoration have the potential to contribute over one-third of the total climate change mitigation scientists say is required by 2030."

Forests are vital for human society, animals and plant species. This is because trees keep our air clean, regulate the local climate and provide a habitat for numerous species. Trees and plants convert carbon dioxide back into oxygen, using photosynthesis. They are important for regulating CO2 levels in the air, as they remove and store carbon from the air. Without them, the atmosphere would heat up quickly and destabilise the climate.

Biosequestration

Biosequestration is the capture and storage of the atmospheric greenhouse gas carbon dioxide by continual or enhanced biological processes. This form of carbon sequestration occurs through increased rates of photosynthesis via land-use practices such as reforestation, sustainable forest management, and genetic engineering. The SALK Harnessing Plants Initiative led by Joanne Chory is an example of an enhanced photosynthesis initiative. Carbon sequestration through biological processes affects the global carbon cycle.

Agricultural practices

Measuring soil respiration on agricultural land.
Carbon farming is a name for a variety of agricultural methods aimed at sequestering atmospheric carbon into the soil and into crop roots, wood and leaves. The aim of carbon farming is to increase the rate at which carbon is sequestered into soil and plant material, with the goal of creating a net loss of carbon from the atmosphere. Increasing a soil's organic matter content can aid plant growth, increase total carbon content, improve soil water retention capacity and reduce fertilizer use. As of 2016, variants of carbon farming reached hundreds of millions of hectares globally, of the nearly 5 billion hectares (1.2×10¹⁰ acres) of world farmland. In addition to agricultural activities, forest management is also a tool used in carbon farming. Carbon farming is often practiced by individual land owners who are incentivized, through policies created by governments, to use and integrate methods that sequester carbon. Carbon farming methods typically have a cost, so farmers and land-owners need a way to profit from their use, and different governments have different programs.

Potential sequestration alternatives to carbon farming include scrubbing CO2 from the air with machines (direct air capture); fertilizing the oceans to prompt algal blooms that, after death, carry carbon to the sea bottom; storing the carbon dioxide emitted by electricity generation; and crushing and spreading types of rock, such as basalt, that absorb atmospheric carbon. Land management techniques that can be combined with farming include planting/restoring forests, burying biochar produced by anaerobically converted biomass, and restoring wetlands. (Coal beds are the remains of marshes and peatlands.)

Wetland restoration

Estimates of the economic value of blue carbon ecosystems per hectare. Based on 2009 data from UNEP/GRID-Arendal.
 
Blue carbon is carbon sequestration (the removal of carbon dioxide from the earth's atmosphere) by the world's oceanic and coastal ecosystems, mostly by algae, seagrasses, macroalgae, mangroves, salt marshes and other plants in coastal wetlands. This occurs through plant growth and the accumulation and burial of organic matter in the soil. Because oceans cover 70% of the planet, ocean ecosystem restoration has the greatest blue carbon development potential. Research is ongoing, but in some cases it has been found that these types of ecosystems remove far more carbon than terrestrial forests, and store it for millennia.

Bioenergy with carbon capture & storage

Bioenergy with carbon capture and storage (BECCS) is the process of extracting bioenergy from biomass and capturing and storing the carbon, thereby removing it from the atmosphere. The carbon in the biomass comes from the greenhouse gas carbon dioxide (CO2) which is extracted from the atmosphere by the biomass when it grows. Energy is extracted in useful forms (electricity, heat, biofuels, etc.) as the biomass is utilized through combustion, fermentation, pyrolysis or other conversion methods. Some of the carbon in the biomass is converted to CO2 or biochar which can then be stored by geologic sequestration or land application, respectively, enabling carbon dioxide removal (CDR) and making BECCS a negative emissions technology (NET).

The IPCC's Fifth Assessment Report suggests a potential range of negative emissions from BECCS of 0 to 22 gigatonnes per year. As of 2019, five facilities around the world were actively using BECCS technologies, capturing approximately 1.5 million tonnes of CO2 per year. Wide deployment of BECCS is constrained by the cost and availability of biomass.

Biochar

Biochar is charcoal created by the pyrolysis of biomass, i.e., heating biomass at high temperature in a low-oxygen environment, and is under investigation as a method of carbon sequestration. It is used for agricultural purposes and aids in the capture and holding of carbon, and, unlike conventional charcoal, is made through a sustainable process. Biomass is organic matter produced by living or recently living organisms, most commonly plants or plant-based material. A study by the UK Biochar Research Center has estimated that, on a conservative level, biochar can store 1 gigaton of carbon per year. With greater effort in marketing and acceptance of biochar, the benefit could be the storage of 5–9 gigatons of carbon per year in biochar soils.

Enhanced weathering

Enhanced weathering is a chemical approach to remove carbon dioxide involving land- or ocean-based techniques. One example of a land-based enhanced weathering technique is in-situ carbonation of silicates. Ultramafic rock, for example, has the potential to store from hundreds to thousands of years' worth of CO2 emissions, according to estimates. Ocean-based techniques involve alkalinity enhancement, such as grinding, dispersing, and dissolving olivine, limestone, silicates, or calcium hydroxide to address ocean acidification and CO2 sequestration. One example of a research project on the feasibility of enhanced weathering is the CarbFix project in Iceland.

Direct air capture

Flow diagram of direct air capture process using sodium hydroxide as the absorbent and including solvent regeneration.

Direct air capture (DAC) is a process of capturing carbon dioxide (CO2) directly from the ambient air (as opposed to capturing from point sources, such as a cement factory or biomass power plant) and generating a concentrated stream of CO2 for sequestration or utilization or production of carbon-neutral fuel and windgas. Carbon dioxide removal is achieved when ambient air makes contact with chemical media, typically an aqueous alkaline solvent or sorbents. These chemical media are subsequently stripped of CO2 through the application of energy (namely heat), resulting in a CO2 stream that can undergo dehydration and compression, while simultaneously regenerating the chemical media for reuse.

DAC was suggested in 1999 and is still in development, though several commercial plants are in operation or planning across Europe and the US. Large-scale DAC deployment may be accelerated when connected with economical use cases, or policy incentives.

DAC is not an alternative to traditional, point-source carbon capture and storage (CCS), but can be used to manage emissions from distributed sources, like exhaust fumes from cars. When combined with long-term storage of CO2, DAC can act as a carbon dioxide removal tool, although as of 2021 it is not profitable because the cost per tonne of carbon dioxide is several times the carbon price.

Ocean fertilization

A visualization of bloom populations in the North Atlantic and North Pacific oceans from March 2003 to October 2006. The blue areas are nutrient deficient. Green to yellow show blooms fed by dust blown from nearby landmasses.
 
Ocean fertilization or ocean nourishment is a type of climate engineering based on the purposeful introduction of nutrients to the upper ocean to increase marine food production and to remove carbon dioxide from the atmosphere. A number of techniques, including fertilization by iron, urea and phosphorus, have been proposed. But research in the early 2020s suggested that it could only permanently sequester a small amount of carbon.

Issues

Economic issues

A crucial issue for CDR is its cost, which differs substantially among the different methods; some of these are not sufficiently developed for cost assessments. In 2021, DAC cost from $250 to $600 per tonne, compared to less than $50 for most reforestation. In early 2021, the EU carbon price was slightly over $50. However, the long-term value of BECCS, and of CDR generally, in integrated assessment models is highly dependent on the discount rate.
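These cost figures can be compared directly. Below is a quick sketch of what removing one million tonnes of CO2 would cost at the 2021 prices cited; the method labels are just shorthand for the figures above.

```python
# 2021 cost per tonne of CO2 removed, from the figures cited above.
costs_per_tonne = {"DAC (low)": 250, "DAC (high)": 600, "reforestation": 50}

megatonne = 1_000_000  # tonnes of CO2 to remove
totals = {name: cost * megatonne for name, cost in costs_per_tonne.items()}

# Even at its low end, DAC costs five times as much per tonne as
# typical reforestation, and is several times the EU carbon price.
ratio = costs_per_tonne["DAC (low)"] / costs_per_tonne["reforestation"]
```

This gap is why DAC, as the text notes, was not yet profitable in 2021: the cost per tonne sits well above the prevailing carbon price, while reforestation sits near or below it.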

On 21 January 2021, Elon Musk announced he was donating $100 million toward a prize for the best carbon capture technology.

Other issues

CDR faces issues common to all forms of climate engineering, including moral hazard.

Removal of other greenhouse gases

Although some researchers have suggested methods for removing methane, others say that nitrous oxide would be a better subject for research due to its longer lifetime in the atmosphere.
