
Saturday, December 14, 2024

Expert system

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Expert_system
A Symbolics 3640 Lisp machine: an early (1984) platform for expert systems

In artificial intelligence (AI), an expert system is a computer system emulating the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural programming code. Expert systems were among the first truly successful forms of AI software. They were created in the 1970s and then proliferated in the 1980s, being then widely regarded as the future of AI — before the advent of successful artificial neural networks. An expert system is divided into two subsystems: 1) a knowledge base, which represents facts and rules; and 2) an inference engine, which applies the rules to the known facts to deduce new facts, and can include explaining and debugging abilities.

History

Early development

Soon after the dawn of modern computers in the late 1940s and early 1950s, researchers started realizing the immense potential these machines had for modern society. One of the first challenges was to make such machines able to “think” like humans – in particular, making these machines able to make important decisions the way humans do. The medical–healthcare field presented the tantalizing challenge of enabling these machines to make medical diagnostic decisions.

Thus, in the late 1950s, right after the information age had fully arrived, researchers started experimenting with the prospect of using computer technology to emulate human decision making. For example, biomedical researchers started creating computer-aided systems for diagnostic applications in medicine and biology. These early diagnostic systems used patients’ symptoms and laboratory test results as inputs to generate a diagnostic outcome. These systems were often described as the early forms of expert systems. However, researchers realized that there were significant limits when using traditional methods such as flow charts, statistical pattern matching, or probability theory.

Formal introduction and later developments

This situation gradually led to the development of expert systems, which used knowledge-based approaches. Early expert systems in medicine included the MYCIN expert system, the Internist-I expert system and, later, in the middle of the 1980s, CADUCEUS.

Expert systems were formally introduced around 1965 by the Stanford Heuristic Programming Project led by Edward Feigenbaum, who is sometimes termed the "father of expert systems"; other key early contributors were Bruce Buchanan and Randall Davis. The Stanford researchers tried to identify domains where expertise was highly valued and complex, such as diagnosing infectious diseases (Mycin) and identifying unknown organic molecules (Dendral). The idea that "intelligent systems derive their power from the knowledge they possess rather than from the specific formalisms and inference schemes they use" – as Feigenbaum said – was at the time a significant step forward, since past research had focused on heuristic computational methods, culminating in attempts to develop very general-purpose problem solvers (foremost the joint work of Allen Newell and Herbert Simon). Expert systems became some of the first truly successful forms of artificial intelligence (AI) software.

Research on expert systems was also active in Europe. In the US, the focus tended to be on the use of production rule systems, first on systems hard coded on top of Lisp programming environments and then on expert system shells developed by vendors such as Intellicorp. In Europe, research focused more on systems and expert systems shells developed in Prolog. The advantage of Prolog systems was that they employed a form of rule-based programming that was based on formal logic.

One such early expert system shell based on Prolog was APES. One of the first use cases of Prolog and APES was in the legal area, namely the encoding of a large portion of the British Nationality Act. Lance Elliot wrote: "The British Nationality Act was passed in 1981 and shortly thereafter was used as a means of showcasing the efficacy of using Artificial Intelligence (AI) techniques and technologies, doing so to explore how the at-the-time newly enacted statutory law might be encoded into a computerized logic-based formalization. A now oft-cited research paper entitled “The British Nationality Act as a Logic Program” was published in 1986 and subsequently became a hallmark for subsequent work in AI and the law."

In the 1980s, expert systems proliferated. Universities offered expert system courses and two-thirds of the Fortune 500 companies applied the technology in daily business activities. Interest was international with the Fifth Generation Computer Systems project in Japan and increased research funding in Europe.

In 1981, the first IBM PC, with the PC DOS operating system, was introduced. The imbalance between the high affordability of the relatively powerful chips in the PC, compared to the much more expensive cost of processing power in the mainframes that dominated the corporate IT world at the time, created a new type of architecture for corporate computing, termed the client–server model. Calculations and reasoning could be performed at a fraction of the price of a mainframe using a PC. This model also enabled business units to bypass corporate IT departments and directly build their own applications. As a result, client-server had a tremendous impact on the expert systems market. Expert systems were already outliers in much of the business world, requiring new skills that many IT departments did not have and were not eager to develop. They were a natural fit for new PC-based shells that promised to put application development into the hands of end users and experts. Until then, the main development environment for expert systems had been high end Lisp machines from Xerox, Symbolics, and Texas Instruments. With the rise of the PC and client-server computing, vendors such as Intellicorp and Inference Corporation shifted their priorities to developing PC-based tools. Also, new vendors, often financed by venture capital (such as Aion Corporation, Neuron Data, Exsys, VP-Expert, and many others), started appearing regularly.

The first expert system to be used in a design capacity for a large-scale product was the Synthesis of Integral Design (SID) software program, developed in 1982. Written in Lisp, SID generated 93% of the VAX 9000 CPU logic gates. Input to the software was a set of rules created by several expert logic designers. SID expanded the rules and generated software logic synthesis routines many times the size of the rules themselves. Surprisingly, the combination of these rules resulted in an overall design that exceeded the capabilities of the experts themselves, and in many cases out-performed the human counterparts. While some rules contradicted others, top-level control parameters for speed and area provided the tie-breaker. The program was highly controversial but used nevertheless due to project budget constraints. It was terminated by logic designers after the VAX 9000 project completion.

In the years before the mid-1970s, expectations of what expert systems could accomplish in many fields tended to be extremely optimistic. At the start of these early studies, researchers were hoping to develop entirely automatic (i.e., completely computerized) expert systems. People's expectations of what computers could do were frequently too idealistic. This situation changed radically after Richard M. Karp published his breakthrough paper “Reducibility among Combinatorial Problems” in the early 1970s. Thanks to Karp's work, together with that of other scholars such as Hubert L. Dreyfus, it became clear that there are certain limits and possibilities when designing computer algorithms. His findings describe what computers can do and what they cannot do. Many of the computational problems related to this type of expert system have pragmatic limits. These findings laid the groundwork for the next developments in the field.

In the 1990s and beyond, the term expert system and the idea of a standalone AI system mostly dropped from the IT lexicon. There are two interpretations of this. One is that "expert systems failed": the IT world moved on because expert systems did not deliver on their overhyped promise. The other is the mirror opposite, that expert systems were simply victims of their success: as IT professionals grasped concepts such as rule engines, such tools migrated from being standalone tools for developing special-purpose expert systems to being one of many standard tools. Other researchers suggest that expert systems caused inter-company power struggles when the IT organization lost its exclusivity in software modifications to users or knowledge engineers.

In the first decade of the 2000s, there was a "resurrection" for the technology, while using the term rule-based systems, with significant success stories and adoption. Many of the leading major business application suite vendors (such as SAP, Siebel, and Oracle) integrated expert system abilities into their suite of products as a way to specify business logic. Rule engines are no longer simply for defining the rules an expert would use but for any type of complex, volatile, and critical business logic; they often go hand in hand with business process automation and integration environments.

Current approaches to expert systems

The limits of the prior types of expert systems prompted researchers to develop new approaches. They have developed more efficient, flexible, and powerful methods to simulate the human decision-making process. Some of the approaches that researchers have developed are based on new methods of artificial intelligence (AI), and in particular on machine learning and data mining approaches with a feedback mechanism. Recurrent neural networks often take advantage of such mechanisms. Related is the discussion in the Disadvantages section below.

Modern systems can incorporate new knowledge more easily and thus update themselves easily. Such systems can generalize better from existing knowledge and deal with vast amounts of complex data. Related here is the subject of big data. Sometimes these types of expert systems are called "intelligent systems."

More recently, it can be argued that expert systems have moved into the area of business rules and business rules management systems.

Software architecture

An illustrative example of backward chaining from a 1990 master's thesis

An expert system is an example of a knowledge-based system. Expert systems were the first commercial systems to use a knowledge-based architecture. In general, an expert system includes the following components: a knowledge base, an inference engine, an explanation facility, a knowledge acquisition facility, and a user interface.

The knowledge base represents facts about the world. In early expert systems such as Mycin and Dendral, these facts were represented mainly as flat assertions about variables. In later expert systems developed with commercial shells, the knowledge base took on more structure and used concepts from object-oriented programming. The world was represented as classes, subclasses, and instances and assertions were replaced by values of object instances. The rules worked by querying and asserting values of the objects.
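
As a rough, illustrative sketch of the object-style knowledge representation just described (the class, slot, and instance names below are invented and do not come from any particular shell), a knowledge base of classes and instances might be modelled like this in Python:

    # Illustrative object-style knowledge base: classes with slots, and
    # instances whose slot values the rules can query and assert.
    CLASSES = {
        "Person": {"slots": ["name", "age", "mortal"]},
    }

    INSTANCES = {
        "socrates": {"class": "Person", "name": "Socrates", "age": 70, "mortal": None},
    }

    def assert_value(instance, slot, value):
        """A rule's action: assert a slot value on an object instance."""
        INSTANCES[instance][slot] = value

    def query(instance, slot):
        """A rule's condition: query a slot value on an object instance."""
        return INSTANCES[instance].get(slot)

    # A rule might test that query("socrates", "class") == "Person"
    # and then assert_value("socrates", "mortal", True).
    assert_value("socrates", "mortal", True)
    print(query("socrates", "mortal"))   # True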

The inference engine is an automated reasoning system that evaluates the current state of the knowledge-base, applies relevant rules, and then asserts new knowledge into the knowledge base. The inference engine may also include abilities for explanation, so that it can explain to a user the chain of reasoning used to arrive at a particular conclusion by tracing back over the firing of rules that resulted in the assertion.

There are mainly two modes for an inference engine: forward chaining and backward chaining. The different approaches are dictated by whether the inference engine is being driven by the antecedent (left-hand side) or the consequent (right-hand side) of the rule. In forward chaining an antecedent fires and asserts the consequent. For example, consider the following rule:

R1: Man(x) => Mortal(x)    (in English: "all men are mortal")

A simple example of forward chaining would be to assert Man(Socrates) to the system and then trigger the inference engine. It would match R1 and assert Mortal(Socrates) into the knowledge base.
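
To make this concrete, here is a minimal forward-chaining sketch in Python. It is illustrative only, not code from any real expert system shell; the fact representation and helper function are assumptions made for this example. It also records which rules fired, which is the raw material for the explanation abilities discussed below.

    # Minimal forward-chaining sketch (illustrative only, not a real shell).
    # Facts are (predicate, individual) pairs; a rule fires when all of its
    # antecedent predicates hold for some individual, and then asserts its
    # consequent predicates for that individual.

    RULES = [
        # (name, antecedent predicates, consequent predicates)
        ("R1", ("Man",), ("Mortal",)),        # R1: Man(x) => Mortal(x)
    ]

    def forward_chain(initial_facts):
        """Repeatedly apply rules until no new facts can be asserted."""
        facts = set(initial_facts)
        fired = []                            # trace of (rule, individual) firings
        changed = True
        while changed:
            changed = False
            for name, antecedents, consequents in RULES:
                # Individuals for which every antecedent predicate holds.
                candidates = {x for p, x in facts if p == antecedents[0]}
                for pred in antecedents[1:]:
                    candidates &= {x for p, x in facts if p == pred}
                for x in candidates:
                    new = {(pred, x) for pred in consequents} - facts
                    if new:
                        facts |= new
                        fired.append((name, x))
                        changed = True
        return facts, fired

    kb, trace = forward_chain({("Man", "Socrates")})
    print(kb)      # {('Man', 'Socrates'), ('Mortal', 'Socrates')}
    print(trace)   # [('R1', 'Socrates')]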

Backward chaining is a bit less straightforward. In backward chaining the system looks at possible conclusions and works backward to see if they might be true. So if the system was trying to determine if Mortal(Socrates) is true it would find R1 and query the knowledge base to see if Man(Socrates) is true. One of the early innovations of expert system shells was to integrate inference engines with a user interface. This could be especially powerful with backward chaining. If the system needs to know a particular fact but does not, then it can simply generate an input screen and ask the user if the information is known. So in this example, it could use R1 to ask the user if Socrates was a man and then use that new information accordingly.
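
The following sketch, again illustrative Python rather than any actual shell, shows backward chaining together with the fallback of asking the user when a needed primitive fact is unknown; the rule table and predicate names are assumptions for this example.

    # Minimal backward-chaining sketch (illustrative only).
    # To prove a goal, find a rule whose consequent matches it and try to
    # prove the rule's antecedents; for primitive facts that no rule can
    # conclude, fall back to asking the user.

    RULES = [
        # (name, antecedent (predicate, variable) pairs, consequent)
        ("R1", [("Man", "x")], ("Mortal", "x")),   # R1: Man(x) => Mortal(x)
    ]

    def prove(goal, facts):
        """Try to establish goal = (predicate, individual)."""
        if goal in facts:
            return True
        pred, individual = goal
        matching = [r for r in RULES if r[2][0] == pred]
        for name, antecedents, _ in matching:
            subgoals = [(a_pred, individual) for a_pred, _ in antecedents]
            if all(prove(sg, facts) for sg in subgoals):
                facts.add(goal)
                return True
        if not matching:                       # primitive fact: ask the user
            answer = input(f"Is {pred}({individual}) true? (y/n) ")
            if answer.strip().lower().startswith("y"):
                facts.add(goal)
                return True
        return False

    facts = set()
    if prove(("Mortal", "Socrates"), facts):
        print("Mortal(Socrates) established")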

The use of rules to explicitly represent knowledge also enabled explanation abilities. In the simple example above if the system had used R1 to assert that Socrates was Mortal and a user wished to understand why Socrates was mortal they could query the system and the system would look back at the rules which fired to cause the assertion and present those rules to the user as an explanation. In English, if the user asked "Why is Socrates Mortal?" the system would reply "Because all men are mortal and Socrates is a man". A significant area for research was the generation of explanations from the knowledge base in natural English rather than simply by showing the more formal but less intuitive rules.

As expert systems evolved, many new techniques were incorporated into various types of inference engines. Some of the most important of these were:

  • Truth maintenance. These systems record the dependencies in a knowledge-base so that when facts are altered, dependent knowledge can be altered accordingly. For example, if the system learns that Socrates is no longer known to be a man it will revoke the assertion that Socrates is mortal.
  • Hypothetical reasoning. In this, the knowledge base can be divided up into many possible views, a.k.a. worlds. This allows the inference engine to explore multiple possibilities in parallel. For example, the system may want to explore the consequences of both assertions, what will be true if Socrates is a Man and what will be true if he is not?
  • Uncertainty systems. One of the first extensions of simply using rules to represent knowledge was to associate a probability with each rule: not to assert that Socrates is mortal, but to assert that Socrates may be mortal with some probability value. Simple probabilities were extended in some systems with sophisticated mechanisms for uncertain reasoning, such as fuzzy logic and combinations of probabilities (a minimal sketch of certainty-factor handling follows this list).
  • Ontology classification. With the addition of object classes to the knowledge base, a new type of reasoning was possible. Along with reasoning simply about object values, the system could also reason about object structures. In this simple example, Man can represent an object class and R1 can be redefined as a rule that defines the class of all men. These types of special purpose inference engines are termed classifiers. Although they were not highly used in expert systems, classifiers are very powerful for unstructured volatile domains, and are a key technology for the Internet and the emerging Semantic Web.
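
As one illustration of the uncertainty item above, the sketch below attaches a MYCIN-style certainty factor to each rule and combines the evidence from two rules that reach the same conclusion; the rules and numeric values are invented for this example.

    # Illustrative certainty-factor handling, in the spirit of MYCIN-style
    # uncertain rules (the rules and the numbers below are invented).

    def propagate(cf_rule, cf_evidence):
        """CF of a conclusion from one rule: rule strength scaled by evidence."""
        return cf_rule * max(cf_evidence, 0.0)

    def combine(cf1, cf2):
        """Combine two positive certainty factors for the same conclusion."""
        return cf1 + cf2 * (1.0 - cf1)

    # Two hypothetical rules that both conclude Mortal(Socrates):
    #   R1: Man(x) => Mortal(x)           with rule CF 0.9
    #   R2: Philosopher(x) => Mortal(x)   with rule CF 0.7
    cf_from_r1 = propagate(0.9, 1.0)   # Man(Socrates) is known with certainty
    cf_from_r2 = propagate(0.7, 0.8)   # Philosopher(Socrates) believed with CF 0.8

    print(combine(cf_from_r1, cf_from_r2))   # 0.9 + 0.56 * (1 - 0.9) = 0.956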

Advantages

The goal of knowledge-based systems is to make the critical information required for the system to work explicit rather than implicit. In a traditional computer program, the logic is embedded in code that can typically only be reviewed by an IT specialist. With an expert system, the goal was to specify the rules in a format that was intuitive and easily understood, reviewed, and even edited by domain experts rather than IT experts. The benefits of this explicit knowledge representation were rapid development and ease of maintenance.

Ease of maintenance is the most obvious benefit. This was achieved in two ways. First, by removing the need to write conventional code, many of the normal problems that can be caused by even small changes to a system could be avoided with expert systems. Essentially, the logical flow of the program (at least at the highest level) was simply a given for the system: just invoke the inference engine. This was also a reason for the second benefit: rapid prototyping. With an expert system shell it was possible to enter a few rules and have a prototype developed in days rather than the months or years typically associated with complex IT projects.

A claim for expert system shells that was often made was that they removed the need for trained programmers and that experts could develop systems themselves. In reality, this was seldom if ever true. While the rules for an expert system were more comprehensible than typical computer code, they still had a formal syntax where a misplaced comma or other character could cause havoc as with any other computer language. Also, as expert systems moved from prototypes in the lab to deployment in the business world, issues of integration and maintenance became far more critical. Inevitably demands to integrate with, and take advantage of, large legacy databases and systems arose. To accomplish this, integration required the same skills as any other type of system.

Summing up the benefits of using expert systems, the following can be highlighted:

  1. Increased availability and reliability: Expertise can be accessed on any computer hardware and the system always completes responses on time.
  2. Multiple expertise: Several expert systems can be run simultaneously to solve a problem and gain a higher level of expertise than a single human expert.
  3. Explanation: Expert systems always describe how the problem was solved.
  4. Fast response: The expert systems are fast and able to solve a problem in real-time.
  5. Reduced cost: The cost of expertise for each user is significantly reduced.

Disadvantages

The most common disadvantage cited for expert systems in the academic literature is the knowledge acquisition problem. Obtaining the time of domain experts for any software application is always difficult, but for expert systems it was especially difficult because the experts were by definition highly valued and in constant demand by the organization. As a result of this problem, a great deal of research in the later years of expert systems was focused on tools for knowledge acquisition, to help automate the process of designing, debugging, and maintaining rules defined by experts. However, when looking at the life-cycle of expert systems in actual use, other problems – essentially the same problems as those of any other large system – seem at least as critical as knowledge acquisition: integration, access to large databases, and performance.

Performance could be especially problematic because early expert systems were built using tools (such as earlier Lisp versions) that interpreted code expressions without first compiling them. This provided a powerful development environment, but with the drawback that it was virtually impossible to match the efficiency of the fastest compiled languages (such as C). System and database integration were difficult for early expert systems because the tools were mostly in languages and platforms that were neither familiar to nor welcome in most corporate IT environments – programming languages such as Lisp and Prolog, and hardware platforms such as Lisp machines and personal computers. As a result, much effort in the later stages of expert system tool development was focused on integrating with legacy environments such as COBOL and large database systems, and on porting to more standard platforms. These issues were resolved mainly by the client–server paradigm shift, as PCs were gradually accepted in the IT environment as a legitimate platform for serious business system development and as affordable minicomputer servers provided the processing power needed for AI applications.

Another major challenge of expert systems emerges when the size of the knowledge base increases, which causes the processing complexity to increase. For instance, when an expert system with 100 million rules was envisioned as the ultimate expert system, it became obvious that such a system would be too complex and would face too many computational problems. An inference engine would have to be able to process huge numbers of rules to reach a decision.

How to verify that decision rules are consistent with each other is also a challenge when there are too many rules. Usually such a problem leads to a satisfiability (SAT) formulation, a well-known NP-complete problem (the Boolean satisfiability problem). If we assume only binary variables, say n of them, then the corresponding search space is of size 2^n, i.e., it grows exponentially with the number of variables.
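
As a toy illustration of why this search space matters, the brute-force consistency check below enumerates all 2^n assignments of n Boolean variables against a small invented rule set; practical systems avoid this blow-up with SAT solvers and heuristics.

    from itertools import product

    # Toy rule set over Boolean variables a, b, c (invented for illustration):
    #   rule 1: a -> b      rule 2: b -> c      rule 3: a -> not c
    rules = [
        lambda a, b, c: (not a) or b,
        lambda a, b, c: (not b) or c,
        lambda a, b, c: (not a) or (not c),
    ]

    n = 3
    consistent = [
        assignment
        for assignment in product([False, True], repeat=n)   # 2**n assignments
        if all(rule(*assignment) for rule in rules)
    ]
    # Three of the eight assignments are consistent; any assignment with
    # a = True contradicts the rules, so the rules never allow a to hold.
    print(consistent)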

There are also questions on how to prioritize the use of the rules to operate more efficiently, or how to resolve ambiguities (for instance, if there are too many else-if sub-structures within one rule) and so on.

Other problems are related to the overfitting and overgeneralization effects when using known facts and trying to generalize to other cases not described explicitly in the knowledge base. Such problems exist with methods that employ machine learning approaches too.

Another problem related to the knowledge base is how to make updates of its knowledge quickly and effectively. Also how to add a new piece of knowledge (i.e., where to add it among many rules) is challenging. Modern approaches that rely on machine learning methods are easier in this regard.

Because of the above challenges, it became clear that new approaches to AI were required instead of rule-based technologies. These new approaches are based on the use of machine learning techniques, along with the use of feedback mechanisms.

The key challenges that expert systems in medicine face (if one considers computer-aided diagnostic systems as modern expert systems), and perhaps in other application domains, include issues related to big data, existing regulations, healthcare practice, various algorithmic issues, and system assessment.

Finally, the following disadvantages of using expert systems can be summarized:

  1. Expert systems have superficial knowledge, and a simple task can potentially become computationally expensive.
  2. Expert systems require knowledge engineers to input the data, and data acquisition is very hard.
  3. The expert system may choose the most inappropriate method for solving a particular problem.
  4. Problems of ethics in the use of any form of AI are very relevant at present.
  5. It is a closed world with specific knowledge, in which there is no deep perception of concepts and their interrelationships until an expert provides them.

Applications

Hayes-Roth divides expert systems applications into 10 categories illustrated in the following table. The example applications were not in the original Hayes-Roth table, and some of them arose well afterward. Any application that is not footnoted is described in the Hayes-Roth book. Also, while these categories provide an intuitive framework to describe the space of expert systems applications, they are not rigid categories, and in some cases an application may show traits of more than one category.

Category | Problem addressed | Examples
Interpretation | Inferring situation descriptions from sensor data | Hearsay (speech recognition), PROSPECTOR
Prediction | Inferring likely consequences of given situations | Preterm Birth Risk Assessment
Diagnosis | Inferring system malfunctions from observables | CADUCEUS, MYCIN, PUFF, Mistral, Eydenet, Kaleidos, GARVAN-ES1
Design | Configuring objects under constraints | Dendral, Mortgage Loan Advisor, R1 (DEC VAX configuration), SID (DEC VAX 9000 CPU)
Planning | Designing actions | Mission planning for an autonomous underwater vehicle
Monitoring | Comparing observations to plan vulnerabilities | REACTOR
Debugging | Providing incremental solutions for complex problems | SAINT, MATHLAB, MACSYMA
Repair | Executing a plan to administer a prescribed remedy | Toxic spill crisis management
Instruction | Diagnosing, assessing, and correcting student behaviour | SMH.PAL, Intelligent Clinical Training, STEAMER
Control | Interpreting, predicting, repairing, and monitoring system behaviours | Real-time process control, Space Shuttle Mission Control, smart autoclave cure of composites

Hearsay was an early attempt at solving voice recognition through an expert systems approach. For the most part this category of expert systems was not all that successful. Hearsay and all interpretation systems are essentially pattern recognition systems looking for patterns in noisy data; in the case of Hearsay, recognizing phonemes in an audio stream. Other early examples were systems analyzing sonar data to detect Russian submarines. These kinds of systems proved much more amenable to a neural network AI solution than a rule-based approach.

CADUCEUS and MYCIN were medical diagnosis systems. The user describes their symptoms to the computer as they would to a doctor and the computer returns a medical diagnosis.

Dendral was a tool to study hypothesis formation in the identification of organic molecules. The general problem it solved—designing a solution given a set of constraints—was one of the most successful areas for early expert systems applied to business domains such as salespeople configuring Digital Equipment Corporation (DEC) VAX computers and mortgage loan application development.

SMH.PAL is an expert system for the assessment of students with multiple disabilities.

GARVAN-ES1 was a medical expert system, developed at the Garvan Institute of Medical Research, that provided automated clinical diagnostic comments on endocrine reports from a pathology laboratory. It was one of the first medical expert systems to go into routine clinical use internationally and the first expert system to be used for diagnosis daily in Australia. The system was written in C and ran on a PDP-11 in 64K of memory. It had 661 rules that were compiled, not interpreted.

Mistral is an expert system to monitor dam safety, developed in the 1990s by Ismes (Italy). It gets data from an automatic monitoring system and performs a diagnosis of the state of the dam. Its first copy, installed in 1992 on the Ridracoli Dam (Italy), is still operational 24/7/365. It has been installed on several dams in Italy and abroad (e.g., Itaipu Dam in Brazil), and on landslide sites under the name of Eydenet, and on monuments under the name of Kaleidos. Mistral is a registered trade mark of CESI.

Cloud robotics

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Cloud_robotics

Cloud robotics is a field of robotics that attempts to invoke cloud technologies such as cloud computing, cloud storage, and other Internet technologies centered on the benefits of converged infrastructure and shared services for robotics. When connected to the cloud, robots can benefit from the powerful computation, storage, and communication resources of modern data centers, which can process and share information from various robots or agents (other machines, smart objects, humans, etc.). Humans can also delegate tasks to robots remotely through networks. Cloud computing technologies enable robot systems to be endowed with powerful capabilities while reducing costs. Thus, it is possible to build lightweight, low-cost, smarter robots with an intelligent "brain" in the cloud. The "brain" consists of data centers, knowledge bases, task planners, deep learning, information processing, environment models, communication support, etc.

Components

A cloud for robots potentially has the following significant components:

  • Building a "cloud brain" for robots, which is the main objective of cloud robotics;
  • Offering a global library of images, maps, and object data, often with geometry and mechanical properties, together with expert systems and knowledge bases (e.g. the semantic web, data centres);
  • Massively parallel computation on demand for sample-based statistical modelling and motion planning, task planning, multi-robot collaboration, and scheduling and coordination of systems;
  • Robot sharing of outcomes, trajectories, and dynamic control policies, and robot learning support;
  • Human sharing of "open-source" code, data, and designs for programming, experimentation, and hardware construction;
  • On-demand human guidance and assistance for evaluation, learning, and error recovery;
  • Augmented human–robot interaction through various means (semantic knowledge bases, Apple Siri-like services, etc.).

Applications

Autonomous mobile robots
Google's self-driving cars are cloud robots. The cars use the network to access Google's enormous database of maps, satellite imagery, and environment models (such as Street View) and combine it with streaming data from GPS, cameras, and 3D sensors to monitor their own position to within centimetres, and with past and current traffic patterns to avoid collisions. Each car can learn something about environments, roads, or driving conditions, and it sends the information to the Google cloud, where it can be used to improve the performance of other cars.
Cloud medical robots
A medical cloud (also called a healthcare cluster) consists of various services such as a disease archive, electronic medical records, a patient health management system, practice services, analytics services, clinic solutions, expert systems, etc. A robot can connect to the cloud to provide clinical services to patients, as well as deliver assistance to doctors (e.g. a co-surgery robot). Moreover, it also provides a collaboration service by sharing information between doctors and caregivers about clinical treatment.
Assistive robots
A domestic robot can be employed for healthcare and life monitoring for elderly people. The system collects the health status of users and exchanges information with a cloud expert system or with doctors to facilitate elderly people's lives, especially for those with chronic diseases. For example, the robots are able to provide support to prevent the elderly from falling down, and emergency health support for conditions such as heart disease or bleeding disorders. Caregivers of elderly people can also receive notifications from the robot over the network in case of an emergency.
Industrial robots
As highlighted by the German government's Industry 4.0 Plan, "Industry is on the threshold of the fourth industrial revolution. Driven by the Internet, the real and virtual worlds are growing closer and closer together to form the Internet of Things. Industrial production of the future will be characterised by the strong individualisation of products under the conditions of highly flexible (large series) production, the extensive integration of customers and business partners in business and value-added processes, and the linking of production and high-quality services leading to so-called hybrid products." In manufacturing, such cloud based robot systems could learn to handle tasks such as threading wires or cables, or aligning gaskets from a professional knowledge base. A group of robots can share information for some collaborative tasks. Even more, a consumer is able to place customised product orders to manufacturing robots directly with online ordering systems. Another potential paradigm is shopping-delivery robot systems. Once an order is placed, a warehouse robot dispatches the item to an autonomous car or autonomous drone to deliver it to its recipient.

Learning a Cloud Brain for Robots

Approach: Lifelong Learning. Leveraging lifelong learning to build a cloud brain for robots was proposed by CAS. The authors were motivated by the problem of how to make robots fuse and transfer their experience so that they can effectively use prior knowledge and quickly adapt to new environments. To address the problem, they present a learning architecture for navigation in cloud robotic systems: Lifelong Federated Reinforcement Learning (LFRL). In this work, they propose a knowledge fusion algorithm for upgrading a shared model deployed on the cloud, and then introduce effective transfer learning methods within LFRL. LFRL is consistent with human cognitive science and fits well in cloud robotic systems. Experiments show that LFRL greatly improves the efficiency of reinforcement learning for robot navigation. The cloud robotic system deployment also shows that LFRL is capable of fusing prior knowledge.

Approach: Federated Learning. Leveraging federated learning to build a cloud brain for robots was proposed in 2020. Humans are capable of learning a new behaviour by observing others perform the skill; similarly, robots can do this through imitation learning. Furthermore, with external guidance, humans can master a new behaviour more efficiently. So how can robots achieve this? To address the issue, the authors present a novel framework named FIL. It provides a heterogeneous knowledge fusion mechanism for cloud robotic systems. A knowledge fusion algorithm in FIL is proposed, which enables the cloud to fuse heterogeneous knowledge from local robots and generate guide models for robots with service requests. The authors then introduce a knowledge transfer scheme to help local robots acquire knowledge from the cloud. With FIL, a robot is capable of utilizing knowledge from other robots to increase the accuracy and efficiency of its imitation learning. Compared with transfer learning and meta-learning, FIL is more suitable for deployment in cloud robotic systems. The authors conduct experiments on a self-driving task for robots (cars). The experimental results demonstrate that the shared model generated by FIL increases the imitation learning efficiency of local robots in cloud robotic systems.

Approach: Peer-assisted Learning. Leveraging peer-assisted learning to build a cloud brain for robots was proposed by UM. A technological revolution is occurring in the field of robotics thanks to data-driven deep learning technology. However, building datasets for each local robot is laborious, and data islands between local robots prevent data from being utilized collaboratively. To address this issue, the work presents Peer-Assisted Robotic Learning (PARL), which is inspired by peer-assisted learning in cognitive psychology and pedagogy. PARL implements data collaboration within the framework of cloud robotic systems. Both data and models are shared by robots to the cloud after semantic computing and local training. The cloud converges the data and performs augmentation, integration, and transfer; finally, the larger shared dataset in the cloud is used to fine-tune models for the local robots. Furthermore, the authors propose the DAT Network (Data Augmentation and Transferring Network) to implement the data processing in PARL. The DAT Network can realize the augmentation of data from multiple local robots. The authors conduct experiments on a simplified self-driving task for robots (cars). The DAT Network achieves a significant improvement in data augmentation for self-driving scenarios, and the self-driving experimental results also demonstrate that PARL is capable of improving learning effects through data collaboration among local robots.

Research

RoboEarth was funded by the European Union's Seventh Framework Programme for research and technological development, specifically to explore the field of cloud robotics. The goal of RoboEarth is to allow robotic systems to benefit from the experience of other robots, paving the way for rapid advances in machine cognition and behaviour, and ultimately, for more subtle and sophisticated human-machine interaction. RoboEarth offers a cloud robotics infrastructure. RoboEarth's World Wide Web-style database stores knowledge generated by humans – and robots – in a machine-readable format. Data stored in the RoboEarth knowledge base include software components, maps for navigation (e.g., object locations, world models), task knowledge (e.g., action recipes, manipulation strategies), and object recognition models (e.g., images, object models). The RoboEarth Cloud Engine includes support for mobile robots, autonomous vehicles, and drones, which require much computation for navigation.

Rapyuta is an open-source cloud robotics framework based on the RoboEarth engine, developed by robotics researchers at ETH Zurich (ETHZ). Within the framework, each robot connected to Rapyuta can have a secured computing environment, giving it the ability to move heavy computation into the cloud. In addition, the computing environments are tightly interconnected with each other and have a high-bandwidth connection to the RoboEarth knowledge repository.

KnowRob is an extension of the RoboEarth project. It is a knowledge processing system that combines knowledge representation and reasoning methods with techniques for acquiring knowledge and for grounding the knowledge in a physical system, and it can serve as a common semantic framework for integrating information from different sources.

RoboBrain is a large-scale computational system that learns from publicly available Internet resources, computer simulations, and real-life robot trials. It accumulates robotics knowledge into a comprehensive and interconnected knowledge base. Applications include prototyping for robotics research, household robots, and self-driving cars. The goal is as direct as the project's name: to create a centralised, always-online brain for robots to tap into. The project is led by Stanford University and Cornell University and is supported by the National Science Foundation, the Office of Naval Research, the Army Research Office, Google, Microsoft, Qualcomm, the Alfred P. Sloan Foundation, and the National Robotics Initiative, whose goal is to advance robotics to help make the United States more competitive in the world economy.

MyRobots is a service for connecting robots and intelligent devices to the Internet. It can be regarded as a social network for robots and smart objects (i.e. a Facebook for robots). Through socialising, collaborating, and sharing, robots can benefit from these interactions by sharing sensor information that gives insight into their current state.

COALAS is funded by the INTERREG IVA France (Channel) – England European cross-border co-operation programme. The project aims to develop new technologies for disabled people through social and technological innovation and through the users' social and psychological integrity. The objective is to produce a cognitive ambient assistive-living system with a healthcare cluster in the cloud and domestic service robots, such as a humanoid and an intelligent wheelchair, which connect to the cloud.

ROS (Robot Operating System) provides an ecosystem to support cloud robotics. ROS is a flexible and distributed framework for robot software development. It is a collection of tools, libraries, and conventions that aim to simplify the task of creating complex and robust robot behaviour across a wide variety of robotic platforms. A library for ROS that is a pure Java implementation, called rosjava, allows Android applications to be developed for robots. Since Android has a booming market with billions of users, this is significant for the field of cloud robotics.
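
For context, a minimal ROS 1 node written with the standard rospy client library looks roughly like the sketch below; the node and topic names are placeholders chosen for this example, and a cloud robotics setup would typically bridge such topics to off-board services rather than keep all processing on the robot.

    #!/usr/bin/env python
    # Minimal ROS 1 publisher node using rospy (node and topic names are
    # placeholders; a cloud bridge could forward these messages off-board).
    import rospy
    from std_msgs.msg import String

    def main():
        rospy.init_node("status_reporter")                      # register with the ROS master
        pub = rospy.Publisher("robot_status", String, queue_size=10)
        rate = rospy.Rate(1)                                     # publish once per second
        while not rospy.is_shutdown():
            pub.publish(String(data="battery OK, localization OK"))
            rate.sleep()

    if __name__ == "__main__":
        main()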

DAVinci Project is a proposed software framework that seeks to explore the possibilities of parallelizing some robotics algorithms as Map/Reduce tasks in Hadoop. The project aims to build a cloud computing environment capable of providing a compute cluster built with commodity hardware, exposing a suite of robotic algorithms as SaaS and sharing data co-operatively across the robotic ecosystem. This initiative is not available publicly.

C2RO (C2RO Cloud Robotics) is a platform that processes real-time applications such as collision avoidance and object recognition in the cloud. Previously, high latency times prevented these applications from being processed in the cloud, thus requiring on-system computational hardware (e.g. a graphics processing unit, or GPU). C2RO published a peer-reviewed paper at IEEE PIMRC17 showing its platform could make autonomous navigation and other AI services available from the cloud on robots, even those with limited computational hardware (e.g. a Raspberry Pi). C2RO eventually claimed to be the first platform to demonstrate cloud-based SLAM (simultaneous localization and mapping) at RoboBusiness in September 2017.

Noos is a cloud robotics service, providing centralised intelligence to robots that are connected to it. The service went live in December 2017. By using the Noos-API, developers could access services for computer vision, deep learning, and SLAM. Noos was developed and maintained by Ortelio Ltd.

Rocos is a centralized cloud robotics platform that provides the developer tooling and infrastructure to build, test, deploy, operate and automate robot fleets at scale. Founded in October 2017, the platform went live in January 2019.

Limitations of cloud robotics

Though robots can benefit from various advantages of cloud computing, cloud is not the solution to all of robotics.

  • Controlling a robot's motion, which relies heavily on (real-time) sensors and controller feedback, may not benefit much from the cloud.
  • Tasks that involve real-time execution require on-board processing.
  • Cloud-based applications can become slow or unavailable due to high-latency responses or network glitches. If a robot relies too much on the cloud, a fault in the network could leave it "brainless."

Challenges

The research and development of cloud robotics faces the following potential issues and challenges:

Risks

  • Environmental security - The concentration of computing resources and users in a cloud computing environment also represents a concentration of security threats. Because of their size and significance, cloud environments are often targeted by attacks such as bot malware, brute-force attacks, and attacks on virtual machines.
  • Data privacy and security - Hosting confidential data with cloud service providers involves the transfer of a considerable amount of an organisation's control over data security to the provider. For example, every cloud contains a huge amount of client information, including personal data. If a household robot is hacked, users risk their personal privacy and security: house layouts, snapshots of daily life, home views, etc. may be accessed by criminals and leaked to the outside world. Another problem is that once a robot is hacked and controlled by someone else, the user may be put in danger.
  • Ethical problems - The ethics of robotics, especially for cloud-based robotics, must be considered. Since a robot is connected via networks, it is at risk of being accessed by other people. If a robot goes out of control and carries out illegal activities, who should be responsible for it?

History

The term "Cloud Robotics" first appeared in the public lexicon as part of a talk given by James Kuffner in 2010 at the IEEE/RAS International Conference on Humanoid Robotics entitled "Cloud-enabled Robots".  Since then, "Cloud Robotics" has become a general term encompassing the concepts of information sharing, distributed intelligence, and fleet learning that is possible via networked robots and modern cloud computing. Kuffner was part of Google when he delivered his presentation and the technology company has teased its various cloud robotics initiatives until 2019 when it launched the Google Cloud Robotics Platform for developers.

From the early days of robot development, it was common to have computation done on a computer that was separate from the actual robot mechanism but connected by wires for power and control. As wireless communication technology developed, new forms of experimental "remote brain" robots were developed that used small onboard compute resources for robot control and safety and were wirelessly connected to a more powerful remote computer for heavy processing.

The term "cloud computing" was popularized with the launch of Amazon EC2 in 2006. It marked the availability of high-capacity networks, low-cost computers and storage devices as well as the widespread adoption of hardware virtualization and service-oriented architecture. In a correspondence with Popular Science in July 2006, Kuffner wrote that after a robot was programmed or successfully learned to perform a task it could share its model and relevant data with all other cloud-connected robots: 

"...the robot could then 'publish' its refined model to some website or universal repository of knowledge that all future robots could download and utilize. My vision is to have a 'robot knowledge database' that will over time improve the capabilities of all future robotic systems. It would serve as a warehouse of information and statistics about the physical world that robots can access and use to improve their reasoning about the consequences of possible actions and make better action plans in terms of accuracy, safety, and robustness. It could also serve as a kind of 'skill library'. For example, if I successfully programmed my butler robot how to cook a perfect omelette, I could 'upload' the software for omelette cooking to a server that all robots could then download whenever they were asked to cook an omelette. There could be a whole community of robot users uploading skill programs, much like the current 'shareware' and 'freeware' software models that are popular for PC users."

— James Kuffner, (July 2006)

Some publications and events related to Cloud Robotics (in chronological order):

  • The IEEE RAS Technical Committee on Internet and Online Robots was founded by Ken Goldberg and Roland Siegwart et al. in May 2001. The committee was then expanded into the IEEE Robotics and Automation Society's Technical Committee on Networked Robots in 2004.
  • James J. Kuffner, a former CMU robotics professor and research scientist at Google, now CEO of Toyota Research Institute—Advanced Development, spoke on cloud robotics at the IEEE/RAS International Conference on Humanoid Robotics 2010. The talk described "a new approach to robotics that takes advantage of the Internet as a resource for massively parallel computation and sharing of vast data resources."
  • Ryan Hickman, a Google Product Manager, led an internal volunteer effort in 2010 to connect robots with Google's cloud services. This work was later expanded to include open-source ROS support and was demonstrated on stage by Ryan Hickman, Damon Kohler, Brian Gerkey, and Ken Conley at Google I/O 2011.
  • The National Robotics Initiative of the US, announced in 2011, aimed to explore how robots can enhance the work of humans rather than replacing them. It claims that the next generation of robots will be more aware than oblivious and more social than solitary.
  • NRI Workshop on Cloud Robotics: Challenges and Opportunities- February 2013.
  • A Roadmap for U.S. Robotics From Internet to Robotics 2013 Edition- by Georgia Institute of Technology, Carnegie Mellon University Robotics Technology Consortium, University of Pennsylvania, University of Southern California, Stanford University, University of California–Berkeley, University of Washington, Massachusetts Institute of TechnologyUS and Robotics OA US. The Roadmap highlighted "Cloud" Robotics and Automation for Manufacturing in the future years.
  • Cloud-Based Robot Grasping with the Google Object Recognition Engine.
  • 2013 IEEE IROS Workshop on Cloud Robotics. Tokyo. November 2013.
  • Cloud Robotics-Enable cloud computing for robots. The author proposed some paradigms for using cloud computing in robotics, and some potential fields and challenges were identified. R. Li 2014.
  • Special Issue on Cloud Robotics and Automation- A special issue of the IEEE Transactions on Automation Science and Engineering, April 2015.
  • Robot App Store: robot applications in the cloud, providing applications for robots much like computer/phone app stores.
  • DARPA Cloud Robotics.
  • The first industrial cloud robotics platform, Tend, was founded by Mark Silliman, James Gentes and Robert Kieffer in February 2017. Tend allows robots to be remotely controlled and monitored via websockets and NodeJs.
  • Cloud robotic architectures: directions for future research from a comparative analysis.

Information technology

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Information_technology

Information technology (IT) is a set of related fields that encompass computer systems, software, programming languages, data and information processing, and storage. IT forms part of information and communications technology (ICT). An information technology system (IT system) is generally an information system, a communications system, or, more specifically speaking, a computer system — including all hardware, software, and peripheral equipment — operated by a limited group of IT users, and an IT project usually refers to the commissioning and implementation of an IT system. IT systems play a vital role in facilitating efficient data management, enhancing communication networks, and supporting organizational processes across various industries. Successful IT projects require meticulous planning, seamless integration, and ongoing maintenance to ensure optimal functionality and alignment with organizational objectives.

Although humans have been storing, retrieving, manipulating, and communicating information since the earliest writing systems were developed, the term information technology in its modern sense first appeared in a 1958 article published in the Harvard Business Review; authors Harold J. Leavitt and Thomas L. Whisler commented that "the new technology does not yet have a single established name. We shall call it information technology (IT)." Their definition consists of three categories: techniques for processing, the application of statistical and mathematical methods to decision-making, and the simulation of higher-order thinking through computer programs.

The term is commonly used as a synonym for computers and computer networks, but it also encompasses other information distribution technologies such as television and telephones. Several products or services within an economy are associated with information technology, including computer hardware, software, electronics, semiconductors, internet, telecom equipment, and e-commerce.

Based on the storage and processing technologies employed, it is possible to distinguish four distinct phases of IT development: pre-mechanical (3000 BC – 1450 AD), mechanical (1450 – 1840), electromechanical (1840 – 1940), and electronic (1940 to present).

Information technology is a branch of computer science, defined as the study of procedures, structures, and the processing of various types of data. As this field continues to evolve globally, its priority and importance have grown, leading to the introduction of computer science-related courses in K-12 education.

History

Zuse Z3 replica on display at Deutsches Museum in Munich. The Zuse Z3 is the first programmable computer.
This is the Antikythera mechanism, which is considered the first mechanical analog computer, dating back to the first century BC.

Ideas of computer science were first discussed before the 1950s at the Massachusetts Institute of Technology (MIT) and Harvard University, where researchers debated and began thinking about computer circuits and numerical calculations. As time went on, the field of information technology and computer science became more complex and was able to handle the processing of more data. Scholarly articles began to be published by different organizations.

Looking at early computing, Alan Turing, J. Presper Eckert, and John Mauchly were considered some of the major pioneers of computer technology in the mid-1900s. Most of their efforts were focused on designing the first digital computer. Along with that, topics such as artificial intelligence began to be raised as Turing started to question such technology of the time period.

Devices have been used to aid computation for thousands of years, probably initially in the form of a tally stick. The Antikythera mechanism, dating from about the beginning of the first century BC, is generally considered the earliest known mechanical analog computer, and the earliest known geared mechanism. Comparable geared devices did not emerge in Europe until the 16th century, and it was not until 1645 that the first mechanical calculator capable of performing the four basic arithmetical operations was developed.

Electronic computers, using either relays or valves, began to appear in the early 1940s. The electromechanical Zuse Z3, completed in 1941, was the world's first programmable computer, and by modern standards one of the first machines that could be considered a complete computing machine. During the Second World War, Colossus, the first electronic digital computer, was developed to decrypt German messages. Although it was programmable, it was not general-purpose, being designed to perform only a single task. It also lacked the ability to store its program in memory; programming was carried out using plugs and switches to alter the internal wiring. The first recognizably modern electronic digital stored-program computer was the Manchester Baby, which ran its first program on 21 June 1948.

The development of transistors in the late 1940s at Bell Laboratories allowed a new generation of computers to be designed with greatly reduced power consumption. The first commercially available stored-program computer, the Ferranti Mark I, contained 4050 valves and had a power consumption of 25 kilowatts. By comparison, the first transistorized computer, developed at the University of Manchester and operational by November 1953, consumed only 150 watts in its final version.

Several other breakthroughs in semiconductor technology include the integrated circuit (IC) invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor in 1959, silicon dioxide surface passivation by Carl Frosch and Lincoln Derick in 1955, the first planar silicon dioxide transistors by Frosch and Derick in 1957, the MOSFET demonstration by a Bell Labs team, the planar process by Jean Hoerni in 1959, and the microprocessor invented by Ted Hoff, Federico Faggin, Masatoshi Shima, and Stanley Mazor at Intel in 1971. These important inventions led to the development of the personal computer (PC) in the 1970s and the emergence of information and communications technology (ICT).

By 1984, according to the National Westminster Bank Quarterly Review, the term information technology had been redefined: "The development of cable television was made possible by the convergence of telecommunications and computing technology (…generally known in Britain as information technology)." The term then began to appear in 1990 in documents of the International Organization for Standardization (ISO).

By the twenty-first century, innovations in technology had already revolutionized the world as people gained access to different online services. This changed the workforce drastically: thirty percent of U.S. workers were already in careers in this profession, and 136.9 million people were personally connected to the Internet, equivalent to 51 million households. Along with the Internet, new types of technology were also being introduced across the globe, improving efficiency and making many tasks easier.

Along with technology revolutionizing society, millions of processes could be completed in seconds. Innovations in communication were also crucial as people began to rely on the computer to communicate through telephone lines and cable. The introduction of email was considered revolutionary as "companies in one part of the world could communicate by e-mail with suppliers and buyers in another part of the world..."

Beyond personal use, computers and technology have also revolutionized the marketing industry, resulting in more buyers of products. In 2002, Americans spent more than $28 billion on goods over the Internet alone, while e-commerce a decade later resulted in $289 billion in sales. As computers rapidly become more sophisticated, they see ever more use as people grow more reliant on them in the twenty-first century.

Data processing

Ferranti Mark I computer logic board

Storage

Punched tapes were used in early computers to store and represent data.

Early electronic computers such as Colossus made use of punched tape, a long strip of paper on which data was represented by a series of holes, a technology now obsolete. Electronic data storage, which is used in modern computers, dates from World War II, when a form of delay-line memory was developed to remove the clutter from radar signals, the first practical application of which was the mercury delay line. The first random-access digital storage device was the Williams tube, which was based on a standard cathode ray tube. However, the information stored in it and in delay-line memory was volatile in that it had to be continuously refreshed, and thus was lost once power was removed. The earliest form of non-volatile computer storage was the magnetic drum, invented in 1932 and used in the Ferranti Mark 1, the world's first commercially available general-purpose electronic computer.

IBM introduced the first hard disk drive in 1956, as a component of their 305 RAMAC computer system. Most digital data today is still stored magnetically on hard disks, or optically on media such as CD-ROMs. Until 2002 most information was stored on analog devices, but that year digital storage capacity exceeded analog for the first time. As of 2007, almost 94% of the data stored worldwide was held digitally: 52% on hard disks, 28% on optical devices, and 11% on digital magnetic tape. It has been estimated that the worldwide capacity to store information on electronic devices grew from less than 3 exabytes in 1986 to 295 exabytes in 2007, doubling roughly every 3 years.
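As a back-of-the-envelope check on the figures above (a sketch based only on the quoted endpoints, not part of the cited estimate), the implied doubling time can be computed directly from the 1986 and 2007 values:

```python
import math

# Worldwide digital storage capacity estimates quoted above.
start_exabytes = 3      # 1986 (approximate lower bound)
end_exabytes = 295      # 2007
years = 2007 - 1986     # 21 years

# Number of doublings needed to grow from 3 EB to 295 EB.
doublings = math.log2(end_exabytes / start_exabytes)

# Implied doubling time in years.
doubling_time = years / doublings
print(f"{doublings:.1f} doublings over {years} years -> "
      f"doubling roughly every {doubling_time:.1f} years")
# Prints: 6.6 doublings over 21 years -> doubling roughly every 3.2 years
```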

Databases

Database management systems (DBMS) emerged in the 1960s to address the problem of storing and retrieving large amounts of data accurately and quickly. An early such system was IBM's Information Management System (IMS), which is still widely deployed more than 50 years later. IMS stores data hierarchically, but in the 1970s Ted Codd proposed an alternative relational storage model based on set theory and predicate logic and the familiar concepts of tables, rows, and columns. In 1981, the first commercially available relational database management system (RDBMS) was released by Oracle.

All DBMSs consist of components that allow the data they store to be accessed simultaneously by many users while maintaining its integrity. A feature all databases share is that the structure of the data they contain is defined and stored separately from the data itself, in a database schema.
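A minimal sketch of that separation using Python's built-in sqlite3 module (the table and column names here are purely illustrative): the schema is declared once with CREATE TABLE, while the rows it describes are stored and queried separately.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database

# The schema: structure (table, columns, types) defined separately from the data.
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")

# The data: rows stored according to that schema.
conn.executemany("INSERT INTO employee (name, dept) VALUES (?, ?)",
                 [("Ada", "Engineering"), ("Grace", "Research")])

# The schema itself is stored and queryable, independently of the rows it governs.
print(conn.execute("SELECT sql FROM sqlite_master WHERE name = 'employee'").fetchone()[0])
print(conn.execute("SELECT name, dept FROM employee").fetchall())
conn.close()
```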

In recent years, the Extensible Markup Language (XML) has become a popular format for data representation. Although XML data can be stored in normal file systems, it is commonly held in relational databases to take advantage of their "robust implementation verified by years of both theoretical and practical effort." As an evolution of the Standard Generalized Markup Language (SGML), XML's text-based structure offers the advantage of being both machine- and human-readable.
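A short illustration of that dual readability (the document below is invented for the example): the same text a person can read directly is parsed by a machine with Python's standard xml.etree.ElementTree module.

```python
import xml.etree.ElementTree as ET

# A tiny, human-readable XML document (illustrative content only).
doc = """
<catalog>
  <book id="1"><title>Structured Data</title><year>2003</year></book>
  <book id="2"><title>Markup Basics</title><year>2007</year></book>
</catalog>
"""

# The same text is machine-readable: parse it and walk the element tree.
root = ET.fromstring(doc)
for book in root.findall("book"):
    print(book.get("id"), book.findtext("title"), book.findtext("year"))
```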

Transmission

IBM card storage warehouse in Alexandria, Virginia, in 1959, where the government stored punched cards.

Data transmission has three aspects: transmission, propagation, and reception. It can be broadly categorized as broadcasting, in which information is transmitted unidirectionally downstream, or telecommunications, with bidirectional upstream and downstream channels.

XML has been increasingly employed as a means of data interchange since the early 2000s, particularly for machine-oriented interactions such as those involved in web-oriented protocols like SOAP, describing "data-in-transit rather than... data-at-rest".
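As a hedged sketch of what such "data-in-transit" looks like (the operation and payload field below are invented, and this is not a complete SOAP implementation), a message is wrapped in an XML envelope before being sent over the wire:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"  # SOAP 1.1 envelope namespace

# Build a minimal SOAP-style envelope around an application payload.
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
order = ET.SubElement(body, "GetOrderStatus")      # illustrative operation name
ET.SubElement(order, "OrderId").text = "12345"     # illustrative payload field

# Serialize to the XML text that would actually travel over the network.
print(ET.tostring(envelope, encoding="unicode"))
```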

Manipulation

Hilbert and Lopez identify the exponential pace of technological change (a kind of Moore's law): machines' application-specific capacity to compute information per capita roughly doubled every 14 months between 1986 and 2007; the per capita capacity of the world's general-purpose computers doubled every 18 months during the same two decades; the global telecommunication capacity per capita doubled every 34 months; the world's storage capacity per capita required roughly 40 months to double (every 3 years); and per capita broadcast information has doubled every 12.3 years.
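To make those doubling times easier to compare (a small arithmetic sketch using only the figures quoted above), each can be converted into an equivalent annual growth rate:

```python
# Doubling times quoted by Hilbert and Lopez above, converted to years.
doubling_times_years = {
    "application-specific computation": 14 / 12,
    "general-purpose computation": 18 / 12,
    "telecommunication capacity": 34 / 12,
    "storage capacity": 40 / 12,
    "broadcast information": 12.3,
}

for name, t in doubling_times_years.items():
    annual_growth = 2 ** (1 / t) - 1   # growth rate implied by doubling every t years
    print(f"{name}: doubles every {t:.1f} years ≈ {annual_growth:.0%} per year")
```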

Massive amounts of data are stored worldwide every day, but unless it can be analyzed and presented effectively it essentially resides in what have been called data tombs: "data archives that are seldom visited". To address that issue, the field of data mining — "the process of discovering interesting patterns and knowledge from large amounts of data" — emerged in the late 1980s.
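As a minimal illustration of the kind of "interesting pattern" data mining looks for (the transaction data here is invented, and real systems use far more elaborate algorithms), the sketch below counts which pairs of items most often occur together:

```python
from collections import Counter
from itertools import combinations

# Invented transaction data: each row is one customer's basket.
transactions = [
    {"bread", "milk", "eggs"},
    {"bread", "milk"},
    {"milk", "eggs", "butter"},
    {"bread", "milk", "butter"},
]

# Count how often each pair of items appears together (a single counting pass).
pair_counts = Counter()
for basket in transactions:
    pair_counts.update(combinations(sorted(basket), 2))

for pair, count in pair_counts.most_common(3):
    print(pair, count)   # ('bread', 'milk') appears in 3 of the 4 baskets
```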

Services

Email

Email is the technology, and the services built on it, for sending and receiving electronic messages (called "letters" or "electronic letters") over a distributed (including global) computer network. In its structure and principle of operation, electronic mail closely mirrors the regular (paper) postal system, borrowing both its terminology (mail, letter, envelope, attachment, mailbox, delivery, and others) and its characteristic features: ease of use, delays in message transmission, and sufficient reliability combined with no guarantee of delivery. The advantages of e-mail include: human-readable and easily remembered addresses of the form user_name@domain_name (for example, somebody@example.com); the ability to transfer both plain and formatted text as well as arbitrary files; independence of servers (in the general case, they address each other directly); sufficiently high reliability of message delivery; and ease of use by both humans and programs.

The disadvantages of e-mail include: spam (mass advertising and viral mailings); the impossibility of guaranteeing delivery of a particular message; possible delays in delivery (up to several days); and limits on the size of an individual message and on the total size of a user's mailbox.
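A brief sketch of composing such a message with Python's standard email library (the addresses and file name are placeholders; actually sending the message would additionally require an SMTP server, which is omitted here):

```python
from email.message import EmailMessage

# Compose a message: plain-text body plus an arbitrary file attachment.
msg = EmailMessage()
msg["From"] = "somebody@example.com"        # address of the form user_name@domain_name
msg["To"] = "recipient@example.org"         # placeholder recipient
msg["Subject"] = "Quarterly report"
msg.set_content("The report is attached.")  # plain-text body

# Attach an arbitrary file (contents invented for the example).
msg.add_attachment(b"report data...", maintype="application",
                   subtype="octet-stream", filename="report.bin")

print(msg)  # the RFC 5322 message text a mail server would deliver
# Sending would use smtplib.SMTP(...).send_message(msg) against a real server.
```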

Search system

A search system is a software and hardware complex with a web interface that provides the ability to search for information on the Internet. In everyday usage, "search engine" refers to the site hosting the system's interface (front-end). The software part is the search engine proper: a set of programs that provides the search functionality and is usually a trade secret of the company that develops it. Most search engines look for information on World Wide Web sites, but there are also systems that can search for files on FTP servers, items in online stores, and information on Usenet newsgroups. Improving search is one of the priorities of the modern Internet (see the Deep Web article for the main problems facing search engines).
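A minimal sketch of the core data structure most search engines are built around, an inverted index mapping each word to the documents that contain it (the documents below are invented stand-ins for crawled pages):

```python
from collections import defaultdict

# Invented documents standing in for crawled web pages.
documents = {
    "page1": "punched tape stored early data",
    "page2": "search engines index web pages",
    "page3": "early search systems indexed usenet",
}

# Build the inverted index: word -> set of documents containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(query):
    """Return documents containing every word of the query (simple AND search)."""
    results = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*results) if results else set()

print(search("early search"))  # -> {'page3'}
```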

Commercial effects

Companies in the information technology field are often discussed as a group as the "tech sector" or the "tech industry." These labels can be misleading and should not be confused with "tech companies," which are generally large-scale, for-profit corporations that sell consumer technology and software. From a business perspective, information technology departments are a "cost center" the majority of the time. A cost center is a department or staff which incurs expenses, or "costs," within a company rather than generating profits or revenue streams. Modern businesses rely heavily on technology for their day-to-day operations, so the expenses allocated to cover technology that facilitates business in a more efficient manner are usually seen as "just the cost of doing business." IT departments are allocated funds by senior leadership and must attempt to achieve the desired deliverables while staying within that budget. Government and the private sector might have different funding mechanisms, but the principles are more or less the same. This is an often overlooked reason for the rapid interest in automation and artificial intelligence: the constant pressure to do more with less is opening the door for automation to take control of at least some minor operations in large companies.

Many companies now have IT departments for managing the computers, networks, and other technical areas of their businesses. Companies have also sought to integrate IT with business outcomes and decision-making through a BizOps or business operations department.

In a business context, the Information Technology Association of America has defined information technology as "the study, design, development, application, implementation, support, or management of computer-based information systems". The responsibilities of those working in the field include network administration, software development and installation, and the planning and management of an organization's technology life cycle, by which hardware and software are maintained, upgraded, and replaced.

Information services

Information services is a term somewhat loosely applied to a variety of IT-related services offered by commercial companies, as well as data brokers.

Ethics

The field of information ethics was established by mathematician Norbert Wiener in the 1940s. Some of the ethical issues associated with the use of information technology include:

  • Breaches of copyright by those downloading files stored without the permission of the copyright holders
  • Employers monitoring their employees' emails and other Internet usage
  • Unsolicited emails
  • Hackers accessing online databases
  • Web sites installing cookies or spyware to monitor a user's online activities, which may be used by data brokers

IT projects

Research suggests that IT projects in business and public administration can easily become significant in scale. Work conducted by McKinsey in collaboration with the University of Oxford suggested that half of all large-scale IT projects (those with initial cost estimates of $15 million or more) failed to keep costs within their initial budgets or to complete on time.

BitTorrent

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/BitTorren...