Sunday, November 26, 2023

Interoperability

From Wikipedia, the free encyclopedia
An example of software interoperability: a mobile device and a TV device both playing the same digital music file that is stored on a server off-screen in the home network

Interoperability is a characteristic of a product or system to work with other products or systems. While the term was initially defined for information technology or systems engineering services to allow for information exchange, a broader definition takes into account social, political, and organizational factors that impact system-to-system performance.

Types of interoperability include syntactic interoperability, where two systems can communicate with each other, and cross-domain interoperability, where multiple organizations work together and exchange information.

Types

If two or more systems use common data formats and communication protocols, then they are capable of communicating with each other, and they exhibit syntactic interoperability. XML and SQL are examples of common data formats and protocols. Low-level data formats also contribute to syntactic interoperability, ensuring that text is stored in the same character encoding, such as ASCII or Unicode, in all the communicating systems.
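
As a minimal sketch of syntactic interoperability, consider two otherwise unrelated systems that can exchange a record only because both agree on a common format and encoding. The example below uses JSON and UTF-8 purely for illustration (the article's own examples are XML and SQL), and the record fields are invented:

    import json

    def system_a_export(record):
        # System A serializes its native record into the agreed common format.
        return json.dumps(record).encode("utf-8")

    def system_b_import(payload):
        # System B needs no knowledge of System A's internals, only the format.
        return json.loads(payload.decode("utf-8"))

    message = system_a_export({"title": "Song", "duration_s": 215})
    print(system_b_import(message))  # {'title': 'Song', 'duration_s': 215}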

Beyond the ability of two or more computer systems to exchange information, semantic interoperability is the ability to automatically interpret the information exchanged meaningfully and accurately in order to produce useful results as defined by the end users of both systems. To achieve semantic interoperability, both sides must refer to a common information exchange reference model. The content of the information exchange requests is unambiguously defined: what is sent is the same as what is understood.
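
A hedged sketch of what such a reference model looks like in practice: each system maps its own field names onto the shared model before exchange, so that what is sent is the same as what is understood. All field names below are invented for illustration and are not taken from any real exchange standard:

    # Shared reference model both systems agree on.
    REFERENCE_FIELDS = ("family_name", "street", "postal_code")

    # Each system's mapping from its native field names to the model.
    SYSTEM_A_MAPPING = {"surname": "family_name", "road": "street", "zip": "postal_code"}
    SYSTEM_B_MAPPING = {"lastName": "family_name", "streetName": "street", "postCode": "postal_code"}

    def to_reference(record, mapping):
        # Translate a native record into the common reference model.
        return {mapping[key]: value for key, value in record.items() if key in mapping}

    a_record = {"surname": "Lovelace", "road": "Main St 1", "zip": "12345"}
    print(to_reference(a_record, SYSTEM_A_MAPPING))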

Cross-domain interoperability involves multiple social, organizational, political, and legal entities working together for a common interest or information exchange.

Interoperability and open standards

Interoperability implies exchanges between a range of products, or similar products from several different vendors, or even between past and future revisions of the same product. Interoperability may be developed post facto, as a special measure between two products while excluding the rest, or it may be designed in from the start through the use of open standards. When a vendor is forced to adapt its system to a dominant system that is not based on open standards, the result is compatibility, not interoperability.

Open standards

Open standards rely on a broadly consultative and inclusive group, including representatives from vendors, academics, and others holding a stake in the development, that discusses and debates the technical and economic merits, demerits, and feasibility of a proposed common protocol. After the doubts and reservations of all members are addressed, the resulting common document is endorsed as a common standard. This document may subsequently be released to the public, and henceforth becomes an open standard. It is usually published and available freely or at a nominal cost to any and all comers, with no further encumbrances. Various vendors and individuals (even those who were not part of the original group) can use the standards document to make products that implement the common protocol defined in the standard and are thus interoperable by design, with no particular liability or advantage for customers in choosing one product over another on the basis of standardized features. The vendors' products compete on the quality of their implementation, user interface, ease of use, performance, price, and a host of other factors, while keeping the customer's data intact and transferable even if the customer switches to a competing product for business reasons.

Post facto interoperability

Post facto interoperability may be the result of the absolute market dominance of a particular product in contravention of any applicable standards, or of the absence of any effective standards at the time of that product's introduction. The vendor behind that product can then choose to ignore any forthcoming standards and not cooperate in any standardization process at all, using its near-monopoly to insist that its product sets the de facto standard by its very market dominance. This is not a problem if the product's implementation is open and minimally encumbered, but it may well be both closed and heavily encumbered (e.g. by patent claims). Because of the network effect, achieving interoperability with such a product is both critical for any other vendor that wishes to remain relevant in the market and difficult to accomplish, because of the lack of cooperation on equal terms with the original vendor, who may well see the new vendor as a potential competitor and threat. Newer implementations often rely on clean-room reverse engineering in the absence of technical data to achieve interoperability. The original vendors may provide such technical data to others, often in the name of encouraging competition, but such data is invariably encumbered and may be of limited use. Availability of such data is not equivalent to an open standard, because:

  1. The data is provided by the original vendor on a discretionary basis, and the vendor has every interest in blocking the effective implementation of competing solutions. It may subtly alter or change its product, often in newer revisions, so that competitors' implementations are almost, but not quite, completely interoperable, leading customers to consider them unreliable or of lower quality. These changes may not be passed on to other vendors at all, or may be passed on only after a strategic delay, maintaining the market dominance of the original vendor.
  2. The data itself may be encumbered, e.g. by patents or pricing, leading to a dependence of all competing solutions on the original vendor, and possibly directing a revenue stream from the competitors' customers back to the original vendor. This revenue stream is the result of the original product's market dominance and not of any innate superiority.
  3. Even when the original vendor is genuinely interested in promoting healthy competition (so that it may also benefit from the resulting innovative market), post facto interoperability may often be undesirable, as many defects or quirks can be directly traced back to the original implementation's technical limitations. Although in an open process anyone may identify and correct such limitations, and the resulting cleaner specification may be used by all vendors, this is more difficult post facto, as customers already have valuable information and processes encoded in the faulty but dominant product, and other vendors are forced to replicate those faults and quirks for the sake of preserving interoperability even if they could design better solutions. Alternatively, it can be argued that even open processes are subject to the weight of past implementations and imperfect past designs, and that the power of the dominant vendor to unilaterally correct or improve the system and impose the changes on all users facilitates innovation.
  4. Lack of an open standard can also become problematic for the customers, as in the case of the original vendor's inability to fix a certain problem that is an artifact of technical limitations in the original product. The customer wants that fault fixed, but the vendor has to maintain that faulty state, even across newer revisions of the same product, because that behavior is a de facto standard and many more customers would have to pay the price of any interoperability issues caused by fixing the original problem and introducing new behavior.

Government

eGovernment

From an e-government perspective, interoperability refers to the ability of cross-border services for citizens, businesses, and public administrations to work together. Exchanging data can be a challenge due to language barriers, different format specifications, varieties of categorizations, and other hindrances.

If data is interpreted differently, collaboration is limited, takes longer, and is inefficient. For instance, if a citizen of country A wants to purchase land in country B, the person will be asked to submit the proper address data. Address data in both countries includes full name details, street name and number, as well as a postal code, but the order of the address details may vary. Within a single language, ordering the provided address data correctly is not an obstacle; across language barriers it becomes difficult, and if the other language uses a different writing system it is almost impossible unless translation tools are available.

Flood risk management

Interoperability is used by researchers in the context of urban flood risk management. Cities and urban areas worldwide are expanding, which creates complex spaces with many interactions between the environment, infrastructure, and people. To address this complexity and manage water in urban areas appropriately, a system-of-systems approach to water and flood control is necessary. In this context, interoperability is important to facilitate system-of-systems thinking, and is defined as: "the ability of any water management system to redirect water and make use of other system(s) to maintain or enhance its performance function during water exceedance events." By assessing the complex properties of urban infrastructure systems, particularly the interoperability between drainage systems and other urban systems (e.g. infrastructure such as transport), it may be possible to expand the capacity of the overall system to manage flood water, toward improved urban flood resilience.

Military forces

Force interoperability is defined in NATO as the ability of the forces of two or more nations to train, exercise and operate effectively together in the execution of assigned missions and tasks. Additionally, NATO defines interoperability more generally as the ability to act together coherently, effectively and efficiently to achieve Allied tactical, operational and strategic objectives.

At the strategic level, interoperability is an enabler for coalition building. It facilitates meaningful contributions by coalition partners. At this level, interoperability issues center on harmonizing world views, strategies, doctrines, and force structures. Interoperability is an element of coalition willingness to work together over the long term to achieve and maintain shared interests against common threats.

Interoperability at the operational and tactical levels is where strategic interoperability and technological interoperability come together to help allies shape the environment, manage crises, and win wars. The benefits of interoperability at these levels generally derive from the interchangeability of force elements and units.

Technological interoperability reflects the interfaces between organizations and systems. It focuses on communications and computers but also involves the technical capabilities of systems and the resulting mission compatibility between the systems and data of coalition partners. At the technological level, the benefits of interoperability come primarily from their impacts at the operational and tactical levels in terms of enhancing flexibility.

Public safety

Because first responders need to be able to communicate during wide-scale emergencies, interoperability is an important issue for law enforcement, fire fighting, emergency medical services, and other public health and safety departments. It has been a major area of investment and research over the last 12 years. Widely disparate and incompatible hardware impeded the exchange of information between agencies. Agencies' information systems, such as computer-aided dispatch (CAD) systems and records management systems, functioned largely in isolation, in so-called information islands. Agencies tried to bridge this isolation with inefficient, stop-gap methods while large agencies began implementing limited interoperable systems. These approaches were inadequate and, in the US, the lack of interoperability in the public safety realm became evident during the 9/11 attacks on the Pentagon and World Trade Center structures. Further evidence of a lack of interoperability surfaced when agencies tackled the aftermath of Hurricane Katrina.

In contrast to the overall national picture, some states, including Utah, have already made great strides forward. The Utah Highway Patrol and other departments in Utah have created a statewide data sharing network.

The Commonwealth of Virginia is one of the leading states in the United States in improving interoperability. The Interoperability Coordinator leverages a regional structure to better allocate grant funding around the Commonwealth so that all areas have an opportunity to improve communications interoperability. Virginia's strategic plan for communications is updated yearly to include new initiatives for the Commonwealth – all projects and efforts are tied to this plan, which is aligned with the National Emergency Communications Plan, authored by the Department of Homeland Security's Office of Emergency Communications.

The State of Washington seeks to enhance interoperability statewide. The State Interoperability Executive Committee (SIEC), established by the legislature in 2003, works to assist emergency responder agencies (police, fire, sheriff, medical, hazmat, etc.) at all levels of government (city, county, state, tribal, federal) to define interoperability for their local region. Washington recognizes that collaborating on system design and development for wireless radio systems enables emergency responder agencies to efficiently provide additional services, increase interoperability, and reduce long-term costs. This work saves the lives of emergency personnel and the citizens they serve.

The U.S. government is making an effort to overcome the nation's lack of public safety interoperability. The Department of Homeland Security's Office for Interoperability and Compatibility (OIC) is pursuing the SAFECOM, CADIP, and Project 25 programs, which are designed to help agencies as they integrate their CAD and other IT systems.

The OIC launched CADIP in August 2007. The project partners the OIC with agencies in several locations, including Silicon Valley, and uses case studies to identify the best practices and challenges associated with linking CAD systems across jurisdictional boundaries. These lessons will inform the tools and resources public safety agencies can use to build interoperable CAD systems and communicate across local, state, and federal boundaries.

As regulator for interoperability

Governance entities can increase interoperability through their legislative and executive powers. For instance, in 2021 the European Commission, after commissioning two impact assessment studies and a technology analysis study, proposed standardizing phone chargers on USB-C, which may increase interoperability along with convergence and convenience for consumers while decreasing resource needs, redundancy, and electronic waste.

Commerce and industries

Information technology and computers

Desktop

Desktop interoperability is a subset of software interoperability. In the early days, the focus of interoperability was to integrate web applications with other web applications. Over time, open-system containers were developed to create a virtual desktop environment in which these applications could be registered and then communicate with each other using simple publish–subscribe patterns. Rudimentary UI capabilities were also supported, allowing windows to be grouped with other windows. Today, desktop interoperability has evolved into full-service platforms that include container support, basic exchange between web applications, native support for other application types, and advanced window management. The latest interop platforms also include application services such as universal search, notifications, user permissions and preferences, third-party application connectors, and language adapters for in-house applications.

Information search

Search interoperability refers to the ability of two or more information collections to be searched by a single query.

Specifically related to web-based search, the challenge of interoperability stems from the fact that designers of web resources typically have little or no need to concern themselves with exchanging information with other web resources. Federated search technology, which does not place format requirements on the data owner, has emerged as one solution to search interoperability challenges. In addition, standards such as the Open Archives Initiative Protocol for Metadata Harvesting, the Resource Description Framework, and SPARQL have emerged that also help address the issue of search interoperability related to web resources. Such standards also address broader topics of interoperability, such as allowing data mining.
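
A minimal sketch of the federated-search idea: one query is fanned out to several collections, each searched with its own native function, and the results are merged for the user. Real federated search engines add per-source connectors, ranking, and deduplication; every name below is invented for illustration:

    def search_catalog(query):
        # Native search over one collection; its internal format is its own business.
        return [hit for hit in ["catalog: interoperability primer"] if query in hit]

    def search_archive(query):
        return [hit for hit in ["archive: interoperability case study"] if query in hit]

    def federated_search(query, sources):
        results = []
        for source in sources:
            results.extend(source(query))  # no common format is imposed on sources
        return results

    print(federated_search("interoperability", [search_catalog, search_archive]))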

Software

Interoperability: playing the two-role network game, when one of the player clients (top left) runs under Sun Microsystems and another under GNU Classpath with JamVM. The applications execute the same bytecode and interoperate using the standard RMI-IIOP messages for communication.

With respect to software, the term interoperability is used to describe the capability of different programs to exchange data via a common set of exchange formats, to read and write the same file formats, and to use the same communication protocols. The lack of interoperability can be a consequence of a lack of attention to standardization during the design of a program. Indeed, interoperability is not taken for granted in the non-standards-based portion of the computing world.

According to ISO/IEC 2382-01, Information Technology Vocabulary, Fundamental Terms, interoperability is defined as follows: "The capability to communicate, execute programs, or transfer data among various functional units in a manner that requires the user to have little or no knowledge of the unique characteristics of those units".

Standards-developing organizations provide open public software specifications to facilitate interoperability; examples include the Oasis-Open organization and buildingSMART (formerly the International Alliance for Interoperability). Among user communities, Neutral Third Party is creating standards for business process interoperability. Another example of a neutral party is the RFC documents from the Internet Engineering Task Force (IETF).

The Open Services for Lifecycle Collaboration community is working on finding a common standard so that software tools can share and exchange data, e.g. bugs, tasks, and requirements. The final goal is to agree on an open standard for interoperability of open source application lifecycle management tools.

Java is an example of an interoperable programming language that allows for programs to be written once and run anywhere with a Java virtual machine. A program in Java, so long as it does not use system-specific functionality, will maintain interoperability with all systems that have a Java virtual machine available. Applications will maintain compatibility because, while the implementation is different, the underlying language interfaces are the same.

Achieving software interoperability

Software interoperability is achieved through five interrelated ways:

  1. Product testing
    Products produced to a common standard, or to a sub-profile thereof, depend on the clarity of the standards, but there may be discrepancies in their implementations that system or unit testing may not uncover. This requires that systems formally be tested in a production scenario – as they will be finally implemented – to ensure they actually will intercommunicate as advertised, i.e. they are interoperable. Interoperable product testing is different from conformance-based product testing as conformance to a standard does not necessarily engender interoperability with another product which is also tested for conformance.
  2. Product engineering
    Implements the common standard, or a sub-profile thereof, as defined by the industry and community partnerships with the specific intention of achieving interoperability with other software implementations also following the same standard or sub-profile thereof.
  3. Industry and community partnership
    Industry and community partnerships, either domestic or international, sponsor standard workgroups with the purpose of defining a common standard that may be used to allow software systems to intercommunicate for a defined purpose. At times an industry or community will sub-profile an existing standard produced by another organization to reduce options and thus make interoperability more achievable for implementations.
  4. Common technology and intellectual property
    The use of a common technology or intellectual property may speed up and reduce the complexity of interoperability by reducing variability between components from different sets of separately developed software products and thus allowing them to intercommunicate more readily. This technique has some of the same technical results as using a common vendor product to produce interoperability. The common technology can come through third-party libraries or open-source developments.
  5. Standard implementation
    Software interoperability requires a common agreement that is normally arrived at via an industrial, national or international standard.

Each of these has an important role in reducing variability in intercommunication software and enhancing a common understanding of the end goal to be achieved.

Unified interoperability

Unified interoperability is the property of a system that allows for the integration of real-time and non-real time communications, activities, data, and information services (i.e., unified) and the display and coordination of those services across systems and devices (i.e., interoperability). Unified interoperability provides the capability to communicate and exchange processing across different applications, data, and infrastructure.

Market dominance and power

Interoperability tends to be regarded as an issue for experts and its implications for daily living are sometimes underrated. The European Union Microsoft competition case shows how interoperability concerns important questions of power relationships. In 2004, the European Commission found that Microsoft had abused its market power by deliberately restricting interoperability between Windows work group servers and non-Microsoft work group servers. By doing so, Microsoft was able to protect its dominant market position for work group server operating systems, the heart of corporate IT networks. Microsoft was ordered to disclose complete and accurate interface documentation, which could enable rival vendors to compete on an equal footing (the interoperability remedy).

Interoperability has also surfaced in the software patent debate in the European Parliament (June–July 2005). Critics claim that because patents on techniques required for interoperability are kept under RAND (reasonable and non-discriminatory licensing) conditions, customers will have to pay license fees twice: once for the product and, in the appropriate case, once for the patent-protected program the product uses.

Manufacturing

Interoperability has become a common challenge within the manufacturing field in recent years, particularly due to legacy systems and the integration of manufacturing processes under the directive of promoting Industry 4.0. Interoperability has become a cornerstone of manufacturing policy and directives, alongside autonomy and sustainability, as can be seen in the German federal policy 2030 Vision for Industrie 4.0. The pressing challenge for interoperability is closely linked to standardization and the implementation of best practices, the lack of which has hindered the drive of I4.0 to link manufacturing throughout the supply chain. Research into the current challenges has indicated that there is a gap within the IT and application landscapes of manufacturing enterprises, which presents challenges for the linking of systems and the flow of data.

Business processes

Interoperability is often more of an organizational issue. Interoperability can have a significant impact on the organizations concerned, raising issues of ownership (do people want to share their data? or are they dealing with information silos?), labor relations (are people prepared to undergo training?) and usability. In this context, a more apt definition is captured in the term business process interoperability.

Interoperability can have important economic consequences; for example, research has estimated the cost of inadequate interoperability in the U.S. capital facilities industry to be $15.8 billion a year. If competitors' products are not interoperable (due to causes such as patents, trade secrets or coordination failures), the result may well be monopoly or market failure. For this reason, it may be prudent for user communities or governments to take steps to encourage interoperability in various situations. At least 30 international bodies and countries have implemented eGovernment-based interoperability framework initiatives called e-GIF while in the United States there is the NIEM initiative.

Medical industry

New technology is being introduced in hospitals and labs at an ever-increasing rate. The need for plug-and-play interoperability – the ability to take a medical device out of its box and easily make it work with one's other devices – has attracted great attention from both healthcare providers and industry.

Increasingly, medical devices like incubators and imaging systems feature software that integrates at the point of care and with electronic systems, such as electronic medical records. At the 2016 Regulatory Affairs Professionals Society (RAPS) meeting, experts in the field, like Angela N. Johnson with GE Healthcare and representatives of the United States Food and Drug Administration, provided practical seminars on how companies developing new medical devices, and hospitals installing them, can work more effectively to align interoperable software systems.

Railways

Railways have greater or lesser interoperability depending on conforming to standards of gauge, couplings, brakes, signalling, communications, loading gauge, structure gauge, and operating rules, to mention a few parameters. For passenger rail service, different railway platform height and width clearance standards may also cause interoperability problems.

North American freight and intercity passenger railroads are highly interoperable, but systems in Europe, Asia, Africa, Central and South America, and Australia are much less so. The parameter most difficult to overcome (at reasonable cost) is incompatibility of gauge, though variable gauge axle systems are increasingly used.

Telecommunications

In telecommunication, the term can be defined as:

  1. The ability to provide services to and accept services from other systems, and to use the services exchanged to enable them to operate effectively together. ITU-T provides standards for international telecommunications.
  2. The condition achieved among communications-electronics systems or items of communications-electronics equipment when information or services can be exchanged directly and satisfactorily between them and/or their users. The degree of interoperability should be defined when referring to specific cases.

In two-way radio, interoperability is composed of three dimensions:

  • compatible communications paths (compatible frequencies, equipment, and signaling),
  • radio system coverage or adequate signal strength, and
  • scalable capacity.

Organizations dedicated to interoperability

Many organizations are dedicated to interoperability. All have in common that they want to push the development of the World Wide Web towards the semantic web. Some concentrate on eGovernment, eBusiness or data exchange in general.

Global

Internationally, the Network Centric Operations Industry Consortium facilitates global interoperability across borders, languages, and technical barriers. In the built environment, the International Alliance for Interoperability started in 1994 and was renamed buildingSMART in 2005.

Europe

In Europe, for instance, the European Commission and its IDABC program issue the European Interoperability Framework. IDABC was succeeded by the ISA program. They also initiated the Semantic Interoperability Centre Europe (SEMIC.EU). A European Land Information Service (EULIS) was established in 2006, as a consortium of European National Land Registers. The aim of the service is to establish a single portal through which customers are provided with access to information about individual properties, about land and property registration services, and about the associated legal environment.

The European Interoperability Framework (EIF) considered four kinds of interoperability: 1) legal interoperability, 2) organizational interoperability, 3) semantic interoperability, and 4) technical interoperability. The European Research Cluster on the Internet of Things (IERC) and the IoT Semantic Interoperability Best Practices also distinguish four kinds: 1) syntactical interoperability, 2) technical interoperability, 3) semantic interoperability, and 4) organizational interoperability.

US

In the United States, the General Services Administration Component Organization and Registration Environment (CORE.GOV) initiative provided a collaboration environment for component development, sharing, registration, and reuse in the early 2000s. A related initiative is the ongoing National Information Exchange Model (NIEM) work and component repository. The National Institute of Standards and Technology serves as an agency for measurement standards.

API

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/API
Screenshot of web API documentation written by NASA

An application programming interface (API) is a way for two or more computer programs to communicate with each other. It is a type of software interface, offering a service to other pieces of software. A document or standard that describes how to build or use such a connection or interface is called an API specification. A computer system that meets this standard is said to implement or expose an API. The term API may refer either to the specification or to the implementation. Whereas a system's user interface dictates how its end-users interact with the system in question, its API dictates how to write code that takes advantage of that system's capabilities.

In contrast to a user interface, which connects a computer to a person, an application programming interface connects computers or pieces of software to each other. It is not intended to be used directly by a person (the end user) other than a computer programmer who is incorporating it into the software. An API is often made up of different parts which act as tools or services that are available to the programmer. A program or a programmer that uses one of these parts is said to call that portion of the API. The calls that make up the API are also known as subroutines, methods, requests, or endpoints. An API specification defines these calls, meaning that it explains how to use or implement them.

One purpose of APIs is to hide the internal details of how a system works, exposing only those parts that a programmer will find useful, and keeping them consistent even if the internal details change later. An API may be custom-built for a particular pair of systems, or it may be a shared standard allowing interoperability among many systems.

There are APIs for programming languages, software libraries, computer operating systems, and computer hardware. APIs originated in the 1940s, though the term did not emerge until the 1960s and 1970s. Contemporary usage of the term API often refers to web APIs, which allow communication between computers that are joined by the internet. Recent developments in APIs have led to the rise in popularity of microservices, which are loosely coupled services accessed through public APIs.

Purpose

In building applications, an API simplifies programming by abstracting the underlying implementation and only exposing objects or actions the developer needs. While a graphical interface for an email client might provide a user with a button that performs all the steps for fetching and highlighting new emails, an API for file input/output might give the developer a function that copies a file from one location to another without requiring that the developer understand the file system operations occurring behind the scenes.
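
The file-copy case from the paragraph above, as it looks through a real API: Python's standard library exposes a single call that hides the underlying file system operations (opening, buffering, writing, closing). A minimal, self-contained sketch:

    import os
    import shutil
    import tempfile

    # Create a throwaway source file to copy.
    src = tempfile.NamedTemporaryFile(delete=False)
    src.write(b"hello")
    src.close()

    # One API call; no file system details required of the developer.
    shutil.copy(src.name, src.name + ".bak")
    print(os.path.exists(src.name + ".bak"))  # True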

History of the term

A diagram from 1978 proposing the expansion of the idea of the API to become a general programming interface, beyond application programs alone

The term API initially described an interface only for end-user-facing programs, known as application programs. This origin is still reflected in the name "application programming interface." Today, the term is broader, including also utility software and even hardware interfaces.

1940s and 1950s

The idea of the API is much older than the term itself. British computer scientists Maurice Wilkes and David Wheeler worked on a modular software library in the 1940s for EDSAC, an early computer. The subroutines in this library were stored on punched paper tape organized in a filing cabinet. This cabinet also contained what Wilkes and Wheeler called a "library catalog" of notes about each subroutine and how to incorporate it into a program. Today, such a catalog would be called an API (or an API specification or API documentation) because it instructs a programmer on how to use (or "call") each subroutine that the programmer needs.

Wilkes and Wheeler's 1951 book The Preparation of Programs for an Electronic Digital Computer contains the first published API specification. Joshua Bloch considers that Wilkes and Wheeler "latently invented" the API because it is more of a concept that is discovered than invented.

Although the people who coined the term API were implementing software on a Univac 1108, the goal of their API was to make hardware-independent programs possible.

1960s and 1970s

The term "application program interface" (without an ‑ing suffix) is first recorded in a paper called Data structures and techniques for remote computer graphics presented at an AFIPS conference in 1968. The authors of this paper use the term to describe the interaction of an application—a graphics program in this case—with the rest of the computer system. A consistent application interface (consisting of Fortran subroutine calls) was intended to free the programmer from dealing with idiosyncrasies of the graphics display device, and to provide hardware independence if the computer or the display were replaced.

The term was introduced to the field of databases by C. J. Date in a 1974 paper called The Relational and Network Approaches: Comparison of the Application Programming Interface. An API became a part of the ANSI/SPARC framework for database management systems. This framework treated the application programming interface separately from other interfaces, such as the query interface. Database professionals in the 1970s observed these different interfaces could be combined; a sufficiently rich application interface could support the other interfaces as well.

This observation led to APIs that supported all types of programming, not just application programming.

1990s

By 1990, the API was defined simply as "a set of services available to a programmer for performing certain tasks" by technologist Carl Malamud.

The idea of the API was expanded again with the dawn of remote procedure calls and web APIs. As computer networks became common in the 1970s and 1980s, programmers wanted to call libraries located not only on their local computers but on computers located elsewhere. These remote procedure calls were well supported by the Java language in particular. In the 1990s, with the spread of the internet, standards like CORBA, COM, and DCOM competed to become the most common way to expose API services.

2000s

Roy Fielding's dissertation Architectural Styles and the Design of Network-based Software Architectures at UC Irvine in 2000 outlined Representational state transfer (REST) and described the idea of a "network-based Application Programming Interface" that Fielding contrasted with traditional "library-based" APIs. XML and JSON web APIs saw widespread commercial adoption beginning in 2000 and continuing as of 2022. The web API is now the most common meaning of the term API.

The Semantic Web proposed by Tim Berners-Lee in 2001 included "semantic APIs" that recast the API as an open, distributed data interface rather than a software behavior interface. Proprietary interfaces and agents became more widespread than open ones, but the idea of the API as a data interface took hold. Because web APIs are widely used to exchange data of all kinds online, API has become a broad term describing much of the communication on the internet. When used in this way, the term API has overlap in meaning with the term communication protocol.

Usage

Libraries and frameworks

The interface to a software library is one type of API. The API describes and prescribes the "expected behavior" (a specification) while the library is an "actual implementation" of this set of rules.

A single API can have multiple implementations (or none, being abstract) in the form of different libraries that share the same programming interface.

The separation of the API from its implementation can allow programs written in one language to use a library written in another. For example, because Scala and Java compile to compatible bytecode, Scala developers can take advantage of any Java API.

API use can vary depending on the type of programming language involved. An API for a procedural language such as Lua could consist primarily of basic routines to execute code, manipulate data, or handle errors, while an API for an object-oriented language, such as Java, would provide a specification of classes and their class methods. Hyrum's law states that "With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody." Meanwhile, several studies show that most applications that use an API tend to use a small part of the API.

Language bindings are also APIs. By mapping the features and capabilities of one language to an interface implemented in another language, a language binding allows a library or service written in one language to be used when developing in another language.

Tools such as SWIG and F2PY, a Fortran-to-Python interface generator, facilitate the creation of such interfaces.
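
A language binding in miniature, using Python's ctypes module to map a C library function into a Python callable; this is the kind of mapping that SWIG and F2PY automate at scale. The sketch assumes a Linux system where the C standard library is available as "libc.so.6":

    import ctypes

    # Load the C standard library (Linux name assumed; differs on other OSes).
    libc = ctypes.CDLL("libc.so.6")

    # Declare the C signature: size_t strlen(const char *s);
    libc.strlen.argtypes = [ctypes.c_char_p]
    libc.strlen.restype = ctypes.c_size_t

    # The C function is now callable from Python through the binding.
    print(libc.strlen(b"interoperability"))  # 16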

An API can also be related to a software framework: a framework can be based on several libraries implementing several APIs, but unlike the normal use of an API, the access to the behavior built into the framework is mediated by extending its content with new classes plugged into the framework itself.

Moreover, the overall program flow of control can be out of the control of the caller and in the framework's hands by inversion of control or a similar mechanism.
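
The difference is easiest to see in code. With a plain library the application calls the API; with a framework, the application registers code and the framework decides when to call it. The tiny "framework" below is invented purely to illustrate inversion of control:

    class MiniFramework:
        """An invented event-loop framework; real frameworks are far richer."""

        def __init__(self):
            self.handlers = []

        def register(self, handler):
            # The application plugs its code into the framework...
            self.handlers.append(handler)

        def run(self, events):
            # ...but the framework owns the flow of control.
            for event in events:
                for handler in self.handlers:
                    handler(event)

    app = MiniFramework()
    app.register(lambda event: print("handled:", event))
    app.run(["start", "stop"])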

Operating systems

An API can specify the interface between an application and the operating system. POSIX, for example, provides a set of common API specifications that aim to enable an application written for a POSIX conformant operating system to be compiled for another POSIX conformant operating system.

Linux and Berkeley Software Distribution are examples of operating systems that implement the POSIX APIs.
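
A small illustration of what that portability means in practice: Python's os module exposes thin wrappers around the POSIX file calls, and the snippet below behaves the same on any POSIX-conformant system:

    import os

    # os.open/os.write/os.close mirror the POSIX open(2)/write(2)/close(2) calls.
    fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    os.write(fd, b"portable across POSIX systems\n")
    os.close(fd)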

Microsoft has shown a strong commitment to a backward-compatible API, particularly within its Windows API (Win32) library, so older applications may run on newer versions of Windows using an executable-specific setting called "Compatibility Mode".

An API differs from an application binary interface (ABI) in that an API is source code based while an ABI is binary based. For instance, POSIX provides APIs while the Linux Standard Base provides an ABI.

Remote APIs

Remote APIs allow developers to manipulate remote resources through protocols, specific standards for communication that allow different technologies to work together, regardless of language or platform. For example, the Java Database Connectivity API allows developers to query many different types of databases with the same set of functions, while the Java remote method invocation API uses the Java Remote Method Protocol to allow invocation of functions that operate remotely but appear local to the developer.

Therefore, remote APIs are useful in maintaining the object abstraction in object-oriented programming; a method call, executed locally on a proxy object, invokes the corresponding method on the remote object, using the remoting protocol, and acquires the result to be used locally as a return value.

A modification of the proxy object will also result in a corresponding modification of the remote object.
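
A stripped-down version of that proxy mechanism, with the remoting protocol faked by a local function so the sketch stays self-contained; a real remote API (RMI, XML-RPC, and the like) would put a network and a wire format in the middle. All class and function names are invented:

    class RemoteCalculator:
        """Lives on the 'server' side."""
        def add(self, a, b):
            return a + b

    def fake_transport(method, args):
        # Stands in for the remoting protocol: marshal, send, execute, return.
        return getattr(RemoteCalculator(), method)(*args)

    class Proxy:
        """Lives on the 'client' side; method calls look local."""
        def __getattr__(self, method):
            return lambda *args: fake_transport(method, args)

    calc = Proxy()
    print(calc.add(2, 3))  # executed "remotely", used locally as a return value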

Web APIs

Web APIs are services accessed from client devices (mobile phones, laptops, etc.) via a web server using the Hypertext Transfer Protocol (HTTP). Client devices send requests in the form of HTTP requests and receive response messages, usually in JavaScript Object Notation (JSON) or Extensible Markup Language (XML) format. Developers typically use web APIs to query a server for a specific set of data from that server.
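
The request/response cycle just described, as a client-side sketch using only the standard library. The URL is a placeholder, not a real service; substitute any JSON-returning endpoint to run it:

    import json
    import urllib.request

    # Hypothetical endpoint; replace with a real JSON-returning URL.
    url = "https://api.example.com/v1/shipping-rates?dest=NL"

    with urllib.request.urlopen(url) as response:   # the HTTP request
        rates = json.load(response)                 # the JSON response body
    print(rates)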

An example might be a shipping company API that can be added to an eCommerce-focused website to facilitate ordering shipping services and automatically include current shipping rates, without the site developer having to enter the shipper's rate table into a web database. While "web API" historically has been virtually synonymous with web service, the recent trend (so-called Web 2.0) has been moving away from Simple Object Access Protocol (SOAP) based web services and service-oriented architecture (SOA) towards more direct representational state transfer (REST) style web resources and resource-oriented architecture (ROA). Part of this trend is related to the Semantic Web movement toward Resource Description Framework (RDF), a concept to promote web-based ontology engineering technologies. Web APIs allow the combination of multiple APIs into new applications known as mashups.

In the social media space, web APIs have allowed web communities to facilitate sharing content and data between communities and applications. In this way, content that is created in one place dynamically can be posted and updated to multiple locations on the web. For example, Twitter's REST API allows developers to access core Twitter data and the Search API provides methods for developers to interact with Twitter Search and trends data.

Design

The design of an API has a significant impact on its usage. First, the design of programming interfaces represents an important part of software architecture, the organization of a complex piece of software. The principle of information hiding describes the role of programming interfaces as enabling modular programming by hiding the implementation details of the modules, so that users of modules need not understand the complexities inside them. Beyond information hiding, other metrics for measuring the usability of an API include functional efficiency, overall correctness, and learnability for novices. One straightforward and commonly adopted way of designing APIs is to follow Nielsen's heuristic evaluation guidelines. The Factory method pattern is also typical in designing APIs due to its reusable nature. Thus, the design of an API attempts to provide only the tools a user would expect.

Synchronous versus asynchronous

An application programming interface can be synchronous or asynchronous. A synchronous API call is a design pattern where the call site is blocked while waiting for the called code to finish. With an asynchronous API call, however, the call site is not blocked while waiting for the called code to finish, and instead the calling thread is notified when the reply arrives.
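
The two patterns side by side, using Python's asyncio as the notification mechanism; the 0.1-second sleep stands in for a slow operation such as a network call:

    import asyncio
    import time

    def fetch_sync():
        time.sleep(0.1)           # the call site is blocked for the full duration
        return "reply"

    async def fetch_async():
        await asyncio.sleep(0.1)  # the caller is suspended, not blocked,
        return "reply"            # and resumed when the result is ready

    print(fetch_sync())
    print(asyncio.run(fetch_async()))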

Security

API security is critical when developing a public-facing API. Common threats include SQL injection, denial-of-service (DoS) attacks, broken authentication, and exposure of sensitive data. Without proper security practices, bad actors can gain access to information they should not have, or even gain privileges to make changes to your server. Some common security practices include proper connection security using HTTPS, content security to mitigate data injection attacks, and requiring an API key to use your service. Many public-facing API services require an assigned API key and will refuse to serve data unless the key is sent with the request.
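
A minimal sketch of the API-key practice: the key travels in a request header over HTTPS, and the service refuses requests without it. The endpoint and header name are placeholders; real services document their own (often "Authorization" or "X-API-Key"):

    import urllib.request

    request = urllib.request.Request(
        "https://api.example.com/v1/data",       # hypothetical endpoint
        headers={"X-API-Key": "MY_SECRET_KEY"},  # never hard-code real keys
    )
    # Over HTTPS the header is encrypted in transit; without the key the
    # service would typically answer 401 Unauthorized or 403 Forbidden.
    with urllib.request.urlopen(request) as response:
        print(response.status)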

Release policies

APIs are one of the more common ways technology companies integrate. Those that provide and use APIs are considered members of a business ecosystem.

The main policies for releasing an API are:

  • Private: The API is for internal company use only.
  • Partner: Only specific business partners can use the API. For example, vehicle for hire companies such as Uber and Lyft allow approved third-party developers to directly order rides from within their apps. This allows the companies to exercise quality control by curating which apps have access to the API and provides them with an additional revenue stream.
  • Public: The API is available for use by the public. For example, Microsoft makes the Windows API public, and Apple releases its API Cocoa so that software can be written for their platforms. Not all public APIs are generally accessible by everybody. For example, Internet service providers like Cloudflare or Voxility use RESTful APIs to allow customers and resellers access to their infrastructure information, DDoS stats, network performance, or dashboard controls. Access to such APIs is granted either by "API tokens" or by customer status validations.

Public API implications

An important factor when an API becomes public is its "interface stability". Changes to the API—for example adding new parameters to a function call—could break compatibility with the clients that depend on that API.

When parts of a publicly presented API are subject to change and thus not stable, such parts of a particular API should be documented explicitly as "unstable". For example, in the Google Guava library, the parts that are considered unstable, and that might change soon, are marked with the Java annotation @Beta.

A public API can sometimes declare parts of itself as deprecated or rescinded. This usually means that part of the API should be considered a candidate for being removed, or modified in a backward-incompatible way. Marking parts as deprecated gives developers time to transition away from parts of the API that will be removed or not supported in the future.
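
In Python, the conventional counterpart of annotations like Guava's @Beta is a runtime DeprecationWarning: callers keep working but are told to migrate before the deprecated part is removed. A small sketch with invented function names:

    import warnings

    def new_endpoint():
        return "result"

    def old_endpoint():
        # Still functional, but flagged for removal in a future release.
        warnings.warn("old_endpoint() is deprecated; use new_endpoint()",
                      DeprecationWarning, stacklevel=2)
        return new_endpoint()

    print(old_endpoint())  # works, with a deprecation warning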

On February 19, 2020, Akamai published their annual "State of the Internet" report, showcasing the growing trend of cybercriminals targeting public API platforms at financial services worldwide. From December 2017 through November 2019, Akamai witnessed 85.42 billion credential violation attacks. About 20%, or 16.55 billion, were against hostnames defined as API endpoints. Of these, 473.5 million have targeted financial services sector organizations.

Documentation

API documentation describes the services an API offers and how to use those services, aiming to cover everything a client would need to know for practical purposes.

Documentation is crucial for the development and maintenance of applications using the API. API documentation is traditionally found in documentation files but can also be found in social media such as blogs, forums, and Q&A websites.

Traditional documentation files are often presented via a documentation system, such as Javadoc or Pydoc, that has a consistent appearance and structure. However, the types of content included in the documentation differ from API to API.

In the interest of clarity, API documentation may include a description of classes and methods in the API as well as "typical usage scenarios, code snippets, design rationales, performance discussions, and contracts", but implementation details of the API services themselves are usually omitted.

Reference documentation for a REST API can be generated automatically from an OpenAPI document, which is a machine-readable text file that uses a prescribed format and syntax defined in the OpenAPI Specification. The OpenAPI document defines basic information such as the API's name and description, as well as describing operations the API provides access to.
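
A skeletal OpenAPI document, built here as a Python dict and dumped to JSON so the sketch stays runnable; real documents additionally describe parameters, request bodies, and response schemas. The API name and path are invented:

    import json

    openapi_doc = {
        "openapi": "3.0.3",                # version of the specification
        "info": {"title": "Shipping API",  # the API's name...
                 "version": "1.0.0"},      # ...and its own version
        "paths": {                         # operations the API provides
            "/rates": {
                "get": {
                    "summary": "List current shipping rates",
                    "responses": {"200": {"description": "A JSON array of rates"}},
                }
            }
        },
    }
    print(json.dumps(openapi_doc, indent=2))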

API documentation can be enriched with metadata information like Java annotations. This metadata can be used by the compiler, tools, and by the run-time environment to implement custom behaviors or custom handling.

Dispute over copyright protection for APIs

In 2010, Oracle Corporation sued Google for having distributed a new implementation of Java embedded in the Android operating system. Google had not acquired any permission to reproduce the Java API, although permission had been given to the similar OpenJDK project. Google had approached Oracle to negotiate a license for their API, but was turned down due to trust issues. Despite the disagreement, Google chose to use Oracle's code anyway. Judge William Alsup ruled in the Oracle v. Google case that APIs cannot be copyrighted in the U.S., and that a victory for Oracle would have widely expanded copyright protection to a "functional set of symbols" and allowed the copyrighting of simple software commands:

To accept Oracle's claim would be to allow anyone to copyright one version of code to carry out a system of commands and thereby bar all others from writing its different versions to carry out all or part of the same commands.

Alsup's ruling was overturned in 2014 on appeal to the Court of Appeals for the Federal Circuit, though the question of whether such use of APIs constitutes fair use was left unresolved.

In 2016, following a two-week trial, a jury determined that Google's reimplementation of the Java API constituted fair use, but Oracle vowed to appeal the decision. Oracle won on its appeal, with the Court of Appeals for the Federal Circuit ruling that Google's use of the APIs did not qualify for fair use. In 2019, Google appealed to the Supreme Court of the United States over both the copyrightability and fair use rulings, and the Supreme Court granted review. Due to the COVID-19 pandemic, the oral hearings in the case were delayed until October 2020.

The case was decided by the Supreme Court in Google's favor with a ruling of 6–2. Justice Stephen Breyer delivered the opinion of the court and at one point mentioned that "The declaring code is, if copyrightable at all, further than are most computer programs from the core of copyright." This means the code used in APIs is more similar to dictionaries than to novels in terms of copyright protection.

Device driver

From Wikipedia, the free encyclopedia

In computing, a device driver is a computer program that operates or controls a particular type of device attached to a computer. A driver communicates with the device through the computer bus or communications subsystem to which the hardware connects. When a calling program invokes a routine in the driver, the driver issues commands to the device (drives it). Once the device sends data back to the driver, the driver may invoke routines in the original calling program.

Drivers are hardware dependent and operating-system-specific. They usually provide the interrupt handling required for any necessary asynchronous time-dependent hardware interface.

Purpose

The main purpose of device drivers is to provide abstraction by acting as a translator between a hardware device and the applications or operating systems that use it. Programmers can write higher-level application code independently of whatever specific hardware the end-user is using. For example, a high-level application for interacting with a serial port may simply have two functions for "send data" and "receive data". At a lower level, a device driver implementing these functions would communicate to the particular serial port controller installed on a user's computer. The commands needed to control a 16550 UART are much different from the commands needed to control an FTDI serial port converter, but each hardware-specific device driver abstracts these details into the same (or similar) software interface.
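
The serial-port example above, sketched as code: one interface, two hardware-specific "drivers", and an application that calls the same send function regardless of which chip is underneath. Class names and register descriptions are invented for illustration; a real driver would program actual device registers from kernel code, not Python:

    from abc import ABC, abstractmethod

    class SerialDriver(ABC):
        """The common software interface the application sees."""
        @abstractmethod
        def send(self, data: bytes) -> None: ...

    class UART16550Driver(SerialDriver):
        def send(self, data: bytes) -> None:
            # A real 16550 driver would write to the transmit holding register.
            print(f"16550: sent {len(data)} bytes via memory-mapped registers")

    class FTDIDriver(SerialDriver):
        def send(self, data: bytes) -> None:
            # A real FTDI driver would issue USB bulk-out transfers instead.
            print(f"FTDI: sent {len(data)} bytes via USB transfer")

    def application_send(port: SerialDriver, data: bytes) -> None:
        port.send(data)  # the application never sees which hardware is used

    application_send(UART16550Driver(), b"hello")
    application_send(FTDIDriver(), b"hello")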

Development

Writing a device driver requires an in-depth understanding of how the hardware and software of a given platform function. Because drivers require low-level access to hardware functions in order to operate, they typically run in a highly privileged environment and can cause system operational issues if something goes wrong. In contrast, most user-level software on modern operating systems can be stopped without greatly affecting the rest of the system. Even drivers executing in user mode can crash a system if the device is erroneously programmed. These factors make it more difficult and dangerous to diagnose problems.

The task of writing drivers thus usually falls to software engineers or computer engineers who work for hardware-development companies. This is because they have better information than most outsiders about the design of their hardware. Moreover, it was traditionally considered in the hardware manufacturer's interest to guarantee that their clients can use their hardware in an optimum way. Typically, the Logical Device Driver (LDD) is written by the operating system vendor, while the Physical Device Driver (PDD) is implemented by the device vendor. However, in recent years, non-vendors have written numerous device drivers for proprietary devices, mainly for use with free and open source operating systems. In such cases, it is important that the hardware manufacturer provide information on how the device communicates. Although this information can instead be learned by reverse engineering, this is much more difficult with hardware than it is with software.

Microsoft has attempted to reduce system instability due to poorly written device drivers by creating a new framework for driver development, called Windows Driver Frameworks (WDF). This includes User-Mode Driver Framework (UMDF) that encourages development of certain types of drivers—primarily those that implement a message-based protocol for communicating with their devices—as user-mode drivers. If such drivers malfunction, they do not cause system instability. The Kernel-Mode Driver Framework (KMDF) model continues to allow development of kernel-mode device drivers, but attempts to provide standard implementations of functions that are known to cause problems, including cancellation of I/O operations, power management, and plug and play device support.

Apple has an open-source framework for developing drivers on macOS, called I/O Kit.

In Linux environments, programmers can build device drivers as parts of the kernel, separately as loadable modules, or as user-mode drivers (for certain types of devices where kernel interfaces exist, such as for USB devices). Makedev includes a list of the devices in Linux, including ttyS (terminal), lp (parallel port), hd (disk), loop, and sound (these include mixer, sequencer, dsp, and audio).

Microsoft Windows .sys files and Linux .ko files can contain loadable device drivers. The advantage of loadable device drivers is that they can be loaded only when necessary and then unloaded, thus saving kernel memory.

Kernel mode vs. user mode

Device drivers, particularly on modern Microsoft Windows platforms, can run in kernel-mode (Ring 0 on x86 CPUs) or in user-mode (Ring 3 on x86 CPUs). The primary benefit of running a driver in user mode is improved stability, since a poorly written user-mode device driver cannot crash the system by overwriting kernel memory. On the other hand, user/kernel-mode transitions usually impose a considerable performance overhead, thus making kernel-mode drivers preferred for low-latency networking.

Kernel space can be accessed from user mode only through system calls. End-user programs, like the UNIX shell or other GUI-based applications, are part of user space. These applications interact with hardware through kernel-supported functions.

Applications

Because of the diversity of modern hardware and operating systems, drivers operate in many different environments and may interface with many levels of abstraction. Common levels of abstraction for device drivers include:

  • For hardware:
    • Interfacing directly
    • Writing to or reading from a device control register
    • Using some higher-level interface (e.g. Video BIOS)
    • Using another lower-level device driver (e.g. file system drivers using disk drivers)
    • Simulating work with hardware, while doing something entirely different
  • For software:
    • Allowing the operating system direct access to hardware resources
    • Implementing only primitives
    • Implementing an interface for non-driver software (e.g. TWAIN)
    • Implementing a language, sometimes quite high-level (e.g. PostScript)

So choosing and installing the correct device drivers for given hardware is often a key component of computer system configuration.

Virtual device drivers

Virtual device drivers represent a particular variant of device drivers. They are used to emulate a hardware device, particularly in virtualization environments, for example when a DOS program is run on a Microsoft Windows computer or when a guest operating system is run on, for example, a Xen host. Instead of enabling the guest operating system to dialog with hardware, virtual device drivers take the opposite role and emulate a piece of hardware, so that the guest operating system and its drivers running inside a virtual machine can have the illusion of accessing real hardware. Attempts by the guest operating system to access the hardware are routed to the virtual device driver in the host operating system as, e.g., function calls. The virtual device driver can also send simulated processor-level events like interrupts into the virtual machine.

Virtual devices may also operate in a non-virtualized environment. For example, a virtual network adapter is used with a virtual private network, while a virtual disk device is used with iSCSI. A well-known example of a virtual device driver is Daemon Tools.

There are several variants of virtual device drivers, such as VxDs, VLMs, and VDDs.

Open source drivers

Solaris descriptions of commonly used device drivers:

  • fas: Fast/wide SCSI controller
  • hme: Fast (10/100 Mbit/s) Ethernet
  • isp: Differential SCSI controllers and the SunSwift card
  • glm: (Gigabaud Link Module) UltraSCSI controllers
  • scsi: Small Computer System Interface (SCSI) devices
  • sf: soc+ or socal Fibre Channel Arbitrated Loop (FCAL)
  • soc: SPARC Storage Array (SSA) controllers and the control device
  • socal: Serial optical controllers for FCAL (soc+)

Identifiers

A device on the PCI bus or USB is identified by two IDs, each consisting of four hexadecimal digits. The vendor ID identifies the vendor of the device. The device ID identifies a specific device from that manufacturer/vendor.

A PCI device often has an ID pair for the main chip of the device, and also a subsystem ID pair identifying the vendor, which may be different from the chip manufacturer.
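
Working with those two IDs in practice: on Linux, lspci -nn prints them in vendor:device form. The pair below uses Intel's vendor ID (8086) with one of its Ethernet controller device IDs as an example:

    def parse_pci_id(pair: str):
        # "8086:1533" -> (0x8086, 0x1533): vendor ID, then device ID.
        vendor, device = pair.split(":")
        return int(vendor, 16), int(device, 16)

    vendor_id, device_id = parse_pci_id("8086:1533")
    print(f"vendor=0x{vendor_id:04x} device=0x{device_id:04x}")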

Security

Operating system (OS) kernels often run a large number of diverse and customized device drivers, and these drivers often contain various bugs and vulnerabilities, making them a target for exploits. Bring Your Own Vulnerable Driver (BYOVD) attacks use signed, old drivers that contain flaws that allow hackers to insert malicious code into the kernel.

There is a lack of effective kernel vulnerability detection tools, especially for closed-source OSes such as Microsoft Windows, where the source code of the device drivers is mostly not public and the drivers often have many privileges.

Such vulnerabilities also exist in laptop drivers, WiFi and Bluetooth drivers, gaming/graphics drivers, and printer drivers.

A group of security researchers considers the lack of isolation one of the main factors undermining kernel security, and has published an isolation framework to protect operating system kernels, primarily the monolithic Linux kernel, whose drivers, according to them, receive about 80,000 commits per year.

An important consideration in the design of a kernel is the support it provides for protection from faults (fault tolerance) and from malicious behaviours (security). These two aspects are usually not clearly distinguished, and the adoption of this distinction in the kernel design leads to the rejection of a hierarchical structure for protection.

The mechanisms or policies provided by the kernel can be classified according to several criteria, including: static (enforced at compile time) or dynamic (enforced at run time); pre-emptive or post-detection; according to the protection principles they satisfy (e.g., Denning); whether they are hardware supported or language based; whether they are more an open mechanism or a binding policy; and many more.

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...