Thursday, October 10, 2019

Information management

From Wikipedia, the free encyclopedia
 
Information management (IM) concerns a cycle of organizational activity: the acquisition of information from one or more sources, the custodianship and the distribution of that information to those who need it, and its ultimate disposition through archiving or deletion.

This cycle of organisational involvement with information involves a variety of stakeholders, including those who are responsible for assuring the quality, accessibility and utility of acquired information; those who are responsible for its safe storage and disposal; and those who need it for decision making. Stakeholders might have rights to originate, change, distribute or delete information according to organisational information management policies.

Information management embraces all the generic concepts of management, including the planning, organizing, structuring, processing, controlling, evaluation and reporting of information activities, all of which are needed in order to meet the needs of those with organisational roles or functions that depend on information. These generic concepts allow information to be presented to the correct audience or group of people. Once individuals are able to put that information to use, it gains further value.
Information management is closely related to, and overlaps with, the management of data, systems, technology, processes and – where the availability of information is critical to organisational success – strategy. This broad view of the realm of information management contrasts with the earlier, more traditional view, that the life cycle of managing information is an operational matter that requires specific procedures, organisational capabilities and standards that deal with information as a product or a service.

History

Emergent ideas out of data management

In the 1970s, the management of information largely concerned matters closer to what would now be called data management: punched cards, magnetic tapes and other record-keeping media, involving a life cycle of such formats requiring origination, distribution, backup, maintenance and disposal. At this time the huge potential of information technology began to be recognised: for example a single chip storing a whole book, or electronic mail moving messages instantly around the world – remarkable ideas at the time. With the proliferation of information technology and the extending reach of information systems in the 1980s and 1990s, information management took on a new form. Progressive businesses such as British Petroleum transformed the vocabulary of what was then "IT management", so that “systems analysts” became “business analysts”, “monopoly supply” became a mixture of “insourcing” and “outsourcing”, and the large IT function was transformed into “lean teams” that began to allow some agility in the processes that harness information for business benefit. The scope of senior management interest in information at British Petroleum extended to the creation of value through improved business processes, based upon the effective management of information, permitting the implementation of appropriate information systems (or “applications”) that were operated on outsourced IT infrastructure. In this way, information management was no longer a simple job that could be performed by anyone who had nothing else to do; it became highly strategic and a matter for senior management attention. An understanding of the technologies involved, an ability to manage information systems projects and business change well, and a willingness to align technology and business strategies all became necessary.

Positioning information management in the bigger picture

In the transitional period leading up to the strategic view of information management, Venkatraman, a strong advocate of this transition and transformation, proffered a simple arrangement of ideas that succinctly brought together the management of data, information and knowledge (see the figure). He argued that:
  • Data that is maintained in IT infrastructure has to be interpreted in order to render information.
  • The information in our information systems has to be understood in order to emerge as knowledge.
  • Knowledge allows managers to take effective decisions.
  • Effective decisions have to lead to appropriate actions.
  • Appropriate actions are expected to deliver meaningful results.
This simple model summarises a presentation by Venkatraman in 1996, as reported by Ward and Peppard (2002, page 207).
 
This is often referred to as the DIKAR model: Data, Information, Knowledge, Action and Result. It gives a strong clue as to the layers involved in aligning technology and organisational strategies, and it can be seen as a pivotal moment in changing attitudes to information management. The recognition that information management is an investment that must deliver meaningful results is important to all modern organisations that depend on information and good decision-making for their success.

Theoretical background

Behavioural and organisational theories

It is commonly believed that good information management is crucial to the smooth working of organisations, and although there is no commonly accepted theory of information management per se, behavioural and organisational theories help. According to the behavioural science theory of management, mainly developed at Carnegie Mellon University and prominently supported by March and Simon, most of what goes on in modern organizations is actually information handling and decision making. One crucial factor in information handling and decision making is an individual's ability to process information and to make decisions under limitations that might derive from the context: a person's age, the situational complexity, or a lack of requisite quality in the information that is at hand – all of which is exacerbated by the rapid advance of technology and the new kinds of system that it enables, especially as the social web emerges as a phenomenon that business cannot ignore. And yet, well before there was any general recognition of the importance of information management in organisations, March and Simon argued that organizations have to be considered as cooperative systems, with a high level of information processing and a vast need for decision making at various levels. Instead of using the model of the "economic man", as advocated in classical theory, they proposed "administrative man" as an alternative, based on their argumentation about the cognitive limits of rationality. Additionally they proposed the notion of satisficing, which entails searching through the available alternatives until an acceptability threshold is met – another idea that still has currency.
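
The satisficing idea lends itself to a short illustration. Below is a minimal sketch in Python, under assumed data: a hypothetical list of scored alternatives and an acceptability threshold. The search stops at the first option that is "good enough" rather than scanning every alternative for the optimum.

    def satisfice(alternatives, score, threshold):
        """Return the first alternative whose score meets the threshold.

        Unlike optimisation, the search stops as soon as an acceptable
        option is found, mirroring the "administrative man" idea.
        """
        for option in alternatives:
            if score(option) >= threshold:
                return option
        return None  # no acceptable alternative was found

    # Hypothetical example: choosing a supplier by a single quality score.
    suppliers = [("A", 0.62), ("B", 0.78), ("C", 0.95)]
    print(satisfice(suppliers, score=lambda s: s[1], threshold=0.75))
    # ('B', 0.78) - acceptable, even though C would score higher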

Economic theory

In addition to the organisational factors mentioned by March and Simon, there are other issues that stem from economic and environmental dynamics. There is the cost of collecting and evaluating the information needed to take a decision, including the time and effort required. The transaction cost associated with information processes can be high. In particular, established organizational rules and procedures can prevent the taking of the most appropriate decision, leading to sub-optimum outcomes. This is an issue that has been presented as a major problem with bureaucratic organizations that lose the economies of strategic change because of entrenched attitudes.

Strategic information management

Background

According to the Carnegie Mellon School, an organization's ability to process information is at the core of organizational and managerial competency, and an organization's strategies must be designed to improve information processing capability. As the information systems that provide that capability became formalised and automated, competencies were severely tested at many levels. It was recognised that organisations needed to be able to learn and adapt in ways that were never so evident before, and academics began to organise and publish definitive works concerning the strategic management of information and information systems. Concurrently, the ideas of business process management and knowledge management emerged, although much of the optimistic early thinking about business process redesign has since been discredited in the information management literature. In the strategic studies field, understanding the information environment – conceived as the aggregate of individuals, organizations, and systems that collect, process, disseminate, or act on information – is considered of the highest priority. This environment consists of three interrelated dimensions which continuously interact with individuals, organizations, and systems: the physical, informational, and cognitive.

Aligning technology and business strategy with information management

Venkatraman has provided a simple view of the requisite capabilities of an organisation that wants to manage information well – the DIKAR model (see above). He also worked with others to understand how technology and business strategies could be appropriately aligned in order to identify specific capabilities that are needed. This work was paralleled by other writers in the world of consulting, practice and academia.

A contemporary portfolio model for information

Bytheway has collected and organised basic tools and techniques for information management in a single volume. At the heart of his view of information management is a portfolio model that takes account of the surging interest in external sources of information and the need to organise unstructured external information so as to make it useful (see the figure).

This portfolio model organizes issues of internal and external sourcing and management of information, that may be either structured or unstructured.

Such an information portfolio as this shows how information can be gathered and usefully organised, in four stages:

Stage 1: Taking advantage of public information: recognise and adopt well-structured external schemes of reference data, such as post codes, weather data, GPS positioning data and travel timetables, exemplified in the personal computing press.

Stage 2: Tagging the noise on the world wide web: use existing schemes such as post codes and GPS data, or, more typically, add “tags”; or construct a formal ontology that provides structure. Shirky provides an overview of these two approaches.

Stage 3: Sifting and analysing: in the wider world the generalised ontologies that are under development extend to hundreds of entities and hundreds of relations between them and provide the means to elicit meaning from large volumes of data. Structured data in databases works best when that structure reflects a higher-level information model – an ontology, or an entity-relationship model.

Stage 4: Structuring and archiving: with the large volume of data available from sources such as the social web and from the miniature telemetry systems used in personal health management, new ways to archive and then trawl data for meaningful information are needed. Map-reduce methods, originating in functional programming, are a more recent way of eliciting information from large archival datasets; they are becoming interesting to regular businesses that have very large data resources to work with, but they require advanced multi-processor resources.
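
As a rough illustration of the map-reduce idea, here is a minimal single-machine sketch in Python (a real deployment would distribute the map and reduce phases across many processors); the input lines and the "error"-counting job are hypothetical.

    from collections import defaultdict
    from itertools import chain

    # Hypothetical input: lines of text drawn from a large archive.
    lines = ["error disk full", "ok", "error timeout", "ok", "error disk full"]

    # Map phase: emit (key, 1) pairs for each record of interest.
    def map_phase(line):
        return [(word, 1) for word in line.split() if word == "error"]

    # Shuffle phase: group the intermediate pairs by key.
    grouped = defaultdict(list)
    for key, value in chain.from_iterable(map_phase(line) for line in lines):
        grouped[key].append(value)

    # Reduce phase: combine the values collected for each key.
    counts = {key: sum(values) for key, values in grouped.items()}
    print(counts)  # {'error': 3}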

Competencies to manage information well

The Information Management Body of Knowledge was made available on the world wide web in 2004 and sets out to show that the required management competencies to derive real benefits from an investment in information are complex and multi-layered. The framework model that is the basis for understanding competencies comprises six “knowledge” areas and four “process” areas:

This framework is the basis for organising the "Information Management Body of Knowledge", first made available in 2004. This version was adapted by the addition of "Business information" in 2014.
The information management knowledge areas
The IMBOK is based on the argument that there are six areas of required management competency, two of which (“business process management” and “business information management”) are very closely related.
  • Information technology: The pace of change of technology and the pressure to constantly acquire the newest technological products can undermine the stability of the infrastructure that supports systems, and thereby of the business processes it optimises and the benefits it delivers. It is necessary to manage the “supply side” and recognise that technology is, increasingly, becoming a commodity.
  • Information system: While historically information systems were developed in-house, over the years it has become possible to acquire most of the software systems that an organisation needs from the software package industry. However, there is still the potential for competitive advantage from the implementation of new systems ideas that deliver to the strategic intentions of organisations.
  • Business processes and Business information: Information systems are applied to business processes in order to improve them, and they bring data to the business that becomes useful as business information. Business process management is still seen as a relatively new idea because it is not universally adopted, and it has been difficult in many cases; business information management is even more of a challenge.
  • Business benefit: What are the benefits that we are seeking? It is necessary not only to be brutally honest about what can be achieved, but also to ensure the active management and assessment of benefit delivery. Since the emergence and popularisation of the Balanced scorecard there has been huge interest in business performance management, but until the turn of the millennium little serious effort was made to relate business performance management to the benefits of information technology investments and the introduction of new information systems.
  • Business strategy: Although a long way from the workaday issues of managing information in organisations, strategy in most organisations simply has to be informed by information technology and information systems opportunities, whether to address poor performance or to improve differentiation and competitiveness. Strategic analysis tools such as the value chain and critical success factor analysis are directly dependent on proper attention to the information that is (or could be) managed.
The information management processes
Even with full capability and competency within the six knowledge areas, it is argued that things can still go wrong. The problem lies in the migration of ideas and information management value from one area of competency to another. Summarising what Bytheway explains in some detail (and supported by selected secondary references):
  • Projects: Information technology is without value until it is engineered into information systems that meet the needs of the business by means of good project management.
  • Business change: The best information systems succeed in delivering benefits through the achievement of change within the business systems, but people do not appreciate change that makes new demands upon their skills in the ways that new information systems often do. Contrary to common expectations, there is some evidence that the public sector has succeeded with information technology induced business change.
  • Business operations: With new systems in place, with business processes and business information improved, and with staff finally ready and able to work with new processes, then the business can get to work, even when new systems extend far beyond the boundaries of a single business.
  • Performance management: Investments are no longer solely about financial results; financial success must be balanced with internal efficiency, customer satisfaction, and organisational learning and development.

Summary

There are always many ways to see a business, and the information management viewpoint is only one way. It is important to remember that other areas of business activity will also contribute to strategy – it is not only good information management that moves a business forwards. Corporate governance, human resource management, product development and marketing will all have an important role to play in strategic ways, and we must not see one domain of activity alone as the sole source of strategic success. On the other hand, corporate governance, human resource management, product development and marketing are all dependent on effective information management, and so in the final analysis our competency to manage information well, on the broad basis that is offered here, can be said to be predominant.

Operationalising information management

Managing requisite change

Organizations are often confronted with many information management challenges and issues at the operational level, especially when organisational change is engendered. The novelty of new systems architectures and a lack of experience with new styles of information management require a level of organisational change management that is notoriously difficult to deliver. As a result of a general organisational reluctance to change in order to enable new forms of information management, there might be (for example): a shortfall in the requisite resources, a failure to acknowledge new classes of information and the new procedures that use them, a lack of support from senior management leading to a loss of strategic vision, and even political manoeuvring that undermines the operation of the whole organisation. However, the implementation of new forms of information management should normally lead to operational benefits.

The early work of Galbraith

In early work, taking an information processing view of organisation design, Jay Galbraith has identified five tactical areas to increase information processing capacity and reduce the need for information processing.
  • Developing, implementing, and monitoring all aspects of the “environment” of an organization.
  • Creation of slack resources so as to decrease the load on the overall hierarchy of resources and to reduce information processing relating to overload.
  • Creation of self-contained tasks with defined boundaries and that can achieve proper closure, and with all the resources at hand required to perform the task.
  • Recognition of lateral relations that cut across functional units, so as to move decision power to the process instead of fragmenting it within the hierarchy.
  • Investment in vertical information systems that route information flows for a specific task (or set of tasks) in accordance with the applied business logic.

The matrix organisation

The lateral relations concept leads to an organizational form that is different from the simple hierarchy, the “matrix organization”. This brings together the vertical (hierarchical) view of an organisation and the horizontal (product or project) view of the work that it does, which is visible to the outside world. The creation of a matrix organization is one management response to a persistent fluidity of external demand, avoiding multifarious and spurious responses to episodic demands that tend to be dealt with individually.

Data modeling

From Wikipedia, the free encyclopedia
 
The data modeling process. The figure illustrates the way data models are developed and used today. A conceptual data model is developed based on the data requirements for the application that is being developed, perhaps in the context of an activity model. The data model will normally consist of entity types, attributes, relationships, integrity rules, and the definitions of those objects. This is then used as the start point for interface or database design.
 
Data modeling in software engineering is the process of creating a data model for an information system by applying certain formal techniques.

Overview

Data modeling is a process used to define and analyze data requirements needed to support the business processes within the scope of corresponding information systems in organizations. Therefore, the process of data modeling involves professional data modelers working closely with business stakeholders, as well as potential users of the information system. 

There are three different types of data models produced while progressing from requirements to the actual database to be used for the information system. The data requirements are initially recorded as a conceptual data model which is essentially a set of technology independent specifications about the data and is used to discuss initial requirements with the business stakeholders. The conceptual model is then translated into a logical data model, which documents structures of the data that can be implemented in databases. Implementation of one conceptual data model may require multiple logical data models. The last step in data modeling is transforming the logical data model to a physical data model that organizes the data into tables, and accounts for access, performance and storage details. Data modeling defines not just data elements, but also their structures and the relationships between them.

Data modeling techniques and methodologies are used to model data in a standard, consistent, predictable manner in order to manage it as a resource. The use of data modeling standards is strongly recommended for all projects requiring a standard means of defining and analyzing data within an organization, e.g., using data modeling:
  • to assist business analysts, programmers, testers, manual writers, IT package selectors, engineers, managers, related organizations and clients to understand and use an agreed semi-formal model of the concepts of the organization and how they relate to one another
  • to manage data as a resource
  • for the integration of information systems
  • for designing databases/data warehouses (aka data repositories)
Data modeling may be performed during various types of projects and in multiple phases of projects. Data models are progressive; there is no such thing as the final data model for a business or application. Instead a data model should be considered a living document that will change in response to a changing business. The data models should ideally be stored in a repository so that they can be retrieved, expanded, and edited over time. Whitten et al. (2004) determined two types of data modeling:
  • Strategic data modeling: This is part of the creation of an information systems strategy, which defines an overall vision and architecture for information systems. Information technology engineering is a methodology that embraces this approach.
  • Data modeling during systems analysis: In systems analysis logical data models are created as part of the development of new databases.
Data modeling is also used as a technique for detailing business requirements for specific databases. It is sometimes called database modeling because a data model is eventually implemented in a database.

Data modeling topics

Data models

How data models deliver benefit.
 
Data models provide a framework for data to be used within information systems by providing specific definition and format. If a data model is used consistently across systems then compatibility of data can be achieved. If the same data structures are used to store and access data then different applications can share data seamlessly. The results of this are indicated in the diagram. However, systems and interfaces are often expensive to build, operate, and maintain. They may also constrain the business rather than support it. This may occur when the quality of the data models implemented in systems and interfaces is poor.

Some common problems found in data models are:
  • Business rules, specific to how things are done in a particular place, are often fixed in the structure of a data model. This means that small changes in the way business is conducted lead to large changes in computer systems and interfaces. So, business rules need to be implemented in a flexible way that does not result in complicated dependencies; rather, the data model should be flexible enough that changes in the business can be implemented within the data model relatively quickly and efficiently.
  • Entity types are often not identified, or are identified incorrectly. This can lead to replication of data, data structure and functionality, together with the attendant costs of that duplication in development and maintenance. Therefore, data definitions should be made as explicit and easy to understand as possible to minimize misinterpretation and duplication.
  • Data models for different systems are arbitrarily different. The result of this is that complex interfaces are required between systems that share data. These interfaces can account for between 25% and 70% of the cost of current systems. Required interfaces should be considered from the outset while designing a data model, as a data model on its own is not usable without interfaces between different systems.
  • Data cannot be shared electronically with customers and suppliers, because the structure and meaning of data has not been standardised. To obtain optimal value from an implemented data model, it is very important to define standards that will ensure that data models will both meet business needs and be consistent.

Conceptual, logical and physical schemas

The ANSI/SPARC three level architecture. This shows that a data model can be an external model (or view), a conceptual model, or a physical model. This is not the only way to look at data models, but it is a useful way, particularly when comparing models.
 
In 1975 ANSI described three kinds of data-model instance:
  • Conceptual schema: describes the semantics of a domain (the scope of the model). For example, it may be a model of the interest area of an organization or of an industry. This consists of entity classes, representing kinds of things of significance in the domain, and relationship assertions about associations between pairs of entity classes. A conceptual schema specifies the kinds of facts or propositions that can be expressed using the model. In that sense, it defines the allowed expressions in an artificial "language" with a scope that is limited by the scope of the model. Simply described, a conceptual schema is the first step in organizing the data requirements.
  • Logical schema: describes the structure of some domain of information. This consists of descriptions of (for example) tables, columns, object-oriented classes, and XML tags. The logical schema and conceptual schema are sometimes implemented as one and the same.
  • Physical schema: describes the physical means used to store data. This is concerned with partitions, CPUs, tablespaces, and the like.
According to ANSI, this approach allows the three perspectives to be relatively independent of each other. Storage technology can change without affecting either the logical or the conceptual schema. The table/column structure can change without (necessarily) affecting the conceptual schema. In each case, of course, the structures must remain consistent across all schemas of the same data model.
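
To make the three levels concrete, here is a small Python sketch showing how the same hypothetical Customer/Order area of interest might be described at each level; the entity names, DDL and storage hints are illustrative assumptions, not taken from any particular system.

    import sqlite3

    # Conceptual schema: entity classes and relationship assertions, no technology detail.
    conceptual = {
        "entities": ["Customer", "Order"],
        "relationships": [("Customer", "places", "Order")],
    }

    # Logical schema: table/column structures that could be implemented in a database.
    logical_ddl = """
    CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE customer_order (order_id INTEGER PRIMARY KEY,
                                 customer_id INTEGER REFERENCES customer(customer_id));
    """

    # Physical schema: storage-level choices such as indexes, partitions or tablespaces.
    conn = sqlite3.connect(":memory:")
    conn.executescript(logical_ddl)                                   # tables from the logical schema
    conn.execute("CREATE INDEX idx_customer_name ON customer(name)")  # a physical-level detail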

Data modeling process

Data modeling in the context of Business Process Integration.
 
In the context of business process integration (see figure), data modeling complements business process modeling, and ultimately results in database generation.

The process of designing a database involves producing the previously described three types of schemas - conceptual, logical, and physical. The database design documented in these schemas is converted through a Data Definition Language, which can then be used to generate a database. A fully attributed data model contains detailed attributes (descriptions) for every entity within it. The term "database design" can describe many different parts of the design of an overall database system. Principally, and most correctly, it can be thought of as the logical design of the base data structures used to store the data. In the relational model these are the tables and views. In an object database the entities and relationships map directly to object classes and named relationships. However, the term "database design" could also be used to apply to the overall process of designing, not just the base data structures, but also the forms and queries used as part of the overall database application within the Database Management System or DBMS.

In the process, system interfaces account for 25% to 70% of the development and support costs of current systems. The primary reason for this cost is that these systems do not share a common data model. If data models are developed on a system-by-system basis, then not only is the same analysis repeated in overlapping areas, but further analysis must be performed to create the interfaces between them. Most systems within an organization contain the same basic data, redeveloped for a specific purpose. Therefore, an efficiently designed basic data model can minimize rework, with minimal modifications for the purposes of different systems within the organization.

Modeling methodologies

Data models represent information areas of interest. While there are many ways to create data models, according to Len Silverston (1997) only two modeling methodologies stand out, top-down and bottom-up:
  • Bottom-up models or View Integration models are often the result of a reengineering effort. They usually start with existing data structures: forms, fields on application screens, or reports. These models are usually physical, application-specific, and incomplete from an enterprise perspective. They may not promote data sharing, especially if they are built without reference to other parts of the organization.
  • Top-down logical data models, on the other hand, are created in an abstract way by getting information from people who know the subject area. A system may not implement all the entities in a logical model, but the model serves as a reference point or template.
Sometimes models are created in a mixture of the two methods: by considering the data needs and structure of an application and by consistently referencing a subject-area model. Unfortunately, in many environments the distinction between a logical data model and a physical data model is blurred. In addition, some CASE tools don't make a distinction between logical and physical data models.

Entity relationship diagrams

Example of an IDEF1X entity relationship diagram used to model IDEF1X itself. The name of the view is mm. The domain hierarchy and constraints are also given. The constraints are expressed as sentences in the formal theory of the meta model.
 
There are several notations for data modeling. The actual model is frequently called "Entity relationship model", because it depicts data in terms of the entities and relationships described in the data. An entity-relationship model (ERM) is an abstract conceptual representation of structured data. Entity-relationship modeling is a relational schema database modeling method, used in software engineering to produce a type of conceptual data model (or semantic data model) of a system, often a relational database, and its requirements in a top-down fashion.

These models are used in the first stage of information system design, during the requirements analysis, to describe information needs or the type of information that is to be stored in a database. The data modeling technique can be used to describe any ontology (i.e. an overview and classification of the terms used and their relationships) for a certain universe of discourse, i.e. area of interest.
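
As a small illustration of entity-relationship modeling, the Python sketch below describes a hypothetical Customer–Order universe of discourse; the entity and attribute names are assumptions chosen only for the example.

    from dataclasses import dataclass, field

    @dataclass
    class Entity:
        name: str
        attributes: list = field(default_factory=list)

    @dataclass
    class Relationship:
        name: str
        source: Entity
        target: Entity
        cardinality: str  # e.g. "1:N" - one customer places many orders

    # Hypothetical universe of discourse: customers placing orders.
    customer = Entity("Customer", ["customer_id", "name", "email"])
    order = Entity("Order", ["order_id", "order_date", "total"])
    places = Relationship("places", customer, order, cardinality="1:N")

    print(f"{places.source.name} {places.name} {places.target.name} ({places.cardinality})")
    # Customer places Order (1:N)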

Several techniques have been developed for the design of data models. While these methodologies guide data modelers in their work, two different people using the same methodology will often come up with very different results. Two notable approaches, generic data modeling and semantic data modeling, are described below.

Generic data modeling

Example of a Generic data model.
 
Generic data models are generalizations of conventional data models. They define standardized general relation types, together with the kinds of things that may be related by such a relation type. The definition of a generic data model is similar to the definition of a natural language. For example, a generic data model may define relation types such as a 'classification relation', being a binary relation between an individual thing and a kind of thing (a class), and a 'part-whole relation', being a binary relation between two things, one with the role of part, the other with the role of whole, regardless of the kind of things that are related.

Given an extensible list of classes, this allows the classification of any individual thing and the specification of part-whole relations for any individual object. By standardizing an extensible list of relation types, a generic data model enables the expression of an unlimited number of kinds of facts and will approach the capabilities of natural languages. Conventional data models, on the other hand, have a fixed and limited domain scope, because the instantiation (usage) of such a model only allows expressions of kinds of facts that are predefined in the model.
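
A minimal sketch in Python of the two relation types just described, a classification relation and a part-whole relation, applied over an extensible list of classes; the equipment names and classes are hypothetical.

    # Extensible list of classes (kinds of things).
    classes = {"Pump", "Impeller", "Rotating Equipment"}

    # Standardised, general relation types expressed as (relation, left, right) facts.
    facts = [
        ("classification", "P-101", "Pump"),       # an individual thing is a member of a class
        ("classification", "IMP-7", "Impeller"),
        ("part-whole", "IMP-7", "P-101"),          # one individual is part of another
    ]

    def parts_of(whole, facts):
        """Return the individuals recorded as parts of the given whole."""
        return [left for rel, left, right in facts if rel == "part-whole" and right == whole]

    print(parts_of("P-101", facts))  # ['IMP-7']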

Semantic data modeling

The logical data structure of a DBMS, whether hierarchical, network, or relational, cannot totally satisfy the requirements for a conceptual definition of data, because it is limited in scope and biased toward the implementation strategy employed by the DBMS. That is, unless the semantic data model is deliberately implemented in the database – a choice which may slightly impact performance but generally vastly improves productivity.

Semantic data models.
 
Therefore, the need to define data from a conceptual view has led to the development of semantic data modeling techniques; that is, techniques to define the meaning of data within the context of its interrelationships with other data. As illustrated in the figure, the real world, in terms of resources, ideas, events, etc., is symbolically defined within physical data stores. A semantic data model is an abstraction which defines how the stored symbols relate to the real world. Thus, the model must be a true representation of the real world.

A semantic data model can be used to serve many purposes, such as:
  • planning of data resources
  • building of shareable databases
  • evaluation of vendor software
  • integration of existing databases
The overall goal of semantic data models is to capture more of the meaning of data by integrating relational concepts with more powerful abstraction concepts known from the artificial intelligence field. The idea is to provide high-level modeling primitives as an integral part of a data model in order to facilitate the representation of real-world situations.

Wednesday, October 9, 2019

Interpreter (computing)

From Wikipedia, the free encyclopedia

In computer science, an interpreter is a computer program that directly executes instructions written in a programming or scripting language, without requiring them previously to have been compiled into a machine language program. An interpreter generally uses one of the following strategies for program execution:
  1. Parse the source code and perform its behavior directly;
  2. Translate source code into some efficient intermediate representation and immediately execute this;
  3. Explicitly execute stored precompiled code made by a compiler which is part of the interpreter system.
Early versions of the Lisp programming language and Dartmouth BASIC would be examples of the first type. Perl, Python, MATLAB, and Ruby are examples of the second, while UCSD Pascal is an example of the third type: source programs are compiled ahead of time and stored as machine-independent code, which is then linked at run-time and executed by an interpreter and/or compiler (for JIT systems). Some systems, such as Smalltalk and contemporary versions of BASIC and Java, may also combine types two and three. Interpreters of various types have also been constructed for many languages traditionally associated with compilation, such as Algol, Fortran, Cobol and C/C++.

While interpretation and compilation are the two main means by which programming languages are implemented, they are not mutually exclusive, as most interpreting systems also perform some translation work, just like compilers. The terms "interpreted language" or "compiled language" signify that the canonical implementation of that language is an interpreter or a compiler, respectively. A high level language is ideally an abstraction independent of particular implementations.

History

Interpreters were used as early as 1952 to ease programming within the limitations of computers at the time (e.g. a shortage of program storage space, or no native support for floating point numbers). Interpreters were also used to translate between low-level machine languages, allowing code to be written for machines that were still under construction and tested on computers that already existed. The first interpreted high-level language was Lisp. Lisp was first implemented in 1958 by Steve Russell on an IBM 704 computer. Russell had read John McCarthy's paper, and realized (to McCarthy's surprise) that the Lisp eval function could be implemented in machine code. The result was a working Lisp interpreter which could be used to run Lisp programs, or more properly, "evaluate Lisp expressions".

Compilers versus interpreters

An illustration of the linking process. Object files and static libraries are assembled into a new library or executable
 
Programs written in a high level language are either directly executed by some kind of interpreter or converted into machine code by a compiler (and assembler and linker) for the CPU to execute.

While compilers (and assemblers) generally produce machine code directly executable by computer hardware, they can often (optionally) produce an intermediate form called object code. This is basically the same machine specific code but augmented with a symbol table with names and tags to make executable blocks (or modules) identifiable and relocatable. Compiled programs will typically use building blocks (functions) kept in a library of such object code modules. A linker is used to combine (pre-made) library files with the object file(s) of the application to form a single executable file. The object files that are used to generate an executable file are thus often produced at different times, and sometimes even by different languages (capable of generating the same object format).

A simple interpreter written in a low level language (e.g. assembly) may have similar machine code blocks implementing functions of the high level language stored, and executed when a function's entry in a look up table points to that code. However, an interpreter written in a high level language typically uses another approach, such as generating and then walking a parse tree, or by generating and executing intermediate software-defined instructions, or both.

Thus, both compilers and interpreters generally turn source code (text files) into tokens, both may (or may not) generate a parse tree, and both may generate immediate instructions (for a stack machine, quadruple code, or by other means). The basic difference is that a compiler system, including a (built in or separate) linker, generates a stand-alone machine code program, while an interpreter system instead performs the actions described by the high level program.

A compiler can thus make almost all the conversions from source code semantics to the machine level once and for all (i.e. until the program has to be changed) while an interpreter has to do some of this conversion work every time a statement or function is executed. However, in an efficient interpreter, much of the translation work (including analysis of types, and similar) is factored out and done only the first time a program, module, function, or even statement, is run, thus quite akin to how a compiler works. However, a compiled program still runs much faster, under most circumstances, in part because compilers are designed to optimize code, and may be given ample time for this. This is especially true for simpler high level languages without (many) dynamic data structures, checks, or type-checks.

In traditional compilation, the executable output of the linkers (.exe files or .dll files or a library, see picture) is typically relocatable when run under a general operating system, much like the object code modules are but with the difference that this relocation is done dynamically at run time, i.e. when the program is loaded for execution. On the other hand, compiled and linked programs for small embedded systems are typically statically allocated, often hard coded in a NOR flash memory, as there is often no secondary storage and no operating system in this sense. 

Historically, most interpreter-systems have had a self-contained editor built in. This is becoming more common also for compilers (then often called an IDE), although some programmers prefer to use an editor of their choice and run the compiler, linker and other tools manually. Historically, compilers predate interpreters because hardware at that time could not support both the interpreter and interpreted code and the typical batch environment of the time limited the advantages of interpretation.

Development cycle

During the software development cycle, programmers make frequent changes to source code. When using a compiler, each time a change is made to the source code, they must wait for the compiler to translate the altered source files and link all of the binary code files together before the program can be executed. The larger the program, the longer the wait. By contrast, a programmer using an interpreter does a lot less waiting, as the interpreter usually just needs to translate the code being worked on to an intermediate representation (or not translate it at all), thus requiring much less time before the changes can be tested. Effects are evident upon saving the source code and reloading the program. Compiled code is generally less readily debugged, as editing, compiling, and linking are sequential processes that have to be conducted in the proper sequence with a proper set of commands. For this reason, many compilers also have an executive aid, known as a Make file and program. The Make file lists compiler and linker command lines and program source code files, but might take a simple command line menu input (e.g. "Make 3") which selects the third group (set) of instructions and then issues the commands to the compiler and linker, feeding in the specified source code files.

Distribution

A compiler converts source code into binary instruction for a specific processor's architecture, thus making it less portable. This conversion is made just once, on the developer's environment, and after that the same binary can be distributed to the user's machines where it can be executed without further translation. A cross compiler can generate binary code for the user machine even if it has a different processor than the machine where the code is compiled. 

An interpreted program can be distributed as source code. It needs to be translated on each final machine, which takes more time but makes the program distribution independent of the machine's architecture. However, the portability of interpreted source code is dependent on the target machine actually having a suitable interpreter. If the interpreter needs to be supplied along with the source, the overall installation process is more complex than delivery of a monolithic executable, since the interpreter itself is part of what needs to be installed.

The fact that interpreted code can easily be read and copied by humans can be of concern from the point of view of copyright. However, various systems of encryption and obfuscation exist. Delivery of intermediate code, such as bytecode, has a similar effect to obfuscation, but bytecode could be decoded with a decompiler or disassembler.

Efficiency

The main disadvantage of interpreters is that an interpreted program typically runs slower than if it had been compiled. The difference in speeds could be tiny or great; often an order of magnitude and sometimes more. It generally takes longer to run a program under an interpreter than to run the compiled code but it can take less time to interpret it than the total time required to compile and run it. This is especially important when prototyping and testing code when an edit-interpret-debug cycle can often be much shorter than an edit-compile-run-debug cycle.

Interpreting code is slower than running the compiled code because the interpreter must analyze each statement in the program each time it is executed and then perform the desired action, whereas the compiled code just performs the action within a fixed context determined by the compilation. This run-time analysis is known as "interpretive overhead". Access to variables is also slower in an interpreter because the mapping of identifiers to storage locations must be done repeatedly at run-time rather than at compile time.

There are various compromises between the development speed when using an interpreter and the execution speed when using a compiler. Some systems (such as some Lisps) allow interpreted and compiled code to call each other and to share variables. This means that once a routine has been tested and debugged under the interpreter it can be compiled and thus benefit from faster execution while other routines are being developed. Many interpreters do not execute the source code as it stands but convert it into some more compact internal form. Many BASIC interpreters replace keywords with single-byte tokens which can be used to find the instruction in a jump table. A few interpreters, such as the PBASIC interpreter, achieve even higher levels of program compaction by using a bit-oriented rather than a byte-oriented program memory structure, where command tokens occupy perhaps 5 bits, nominally "16-bit" constants are stored in a variable-length code requiring 3, 6, 10, or 18 bits, and address operands include a "bit offset". Many BASIC interpreters can store and read back their own tokenized internal representation.

Toy expression interpreter

An interpreter might well use the same lexical analyzer and parser as the compiler and then interpret the resulting abstract syntax tree. Example data type definitions for the latter, and a toy interpreter for syntax trees obtained from C expressions are shown in the box.
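
The box referred to above is not reproduced here, so the following is a minimal Python sketch of the same idea: data type definitions for a tiny syntax tree of arithmetic expressions (as a parser might produce for a C expression such as 1 + 2 * 3), and an evaluator that walks the tree. The node names and the restriction to four operators are simplifying assumptions.

    from dataclasses import dataclass
    from typing import Union

    # Data type definitions for a tiny expression syntax tree.
    @dataclass
    class Num:
        value: float

    @dataclass
    class BinOp:
        op: str            # one of "+", "-", "*", "/"
        left: "Expr"
        right: "Expr"

    Expr = Union[Num, BinOp]

    def evaluate(node: Expr) -> float:
        """Walk the syntax tree and perform its behaviour directly."""
        if isinstance(node, Num):
            return node.value
        left, right = evaluate(node.left), evaluate(node.right)
        if node.op == "+": return left + right
        if node.op == "-": return left - right
        if node.op == "*": return left * right
        if node.op == "/": return left / right
        raise ValueError(f"unknown operator {node.op!r}")

    # The tree a parser would build for 1 + 2 * 3
    tree = BinOp("+", Num(1), BinOp("*", Num(2), Num(3)))
    print(evaluate(tree))  # 7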

Regression

Interpretation cannot be used as the sole method of execution: even though an interpreter can itself be interpreted and so on, a directly executed program is needed somewhere at the bottom of the stack because the code being interpreted is not, by definition, the same as the machine code that the CPU can execute.

Variations

Bytecode interpreters

There is a spectrum of possibilities between interpreting and compiling, depending on the amount of analysis performed before the program is executed. For example, Emacs Lisp is compiled to bytecode, which is a highly compressed and optimized representation of the Lisp source, but is not machine code (and therefore not tied to any particular hardware). This "compiled" code is then interpreted by a bytecode interpreter (itself written in C). The compiled code in this case is machine code for a virtual machine, which is implemented not in hardware, but in the bytecode interpreter. Such compiling interpreters are sometimes also called compreters. In a bytecode interpreter each instruction starts with a byte, and therefore bytecode interpreters have up to 256 instructions, although not all may be used. Some bytecodes may take multiple bytes, and may be arbitrarily complicated. 
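
A minimal sketch of a bytecode dispatch loop, in Python, with a made-up three-instruction stack machine; real bytecode interpreters such as the Emacs Lisp one mentioned above have far richer instruction sets.

    # A made-up instruction set: each instruction starts with a one-byte opcode.
    PUSH, ADD, PRINT = 0x01, 0x02, 0x03

    # Bytecode for: push 2, push 3, add, print (PUSH is followed by a one-byte operand).
    bytecode = bytes([PUSH, 2, PUSH, 3, ADD, PRINT])

    def run(code):
        stack, pc = [], 0
        while pc < len(code):
            op = code[pc]
            pc += 1
            if op == PUSH:
                stack.append(code[pc])   # the operand byte follows the opcode
                pc += 1
            elif op == ADD:
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == PRINT:
                print(stack.pop())
            else:
                raise ValueError(f"unknown opcode {op:#x}")

    run(bytecode)  # prints 5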

Control tables - that do not necessarily ever need to pass through a compiling phase - dictate appropriate algorithmic control flow via customized interpreters in similar fashion to bytecode interpreters.

Threaded code interpreters

Threaded code interpreters are similar to bytecode interpreters but instead of bytes they use pointers. Each "instruction" is a word that points to a function or an instruction sequence, possibly followed by a parameter. The threaded code interpreter either loops fetching instructions and calling the functions they point to, or fetches the first instruction and jumps to it, and every instruction sequence ends with a fetch and jump to the next instruction. Unlike bytecode there is no effective limit on the number of different instructions other than available memory and address space. The classic example of threaded code is the Forth code used in Open Firmware systems: the source language is compiled into "F code" (a bytecode), which is then interpreted by a virtual machine.
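
A rough Python analogue of the idea (a sketch only, since Python has no raw pointers): each "instruction" in the program is a reference to a function, optionally followed by a parameter, and the interpreter loops fetching instructions and calling the functions they point to.

    # Primitive operations; each receives the data stack.
    def lit(stack, value):  stack.append(value)                      # push a literal (takes a parameter)
    def add(stack):         stack.append(stack.pop() + stack.pop())  # add the top two items
    def dup(stack):         stack.append(stack[-1])                  # duplicate the top item
    def emit(stack):        print(stack.pop())                       # pop and print

    # The "threaded" program: a sequence of function references, some followed by a parameter.
    program = [(lit, 2), (lit, 3), (add,), (dup,), (emit,), (emit,)]

    def run(program):
        stack = []
        for word, *params in program:   # fetch the next instruction and call the function it points to
            word(stack, *params)

    run(program)  # prints 5 twice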

Abstract syntax tree interpreters

In the spectrum between interpreting and compiling, another approach is to transform the source code into an optimized abstract syntax tree (AST), then execute the program following this tree structure, or use it to generate native code just-in-time. In this approach, each sentence needs to be parsed just once. As an advantage over bytecode, the AST keeps the global program structure and relations between statements (which is lost in a bytecode representation), and when compressed provides a more compact representation. Thus, using AST has been proposed as a better intermediate format for just-in-time compilers than bytecode. Also, it allows the system to perform better analysis during runtime. 

However, for interpreters an AST causes more overhead than a bytecode interpreter, because nodes related to syntax perform no useful work, the representation is less sequential (requiring the traversal of more pointers), and visiting the tree carries its own overhead.

Just-in-time compilation

Further blurring the distinction between interpreters, bytecode interpreters and compilation is just-in-time compilation (JIT), a technique in which the intermediate representation is compiled to native machine code at runtime. This confers the efficiency of running native code, at the cost of startup time and increased memory use when the bytecode or AST is first compiled. Adaptive optimization is a complementary technique in which the interpreter profiles the running program and compiles its most frequently executed parts into native code. Both techniques are a few decades old, appearing in languages such as Smalltalk in the 1980s.

Just-in-time compilation has gained mainstream attention amongst language implementers in recent years, with Java, the .NET Framework, most modern JavaScript implementations, and Matlab now including JITs.

Self-interpreter

A self-interpreter is a programming language interpreter written in a programming language which can interpret itself; an example is a BASIC interpreter written in BASIC. Self-interpreters are related to self-hosting compilers.

If no compiler exists for the language to be interpreted, creating a self-interpreter requires the implementation of the language in a host language (which may be another programming language or assembler). By having a first interpreter such as this, the system is bootstrapped and new versions of the interpreter can be developed in the language itself. It was in this way that Donald Knuth developed the TANGLE interpreter for the language WEB of the industrial standard TeX typesetting system.

Defining a computer language is usually done in relation to an abstract machine (so-called operational semantics) or as a mathematical function (denotational semantics). A language may also be defined by an interpreter in which the semantics of the host language is given. The definition of a language by a self-interpreter is not well-founded (it cannot define a language), but a self-interpreter tells a reader about the expressiveness and elegance of a language. It also enables the interpreter to interpret its source code, the first step towards reflective interpreting.

An important design dimension in the implementation of a self-interpreter is whether a feature of the interpreted language is implemented with the same feature in the interpreter's host language. An example is whether a closure in a Lisp-like language is implemented using closures in the interpreter language or implemented "manually" with a data structure explicitly storing the environment. The more features implemented by the same feature in the host language, the less control the programmer of the interpreter has; a different behavior for dealing with number overflows cannot be realized if the arithmetic operations are delegated to corresponding operations in the host language.
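
A sketch of this delegation point, assuming Python's standard ast module: an interpreter for a tiny arithmetic subset of Python, written in Python, that hands the arithmetic itself straight to the host language, so host behaviour (such as arbitrary-precision integers) is inherited rather than controlled by the interpreter's author.

    import ast
    import operator

    # Host-language operations that the little interpreter delegates to.
    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def interpret(source: str):
        """Interpret an arithmetic-only subset of Python, in Python."""
        def walk(node):
            if isinstance(node, ast.Expression):
                return walk(node.body)
            if isinstance(node, ast.Constant):      # numeric literals
                return node.value
            if isinstance(node, ast.BinOp):
                # Delegation: overflow and precision behaviour come from host Python.
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            raise SyntaxError(f"unsupported construct: {type(node).__name__}")
        return walk(ast.parse(source, mode="eval"))

    print(interpret("(1 + 2) * 3"))  # 9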

Some languages have an elegant self-interpreter, such as Lisp or Prolog. Much research on self-interpreters (particularly reflective interpreters) has been conducted in the Scheme programming language, a dialect of Lisp. In general, however, any Turing-complete language allows writing of its own interpreter. Lisp is such a language, because Lisp programs are lists of symbols and other lists. XSLT is such a language, because XSLT programs are written in XML. A sub-domain of meta-programming is the writing of domain-specific languages (DSLs).

Clive Gifford introduced a measure of the quality of a self-interpreter (the eigenratio): the limit of the ratio between the computer time spent running a stack of N self-interpreters and the time spent running a stack of N − 1 self-interpreters as N goes to infinity. This value does not depend on the program being run.

The book Structure and Interpretation of Computer Programs presents examples of meta-circular interpretation for Scheme and its dialects. Other examples of languages with a self-interpreter are Forth and Pascal.

Microcode

Microcode is a very commonly used technique "that imposes an interpreter between the hardware and the architectural level of a computer". As such, the microcode is a layer of hardware-level instructions that implement higher-level machine code instructions or internal state machine sequencing in many digital processing elements. Microcode is used in general-purpose central processing units, as well as in more specialized processors such as microcontrollers, digital signal processors, channel controllers, disk controllers, network interface controllers, network processors, graphics processing units, and in other hardware.

Microcode typically resides in special high-speed memory and translates machine instructions, state machine data or other input into sequences of detailed circuit-level operations. It separates the machine instructions from the underlying electronics so that instructions can be designed and altered more freely. It also facilitates the building of complex multi-step instructions, while reducing the complexity of computer circuits. Writing microcode is often called microprogramming and the microcode in a particular processor implementation is sometimes called a microprogram.

More extensive microcoding allows small and simple microarchitectures to emulate more powerful architectures with wider word length, more execution units and so on, which is a relatively simple way to achieve software compatibility between different products in a processor family.

Applications

  • Interpreters are frequently used to execute command languages and glue languages, since each operator executed in a command language is usually an invocation of a complex routine such as an editor or compiler.
  • Self-modifying code can easily be implemented in an interpreted language. This relates to the origins of interpretation in Lisp and artificial intelligence research.
  • Virtualization. Machine code intended for a hardware architecture can be run using a virtual machine. This is often used when the intended architecture is unavailable, or among other uses, for running multiple copies.
  • Sandboxing: While some types of sandboxes rely on operating system protections, an interpreter or virtual machine is often used. The actual hardware architecture and the originally intended hardware architecture may or may not be the same. This may seem pointless, except that sandboxes are not compelled to actually execute all the instructions in the source code they are processing. In particular, a sandbox can refuse to execute code that violates any security constraints it is operating under.
  • Emulators for running computer software written for obsolete and unavailable hardware on more modern equipment.

Inequality (mathematics)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Inequality...