
Saturday, July 29, 2023

Object lifetime

From Wikipedia, the free encyclopedia

In object-oriented programming (OOP), the object lifetime (or life cycle) of an object is the time between an object's creation and its destruction. Rules for object lifetime vary significantly between languages, in some cases between implementations of a given language, and the lifetime of a particular object may vary from one run of the program to another.

In some cases, object lifetime coincides with variable lifetime of a variable with that object as value (both for static variables and automatic variables), but in general, object lifetime is not tied to the lifetime of any one variable. In many cases – and by default in many object-oriented languages, particularly those that use garbage collection (GC) – objects are allocated on the heap, and object lifetime is not determined by the lifetime of a given variable: the value of a variable holding an object actually corresponds to a reference to the object, not the object itself, and destruction of the variable just destroys the reference, not the underlying object.
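
For instance, in C++ (one of the languages used in the examples below) a pointer variable that goes out of scope does not destroy the heap object it refers to; only an explicit delete ends the object's lifetime. A minimal sketch, with illustrative names such as Widget and make:

#include <iostream>

struct Widget {
  ~Widget() { std::cout << "Widget destroyed\n"; }
};

Widget* make() {
  Widget* local = new Widget;  // object allocated on the heap
  return local;                // 'local' ends here; the object lives on
}

int main() {
  Widget* w = make();  // the object has outlived the variable 'local'
  delete w;            // the object is destroyed only here, explicitly
}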

Overview

While the basic idea of object lifetime is simple – an object is created, used, then destroyed – details vary substantially between languages, and within implementations of a given language, and are intimately tied to how memory management is implemented. Further, many fine distinctions are drawn between the steps, and between language-level concepts and implementation-level concepts. Terminology is relatively standard, but which steps correspond to a given term varies significantly between languages.

Terms generally come in antonym pairs, one for a creation concept, one for the corresponding destruction concept, like initialize/finalize or constructor/destructor. The creation/destruction pair is also known as initiation/termination, among other terms. The terms allocation and deallocation or freeing are also used, by analogy with memory management, though object creation and destruction can involve significantly more than simply memory allocation and deallocation, and allocation/deallocation are more properly considered steps in creation and destruction, respectively.

Determinism

A major distinction is whether an object's lifetime is deterministic or non-deterministic. This varies by language, and within a language varies with the memory allocation of an object; object lifetime may be distinct from variable lifetime.

Objects with static memory allocation, notably objects stored in static variables, and classes or modules (if classes or modules are themselves objects, and statically allocated), have a subtle non-determinism in many languages: while their lifetime appears to coincide with the run time of the program, the order of creation and destruction – which static object is created first, which second, etc. – is generally nondeterministic.

For objects with automatic memory allocation or dynamic memory allocation, object creation generally happens deterministically, either explicitly when an object is explicitly created (such as via new in C++ or Java), or implicitly at the start of variable lifetime, particularly when the scope of an automatic variable is entered, such as at declaration. Object destruction varies, however – in some languages, notably C++, automatic and dynamic objects are destroyed at deterministic times, such as scope exit, explicit destruction (via manual memory management), or reference count reaching zero; while in other languages, such as C#, Java, and Python, these objects are destroyed at non-deterministic times, depending on the garbage collector, and object resurrection may occur during destruction, extending the lifetime.
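
A minimal C++ sketch of deterministic destruction (the Tracer class is illustrative): the automatic object is destroyed at scope exit, the reference-counted object when its count reaches zero, and the dynamic object at an explicit delete.

#include <iostream>
#include <memory>

struct Tracer {
  const char* name;
  explicit Tracer(const char* n) : name(n) { std::cout << name << " created\n"; }
  ~Tracer() { std::cout << name << " destroyed\n"; }
};

int main() {
  {
    Tracer automatic("automatic");                       // destroyed at scope exit
    auto counted = std::make_shared<Tracer>("counted");  // destroyed when the count reaches zero
  }  // both destroyed here, deterministically, in reverse order of creation

  Tracer* dynamic = new Tracer("dynamic");  // explicit creation
  delete dynamic;                           // explicit, deterministic destruction
}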

In garbage-collected languages, objects are generally dynamically allocated (on the heap) even if they are initially bound to an automatic variable, unlike automatic variables with primitive values, which are typically automatically allocated (on the stack or in a register). This allows the object to be returned from a function ("escape") without being destroyed. However, in some cases a compiler optimization is possible, namely performing escape analysis and proving that escape is not possible, and thus the object can be allocated on the stack; this is significant in Java. In this case object destruction will occur promptly – possibly even during the variable's lifetime (before the end of its scope), if it is unreachable.

A complex case is the use of an object pool, where objects may be created ahead of time or reused, and thus apparent creation and destruction may not correspond to actual creation and destruction of an object, only (re)initialization for creation and finalization for destruction. In this case both creation and destruction may be nondeterministic.
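
A sketch of such a pool in C++ (ConnectionPool and Connection are illustrative names): acquiring an object only re-initializes one of the pre-created instances, and releasing it only finalizes it, so the objects' actual creation and destruction coincide with the pool's lifetime rather than with their apparent use.

#include <array>

struct Connection {
  bool in_use = false;
  void reset()    { in_use = true;  /* re-initialize per-use state */ }
  void finalize() { in_use = false; /* release per-use state */ }
};

class ConnectionPool {
  std::array<Connection, 8> slots{};  // objects created ahead of time
 public:
  Connection* acquire() {
    for (auto& c : slots)
      if (!c.in_use) { c.reset(); return &c; }  // apparent "creation" is re-initialization
    return nullptr;                             // pool exhausted
  }
  void release(Connection* c) { c->finalize(); }  // apparent "destruction" is finalization
};

int main() {
  ConnectionPool pool;
  Connection* c = pool.acquire();
  pool.release(c);  // the Connection object itself is destroyed only with the pool
}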

Steps

Object creation can be broken down into two operations: memory allocation and initialization, where initialization both includes assigning values to object fields and possibly running arbitrary other code. These are implementation-level concepts, roughly analogous to the distinction between declaration and definition of a variable, though the latter are language-level distinctions. For an object that is tied to a variable, declaration may be compiled to memory allocation (reserving space for the object), and definition to initialization (assigning values), but declarations may also be for compiler use only (such as name resolution), not directly corresponding to compiled code.

Analogously, object destruction can be broken down into two operations, in the opposite order: finalization and memory deallocation. These do not have analogous language-level concepts for variables: variable lifetime ends implicitly (for automatic variables, on stack unwind; for static variables, on program termination), and at this time (or later, depending on implementation) memory is deallocated, but no finalization is done in general. However, when an object's lifetime is tied to a variable's lifetime, the end of the variable's lifetime causes finalization of the object; this is a standard paradigm in C++.

Together these yield four implementation-level steps:

allocation, initialization, finalization, deallocation

These steps may be done automatically by the language runtime, interpreter, or virtual machine, or may be manually specified by the programmer in a subroutine, concretely via methods – the frequency of this varies significantly between steps and languages. Initialization is very commonly programmer-specified in class-based languages, while in strict prototype-based languages initialization is automatically done by copying. Finalization is also very common in languages with deterministic destruction, notably C++, but much less common in garbage-collected languages. Allocation is more rarely specified, and deallocation generally cannot be specified.
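
C++ is one language that exposes all four steps separately: operator new performs bare allocation, placement new runs the constructor (initialization), an explicit destructor call performs finalization, and operator delete deallocates. A minimal sketch (the Logger class is illustrative):

#include <cstdio>
#include <new>  // placement new

struct Logger {
  Logger()  { std::puts("initialized"); }
  ~Logger() { std::puts("finalized"); }
};

int main() {
  void* raw = ::operator new(sizeof(Logger));  // 1. allocation: raw memory, no object yet
  Logger* log = new (raw) Logger;              // 2. initialization: the constructor runs
  log->~Logger();                              // 3. finalization: the destructor runs, memory kept
  ::operator delete(raw);                      // 4. deallocation: the raw memory is returned
}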

Status during creation and destruction

An important subtlety is the status of an object during creation or destruction, and handling cases where errors occur or exceptions are raised, such as if creation or destruction fail. Strictly speaking, an object's lifetime begins when allocation completes and ends when deallocation starts. Thus during initialization and finalization an object is alive, but may not be in a consistent state – ensuring class invariants is a key part of initialization – and the period from when initialization completes to when finalization starts is when the object is both alive and expected to be in a consistent state.
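
For illustration, a minimal C++ sketch (the Buffer class is illustrative): the invariant is established during initialization, and code running between construction and destruction may rely on it.

#include <cstddef>
#include <stdexcept>
#include <vector>

class Buffer {
  std::vector<char> data_;
 public:
  explicit Buffer(std::size_t n) : data_(n) {
    if (n == 0)
      throw std::invalid_argument("Buffer size must be positive");
    // From here until finalization begins, the object is alive and
    // the invariant "data_ is never empty" holds.
  }
  char& front() { return data_.front(); }  // may safely assume the invariant
};

int main() {
  Buffer buf(16);     // construction succeeds, invariant established
  buf.front() = 'a';
}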

If creation or destruction fail, error reporting (often by raising an exception) can be complicated: the object or related objects may be in an inconsistent state, and in the case of destruction – which generally happens implicitly, and thus in an unspecified environment – it may be difficult to handle errors. The opposite issue – incoming exceptions, not outgoing exceptions – is whether creation or destruction should behave differently if they occur during exception handling, when different behavior may be desired.

Another subtlety is when creation and destruction happen for static variables, whose lifespan coincides with the run time of the program – do creation and destruction happen during regular program execution, or in special phases before and after regular execution – and how objects are destroyed at program termination, when the program may not be in a usual or consistent state. This is particularly an issue for garbage-collected languages, as they may have a lot of garbage at program termination.

Class-based programming

In class-based programming, object creation is also known as instantiation (creating an instance of a class), and creation and destruction can be controlled via methods known as a constructor and destructor, or an initializer and finalizer. Creation and destruction are thus also known as construction and destruction, and when these methods are called an object is said to be constructed or destructed (not "destroyed"), or respectively initialized or finalized.

The relationship between these methods can be complicated, and a language may have both constructors and initializers (like Python), or both destructors and finalizers (like C++/CLI), or the terms "destructor" and "finalizer" may refer to language-level construct versus implementation (as in C# versus CLI).

A key distinction is that constructors are class methods, as there is no object (class instance) available until the object is created, but the other methods (destructors, initializers, and finalizers) are instance methods, as an object has been created. Further, constructors and initializers may take arguments, while destructors and finalizers generally do not, as they are usually called implicitly.

In common usage, a constructor is a method directly called explicitly by user code to create an object, while "destructor" is the subroutine called (usually implicitly, but sometimes explicitly) on object destruction in languages with deterministic object lifetimes – the archetype is C++ – and "finalizer" is the subroutine called implicitly by the garbage collector on object destruction in languages with non-deterministic object lifetime – the archetype is Java.

The steps during finalization vary significantly depending on memory management: in manual memory management (as in C++, or manual reference counting), references need to be explicitly destroyed by the programmer (references cleared, reference counts decremented); in automatic reference counting, this also happens during finalization, but is automated (as in Python, when it occurs after programmer-specified finalizers have been called); and in tracing garbage collection this is not necessary. Thus in automatic reference counting, programmer-specified finalizers are often short or absent, but significant work may still be done, while in tracing garbage collectors finalization is often unnecessary.
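
A C++ sketch of the contrast (Node, ManualOwner and CountedOwner are illustrative; std::shared_ptr stands in for automatic reference counting): under manual management the finalizer must explicitly destroy owned references, while with reference-counted members the decrement happens automatically and the programmer-specified finalizer can be empty.

#include <memory>

struct Node {};

struct ManualOwner {
  Node* child = new Node;
  ~ManualOwner() { delete child; }  // programmer-written finalization destroys the owned object
};

struct CountedOwner {
  std::shared_ptr<Node> child = std::make_shared<Node>();
  // implicit destructor: the reference is cleared and the count
  // decremented automatically, so no finalizer code is needed
};

int main() {
  ManualOwner m;   // its destructor deletes the child explicitly
  CountedOwner c;  // member destruction decrements the count implicitly
}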

Resource management

In languages where objects have deterministic lifetimes, object lifetime may be used for piggybacking resource management: this is called the Resource Acquisition Is Initialization (RAII) idiom: resources are acquired during initialization, and released during finalization. In languages where objects have non-deterministic lifetimes, notably due to garbage collection, the management of memory is generally kept separate from management of other resources.
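
A minimal C++ RAII sketch (the File class is illustrative): the file handle is acquired in the constructor and released in the destructor, so releasing the resource piggybacks on the object's deterministic destruction, including when an exception unwinds the scope.

#include <cstdio>
#include <stdexcept>

class File {
  std::FILE* handle_;
 public:
  explicit File(const char* path) : handle_(std::fopen(path, "r")) {
    if (!handle_) throw std::runtime_error("cannot open file");  // acquisition is initialization
  }
  ~File() { std::fclose(handle_); }  // release happens exactly once, at destruction
  File(const File&) = delete;        // one owner per resource
  File& operator=(const File&) = delete;
  std::FILE* get() const { return handle_; }
};

int main() {
  try {
    File f("example.txt");
    // ... use f.get() ...
  }  // the handle is closed here, even if an exception was thrown
  catch (const std::exception&) { /* open failed: nothing to release */ }
}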

Object creation

In the typical case, the process is as follows:

  • calculate the size of an object – the size is mostly the same as that of the class but can vary. When the object in question is not derived from a class, but from a prototype instead, the size of an object is usually that of the internal data structure (a hash for instance) that holds its slots.
  • allocation – allocating memory space with the size of an object plus the growth later, if possible to know in advance
  • binding methods – this is usually either left to the class of the object, or is resolved at dispatch time, but nevertheless it is possible that some object models bind methods at creation time.
  • calling the initializing code (namely, the constructor) of the superclass
  • calling the initializing code of the class being created

These tasks can be completed at once, but are sometimes left unfinished, and the order in which they run can vary and can cause strange behaviors. For example, with multiple inheritance it is a difficult question which initializing code should be called first; nevertheless, superclass constructors should be called before subclass constructors.
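
C++ illustrates the required ordering: base-class (superclass) constructors always run before the derived-class constructor body, and destructors run in the reverse order. A minimal sketch:

#include <iostream>

struct Base {
  Base() { std::cout << "Base constructed\n"; }    // superclass initialization runs first
  ~Base() { std::cout << "Base destroyed\n"; }
};

struct Derived : Base {
  Derived() { std::cout << "Derived constructed\n"; }  // then the subclass constructor body
  ~Derived() { std::cout << "Derived destroyed\n"; }
};

int main() {
  Derived d;
  // prints "Base constructed", "Derived constructed",
  // then at scope exit "Derived destroyed", "Base destroyed"
}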

It is a complex problem to create each object as an element of an array. Some languages (e.g. C++) leave this to programmers.

Handling exceptions in the midst of object creation is particularly problematic because the implementation of throwing exceptions usually relies on valid object states. For instance, there is no way to allocate new space for an exception object when the allocation of an object has already failed due to a lack of free memory. For this reason, implementations of OO languages should provide mechanisms that allow exceptions to be raised even when resources are in short supply, and programmers or the type system should ensure that their code is exception-safe. Propagating an exception is more likely to free resources than to allocate them. In object-oriented programming, however, object construction may fail, because constructing an object should establish the class invariants, which are often not valid for every combination of constructor arguments. Thus, constructors can raise exceptions.
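
A C++ sketch of a failing construction (the Member and Fails classes are illustrative): when a constructor throws, members that were already initialized are finalized and the memory is released, but the failed object's own destructor never runs, because its lifetime never properly began.

#include <iostream>
#include <stdexcept>

struct Member {
  Member()  { std::cout << "member initialized\n"; }
  ~Member() { std::cout << "member finalized\n"; }
};

struct Fails {
  Member m;
  Fails() {
    // The class invariant cannot be established, so construction fails.
    throw std::runtime_error("invariant not satisfiable");
  }
  ~Fails() { std::cout << "never runs: the object was never fully constructed\n"; }
};

int main() {
  try {
    Fails f;
  } catch (const std::exception& e) {
    // The already-initialized member was finalized and its memory released;
    // the Fails destructor itself was not called.
    std::cout << "construction failed: " << e.what() << '\n';
  }
}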

The abstract factory pattern is a way to decouple a particular implementation of an object from code for the creation of such an object.
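
A compact C++ sketch of the pattern (all class names are illustrative): client code asks an abstract factory for a product and never names the concrete class that is actually created.

#include <memory>

struct Button {
  virtual ~Button() = default;
  virtual void draw() = 0;
};

struct WinButton : Button { void draw() override { /* draw in Windows style */ } };
struct MacButton : Button { void draw() override { /* draw in macOS style */ } };

struct ButtonFactory {
  virtual ~ButtonFactory() = default;
  virtual std::unique_ptr<Button> makeButton() = 0;
};

struct WinFactory : ButtonFactory {
  std::unique_ptr<Button> makeButton() override { return std::make_unique<WinButton>(); }
};

struct MacFactory : ButtonFactory {
  std::unique_ptr<Button> makeButton() override { return std::make_unique<MacButton>(); }
};

void clientCode(ButtonFactory& factory) {
  auto button = factory.makeButton();  // creation is decoupled from the concrete type
  button->draw();
}

int main() {
  WinFactory windows;
  clientCode(windows);  // the client never mentions WinButton directly
}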

Creation methods

The way to create objects varies across languages. In some class-based languages, a special method known as a constructor is responsible for initializing and validating the state of an object. Just like ordinary methods, constructors can be overloaded so that an object can be created with different attributes specified. Also, the constructor is the only place where the state of an immutable object can be set. A copy constructor is a constructor that takes a (single) parameter of an existing object of the same type as the constructor's class, and returns a copy of the object passed as a parameter.

Other programming languages, such as Objective-C, have class methods, which can include constructor-type methods, but are not restricted to merely instantiating objects.

C++ and Java have been criticized for not providing named constructors – a constructor must always have the same name as the class. This can be problematic if the programmer wants to provide two constructors with the same argument types, e.g., to create a point object either from Cartesian coordinates or from polar coordinates, both of which would be represented by two floating point numbers. Objective-C can circumvent this problem, in that the programmer can create a Point class with initialization methods, for example, +newPointWithX:andY: and +newPointWithR:andTheta:. In C++, something similar can be done using static member functions.
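
A minimal C++ sketch of such named constructors (the Point class and method names are illustrative): both factory functions take two doubles, which ordinary overloaded constructors could not distinguish.

#include <cmath>

class Point {
  double x_, y_;
  Point(double x, double y) : x_(x), y_(y) {}  // single private constructor
 public:
  static Point fromCartesian(double x, double y) { return Point(x, y); }
  static Point fromPolar(double r, double theta) {
    return Point(r * std::cos(theta), r * std::sin(theta));
  }
  double x() const { return x_; }
  double y() const { return y_; }
};

int main() {
  Point a = Point::fromCartesian(3.0, 4.0);
  Point b = Point::fromPolar(5.0, 0.927);  // roughly the same point
}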

A constructor can also refer to a function which is used to create a value of a tagged union, particularly in functional languages.

Object destruction

It is generally the case that after an object is used, it is removed from memory to make room for other programs or objects to take that object's place. However, if there is sufficient memory or a program has a short run time, object destruction may not occur, memory simply being deallocated at process termination. In some cases object destruction simply consists of deallocating the memory, particularly in garbage-collected languages, or if the "object" is actually a plain old data structure. In other cases some work is performed prior to deallocation, particularly destroying member objects (in manual memory management), or deleting references from the object to other objects to decrement reference counts (in reference counting). This may be automatic, or a special destruction method may be called on the object.

In class-based languages with deterministic object lifetime, notably C++, a destructor is a method called when an instance of a class is deleted, before the memory is deallocated. In C++, destructors differ from constructors in various ways: they cannot be overloaded, must have no arguments, need not maintain class invariants, and can cause program termination if they throw exceptions.

In garbage-collected languages, objects may be destroyed when they can no longer be reached by the running code. In class-based GCed languages, the analog of destructors are finalizers, which are called before an object is garbage-collected. These differ in running at an unpredictable time and in an unpredictable order, since garbage collection is unpredictable, and are significantly less used and less complex than C++ destructors. Examples of such languages include Java, Python, and Ruby.

Destroying an object will cause any references to the object to become invalid, and in manual memory management any existing references become dangling references. In garbage collection (both tracing garbage collection and reference counting), objects are only destroyed when there are no references to them, but finalization may create new references to the object, and to prevent dangling references, object resurrection occurs so the references remain valid.

Examples

C++

class Foo {
 public:
  // These are the prototype declarations of the constructors.
  Foo(int x);
  Foo(int x, int y);    // Overloaded Constructor.
  Foo(const Foo &old);  // Copy Constructor.
  ~Foo();               // Destructor.
};

Foo::Foo(int x) {
  // This is the implementation of
  // the one-argument constructor.
}

Foo::Foo(int x, int y) {
  // This is the implementation of
  // the two-argument constructor.
}

Foo::Foo(const Foo &old) {
  // This is the implementation of
  // the copy constructor.
}

Foo::~Foo() {
  // This is the implementation of the destructor.
}

int main() {
  Foo foo(14);       // Call first constructor.
  Foo foo2(12, 16);  // Call overloaded constructor.
  Foo foo3(foo);     // Call the copy constructor.

  // Destructors called in backwards-order
  // here, automatically.
}

Java

class Foo
{
    public Foo(int x)
    {
        // This is the implementation of
        // the one-argument constructor
    }

    public Foo(int x, int y)
    {
        // This is the implementation of
        // the two-argument constructor
    }

    public Foo(Foo old)
    {
        // This is the implementation of
        // the copy constructor
    }

    public static void main(String[] args)
    {
        Foo foo = new Foo(14); // call first constructor
        Foo foo2 = new Foo(12, 16); // call overloaded constructor
        Foo foo3 = new Foo(foo); // call the copy constructor
        // garbage collection happens under the covers, and objects are destroyed
    }
}

C#

namespace ObjectLifeTime;

class Foo
{
    public Foo()
    {
        // This is the implementation of
        // default constructor.
    }

    public Foo(int x)
    {
        // This is the implementation of
        // the one-argument constructor.
    }
    ~Foo()
    {
        // This is the implementation of
        // the destructor.
    }

    public Foo(int x, int y)
    {
        // This is the implementation of
        // the two-argument constructor.
    }
 
    public Foo(Foo old)
    {
        // This is the implementation of
        // the copy constructor.
    }
 
    public static void Main(string[] args)
    {
        var defaultfoo = new Foo(); // Call default constructor
        var foo = new Foo(14); // Call first constructor
        var foo2 = new Foo(12, 16); // Call overloaded constructor
        var foo3 = new Foo(foo); // Call the copy constructor
    }
}

Objective-C

#import <objc/Object.h>
#include <math.h>  // for cos() and sin() in the polar constructor

@interface Point : Object
{
   double x;
   double y;
}

//These are the class methods; we have declared two constructors
+ (Point *) newWithX: (double) x_val andY: (double) y_val;
+ (Point *) newWithR: (double) r_val andTheta: (double) theta_val;

//Instance methods; each returns self so that calls can be chained
- (Point *) setFirstCoord: (double) new_val;
- (Point *) setSecondCoord: (double) new_val;

/* Since Point is a subclass of the generic Object
 * class, we already inherit generic allocation and
 * initialization methods, +alloc and -init. Our specific
 * constructors can be built from these inherited methods.
 */
@end
 
@implementation Point

- (Point *) setFirstCoord: (double) new_val
{
   x = new_val;
   return self;
}

- (Point *) setSecondCoord: (double) new_val
{
   y = new_val;
   return self;
}

+ (Point *) newWithX: (double) x_val andY: (double) y_val
{
   //Concisely written class method to automatically allocate and
   //perform specific initialization.
   return [[[[Point alloc] init] setFirstCoord:x_val] setSecondCoord:y_val];
}

+ (Point *) newWithR: (double) r_val andTheta: (double) theta_val
{
   //Convert the polar coordinates to Cartesian ones and
   //reuse the constructor defined above.
   return [Point newWithX:r_val * cos(theta_val) andY:r_val * sin(theta_val)];
}

@end

int
main(void)
{
   //Constructs two points, p and q.
   Point *p = [Point newWithX:4.0 andY:5.0];
   Point *q = [Point newWithR:1.0 andTheta:2.28];

   //...program text....
   
   //We're finished with p, say, so, free it.
   //If p allocates more memory for itself, may need to
   //override Object's free method in order to recursively
   //free p's memory. But this is not the case, so we can just
   [p free];

   //...more text...

   [q free];

   return 0;
}

Object Pascal

Related Languages: "Delphi", "Free Pascal", "Mac Pascal".

program Example;

type

  DimensionEnum =
    (
      deUnassigned,
      de2D,
      de3D,
      de4D
    );

  PointClass = class
  private
    Dimension: DimensionEnum;

  public
    X: Integer;
    Y: Integer;
    Z: Integer;
    T: Integer;

  public
    (* prototypes of the constructors *)

    constructor Create(); overload;
    constructor Create(AX, AY: Integer); overload;
    constructor Create(AX, AY, AZ: Integer); overload;
    constructor Create(AX, AY, AZ, ATime: Integer); overload;
    constructor CreateCopy(APoint: PointClass);

    (* prototype of the destructor *)

    destructor Destroy; override;
  end;

constructor PointClass.Create();
begin
  // implementation of a generic, no-argument constructor
  Self.Dimension := deUnassigned;
end;

constructor PointClass.Create(AX, AY: Integer);
begin
  // implementation of a 2-argument constructor
  Self.X := AX;
  Self.Y := AY;

  Self.Dimension := de2D;
end;

constructor PointClass.Create(AX, AY, AZ: Integer);
begin
  // implementation of a 3-argument constructor
  Self.X := AX;
  Self.Y := AY;
  Self.Z := AZ;

  Self.Dimension := de3D;
end;

constructor PointClass.Create(AX, AY, AZ, ATime: Integer);
begin
  // implementation of a 4-argument constructor
  Self.X := AX;
  Self.Y := AY;
  Self.Z := AZ;
  Self.T := ATime;

  Self.Dimension := de4D;
end;

constructor PointClass.CreateCopy(APoint: PointClass);
begin
  // implementation of a "copy" constructor
  Self.X := APoint.X;
  Self.Y := APoint.Y;
  Self.Z := APoint.Z;
  Self.T := APoint.T;

  Self.Dimension := APoint.Dimension;
end;

destructor PointClass.Destroy;
begin
  // implementation of a generic, no-argument destructor
  Self.Dimension := deUnassigned;
  inherited Destroy;
end;

var
  (* class instances in Delphi-style Object Pascal are always
     dynamically allocated; these variables hold references *)
  S: PointClass;
  D: PointClass;

begin (* of program *)
  (* object lifetime with explicit creation and destruction *)
  S := PointClass.Create(5, 7);

  (* do something with "S" *)

  S.Free;

  (* a second, independently managed instance *)
  D := PointClass.Create(5, 7);

  (* do something with "D" *)

  D.Free;
end.  (* of program *)

Python

class Socket:
    def __init__(self, remote_host: str) -> None:
        ...  # connect to the remote host

    def send(self, data):
        ...  # send data

    def recv(self):
        ...  # receive data

    def close(self):
        ...  # close the socket

    def __del__(self):
        # __del__ is called when the object's reference count reaches zero
        self.close()

def f():
    socket = Socket("example.com")
    socket.send("test")
    return socket.recv()

The socket will be closed once the function "f" has run and returned, since all references to it are then lost: immediately under CPython's reference counting, or at a later garbage-collection round in implementations that rely on tracing collection.

Graphical user interface

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Graphical_user_interface

The graphical user interface, or GUI (/ˌdʒiːjuːˈaɪ/ JEE-yoo-EYE or /ˈɡuːi/ GOO-ee), is a form of user interface that allows users to interact with electronic devices through graphical icons and audio indicators such as primary notation, instead of text-based UIs, typed command labels or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces (CLIs), which require commands to be typed on a computer keyboard.

The actions in a GUI are usually performed through direct manipulation of the graphical elements. Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices, smartphones and smaller household, office and industrial controls. The term GUI tends not to be applied to other lower-resolution types of interfaces, such as video games (where head-up displays (HUDs) are preferred), or to interfaces that do not use flat screens, such as volumetric displays, because the term is restricted to the scope of two-dimensional display screens able to describe generic information, in the tradition of the computer science research at the Xerox Palo Alto Research Center.

GUI and interaction design

The GUI is presented (displayed) on the computer screen. It is the result of processed user input and usually the main interface for human-machine interaction. The touch UIs popular on small mobile devices are an overlay of the visual output to the visual input.

Designing the visual composition and temporal behavior of a GUI is an important part of software application programming in the area of human–computer interaction. Its goal is to enhance the efficiency and ease of use for the underlying logical design of a stored program, a design discipline named usability. Methods of user-centered design are used to ensure that the visual language introduced in the design is well-tailored to the tasks.

The visible graphical interface features of an application are sometimes referred to as chrome or GUI (pronounced gooey). Typically, users interact with information by manipulating visual widgets that allow for interactions appropriate to the kind of data they hold. The widgets of a well-designed interface are selected to support the actions necessary to achieve the goals of users. A model–view–controller allows flexible structures in which the interface is independent of and indirectly linked to application functions, so the GUI can be customized easily. This allows users to select or design a different skin at will, and eases the designer's work to change the interface as user needs evolve. Good GUI design relates to users more, and to system architecture less. Large widgets, such as windows, usually provide a frame or container for the main presentation content such as a web page, email message, or drawing. Smaller ones usually act as a user-input tool.

A GUI may be designed for the requirements of a vertical market as application-specific GUIs. Examples include automated teller machines (ATM), point of sale (POS) touchscreens at restaurants, self-service checkouts used in a retail store, airline self-ticket and check-in, information kiosks in a public space, like a train station or a museum, and monitors or control screens in an embedded industrial application which employ a real-time operating system (RTOS).

Cell phones and handheld game systems also employ application specific touchscreen GUIs. Newer automobiles use GUIs in their navigation systems and multimedia centers, or navigation multimedia center combinations.

Components

Layers of a GUI based on a windowing system

A GUI uses a combination of technologies and devices to provide a platform that users can interact with, for the tasks of gathering and producing information.

A series of elements conforming to a visual language have evolved to represent information stored in computers. This makes it easier for people with few computer skills to work with and use computer software. The most common combination of such elements in GUIs is the windows, icons, text fields, canvases, menus, pointer (WIMP) paradigm, especially in personal computers.

The WIMP style of interaction uses a virtual input device to represent the position of a pointing device's interface, most often a mouse, and presents information organized in windows and represented with icons. Available commands are compiled together in menus, and actions are performed making gestures with the pointing device. A window manager facilitates the interactions between windows, applications, and the windowing system. The windowing system handles hardware devices such as pointing devices, graphics hardware, and positioning of the pointer.

In personal computers, all these elements are modeled through a desktop metaphor to produce a simulation called a desktop environment in which the display represents a desktop, on which documents and folders of documents can be placed. Window managers and other software combine to simulate the desktop environment with varying degrees of realism.

Entries may appear in a list to make space for text and details, or in a grid for compactness and larger icons with little space underneath for text. Variations in between exist, such as a list with multiple columns of items and a grid of items with rows of text extending sideways from the icon.

Multi-row and multi-column layouts commonly found on the web are "shelf" and "waterfall". The former is found on image search engines, where images appear with a fixed height but variable length, and is typically implemented with the CSS property and parameter display: inline-block;. A waterfall layout found on Imgur and Tweetdeck with fixed width but variable height per item is usually implemented by specifying column-width:.

Post-WIMP interface

Smaller mobile devices such as personal digital assistants (PDAs) and smartphones typically use the WIMP elements with different unifying metaphors, due to constraints in space and available input devices. Applications for which WIMP is not well suited may use newer interaction techniques, collectively termed post-WIMP UIs.

As of 2011, some touchscreen-based operating systems such as Apple's iOS (iPhone) and Android use the class of GUIs named post-WIMP. These support styles of interaction using more than one finger in contact with a display, which allows actions such as pinching and rotating, which are unsupported by one pointer and mouse.

Interaction

Human interface devices for efficient interaction with a GUI include a computer keyboard, especially used together with keyboard shortcuts; pointing devices for cursor (or rather pointer) control: mouse, pointing stick, touchpad, trackball, and joystick; virtual keyboards; and head-up displays (translucent information devices at eye level).

There are also actions performed by programs that affect the GUI. For example, there are components like inotify or D-Bus to facilitate communication between computer programs.

History

Early efforts

Ivan Sutherland developed Sketchpad in 1963, widely held as the first graphical computer-aided design program. It used a light pen to create and manipulate objects in engineering drawings in realtime with coordinated graphics. In the late 1960s, researchers at the Stanford Research Institute, led by Douglas Engelbart, developed the On-Line System (NLS), which used text-based hyperlinks manipulated with a then-new device: the mouse. (A 1968 demonstration of NLS became known as "The Mother of All Demos.") In the 1970s, Engelbart's ideas were further refined and extended to graphics by researchers at Xerox PARC and specifically Alan Kay, who went beyond text-based hyperlinks and used a GUI as the main interface for the Smalltalk programming language, which ran on the Xerox Alto computer, released in 1973. Most modern general-purpose GUIs are derived from this system.

The Xerox PARC GUI consisted of graphical elements such as windows, menus, radio buttons, and check boxes. The concept of icons was later introduced by David Canfield Smith, who had written a thesis on the subject under the guidance of Kay. The PARC GUI employs a pointing device along with a keyboard. These aspects can be emphasized by using the alternative term and acronym for windows, icons, menus, pointing device (WIMP). This effort culminated in the 1973 Xerox Alto, the first computer with a GUI, though the system never reached commercial production.

The first commercially available computer with a GUI was the 1979 PERQ workstation, manufactured by Three Rivers Computer Corporation. Its design was heavily influenced by the work at Xerox PARC. In 1981, Xerox eventually commercialized the Alto in the form of a new and enhanced system – the Xerox 8010 Information System – more commonly known as the Xerox Star. These early systems spurred many other GUI efforts, including Lisp machines by Symbolics and other manufacturers, the Apple Lisa (which presented the concept of the menu bar and window controls) in 1983, the Apple Macintosh 128K in 1984, the Atari ST with Digital Research's GEM, and the Commodore Amiga in 1985. Visi On was released in 1983 for IBM PC compatible computers, but was never popular due to its high hardware demands. Nevertheless, it was a crucial influence on the contemporary development of Microsoft Windows.

Apple, Digital Research, IBM and Microsoft used many of Xerox's ideas to develop products, and IBM's Common User Access specifications formed the basis of the GUIs used in Microsoft Windows, IBM OS/2 Presentation Manager, and the Unix Motif toolkit and window manager. These ideas evolved to create the interface found in current versions of Microsoft Windows, and in various desktop environments for Unix-like operating systems, such as macOS and Linux. Thus most current GUIs have largely common idioms.

An Apple Lisa (1983) demonstrating LisaOS, Apple Computer's first commercially available GUI.

Popularization

HP LX System Manager running on an HP 200LX.

GUIs were a hot topic in the early 1980s. The Apple Lisa was released in 1983, and various windowing systems existed for DOS operating systems (including PC GEM and PC/GEOS). Individual applications for many platforms presented their own GUI variants. Despite the GUIs advantages, many reviewers questioned the value of the entire concept, citing hardware limits, and problems in finding compatible software.

In 1984, Apple released a television commercial which introduced the Apple Macintosh during the telecast of Super Bowl XVIII by CBS, with allusions to George Orwell's noted novel Nineteen Eighty-Four. The goal of the commercial was to make people think about computers, identifying the user-friendly interface as a personal computer which departed from prior business-oriented systems, and becoming a signature representation of Apple products.

Windows 95, accompanied by an extensive marketing campaign, was a major success in the marketplace at launch and shortly became the most popular desktop operating system.

In 2007, with the iPhone and later in 2010 with the introduction of the iPad, Apple popularized the post-WIMP style of interaction for multi-touch screens, and those devices were considered to be milestones in the development of mobile devices.

The GUIs familiar to most people as of the mid-late 2010s are Microsoft Windows, macOS, and the X Window System interfaces for desktop and laptop computers, and Android, Apple's iOS, Symbian, BlackBerry OS, Windows Phone/Windows 10 Mobile, Tizen, WebOS, and Firefox OS for handheld (smartphone) devices.

Comparison to other interfaces

Command-line interfaces

A modern CLI

Since the commands available in command line interfaces can be many, complex operations can be performed using a short sequence of words and symbols. Custom functions may be used to facilitate access to frequent actions. Command-line interfaces are more lightweight, as they only recall information necessary for a task; for example, no preview thumbnails or graphical rendering of web pages. This allows greater efficiency and productivity once many commands are learned. But reaching this level takes some time because the command words may not be easily discoverable or mnemonic. Also, using the command line can become slow and error-prone when users must enter long commands comprising many parameters or several different filenames at once. However, windows, icons, menus, pointer (WIMP) interfaces present users with many widgets that represent and can trigger some of the system's available commands.

GUIs can become quite hard to use when dialogs are buried deep in a system or moved to different places during redesigns. Also, icons and dialog boxes are usually harder for users to script.

WIMPs extensively use modes, as the meaning of all keys and clicks on specific positions on the screen is redefined all the time. Command-line interfaces use modes only in limited forms, such as for the current directory and environment variables.

Most modern operating systems provide both a GUI and some level of a CLI, although the GUIs usually receive more attention.

GUI wrappers

GUI wrappers provide a way around the command-line interface (CLI) versions of (typically) Linux and Unix-like software applications and their text-based UIs or typed command labels. While command-line or text-based applications allow users to run a program non-interactively, GUI wrappers atop them avoid the steep learning curve of the command line, which requires commands to be typed on the keyboard. By starting a GUI wrapper, users can intuitively interact with, start, stop, and change its working parameters through graphical icons and visual indicators of a desktop environment, for example. Applications may also provide both interfaces, and when they do the GUI is usually a WIMP wrapper around the command-line version. This is especially common with applications designed for Unix-like operating systems. The latter used to be implemented first because it allowed the developers to focus exclusively on their product's functionality without bothering about interface details such as designing icons and placing buttons. Designing programs this way also allows users to run the program in a shell script.

Three-dimensional graphical user interface

Many environments and games use the methods of 3D graphics to project 3D GUI objects onto the screen. The use of 3D graphics has become increasingly common in mainstream operating systems (e.g. Windows Aero, and Aqua on macOS) to create attractive interfaces, termed eye candy (which includes, for example, the use of drop shadows underneath windows and the cursor), or for functional purposes only possible using three dimensions. For example, user switching is represented by rotating a cube with faces representing each user's workspace, and window management is represented via a Rolodex-style flipping mechanism in Windows Vista (see Windows Flip 3D). In both cases, the operating system transforms windows on-the-fly while continuing to update the content of those windows.

The GUI is usually WIMP-based, although occasionally other metaphors surface, such as those used in Microsoft Bob, 3dwm, File System Navigator, File System Visualizer, 3D Mailbox, and GopherVR. Zooming (ZUI) is a related technology that promises to deliver the representation benefits of 3D environments without their usability drawbacks of orientation problems and hidden objects. In 2006, Hillcrest Labs introduced the first ZUI for television. Other innovations include the menus on the PlayStation 2, the menus on the Xbox, Sun's Project Looking Glass, Metisse, which was similar to Project Looking Glass, BumpTop, where users can manipulate documents and windows with realistic movement and physics as if they were physical documents, Croquet OS, which is built for collaboration, and compositing window managers such as Enlightenment and Compiz. Augmented reality and virtual reality also make use of 3D GUI elements.

In science fiction

3D GUIs have appeared in science fiction literature and films, even before certain technologies were feasible or in common use.

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...