
Wednesday, August 19, 2020

User interface

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/User_interface 

The Reactable, an example of a tangible user interface

In the industrial design field of human-computer interaction, a user interface (UI) is the space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, whilst the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls, and process controls. The design considerations applicable when creating user interfaces are related to, or involve such disciplines as, ergonomics and psychology.

Generally, the goal of user interface design is to produce a user interface which makes it easy, efficient, and enjoyable (user-friendly) to operate a machine in the way which produces the desired result (i.e. maximum usability). This generally means that the operator needs to provide minimal input to achieve the desired output, and also that the machine minimizes undesired outputs to the user.

User interfaces are composed of one or more layers, including a human-machine interface (HMI) that interfaces machines with physical input hardware such as keyboards, mice, or game pads, and output hardware such as computer monitors, speakers, and printers. A device that implements an HMI is called a human interface device (HID). Other terms for human-machine interfaces are man-machine interface (MMI) and, when the machine in question is a computer, human-computer interface. Additional UI layers may interact with one or more human senses, including: tactile UI (touch), visual UI (sight), auditory UI (sound), olfactory UI (smell), equilibrial UI (balance), and gustatory UI (taste).

Composite user interfaces (CUIs) are UIs that interact with two or more senses. The most common CUI is a graphical user interface (GUI), which is composed of a tactile UI and a visual UI capable of displaying graphics. When sound is added to a GUI, it becomes a multimedia user interface (MUI). There are three broad categories of CUI: standard, virtual and augmented. Standard composite user interfaces use standard human interface devices like keyboards, mice, and computer monitors. When the CUI blocks out the real world to create a virtual reality, the CUI is virtual and uses a virtual reality interface. When the CUI does not block out the real world and creates augmented reality, the CUI is augmented and uses an augmented reality interface. When a UI interacts with all human senses, it is called a qualia interface, named after the theory of qualia. CUIs may also be classified by how many senses they interact with, as either an X-sense virtual reality interface or X-sense augmented reality interface, where X is the number of senses interfaced with. For example, a Smell-O-Vision is a 3-sense (3S) standard CUI with visual display, sound and smells; when virtual reality interfaces interface with smells and touch it is said to be a 4-sense (4S) virtual reality interface; and when augmented reality interfaces interface with smells and touch it is said to be a 4-sense (4S) augmented reality interface.
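As a rough illustration of this X-sense naming scheme, here is a minimal Python sketch; it is not from the article, and the enum and function names are purely illustrative:

    from enum import Enum

    class Sense(Enum):
        TOUCH = "tactile"
        SIGHT = "visual"
        SOUND = "auditory"
        SMELL = "olfactory"
        BALANCE = "equilibrial"
        TASTE = "gustatory"

    def label_cui(senses: set, mode: str = "standard") -> str:
        """Label a composite UI by sense count and reality mode (X-sense convention)."""
        x = len(senses)
        if mode == "virtual":
            return f"{x}-sense ({x}S) virtual reality interface"
        if mode == "augmented":
            return f"{x}-sense ({x}S) augmented reality interface"
        return f"{x}-sense ({x}S) standard CUI"

    # Smell-O-Vision example from the text: sight, sound and smell on standard devices.
    print(label_cui({Sense.SIGHT, Sense.SOUND, Sense.SMELL}))   # 3-sense (3S) standard CUI
    # Adding touch in a virtual-reality setting gives a 4-sense VR interface.
    print(label_cui({Sense.SIGHT, Sense.SOUND, Sense.SMELL, Sense.TOUCH}, "virtual"))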

Overview


The user interface or human–machine interface is the part of the machine that handles the human–machine interaction. Membrane switches, rubber keypads and touchscreens are examples of the physical parts of the human–machine interface which we can see and touch.

In complex systems, the human–machine interface is typically computerized. The term human–computer interface refers to this kind of system. In the context of computing, the term typically extends as well to the software dedicated to control the physical elements used for human-computer interaction.

The engineering of human–machine interfaces is enhanced by considering ergonomics (human factors). The corresponding disciplines are human factors engineering (HFE) and usability engineering (UE), which is part of systems engineering.

Tools used for incorporating human factors in the interface design are developed based on knowledge of computer science, such as computer graphics, operating systems, and programming languages. Nowadays, the expression graphical user interface is used for the human–machine interface on computers, as nearly all of them now use graphics.

Multimodal interfaces allow users to interact using more than one modality of user input.

Terminology

A human–machine interface usually involves peripheral hardware for input and output. Often, there is an additional component implemented in software, such as a graphical user interface.
There is a difference between a user interface and an operator interface or a human–machine interface (HMI).
  • The term "user interface" is often used in the context of (personal) computer systems and electronic devices.
    • This includes cases where a network of equipment or computers is interlinked through an MES (Manufacturing Execution System) or host to display information.
    • A human-machine interface (HMI) is typically local to one machine or piece of equipment, and is the interface method between the human and the equipment/machine. An operator interface is the interface method by which multiple pieces of equipment that are linked by a host control system are accessed or controlled.
    • The system may expose several user interfaces to serve different kinds of users. For example, a computerized library database might provide two user interfaces, one for library patrons (limited set of functions, optimized for ease of use) and the other for library personnel (wide set of functions, optimized for efficiency); a minimal code sketch of this idea follows this list.
  • The user interface of a mechanical system, a vehicle or an industrial installation is sometimes referred to as the human–machine interface (HMI). HMI is a modification of the original term MMI (man-machine interface). In practice, the abbreviation MMI is still frequently used although some may claim that MMI stands for something different now. Another abbreviation is HCI, but is more commonly used for human–computer interaction. Other terms used are operator interface console (OIC) and operator interface terminal (OIT). However it is abbreviated, the terms refer to the 'layer' that separates a human that is operating a machine from the machine itself. Without a clean and usable interface, humans would not be able to interact with information systems.
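As a rough illustration of the library-database example above, here is a minimal Python sketch of one system exposing two user interfaces to different kinds of users; the action names are hypothetical and not from the article:

    PATRON_ACTIONS = ["search_catalog", "renew_loan", "reserve_title"]
    STAFF_ACTIONS = PATRON_ACTIONS + ["add_title", "edit_record", "manage_fines", "run_reports"]

    def available_actions(role: str) -> list:
        """Return the set of actions the interface should expose for a given role."""
        return STAFF_ACTIONS if role == "staff" else PATRON_ACTIONS

    print(available_actions("patron"))  # limited set, optimized for ease of use
    print(available_actions("staff"))   # wide set, optimized for efficiency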
In science fiction, HMI is sometimes used to refer to what is better described as a direct neural interface. However, this latter usage is seeing increasing application in the real-life use of (medical) prostheses—the artificial extension that replaces a missing body part (e.g., cochlear implants).

In some circumstances, computers might observe the user and react according to their actions without specific commands. A means of tracking parts of the body is required, and sensors noting the position of the head, direction of gaze and so on have been used experimentally. This is particularly relevant to immersive interfaces.

History

The history of user interfaces can be divided into the following phases according to the dominant type of user interface:

1945–1968: Batch interface

IBM 029 card punch
IBM 029

In the batch era, computing power was extremely scarce and expensive. User interfaces were rudimentary. Users had to accommodate computers rather than the other way around; user interfaces were considered overhead, and software was designed to keep the processor at maximum utilization with as little overhead as possible.

The input side of the user interfaces for batch machines was mainly punched cards or equivalent media like paper tape. The output side added line printers to these media. With the limited exception of the system operator's console, human beings did not interact with batch machines in real time at all.

Submitting a job to a batch machine involved, first, preparing a deck of punched cards describing a program and a dataset. Punching the program cards wasn't done on the computer itself, but on keypunches, specialized typewriter-like machines that were notoriously bulky, unforgiving, and prone to mechanical failure. The software interface was similarly unforgiving, with very strict syntaxes meant to be parsed by the smallest possible compilers and interpreters.

Holes are punched in the card according to a prearranged code transferring the facts from the census questionnaire into statistics

Once the cards were punched, one would drop them in a job queue and wait. Eventually, operators would feed the deck to the computer, perhaps mounting magnetic tapes to supply another dataset or helper software. The job would generate a printout, containing final results or an abort notice with an attached error log. Successful runs might also write a result on magnetic tape or generate some data cards to be used in a later computation.

The turnaround time for a single job often spanned entire days. If one were very lucky, it might be hours; there was no real-time response. But there were worse fates than the card queue; some computers required an even more tedious and error-prone process of toggling in programs in binary code using console switches. The very earliest machines had to be partly rewired to incorporate program logic into themselves, using devices known as plugboards.

Early batch systems gave the currently running job the entire computer; program decks and tapes had to include what we would now think of as operating system code to talk to I/O devices and do whatever other housekeeping was needed. Midway through the batch period, after 1957, various groups began to experiment with so-called “load-and-go” systems. These used a monitor program which was always resident on the computer. Programs could call the monitor for services. Another function of the monitor was to do better error checking on submitted jobs, catching errors earlier and more intelligently and generating more useful feedback to the users. Thus, monitors represented the first step towards both operating systems and explicitly designed user interfaces.

1969–present: Command-line user interface

Teletype Model 33
Teletype Model 33 ASR

Command-line interfaces (CLIs) evolved from batch monitors connected to the system console. Their interaction model was a series of request-response transactions, with requests expressed as textual commands in a specialized vocabulary. Latency was far lower than for batch systems, dropping from days or hours to seconds. Accordingly, command-line systems allowed the user to change his or her mind about later stages of the transaction in response to real-time or near-real-time feedback on earlier results. Software could be exploratory and interactive in ways not possible before. But these interfaces still placed a relatively heavy mnemonic load on the user, requiring a serious investment of effort and learning time to master.
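As a rough illustration of this request-response model, the following Python sketch implements a tiny command loop; the two-command vocabulary is hypothetical and stands in for the specialized vocabularies described above:

    import datetime

    def repl() -> None:
        """Read a textual command, execute it, print the response, repeat."""
        commands = {
            "date": lambda args: datetime.datetime.now().isoformat(),
            "echo": lambda args: " ".join(args),
        }
        while True:
            line = input("$ ").strip()
            if not line:
                continue
            if line in ("exit", "quit"):
                break
            verb, *args = line.split()
            handler = commands.get(verb)
            print(handler(args) if handler else f"{verb}: command not found")

    if __name__ == "__main__":
        repl()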

The earliest command-line systems combined teleprinters with computers, adapting a mature technology that had proven effective for mediating the transfer of information over wires between human beings. Teleprinters had originally been invented as devices for automatic telegraph transmission and reception; they had a history going back to 1902 and had already become well-established in newsrooms and elsewhere by 1920. In reusing them, economy was certainly a consideration, but psychology and the Rule of Least Surprise mattered as well; teleprinters provided a point of interface with the system that was familiar to many engineers and users.

The VT100, introduced in 1978, was the most popular VDT of all time. Most terminal emulators still default to VT100 mode.
DEC VT100 terminal
The widespread adoption of video-display terminals (VDTs) in the mid-1970s ushered in the second phase of command-line systems. These cut latency further, because characters could be thrown on the phosphor dots of a screen more quickly than a printer head or carriage could move. They helped quell conservative resistance to interactive programming by cutting ink and paper consumables out of the cost picture, and were to the first TV generation of the late 1950s and 60s even more iconic and comfortable than teleprinters had been to the computer pioneers of the 1940s.

Just as importantly, the existence of an accessible screen — a two-dimensional display of text that could be rapidly and reversibly modified — made it economical for software designers to deploy interfaces that could be described as visual rather than textual. The pioneering applications of this kind were computer games and text editors; close descendants of some of the earliest specimens, such as rogue(6), and vi(1), are still a live part of Unix tradition.

1985: SAA User Interface or Text-Based User Interface

In 1985, with the beginning of Microsoft Windows and other graphical user interfaces, IBM created what is called the Systems Application Architecture (SAA) standard, which includes the Common User Access (CUA) derivative. CUA successfully created what we know and use today in Windows, and most of the more recent DOS or Windows console applications use that standard as well.

This defined that a pulldown menu system should be at the top of the screen, a status bar at the bottom, and that shortcut keys should stay the same for all common functionality (F2 to Open, for example, would work in all applications that followed the SAA standard). This greatly increased the speed at which users could learn an application, so it caught on quickly and became an industry standard.
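As a rough illustration of that layout, the following Python sketch uses the standard curses module to draw a pulldown-style menu bar on the top row, a status bar on the bottom row, and a common F2 = Open shortcut; the menu labels and the F3 = Exit binding are assumptions for the example, not part of the SAA/CUA standard:

    import curses

    def main(stdscr):
        curses.curs_set(0)                      # hide the cursor
        height, width = stdscr.getmaxyx()
        # Menu bar on the top row, status bar on the bottom row (CUA-style layout).
        stdscr.addstr(0, 0, " File  Edit  View  Help"[: width - 1].ljust(width - 1), curses.A_REVERSE)
        stdscr.addstr(height - 1, 0, " F2=Open  F3=Exit"[: width - 1].ljust(width - 1), curses.A_REVERSE)
        stdscr.addstr(2, 2, "Press F2 to 'open', F3 to quit.")
        while True:
            key = stdscr.getch()
            if key == curses.KEY_F0 + 2:        # F2: the common "Open" shortcut
                stdscr.addstr(4, 2, "An Open dialog would appear here.")
            elif key == curses.KEY_F0 + 3:      # F3: exit (an assumption for this sketch)
                break

    curses.wrapper(main)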

1968–present: Graphical User Interface

AMX Desk made a basic WIMP GUI
Linotype WYSIWYG 2000, 1989
  • 1968 – Douglas Engelbart demonstrated NLS, a system which uses a mouse, pointers, hypertext, and multiple windows.
  • 1970 – Researchers at Xerox Palo Alto Research Center (many from SRI) develop WIMP paradigm (Windows, Icons, Menus, Pointers)
  • 1973 – Xerox Alto: commercial failure due to expense, poor user interface, and lack of programs
  • 1979 – Steve Jobs and other Apple engineers visit Xerox PARC. Though Pirates of Silicon Valley dramatizes the events, Apple had already been working on developing a GUI, in projects such as the Macintosh and Lisa, before the visit.
  • 1981 – Xerox Star: focus on WYSIWYG. Commercial failure (25K sold) due to cost ($16K each), performance (minutes to save a file, couple of hours to recover from crash), and poor marketing
  • 1984 – Apple Macintosh popularizes the GUI. Super Bowl commercial shown twice, was the most expensive commercial ever made at that time
  • 1984 – MIT's X Window System: hardware-independent platform and networking protocol for developing GUIs on UNIX-like systems
  • 1985 – Windows 1.0 – provided a GUI for MS-DOS. No overlapping windows (tiled instead).
  • 1985 – Microsoft and IBM start work on OS/2 meant to eventually replace MS-DOS and Windows
  • 1986 – Apple threatens to sue Digital Research because their GUI desktop looked too much like Apple's Mac.
  • 1987 – Windows 2.0 – Overlapping and resizable windows, keyboard and mouse enhancements
  • 1987 – Macintosh II: first full-color Mac
  • 1988 – OS/2 1.10 Standard Edition (SE) has GUI written by Microsoft, looks a lot like Windows 2

Interface design

Primary methods used in the interface design include prototyping and simulation.

Typical human–machine interface design consists of the following stages: interaction specification, interface software specification and prototyping:
  • Common practices for interaction specification include user-centered design, persona, activity-oriented design, scenario-based design, and resiliency design.
  • Common practices for interface software specification include use cases and constraint enforcement by interaction protocols (intended to avoid use errors).
  • Common practices for prototyping are based on libraries of interface elements (controls, decoration, etc.).

Principles of quality

All great interfaces share eight qualities or characteristics:
  1. Clarity: The interface avoids ambiguity by making everything clear through language, flow, hierarchy and metaphors for visual elements.
  2. Concision: It's easy to make the interface clear by over-clarifying and labeling everything, but this leads to interface bloat, where there is just too much stuff on the screen at the same time. If too many things are on the screen, finding what you're looking for is difficult, and so the interface becomes tedious to use. The real challenge in making a great interface is to make it concise and clear at the same time.
  3. Familiarity: Even if someone uses an interface for the first time, certain elements can still be familiar. Real-life metaphors can be used to communicate meaning.
  4. Responsiveness: A good interface should not feel sluggish. This means that the interface should provide good feedback to the user about what's happening and whether the user's input is being successfully processed.
  5. Consistency: Keeping your interface consistent across your application is important because it allows users to recognize usage patterns.
  6. Aesthetics: While you don't need to make an interface attractive for it to do its job, making something look good will make the time your users spend using your application more enjoyable; and happier users can only be a good thing.
  7. Efficiency: Time is money, and a great interface should make the user more productive through shortcuts and good design.
  8. Forgiveness: A good interface should not punish users for their mistakes but should instead provide the means to remedy them.

Principle of least astonishment

The principle of least astonishment (POLA) is a general principle in the design of all kinds of interfaces. It is based on the idea that human beings can only pay full attention to one thing at one time, leading to the conclusion that novelty should be minimized.

Principle of habit formation

If an interface is used persistently, the user will unavoidably develop habits for using the interface. The designer's role can thus be characterized as ensuring the user forms good habits. If the designer is experienced with other interfaces, they will similarly develop habits, and often make unconscious assumptions regarding how the user will interact with the interface.

A model of design criteria: User Experience Honeycomb

User interface / user experience guide
User Experience Design Honeycomb designed by Peter Morville
Peter Morville designed the User Experience Honeycomb framework in 2004 while leading work in user interface design. The framework was created to guide user interface design, and it served as a guideline for many web development students for a decade.
  1. Usable: Is the design of the system easy and simple to use? The application should feel familiar, and it should be easy to use.
  2. Useful: Does the application fulfill a need? A business’s product or service needs to be useful.
  3. Desirable: Is the design of the application sleek and to the point? The aesthetics of the system should be attractive, and easy to translate.
  4. Findable: Are users able to quickly find the information they're looking for? Information needs to be findable and simple to navigate. A user should never have to hunt for your product or information.
  5. Accessible: Does the application support enlarged text without breaking the framework? An application should be accessible to those with disabilities.
  6. Credible: Does the application exhibit trustworthy security and company details? An application should be transparent, secure, and honest.
  7. Valuable: Does the end-user think it's valuable? If all 6 criteria are met, the end-user will find value and trust in the application.

Types

Touchscreen of the HP Series 100 HP-150
HP Series 100 HP-150 Touchscreen
  1. Attentive user interfaces manage the user's attention, deciding when to interrupt the user, the kind of warnings to give, and the level of detail of the messages presented to the user.
  2. Batch interfaces are non-interactive user interfaces, where the user specifies all the details of the batch job in advance of batch processing, and receives the output when all the processing is done. The computer does not prompt for further input after the processing has started.
  3. Command line interfaces (CLIs) prompt the user to provide input by typing a command string with the computer keyboard and respond by outputting text to the computer monitor. Used by programmers and system administrators, in engineering and scientific environments, and by technically advanced personal computer users.
  4. Conversational interfaces enable users to command the computer with plain text English (e.g., via text messages, or chatbots) or voice commands, instead of graphic elements. These interfaces often emulate human-to-human conversations.
  5. Conversational interface agents attempt to personify the computer interface in the form of an animated person, robot, or other character (such as Microsoft's Clippy the paperclip), and present interactions in a conversational form.
  6. Crossing-based interfaces are graphical user interfaces in which the primary task consists in crossing boundaries instead of pointing.
  7. Direct manipulation interface is the name of a general class of user interfaces that allow users to manipulate objects presented to them, using actions that correspond at least loosely to the physical world.
  8. Gesture interfaces are graphical user interfaces which accept input in a form of hand gestures, or mouse gestures sketched with a computer mouse or a stylus.
  9. Graphical user interfaces (GUI) accept input via devices such as a computer keyboard and mouse and provide articulated graphical output on the computer monitor. There are at least two different principles widely used in GUI design: Object-oriented user interfaces (OOUIs) and application-oriented interfaces.
  10. Hardware interfaces are the physical, spatial interfaces found on products in the real world from toasters, to car dashboards, to airplane cockpits. They are generally a mixture of knobs, buttons, sliders, switches, and touchscreens.
  11. Holographic user interfaces provide input to electronic or electro-mechanical devices by passing a finger through reproduced holographic images of what would otherwise be tactile controls of those devices, floating freely in the air, detected by a wave source and without tactile interaction.
  12. Intelligent user interfaces are human-machine interfaces that aim to improve the efficiency, effectiveness, and naturalness of human-machine interaction by representing, reasoning, and acting on models of the user, domain, task, discourse, and media (e.g., graphics, natural language, gesture).
  13. Motion tracking interfaces monitor the user's body motions and translate them into commands, currently being developed by Apple.
  14. Multi-screen interfaces employ multiple displays to provide more flexible interaction. This is often employed in computer game interaction, both in commercial arcades and, more recently, in the handheld market.
  15. Natural-language interfaces are used for search engines and on webpages. The user types in a question and waits for a response.
  16. Non-command user interfaces, which observe the user to infer their needs and intentions, without requiring that they formulate explicit commands.
  17. Object-oriented user interfaces (OOUI) are based on object-oriented programming metaphors, allowing users to manipulate simulated objects and their properties.
  18. Permission-driven user interfaces show or conceal menu options or functions depending on the user's level of permissions. The system is intended to improve the user experience by removing items that are unavailable to the user. A user who sees functions that are unavailable for use may become frustrated. It also provides an enhancement to security by hiding functional items from unauthorized persons. (A minimal code sketch of this idea appears after this list.)
  19. Reflexive user interfaces where the users control and redefine the entire system via the user interface alone, for instance to change its command verbs. Typically, this is only possible with very rich graphic user interfaces.
  20. Search interface is how the search box of a site is displayed, as well as the visual representation of the search results.
  21. Tangible user interfaces, which place a greater emphasis on touch and the physical environment or its elements.
  22. Task-focused interfaces are user interfaces which address the information overload problem of the desktop metaphor by making tasks, not files, the primary unit of interaction.
  23. Text-based user interfaces (TUIs) are user interfaces which interact via text. TUIs include command-line interfaces and text-based WIMP environments.
  24. Touchscreens are displays that accept input by touch of fingers or a stylus. Used in a growing amount of mobile devices and many types of point of sale, industrial processes and machines, self-service machines, etc.
  25. Touch user interfaces are graphical user interfaces using a touchpad or touchscreen display as a combined input and output device. They supplement or replace other forms of output with haptic feedback methods. Used in computerized simulators, etc.
  26. Voice user interfaces, which accept input and provide output by generating voice prompts. The user input is made by pressing keys or buttons, or responding verbally to the interface.
  27. Web-based user interfaces or web user interfaces (WUI) that accept input and provide output by generating web pages viewed by the user using a web browser program. Newer implementations utilize PHP, Java, JavaScript, AJAX, Apache Flex, .NET Framework, or similar technologies to provide real-time control in a separate program, eliminating the need to refresh a traditional HTML-based web browser. Administrative web interfaces for web-servers, servers and networked computers are often called control panels.
  28. Zero-input interfaces get inputs from a set of sensors instead of querying the user with input dialogs.
  29. Zooming user interfaces are graphical user interfaces in which information objects are represented at different levels of scale and detail, and where the user can change the scale of the viewed area in order to show more detail.
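As a rough illustration of the permission-driven interfaces in item 18 above, here is a minimal Python sketch that filters menu entries against the current user's permissions before showing them; the menu labels and permission names are hypothetical:

    MENU = [
        ("Open report",   "reports.read"),
        ("Edit report",   "reports.write"),
        ("Delete report", "reports.admin"),
    ]

    def visible_menu(user_permissions: set) -> list:
        """Return only the menu labels the current user is allowed to use."""
        return [label for label, needed in MENU if needed in user_permissions]

    print(visible_menu({"reports.read"}))                   # ['Open report']
    print(visible_menu({"reports.read", "reports.write"}))  # ['Open report', 'Edit report']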

LCD television

From Wikipedia, the free encyclopedia
A generic LCD TV, with speakers on either side of the screen

Liquid-crystal-display televisions (LCD TVs) are television sets that use liquid-crystal displays to produce images. They are, by far, the most widely produced and sold television display type. LCD TVs are thin and light, but have some disadvantages compared to other display types such as high power consumption, poorer contrast ratio, and inferior color gamut.

LCD TVs rose in popularity in the early years of the 21st century, surpassing sales of cathode ray tube televisions worldwide in 2007. Sales of CRT TVs dropped rapidly after that, as did sales of competing technologies such as plasma display panels and rear-projection television.

History

An LCD TV hanging on a wall in the Taipei World Trade Center during the Computex Taipei show in 2008.

Early efforts

Passive matrix LCDs first became common as portable computer displays in the 1980s, competing for market share with plasma displays. The LCDs had very slow refresh rates that blurred the screen even with scrolling text, but their light weight and low cost were major benefits. Screens using reflective LCDs required no internal light source, making them particularly well suited to laptop computers. Refresh rates of early devices were too slow to be useful for television.

Portable televisions were a target application for LCDs. LCDs consumed far less battery power than even the miniature tubes used in portable televisions of the era. In 1980, Hattori Seiko's R&D group began development on color LCD pocket televisions. In 1982, Seiko Epson released the first LCD television, the Epson TV Watch, a small wrist-worn active-matrix LCD television. Sharp Corporation introduced the dot matrix TN-LCD in 1983, and Casio introduced its TV-10 portable TV. In 1984, Epson released the ET-10, the first full-color pocket LCD television. That same year Citizen Watch introduced the Citizen Pocket TV, a 2.7-inch color LCD TV, with the first commercial TFT LCD display. 


Throughout this period, screen sizes over 30" were rare as these formats would start to appear blocky at normal seating distances when viewed on larger screens. LCD projection systems were generally limited to situations where the image had to be viewed by a larger audience. At the same time, plasma displays could easily offer the performance needed to make a high quality display, but suffered from low brightness and very high power consumption. Still, some experimentation with LCD televisions took place during this period. In 1988, Sharp introduced a 14-inch active-matrix full-color full-motion TFT-LCD. These were offered primarily as high-end items, and were not aimed at the general market. This led to Japan launching an LCD industry, which developed larger-size LCDs, including TFT computer monitors and LCD televisions. Epson developed the 3LCD projection technology in the 1980s, and licensed it for use in projectors in 1988. Epson's VPJ-700, released in January 1989, was the world's first compact, full-color LCD projector.

Market takeover

In 2006, LCD prices started to fall rapidly and their screen sizes increased, although plasma televisions maintained a slight edge in picture quality and a price advantage for sets at the critical 42" size and larger. By late 2006, several vendors were offering 42" LCDs, albeit at a premium price, encroaching upon plasma's only stronghold. More decisively, LCDs offered higher resolutions and true 1080p support, while plasmas were stuck at 720p, which made up for the price difference.

Predictions that prices for LCDs would rapidly drop through 2007 led to a "wait and see" attitude in the market, and sales of all large-screen televisions stagnated while customers watched to see if this would happen. Plasmas and LCDs reached price parity in 2007, with the LCD's higher resolution being a 'winning point' for many sales. By late 2007, it was clear plasmas would lose out to LCDs during the critical Christmas sales season. This was in spite of plasmas continuing to hold an image quality advantage, but as the president of Chunghwa Picture Tubes noted after shutting down their plasma production line, "(g)lobally, so many companies, so many investments, so many people have been working in this area, on this product. So they can improve so quickly."

When the sales figures for the 2007 Christmas season were finally tallied, analysts were surprised to find that not only had LCD outsold plasma, but CRTs as well, during the same period. This development drove competing large-screen systems from the market almost overnight. Plasma had overtaken rear-projection systems in 2005. The same was true for CRTs, which lasted only a few months longer; Sony ended sales of their famous Trinitron in most markets in 2007, and shut down the final plant in March 2008. The February 2009 announcement that Pioneer Electronics was ending production of its plasma screens was widely considered the tipping point in that technology's history as well.

LCD's dominance in the television market accelerated rapidly. It was the only technology that could scale both up and down in size, covering both the high-end market for large screens in the 40 to 50" class, as well as customers looking to replace their existing smaller CRT sets in the 14 to 30" range. Building across these wide scales quickly pushed the prices down across the board.

In 2008, LCD TV shipments were up 33 percent year-on-year compared to 2007, at 105 million units. In 2009, LCD TV shipments rose to 146 million units (69% of the total of 211 million TV shipments). In 2010, LCD TV shipments reached 187.9 million units (from an estimated total of 247 million TV shipments).

Larger size displays continued to be released throughout the decade:
  • In October 2004, Sharp announced the successful manufacture of a 65" panel.
  • In March 2005, Samsung announced an 82" LCD panel.
  • In August 2006, LG.Philips LCD announced a 100" LCD television.
  • In January 2007, Sharp displayed a 108" LCD panel under the AQUOS brand name at CES in Las Vegas.

Competing systems

In spite of LCD's dominance of the television field, other technologies continued to be developed to address its shortcomings. Whereas LCDs produce an image by selectively blocking a backlight, organic LED, microLED, field-emission display and surface-conduction electron-emitter display technologies all produce an illuminated image directly. In comparison to LCDs all of these technologies offer better viewing angles, much higher brightness and contrast ratio (as much as 5,000,000:1), and better color saturation and accuracy. They also use less power, and in theory they are less complex and less expensive to build.

Manufacturing these screens proved to be more difficult than originally thought, however. Sony abandoned their field-emission display project in March 2009, but continued to work on OLED sets. Canon continued development of their surface-conduction electron-emitter display technology, but announced they would not attempt to introduce sets to market for the foreseeable future.

Samsung announced that 14.1 and 31 inch OLED sets were "production ready" at the SID 2009 trade show in San Antonio.

Large-screen television technology

56 inch DLP rear-projection TV
 
Large-screen television technology (colloquially big-screen TV) developed rapidly in the late 1990s and 2000s. Previously, a video display that used large-screen television technology was called a jumbotron and was used at stadiums and concerts. Various thin-screen technologies are being developed, but only liquid crystal display (LCD), plasma display (PDP) and Digital Light Processing (DLP) have been released on the public market. However, recently released technologies like organic light-emitting diode (OLED), and not-yet-released technologies like surface-conduction electron-emitter display (SED) or field emission display (FED), are on their way to replacing the first flat-screen technologies in picture quality.

These technologies have almost completely displaced cathode ray tubes (CRT) in television sales, due to the necessary bulkiness of cathode ray tubes. The diagonal screen size of a CRT television is limited to about 40 inches because of the size requirements of the cathode ray tube, which fires three beams of electrons onto the screen, creating a viewable image. A larger screen size requires a longer tube, making a CRT television with a large screen (50 to 80 inches diagonally) unrealistic. The new technologies can produce large-screen televisions that are much thinner.

Viewing distances

Horizontal, vertical and diagonal field of view

Before deciding on a particular display technology size, it is very important to determine from what distances it is going to be viewed. As the display size increases so does the ideal viewing distance. Bernard J. Lechner, while working for RCA, studied the best viewing distances for various conditions and derived the so-called Lechner distance.

As a rule of thumb, the viewing distance should be roughly two to three times the screen size for standard definition (SD) displays; a small calculator sketch follows the table below.

Screen size (in)    Viewing distance (ft)    Viewing distance (m)
15–26               5–8                      1.5–2.4
26–32               8–11.5                   2.4–3.5
32–42               11.5–13                  3.5–4
42–55               >13                      >4
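As a rough illustration of the two-to-three-times rule of thumb, the following Python sketch converts a diagonal screen size into a recommended SD viewing-distance range; note that the table above, taken from the article, suggests somewhat larger distances, and both are only rough guides:

    def sd_viewing_distance_ft(diagonal_in: float) -> tuple:
        """Return the (near, far) recommended SD viewing distance in feet (2x to 3x the diagonal)."""
        inches_per_foot = 12.0
        near = 2.0 * diagonal_in / inches_per_foot
        far = 3.0 * diagonal_in / inches_per_foot
        return near, far

    for size in (26, 32, 42):
        near, far = sd_viewing_distance_ft(size)
        print(f'{size}" screen: about {near:.1f}-{far:.1f} ft ({near * 0.3048:.1f}-{far * 0.3048:.1f} m)')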

Display specifications

The following are important factors for evaluating television displays:
  • Display size: the diagonal length of the display.
  • Display resolution: the number of pixels in each dimension on a display. In general a higher resolution will yield a clearer, sharper image.
  • Dot pitch: This is the size of an individual pixel, which includes the length of the subpixels and the distances between subpixels. It can be measured as the horizontal or diagonal length of a pixel. A smaller dot pitch generally results in sharper images because there are more pixels in a given area. In the case of CRT-based displays, pixels are not equivalent to the phosphor dots, as they are to the pixel triads in LC displays. Projection displays that use three monochrome CRTs do not have a dot structure, so this specification does not apply. (A short calculation sketch follows this list.)
  • Response time: The time it takes for the display to respond to a given input. For an LC display it is defined as the total time it takes for a pixel to transition from black to white, and then white to black. A display with slow response times displaying moving pictures may result in blurring and distortion. Displays with fast response times can make better transitions in displaying moving objects without unwanted image artefacts.
  • Brightness: The amount of light emitted from the display. It is sometimes synonymous with the term luminance, which is defined as the amount of light per area and is measured in SI units as candela per square meter.
  • Contrast ratio: The ratio of the luminance of the brightest color to the luminance of the darkest color on the display. High contrast ratios are desirable but the method of measurement varies greatly. It can be measured with the display isolated from its environment or with the lighting of the room being accounted for. Static contrast ratio is measured on a static image at some instant in time. Dynamic contrast ratio is measured on the image over a period of time. Manufacturers can market either static or dynamic contrast ratio depending on which one is higher.
  • Aspect ratio: The ratio of the display width to the display height. The aspect ratio of a traditional television is 4:3, which is being discontinued; the television industry is currently changing to the 16:9 ratio typically used by large-screen, high-definition televisions.
  • Viewing angle: The maximum angle at which the display can be viewed with acceptable quality. The angle is measured from one direction to the opposite direction of the display, such that the maximum viewing angle is 180 degrees. Outside of this angle the viewer will see a distorted version of the image being displayed. The definition of what is acceptable quality for the image can be different among manufacturers and display types. Many manufacturers define this as the point at which the luminance is half of the maximum luminance. Some manufacturers define it based on contrast ratio and look at the angle at which a certain contrast ratio is realized.
  • Color reproduction/gamut: The range of colors that the display can accurately represent.
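Several of these specifications are related by simple arithmetic. The following Python sketch (square pixels assumed; the 42-inch 1920x1080 panel is just an example) derives pixel density and an approximate dot pitch from a display's resolution and diagonal size:

    import math

    def pixels_per_inch(width_px: int, height_px: int, diagonal_in: float) -> float:
        """Pixel density computed from the resolution and the diagonal screen size."""
        diagonal_px = math.hypot(width_px, height_px)
        return diagonal_px / diagonal_in

    def dot_pitch_mm(width_px: int, height_px: int, diagonal_in: float) -> float:
        """Approximate diagonal pixel size in millimetres (25.4 mm per inch)."""
        return 25.4 / pixels_per_inch(width_px, height_px, diagonal_in)

    print(round(pixels_per_inch(1920, 1080, 42), 1))  # ~52.5 pixels per inch
    print(round(dot_pitch_mm(1920, 1080, 42), 3))     # ~0.484 mm dot pitch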

Display technologies

LCD television

A pixel on an LCD consists of multiple layers of components: two polarizing filters, two glass plates with electrodes, and liquid crystal molecules. The liquid crystals are sandwiched between the glass plates and are in direct contact with the electrodes. The two polarizing filters are the outer layers in this structure. The polarity of one of these filters is oriented horizontally, while the polarity of the other filter is oriented vertically. The electrodes are treated with a layer of polymer to control the alignment of liquid crystal molecules in a particular direction. These rod-like molecules are arranged to match the horizontal orientation on one side and the vertical orientation on the other, giving the molecules a twisted, helical structure. Twisted nematic liquid crystals are naturally twisted, and are commonly used for LCDs because they react predictably to temperature variation and electric current.

When the liquid crystal material is in its natural state, light passing through the first filter will be rotated (in terms of polarity) by the twisted molecule structure, which allows the light to pass through the second filter. When voltage is applied across the electrodes, the liquid crystal structure is untwisted to an extent determined by the amount of voltage. A sufficiently large voltage will cause the molecules to untwist completely, such that the polarity of any light passing through will not be rotated and will instead be perpendicular to the filter polarity. This filter will block the passage of light because of the difference in polarity orientation, and the resulting pixel will be black. The amount of light allowed to pass through at each pixel can be controlled by varying the corresponding voltage accordingly. In a color LCD each pixel consists of red, green, and blue subpixels, which require appropriate color filters in addition to the components mentioned previously. Each subpixel can be controlled individually to display a large range of possible colors for a particular pixel.

The electrodes on one side of the LCD are arranged in columns, while the electrodes on the other side are arranged in rows, forming a large matrix that controls every pixel. Each pixel is designated a unique row-column combination, and the pixel can be accessed by the control circuits using this combination. These circuits send charge down the appropriate row and column, effectively applying a voltage across the electrodes at a given pixel. Simple LCDs such as those on digital watches can operate on what is called a passive-matrix structure, in which each pixel is addressed one at a time. This results in extremely slow response times and poor voltage control. A voltage applied to one pixel can cause the liquid crystals at surrounding pixels to untwist undesirably, resulting in fuzziness and poor contrast in this area of the image. LCDs with high resolutions, such as large-screen LCD televisions, require an active-matrix structure. This structure is a matrix of thin-film transistors, each corresponding to one pixel on the display. The switching ability of the transistors allows each pixel to be accessed individually and precisely, without affecting nearby pixels. Each transistor also acts as a capacitor while leaking very little current, so it can effectively store the charge while the display is being refreshed.
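As a rough illustration of the row/column addressing and voltage control described above, here is a minimal Python sketch; the linear voltage-to-transmission mapping and the 5 V maximum are toy assumptions, not the characteristics of any real panel:

    class PixelMatrix:
        """Toy model of a normally-white LC panel addressed by row and column electrodes."""

        def __init__(self, rows: int, cols: int, max_voltage: float = 5.0):
            self.max_voltage = max_voltage
            self.voltages = [[0.0] * cols for _ in range(rows)]

        def drive(self, row: int, col: int, voltage: float) -> None:
            """Apply a voltage across the electrodes that intersect at one pixel."""
            self.voltages[row][col] = min(max(voltage, 0.0), self.max_voltage)

        def transmission(self, row: int, col: int) -> float:
            """Fraction of backlight passing the pixel: full at 0 V, blocked at max voltage."""
            return 1.0 - self.voltages[row][col] / self.max_voltage

    panel = PixelMatrix(rows=4, cols=4)
    panel.drive(1, 2, 5.0)   # fully driven -> molecules untwist -> black pixel
    panel.drive(2, 3, 2.5)   # half driven  -> mid grey
    print(panel.transmission(1, 2), panel.transmission(2, 3))  # 0.0 0.5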

The following are types of LC display technologies:
  • Twisted Nematic (TN): This type of display is the most common and makes use of twisted nematic-phase crystals, which have a natural helical structure and can be untwisted by an applied voltage to allow light to pass through. These displays have low production costs and fast response times but also limited viewing angles, and many have a limited color gamut that cannot take full advantage of advanced graphics cards. These limitations are due to variation in the angles of the liquid crystal molecules at different depths, restricting the angles at which light can leave the pixel.
  • In-Plane Switching (IPS): Unlike the electrode arrangement in traditional TN displays, the two electrodes corresponding to a pixel are both on the same glass plate and are parallel to each other. The liquid crystal molecules do not form a helical structure and instead are also parallel to each other. In its natural or "off" state, the molecule structure is arranged parallel to the glass plates and electrodes. Because the twisted molecule structure is not used in an IPS display, the angle at which light leaves a pixel is not as restricted, and therefore viewing angles and color reproduction are much improved compared to those of TN displays. However, IPS displays have slower response times. IPS displays also initially suffered from poor contrast ratios, but this has been significantly improved with the development of Advanced Super IPS (AS-IPS).
  • Multi-Domain Vertical Alignment (MVA): In this type of display the liquid crystals are naturally arranged perpendicular to the glass plates but can be rotated to control light passing through. There are also pyramid-like protrusions in the glass substrates to control the rotation of the liquid crystals such that the light is channeled at an angle with the glass plate. This technology results in wide viewing angles while boasting good contrast ratios and faster response times than those of TN and IPS displays. The major drawback is a reduction in brightness.
  • Patterned Vertical Alignment (PVA): This type of display is a variation of MVA and performs very similarly, but with much higher contrast ratios.

Plasma display

Composition of plasma display panel

A plasma display is made up of many thousands of gas-filled cells that are sandwiched in between two glass plates, two sets of electrodes, dielectric material, and protective layers. The address electrodes are arranged vertically between the rear glass plate and a protective layer. This structure sits behind the cells in the rear of the display, with the protective layer in direct contact with the cells. On the front side of the display there are horizontal display electrodes that sit in between a magnesium-oxide (MgO) protective layer and an insulating dielectric layer. The MgO layer is in direct contact with the cells and the dielectric layer is in direct contact with the front glass plate. The horizontal and vertical electrodes form a grid from which each individual cell can be accessed. Each individual cell is walled off from surrounding cells so that activity in one cell does not affect another. The cell structure is similar to a honeycomb structure except with rectangular cells.

To illuminate a particular cell, the electrodes that intersect at the cell are charged by control circuitry and electric current flows through the cell, stimulating the gas (typically xenon and neon) atoms inside the cell. These ionized gas atoms, or plasmas, then release ultraviolet photons that interact with a phosphor material on the inside wall of the cell. The phosphor atoms are stimulated and electrons jump to higher energy levels. When these electrons return to their natural state, energy is released in the form of visible light. Every pixel on the display is made up of three subpixel cells. One subpixel cell is coated with red phosphor, another is coated with green phosphor, and the third cell is coated with blue phosphor. Light emitted from the subpixel cells is blended together to create an overall color for the pixel. The control circuitry can manipulate the intensity of light emitted from each cell, and therefore can produce a large gamut of colors. Light from each cell can be controlled and changed rapidly to produce a high-quality moving picture.
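As a rough illustration of how the three subpixel cells blend into one pixel color, here is a minimal Python sketch; the 8-bit intensity scale and the hex output format are assumptions for the example, not how a plasma panel actually encodes color:

    def pixel_color(red: int, green: int, blue: int) -> str:
        """Blend three subpixel intensities (0-255) into one RGB pixel value."""
        for channel in (red, green, blue):
            if not 0 <= channel <= 255:
                raise ValueError("subpixel intensity must be between 0 and 255")
        return f"#{red:02x}{green:02x}{blue:02x}"

    print(pixel_color(255, 128, 0))  # a bright orange pixel
    print(pixel_color(0, 0, 0))      # all three cells off -> black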

Projection television

A projection television uses a projector to create a small image from a video signal and magnify this image onto a viewable screen. The projector uses a bright beam of light and a lens system to project the image to a much larger size. A front-projection television uses a projector that is separate from the screen which could be a suitably prepared wall, and the projector is placed in front of the screen. The setup of a rear-projection television is similar to that of a traditional television in that the projector is contained inside the television box and projects the image from behind the screen.

Rear-projection television

The following are different types of rear-projection televisions, which differ based on the type of projector and how the image (before projection) is created:
  • CRT rear-projection television: Small cathode ray tubes create the image in the same manner that a traditional CRT television does, which is by firing a beam of electrons onto a phosphor-coated screen; the image is projected onto a large screen. This is done to overcome the cathode ray tube size limit which is about 40 inches, the maximum size for a normal direct-view-CRT television set (see image). The projection cathode ray tubes can be arranged in various ways. One arrangement is to use one tube and three phosphor (red, green, blue) coatings. Alternatively, one black-and-white tube can be used with a spinning color wheel. A third option is to use three CRTs, one each for red, green, and blue.
  • LCD rear-projection television: A lamp transmits light through a small LCD chip made up of individual pixels to create an image. The LCD projector uses dichroic mirrors to take the light and create three separate red, green, and blue beams, which are then passed through three separate LCD panels. The liquid crystals are manipulated using electric current to control the amount of light passing through. The lens system combines the three color images and projects them.
  • DLP rear-projection television: A DLP projector creates an image using a digital micromirror device (DMD chip), which on its surface contains a large matrix of microscopic mirrors, each corresponding to one pixel (or sub-pixel) in an image. Each mirror can be tilted to reflect light such that the pixel appears bright, or the mirror can be tilted to direct light elsewhere (where it is absorbed) to make the pixel appear dark. Mirrors flip between light and dark positions, so subpixel brightness is controlled by proportionally varying the amount of time a mirror is in the bright position, i.e., by pulse-width modulation (see the sketch below). The mirror is made of aluminum and is mounted on a torsion-supported yoke. There are electrodes on both sides of the yoke that control the tilt of the mirror using electrostatic attraction. The electrodes are connected to an SRAM cell located under each pixel, and charges from the SRAM cell move the mirrors. Color is created by a spinning color wheel (used with a single-chip projector) or a three-chip (red, green, blue) projector. The color wheel is placed between the lamp light source and the DMD chip such that the light passing through is colored and then reflected off the mirror array to determine brightness. A color wheel consists of a red, green, and blue sector, as well as a fourth sector to either control brightness or include a fourth color. This spinning color wheel in the single-chip arrangement can be replaced by red, green, and blue light-emitting diodes (LED). The three-chip projector uses a prism to split up the light into three beams (red, green, blue), each directed towards its own DMD chip. The outputs of the three DMD chips are recombined and then projected.
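As a rough illustration of the pulse-width modulation mentioned in the DLP item above, the following Python sketch computes how long a mirror sits in the bright position for a given brightness level; the 8-bit depth and the frame time are assumptions for the example, not DLP specifications:

    def mirror_bright_time_us(level: int, bits: int = 8, frame_us: float = 16_667.0) -> float:
        """Microseconds the mirror spends in the bright position during one frame."""
        if not 0 <= level < 2 ** bits:
            raise ValueError("level out of range for the given bit depth")
        duty_cycle = level / (2 ** bits - 1)
        return duty_cycle * frame_us

    print(mirror_bright_time_us(255))  # ~16667 us: bright for the whole frame (white)
    print(mirror_bright_time_us(64))   # ~4183 us: a dim grey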

Laser Phosphor Display

In Laser Phosphor Display technology, first demonstrated in June 2010 at InfoComm, the image is provided by the use of lasers, which are located on the back of the television, reflected off a rapidly moving bank of mirrors to excite pixels on the television screen in a similar way to cathode ray tubes. The mirrors reflect the laser beams across the screen and so produce the necessary number of image lines. The small layers of phosphors inside the glass emit red, green or blue light when excited by a soft UV laser. The laser can be varied in intensity or completely turned on or off without a problem, which means that a dark display would need less power to project its images.

Comparison of television display technologies

CRT

Though large-screen CRT TVs/monitors exist, the screen size is limited by their impracticality. The bigger the screen, the greater the weight, and the deeper the CRT. A typical 32-inch television can weigh about 150 lb or more. The Sony PVM-4300 monitor weighed 440 lb (200 kg) and had the largest ever CRT with a 43" diagonal display. SlimFit televisions exist, but are not common.

LCD

Advantages

  • Slim profile
  • Lighter and less bulky than rear-projection televisions
  • Is less susceptible to burn-in: Burn-in refers to the television displaying a permanent ghost-like image due to constant, prolonged display of the image. Light-emitting phosphors lose their luminosity over time and, when frequently used, the low-luminosity areas become permanently visible.
  • LCDs reflect very little light, allowing them to maintain contrast levels in well-lit rooms and not be affected by glare.
  • Slightly lower power usage than equivalent-sized plasma displays.
  • Can be wall-mounted.
Disadvantages

  • Poor black level: Some light passes through even when liquid crystals completely untwist, so the best black color that can be achieved is varying shades of dark gray, resulting in worse contrast ratios and detail in the image. This can be mitigated by the use of a matrix of LEDs as the illuminator to provide nearly true black performance.
  • Narrower viewing angles than competing technologies. It is nearly impossible to use an LCD without some image warping occurring.
  • LCDs rely heavily on thin-film transistors, which can be damaged, resulting in a defective pixel.
  • Typically have slower response times than plasmas, which can cause ghosting and blurring during the display of fast-moving images. This is improving as LCD refresh rates increase.

Plasma display

Advantages

  • Slim cabinet profile
  • Can be wall-mounted
  • Lighter and less voluminous than rear-projection television sets
  • More accurate color reproduction than that of an LCD; 68 billion (2³⁶) colors vs. 16.7 million (2²⁴) colors
  • Produces deep, true blacks, allowing for superior contrast ratios (up to 1,000,000:1)
  • Wider viewing angles (up to 178°) than those of an LCD; the image does not degrade (dim and distort) when viewed from a high angle, as occurs with an LCD
  • No motion blur; eliminated with higher refresh rates and faster response times (up to 1.0 microsecond), which make plasma TV technology ideal for viewing fast-moving film and sports images
Disadvantages

  • No longer being produced
  • Susceptible to screen burn-in and image retention; late-model plasma TV sets feature corrective technology, such as pixel shifting
  • Phosphor-luminosity diminishes over time, resulting in the gradual decline of absolute image-brightness; corrected with the 60,000-hour life-span of contemporary plasma TV technology (longer than that of CRT technology)
  • Not manufactured in sizes smaller than 37 inches diagonal
  • Susceptible to reflective glare in a brightly lighted room, which dims the image
  • High rate of electrical power consumption
  • Heavier than a comparable LCD TV set, because of the glass screen that contains the gases
  • Costlier screen repair; the glass screen of a plasma TV set can be damaged permanently, and is more difficult to repair than the plastic screen of an LCD TV set

Projection television

Front-projection television

Advantages

  • Significantly cheaper than flat-panel counterparts
  • Front-projection picture quality approaches that of a movie theater
  • Front-projection televisions take up very little space because a projector screen is extremely slim, and even a suitably prepared wall can be used
  • Display size can be extremely large, typically limited by room height.
Disadvantages

  • Front-projection is more difficult to set up because the projector is separate and must be placed in front of the screen, typically on the ceiling
  • Lamp may need to be replaced after heavy usage
  • Image brightness is an issue and may require a darkened room.

Rear-projection television

Advantages

  • Significantly cheaper than flat-panel counterparts
  • Projectors that are not phosphor-based (LCD/DLP) are not susceptible to burn-in
  • Rear-projection is not subject to glare
Disadvantages

  • Rear-projection televisions are much bulkier than flat-panel televisions
  • Lamp may need to be replaced after heavy usage
  • Rear-projection has smaller viewing angles than those of flat-panel displays

Comparison of different types of rear-projection televisions

CRT projector

Advantages:
  • Achieves excellent black level and contrast ratio
  • Achieves excellent color reproduction
  • CRTs have generally very long lifetimes
  • Greater viewing angles than those of LCDs
Disadvantages:
  • Heavy and large, especially depth-wise
  • If one CRT fails, the other two should be replaced for optimal color and brightness balance
  • Susceptible to burn-in because CRT is phosphor-based
  • Needs to be "converged" (primary colors positioned so they overlay without color fringes) annually (or after set relocation)
  • May display color halos or lose focus

LCD projector

Advantages:
  • Smaller than CRT projectors
  • LCD chip can be easily repaired or replaced
  • Is not susceptible to burn-in
Disadvantages:
  • The Screen-door effect: Individual pixels may be visible on the large screen, giving the appearance that the viewer is looking through a screen door.
  • Possibility of defective pixels
  • Poor black level: Some light passes through even when liquid crystals completely untwist, so the best black color that can be achieved is a very dark gray, resulting in worse contrast ratios and detail in the image. Some newer models use an adjustable iris to help offset this.
  • Not as slim as DLP projection television
  • Uses lamps for light, lamps may need to be replaced
  • Fixed number of pixels, other resolutions need to be scaled to fit this
  • Limited viewing angles

DLP projector

Advantages:
  • Slimmest of all types of projection televisions
  • Achieves excellent black level and contrast ratio
  • DMD chip can be easily repaired or replaced
  • Is not susceptible to burn-in
  • Better viewing angles than those of CRT projectors
  • Image brightness only decreases due to the age of the lamp
  • Defective pixels are rare
  • Does not experience the screen-door effect
Disadvantages:
  • Uses lamps for light, lamps need to be replaced on average once every year and a half to two years. Current models with LED lamps reduce or eliminate this. Estimated lifetime of LED lamps is over 100,000 hours.
  • Fixed number of pixels, other resolutions need to be scaled to fit this. This is a limitation only when compared with CRT displays.
  • The Rainbow Effect: This is an unwanted visual artifact that is described as flashes of colored light seen when the viewer looks across the display from one side to the other. This artifact is unique to single-chip DLP projectors. The Rainbow Effect is significant only in DLP displays that use a single white lamp with a "color wheel" that is synchronized with the display of red, green and blue components. LED illumination systems that use discrete red, green and blue LEDs in concert with the display of red, green and blue components at high frequency reduce, or altogether eliminate, the Rainbow effect.

Entropy (information theory)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Entropy_(information_theory) In info...