The World Wide Web Consortium (W3C) is the main international standards organization for the World Wide Web. Founded in 1994 and currently led by Sir Tim Berners-Lee, the consortium is made up of member organizations that maintain full-time staff working together in the development of standards for the World Wide Web. As of 21 October 2019, the consortium has 443 members.
The consortium also engages in education and outreach, develops
software and serves as an open forum for discussion about the Web.
The organization tries to foster compatibility and agreement
among industry members in the adoption of new standards defined by the
W3C. Incompatible versions of HTML are offered by different vendors,
causing inconsistency in how web pages are displayed. The consortium
tries to get all those vendors to implement a set of core principles and
components which are chosen by the consortium.
It was originally intended that CERN host the European branch of
W3C; however, CERN wished to focus on particle physics, not information
technology. In April 1995, the French Institute for Research in Computer Science and Automation (INRIA) became the European host of W3C, with Keio University Research Institute at SFC (KRIS) becoming the Asian host in September 1996.
Starting in 1997, W3C created regional offices around the world. As of
September 2009, it had eighteen World Offices covering Australia, the
Benelux countries (Netherlands, Luxembourg, and Belgium), Brazil, China,
Finland, Germany, Austria, Greece, Hong Kong, Hungary, India, Israel,
Italy, South Korea, Morocco, South Africa, Spain, Sweden, and, as of
2016, the United Kingdom and Ireland.
In October 2012, W3C convened a community of major web players and publishers to establish a MediaWiki wiki, WebPlatform and WebPlatform Docs, that seeks to document open web standards.
Sometimes,
when a specification becomes too large, it is split into independent
modules which can mature at their own pace. Subsequent editions of a
module or specification are known as levels and are denoted by the first
integer in the title (e.g. CSS3 = Level 3). Subsequent revisions on
each level are denoted by an integer following a decimal point (for
example, CSS2.1 = Revision 1).
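This naming convention can be sketched with a small parser (an illustrative helper, not a W3C tool):

```python
import re

def parse_spec_name(name):
    """Split a W3C-style spec name like 'CSS3' or 'CSS2.1' into
    (module, level, revision); revision is 0 when absent."""
    m = re.fullmatch(r"([A-Za-z-]+)(\d+)(?:\.(\d+))?", name)
    if m is None:
        raise ValueError(f"unrecognized spec name: {name!r}")
    return m.group(1), int(m.group(2)), int(m.group(3) or 0)

print(parse_spec_name("CSS3"))    # ('CSS', 3, 0)  -> Level 3
print(parse_spec_name("CSS2.1"))  # ('CSS', 2, 1)  -> Level 2, Revision 1
```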
The W3C standard formation process is defined within the W3C
process document, outlining four maturity levels through which each new
standard or recommendation must progress.
Working draft (WD)
After
enough content has been gathered from 'editor drafts' and discussion,
it may be published as a working draft (WD) for review by the community.
A WD document is the first form of a standard that is publicly
available. Commentary by virtually anyone is accepted, though no
promises are made with regard to action on any particular element
commented upon.
At this stage, the standard document may have significant
differences from its final form. As such, anyone who implements WD
standards should be ready to significantly modify their implementations
as the standard matures.
Candidate recommendation (CR)
A
candidate recommendation is a version of a standard that is more mature
than the WD. At this point, the group responsible for the standard is
satisfied that the standard meets its goal. The purpose of the CR is to
elicit aid from the development community as to how implementable the
standard is.
The standard document may change further, but at this point,
significant features are mostly decided. The design of those features
can still change due to feedback from implementors.
Proposed recommendation (PR)
A
proposed recommendation is the version of a standard that has passed
the prior two levels. The users of the standard provide input. At this
stage, the document is submitted to the W3C Advisory Council for final
approval.
While this step is important, it rarely causes any significant changes to a standard as it passes to the next phase.
W3C recommendation (REC)
This
is the most mature stage of development. At this point, the standard
has undergone extensive review and testing, under both theoretical and
practical conditions. The standard is now endorsed by the W3C,
indicating its readiness for deployment to the public, and encouraging
more widespread support among implementors and authors.
Recommendations can sometimes be implemented incorrectly,
partially, or not at all, but many standards define two or more levels
of conformance that developers must follow if they wish to label their
product as W3C-compliant.
Later revisions
A
recommendation may be updated or extended by separately-published,
non-technical errata or editor drafts until sufficient substantial edits
accumulate for producing a new edition or level of the recommendation.
Additionally, the W3C publishes various kinds of informative notes which
are to be used as references.
Certification
Unlike the ISOC
and other international standards bodies, the W3C does not have a
certification program. The W3C has decided, for now, that it is not
suitable to start such a program, owing to the risk of creating more
drawbacks for the community than benefits.
The W3C has a staff team of 70–80 worldwide as of 2015. W3C is run by a management team which allocates resources and designs strategy, led by CEO Jeffrey Jaffe (as of March 2010), former CTO of Novell. It also includes an advisory board which assists with strategy and legal matters and helps resolve conflicts. The majority of standardization work is done by external experts in the W3C's various working groups.
Membership
The Consortium is governed by its membership. The list of members is available to the public. Members include businesses, nonprofit organizations, universities, governmental entities, and individuals.
Membership requirements are transparent except for one
requirement: An application for membership must be reviewed and approved
by the W3C. Many guidelines and requirements are stated in detail, but
there is no final guideline about the process or standards by which
membership might be finally approved or denied.
The cost of membership is given on a sliding scale, depending on
the character of the organization applying and the country in which it
is located. Countries are categorized by the World Bank's most recent grouping by GNI ("Gross National Income") per capita.
Criticism
In 2012 and 2013, the W3C started considering adding DRM-specific Encrypted Media Extensions
(EME) to HTML5, which was criticised as being against the openness,
interoperability, and vendor neutrality that distinguished websites
built using only W3C standards from those requiring proprietary plug-ins
like Flash.
On September 18, 2017, the W3C published the EME specification as a Recommendation, leading to the Electronic Frontier Foundation's resignation from W3C.
SVG images and their behaviors are defined in XML text files. This means that they can be searched, indexed, scripted, and compressed. As XML files, SVG images can be created and edited with any text editor, as well as with drawing software.
This
image illustrates the difference between bitmap and vector images. The
bitmap image is composed of a fixed set of pixels, while the vector
image is composed of a fixed set of shapes. In the picture, scaling the
bitmap reveals the pixels while scaling the vector image preserves the
shapes.
SVG has been in development within the World Wide Web Consortium
(W3C) since 1999 after six competing proposals for vector graphics
languages had been submitted to the consortium during 1998. The early
SVG Working Group decided not to develop any of the commercial
submissions, but to create a new markup language that was informed by
each of them but not directly based on any of them.
The SVG specification was updated to version 1.1 in 2011. There
are two 'Mobile SVG Profiles,' SVG Tiny and SVG Basic, meant for mobile devices with reduced computational and display capabilities. Scalable Vector Graphics 2 became a W3C Candidate Recommendation on 15 September 2016. SVG 2 incorporates several new features in addition to those of SVG 1.1 and SVG Tiny 1.2.
Printing
Though the SVG Specification primarily focuses on vector graphics markup language, its design includes the basic capabilities of a page description language like Adobe's PDF. It contains provisions for rich graphics, and is compatible with CSS for styling purposes. SVG has the information needed to place each glyph and image in a chosen location on a printed page.
Scripting and animation
SVG drawings can be dynamic and interactive. Time-based modifications to the elements can be described in SMIL, or can be programmed in a scripting language (e.g. ECMAScript or JavaScript). The W3C explicitly recommends SMIL as the standard for animation in SVG.
A rich set of event handlers such as "onmouseover" and "onclick" can be assigned to any SVG graphical object to apply actions and events.
Compression
SVG images, being XML, contain many repeated fragments of text, so they are well suited for lossless data compression algorithms. When an SVG image has been compressed with the gzip algorithm, it is referred to as an "SVGZ" image and uses the corresponding .svgz filename extension. Conforming SVG 1.1 viewers will display compressed images. An SVGZ file is typically 20 to 50 percent of the original size. W3C provides SVGZ files to test for conformance.
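The lossless round trip can be demonstrated with Python's standard gzip module; the SVG content below is a made-up sample chosen for its repetitive markup:

```python
import gzip

# A made-up SVG document with deliberately repetitive markup (illustrative only).
svg = (b'<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">'
       + b'<rect x="10" y="10" width="80" height="80" fill="red"/>' * 50
       + b'</svg>')

svgz = gzip.compress(svg)         # the bytes a .svgz file would contain
restored = gzip.decompress(svgz)  # lossless: the original XML comes back intact
print(len(svgz), "of", len(svg), "bytes")
```

On highly repetitive input like this the compressed size is a small fraction of the original, which is why real-world SVGZ files are typically far smaller than their SVG sources.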
Development history
SVG was developed by the W3C SVG Working Group starting in 1998, after six competing vector graphics submissions were received that year.
SVG 1.1 became a W3C Recommendation on 14 January 2003.
The SVG 1.1 specification is modularized in order to allow subsets to
be defined as profiles. Apart from this, there is very little difference
between SVG 1.1 and SVG 1.0.
SVG Tiny and SVG Basic (the Mobile SVG Profiles) became W3C Recommendations on 14 January 2003. These are described as profiles of SVG 1.1.
SVG Tiny 1.2 became a W3C Recommendation on 22 December 2008. It was initially drafted as a profile of the planned SVG Full 1.2 (which has since been dropped in favor of SVG 2), but was later refactored as a standalone specification.
SVG 1.1 Second Edition, which includes all the errata and
clarifications, but no new features to the original SVG 1.1 was released
on 16 August 2011.
Version 2.x
SVG 2.0 removes or deprecates some features of SVG 1.1 and incorporates new features from HTML5 and Web Open Font Format:
For example, SVG 2.0 removes several font elements such as glyph and altGlyph (replaced by the WOFF font format).
The xml:space attribute is deprecated in favor of CSS.
HTML5 features such as translate and data-* attributes have been added.
It reached Candidate Recommendation stage on 15 September 2016. The latest draft was released on 23 September 2019.
Mobile profiles
Because of industry demand, two mobile profiles were introduced with SVG 1.1: SVG Tiny (SVGT) and SVG Basic (SVGB).
These are subsets of the full SVG standard, mainly intended for user agents with limited capabilities. In particular, SVG Tiny was defined for highly restricted mobile devices such as cellphones; it does not support styling or scripting. SVG Basic was defined for higher-level mobile devices, such as smartphones.
In 2003, the 3GPP,
an international telecommunications standards group, adopted SVG Tiny
as the mandatory vector graphics media format for next-generation
phones. SVGT is the required vector graphics format and support of SVGB
is optional for Multimedia Messaging Service (MMS) and Packet-switched Streaming Service. It was later added as required format for vector graphics in 3GPP IP Multimedia Subsystem (IMS).
Differences from non-mobile SVG
Neither mobile profile includes support for the full Document Object Model
(DOM), and only SVG Basic has optional support for scripting; but
because they are fully compatible subsets of the full standard, most SVG
graphics can still be rendered by devices which only support the mobile
profiles.
SVGT 1.2 adds a microDOM (μDOM), styling and scripting.
Related work
The MPEG-4 Part 20 standard, Lightweight Application Scene Representation (LASeR) and Simple Aggregation Format (SAF), is based on SVG Tiny. It was developed by MPEG (ISO/IEC JTC1/SC29/WG11) and published as ISO/IEC 14496-20:2006.
SVG capabilities are enhanced in MPEG-4 Part 20 with key features for
mobile services, such as dynamic updates, binary encoding, and state-of-the-art
font representation. SVG was also accommodated in MPEG-4 Part 11, in the Extensible MPEG-4 Textual (XMT) format, a textual representation of MPEG-4 multimedia content using XML.
Functionality
The SVG 1.1 specification defines 14 functional areas or feature sets:
Paths
Simple or compound shape outlines are drawn with curved or straight lines that can be filled in, outlined, or used as a clipping path. Paths have a compact coding.
For example, M (for "move to") precedes initial numeric x and y coordinates, and L (for "line to") precedes a point to which a line should be drawn. Further command letters (C, S, Q, T, and A) precede data that is used to draw various Bézier and elliptical curves. Z is used to close a path.
In all cases, absolute coordinates follow capital letter commands
and relative coordinates are used after the equivalent lower-case
letters.
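The path syntax described above is easy to generate programmatically. The helper below (an illustration, not part of any SVG library) emits absolute move-to and line-to commands, optionally closed with Z:

```python
def path_d(points, close=True):
    """Build an SVG path 'd' string from a list of (x, y) points:
    an absolute M command, L commands, and an optional closing Z."""
    (x0, y0), rest = points[0], points[1:]
    d = f"M {x0} {y0} " + " ".join(f"L {x} {y}" for x, y in rest)
    return d + " Z" if close else d

# A right triangle as a closed path of straight segments.
print(path_d([(10, 10), (90, 10), (90, 90)]))  # M 10 10 L 90 10 L 90 90 Z
```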
Basic shapes
Straight-line paths and paths made up of a series of connected
straight-line segments (polylines), as well as closed polygons, circles,
and ellipses can be drawn. Rectangles and round-cornered rectangles are
also standard elements.
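Because SVG is plain XML, these basic shapes can be emitted with any XML library. A sketch using Python's xml.etree (the element and attribute names come from the SVG specification; the particular shapes and colors are invented for the example):

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)  # serialize without a namespace prefix

svg = ET.Element(f"{{{SVG_NS}}}svg", width="200", height="100")
# A round-cornered rectangle, a circle, and an open polyline.
ET.SubElement(svg, f"{{{SVG_NS}}}rect", x="10", y="10", width="80",
              height="80", rx="8", fill="steelblue")
ET.SubElement(svg, f"{{{SVG_NS}}}circle", cx="150", cy="50", r="40",
              fill="tomato")
ET.SubElement(svg, f"{{{SVG_NS}}}polyline",
              points="10,95 60,60 110,95", fill="none", stroke="black")

print(ET.tostring(svg, encoding="unicode"))
```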
Text
Unicode character text included in an SVG file is expressed as XML
character data. Many visual effects are possible, and the SVG
specification automatically handles bidirectional text (for composing a
combination of English and Arabic text, for example), vertical text (as
Chinese was historically written) and characters along a curved path
(such as the text around the edge of the Great Seal of the United States).
Painting
SVG shapes can be filled and outlined (painted with a color, a
gradient, or a pattern). Fills may be opaque, or have any degree of
transparency.
"Markers" are line-end features, such as arrowheads, or symbols that can appear at the vertices of a polygon.
Color
Colors can be applied to all visible SVG elements, either directly or via fill, stroke, and other properties. Colors are specified in the same way as in CSS2, i.e. using names like black or blue, in hexadecimal such as #2f0 or #22ff00, in decimal like rgb(255,255,127), or as percentages of the form rgb(100%,100%,50%).
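As in CSS2, the three-digit hex form is shorthand in which each digit is doubled. A small illustrative converter:

```python
def hex_to_rgb(color):
    """Expand a CSS2-style hex color, short or long, to an (r, g, b) tuple."""
    digits = color.lstrip("#")
    if len(digits) == 3:                    # '#2f0' is shorthand for '#22ff00'
        digits = "".join(ch * 2 for ch in digits)
    return tuple(int(digits[i:i + 2], 16) for i in (0, 2, 4))

print(hex_to_rgb("#2f0"))     # (34, 255, 0)
print(hex_to_rgb("#22ff00"))  # (34, 255, 0) -- same color, long form
```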
Gradients and patterns
SVG shapes can be filled or outlined with solid colors as above, or
with color gradients or with repeating patterns. Color gradients can be
linear or radial (circular), and can involve any number of colors as
well as repeats. Opacity gradients can also be specified. Patterns are
based on predefined raster or vector graphic objects, which can be
repeated in x and y directions. Gradients and patterns can be animated and scripted.
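Sampling a linear gradient amounts to interpolating between its stop colors. A minimal sketch (plain linear interpolation between two stops, ignoring SVG's color-interpolation options):

```python
def lerp_color(c0, c1, t):
    """Linearly interpolate between two (r, g, b) colors;
    t runs from 0.0 to 1.0 along the gradient axis."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c0, c1))

# Sample a black-to-white linear gradient at three positions.
black, white = (0, 0, 0), (255, 255, 255)
print(lerp_color(black, white, 0.0))  # (0, 0, 0)
print(lerp_color(black, white, 0.5))  # (128, 128, 128)
print(lerp_color(black, white, 1.0))  # (255, 255, 255)
```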
Since 2008, there has been discussion among professional users of SVG that either gradient meshes or, preferably, diffusion curves
could usefully be added to the SVG specification. It is said that a
"simple representation [using diffusion curves] is capable of
representing even very subtle shading effects"
and that "Diffusion curve images are comparable both in quality and
coding efficiency with gradient meshes, but are simpler to create
(according to several artists who have used both tools), and can be
captured from bitmaps fully automatically." The current draft of SVG 2 includes gradient meshes.
Clipping, masking and compositing
Graphic elements, including text, paths, basic shapes and combinations of these, can be used as outlines to define both inside and outside regions that can be painted (with colors, gradients and patterns) independently. Fully opaque clipping paths and semi-transparent masks are composited together to calculate the color and opacity of every pixel of the final image, using alpha blending.
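The per-pixel alpha blending mentioned here is conventionally the Porter-Duff "over" operator. A sketch with normalized channels (a simplification of what a real renderer does, ignoring premultiplication and color management):

```python
def over(src, dst):
    """Porter-Duff 'over': composite a semi-transparent source pixel onto a
    destination pixel. Pixels are (r, g, b, a) with channels in 0..1."""
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    oa = sa + da * (1 - sa)  # resulting opacity
    if oa == 0:
        return (0.0, 0.0, 0.0, 0.0)

    def blend(s, d):
        return (s * sa + d * da * (1 - sa)) / oa

    return (blend(sr, dr), blend(sg, dg), blend(sb, db), oa)

# A 50%-opaque red mask over opaque white gives a pink, fully opaque pixel.
print(over((1, 0, 0, 0.5), (1, 1, 1, 1.0)))  # (1.0, 0.5, 0.5, 1.0)
```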
Filter effects
A filter effect consists of a series of graphics operations that are
applied to a given source vector graphic to produce a modified bitmapped result.
Interactivity
SVG images can interact with users in many ways. In addition to
hyperlinks as mentioned below, any part of an SVG image can be made
receptive to user interface events such as changes in focus,
mouse clicks, scrolling or zooming the image and other pointer,
keyboard and document events. Event handlers may start, stop or alter
animations as well as trigger scripts in response to such events.
Linking
SVG images can contain hyperlinks to other documents, using XLink. Through the use of the view element or a fragment identifier, URLs
can link to SVG files that change the visible area of the document.
This allows for creating specific view states that are used to zoom
in/out of a specific area or to limit the view to a specific element.
This is helpful when creating sprites. XLink support in combination with the
use element also allows linking to and re-using internal and external
elements. This allows coders to do more with less markup and makes for
cleaner code.
Scripting
All aspects of an SVG document can be accessed and manipulated using
scripts in a similar way to HTML. The default scripting language is ECMAScript (closely related to JavaScript) and there are defined Document Object Model (DOM) objects for every SVG element and attribute. Scripts are enclosed in elements. They can run in response to pointer events, keyboard events and document events as required.
Animation
SVG content can be animated using the built-in animation elements such as animate, animateMotion and animateColor.
Content can be animated by manipulating the DOM using ECMAScript and
the scripting language's built-in timers. SVG animation has been
designed to be compatible with current and future versions of Synchronized Multimedia Integration Language (SMIL). Animations can be continuous, they can loop and repeat, and they can respond to user events, as mentioned above.
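A from/to/dur animation like SVG's animate element reduces, in the simplest case, to linear interpolation over time. A simplified sketch (linear calcMode only, ignoring SMIL's full timing model):

```python
def animate_value(t, frm, to, dur, repeat_count=1):
    """Value of a linear from/to animation sampled at time t seconds;
    clamps to the final value after the last repeat."""
    t = min(max(t, 0.0), dur * repeat_count)
    phase = (t / dur) % 1.0
    if t and phase == 0.0:   # the exact end of a repeat holds the 'to' value
        phase = 1.0
    return frm + (to - frm) * phase

# Animating opacity from 0 to 1 over 2 seconds:
print(animate_value(0.0, 0, 1, 2.0))  # 0.0
print(animate_value(1.0, 0, 1, 2.0))  # 0.5
print(animate_value(2.0, 0, 1, 2.0))  # 1.0
```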
Fonts
As with HTML and CSS, text in SVG may reference external font files,
such as system fonts. If the required font files do not exist on the
machine where the SVG file is rendered, the text may not appear as
intended. To overcome this limitation, text can be displayed in an SVG font, where the required glyphs are defined in SVG as a font that is then referenced from the text element.
Metadata
In accord with the W3C's Semantic Web initiative, SVG allows authors to provide metadata about SVG content. The main facility is the metadata element, where the document can be described using Dublin Core
metadata properties (e.g. title, creator/author, subject, description,
etc.). Other metadata schemas may also be used. In addition, SVG defines
desc and title elements where authors may also provide plain-text
descriptive material within an SVG image to help indexing, searching and
retrieval by a number of means.
An SVG document can define components including shapes, gradients etc., and use them repeatedly. SVG images can also contain raster graphics, such as PNG and JPEG images, and further SVG images.
The use of SVG on the web was limited by the lack of support in older versions of Internet Explorer (IE). Many web sites that serve SVG images, such as Wikipedia, also provide the images in a raster format, either automatically by HTTP content negotiation or by allowing the user to choose the file directly.
Google
announced on 31 August 2010 that it had started to index SVG content on
the web, whether it is in standalone files or embedded in HTML, and that users would begin to see such content listed among their search results.
It was announced on 8 December 2010 that Google Image Search would also begin indexing SVG files. The site announced an option to restrict image searches to SVG files on 11 February 2011.
Native browser support
Konqueror was the first browser to support SVG in release version 3.2 in February 2004.
As of 2011, all major desktop browsers, and many minor ones, have some
level of SVG support. Other browsers' implementations are not yet
complete; see comparison of layout engines for further details.
Some earlier versions of Firefox (e.g. versions between 1.5 and 3.6), as well as a smattering of other now-outdated web browsers capable of displaying SVG graphics, needed them embedded in object or embed elements to display them integrated as parts of an HTML webpage, instead of using the standard way of integrating images with img. However, SVG images may be included in XHTML pages using XML namespaces.
Tim Berners-Lee, the inventor of the World Wide Web, has been critical of (earlier versions of) Internet Explorer for its failure to support SVG.
Opera
(since 8.0) has support for the SVG 1.1 Tiny specification, while Opera
9 includes SVG 1.1 Basic support and some of SVG 1.1 Full. Opera 9.5
has partial SVG Tiny 1.2 support. It also supports SVGZ (compressed
SVG).
Browsers based on the Gecko layout engine (such as Firefox, Flock, Camino, and SeaMonkey)
all have had incomplete support for the SVG 1.1 Full specification
since 2005. The Mozilla site has an overview of the modules which are
supported in Firefox and of the modules which are in development. Gecko 1.9, included in Firefox 3.0, adds support for more of the SVG specification (including filters).
Pale Moon, which uses the Goanna layout engine (a fork of the Gecko engine), supports SVG.
Internet Explorer 8 and older versions do not support SVG. IE9 (released 14 March 2011) supports the basic SVG feature set. IE10 extended SVG support by adding SVG 1.1 filters.
There are several advantages to native and full support: plugins
are not needed, SVG can be freely mixed with other content in a single
document, and rendering and scripting become considerably more reliable.
Mobile support
SVG
Tiny (SVGT) 1.1 and 1.2 are mobile profiles for SVG. SVGT 1.2 includes
some features not found in SVG 1.1, including non-scaling strokes, which
are supported by some SVG 1.1 implementations, such as Opera, Firefox
and WebKit. As shared code bases between desktop and mobile browsers
increased, the use of SVG 1.1 over SVGT 1.2 also increased.
Support for SVG may be limited to SVGT on older or more limited smartphones, or may be limited primarily by the device's operating system. Adobe Flash Lite has optionally supported SVG Tiny since version 1.1. At the SVG Open 2005 conference, Sun demonstrated a mobile implementation of SVG Tiny 1.1 for the Connected Limited Device Configuration (CLDC) platform.
Mobiles that use Opera Mobile, as well as the iPhone's built-in browser, also include SVG support. However, even though it used the WebKit engine, the Android built-in browser did not support SVG prior to v3.0 (Honeycomb); before then, Firefox Mobile 4.0b2 (beta) was the first browser running under Android to support SVG by default.
The level of SVG Tiny support available varies from mobile to
mobile, depending on the SVG engine installed. Many newer mobile
products support additional features beyond SVG Tiny 1.1, like gradient
and opacity; this is sometimes referred to as "SVGT 1.1+", though there
is no such standard.
RIM's BlackBerry has had built-in support for SVG Tiny 1.1 since version 5.0. Support continues in the WebKit-based BlackBerry Torch browser in OS 6 and 7.
Nokia's S60 platform
has built-in support for SVG. For example, icons are generally rendered
using the platform's SVG engine. Nokia has also led the JSR 226:
Scalable 2D Vector Graphics API expert group that defines Java ME API for SVG presentation and manipulation. This API has been implemented in S60 Platform 3rd Edition Feature Pack 1 and onward. Some Series 40 phones also support SVG (such as Nokia 6280).
Most Sony Ericsson phones beginning with K700 (by release date) support SVG Tiny 1.1. Phones beginning with K750 also support such features as opacity and gradients. Phones with Sony Ericsson Java Platform-8 have support for JSR 226.
SVG is also supported on various mobile devices from Motorola, Samsung, LG, and Siemens mobile/BenQ-Siemens. eSVG, an SVG rendering library mainly written for embedded devices, is available on some mobile platforms.
Software can be programmed to render SVG images by using a library such as librsvg used by GNOME since 2000, or Batik. SVG images can also be rendered to any desired popular image format by using ImageMagick, a free command-line utility (which also uses librsvg under the hood).
Interactive television (also known as ITV or iTV) is a form of media convergence, adding data services to traditional television technology.
Throughout its history, these have included on-demand delivery of
content, as well as new uses such as online shopping, banking, and so
forth. Interactive TV is a concrete example of how new information
technology can be integrated vertically (into established technologies
and commercial structures) rather than laterally (creating new
production opportunities outside existing commercial structures, e.g.
the world wide web).
Definitions
Interactive television represents a continuum from low (TV on/off, volume, changing channels) to moderate interactivity (simple movies on demand without player controls) and high interactivity
in which, for example, an audience member affects the program being
watched. The most obvious example of this would be any kind of real-time
voting
on the screen, in which audience votes create decisions that are
reflected in how the show continues. A return path to the program
provider is not necessary to have an interactive program experience.
Once a movie is downloaded, for example, controls may all be local. The
link was needed to download the program, but text and software that can be executed locally at the set-top box or IRD (Integrated Receiver Decoder) may run automatically once the viewer enters the channel.
History
Interactive video-on-demand (VOD) television services first appeared in the 1990s. Up until then, it was not thought possible that a television programme could be squeezed into the limited telecommunication bandwidth of a copper telephone cable to provide a VOD service of acceptable quality, as the required bandwidth of a digital television signal was around 200 Mbps, which was 2,000 times greater than the bandwidth of a speech signal over a copper telephone wire. VOD services were only made possible as a result of two major technological developments: discrete cosine transform (DCT) video compression and asymmetric digital subscriber line (ADSL) data transmission. DCT is a lossy compression technique that was first proposed by Nasir Ahmed in 1972, and was later adapted into a motion-compensated DCT algorithm for video coding standards such as the H.26x formats from 1988 onwards and the MPEG formats from 1991 onwards.
Motion-compensated DCT video compression significantly reduced the
amount of bandwidth required for a television signal, while at the same
time ADSL increased the bandwidth of data that could be sent over a
copper telephone wire. ADSL increased the bandwidth of a telephone line
from around 100 kbps to 2 Mbps, while DCT compression reduced the required bandwidth of a television signal from around 200 Mbps down to 2 Mbps. The combination of DCT and ADSL technologies made it possible to practically implement VOD services at around 2 Mbps bandwidth in the 1990s.
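The figures quoted above are consistent with each other, as a quick back-of-the-envelope check shows:

```python
# Figures from the text: raw digital TV ~200 Mbps, telephone speech ~100 kbps,
# ADSL ~2 Mbps, DCT-compressed TV ~2 Mbps. All values in kbps.
raw_tv_kbps = 200_000
speech_kbps = 100
adsl_kbps = 2_000

print(raw_tv_kbps / speech_kbps)  # 2000.0 -- "2,000 times greater"
print(raw_tv_kbps / adsl_kbps)    # 100.0  -- the compression DCT had to deliver
print(adsl_kbps / speech_kbps)    # 20.0   -- ADSL's gain over a speech channel
```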
An interactive VOD television service was proposed as early as
1986 in Japan, where there were plans to develop an "Integrated Network
System" service. It was intended to include various interactive
services, including videophone, home shopping, tele-banking, working-at-home,
and home entertainment services. However, it was not possible to
practically implement such an interactive VOD service until the adoption
of DCT and ADSL technologies made it possible in the 1990s. In early
1994, British Telecommunications (BT) began testing an interactive VOD television trial service in the United Kingdom. It used the DCT-based MPEG-1 and MPEG-2 video compression standards, along with ADSL technology.
The first patent for interactive connected TV was registered in 1994 and carried on in 1995 in the United States. It described this new interactive technology, with content feeding and feedback through global networking, and with user identification enabling interaction, purchasing, and other functionality.
Return path
The viewer must be able to alter the viewing experience (e.g. choose which angle to watch a football match), or return information to the broadcaster.
Cable TV
viewers receive their programs via a cable, and in the integrated cable
return path enabled platforms, they use the same cable as a return
path.
Satellite
viewers (mostly) return information to the broadcaster via their
regular telephone lines. They are charged for this service on their
regular telephone bill. An Internet connection via ADSL, or other data communications technology, is also being increasingly used.
Interactive TV can also be delivered via a terrestrial aerial (Digital Terrestrial TV such as 'Freeview' in the UK).
In this case, there is often no 'return path' as such - so data cannot
be sent back to the broadcaster (so you could not, for instance, vote on
a TV show, or order a product
sample). However, interactivity is still possible as there is still the
opportunity to interact with an application which is broadcast and
downloaded to the set-top box (so you could still choose camera angles, play games etc.).
Increasingly, the return path is becoming a broadband IP
connection, and some hybrid receivers are now capable of displaying
video from either the IP connection or from traditional tuners. Some
devices are now dedicated to displaying video only from the IP channel,
which has given rise to IPTV
- Internet Protocol Television. The rise of the "broadband return path"
has given new relevance to Interactive TV, as it opens up the need to
interact with Video on Demand servers, advertisers, and website
operators.
Forms of interaction
The
term "interactive television" is used to refer to a variety of rather
different kinds of interactivity (both as to usage and as to
technology), and this can lead to considerable misunderstanding. At
least three very different levels are important (see also the
instructional-video literature, which has described levels of
interactivity in computer-based instruction that look very much
like tomorrow's interactive television):
Interactivity with a TV set
The simplest, Interactivity with a TV set
is already very common, starting with the use of the remote control to
enable channel surfing behaviors, and evolving to include video-on-demand, VCR-like pause, rewind, and fast forward, and DVRs,
commercial skipping and the like. It does not change any content or its
inherent linearity, only how users control the viewing of that content.
DVRs allow users to time shift content in a way that is impractical
with VHS. Though this form of interactive TV is not insignificant,
critics claim that saying that using a remote control to turn TV sets on
and off makes television interactive is like saying turning the pages
of a book makes the book interactive.
In the not-too-distant future, the question of what counts as real
interaction with the TV will be difficult to settle. Panasonic has already
implemented face recognition technology in its prototype, the Panasonic Life Wall.
The Life Wall is literally a wall in the house that doubles as a
screen. Panasonic uses its face recognition technology to follow the
viewer around the room, adjusting the screen size according to the
viewer's distance from the wall. Its goal is to give the viewer the best
seat in the house, regardless of location. The concept was presented by
Panasonic at the Consumer Electronics Show in 2008. Its anticipated release
date is unknown, but it can be assumed technology like this will not
remain hidden for long.
Interactivity with TV program content
Interactivity with normal TV program content is "interactive TV" in the
deepest sense, but it is also the most challenging to produce. This is
the idea that the program itself might change based on viewer input.
Advanced forms, whose prospects for becoming mainstream remain
uncertain, include dramas where viewers get to choose or influence plot
details and endings.
As an example, in Accidental Lovers
viewers can send mobile text messages to the broadcast and the plot
transforms on the basis of the keywords picked from the messages.
Global Television Network offers a multi-monitor interactive game for Big Brother 8 (US),
"In The House", which allows viewers to predict who will win each
competition and who is going home, as well as to answer trivia questions
and instant-recall challenges throughout the live show. Viewers log in
to the Global website to play, with no downloads required.
Another example of interactive content is the Hugo television
game, in which viewers called the production studio and were allowed to
control the game character in real time using their telephone buttons,
assisted by studio personnel, similar to The Price Is Right.
Another example is the Clickvision Interactive Perception Panel used on news programmes in Britain, a kind of instant clap-o-meter run over the telephone.
Simpler forms, which are enjoying some success, include programs that
directly incorporate polls, questions, comments, and other forms of
(virtual) audience response back into the show. One example would be
Australian media producer Yahoo!7's
Fango mobile app, which allows viewers to access program-related polls,
discussion groups and (in some cases) input into live programming.
During the 2012 Australian Open viewers used the app to suggest questions for commentator Jim Courier to ask players in post-match interviews.
There is much debate as to how effective and popular this kind of
truly interactive TV can be. It seems likely that some forms of it will
be popular, but that viewing of pre-defined content, with a scripted
narrative arc, will remain a major part of the TV experience
indefinitely. The United States lags far behind the rest of the
developed world in its deployment of interactive television. This is a
direct consequence of the fact that commercial television in the U.S. is
not controlled by the government, whereas in the vast majority of other
countries television systems are government-controlled. These
"centrally planned" television systems can be made interactive by fiat,
whereas in the U.S. only some members of the Public Broadcasting Service
have this capability.
Commercial broadcasters and other content providers serving the US market are constrained from adopting advanced interactive technologies
because they must serve the desires of their customers, earn a level of
return on investment for their investors, and are dependent on the
penetration of interactive technology into viewers' homes. Adoption is
further constrained by many factors, such as:
requirements for backward compatibility of TV content formats, form factors and customer-premises equipment (CPE)
the "cable monopoly" laws in force in many communities served by cable TV operators
consumer acceptance of the pricing structure for new TV-delivered
services (over-the-air broadcast TV is free in the US, free of taxes
or usage fees)
proprietary coding of set-top boxes by cable operators and box manufacturers
the difficulty of implementing a "return path" for interaction in rural areas with little or no technology infrastructure
competition from Internet-based content and service providers for consumers' attention and budget
and many other technical and business roadblocks.
Interactivity with TV-related content
The least understood level, interactivity with TV-related content,
may hold the most promise to alter how we watch TV over the next decade.
Examples include getting more information about what is on the TV:
weather, sports, movies, news, or the like.
Similarly (and most likely to pay the bills), getting more
information about what is being advertised, together with the ability to
buy it, is called "t-commerce" (short for "television commerce").
Partial steps in this direction are already becoming a mass phenomenon,
as websites and mobile phone services coordinate with TV programs
(this type of interactive TV is currently being called
"participation TV", of which GSN and TBS are proponents). This kind of
multitasking is already happening on a large scale, but compared with
other forms of interactive TV there is currently little or no automated
support for relating that secondary interaction to what is on the TV.
Others argue that this is more "web-enhanced" television viewing than
interactive TV. In the coming years there may be no need to have both a
computer and a TV set for interactive television, as the interactive
content will be built into the system via the next generation of
set-top boxes. However, set-top boxes have yet to gain a strong foothold
in American households, as pricing (a pay-per-service model) and a lack
of interactive content have failed to justify their cost.
Many think of interactive TV primarily in terms of "one-screen"
forms that involve interaction on the TV screen using the remote
control, but there is another significant form of interactive TV that
makes use of two-screen solutions, such as NanoGaming. In this case, the second screen
is typically a PC (personal computer) connected to a web
application. Web applications may be synchronized with the TV broadcast,
or be regular websites that provide supplementary content to the live
broadcast, whether information, an interactive game, or a program. Some
two-screen applications allow interaction from a mobile device (phone or
PDA) that runs "in sync" with the show.
Such services are sometimes called "enhanced TV", but this
term is in decline, being seen as anachronistic and occasionally
misused. ("Enhanced TV" originated in the mid-to-late 1990s as a
term that some hoped would replace the umbrella term "interactive
TV", which carried negative associations because of the way companies
and the news media over-hyped its potential in the early 1990s.)
Notable two-screen solutions have been offered for specific popular programs by many US broadcast TV networks. Today, two-screen interactive TV is called either "2-screen" (for short) or "synchronized TV",
and is widely deployed around the US by national broadcasters with the
help of technology offerings from certain companies. The first such
application was Chat Television™ (ChatTV.com), originally developed in
1996. The system synchronized online services with television
broadcasts, grouping users by time-zone and program so that all
real-time viewers could participate in a chat or interactive gathering
during the show's airing.
One-screen interactive TV generally requires special support in the set-top box, but two-screen, synchronized interactive TV applications generally do not, relying instead on Internet
or mobile phone servers to coordinate with the TV; they are most often
free to the user. Developments from 2006 onwards indicate that the
mobile phone can be used for seamless authentication via Bluetooth or explicit authentication via near-field communication (NFC). Through such authentication it becomes possible to deliver personalized services to the mobile phone.
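The coordination described above amounts to keeping a second-screen application aligned with the broadcast timeline: the app holds a schedule of cue points and, as the broadcast clock advances, shows whichever companion activity is current. The sketch below is a minimal illustration of that idea in Python; the cue names and timecodes are hypothetical, not taken from any real service.

```python
import bisect

class CompanionTimeline:
    """Maps a broadcast timecode (seconds from programme start) to the
    companion-screen activity that should currently be shown."""

    def __init__(self, cues):
        # cues: list of (start_seconds, activity) pairs; illustrative only.
        self.cues = sorted(cues)
        self.starts = [start for start, _ in self.cues]

    def activity_at(self, timecode):
        # Find the last cue whose start time is <= the current timecode.
        i = bisect.bisect_right(self.starts, timecode) - 1
        return self.cues[i][1] if i >= 0 else None

timeline = CompanionTimeline([
    (0, "show-intro-poll"),
    (300, "trivia-round-1"),
    (900, "vote-eviction"),
])
print(timeline.activity_at(450))  # prints "trivia-round-1"
```

In a deployed service the timecode would come from a coordination server (or from audio watermarking), but the lookup logic is the same.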
Interactive TV services
Notable interactive TV services are:
ActiveVideo
(formerly known as ICTV) - Pioneers in interactive TV and creators of
CloudTV™: A cloud-based interactive TV platform built on current web and
television standards. The network-centric approach provides for the
bulk of application and video processing to be done in the cloud, and
delivers a standard MPEG stream to virtually any digital set-top box,
web-connected TV or media device.
T-commerce - a commerce transaction conducted through the set-top box's return-path connection.
ATVEF - the Advanced Television Enhancement Forum, a group of
companies set up to create HTML-based TV products and services.
ATVEF's work has resulted in an Enhanced Content Specification which
makes it possible for developers to create their content once and have
it display properly on any compliant receiver.
MSN TV
- A former service originally introduced as WebTV. It supplied
computerless Internet access. It required a set-top box that sold for
$100 to $200, with a monthly access fee. The service was discontinued in
2013, although customer service remained available until 2014.
Philips Net TV - a solution for viewing Internet content designed
for TV, integrated directly into the TV set, with no extra subscription
or hardware costs.
An interactive TV purchasing system was introduced in France in
1994. The system used a regular TV set connected to a standard antenna,
with the Internet as the feedback channel. A demonstration showed the
possibility of immediate, interactive purchasing of displayed content.
QUBE - A very early example of this concept, introduced experimentally by Warner Cable (later Time Warner Cable, now part of Charter Spectrum) in Columbus, Ohio,
in 1977. Its most notable feature was a set of five buttons that allowed
viewers to, among other things, participate in interactive game shows
and answer survey questions. While successful, expanding to a
few other cities, the service eventually proved too expensive to
run and was discontinued by 1984, although the special boxes continued
to be serviced well into the 1990s.
Interactive TV has been described in human-computer interaction research as "lean back" interaction,
as users are typically relaxing in the living-room environment with a
remote control in one hand, in contrast to the personal-computer-oriented
"lean forward" experience of a keyboard, mouse and monitor. This is a
very simplistic definition that describes fewer and fewer of the
interactive television services now in various stages of market
introduction, and it is becoming more distracting than useful: video-game
users, for example, do not lean forward while playing games on their
television sets, a precursor to interactive TV. A more useful way to
categorize the differences between PC- and TV-based user interaction is
the user's distance from the device. A TV viewer is typically "leaning
back" on the sofa, using only a remote control as a means of interaction,
while a PC user sits 2 or 3 ft (60 or 100 cm) from a high-resolution
screen using a mouse and keyboard. The demands of distance and input
devices require the application's look and feel to be designed
differently. Thus interactive TV applications are often designed for the
"10-foot user interface", while PC applications and web pages are
designed for the "3-foot user experience". This style of interface
design, rather than the "lean back or lean forward" model, is what truly
distinguishes interactive TV from the web or PC.
However, even this distinction is blurring, as there is at least one
web-based service that allows viewers to watch Internet television on a
PC with a wireless remote control.
In the case of two-screen interactive TV, the distinction between
"lean-back" and "lean-forward" interaction becomes increasingly
blurred. There has been a growing proclivity toward media multitasking,
in which multiple media devices are used simultaneously (especially
among younger viewers). This has increased interest in two-screen
services and is creating a new level of multitasking in interactive TV.
In addition, video is now ubiquitous on the web, so research can now
examine whether anything remains of the notion of "lean back" versus
"lean forward" uses of interactive television.
For one-screen services, interactivity is supplied through the API of the particular software installed on the set-top box, referred to as "middleware"
because of its intermediary position in the operating environment.
Applications are broadcast to the set-top box in a repeating "carousel".
On UK DTT, Freeview uses the ETSI-based MHEG-5 standard, while Sky's DTH platform uses the ETSI-based WTVML; in DVB-MHP systems and in OCAP, this is a DSM-CC Object Carousel.
The set-top box can then load and execute the application. In the
UK this is typically done by a viewer pressing a "trigger" button on
their remote control (e.g. the red button, as in "press red").
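The carousel delivery described above can be sketched abstractly: the same application modules repeat in a loop over the broadcast stream, and a set-top box that tunes in at an arbitrary point simply caches modules until it holds a complete copy, at which point the application can be executed. The Python sketch below models only this assembly step; the module names and payloads are illustrative, not from any real MHEG-5 or MHP carousel.

```python
from itertools import cycle, islice

# A broadcast carousel repeats the same application modules in a loop.
# Module names/payloads here are hypothetical placeholders.
APP_MODULES = {"boot": b"...", "ui": b"...", "logic": b"..."}

def receive_app(carousel, needed):
    """Cache modules from the repeating stream until the application
    is complete, then return the assembled module set."""
    cache = {}
    for name, payload in carousel:
        cache[name] = payload            # cache each module as it arrives
        if needed <= cache.keys():       # complete once all modules seen
            return cache

# Simulate joining mid-cycle (skip the first two modules); the receiver
# still recovers the full application on the next pass of the carousel.
stream = islice(cycle(APP_MODULES.items()), 2, None)
app = receive_app(stream, set(APP_MODULES))
print(sorted(app))  # prints ['boot', 'logic', 'ui']
```

The design point this illustrates is that a broadcast carousel needs no return path for delivery: repetition substitutes for retransmission requests, at the cost of worst-case startup latency of one full carousel cycle.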
Interactive TV sites
need to deliver interactivity directly from Internet
servers, and therefore require the set-top box's middleware to support
some sort of TV browser, content-translation system or content-rendering
system. Some middleware, such as Liberate, is based on a version of HTML/JavaScript and has rendering capabilities built in, while others, such as OpenTV and DVB-MHP, can load microbrowsers and applications to deliver content from TV sites. In October 2008, the ITU's J.201 paper on interoperability of TV sites recommended authoring in ETSI WTVML, allowing dynamic TV sites to be automatically translated into the various TV dialects of HTML/JavaScript while maintaining compatibility with middleware such as MHP and OpenTV via native WTVML microbrowsers.
Typically the distribution system for standard-definition digital TV is based on the MPEG-2 specification, while high-definition distribution is likely to be based on MPEG-4,
meaning that delivering HD often requires a new device or set-top
box, which is then typically also able to decode Internet video via a
broadband return path.
Emergent approaches such as the Fango app
have utilised mobile apps on smartphones and tablet devices to present
viewers with a hybrid experience across multiple devices, rather than
requiring dedicated hardware support.
Interactive television projects
Some
interactive television projects are consumer-electronics boxes that
provide set-top interactivity, while others are supplied by the
cable television companies (multiple-system operators, or MSOs) as a
system-wide solution. Still other, newer approaches integrate the
interactive functionality into the TV itself, negating the need for a
separate box. Some examples of interactive television include:
GTE mainStreet (US) a former product of GTE, also provided over select Continental Cablevision and Daniels cable television systems.
Smartbox from TV Cabo, Novabase and Microsoft
(PT) - no longer in operation, although some of the equipment is
still used for the digital TV service. This was the pioneer project.