
Sunday, February 2, 2020

Smart TV

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Smart_TV
 
LG Smart TV model 42LW5700-TA showing web browser, with on-screen keyboard active; unlike traditional TVs, a smart TV enables the viewer to interact with icons or images on the screen.
 
A Sony Bravia Smart TV showing the home screen.

A Smart TV, also known as a connected TV (CTV), is a traditional television set with integrated Internet and interactive Web 2.0 features which allow users to stream music and videos, browse the internet, and view photos. Smart TV is a technological convergence of computers, television sets and set-top boxes. Besides the traditional functions of television sets and set-top boxes provided through traditional broadcasting media, these devices can also provide Internet TV, online interactive media, over-the-top content (OTT), on-demand streaming media, and home networking access.

Smart TV should not be confused with Internet TV, IPTV or Web television. Internet TV refers to receiving television content over the Internet instead of traditional systems such as terrestrial, cable and satellite, regardless of how the Internet is delivered. IPTV is one of the Internet television technology standards for use by television broadcasters. Web television is a term used for programs created by a wide variety of companies and individuals for broadcast on Internet TV.

In smart TVs, the operating system is preloaded or is available through the set-top box. The software applications or "apps" can be preloaded into the device, or updated or installed on demand via an app store or marketplace, in a similar manner to how the apps are integrated in modern smartphones.

The technology that enables smart TVs is also incorporated in external devices such as set-top boxes and some Blu-ray players, game consoles, digital media players, hotel television systems, smartphones, and other network-connected interactive devices that utilize television-type display outputs. These devices allow viewers to find and play videos, movies, TV shows, photos and other content from the Web, cable or satellite TV channel, or from a local storage device. 

Definition

Smart TVs on display
 
A smart TV device is either a television set with integrated Internet capabilities or a set-top box for television that offers more advanced computing ability and connectivity than a contemporary basic television set. A smart TV may be thought of as an information appliance or as the computer system of a mobile device integrated within a television set; as such, a smart TV often allows the user to install and run more advanced applications or plugins/addons based on a specific platform. Smart TVs run a complete operating system or mobile operating system software, providing a platform for application developers.

Smart TV platforms or middleware have a public software development kit (SDK) and/or native development kit (NDK) for apps so that third-party developers can develop applications for it, and an app store so that the end-users can install and uninstall apps themselves. The public SDK enables third-party companies and other interactive application developers to “write” applications once and see them run successfully on any device that supports the smart TV platform or middleware architecture which it was written for, no matter who the hardware manufacturer is.

Smart TVs deliver content (such as photos, movies and music) from other computers or network-attached storage (NAS) devices on a network using either a Digital Living Network Alliance / Universal Plug and Play media server or a similar service program such as Windows Media Player, or via iTunes. They also provide access to Internet-based services including traditional broadcast TV channels, catch-up services, video-on-demand (VOD), electronic program guides, interactive advertising, personalisation, voting, games, social networking, and other multimedia applications. Smart TV enables access to movies, shows, video games, apps and more. Some of those apps include Netflix, Spotify, YouTube, and Amazon.

History

In the early 1980s, "intelligent" television receivers were introduced in Japan. The addition of an LSI chip with memory and a character generator to a television receiver enabled Japanese viewers to receive a mix of programming and information transmitted over spare lines of the broadcast television signal. A patent was published in 1994 (and extended the following year) for an "intelligent" television system, linked with data processing systems by means of a digital or analog network. Apart from being linked to data networks, one key feature is its ability to automatically download necessary software routines on a user's demand and process the user's requests.

The mass acceptance of digital television in the late 2000s and early 2010s greatly improved smart TVs. By 2015, major TV manufacturers had announced production of smart TVs only for their middle-end to high-end models, and smart TVs were expected to become the dominant form of television by the late 2010s. At the beginning of 2016, Nielsen reported that 29 percent of those with incomes over $75,000 a year had a smart TV.

Typical features

LG Smart TV using the web browser.
 
Smart TV devices also provide access to user-generated content (either stored on an external hard drive or in cloud storage) and to interactive services and Internet applications, such as YouTube, many using HTTP Live Streaming (HLS) adaptive streaming. Smart TV devices facilitate the curation of traditional content by combining information from the Internet with content from TV providers. Services offer users a means to track and receive reminders about shows or sporting events, as well as the ability to change channels for immediate viewing. Some devices feature additional interactive organic user interface / natural user interface technologies for navigation controls and other human interaction with a smart TV, such as second-screen companion devices, spatial gesture input (as with Xbox Kinect), and speech recognition for natural language user interfaces. Smart TV platforms continue to develop new features to satisfy consumers and companies, such as new payment processes. LG and PaymentWall have collaborated to allow consumers to access purchased apps, movies, games, and more using a remote control, laptop, tablet, or smartphone, with the aim of making checkout easier and more convenient.
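
The HTTP Live Streaming mentioned above works by publishing a master playlist that lists several bitrate renditions of the same content; the player then picks whichever rendition fits its measured throughput, switching as network conditions change. The Python sketch below illustrates only that selection logic; the playlist text, bitrates and file names are made-up examples, not taken from any real service.

import re

# A hypothetical HLS master playlist listing three bitrate renditions.
MASTER_PLAYLIST = """\
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
high/index.m3u8
"""

def parse_variants(playlist):
    """Return (bandwidth, uri) pairs for each rendition in the master playlist."""
    variants = []
    lines = playlist.splitlines()
    for i, line in enumerate(lines):
        if line.startswith("#EXT-X-STREAM-INF"):
            bandwidth = int(re.search(r"BANDWIDTH=(\d+)", line).group(1))
            variants.append((bandwidth, lines[i + 1]))
    return variants

def pick_variant(variants, measured_bps):
    """Pick the highest-bitrate rendition that fits the measured throughput."""
    fitting = [v for v in variants if v[0] <= measured_bps]
    return max(fitting) if fitting else min(variants)

variants = parse_variants(MASTER_PLAYLIST)
print(pick_variant(variants, measured_bps=3_000_000))   # -> (2500000, 'mid/index.m3u8')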

Platforms

Smart TV technology and software are still evolving, with both proprietary and open source software frameworks already available. These can run applications (sometimes available via an 'app store' digital distribution platform), play over-the-top media services and interactive on-demand media, provide personalized communications, and offer social networking features.

Android TV, Boxee, Firefox OS, Frog, Google TV, Horizon TV, httvLink, Inview, Kodi Entertainment Center, MeeGo, Mediaroom, OpenTV, Opera TV, Plex, Roku, RDK (Reference Development Kit), Smart TV Alliance, ToFu Media Platform, Ubuntu TV, and Yahoo! Smart TV are framework platforms managed by individual companies. HbbTV, provided by the Hybrid Broadcast Broadband TV association, CE-HTML (part of Web4CE), OIPF (part of HbbTV), and Tru2way are framework platforms managed by technology businesses. Vendors currently shipping Smart TV platforms include Amazon, Apple, Google, Haier, Hisense, Hitachi, Insignia, LG, Microsoft, Netgear, Panasonic, Philips, Samsung, Sharp, Sony, TCL, TiVo, Toshiba, Sling Media, and Western Digital. Sony, Panasonic, Samsung, LG, and Roku TV are frequently ranked among the best Smart TV platforms.

Sales

According to a report from research group NPD In-Stat, in 2012 only about 12 million U.S. households had their Web-capable TVs connected to the Internet, although an estimated 25 million households owned a set with the built-in network capability. In-Stat predicted that by 2016, 100 million homes in North America and western Europe would be using television sets blending traditional programming with internet content.

The number of households using over-the-top television services has rapidly increased over the years. In 2015, 52% of U.S. households subscribed to Netflix, Amazon Prime, or Hulu Plus; 43% of pay-TV subscribers also used Netflix, and 43% of adults used some streaming video on demand service at least monthly. Additionally, 19% of Netflix subscribers shared their subscription with people outside of their households. Ten percent of adults at the time showed interest in HBO Now.

 

Use


Social networking

Some smart TV platforms come prepackaged, or can be optionally extended, with social networking capabilities. The addition of social networking synchronization to smart TV and HTPC platforms may provide richer interaction with both on-screen content and other viewers than is currently available on most televisions, while simultaneously providing a much more cinematic experience of the content than is currently available with most computers.

Advertising

Some smart TV platforms also support interactive advertising, addressable advertising with local advertising insertion and targeted advertising, and other advanced advertising features such as ad telescoping using VOD and DVR, enhanced TV for consumer call-to-action and audience measurement solutions for ad campaign effectiveness. The marketing and trading possibilities offered by Smart TVs are sometimes summarized by the term t-commerce. Taken together, this bidirectional data flow means that smart TVs can be and are used for clandestine observation of the owners. Even in sets that are not configured off-the-shelf to do so, default security measures are often weak and will allow hackers to easily break into the TV.

2019 research, "Watching You Watch: The Tracking Ecosystem of Over-the-Top TV Streaming Devices", conducted at Princeton and University of Chicago, demonstrated that a majority of streaming devices will covertly collect and transmit personal user data, including captured screen images, to a wide network of advertising and analytics companies, raising privacy concerns.

Digital marketing research firm eMarketer reported that advertising on connected TV platforms such as Hulu and Roku was surging by 38 percent in 2019, to nearly $7 billion (about a 10 percent share of the television advertising market), with market indicators suggesting the figure would surpass $10 billion in 2021.

Security

There is evidence that smart TVs are vulnerable to attacks. Some serious security bugs have been discovered, and some successful attempts to run malicious code to gain unauthorized access have been documented on video. There is evidence that it is possible to gain root access to the device, install malicious software, access and modify configuration information for a remote control, remotely access and modify files on the TV and attached USB drives, and access the camera and microphone.

There have also been concerns that hackers may be able to remotely turn on the microphone or web camera on a smart TV and eavesdrop on private conversations. A common loop antenna may be set up as a bidirectional transmission channel, capable of uploading data rather than only receiving. Since 2012, security researchers have discovered similar vulnerabilities in further series of smart TVs that allow hackers to gain external root access to the device.

Anticipating growing demand for an antivirus for a smart TV, some security software companies are already working with partners in digital TV field on the solution. At this writing it seems like there is only one antivirus for smart TVs available: 'Neptune', a cloud-based antimalware system developed by Ocean Blue Software in partnership with Sophos. However, antivirus company Avira has joined forces with digital TV testing company Labwise to work on software to protect against potential attacks. The privacy policy for Samsung's Smart TVs has been called Orwellian (a reference to George Orwell and the dystopian world of constant surveillance he depicted in 1984), and compared to Telescreens because of eavesdropping concerns.

Hackers have misused smart TVs' capabilities, such as access to the source code of applications and their unsecured connections to the Internet. Passwords, IP address data, and credit card information can be accessed by hackers, and even by companies for advertising; one company caught doing so is Vizio. The confidential documents codenamed Vault 7, dated from 2013 to 2016, include details of the CIA's software capabilities, such as the ability to compromise smart TVs.

Restriction of access

Internet websites can block smart TV access to content at will, or tailor the content that will be received by each platform. Google TV-enabled devices were blocked by NBC, ABC, CBS, and Hulu from accessing their Web content from the launch of Google TV in October 2010. Google TV devices were also blocked from accessing any programs offered by Viacom's subsidiaries.

Reliability

High-end Samsung Smart TVs stopped working for at least seven days after a software update. Application providers rarely upgrade Smart TV apps to the latest version; for example, Netflix does not support older TV versions with new Netflix upgrades.

Streaming television

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Streaming_television
 
A screenshot from a webcast.
 
Streaming television (also known as streaming TV, online TV, Internet TV, or TV streaming) is the digital distribution of television content, such as TV shows, as streaming video delivered over the Internet. Streaming TV stands in contrast to dedicated terrestrial television delivered by over-the-air aerial systems, cable television, and/or satellite television systems.

History

Up until the 1990s, it was not thought possible that a television programme could be squeezed into the limited telecommunication bandwidth of a copper telephone cable to provide a streaming service of acceptable quality, as the required bandwidth of a digital television signal was around 200 Mbps, which was 2,000 times greater than the bandwidth of a speech signal over a copper telephone wire. Streaming services were only made possible as a result of two major technological developments: discrete cosine transform (DCT) video compression and asymmetric digital subscriber line (ADSL) data transmission. DCT is a lossy compression technique that was first proposed by Nasir Ahmed in 1972, and was later adapted into a motion-compensated DCT algorithm for video coding standards such as the H.26x formats from 1988 onwards and the MPEG formats from 1991 onwards. Motion-compensated DCT video compression significantly reduced the amount of bandwidth required for a television signal, while at the same time ADSL increased the bandwidth of data that could be sent over a copper telephone wire. ADSL increased the bandwidth of a telephone line from around 100 kbps to 2 Mbps, while DCT compression reduced the required bandwidth of a digital television signal from around 200 Mbps down to about 2 Mbps. The combination of DCT and ADSL technologies made it possible to practically implement streaming services at around 2 Mbps bandwidth.
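
To make the compression step concrete, the core of the codecs named above is the DCT applied to small blocks of samples. The sketch below is a naive one-dimensional 8-point DCT-II in Python, shown only to illustrate the transform; real encoders use fast two-dimensional versions on 8x8 pixel blocks, followed by quantization and entropy coding, which is where the roughly hundred-fold bandwidth reduction comes from.

import math

def dct_ii(block):
    """Naive (unnormalized) DCT-II of a short block of samples."""
    n = len(block)
    return [
        sum(block[i] * math.cos(math.pi / n * (i + 0.5) * k) for i in range(n))
        for k in range(n)
    ]

samples = [52, 55, 61, 66, 70, 61, 64, 73]   # one row of pixel values
coeffs = dct_ii(samples)
# Most of the signal energy lands in the first few coefficients, which is what
# lets an encoder discard or coarsely quantize the rest.
print([round(c, 1) for c in coeffs])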

The mid-2000s were the beginning of television programs becoming available via the Internet. The video-sharing site YouTube was launched in early 2005, allowing users to share illegally posted television programs. YouTube co-founder Jawed Karim said the inspiration for YouTube first came from Janet Jackson's role in the 2004 Super Bowl incident, when her breast was exposed during her performance, and later from the 2004 Indian Ocean tsunami. Karim could not easily find video clips of either event online, which led to the idea of a video sharing site. Apple's iTunes service also began offering select television programs and series in 2005, available for download after direct payment. A few years later, television networks and other independent services began creating sites where shows and programs could be streamed online. Amazon Video began in the United States as Amazon Unbox in 2006, but did not launch worldwide until 2016. Netflix, a website originally created for DVD rentals and sales, began providing streaming content in 2007. In 2008 Hulu, owned by NBC and Fox, was launched, followed by tv.com in 2009, owned by CBS. Digital media players also began to become available to the public during this time. The first generation Apple TV was released in 2007 and in 2008 the first generation Roku streaming device was announced.

Smart TVs took over the television market after 2010 and continue to partner with new providers to bring streaming video to even more users. As of 2015 smart TVs are the only type of middle to high-end television being produced. Amazon's version of a digital media player, Amazon Fire TV, was not offered to the public until 2014. These digital media players have continued to be updated and new generations released. Access to television programming has evolved from computer and television access, to also include mobile devices such as smartphones and tablet computers. Apps for mobile devices started to become available via app stores in 2008. These mobile apps allow users to view content on mobile devices that support the apps. After 2010 traditional cable and satellite television providers began to offer services such as Sling TV, owned by Dish Network, which was unveiled in January 2015. DirecTV, another satellite television provider, launched its own streaming service, DirecTV Now, in 2016. In 2017 YouTube launched YouTube TV, a streaming service that allows users to watch live television programs from popular cable or network channels, and record shows to stream anywhere, anytime. As of 2017, 28% of US adults cite streaming services as their main means for watching television, and 61% of those ages 18 to 29 cite it as their main method. As of 2018, Netflix is the world's largest streaming TV network with 117 million paid subscribers, and also the world's largest Internet media and entertainment company by revenue and market capitalization.

Technology

The Hybrid Broadcast Broadband TV (HbbTV) consortium of industry companies (such as SES, Humax, Philips, and ANT Software) is currently promoting and establishing an open European standard for hybrid set-top boxes for the reception of broadcast and broadband digital television and multimedia applications with a single-user interface.

As of the 2010s, providers of Internet television use various technologies to provide VoD systems and live streaming. BBC iPlayer makes use of the Adobe Flash Player to provide streaming-video clips and other software provided by Adobe for its download service. CNBC, Bloomberg Television and Showtime use live-streaming services from BitGravity to stream live television to paid subscribers using the HTTP protocol. 

BBC iPlayer originally incorporated peer-to-peer streaming, but moved towards centralized distribution for its video streaming services. BBC executive Anthony Rose cited network performance as an important factor in the decision, as well as unhappiness among consumers whose own network bandwidth was being consumed for transmitting content to other viewers.

Samsung TV has also announced their plans to provide streaming options including 3D Video on Demand through their Explore 3D service.

Access control

Some streaming services incorporate digital rights management. The W3C made the controversial decision to adopt Encrypted Media Extensions due in large part to motivations to provide copy protection for streaming content. Sky Go has software that is provided by Microsoft to prevent content being copied.

Additionally, BBC iPlayer makes use of a parental control system giving parents the option to "lock" content, meaning that a password would have to be used to access it. Flagging systems can be used to warn a user that content may be certified or that it is intended for viewing post-watershed. Honour systems are also used where users are asked for their dates of birth or age to verify if they are able to view certain content.

IPTV

IPTV delivers television content using signals based on the Internet protocol (IP). As described above, "Internet television" is "over-the-top technology" (OTT), delivered through the open, unmanaged Internet with the "last-mile" telecom company acting only as the Internet service provider (ISP). Both IPTV and OTT use the Internet protocol over a packet-switched network to transmit data, but IPTV operates in a closed system: a dedicated, managed network controlled by the local cable, satellite, telephone, or fiber-optic company. In its simplest form, IPTV simply replaces traditional circuit-switched analog or digital television channels with digital channels which happen to use packet-switched transmission. In both the old and new systems, subscribers have set-top boxes or other customer-premises equipment that communicates directly over company-owned or dedicated leased lines with central-office servers. Packets never travel over the public Internet, so the television provider can guarantee enough local bandwidth for each customer's needs.

The Internet protocol is a cheap, standardized way to enable two-way communication and simultaneously provide different data (e.g., TV-show files, email, Web browsing) to different customers. This supports DVR-like features for time shifting television: for example, to catch up on a TV show that was broadcast hours or days ago, or to replay the current TV show from its beginning. It also supports video on demand—browsing a catalog of videos (such as movies or television shows) which might be unrelated to the company's scheduled broadcasts.

IPTV has an ongoing standardization process (for example, at the European Telecommunications Standards Institute). 


Comparison of IPTV and over-the-top (OTT) technology:
  • Content provider: local telecom (IPTV); studio, channel, or independent service (OTT)
  • Transmission network: the local telecom's dedicated, owned or leased network (IPTV); the public Internet plus the local telecom (OTT)
  • Receiver: set-top box supplied by the local telecom (IPTV); box, stick, TV, computer, or mobile device purchased by the consumer (OTT)
  • Display device: screen provided by the consumer in both cases
  • Examples: AT&T U-verse, Bell Fibe TV, and Verizon Fios (IPTV service now discontinued) for IPTV; video on demand services like fuboTV, PlayStation Vue, Sky Go, YouTube, Netflix, Amazon, DittoTV, YuppTV, Lovefilm, BBC iPlayer, Hulu, Sony Liv, myTV, Now TV, Emagine, SlingTV, and KlowdTV for OTT

Streaming quality

Streaming quality is the quality of image and audio transmission from the servers of the distributor to the user's screen. High-definition video (720p+) and later standards require higher bandwidth and faster connection speeds than previous standards, because they carry higher spatial resolution image content. In addition, transmission packet loss and latency caused by network impairments and insufficient bandwidth degrade replay quality. Decoding errors may manifest themselves as video breakup and macroblocking. The generally accepted download rate for streaming high-definition video encoded in H.264 is 3500 kbit/s, whereas standard-definition television can range from 500 to 1500 kbit/s depending on the resolution on screen. In the UK, the BBC iPlayer deals with the largest amount of traffic, yet it offers HD content along with SD content. As more people have gained broadband connections which can handle streaming HD video over the Internet, the BBC iPlayer has tried to keep up with demand and pace. However, as streaming HD video takes around 1.5 GB of data per hour of video, the BBC has had to invest a lot of money collected from License Fee payers to implement this on a large scale.
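
The per-hour figure quoted above can be cross-checked against the HD bitrate given in the same paragraph: a 3500 kbit/s stream watched for an hour works out to roughly 1.6 GB, consistent with the "around 1.5 GB per hour" estimate.

# Rough cross-check of the figures above: one hour of H.264 HD at 3500 kbit/s.
bitrate_kbit_s = 3500
seconds = 3600

bits = bitrate_kbit_s * 1000 * seconds    # 12.6 billion bits
gigabytes = bits / 8 / 1e9                # decimal gigabytes
print(round(gigabytes, 2))                # ~1.58 GB per hour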

For users who do not have the bandwidth to stream HD video or even high-SD video, which requires 1500 kbit/s, the BBC iPlayer offers lower bitrate streams which in turn lead to lower video quality. This makes use of an adaptive bitrate stream so that if the user's bandwidth suddenly drops, iPlayer will lower its streaming rate to compensate. A diagnostic tool offered on the BBC iPlayer site measures a user's streaming capabilities and bandwidth.

Usage

Internet television is common in most US households as of the mid 2010s. About one in four new televisions being sold is now a smart TV.

Considering the popularity of smart TVs and devices such as the Roku and Chromecast, much of the US public can watch television via the Internet. Internet-only channels are now established enough to feature some Emmy-nominated shows, such as Netflix's House of Cards. Many networks also distribute their shows the next day to streaming providers such as Hulu. Some networks use a proprietary system; the BBC, for example, uses its iPlayer format. This has resulted in bandwidth demands increasing to the point of causing issues for some networks: it was reported in February 2014 that Verizon was having issues coping with the demand placed on its network infrastructure. Until long-term bandwidth issues are worked out and regulation such as net neutrality is settled, Internet television's push to HDTV may start to hinder growth.

Aereo was launched in March 2012 in New York City (and subsequently stopped from broadcasting in June 2014). It streamed network TV only to New York customers over the Internet. Broadcasters filed lawsuits against Aereo, because Aereo captured broadcast signals and streamed the content to Aereo's customers without paying broadcasters. In mid-July 2012, a federal judge sided with the Aereo start-up. Aereo planned to expand to every major metropolitan area by the end of 2013. The Supreme Court ruled against Aereo June 24, 2014.

Market competitors

Many providers of Internet television services exist—including conventional television stations that have taken advantage of the Internet as a way to continue showing television shows after they have been broadcast, often advertised as "on-demand" and "catch-up" services. Today, almost every major broadcaster around the world is operating an Internet television platform. Examples include the BBC, which introduced the BBC iPlayer on 25 June 2008 as an extension to its "RadioPlayer" and already existing streamed video-clip content, and Channel 4 that launched 4oD ("4 on Demand") (now All 4) in November 2006 allowing users to watch recently shown content. Most Internet television services allow users to view content free of charge; however, some content is for a fee.

Since 2012 around 200 over-the-top (OTT) platforms providing streamed and downloadable content have emerged. Investment by Netflix in new original content for its OTT platform reached $13bn in 2018.

Broadcasting rights

Broadcasting rights vary from country to country and even within provinces of countries. These rights govern the distribution of copyrighted content and media and allow the sole distribution of that content at any one time. An example of content only being aired in certain countries is BBC iPlayer. The BBC checks a user's IP address to make sure that only users located in the UK can stream content from the BBC. The BBC only allows free use of their product for users within the UK as those users have paid for a television license that funds part of the BBC. This IP address check is not foolproof as the user may be accessing the BBC website through a VPN or proxy server.  Broadcasting rights can also be restricted to allowing a broadcaster rights to distribute that content for a limited time. Channel 4's online service All 4 can only stream shows created in the US by companies such as HBO for thirty days after they are aired on one of the Channel 4 group channels. This is to boost DVD sales for the companies who produce that media. 

Some companies pay very large amounts for broadcasting rights, with sports and US sitcoms usually fetching the highest price from UK-based broadcasters. A trend among major content producers in North America is the use of the "TV Everywhere" system. Especially for live content, the TV Everywhere system restricts viewership of a video feed to select Internet service providers, usually cable television companies that pay a retransmission consent or subscription fee to the content producer. This often has the negative effect of making the availability of content dependent upon the provider, with the consumer having little or no choice on whether they receive the product.

Profits and costs

With the advent of broadband Internet connections, multiple streaming providers have come onto the market in the last couple of years. The main providers are Netflix, Hulu and Amazon. Some of these providers, such as Hulu, advertise and charge a monthly fee. Others, such as Netflix and Amazon, charge users a monthly fee and have no commercials. Netflix is the largest provider; it has over 43 million members and its membership numbers are growing. The rise of internet TV has resulted in cable companies losing customers to a new kind of customer called "cord cutters". Cord cutters are consumers who are cancelling their cable TV or satellite TV subscriptions and choosing instead to stream TV shows, movies and other content via the Internet. Cord cutters are forming communities. With the increasing availability of video sharing websites (e.g., YouTube) and streaming services, there is an alternative to cable and satellite television subscriptions. Cord cutters tend to be younger people.

Multicast

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Multicast
 

In computer networking, multicast is group communication where data transmission is addressed to a group of destination computers simultaneously. Multicast can be one-to-many or many-to-many distribution. Multicast should not be confused with physical layer point-to-multipoint communication.

Group communication may either be application layer multicast or network assisted multicast, where the latter makes it possible for the source to efficiently send to the group in a single transmission. Copies are automatically created in other network elements, such as routers, switches and cellular network base stations, but only to network segments that currently contain members of the group. Network assisted multicast may be implemented at the data link layer using one-to-many addressing and switching such as Ethernet multicast addressing, Asynchronous Transfer Mode (ATM), point-to-multipoint virtual circuits (P2MP) or Infiniband multicast. Network assisted multicast may also be implemented at the Internet layer using IP multicast. In IP multicast the implementation of the multicast concept occurs at the IP routing level, where routers create optimal distribution paths for datagrams sent to a multicast destination address.

Multicast is often employed in Internet Protocol (IP) applications of streaming media, such as IPTV and multipoint videoconferencing.

Ethernet multicast

Ethernet frames with a value of 1 in the least-significant bit of the first octet of the destination address are treated as multicast frames and are flooded to all points on the network. This mechanism constitutes multicast at the data link layer. This mechanism is used by IP multicast to achieve one-to-many transmission for IP on Ethernet networks. Modern Ethernet controllers filter received packets to reduce CPU load, by looking up the hash of a multicast destination address in a table, initialized by software, which controls whether a multicast packet is dropped or fully received.
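
As a small illustration of the rule just described, the multicast bit of an Ethernet destination address can be tested directly; the Python sketch below checks the least-significant bit of the first octet of a MAC address (the addresses are arbitrary examples).

def is_multicast_mac(mac):
    """True if the least-significant bit of the first octet is set."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x01)

print(is_multicast_mac("01:00:5e:7f:00:01"))   # True  (IPv4 multicast mapping)
print(is_multicast_mac("ff:ff:ff:ff:ff:ff"))   # True  (broadcast is a special case)
print(is_multicast_mac("3c:52:82:aa:bb:cc"))   # False (ordinary unicast address)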

IP multicast

IP multicast is a technique for one-to-many communication over an IP network. The destination nodes send Internet Group Management Protocol join and leave messages, for example in the case of IPTV when the user changes from one TV channel to another. IP multicast scales to a larger receiver population by not requiring prior knowledge of who or how many receivers there are. Multicast uses network infrastructure efficiently by requiring the source to send a packet only once, even if it needs to be delivered to a large number of receivers. The nodes in the network take care of replicating the packet to reach multiple receivers only when necessary.
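
A minimal sketch of an IP multicast receiver, assuming a POSIX-style host, is shown below: joining the group triggers the IGMP membership report mentioned above, after which routers forward the group's traffic onto the receiver's segment. The group address and port are arbitrary examples (239.0.0.0/8 is the administratively scoped range).

import socket
import struct

GROUP, PORT = "239.1.2.3", 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the group on the default interface; the kernel sends the IGMP report.
membership = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

data, sender = sock.recvfrom(1500)
print("received", len(data), "bytes from", sender)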

The most common transport layer protocol to use multicast addressing is User Datagram Protocol (UDP). By its nature, UDP is not reliable—messages may be lost or delivered out of order. By adding loss detection and re-transmission mechanisms, reliable multicast has been implemented on top of UDP or IP by various middleware products, e.g. those that implement the Real-Time Publish-Subscribe (RTPS) Protocol of the Object Management Group (OMG) Data Distribution Service (DDS) standard, as well as by special transport protocols such as Pragmatic General Multicast (PGM). 

Application layer multicast

Application layer multicast overlay services are not based on IP multicast or data link layer multicast. Instead they use multiple unicast transmissions to simulate a multicast. These services are designed for application-level group communication. Internet Relay Chat (IRC) implements a single spanning tree across its overlay network for all conference groups. The lesser known PSYC technology uses custom multicast strategies per conference. Some peer-to-peer technologies employ the multicast concept known as peercasting when distributing content to multiple recipients.
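
The difference from network-assisted multicast can be seen in a few lines: an application-layer "multicast" sender simply repeats the same unicast transmission to every known group member. The sketch below uses UDP and placeholder addresses from the documentation range.

import socket

# Placeholder group members (192.0.2.0/24 is reserved for documentation).
GROUP_MEMBERS = [("192.0.2.10", 6000), ("192.0.2.11", 6000), ("192.0.2.12", 6000)]

def send_to_group(payload):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for member in GROUP_MEMBERS:
            sock.sendto(payload, member)   # one copy per receiver, unlike IP multicast

send_to_group(b"hello, group")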

Explicit multi-unicast (Xcast) is an alternate multicast strategy that includes addresses of all intended destinations within each packet. As such, given maximum transmission unit limitations, Xcast cannot be used for multicast groups with many destinations. The Xcast model generally assumes that stations participating in the communication are known ahead of time, so that distribution trees can be generated and resources allocated by network elements in advance of actual data traffic.

Multicast over wireless networks and cable-TV

Wireless communications (with the exception of point-to-point radio links using directional antennas) are inherently broadcast media. However, the communication service provided may be unicast, multicast, or broadcast, depending on whether the data is addressed to one, to a group, or to all receivers in the covered network, respectively.

In digital television, the concept of multicast service sometimes is used to refer to content protection by broadcast encryption, i.e. encrypted content over a simplex broadcast channel only addressed to paying viewers (pay television). In this case, data is broadcast (or distributed) to all receivers, but only addressed to a specific group. 

The concept of interactive multicast, for example using IP multicast, may be used over TV broadcast networks to improve efficiency, offer more TV programs, or reduce the required spectrum. Interactive multicast implies that TV programs are sent only over transmitters where there are viewers, and that only the most popular programs are transmitted. It relies on an additional interaction channel (a back-channel or return channel), where user equipment may send join and leave messages when the user changes TV channel. Interactive multicast has been suggested as an efficient transmission scheme in DVB-H and DVB-T2 terrestrial digital television systems. A similar concept is switched broadcast over cable-TV networks, where only the currently most popular content is delivered in the cable-TV network. Scalable video multicast is an application of interactive multicast, where a subset of the viewers receive additional data for high-resolution video.

TV gateways convert satellite (DVB-S, DVB-S2), cable (DVB-C, DVB-C2) and terrestrial television (DVB-T, DVB-T2) signals to IP for distribution using unicast and multicast in home, hospitality and enterprise applications.

Another similar concept is Cell-TV, which implies TV distribution over 3G cellular networks using the network-assisted multicasting offered by the Multimedia Broadcast Multicast Service (MBMS), or over 4G/LTE cellular networks with the eMBMS (enhanced MBMS) service.

Internet protocol suite (updated)

From Wikipedia, the free encyclopedia

The Internet protocol suite is the conceptual model and set of communications protocols used in the Internet and similar computer networks. It is commonly known as TCP/IP because the foundational protocols in the suite are the Transmission Control Protocol (TCP) and the Internet Protocol (IP). During its development, versions of it were known as the Department of Defense (DoD) model because the development of the networking method was funded by the United States Department of Defense through DARPA. Its implementation is a protocol stack.

The Internet protocol suite provides end-to-end data communication specifying how data should be packetized, addressed, transmitted, routed, and received. This functionality is organized into four abstraction layers, which classify all related protocols according to the scope of networking involved. From lowest to highest, the layers are the link layer, containing communication methods for data that remains within a single network segment (link); the internet layer, providing internetworking between independent networks; the transport layer, handling host-to-host communication; and the application layer, providing process-to-process data exchange for applications.

The technical standards underlying the Internet protocol suite and its constituent protocols are maintained by the Internet Engineering Task Force (IETF). The Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems.

History


Early research

Diagram of the first internetworked connection
 
An SRI International Packet Radio Van, used for the first three-way internetworked transmission.

The Internet protocol suite resulted from research and development conducted by the Defense Advanced Research Projects Agency (DARPA) in the late 1960s. After initiating the pioneering ARPANET in 1969, DARPA started work on a number of other data transmission technologies. In 1972, Robert E. Kahn joined the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In the spring of 1973, Vinton Cerf, who helped develop the existing ARPANET Network Control Program (NCP) protocol, joined Kahn to work on open-architecture interconnection models with the goal of designing the next protocol generation for the ARPANET.

By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, in which the differences between local network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability, as in the existing ARPANET protocols, this function was delegated to the hosts. Cerf credits Hubert Zimmermann, Gérard Le Lann  and Louis Pouzin, designer of the CYCLADES network, with important influences on this design. The new protocol was implemented as the Transmission Control Program in 1974.

Initially, the Transmission Control Program managed both datagram transmissions and routing, but as experience with the protocol grew, collaborators recommended division of functionality into layers of distinct protocols. Advocates included Jonathan Postel of the University of Southern California's Information Sciences Institute, who edited the Request for Comments (RFCs), the technical and strategic document series that has both documented and catalyzed Internet development, and the research group of Robert Metcalfe at Xerox PARC. Postel stated, "We are screwing up in our design of Internet protocols by violating the principle of layering." Encapsulation of different mechanisms was intended to create an environment where the upper layers could access only what was needed from the lower layers. A monolithic design would be inflexible and lead to scalability issues. The Transmission Control Program was split into two distinct protocols, the Internet Protocol as a connectionless layer and the Transmission Control Protocol as a reliable connection-oriented service.
The design of the network included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. This design is known as the end-to-end principle. Using this design, it became possible to connect other networks to the ARPANET that used the same principle, irrespective of other local characteristics, thereby solving Kahn's initial internetworking problem. A popular expression is that TCP/IP, the eventual product of Cerf and Kahn's work, can run over "two tin cans and a string." Years later, as a joke, the IP over Avian Carriers formal protocol specification was created and successfully tested.

DARPA contracted with BBN Technologies, Stanford University, and University College London to develop operational versions of the protocol on several hardware platforms. During development of the protocol, the version number of the packet routing layer progressed from version 1 to version 4, the latter of which was installed in the ARPANET in 1983. It became known as Internet Protocol version 4 (IPv4), the protocol that is still in use in the Internet alongside its current successor, Internet Protocol version 6 (IPv6).

Early implementation

In 1975, a two-network TCP/IP communications test was performed between Stanford and University College London. In November 1977, a three-network TCP/IP test was conducted between sites in the US, the UK, and Norway. Several other TCP/IP prototypes were developed at multiple research centers between 1978 and 1983.

A computer called a router is provided with an interface to each network. It forwards network packets back and forth between them. Originally a router was called a gateway, but the term was changed to avoid confusion with other types of gateways.

Adoption

In March 1982, the US Department of Defense declared TCP/IP as the standard for all military computer networking. In the same year, Peter T. Kirstein's research group at University College London adopted the protocol.

The migration of the ARPANET to TCP/IP was officially completed on flag day January 1, 1983, when the new protocols were permanently activated.

In 1985, the Internet Advisory Board (later Internet Architecture Board) held a three-day TCP/IP workshop for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing commercial use. In 1985, the first Interop conference focused on network interoperability by broader adoption of TCP/IP. The conference was founded by Dan Lynch, an early Internet activist. From the beginning, large corporations, such as IBM and DEC, attended the meeting.

IBM, AT&T and DEC were the first major corporations to adopt TCP/IP, this despite having competing proprietary protocols. In IBM, from 1984, Barry Appelman's group did TCP/IP development. They navigated the corporate politics to get a stream of TCP/IP products for various IBM systems, including MVS, VM, and OS/2. At the same time, several smaller companies, such as FTP Software and the Wollongong Group, began offering TCP/IP stacks for DOS and Microsoft Windows. The first VM/CMS TCP/IP stack came from the University of Wisconsin.

Some of the early TCP/IP stacks were written single-handedly by a few programmers. Jay Elinsky and Oleg Vishnepolsky of IBM Research wrote TCP/IP stacks for VM/CMS and OS/2, respectively. In 1984 Donald Gillies at MIT wrote ntcp, a multi-connection TCP which ran atop the IP/PacketDriver layer maintained by John Romkey at MIT in 1983-4. Romkey leveraged this TCP in 1986 when FTP Software was founded. Starting in 1985, Phil Karn created a multi-connection TCP application for ham radio systems (KA9Q TCP).

The spread of TCP/IP was fueled further in June 1989, when the University of California, Berkeley agreed to place the TCP/IP code developed for BSD UNIX into the public domain. Various corporate vendors, including IBM, included this code in commercial TCP/IP software releases. Microsoft released a native TCP/IP stack in Windows 95. This event helped cement TCP/IP's dominance over other protocols on Microsoft-based networks, which included IBM Systems Network Architecture (SNA), and on other platforms such as Digital Equipment Corporation's DECnet, Open Systems Interconnection (OSI), and Xerox Network Systems (XNS).

The British academic network JANET converted to TCP/IP in 1991.

Formal specification and standards

The technical standards underlying the Internet protocol suite and its constituent protocols have been delegated to the Internet Engineering Task Force (IETF). 

The characteristic architecture of the Internet Protocol Suite is its broad division into operating scopes for the protocols that constitute its core functionality. The defining specification of the suite is RFC 1122, which broadly outlines four abstraction layers. These have stood the test of time, as the IETF has never modified this structure. As such a model of networking, the Internet Protocol Suite predates the OSI model, a more comprehensive reference framework for general networking systems. 

Key architectural principles


Conceptual data flow in a simple network topology of two hosts (A and B) connected by a link between their respective routers. The application on each host executes read and write operations as if the processes were directly connected to each other by some kind of data pipe. After establishment of this pipe, most details of the communication are hidden from each process, as the underlying principles of communication are implemented in the lower protocol layers. In analogy, at the transport layer the communication appears as host-to-host, without knowledge of the application data structures and the connecting routers, while at the internetworking layer, individual network boundaries are traversed at each router.
 
Encapsulation of application data descending through the layers described in RFC 1122

The end-to-end principle has evolved over time. Its original expression put the maintenance of state and overall intelligence at the edges, and assumed the Internet that connected the edges retained no state and concentrated on speed and simplicity. Real-world needs for firewalls, network address translators, web content caches and the like have forced changes in this principle.

The robustness principle states: "In general, an implementation must be conservative in its sending behavior, and liberal in its receiving behavior. That is, it must be careful to send well-formed datagrams, but must accept any datagram that it can interpret (e.g., not object to technical errors where the meaning is still clear)." "The second part of the principle is almost as important: software on other hosts may contain deficiencies that make it unwise to exploit legal but obscure protocol features."

Encapsulation is used to provide abstraction of protocols and services. Encapsulation is usually aligned with the division of the protocol suite into layers of general functionality. In general, an application (the highest level of the model) uses a set of protocols to send its data down the layers. The data is further encapsulated at each level. 
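
A highly simplified sketch of that nesting is shown below: application data is wrapped in a UDP header, which is wrapped in an IPv4 header, which would in turn be framed by the link layer. Checksums are left as zero purely for brevity, the addresses are documentation examples, and a real host's kernel of course performs all of this itself.

import struct

app_data = b"GET / HTTP/1.1\r\n\r\n"            # application layer payload

# Transport layer: 8-byte UDP header (source port, destination port, length, checksum).
src_port, dst_port = 49152, 80
udp_segment = struct.pack("!HHHH", src_port, dst_port, 8 + len(app_data), 0) + app_data

# Internet layer: 20-byte IPv4 header around the UDP segment (protocol 17 = UDP).
version_ihl = (4 << 4) | 5
total_length = 20 + len(udp_segment)
ip_header = struct.pack("!BBHHHBBH4s4s",
                        version_ihl, 0, total_length,
                        0, 0,                     # identification, flags/fragment offset
                        64, 17, 0,                # TTL, protocol, header checksum (omitted)
                        bytes([192, 0, 2, 1]), bytes([192, 0, 2, 2]))
ip_packet = ip_header + udp_segment

print(len(ip_packet), "bytes, ready to be framed by the link layer")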

An early architectural document, RFC 1122, emphasizes architectural principles over layering. RFC 1122, titled Host Requirements, is structured in paragraphs referring to layers, but the document refers to many other architectural principles and does not emphasize layering. It loosely defines a four-layer model, with the layers having names, not numbers, as follows:
  • The application layer is the scope within which applications, or processes, create user data and communicate this data to other applications on another or the same host. The applications make use of the services provided by the underlying lower layers, especially the transport layer which provides reliable or unreliable pipes to other processes. The communications partners are characterized by the application architecture, such as the client-server model and peer-to-peer networking. This is the layer in which all application protocols, such as SMTP, FTP, SSH, HTTP, operate. Processes are addressed via ports which essentially represent services.
  • The transport layer performs host-to-host communications on either the local network or remote networks separated by routers.[28] It provides a channel for the communication needs of applications. UDP is the basic transport layer protocol, providing an unreliable connectionless datagram service. The Transmission Control Protocol provides flow-control, connection establishment, and reliable transmission of data.
  • The internet layer exchanges datagrams across network boundaries. It provides a uniform networking interface that hides the actual topology (layout) of the underlying network connections. It is therefore also the layer that establishes internetworking. Indeed, it defines and establishes the Internet. This layer defines the addressing and routing structures used for the TCP/IP protocol suite. The primary protocol in this scope is the Internet Protocol, which defines IP addresses. Its function in routing is to transport datagrams to the next host, functioning as an IP router, that has the connectivity to a network closer to the final data destination.
  • The link layer defines the networking methods within the scope of the local network link on which hosts communicate without intervening routers. This layer includes the protocols used to describe the local network topology and the interfaces needed to affect transmission of Internet layer datagrams to next-neighbor hosts.

Link layer

The protocols of the link layer operate within the scope of the local network connection to which a host is attached. This regime is called the link in TCP/IP parlance and is the lowest component layer of the suite. The link includes all hosts accessible without traversing a router. The size of the link is therefore determined by the networking hardware design. In principle, TCP/IP is designed to be hardware independent and may be implemented on top of virtually any link-layer technology. This includes not only hardware implementations, but also virtual link layers such as virtual private networks and networking tunnels.

The link layer is used to move packets between the Internet layer interfaces of two different hosts on the same link. The processes of transmitting and receiving packets on the link can be controlled both in the software device driver for the network card, as well as in firmware or by specialized chipsets. These perform functions, such as framing, to prepare the Internet layer packets for transmission, and finally transmit the frames over a physical medium. The TCP/IP model includes specifications of translating the network addressing methods used in the Internet Protocol to link layer addresses, such as media access control (MAC) addresses. All other aspects below that level, however, are implicitly assumed to exist, and are not explicitly defined in the TCP/IP model. 

The link layer in the TCP/IP model has corresponding functions in Layer 2 of the Open Systems Interconnection (OSI) model. 

Internet layer

The internet layer has the responsibility of sending packets across potentially multiple networks. Internetworking requires sending data from the source network to the destination network. This process is called routing.

The Internet Protocol performs two basic functions:
  • Host addressing and identification: This is accomplished with a hierarchical IP addressing system.
  • Packet routing: This is the basic task of sending packets of data (datagrams) from source to destination by forwarding them to the next network router closer to the final destination.
The internet layer is not only agnostic of data structures at the transport layer, but it also does not distinguish between operation of the various transport layer protocols. IP carries data for a variety of different upper layer protocols. These protocols are each identified by a unique protocol number: for example, Internet Control Message Protocol (ICMP) and Internet Group Management Protocol (IGMP) are protocols 1 and 2, respectively.

Some of the protocols carried by IP, such as ICMP which is used to transmit diagnostic information, and IGMP which is used to manage IP Multicast data, are layered on top of IP but perform internetworking functions. This illustrates the differences in the architecture of the TCP/IP stack of the Internet and the OSI model. The TCP/IP model's internet layer corresponds to layer three of the Open Systems Interconnection (OSI) model, where it is referred to as the network layer. 

The internet layer provides an unreliable datagram transmission facility between hosts located on potentially different IP networks by forwarding the transport layer datagrams to an appropriate next-hop router for further relaying to its destination. With this functionality, the internet layer makes possible internetworking, the interworking of different IP networks, and it essentially establishes the Internet. The Internet Protocol is the principal component of the internet layer, and it defines two addressing systems to identify network hosts' computers, and to locate them on the network. The original address system of the ARPANET and its successor, the Internet, is Internet Protocol version 4 (IPv4). It uses a 32-bit IP address and is therefore capable of identifying approximately four billion hosts. This limitation was eliminated in 1998 by the standardization of Internet Protocol version 6 (IPv6) which uses 128-bit addresses. IPv6 production implementations emerged in approximately 2006.
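
The address-space figures above follow directly from the address widths: a 32-bit field gives 2^32, roughly 4.3 billion, distinct addresses, while a 128-bit field gives about 3.4 x 10^38.

# Address space sizes implied by the 32-bit and 128-bit address fields.
print(2 ** 32)          # 4294967296 -> roughly four billion IPv4 addresses
print(float(2 ** 128))  # about 3.4e38 possible IPv6 addresses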

Transport layer

The transport layer establishes basic data channels that applications use for task-specific data exchange. The layer establishes host-to-host connectivity, meaning it provides end-to-end message transfer services that are independent of the structure of user data and the logistics of exchanging information for any particular purpose, and independent of the underlying network. The protocols in this layer may provide error control, segmentation, flow control, congestion control, and application addressing (port numbers). End-to-end message transmission or connecting applications at the transport layer can be categorized as either connection-oriented, implemented in TCP, or connectionless, implemented in UDP.

For the purpose of providing process-specific transmission channels for applications, the layer establishes the concept of the network port. This is a numbered logical construct allocated specifically for each of the communication channels an application needs. For many types of services, these port numbers have been standardized so that client computers may address specific services of a server computer without the involvement of service announcements or directory services.

Because IP provides only a best effort delivery, some transport layer protocols offer reliability. However, IP can run over a reliable data link protocol such as the High-Level Data Link Control (HDLC).

For example, the TCP is a connection-oriented protocol that addresses numerous reliability issues in providing a reliable byte stream:
  • data arrives in-order
  • data has minimal error (i.e., correctness)
  • duplicate data is discarded
  • lost or discarded packets are resent
  • includes traffic congestion control
The newer Stream Control Transmission Protocol (SCTP) is also a reliable, connection-oriented transport mechanism. It is message-stream-oriented—not byte-stream-oriented like TCP—and provides multiple streams multiplexed over a single connection. It also provides multi-homing support, in which a connection end can be represented by multiple IP addresses (representing multiple physical interfaces), such that if one fails, the connection is not interrupted. It was developed initially for telephony applications (to transport SS7 over IP), but can also be used for other applications.

The User Datagram Protocol is a connectionless datagram protocol. Like IP, it is a best effort, "unreliable" protocol. Reliability is addressed through error detection using a weak checksum algorithm. UDP is typically used for applications such as streaming media (audio, video, Voice over IP etc.) where on-time arrival is more important than reliability, or for simple query/response applications like DNS lookups, where the overhead of setting up a reliable connection is disproportionately large. Real-time Transport Protocol (RTP) is a datagram protocol that is designed for real-time data such as streaming audio and video.
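
The "weak checksum" referred to above is the 16-bit ones'-complement Internet checksum; it detects many, but not all, corruption patterns and provides no correction. A sketch of the core sum is given below (the real UDP checksum also covers a pseudo-header containing the IP addresses, omitted here).

def internet_checksum(data):
    """16-bit ones'-complement sum with end-around carry, then complemented."""
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

print(hex(internet_checksum(b"\x45\x00\x00\x28")))   # 0xbad7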

The applications at any given network address are distinguished by their TCP or UDP port. By convention, certain well-known ports are associated with specific applications.

The TCP/IP model's transport or host-to-host layer corresponds roughly to the fourth layer in the Open Systems Interconnection (OSI) model, also called the transport layer. 

Application layer

The application layer includes the protocols used by most applications for providing user services or exchanging application data over the network connections established by the lower level protocols. This may include some basic network support services such as protocols for routing and host configuration. Examples of application layer protocols include the Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), the Simple Mail Transfer Protocol (SMTP), and the Dynamic Host Configuration Protocol (DHCP). Data coded according to application layer protocols are encapsulated into transport layer protocol units (such as TCP or UDP messages), which in turn use lower layer protocols to effect actual data transfer.
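To make the encapsulation concrete, the following sketch writes a plain-text HTTP/1.1 request directly into a TCP connection; the application-layer data is simply the payload carried by the transport layer. It assumes network access and uses the reserved documentation host example.org.

    import socket

    # Application-layer data (an HTTP request) is just bytes handed to the
    # transport layer; TCP and IP carry it without interpreting its contents.
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.org\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

    with socket.create_connection(("example.org", 80)) as conn:
        conn.sendall(request)                 # HTTP encapsulated in TCP segments
        reply = b""
        while chunk := conn.recv(4096):
            reply += chunk

    print(reply.split(b"\r\n", 1)[0].decode())    # e.g. "HTTP/1.1 200 OK"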

The TCP/IP model does not consider the specifics of formatting and presenting data, and does not define additional layers between the application and transport layers as in the OSI model (presentation and session layers). Such functions are the realm of libraries and application programming interfaces.

Application layer protocols generally treat the transport layer (and lower) protocols as black boxes which provide a stable network connection across which to communicate, although the applications are usually aware of key qualities of the transport layer connection such as the end point IP addresses and port numbers. Application layer protocols are often associated with particular client-server applications, and common services have well-known port numbers reserved by the Internet Assigned Numbers Authority (IANA). For example, the HyperText Transfer Protocol uses server port 80 and Telnet uses server port 23. Clients connecting to a service usually use ephemeral ports, i.e., port numbers assigned only for the duration of the transaction at random or from a specific range configured in the application.
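The division between well-known server ports and ephemeral client ports can be observed directly: after a connection is established, the sketch below reads back the port the operating system assigned to the client side. Again, example.org and port 80 are placeholders, and the example assumes network access.

    import socket

    # Connect to a well-known server port (HTTP on 80); the local, client-side
    # port is an ephemeral one chosen by the operating system for this connection.
    with socket.create_connection(("example.org", 80)) as conn:
        local_addr, local_port = conn.getsockname()
        remote_addr, remote_port = conn.getpeername()
        print("server (well-known) port:", remote_port)    # 80
        print("client (ephemeral) port: ", local_port)     # e.g. in the 49152-65535 range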

The transport layer and lower-level layers are unconcerned with the specifics of application layer protocols. Routers and switches do not typically examine the encapsulated traffic, rather they just provide a conduit for it. However, some firewall and bandwidth throttling applications must interpret application data. An example is the Resource Reservation Protocol (RSVP). It is also sometimes necessary for network address translator (NAT) traversal to consider the application payload.

The application layer in the TCP/IP model is often compared as equivalent to a combination of the fifth (Session), sixth (Presentation), and the seventh (Application) layers of the Open Systems Interconnection (OSI) model.

Furthermore, the TCP/IP model distinguishes between user protocols and support protocols. Support protocols provide services to a system of network infrastructure. User protocols are used for actual user applications. For example, FTP is a user protocol and DNS is a support protocol.

Comparison of TCP/IP and OSI layering

The three top layers in the OSI model, i.e. the application layer, the presentation layer and the session layer, are not distinguished separately in the TCP/IP model, which only has an application layer above the transport layer. While some pure OSI protocol applications, such as X.400, also combined them, there is no requirement that a TCP/IP protocol stack must impose a monolithic architecture above the transport layer. For example, the NFS application protocol runs over the eXternal Data Representation (XDR) presentation protocol, which, in turn, runs over a protocol called Remote Procedure Call (RPC). RPC provides reliable record transmission, so it can safely use the best-effort UDP transport.

Different authors have interpreted the TCP/IP model differently, and disagree whether the link layer, or the entire TCP/IP model, covers OSI layer 1 (physical layer) issues, or whether a hardware layer is assumed below the link layer.

Several authors have attempted to incorporate the OSI model's layers 1 and 2 into the TCP/IP model, since these are commonly referred to in modern standards (for example, by IEEE and ITU). This often results in a model with five layers, where the link layer or network access layer is split into the OSI model's layers 1 and 2.

The IETF protocol development effort is not concerned with strict layering. Some of its protocols may not fit cleanly into the OSI model, although RFCs sometimes refer to it and often use the old OSI layer numbers. The IETF has repeatedly stated that Internet protocol and architecture development is not intended to be OSI-compliant. RFC 3439, addressing Internet architecture, contains a section entitled: "Layering Considered Harmful".

For example, the session and presentation layers of the OSI suite are considered to be included in the application layer of the TCP/IP suite. The functionality of the session layer can be found in protocols like HTTP and SMTP and is more evident in protocols like Telnet and the Session Initiation Protocol (SIP). Session layer functionality is also realized with the port numbering of the TCP and UDP protocols, which are part of the transport layer in the TCP/IP suite. Functions of the presentation layer are realized in the TCP/IP applications with the MIME standard in data exchange.

Conflicts are apparent also in the original OSI model, ISO 7498, when not considering the annexes to this model, e.g., the ISO 7498/4 Management Framework, or the ISO 8648 Internal Organization of the Network layer (IONL). When the IONL and Management Framework documents are considered, the ICMP and IGMP are defined as layer management protocols for the network layer. In like manner, the IONL provides a structure for "subnetwork dependent convergence facilities" such as ARP and RARP.

IETF protocols can be encapsulated recursively, as demonstrated by tunneling protocols such as Generic Routing Encapsulation (GRE). GRE uses the same mechanism that OSI uses for tunneling at the network layer. 

Implementations

The Internet protocol suite does not presume any specific hardware or software environment. It only requires that hardware and a software layer exist that are capable of sending and receiving packets on a computer network. As a result, the suite has been implemented on essentially every computing platform. A minimal implementation of TCP/IP includes the following: Internet Protocol (IP), Address Resolution Protocol (ARP), Internet Control Message Protocol (ICMP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Internet Group Management Protocol (IGMP). In addition to IP, ICMP, TCP, and UDP, Internet Protocol version 6 requires the Neighbor Discovery Protocol (NDP), ICMPv6, and Multicast Listener Discovery (MLD), and is often accompanied by an integrated IPsec security layer.

Application programmers are typically concerned only with interfaces in the application layer and often also in the transport layer, while the layers below are services provided by the TCP/IP stack in the operating system. Most IP implementations are accessible to programmers through sockets and APIs.

Unique implementations include Lightweight TCP/IP, an open source stack designed for embedded systems, and KA9Q NOS, a stack and associated protocols for amateur packet radio systems and personal computers connected via serial lines.

Microcontroller firmware in the network adapter typically handles link issues, supported by driver software in the operating system. Non-programmable analog and digital electronics are normally in charge of the physical components below the link layer, typically using an application-specific integrated circuit (ASIC) chipset for each network interface or other physical standard. High-performance routers are to a large extent based on fast non-programmable digital electronics, carrying out link level switching.

Political psychology

From Wikipedia, the free encyclopedia ...