
Saturday, June 9, 2018

Telepresence

From Wikipedia, the free encyclopedia

A telepresence videoconferencing system in 2007

Telepresence refers to a set of technologies which allow a person to feel as if they were present, to give the appearance of being present, or to have an effect, via telerobotics, at a place other than their true location.

Telepresence requires that the users' senses be provided with such stimuli as to give the feeling of being in that other location. Additionally, users may be given the ability to affect the remote location. In this case, the user's position, movements, actions, voice, etc. may be sensed, transmitted and duplicated in the remote location to bring about this effect. Therefore, information may travel in both directions between the user and the remote location.

A popular application is found in telepresence videoconferencing, the highest possible level of videotelephony. Telepresence via video deploys greater technical sophistication and improved fidelity of both sight and sound than in traditional videoconferencing. Technical advancements in mobile collaboration have also extended the capabilities of videoconferencing beyond the boardroom for use with hand-held mobile devices, enabling collaboration independent of location.

History

In a pioneering paper, the U.S. cognitive scientist Marvin Minsky attributed the development of the idea of telepresence to science fiction author Robert A. Heinlein: "My first vision of a remote-controlled economy came from Robert A. Heinlein's prophetic 1948 [sic] novel, Waldo," wrote Minsky. In his science fiction short story "Waldo" (1942), Heinlein first proposed a primitive telepresence master-slave manipulator system.

The Brother Assassin, written by Fred Saberhagen in 1969, introduced the complete concept for a telepresence master-slave humanoid system. In the novel, the concept is described as follows: "And a moment later it seemed to all his senses that he had been transported from the master down into the body of the slave-unit standing beneath it on the floor. As the control of its movements passed over to him, the slave started gradually to lean to one side, and he moved its foot to maintain balance as naturally as he moved his own. Tilting back his head, he could look up through the slave's eyes to see the master-unit, with himself inside, maintaining the same attitude on its complex suspension."


Early system for immersive telepresence (USAF, 1992 - Virtual Fixtures)

The term telepresence was coined in a 1980 article by Minsky, who outlined his vision for an adapted version of the older concept of teleoperation that focused on giving a remote participant a feeling of actually being present at a different location.[1] One of the first systems to create a fully immersive illusion of presence in a remote location was the Virtual Fixtures platform developed in 1992 at the U.S. Air Force, Armstrong Labs by inventor Louis Rosenberg. The system included stereoscopic image display from the remote environment as well as immersive touch feedback using a full upper-body exoskeleton.[2][3][4]

The first commercially successful telepresence company, Teleport (which was later renamed TeleSuite), was founded in 1993 by David Allen and Herold Williams.[5] Before TeleSuite, they ran a resort business from which the original concept emerged, because they often found businesspeople would have to cut their stays short to participate in important meetings. Their idea was to develop a technology that would allow businesspeople to attend their meetings without leaving the resorts so that they could lengthen their hotel stays.

A Tandberg E20 high resolution videoconferencing phone meant to replace conventional desktop phones

Hilton Hotels had originally licensed the systems for installation in its hotels throughout the United States and other countries, but use was low. The idea lost momentum, with Hilton eventually backing out. TeleSuite later began to focus less on the hospitality industry and more on business-oriented telepresence systems. Shareholders eventually held enough stock to replace the company's original leadership, which ultimately led to its collapse.[citation needed] David Allen purchased all of the assets of TeleSuite and appointed Scott Allen as president[6] of the new company, called Destiny Conferencing.

Destiny Conferencing licensed its patent portfolio to HP, which became the first large company to join the telepresence industry, soon followed by others such as Cisco and Polycom.[7] After forming a distribution agreement with Pleasanton-based Polycom, Destiny Conferencing was sold to Polycom on January 5, 2007, for $60 million.

An important research project in telepresence began in 1990. Located at the University of Toronto, the Ontario Telepresence Project (OTP) was an interdisciplinary effort involving social sciences and engineering. Its final report stated that it "...was a three year, $4.8 million pre-competitive research project whose mandate was to design and field trial advanced media space systems in a variety of workplaces in order to gain insights into key sociological and engineering issues. The OTP, which ended in December 1994, was part of the International Telepresence Project which linked Ontario researchers to their counterparts in four European nations. The Project’s major sponsor was the Province of Ontario, through two of its Centres of Excellence—the Information Technology Research Centre (ITRC) and the Telecommunications Research Institute of Ontario (TRIO)." [8]

Benefits


A modular telepresence system

An industry expert described some benefits of telepresence: "There were four drivers for our decision to do more business over video and telepresence. We wanted to reduce our travel spend, reduce our carbon footprint and environmental impact, improve our employees' work/life balance, and improve employee productivity."[9]


American exile Edward Snowden participates from Russia, via telepresence robot, in a TED talk in Vancouver, March 2014

Rather than traveling great distances for a face-to-face meeting, it is now commonplace to instead use a telepresence system, which uses a multiple-codec video system (what the word "telepresence" most commonly denotes today). Each member or party of the meeting uses a telepresence room to "dial in" and can see and talk to every other member on a screen or screens as if they were in the same room. This brings enormous time and cost benefits. It is also superior to phone conferencing (except in cost), as the visual aspect greatly enhances communications, allowing for perceptions of facial expressions and other body language.

Mobile collaboration systems combine the use of video, audio and on-screen drawing capabilities using newest generation hand-held mobile devices to enable multi-party conferencing in real-time, independent of location. Benefits include cost-efficiencies resulting from accelerated problem resolution, reductions in downtimes and travel, improvements in customer service and increased productivity.[10]

Implementation

Telepresence has been described as the human experience of being fully present at a live real-world location remote from one's own physical location. Someone experiencing video telepresence would therefore be able to behave, and receive stimuli, as though part of a meeting at the remote site. This would enable interactive participation in group activities, bringing benefits to a wide range of users.[11]

Implementation of human sensory elements

To provide a telepresence experience, technologies are required that implement the human sensory elements of vision, sound, and manipulation.

Vision and sound

A minimum system usually includes visual feedback. Ideally, the entire field of view of the user is filled with a view of the remote location, and the viewpoint corresponds to the movement and orientation of the user's head. In this way, it differs from television or cinema, where the viewpoint is out of the control of the viewer.

In order to achieve this, the user may be provided with either a very large (or wraparound) screen, or small displays mounted directly in front of the eyes. The latter provides a particularly convincing 3D sensation. The movements of the user's head must be sensed, and the camera must mimic those movements accurately and in real time. This is important to prevent motion sickness.
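The head-tracking requirement amounts to a tight sense-and-mimic control loop. The sketch below is a minimal Python illustration under stated assumptions: the tracker and camera functions are hypothetical placeholders (the tracker here just synthesizes a slow head sweep), not any real device API.

```python
import math
import time

def read_head_orientation(t):
    # Placeholder for a head-tracker: synthesizes a slow sweep so the
    # sketch runs stand-alone. Returns (pan, tilt) in degrees.
    return 30.0 * math.sin(t), 10.0 * math.sin(0.5 * t)

def command_camera(pan, tilt):
    # Placeholder for the remote pan/tilt camera interface.
    print(f"camera -> pan {pan:+5.1f} deg, tilt {tilt:+5.1f} deg")

def tracking_loop(rate_hz=60, duration_s=1.0):
    # The camera must mirror the head in real time; long sensor-to-display
    # latency is what produces motion sickness in practice.
    period = 1.0 / rate_hz
    start = time.time()
    while time.time() - start < duration_s:
        t = time.time() - start
        pan, tilt = read_head_orientation(t)  # sense the user's head pose
        command_camera(pan, tilt)             # mimic it at the remote site
        time.sleep(period)

if __name__ == "__main__":
    tracking_loop()
```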

Another source of future improvement to telepresence displays, compared by some to holograms, is a projected display technology featuring life-sized imagery.[12]

Sound is generally the easiest sensation to implement with high fidelity, based on the foundational telephone technology dating back more than 130 years. Very high-fidelity sound equipment has also been available for a considerable period of time, with stereophonic sound being more convincing than monaural sound.

Manipulation


Monty, a telemanipulation prototype from Anybots

The ability to manipulate a remote object or environment is an important aspect for some telepresence users, and can be implemented in a large number of ways depending on the needs of the user. Typically, the movements of the user's hands (position in space, and posture of the fingers) are sensed by wired gloves, inertial sensors, or absolute spatial position sensors. A robot in the remote location then copies those movements as closely as possible. This ability is also known as teleoperation.
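As a rough sketch of this "sensed, transmitted and duplicated" pipeline, the Python fragment below packs one hypothetical glove sample and sends it toward a remote robot over UDP. Everything here is invented for illustration (the address is a documentation-range placeholder, and the field names and units are assumptions), not a real teleoperation protocol.

```python
import json
import socket

ROBOT_ADDR = ("203.0.113.7", 9000)  # placeholder address for the remote robot

def encode_hand_pose(position_m, finger_angles_deg):
    # Pack one sensed hand sample: wrist position in meters and finger
    # joint angles in degrees, as a glove/sensor rig might report them.
    return json.dumps({"pos": position_m, "fingers": finger_angles_deg}).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# One sensed sample; a real system would stream these at a fixed rate,
# and the robot at the far end would replay each pose as closely as the
# hardware allows.
sample = encode_hand_pose([0.42, 0.10, 0.95], [12.0, 35.5, 40.2, 38.0, 8.5])
sock.sendto(sample, ROBOT_ADDR)
```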

The more closely the robot re-creates the form factor of the human hand, the greater the sense of telepresence. The complexity of robotic effectors varies greatly, from simple one-axis grippers to fully anthropomorphic robot hands.

Haptic teleoperation refers to a system that provides some sort of tactile force feedback to the user, so the user feels some approximation of the weight, firmness, size, and/or texture of the remote objects manipulated by the robot.

Freedom of movement


iRobot Ava 500, an autonomous roaming telepresence robot.

The prevalence of high-quality video conferencing using mobile devices, tablets and portable computers has enabled drastic growth in telepresence robots, which help give a better sense of remote physical presence for communication and collaboration in the office, home or school when one cannot be there in person. The robot avatar can move or look around at the command of the remote person. Drivable telepresence robots typically contain a display (integrated or separate phone or tablet) mounted on a roaming base. Some examples of roaming telepresence robots include Beam by Suitable Technologies, Double by Double Robotics, RP-Vita by iRobot, Anybots, Vgo, TeleMe by Mantarobot, and Romo by Romotive.[13]

More modern roaming telepresence robots may include an ability to operate autonomously. The robots can map out the space and be able to avoid obstacles while driving themselves between rooms and their docking stations.[14]

Effectiveness

Telepresence's effectiveness varies by degree of fidelity. Research has noted that telepresence solutions differ in degree of implementation, from "immersive" through "adaptive" to "lite" solutions.[15] At the top are immersive solutions where the environments at both ends are highly controlled (and often the same) with respect to lighting, acoustics, decor and furniture, thereby giving all the participants the impression they are together at the same table in the same room, thus engendering the "immersive" label.

Adaptive telepresence solutions may use the same technology, but the environments at both ends are not highly controlled and hence often differ. Adaptive solutions differ from telepresence lite solutions not in terms of control of environments, but in terms of integration of technology. Adaptive solutions use a managed service, whereas telepresence lite solutions use components that someone must integrate.

Transparency of implementation


A telepresence conference between participants in Ghana and Newark, New Jersey in 2012

A good telepresence strategy puts the human factors first, focusing on visual collaboration configurations that closely replicate the brain's innate preferences for interpersonal communications, separating from the unnatural "talking heads" experience of traditional videoconferencing. These cues include life-size participants, fluid motion, accurate flesh tones and the appearance of true eye contact.[16] This is already a well-established technology, used by many businesses today. In June 2006, at the Networkers Conference, Cisco Systems chief executive officer John Chambers compared telepresence to teleporting from Star Trek, and said that he saw the technology as a potential billion-dollar market for Cisco.[17]

Rarely will a telepresence system provide such a transparent implementation with such comprehensive and convincing stimuli that the user perceives no differences from actual presence. But the user may set aside such differences, depending on the application.

The fairly simple telephone achieves a limited form of telepresence using just the human sensory element of hearing, in that users consider themselves to be talking to each other rather than talking to the telephone itself.

Watching television, for example, although it stimulates our primary senses of vision and hearing, rarely gives the impression that the watcher is no longer at home. However, television sometimes engages the senses sufficiently to trigger emotional responses from viewers somewhat like those experienced by people who directly witness or experience events. Televised depictions of sports events, or disasters such as the September 11 terrorist attacks, can elicit strong emotions from viewers.

As the screen size increases, so does the sense of immersion, as well as the range of subjective mental experiences available to viewers. Some viewers have reported a sensation of genuine vertigo or motion sickness while watching IMAX movies of flying or outdoor sequences.

Because most currently feasible telepresence gear leaves something to be desired, the user must suspend disbelief to some degree and choose to act in a natural way, appropriate to the remote location, perhaps using some skill to operate the equipment. In contrast, a telephone user does not see herself as "operating" the telephone, but merely talking to another person with it.

Related technologies

Virtual presence (virtual reality)


An online video web conference in an office

Telepresence refers to a user interacting with another live, real place, and is distinct from virtual presence, where the user is given the impression of being in a simulated environment. Telepresence and virtual presence rely on similar user-interface equipment, and they share the common feature that the relevant portions of the user's experience at some point in the process will be transmitted in an abstract (usually digital) representation. The main functional difference is the entity on the other end: a real environment in the case of telepresence, vs. a computer in the case of immersive virtual reality.

Applications

Application examples could be cited within emergency management and security services, B&I, and the entertainment and education industries.[11]

Connecting communities

Telepresence can be used to establish a sense of shared presence or shared space among geographically separated members of a group.[citation needed]

Hazardous environments



Many other applications in situations where humans are exposed to hazardous situations are readily recognised as suitable candidates for telepresence. Mining, bomb disposal, military operations, rescue of victims from fire, toxic atmospheres, deep sea exploration, or even hostage situations, are some examples. Telepresence also plays a critical role in the exploration of other worlds, such as with the Mars Exploration Rovers, which are teleoperated from Earth.

Pipeline inspection

Small diameter pipes otherwise inaccessible for examination can now be viewed using pipeline video inspection.

Remote surgery

The possibility of being able to project the knowledge and the physical skill of a surgeon over long distances has many attractions, and there is considerable research underway on the subject. (Locally controlled robots are currently being used for joint replacement surgery, as they are more precise in milling bone to receive the joints.) The armed forces have an obvious interest, since the combination of telepresence, teleoperation, and telerobotics can potentially save the lives of battle casualties by allowing them prompt attention in mobile operating theatres by remote surgeons.

Recently, teleconferencing has been used in medicine (telemedicine or telematics), mainly employing audio-visual exchange, for the performance of real time remote surgical operations – as demonstrated in Regensburg, Germany in 2002.[18] In addition to audio-visual data, the transfer of haptic (tactile) information has also been demonstrated in telemedicine.[19]

Education


A professional development expert in Denver uses telepresence to coach a teacher in Utah during the initial research of Project ThereNow.

Research has been conducted on the use of telepresence to provide professional development to teachers. Research has shown that one of the most effective forms of teacher professional development is coaching, or cognitive apprenticeship. The application of telepresence shows promise for making this approach to teacher professional development practical.[20]

The benefits of enabling schoolchildren to take an active part in exploration have also been shown by the JASON and NASA Ames Research Center programs. The ability of a pupil, student, or researcher to explore an otherwise inaccessible location is a very attractive proposition: for example, locations where the passage of too many people is harming the immediate environment or the artifacts themselves, e.g. undersea exploration of coral reefs, ancient Egyptian tombs, and more recent works of art.

Another application is the remote classroom, which allows a professor to interact with students on multiple campuses, teaching the same class simultaneously. An example of this application is in classrooms at the law schools of Rutgers University. Two identical rooms are located in two metropolitan areas. Each classroom is equipped with studio lighting, audio, and videoconference equipment connected to a 200-inch monitor on the wall that students face, giving the impression that they are all in the same classroom. This allows professors to be on either campus and facilitates interaction among students on both campuses during classes.[21]

Telepresence art

True telepresence is a multidisciplinary art and science that foundationally integrates engineering, psychology, and the art of television broadcast.

In 1998, Diller and Scofidio created "Refresh", an Internet-based art installation that juxtaposed a live web camera with recorded videos staged by professional actors. Each image was accompanied by a fictional narrative, which made it difficult to distinguish which was the live web camera.


A soap opera for iMacs

In 1993, Eduardo Kac and Ed Bennett created the telepresence installation "Ornitorrinco on the Moon" for the international telecommunication arts festival "Blurred Boundaries" (Entgrenzte Grenzen II). It was coordinated by Kulturdata in Graz, Austria, and was connected around the world.

Since 1997, Ghislaine Boddington of shinkansen and body>data>space has explored the extended use of telepresence in festivals, arts centres and clubs through a collaborative process, using performing arts techniques, that she calls The Weave,[22] and has directed numerous workshops leading many artists worldwide to explore telepresence. This methodology has been used extensively to develop skills in tele-intuition for young people in preparation for the future world of work through "Robots and Avatars", a body>data>space / NESTA project exploring how young people will work and play with new representational forms of themselves and others in virtual and physical life in the next 10–15 years.

An overview of telepresence in dance and theatre over the last 20 years is given in Excited Atoms,[23] a 2009 research document by Judith Staines, which can be downloaded from the On The Move website.

Artificial intelligence

Marvin Minsky was one of the pioneers of intelligence-based mechanical robotics and telepresence. He designed and built some of the first mechanical hands with tactile sensors, visual scanners, and their software and computer interfaces. He also influenced many robotic projects outside of MIT, and designed and built the first LOGO "turtle."

Popular culture

Telepresence is represented in media and entertainment.

Literature

  • The Naked Sun (1957) – a novel set mostly on Solaria, a planet with an extremely low population where all personal contact is considered obscene, and all communication occurs through holographic telepresence.

Comics

  • Lamar Waldron's M.I.C.R.A.: Mind Controlled Remote Automaton (1987) told the story of a college student paralyzed by a neck injury who volunteered to be the remote pilot of an android body created by one of her professors.

Cyberspace

From Wikipedia, the free encyclopedia

Cyberspace is interconnected technology. The term entered popular culture from science fiction and the arts but is now used by technology strategists, security professionals, government, military and industry leaders and entrepreneurs to describe the domain of the global technology environment. Others consider cyberspace to be just a notional environment in which communication over computer networks occurs.[1] The word became popular in the 1990s when the uses of the Internet, networking, and digital communication were all growing dramatically and the term "cyberspace" was able to represent the many new ideas and phenomena that were emerging.[2] It has been called the largest unregulated and uncontrolled domain in the history of mankind,[3] and is also unique because it is a domain created by people rather than being one of the traditional physical domains.

The parent term of cyberspace is "cybernetics", derived from the Ancient Greek κυβερνήτης (kybernētēs, steersman, governor, pilot, or rudder), a word introduced by Norbert Wiener for his pioneering work in electronic communication and control science. The word cyberspace first appeared in the art installation of the same name by Danish artist Susanne Ussing (1968).[4]

As a social experience, individuals can interact, exchange ideas, share information, provide social support, conduct business, direct actions, create artistic media, play games, engage in political discussion, and so on, using this global network. They are sometimes referred to as cybernauts. The term cyberspace has become a conventional means to describe anything associated with the Internet and the diverse Internet culture. The United States government recognizes the interconnected information technology and the interdependent network of information technology infrastructures operating across this medium as part of the US national critical infrastructure. Among individuals in cyberspace, there is believed to be a code of shared rules and ethics mutually beneficial for all to follow, referred to as cyberethics. Many view the right to privacy as most important to a functional code of cyberethics.[5] Such moral responsibilities go hand in hand with working online on global networks, specifically when opinions are involved in online social experiences.[6]

According to Chip Morningstar and F. Randall Farmer, cyberspace is defined more by the social interactions involved rather than its technical implementation.[7] In their view, the computational medium in cyberspace is an augmentation of the communication channel between real people; the core characteristic of cyberspace is that it offers an environment that consists of many participants with the ability to affect and influence each other. They derive this concept from the observation that people seek richness, complexity, and depth within a virtual world.

Origins of the term

The term “cyberspace” first appeared in the visual arts in the late 1960s, when Danish artist Susanne Ussing (1940-1998) and her partner architect Carsten Hoff (b. 1934) constituted themselves as Atelier Cyberspace. Under this name the two made a series of installations and images entitled “sensory spaces” that were based on the principle of open systems adaptable to various influences, such as human movement and the behaviour of new materials.[8]

Atelier Cyberspace worked at a time when the Internet did not exist and computers were more or less off-limits to artists and creative engagement. In a 2015 interview with the Scandinavian art magazine Kunstkritikk, Carsten Hoff recollects that although Atelier Cyberspace did try to implement computers, they had no interest in the virtual space as such:[8]
To us, "cyberspace" was simply about managing spaces. There was nothing esoteric about it. Nothing digital, either. It was just a tool. The space was concrete, physical.
And in the same interview Hoff continues:
Our shared point of departure was that we were working with physical settings, and we were both frustrated and displeased with the architecture from the period, particularly when it came to spaces for living. We felt that there was a need to loosen up the rigid confines of urban planning, giving back the gift of creativity to individual human beings and allowing them to shape and design their houses or dwellings themselves – instead of having some clever architect pop up, telling you how you should live. We were thinking in terms of open-ended systems where things could grow and evolve as required.
For instance, we imagined a kind of mobile production unit, but unfortunately the drawings have been lost. It was a kind of truck with a nozzle at the back. Like a bee building its hive. The nozzle would emit and apply material that grew to form amorphous mushrooms or whatever you might imagine. It was supposed to be computer-controlled, allowing you to create interesting shapes and sequences of spaces. It was a merging of organic and technological systems, a new way of structuring the world. And a response that counteracted industrial uniformity. We had this idea that sophisticated software might enable us to mimic the way in which nature creates products – where things that belong to the same family can take different forms. All oak trees are oak trees, but no two oak trees are exactly alike. And then a whole new material – polystyrene foam – arrived on the scene. It behaved like nature in the sense that it grew when its two component parts were mixed. Almost like a fungal growth. This made it an obvious choice for our work in Atelier Cyberspace.
The works of Atelier Cyberspace were originally shown at a number of Copenhagen venues and have later been exhibited at The National Gallery of Denmark in Copenhagen as part of the exhibition “What’s Happening?”[9]

The term "cyberspace" first appeared in fiction in the 1980s in the work of cyberpunk science fiction author William Gibson, first in his 1982 short story "Burning Chrome" and later in his 1984 novel Neuromancer.[10] In the next few years, the word became prominently identified with online computer networks. The portion of Neuromancer cited in this respect is usually the following:[11]
Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts... A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding.
Now widely used, the term has since been criticized by Gibson, who commented on the origin of the term in the 2000 documentary No Maps for These Territories:
All I knew about the word "cyberspace" when I coined it, was that it seemed like an effective buzzword. It seemed evocative and essentially meaningless. It was suggestive of something, but had no real semantic meaning, even for me, as I saw it emerge on the page.

Metaphorical

Don Slater uses a metaphor to define cyberspace, describing the "sense of a social setting that exists purely within a space of representation and communication ... it exists entirely within a computer space, distributed across increasingly complex and fluid networks." The term "Cyberspace" started to become a de facto synonym for the Internet, and later the World Wide Web, during the 1990s, especially in academic circles[12] and activist communities. Author Bruce Sterling, who popularized this meaning,[13] credits John Perry Barlow as the first to use it to refer to "the present-day nexus of computer and telecommunications networks". Barlow describes it thus in his essay to announce the formation of the Electronic Frontier Foundation (note the spatial metaphor) in June 1990:[14]
In this silent world, all conversation is typed. To enter it, one forsakes both body and place and becomes a thing of words alone. You can see what your neighbors are saying (or recently said), but not what either they or their physical surroundings look like. Town meetings are continuous and discussions rage on everything from sexual kinks to depreciation schedules. Whether by one telephonic tendril or millions, they are all connected to one another. Collectively, they form what their inhabitants call the Net. It extends across that immense region of electron states, microwaves, magnetic fields, light pulses and thought which sci-fi writer William Gibson named Cyberspace.
— John Perry Barlow, "Crime and Puzzlement", 1990-06-08
As Barlow, and the EFF, continued public education efforts to promote the idea of "digital rights", the term was increasingly used during the Internet boom of the late 1990s.

Virtual environments

Although the present-day, loose use of the term "cyberspace" no longer implies or suggests immersion in a virtual reality, current technology allows the integration of a number of capabilities (sensors, signals, connections, transmissions, processors, and controllers) sufficient to generate a virtual interactive experience that is accessible regardless of a geographic location. It is for these reasons cyberspace has been described as the ultimate tax haven.[15]

In 1989, Autodesk, an American multinational corporation that focuses on 2D and 3D design software, developed a virtual design system called Cyberspace.[16]

Recent definitions of Cyberspace

Although several definitions of cyberspace can be found both in scientific literature and in official governmental sources, there is no fully agreed official definition yet. According to F. D. Kramer, there are 28 different definitions of the term cyberspace. See in particular "Cyberpower and National Security: Policy Recommendations for a Strategic Framework," in Cyberpower and National Security, F. D. Kramer, S. Starr, L. K. Wentz (eds.), National Defense University Press, Washington, DC, 2009; see also Mayer, M., Chiarugi, I., De Scalzi, N., https://www.academia.edu/14336129/International_Politics_in_the_Digital_Age.

The most recent draft definition is the following:
Cyberspace is a global and dynamic domain (subject to constant change) characterized by the combined use of electrons and electromagnetic spectrum, whose purpose is to create, store, modify, exchange, share and extract, use, eliminate information and disrupt physical resources. Cyberspace includes: a) physical infrastructures and telecommunications devices that allow for the connection of technological and communication system networks, understood in the broadest sense (SCADA devices, smartphones/tablets, computers, servers, etc.); b) computer systems (see point a) and the related (sometimes embedded) software that guarantee the domain's basic operational functioning and connectivity; c) networks between computer systems; d) networks of networks that connect computer systems (the distinction between networks and networks of networks is mainly organizational); e) the access nodes of users and intermediaries routing nodes; f) constituent data (or resident data). Often, in common parlance (and sometimes in commercial language), networks of networks are called Internet (with a lowercase i), while networks between computers are called intranet. Internet (with a capital I, in journalistic language sometimes called the Net) can be considered a part of the system a). A distinctive and constitutive feature of cyberspace is that no central entity exercises control over all the networks that make up this new domain.[17] Just as in the real world there is no world government, cyberspace lacks an institutionally predefined hierarchical center. To cyberspace, a domain without a hierarchical ordering principle, we can therefore extend the definition of international politics coined by Kenneth Waltz: as being "with no system of law enforceable." This does not mean that the dimension of power in cyberspace is absent, nor that power is dispersed and scattered into a thousand invisible streams, nor that it is evenly spread across myriad people and organizations, as some scholars had predicted. On the contrary, cyberspace is characterized by a precise structuring of hierarchies of power.[18]
The Joint Chiefs of Staff of the United States Department of Defense define cyberspace as one of five interdependent domains, the remaining four being land, air, maritime, and space.[19] See also United States Cyber Command.

Cyberspace as an Internet metaphor

While cyberspace should not be confused with the Internet, the term is often used to refer to objects and identities that exist largely within the communication network itself, so that a website, for example, might be metaphorically said to "exist in cyberspace".[20] According to this interpretation, events taking place on the Internet are not happening in the locations where participants or servers are physically located, but "in cyberspace". The philosopher Michel Foucault used the term heterotopias to describe such spaces, which are simultaneously physical and mental.

Firstly, cyberspace describes the flow of digital data through the network of interconnected computers: it is at once not "real", since one could not spatially locate it as a tangible object, and clearly "real" in its effects. Secondly, cyberspace is the site of computer-mediated communication (CMC), in which online relationships and alternative forms of online identity were enacted, raising important questions about the social psychology of Internet use, the relationship between "online" and "offline" forms of life and interaction, and the relationship between the "real" and the virtual. Cyberspace draws attention to remediation of culture through new media technologies: it is not just a communication tool but a social destination, and is culturally significant in its own right. Finally, cyberspace can be seen as providing new opportunities to reshape society and culture through "hidden" identities, or it can be seen as borderless communication and culture.[21]
Cyberspace is the "place" where a telephone conversation appears to occur. Not inside your actual phone, the plastic device on your desk. Not inside the other person's phone, in some other city. The place between the phones. [...] in the past twenty years, this electrical "space," which was once thin and dark and one-dimensional—little more than a narrow speaking-tube, stretching from phone to phone—has flung itself open like a gigantic jack-in-the-box. Light has flooded upon it, the eerie light of the glowing computer screen. This dark electric netherworld has become a vast flowering electronic landscape. Since the 1960s, the world of the telephone has cross-bred itself with computers and television, and though there is still no substance to cyberspace, nothing you can handle, it has a strange kind of physicality now. It makes good sense today to talk of cyberspace as a place all its own.
— Bruce Sterling, Introduction to The Hacker Crackdown
The "space" in cyberspace has more in common with the abstract, mathematical meanings of the term (see space) than physical space. It does not have the duality of positive and negative volume (while in physical space for example a room has the negative volume of usable space delineated by positive volume of walls, Internet users cannot enter the screen and explore the unknown part of the Internet as an extension of the space they are in), but spatial meaning can be attributed to the relationship between different pages (of books as well as web servers), considering the unturned pages to be somewhere "out there." The concept of cyberspace therefore refers not to the content being presented to the surfer, but rather to the possibility of surfing among different sites, with feedback loops between the user and the rest of the system creating the potential to always encounter something unknown or unexpected.

Video games differ from text-based communication in that on-screen images are meant to be figures that actually occupy a space and the animation shows the movement of those figures. Images are supposed to form the positive volume that delineates the empty space. A game adopts the cyberspace metaphor by engaging more players in the game, and then figuratively representing them on the screen as avatars. Games do not have to stop at the avatar-player level, but current implementations aiming for a more immersive playing space (e.g. laser tag) take the form of augmented reality rather than cyberspace, fully immersive virtual realities remaining impractical.

Although the more radical consequences of the global communication network predicted by some cyberspace proponents (e.g. the diminishing of state influence envisioned by John Perry Barlow[22]) failed to materialize and the word lost some of its novelty appeal, it remains current as of 2006.[6][23]
Some virtual communities explicitly refer to the concept of cyberspace, for example Linden Lab calling their customers "Residents" of Second Life, while all such communities can be positioned "in cyberspace" for explanatory and comparative purposes (as did Sterling in The Hacker Crackdown, followed by many journalists), integrating the metaphor into a wider cyber-culture.

The metaphor has been useful in helping a new generation of thought leaders to reason through new military strategies around the world, led largely by the US Department of Defense (DoD).[24] The use of cyberspace as a metaphor has had its limits, however, especially in areas where the metaphor becomes confused with physical infrastructure. It has also been critiqued as being unhelpful for falsely employing a spatial metaphor to describe what is inherently a network.[20]

Alternate realities in philosophy and art

Predating computers

A forerunner of the modern ideas of cyberspace is the Cartesian notion that people might be deceived by an evil demon that feeds them a false reality. This argument is the direct predecessor of modern ideas of a brain in a vat and many popular conceptions of cyberspace take Descartes's ideas as their starting point.

Visual arts have a tradition, stretching back to antiquity, of artifacts meant to fool the eye and be mistaken for reality. This questioning of reality occasionally led some philosophers and especially theologians[citation needed] to distrust art as deceiving people into entering a world which was not real (see Aniconism). The artistic challenge was resurrected with increasing ambition as art became more and more realistic with the invention of photography, film (see Arrival of a Train at La Ciotat), and immersive computer simulations.

Influenced by computers

Philosophy

American counterculture exponents like William S. Burroughs (whose literary influence on Gibson and cyberpunk in general is widely acknowledged[25][26]) and Timothy Leary[27] were among the first to extol the potential of computers and computer networks for individual empowerment.[28]

Some contemporary philosophers and scientists (e.g. David Deutsch in The Fabric of Reality) employ virtual reality in various thought experiments. For example, Philip Zhai in Get Real: A Philosophical Adventure in Virtual Reality connects cyberspace to the platonic tradition:
Let us imagine a nation in which everyone is hooked up to a network of VR infrastructure. They have been so hooked up since they left their mother's wombs. Immersed in cyberspace and maintaining their life by teleoperation, they have never imagined that life could be any different from that. The first person that thinks of the possibility of an alternative world like ours would be ridiculed by the majority of these citizens, just like the few enlightened ones in Plato's allegory of the cave.
Note that this brain-in-a-vat argument conflates cyberspace with reality, while the more common descriptions of cyberspace contrast it with the "real world".

A New Communication Model

The technological convergence of the mass media is the result of a long adaptation process of their communicative resources to the evolutionary changes of each historical moment. Thus, the new media became (plurally) an extension of the traditional media in cyberspace, allowing the public to access information on a wide range of digital devices.[29] In other words, it is a cultural virtualization of human reality as a result of the migration from physical to virtual space (mediated by ICTs), ruled by codes, signs and particular social relationships. From this arise instant forms of communication, interaction and quick access to information, in which we are no longer mere senders but also producers, reproducers, co-workers and providers. New technologies also help to "connect" people from different cultures outside the virtual space, which was unthinkable fifty years ago. In this giant web of relationships, we mutually absorb each other's beliefs, customs, values, laws and habits, cultural legacies perpetuated by a physical-virtual dynamic in constant metamorphosis (ibidem). In this sense, Professor Marcelo Mendonça Teixeira created, in 2013, a new model of communication for the virtual universe, based on Claude Elwood Shannon's 1948 article "A Mathematical Theory of Communication".

Art

Having originated among writers, the concept of cyberspace remains most popular in literature and film. Although artists working with other media have expressed interest in the concept, such as Roy Ascott, "cyberspace" in digital art is mostly used as a synonym for immersive virtual reality and remains more discussed than enacted.[30]

Computer crime

Cyberspace also brings together every service and facility imaginable to expedite money laundering. One can purchase anonymous credit cards, bank accounts, encrypted global mobile telephones, and false passports. From there one can pay professional advisors to set up IBCs (International Business Corporations, or corporations with anonymous ownership) or similar structures in OFCs (Offshore Financial Centers). Such advisors are loath to ask any penetrating questions about the wealth and activities of their clients, since the average fees criminals pay them to launder their money can be as much as 20 percent.[31]

5-level model

In 2010, a five-level model was designed in France. According to this model, cyberspace is composed of five layers based on successive information discoveries: language, writing, printing, the Internet, etc. This original model links the world of information to telecommunication technologies.

Popular culture examples

  • The anime Digimon is set in a variant of the cyberspace concept called the "Digital World". The Digital World is a parallel universe made up of data from the Internet; it is similar to cyberspace, except that people can physically enter this world instead of merely using a computer.
  • The anime Ghost in the Shell is set in the future where cyberization of humanity is commonplace and the world is connected by a vast electronic network.
  • The CGI series, ReBoot, takes place entirely inside cyberspace, which is composed of two worlds: the Net and the Web.
  • In the film Tron, a programmer was physically transferred to the program world, where programs were personalities, resembling the forms of their creators.
  • In the film Virtuosity a program encapsulating a super-criminal within a virtual world simulation escapes into the "real world".
  • In the novel Simulacron-3 the author Daniel F. Galouye explores multiple levels of "reality" represented by the multiple levels of computer simulation involved.
  • The idea of "the matrix" in the film The Matrix resembles a complex form of cyberspace where people are "jacked in" from birth and do not know that the reality they experience is virtual.
  • In the televised remote controlled robot competition series Robot Wars, the Megahurtz and subsequently Terrorhurtz team and their robot were introduced as being "from Cyberspace", a nod to their online collaborative formation.
  • In the 1984 novel Neuromancer the author William Gibson introduces the idea of a virtual reality data space called "the Matrix".
  • The British 1960s spy/fantasy TV show The Avengers used antagonists called Cybernauts. Their nature, however, was merely that of murderous remote-controlled humanoid robots.

Computer network

From Wikipedia, the free encyclopedia

A computer network, or data network, is a digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections between nodes (data links). These data links are established over cable media such as copper wires or optical cables, or wireless media such as Wi-Fi.

Network computer devices that originate, route and terminate the data are called network nodes.[1] Nodes can include hosts such as personal computers, phones, and servers, as well as networking hardware. Two such devices can be said to be networked together when one device is able to exchange information with the other device, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered (i.e. carried as payload) over other more general communications protocols. This formidable collection of information technology requires skilled network management to keep it all running reliably.

Computer networks support an enormous number of applications and services such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers, printers, and fax machines, and use of email and instant messaging applications as well as many others. Computer networks differ in the transmission medium used to carry their signals, communications protocols to organize network traffic, the network's size, topology, traffic control mechanism and organizational intent. The best-known computer network is the Internet.


Properties

Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines.

A computer network facilitates interpersonal communications allowing users to communicate efficiently and easily via various means: email, instant messaging, online chat, telephone, video telephone calls, and video conferencing. A network allows sharing of network and computing resources. Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or use of a shared storage device. A network allows sharing of files, data, and other types of information giving authorized users the ability to access information stored on other computers on the network. Distributed computing uses computing resources across a network to accomplish tasks.

A computer network may be used by security hackers to deploy computer viruses or computer worms on devices connected to the network, or to prevent these devices from accessing the network via a denial-of-service attack.

Network packet

Computer communication links that do not support packets, such as traditional point-to-point telecommunication links, simply transmit data as a bit stream. However, most information in computer networks is carried in packets. A network packet is a formatted unit of data (a list of bits or bytes, usually a few tens of bytes to a few kilobytes long) carried by a packet-switched network. Packets are sent through the network to their destination. Once the packets arrive they are reassembled into their original message.

Packets consist of two kinds of data: control information, and user data (payload). The control information provides data the network needs to deliver the user data, for example: source and destination network addresses, error detection codes, and sequencing information. Typically, control information is found in packet headers and trailers, with payload data in between.
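To make the header/payload/trailer layout concrete, here is a toy packet format in Python. The fields (source and destination addresses, a sequence number, and a CRC32 error-detection code in the trailer) follow the description above but are invented for illustration; this is not any real protocol.

```python
import struct
import zlib

HEADER = struct.Struct("!HHI")  # 2-byte src addr, 2-byte dst addr, 4-byte seq
TRAILER = struct.Struct("!I")   # 4-byte CRC32 over header + payload

def build_packet(src, dst, seq, payload: bytes) -> bytes:
    header = HEADER.pack(src, dst, seq)
    crc = zlib.crc32(header + payload)          # error-detection code
    return header + payload + TRAILER.pack(crc)

def parse_packet(packet: bytes):
    body = packet[HEADER.size:-TRAILER.size]
    (crc,) = TRAILER.unpack(packet[-TRAILER.size:])
    if zlib.crc32(packet[:-TRAILER.size]) != crc:
        raise ValueError("corrupted packet")    # detected a transmission error
    src, dst, seq = HEADER.unpack(packet[:HEADER.size])
    return src, dst, seq, body

pkt = build_packet(src=1, dst=2, seq=7, payload=b"hello")
print(parse_packet(pkt))  # (1, 2, 7, b'hello')
```

The sequence number is what lets a receiver reassemble packets into the original message even when they arrive out of order.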

With packets, the bandwidth of the transmission medium can be better shared among users than if the network were circuit switched. When one user is not sending packets, the link can be filled with packets from other users, and so the cost can be shared, with relatively little interference, provided the link isn't overused. Often the route a packet needs to take through a network is not immediately available. In that case the packet is queued and waits until a link is free.

Network topology

The physical layout of a network is usually less important than the topology that connects network nodes. Most diagrams that describe a physical network are therefore topological, rather than geographic. The symbols on these diagrams usually denote network links and network nodes.

Network links

The transmission media (often referred to in the literature as the physical media) used to link devices to form a computer network include electrical cable, optical fiber, and radio waves. In the OSI model, these are defined at layers 1 and 2 — the physical layer and the data link layer.

A widely adopted family of transmission media used in local area network (LAN) technology is collectively known as Ethernet. The media and protocol standards that enable communication between networked devices over Ethernet are defined by IEEE 802.3. Ethernet transmits data over both copper and fiber cables. Wireless LAN standards use radio waves; others use infrared signals as a transmission medium. Power line communication uses a building's power cabling to transmit data.

Wired technologies

Fiber optic cables are used to transmit light from one computer/network node to another

The following wired technologies are ordered, roughly, from slowest to fastest transmission speed.
  • Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. The cables consist of copper or aluminum wire surrounded by an insulating layer (typically a flexible material with a high dielectric constant), which itself is surrounded by a conductive layer. The insulation helps minimize interference and distortion. Transmission speed ranges from 200 million bits per second to more than 500 million bits per second.
  • ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed (up to 1 Gigabit/s) local area network
  • Twisted pair wire is the most widely used medium for all telecommunication. Twisted-pair cabling consists of copper wires that are twisted into pairs. Ordinary telephone wires consist of two insulated copper wires twisted into pairs. Computer network cabling (wired Ethernet as defined by IEEE 802.3) consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 million bits per second to 10 billion bits per second. Twisted pair cabling comes in two forms: unshielded twisted pair (UTP) and shielded twisted pair (STP). Each form comes in several category ratings, designed for use in various scenarios.
2007 map showing submarine optical fiber telecommunication cables around the world.
  • An optical fiber is a glass fiber. It carries pulses of light that represent data. Some advantages of optical fibers over metal wires are very low transmission loss and immunity from electrical interference. Optical fibers can simultaneously carry multiple wavelengths of light, which greatly increases the rate that data can be sent, and helps enable data rates of up to trillions of bits per second. Optical fibers can be used for long runs of cable carrying very high data rates, and are used for undersea cables to interconnect continents. There are two types of fiber-optic transmission: single-mode fiber (SMF) and multimode fiber (MMF). Single-mode fiber has the advantage of being able to sustain a coherent signal for dozens or even a hundred kilometers. Multimode fiber is cheaper to terminate but is limited to a few hundred or even only a few dozen meters, depending on the data rate and cable grade.[13]
Price is a main factor distinguishing wired- and wireless-technology options in a business. Wireless options command a price premium that can make purchasing wired computers, printers and other devices a financial benefit. Before making the decision to purchase hard-wired technology products, a review of the restrictions and limitations of the selections is necessary. Business and employee needs may override any cost considerations.[14]

Wireless technologies

Computers are very often connected to networks using wireless links
  • Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 48 km (30 mi) apart.
  • Communications satellites – Satellites communicate via microwave radio waves, which are not deflected by the Earth's atmosphere. The satellites are stationed in space, typically in geosynchronous orbit 35,400 km (22,000 mi) above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
  • Cellular and PCS systems use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next area.
  • Radio and spread spectrum technologies – Wireless local area networks use a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi.
  • Free-space optical communication uses visible or invisible light for communications. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices.

Exotic technologies

There have been various attempts at transporting data over exotic media:
  • IP over Avian Carriers was a humorous April fools' Request for Comments, issued as RFC 1149. It was implemented in real life in 2001.[15]
  • Extending the Internet to interplanetary dimensions via radio waves.[16]
Both cases have a large round-trip delay time, which gives slow two-way communication, but doesn't prevent sending large amounts of information.
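To put "large round-trip delay" in perspective, here is a back-of-the-envelope speed-of-light calculation in Python (the distances are rounded approximations):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def round_trip_seconds(one_way_distance_m: float) -> float:
    # Propagation delay there and back, ignoring processing and queueing.
    return 2 * one_way_distance_m / C

print(f"GEO satellite: {round_trip_seconds(35_400e3):.2f} s")   # ~0.24 s
print(f"Moon:          {round_trip_seconds(384_400e3):.2f} s")  # ~2.6 s
print(f"Mars (close):  {round_trip_seconds(54.6e9):.0f} s")     # ~364 s, ~6 min
```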

Network nodes

Apart from any physical transmission media there may be, networks comprise additional basic system building blocks, such as network interface controllers (NICs), repeaters, hubs, bridges, switches, routers, modems, and firewalls. Any particular piece of equipment will frequently contain multiple building blocks and perform multiple functions.

Network interfaces

An ATM network interface in the form of an accessory card. Many network interfaces are built in.

A network interface controller (NIC) is computer hardware that provides a computer with the ability to access the transmission media, and has the ability to process low-level network information. For example, the NIC may have a connector for accepting a cable, or an aerial for wireless transmission and reception, and the associated circuitry.

The NIC responds to traffic addressed to a network address for either the NIC or the computer as a whole.

In Ethernet networks, each network interface controller has a unique Media Access Control (MAC) address—usually stored in the controller's permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce.
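As a small illustration, the Python sketch below splits a MAC address into its manufacturer prefix (OUI) and the manufacturer-assigned portion; the address shown is hypothetical:

    def split_mac(mac: str) -> tuple[str, str]:
        """Split a MAC address into its OUI (manufacturer prefix)
        and the manufacturer-assigned portion."""
        octets = mac.lower().split(":")
        assert len(octets) == 6, "an Ethernet MAC address has six octets"
        oui = ":".join(octets[:3])      # three most significant octets
        device = ":".join(octets[3:])   # three least significant octets
        return oui, device

    # Hypothetical address, for illustration only.
    print(split_mac("00:1a:2b:3c:4d:5e"))  # ('00:1a:2b', '3c:4d:5e')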

Repeaters and hubs

A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise and regenerates it. The signal is retransmitted at a higher power level, or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart.

A repeater with multiple ports is known as an Ethernet hub. Repeaters work on the physical layer of the OSI model. Repeaters require a small amount of time to regenerate the signal. This can cause a propagation delay that affects network performance and may affect proper function. As a result, many network architectures limit the number of repeaters that can be used in a row, e.g., the Ethernet 5-4-3 rule.

Hubs and repeaters in LANs have been mostly obsoleted by modern switches.

Bridges

A network bridge connects and filters traffic between two network segments at the data link layer (layer 2) of the OSI model to form a single network. This breaks the network's collision domain but maintains a unified broadcast domain. Network segmentation breaks down a large, congested network into an aggregation of smaller, more efficient networks.

Bridges come in three basic types:
  • Local bridges: Directly connect LANs
  • Remote bridges: Can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, have largely been replaced with routers.
  • Wireless bridges: Can be used to join LANs or connect remote devices to LANs.

Switches

A network switch is a device that forwards and filters OSI layer 2 datagrams (frames) between ports based on the destination MAC address in each frame.[17] A switch is distinct from a hub in that it only forwards the frames to the physical ports involved in the communication rather than all ports connected. It can be thought of as a multi-port bridge.[18] It learns to associate physical ports to MAC addresses by examining the source addresses of received frames. If a frame is addressed to a MAC address the switch has not yet learned, it floods the frame out of all ports except the one on which the frame arrived. Switches normally have numerous ports, facilitating a star topology for devices, and cascading additional switches.
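The learning-and-forwarding behaviour just described can be sketched in a few lines of Python; this is a toy model, not a real switch implementation, and the addresses and port numbers are illustrative:

    class LearningSwitch:
        """Toy model of a layer-2 learning switch."""

        def __init__(self, num_ports: int):
            self.num_ports = num_ports
            self.mac_table = {}  # MAC address -> port

        def handle_frame(self, src_mac: str, dst_mac: str, in_port: int) -> list[int]:
            # Learn: associate the source address with the arrival port.
            self.mac_table[src_mac] = in_port
            # Forward: to the known port, or flood to all other ports.
            if dst_mac in self.mac_table:
                return [self.mac_table[dst_mac]]
            return [p for p in range(self.num_ports) if p != in_port]

    sw = LearningSwitch(num_ports=4)
    print(sw.handle_frame("aa:aa", "bb:bb", in_port=0))  # unknown: flood -> [1, 2, 3]
    print(sw.handle_frame("bb:bb", "aa:aa", in_port=2))  # learned: -> [0]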

Routers


A typical home or small office router showing the ADSL telephone line and Ethernet network cable connections

A router is an internetworking device that forwards packets between networks by processing the routing information included in the packet or datagram (Internet protocol information from layer 3). The routing information is often processed in conjunction with the routing table (or forwarding table). A router uses its routing table to determine where to forward packets. A destination in a routing table can include a "null" interface, also known as the "black hole" interface: data can go into it, but no further processing is done, i.e. the packets are dropped.
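As a rough sketch of table-driven forwarding, the Python fragment below performs a longest-prefix-match lookup over a toy routing table, including a null route that drops packets; all prefixes and next-hop addresses are made up for illustration:

    import ipaddress

    # Toy routing table: prefix -> next hop (None models the "null"/black-hole interface).
    routing_table = {
        ipaddress.ip_network("10.0.0.0/8"): "192.0.2.1",
        ipaddress.ip_network("10.1.0.0/16"): "192.0.2.2",
        ipaddress.ip_network("203.0.113.0/24"): None,  # null route: packets are dropped
    }

    def lookup(dst: str) -> str:
        """Longest-prefix match: the most specific matching route wins."""
        addr = ipaddress.ip_address(dst)
        matches = [net for net in routing_table if addr in net]
        if not matches:
            return "no route"
        best = max(matches, key=lambda net: net.prefixlen)
        hop = routing_table[best]
        return "drop (null interface)" if hop is None else f"forward to {hop}"

    print(lookup("10.1.2.3"))     # forward to 192.0.2.2 (the more specific /16 wins)
    print(lookup("203.0.113.9"))  # drop (null interface)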

Modems

Modems (MOdulator-DEModulators) are used to connect network nodes via wiring not originally designed for digital network traffic, or over wireless links. To do this, one or more carrier signals are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission. Modems are commonly used for telephone lines, using Digital Subscriber Line (DSL) technology.

Firewalls

A firewall is a network device for controlling network security and access rules. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in cyber attacks.

Network structure

Network topology is the layout or organizational hierarchy of interconnected nodes of a computer network. Different network topologies can affect throughput, but reliability is often more critical. With many technologies, such as bus networks, a single failure can cause the network to fail entirely. In general, the more interconnections there are, the more robust the network is, but the more expensive it is to install.

Common layouts


Common network topologies
Common layouts are:
  • Bus network: all nodes are connected to a common medium along this medium. This was the layout used in the original Ethernet, called 10BASE5 and 10BASE2.
  • Star network: all nodes are connected to a special central node. This is the typical layout found in a wireless LAN, where each wireless client connects to the central wireless access point.
  • Ring network: each node is connected to its left and right neighbour node, such that all nodes are connected and each node can reach every other node by traversing nodes left- or rightwards. The Fiber Distributed Data Interface (FDDI) made use of such a topology.
  • Mesh network: each node is connected to an arbitrary number of neighbours in such a way that there is at least one traversal from any node to any other.
  • Fully connected network: each node is connected to every other node in the network.
  • Tree network: nodes are arranged hierarchically.
Note that the physical layout of the nodes in a network may not necessarily reflect the network topology. As an example, with FDDI, the network topology is a ring (actually two counter-rotating rings), but the physical topology is often a star, because all neighboring connections can be routed via a central physical location.

Overlay network


A sample overlay network

An overlay network is a virtual computer network that is built on top of another network. Nodes in the overlay network are connected by virtual or logical links. Each link corresponds to a path, perhaps through many physical links, in the underlying network. The topology of the overlay network may (and often does) differ from that of the underlying one. For example, many peer-to-peer networks are overlay networks. They are organized as nodes of a virtual system of links that run on top of the Internet.[19]

Overlay networks have been around since the invention of networking when computer systems were connected over telephone lines using modems, before any data network existed.

The most striking example of an overlay network is the Internet itself. The Internet itself was initially built as an overlay on the telephone network.[19] Even today, each Internet node can communicate with virtually any other through an underlying mesh of sub-networks of wildly different topologies and technologies. Address resolution and routing are the means that allow mapping of a fully connected IP overlay network to its underlying network.

Another example of an overlay network is a distributed hash table, which maps keys to nodes in the network. In this case, the underlying network is an IP network, and the overlay network is a table (actually a map) indexed by keys.
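As a rough illustration of the key-to-node idea, the Python sketch below hashes keys onto a fixed node list; real DHTs such as Chord or Kademlia use consistent hashing and route between nodes, which this toy omits:

    import hashlib

    NODES = ["node-a", "node-b", "node-c"]  # overlay nodes (illustrative names)

    def node_for(key: str) -> str:
        """Map a key to an overlay node by hashing it onto the node list."""
        digest = hashlib.sha1(key.encode()).digest()
        index = int.from_bytes(digest, "big") % len(NODES)
        return NODES[index]

    print(node_for("some-file.txt"))  # deterministic: the same key always maps to the same node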

Overlay networks have also been proposed as a way to improve Internet routing, such as through quality of service guarantees to achieve higher-quality streaming media. Previous proposals such as IntServ, DiffServ, and IP Multicast have not seen wide acceptance largely because they require modification of all routers in the network.[citation needed] On the other hand, an overlay network can be incrementally deployed on end-hosts running the overlay protocol software, without cooperation from Internet service providers. The overlay network has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes that a message traverses before it reaches its destination.

For example, Akamai Technologies manages an overlay network that provides reliable, efficient content delivery (a kind of multicast). Academic research includes end system multicast,[20] resilient routing and quality of service studies, among others.

Communication protocols

The TCP/IP model or Internet layering scheme and its relation to common protocols often layered on top of it.
Figure 4. Message flows (A-B) in the presence of a router (R), red flows are effective communication paths, black paths are across the actual network links.

A communication protocol is a set of rules for exchanging information over a network. In a protocol stack (also see the OSI model), each protocol leverages the services of the protocol layer below it, until the lowest layer controls the hardware which sends information across the media. The use of protocol layering is today ubiquitous across the field of computer networking. An important example of a protocol stack is HTTP (the World Wide Web protocol) running over TCP over IP (the Internet protocols) over IEEE 802.11 (the Wi-Fi protocol). This stack is used between the wireless router and the home user's personal computer when the user is surfing the web.
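The layering can be made concrete with Python's standard library: the application-layer HTTP request below rides on a TCP connection, which the operating system in turn carries over IP and whatever link layer is in use. The host example.org is a stand-in, not taken from the text:

    import socket

    # Application layer: a minimal HTTP/1.1 request.
    request = b"GET / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n"

    # Transport layer: a TCP socket (IP and the link layer are handled by the OS below this).
    with socket.create_connection(("example.org", 80)) as sock:
        sock.sendall(request)
        response = sock.recv(4096)

    print(response.split(b"\r\n")[0])  # e.g. b'HTTP/1.1 200 OK'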

Communication protocols have various characteristics. They may be connection-oriented or connectionless, they may use circuit mode or packet switching, and they may use hierarchical addressing or flat addressing.

There are many communication protocols, a few of which are described below.

IEEE 802

IEEE 802 is a family of IEEE standards dealing with local area networks and metropolitan area networks. The complete IEEE 802 protocol suite provides a diverse set of networking capabilities. The protocols have a flat addressing scheme. They operate mostly at levels 1 and 2 of the OSI model.

For example, MAC bridging (IEEE 802.1D) deals with the forwarding of Ethernet frames using the Spanning Tree Protocol. IEEE 802.1Q describes VLANs, and IEEE 802.1X defines a port-based network access control protocol, which forms the basis for the authentication mechanisms used in VLANs (but it is also found in WLANs) – it is what the home user sees when the user has to enter a "wireless access key".

Ethernet

Ethernet, sometimes simply called LAN, is a family of protocols used in wired LANs, described by a set of standards together called IEEE 802.3 published by the Institute of Electrical and Electronics Engineers.

Wireless LAN

Wireless LAN, also widely known as WLAN or Wi-Fi, is probably the most well-known member of the IEEE 802 protocol family for home users today. It is standardized by IEEE 802.11 and shares many properties with wired Ethernet.

Internet Protocol Suite

The Internet Protocol Suite, also called TCP/IP, is the foundation of all modern networking. It offers connectionless as well as connection-oriented services over an inherently unreliable network traversed by datagram transmission at the Internet protocol (IP) level. At its core, the protocol suite defines the addressing, identification, and routing specifications for Internet Protocol Version 4 (IPv4) and for IPv6, the next generation of the protocol with a much enlarged addressing capability.

SONET/SDH

Synchronous optical networking (SONET) and Synchronous Digital Hierarchy (SDH) are standardized multiplexing protocols that transfer multiple digital bit streams over optical fiber using lasers. They were originally designed to transport circuit mode communications from a variety of different sources, primarily to support real-time, uncompressed, circuit-switched voice encoded in PCM (Pulse-Code Modulation) format. However, due to its protocol neutrality and transport-oriented features, SONET/SDH was also the obvious choice for transporting Asynchronous Transfer Mode (ATM) frames.

Asynchronous Transfer Mode

Asynchronous Transfer Mode (ATM) is a switching technique for telecommunication networks. It uses asynchronous time-division multiplexing and encodes data into small, fixed-size cells. This differs from other protocols such as the Internet Protocol Suite or Ethernet that use variable-sized packets or frames. ATM has similarities with both circuit-switched and packet-switched networking. This makes it a good choice for a network that must handle both traditional high-throughput data traffic, and real-time, low-latency content such as voice and video. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the actual data exchange begins.
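As a rough sketch of the fixed-cell idea (an ATM cell is 53 bytes: a 5-byte header plus a 48-byte payload), the Python fragment below segments a variable-length packet into fixed-size cell payloads; real AAL5 framing is more involved:

    CELL_PAYLOAD = 48  # payload bytes per 53-byte ATM cell (5-byte header + 48-byte payload)

    def segment(packet: bytes) -> list[bytes]:
        """Split a variable-length packet into fixed-size cell payloads,
        zero-padding the final cell (real AAL5 framing is more involved)."""
        cells = []
        for i in range(0, len(packet), CELL_PAYLOAD):
            chunk = packet[i:i + CELL_PAYLOAD]
            cells.append(chunk.ljust(CELL_PAYLOAD, b"\x00"))
        return cells

    print(len(segment(b"x" * 100)))  # a 100-byte packet -> 3 cells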

While the role of ATM is diminishing in favor of next-generation networks, it still plays a role in the last mile, which is the connection between an Internet service provider and the home user.[21]

Cellular standards

There are a number of different digital cellular standards, including: Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), cdmaOne, CDMA2000, Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/TDMA), and Integrated Digital Enhanced Network (iDEN).[22]

Geographic scale

A network can be characterized by its physical capacity or its organizational purpose. Use of the network, including user authorization and access rights, differs accordingly.
Nanoscale network
A nanoscale communication network has key components implemented at the nanoscale including message carriers and leverages physical principles that differ from macroscale communication mechanisms. Nanoscale communication extends communication to very small sensors and actuators such as those found in biological systems and also tends to operate in environments that would be too harsh for classical communication.[23]
Personal area network
A personal area network (PAN) is a computer network used for communication among computers and information technology devices close to one person. Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and even video game consoles. A PAN may include wired and wireless devices. The reach of a PAN typically extends to 10 meters.[24] A wired PAN is usually constructed with USB and FireWire connections while technologies such as Bluetooth and infrared communication typically form a wireless PAN.
Local area network
A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as a home, school, office building, or closely positioned group of buildings. Each computer or device on the network is a node. Wired LANs are most likely based on Ethernet technology. Newer standards such as ITU-T G.hn also provide a way to create a wired LAN using existing wiring, such as coaxial cables, telephone lines, and power lines.[25]

The defining characteristics of a LAN, in contrast to a wide area network (WAN), include higher data transfer rates, limited geographic range, and lack of reliance on leased lines to provide connectivity. Current Ethernet or other IEEE 802.3 LAN technologies operate at data transfer rates up to 100 Gbit/s, standardized by IEEE in 2010.[26] Currently, 400 Gbit/s Ethernet is being developed.
A LAN can be connected to a WAN using a router.
Home area network
A home area network (HAN) is a residential LAN used for communication between digital devices typically deployed in the home, usually a small number of personal computers and accessories, such as printers and mobile computing devices. An important function is the sharing of Internet access, often a broadband service through a cable TV or digital subscriber line (DSL) provider.
Storage area network
A storage area network (SAN) is a dedicated network that provides access to consolidated, block level data storage. SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the devices appear like locally attached devices to the operating system. A SAN typically has its own network of storage devices that are generally not accessible through the local area network by other devices. The cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption across both enterprise and small to medium-sized business environments.
Campus area network
A campus area network (CAN) is made up of an interconnection of LANs within a limited geographical area. The networking equipment (switches, routers) and transmission media (optical fiber, copper plant, Cat5 cabling, etc.) are almost entirely owned by the campus tenant / owner (an enterprise, university, government, etc.).

For example, a university campus network is likely to link a variety of campus buildings to connect academic colleges or departments, the library, and student residence halls.
Backbone network
A backbone network is part of a computer network infrastructure that provides a path for the exchange of information between different LANs or sub-networks. A backbone can tie together diverse networks within the same building, across different buildings, or over a wide area.

For example, a large company might implement a backbone network to connect departments that are located around the world. The equipment that ties together the departmental networks constitutes the network backbone. When designing a network backbone, network performance and network congestion are critical factors to take into account. Normally, the backbone network's capacity is greater than that of the individual networks connected to it.

Another example of a backbone network is the Internet backbone, which is the set of wide area networks (WANs) and core routers that tie together all networks connected to the Internet.
Metropolitan area network
A metropolitan area network (MAN) is a large computer network that usually spans a city or a large campus.
Wide area network
A wide area network (WAN) is a computer network that covers a large geographic area such as a city or country, or even spans intercontinental distances. A WAN uses a communications channel that combines many types of media such as telephone lines, cables, and air waves. A WAN often makes use of transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI reference model: the physical layer, the data link layer, and the network layer.
Enterprise private network
An enterprise private network is a network that a single organization builds to interconnect its office locations (e.g., production sites, head offices, remote offices, shops) so they can share computer resources.
Virtual private network
A virtual private network (VPN) is an overlay network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network when this is the case. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features.

A VPN may have best-effort performance, or may have a defined service level agreement (SLA) between the VPN customer and the VPN service provider. Generally, a VPN has a topology more complex than point-to-point.
Global area network
A global area network (GAN) is a network used for supporting mobile communications across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off user communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs.[27]

Organizational scope

Networks are typically managed by the organizations that own them. Private enterprise networks may use a combination of intranets and extranets. They may also provide network access to the Internet, which has no single owner and permits virtually unlimited global connectivity.

Intranet

An intranet is a set of networks that are under the control of a single administrative entity. The intranet uses the IP protocol and IP-based tools such as web browsers and file transfer applications. The administrative entity limits use of the intranet to its authorized users. Most commonly, an intranet is the internal LAN of an organization. A large intranet typically has at least one web server to provide users with organizational information. Colloquially, an intranet is anything behind the router on a local area network.

Extranet

An extranet is a network that is also under the administrative control of a single organization, but supports a limited connection to a specific external network. For example, an organization may provide access to some aspects of its intranet to share data with its business partners or customers. These other entities are not necessarily trusted from a security standpoint. Network connection to an extranet is often, but not always, implemented via WAN technology.

Internetwork

An internetwork is the connection of multiple computer networks via a common routing technology using routers.

Internet


Partial map of the Internet based on the January 15, 2005 data found on opte.org. Each line is drawn between two nodes, representing two IP addresses. The lengths of the lines are indicative of the delay between those two nodes. This graph represents less than 30% of the Class C networks reachable.

The Internet is the largest example of an internetwork. It is a global system of interconnected governmental, academic, corporate, public, and private computer networks. It is based on the networking technologies of the Internet Protocol Suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the United States Department of Defense. The Internet is also the communications backbone underlying the World Wide Web (WWW).

Participants in the Internet use a diverse array of several hundred documented, and often standardized, protocols compatible with the Internet Protocol Suite and an addressing system (IP addresses) administered by the Internet Assigned Numbers Authority and address registries. Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths.

Darknet

A darknet is an overlay network, typically running on the Internet, that is only accessible through specialized software. A darknet is an anonymizing network where connections are made only between trusted peers — sometimes called "friends" (F2F)[28] — using non-standard protocols and ports.

Darknets are distinct from other distributed peer-to-peer networks as sharing is anonymous (that is, IP addresses are not publicly shared), and therefore users can communicate with little fear of governmental or corporate interference.[29]

Routing


Routing calculates good paths through a network for information to take. For example, from node 1 to node 6 the best routes are likely to be 1-8-7-6 or 1-8-10-6, as these have the thickest paths.

Routing is the process of selecting network paths to carry network traffic. Routing is performed for many kinds of networks, including circuit switching networks and packet switched networks.

In packet switched networks, routing directs packet forwarding (the transit of logically addressed network packets from their source toward their ultimate destination) through intermediate nodes. Intermediate nodes are typically network hardware devices such as routers, bridges, gateways, firewalls, or switches. General-purpose computers can also forward packets and perform routing, though they are not specialized hardware and may suffer from limited performance. The routing process usually directs forwarding on the basis of routing tables, which maintain a record of the routes to various network destinations. Thus, constructing routing tables, which are held in the router's memory, is very important for efficient routing.

There are usually multiple routes that can be taken, and to choose between them, different elements can be considered to decide which routes get installed into the routing table, such as (sorted by priority):
  1. Prefix length: longer subnet masks are preferred (regardless of whether the routes come from the same or different routing protocols)
  2. Metric: a lower metric/cost is preferred (only valid within one and the same routing protocol)
  3. Administrative distance: a lower distance is preferred (only valid between different routing protocols)
Most routing algorithms use only one network path at a time. Multipath routing techniques enable the use of multiple alternative paths.
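Following the priority order in the numbered list above, candidates for installation into the routing table can be compared with a simple sort key; the attribute names and values below are illustrative:

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        prefix_len: int      # longer is preferred
        metric: int          # lower is preferred (within the same routing protocol)
        admin_distance: int  # lower is preferred (between different routing protocols)

    def preference(route: Candidate):
        # Sort key following the priority order in the list above.
        return (-route.prefix_len, route.metric, route.admin_distance)

    candidates = [Candidate(16, 20, 110), Candidate(24, 50, 120), Candidate(24, 10, 120)]
    print(min(candidates, key=preference))  # the /24 with the lower metric wins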

Routing, in a more narrow sense of the term, is often contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Structured addresses allow a single routing table entry to represent the route to a group of devices. In large networks, structured addressing (routing, in the narrow sense) outperforms unstructured addressing (bridging). Routing has become the dominant form of addressing on the Internet. Bridging is still widely used within localized environments.

Network service

Network services are applications hosted by servers on a computer network, to provide some functionality for members or users of the network, or to help the network itself to operate.

The World Wide Web, e-mail,[30] printing, and network file sharing are examples of well-known network services. Network services such as DNS (Domain Name System) give names for IP addresses (people remember names like “nm.lan” better than numbers like “210.121.67.18”),[31] while DHCP ensures that the equipment on the network has a valid IP address.[32]
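A name lookup is a one-liner with Python's standard library; since “nm.lan” above is a private name, the sketch uses the reserved example.org instead:

    import socket

    # Resolve a host name to an IPv4 address via the system's resolver (DNS).
    print(socket.gethostbyname("example.org"))  # prints the resolved IPv4 address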

Services are usually based on a service protocol that defines the format and sequencing of messages between clients and servers of that network service.

Network performance

Quality of service

Depending on the installation requirements, network performance is usually measured by the quality of service of a telecommunications product. The parameters that affect this typically include throughput, jitter, bit error rate and latency.

The following list gives examples of network performance measures for a circuit-switched network and one type of packet-switched network, viz. ATM:
  • Circuit-switched networks: In circuit switched networks, network performance is synonymous with the grade of service. The number of rejected calls is a measure of how well the network is performing under heavy traffic loads.[33] Other types of performance measures can include the level of noise and echo.
  • ATM: In an Asynchronous Transfer Mode (ATM) network, performance can be measured by line rate, quality of service (QoS), data throughput, connect time, stability, technology, modulation technique and modem enhancements.[34]
There are many ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modelled instead of measured. For example, state transition diagrams are often used to model queuing performance in a circuit-switched network. The network planner uses these diagrams to analyze how the network performs in each state, ensuring that the network is optimally designed.[35]
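For circuit-switched networks, the blocking behaviour mentioned above is classically estimated with the Erlang B formula, which gives the probability that a call is rejected given the offered traffic (in erlangs) and the number of circuits. A minimal sketch:

    def erlang_b(traffic: float, circuits: int) -> float:
        """Blocking probability for offered traffic (in erlangs) on a given
        number of circuits, via the standard iterative Erlang B recurrence."""
        b = 1.0
        for k in range(1, circuits + 1):
            b = (traffic * b) / (k + traffic * b)
        return b

    # E.g. 10 erlangs of offered traffic on 15 circuits:
    print(f"{erlang_b(10.0, 15):.4f}")  # blocking probability, roughly 4% here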

Network congestion

Network congestion occurs when a link or node is carrying so much data that its quality of service deteriorates. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of these latter two is that incremental increases in offered load lead either only to a small increase in network throughput, or to an actual reduction in network throughput.

Network protocols that use aggressive retransmissions to compensate for packet loss tend to keep systems in a state of network congestion—even after the initial load is reduced to a level that would not normally induce network congestion. Thus, networks using these protocols can exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse.

Modern networks use congestion control, congestion avoidance and traffic control techniques to try to avoid congestion collapse. These include: exponential backoff in protocols such as 802.11's CSMA/CA and the original Ethernet, window reduction in TCP, and fair queueing in devices such as routers. Another method to avoid the negative effects of network congestion is implementing priority schemes, so that some packets are transmitted with higher priority than others. Priority schemes do not solve network congestion by themselves, but they help to alleviate the effects of congestion for some services. An example of this is 802.1p. A third method to avoid network congestion is the explicit allocation of network resources to specific flows. One example of this is the use of Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T G.hn standard, which provides high-speed (up to 1 Gbit/s) local area networking over existing home wires (power lines, phone lines and coaxial cables).
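Exponential backoff, the first technique named above, can be sketched as follows; the base delay and cap are arbitrary illustrative values:

    import random

    def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
        """Exponential backoff with random jitter: the maximum retry delay doubles
        with each failed attempt (up to a cap), so colliding senders spread out."""
        return random.uniform(0, min(cap, base * (2 ** attempt)))

    for attempt in range(5):
        print(f"attempt {attempt}: wait up to {min(30.0, 0.5 * 2 ** attempt):.1f}s")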

For the Internet, RFC 2914 addresses the subject of congestion control in detail.

Network resilience

Network resilience is "the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation."[36]

Security

Network security

Network security consists of provisions and policies adopted by the network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and its network-accessible resources.[37] Network security is the authorization of access to data in a network, which is controlled by the network administrator. Users are assigned an ID and password that allows them access to information and programs within their authority. Network security is used on a variety of computer networks, both public and private, to secure daily transactions and communications among businesses, government agencies and individuals.

Network surveillance

Network surveillance is the monitoring of data being transferred over computer networks such as the Internet. The monitoring is often done surreptitiously and may be done by or at the behest of governments, by corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent agency.

Computer and network surveillance programs are widespread today, and almost all Internet traffic is or could potentially be monitored for clues to illegal activity.

Surveillance is very useful to governments and law enforcement to maintain social control, recognize and monitor threats, and prevent/investigate criminal activity. With the advent of programs such as the Total Information Awareness program, technologies such as high speed surveillance computers and biometrics software, and laws such as the Communications Assistance For Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens.[38]

However, many civil rights and privacy groups—such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union—have expressed concern that increasing surveillance of citizens may lead to a mass surveillance society, with limited political and personal freedoms. Fears such as this have led to numerous lawsuits such as Hepting v. AT&T.[38][39] The hacktivist group Anonymous has hacked into government websites in protest of what it considers "draconian surveillance".[40][41]

End-to-end encryption

End-to-end encryption (E2EE) is a digital communications paradigm of uninterrupted protection of data traveling between two communicating parties. It involves the originating party encrypting data so only the intended recipient can decrypt it, with no dependency on third parties. End-to-end encryption prevents intermediaries, such as Internet providers or application service providers, from discovering or tampering with communications. End-to-end encryption generally protects both confidentiality and integrity.
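As a minimal sketch of the encryption step only, assuming the two parties already share a key (key agreement itself, e.g. via Diffie-Hellman, is omitted), using the third-party Python cryptography package:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

    # Assume both endpoints already share this key (key agreement omitted).
    key = AESGCM.generate_key(bit_length=256)

    def encrypt(plaintext: bytes) -> bytes:
        nonce = os.urandom(12)  # must be unique per message
        return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

    def decrypt(blob: bytes) -> bytes:
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, None)  # also verifies integrity

    msg = encrypt(b"hello")
    print(decrypt(msg))  # b'hello'; intermediaries see only ciphertext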

Examples of end-to-end encryption include HTTPS for web traffic, PGP for email, OTR for instant messaging, ZRTP for telephony, and TETRA for radio.

Typical server-based communications systems do not include end-to-end encryption. These systems can only guarantee protection of communications between clients and servers, not between the communicating parties themselves. Examples of non-E2EE systems are Google Talk, Yahoo Messenger, Facebook, and Dropbox. Some such systems, for example LavaBit and SecretInk, have even described themselves as offering "end-to-end" encryption when they do not. Some systems that normally offer end-to-end encryption have turned out to contain a back door that subverts negotiation of the encryption key between the communicating parties, for example Skype or Hushmail.

The end-to-end encryption paradigm does not directly address risks at the communications endpoints themselves, such as the technical exploitation of clients, poor quality random number generators, or key escrow. E2EE also does not address traffic analysis, which relates to things such as the identities of the end points and the times and quantities of messages that are sent.

SSL/TLS

The introduction and rapid growth of e-commerce on the world wide web in the mid-1990s made it obvious that some form of authentication and encryption was needed. Netscape took the first shot at a new standard. At the time, the dominant web browser was Netscape Navigator. Netscape created a standard called secure socket layer (SSL). SSL requires a server with a certificate. When a client requests access to an SSL-secured server, the server sends a copy of the certificate to the client. The SSL client checks this certificate (all web browsers come with an exhaustive list of CA root certificates preloaded), and if the certificate checks out, the server is authenticated and the client negotiates a symmetric-key cipher for use in the session. The session is now in a very secure encrypted tunnel between the SSL server and the SSL client.[13]
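The modern successor to SSL is TLS, and the certificate check and symmetric-session negotiation described above are what Python's standard ssl module performs when wrapping a socket; example.org is a stand-in host:

    import socket
    import ssl

    context = ssl.create_default_context()  # loads the system's CA root certificates

    with socket.create_connection(("example.org", 443)) as sock:
        # The handshake verifies the server's certificate against the CA list
        # and negotiates a symmetric session cipher.
        with context.wrap_socket(sock, server_hostname="example.org") as tls:
            print(tls.version())  # e.g. 'TLSv1.3'
            print(tls.cipher())   # the negotiated symmetric cipher suite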

Views of networks

Users and network administrators typically have different views of their networks. Users can share printers and some servers from a workgroup, which usually means they are in the same geographic location and are on the same LAN, whereas a network administrator is responsible for keeping that network up and running. A community of interest is less tied to a local area and should be thought of as a set of arbitrarily located users who share a set of servers, and who possibly also communicate via peer-to-peer technologies.

Network administrators can see networks from both physical and logical perspectives. The physical perspective involves geographic locations, physical cabling, and the network elements (e.g., routers, bridges and application layer gateways) that interconnect via the transmission media. Logical networks, called subnets in the TCP/IP architecture, map onto one or more transmission media. For example, a common practice in a campus of buildings is to make a set of LAN cables in each building appear to be a common subnet, using virtual LAN (VLAN) technology.

Both users and administrators are aware, to varying extents, of the trust and scope characteristics of a network. Again using TCP/IP architectural terminology, an intranet is a community of interest under private administration usually by an enterprise, and is only accessible by authorized users (e.g. employees).[42] Intranets do not have to be connected to the Internet, but generally have a limited connection. An extranet is an extension of an intranet that allows secure communications to users outside of the intranet (e.g. business partners, customers).[42]

Unofficially, the Internet is the set of users, enterprises, and content providers that are interconnected by Internet Service Providers (ISP). From an engineering viewpoint, the Internet is the set of subnets, and aggregates of subnets, which share the registered IP address space and exchange information about the reachability of those IP addresses using the Border Gateway Protocol. Typically, the human-readable names of servers are translated to IP addresses, transparently to users, via the directory function of the Domain Name System (DNS).

Over the Internet, there can be business-to-business (B2B), business-to-consumer (B2C) and consumer-to-consumer (C2C) communications. When money or sensitive information is exchanged, the communications are apt to be protected by some form of communications security mechanism. Intranets and extranets can be securely superimposed onto the Internet, without any access by general Internet users and administrators, using secure Virtual Private Network (VPN) technology.
