
Thursday, January 17, 2019

Computer network (updated)

From Wikipedia, the free encyclopedia

A computer network, or data network, is a digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections (data links) between nodes. These data links are established over cable media such as copper wires or optical fiber, or wireless media such as Wi-Fi.

Network computer devices that originate, route and terminate the data are called network nodes. Nodes are identified by network addresses, and can include hosts such as personal computers, phones, and servers, as well as networking hardware such as routers and switches. Two such devices can be said to be networked together when one device is able to exchange information with the other device, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered (i.e. carried as payload) over other more general communications protocols. This formidable collection of information technology requires skilled network management to keep it all running reliably.

Computer networks support an enormous number of applications and services such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers, printers, and fax machines, and use of email and instant messaging applications as well as many others. Computer networks differ in the transmission medium used to carry their signals, communications protocols to organize network traffic, the network's size, topology, traffic control mechanism and organizational intent. The best-known computer network is the Internet.

History

The chronology of significant computer-network developments includes:

Properties

Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines. 

A computer network facilitates interpersonal communications allowing users to communicate efficiently and easily via various means: email, instant messaging, online chat, telephone, video telephone calls, and video conferencing. A network allows sharing of network and computing resources. Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or use of a shared storage device. A network allows sharing of files, data, and other types of information giving authorized users the ability to access information stored on other computers on the network. Distributed computing uses computing resources across a network to accomplish tasks. 

A computer network may be used by security hackers to deploy computer viruses or computer worms on devices connected to the network, or to prevent these devices from accessing the network via a denial-of-service attack.

Network packet

Computer communication links that do not support packets, such as traditional point-to-point telecommunication links, simply transmit data as a bit stream. However, the overwhelming majority of computer networks carry their data in packets. A network packet is a formatted unit of data (a list of bits or bytes, usually a few tens of bytes to a few kilobytes long) carried by a packet-switched network. Packets are sent through the network to their destination. Once the packets arrive they are reassembled into their original message.

Packets consist of two kinds of data: control information, and user data (payload). The control information provides data the network needs to deliver the user data, for example: source and destination network addresses, error detection codes, and sequencing information. Typically, control information is found in packet headers and trailers, with payload data in between. 
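
The header/payload/trailer split described above can be sketched in a few lines. The following is a minimal toy example, not any real protocol's format: the field sizes, the byte order, and the use of a CRC-32 trailer are all illustrative assumptions.

```python
import struct
import zlib

def build_packet(src: int, dst: int, seq: int, payload: bytes) -> bytes:
    """Pack a toy packet: header (control info), payload, CRC trailer."""
    header = struct.pack(">IIH", src, dst, seq)                # addresses + sequencing
    trailer = struct.pack(">I", zlib.crc32(header + payload))  # error detection code
    return header + payload + trailer

def parse_packet(packet: bytes):
    """Split a toy packet back into its fields, verifying the checksum."""
    header, payload, trailer = packet[:10], packet[10:-4], packet[-4:]
    src, dst, seq = struct.unpack(">IIH", header)
    (crc,) = struct.unpack(">I", trailer)
    if crc != zlib.crc32(header + payload):
        raise ValueError("corrupted packet")
    return src, dst, seq, payload

pkt = build_packet(src=0x0A000001, dst=0x0A000002, seq=7, payload=b"hello")
print(parse_packet(pkt))  # (167772161, 167772162, 7, b'hello')
```

Note how the control information brackets the user data: the receiver needs the header before it can deliver the payload, and the trailer lets it detect corruption after the payload has arrived.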

With packets, the bandwidth of the transmission medium can be better shared among users than if the network were circuit switched. When one user is not sending packets, the link can be filled with packets from other users, and so the cost can be shared, with relatively little interference, provided the link isn't overused. Often the route a packet needs to take through a network is not immediately available. In that case the packet is queued and waits until a link is free.

Network topology

The physical layout of a network is usually less important than the topology that connects network nodes. Most diagrams that describe a physical network are therefore topological, rather than geographic. The symbols on these diagrams usually denote network links and network nodes.

Network links

The transmission media (often referred to in the literature as the physical media) used to link devices to form a computer network include electrical cable, optical fiber, and radio waves. In the OSI model, these are defined at layers 1 and 2 — the physical layer and the data link layer. 

A widely adopted family of transmission media used in local area network (LAN) technology is collectively known as Ethernet. The media and protocol standards that enable communication between networked devices over Ethernet are defined by IEEE 802.3. Ethernet transmits data over both copper and fiber cables. Wireless LAN standards use radio waves; others use infrared signals as a transmission medium. Power line communication uses a building's power cabling to transmit data.

Wired technologies

Fiber optic cables are used to transmit light from one computer/network node to another.

The following classes of wired technologies are used in computer networking.
  • Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. Transmission speed ranges from 200 million bits per second to more than 500 million bits per second.
  • ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed local area network.
  • Twisted pair cabling is used for wired Ethernet and other standards. It typically consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 Mbit/s to 10 Gbit/s. Twisted pair cabling comes in two forms: unshielded twisted pair (UTP) and shielded twisted-pair (STP). Each form comes in several category ratings, designed for use in various scenarios.
2007 map showing submarine optical fiber telecommunication cables around the world.
  • An optical fiber is a glass fiber. It carries pulses of light that represent data. Some advantages of optical fibers over metal wires are very low transmission loss and immunity to electrical interference. Optical fibers can simultaneously carry multiple streams of data on different wavelengths of light, which greatly increases the rate that data can be sent to up to trillions of bits per second. Optic fibers can be used for long runs of cable carrying very high data rates, and are used for undersea cables to interconnect continents. There are two basic types of fiber optics, single-mode optical fiber (SMF) and multi-mode optical fiber (MMF). Single-mode fiber has the advantage of being able to sustain a coherent signal for dozens or even a hundred kilometers. Multimode fiber is cheaper to terminate but is limited to a few hundred or even only a few dozens of meters, depending on the data rate and cable grade.
Price is a main factor distinguishing wired and wireless technology options in a business. Wireless options command a price premium that can make wired computers, printers, and other devices the more economical choice. Before deciding to purchase hard-wired technology products, it is necessary to review the restrictions and limitations of the options; business and employee needs may override any cost considerations.

Wireless technologies

Computers are very often connected to networks using wireless links.
  • Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 48 km (30 mi) apart.
  • Communications satellites – Satellites communicate via microwave radio waves, which are not deflected by the Earth's atmosphere. The satellites are stationed in space, typically in geosynchronous orbit 35,400 km (22,000 mi) above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
  • Cellular and PCS systems use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next area.
  • Radio and spread spectrum technologies – Wireless local area networks use a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi.
  • Free-space optical communication uses visible or invisible light for communications. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices.

Exotic technologies

There have been various attempts at transporting data over exotic media:
  • IP over Avian Carriers was a humorous April Fools' Request for Comments, issued as RFC 1149. It was implemented in real life in 2001.
  • Extending the Internet to interplanetary dimensions via radio waves, the Interplanetary Internet.
Both cases have a large round-trip delay time, which gives slow two-way communication, but doesn't prevent sending large amounts of information.

Network nodes

Apart from any physical transmission media there may be, networks comprise additional basic system building blocks, such as network interface controllers (NICs), repeaters, hubs, bridges, switches, routers, modems, and firewalls. Any particular piece of equipment will frequently contain multiple building blocks and perform multiple functions.

Network interfaces

An ATM network interface in the form of an accessory card. Many network interfaces are built in.

A network interface controller (NIC) is computer hardware that provides a computer with the ability to access the transmission media, and has the ability to process low-level network information. For example, the NIC may have a connector for accepting a cable, or an aerial for wireless transmission and reception, and the associated circuitry.

The NIC responds to traffic addressed to a network address for either the NIC or the computer as a whole. 

In Ethernet networks, each network interface controller has a unique Media Access Control (MAC) address—usually stored in the controller's permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce.
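
The split between the IEEE-administered manufacturer prefix and the manufacturer-assigned suffix can be illustrated with a small helper. This is a sketch for a colon-separated address string; the function name and error handling are my own, and the example MAC address is arbitrary.

```python
def split_mac(mac: str) -> tuple[str, str]:
    """Split a MAC address into its IEEE-assigned manufacturer prefix (OUI)
    and the manufacturer-assigned device part."""
    octets = mac.lower().split(":")
    if len(octets) != 6 or not all(len(o) == 2 for o in octets):
        raise ValueError("expected six two-digit octets")
    oui = ":".join(octets[:3])       # three most significant octets: manufacturer
    device = ":".join(octets[3:])    # three least significant octets: per-interface
    return oui, device

print(split_mac("00:1A:2B:3C:4D:5E"))  # ('00:1a:2b', '3c:4d:5e')
```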

Repeaters and hubs

A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise and regenerates it. The signal is retransmitted at a higher power level, or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart.

A repeater with multiple ports is known as an Ethernet hub. Repeaters work on the physical layer of the OSI model. Repeaters require a small amount of time to regenerate the signal. This can cause a propagation delay that affects network performance and may affect proper function. As a result, many network architectures limit the number of repeaters that can be used in a row, e.g., the Ethernet 5-4-3 rule.

Hubs and repeaters in LANs have been mostly obsoleted by modern switches.

Bridges

A network bridge connects and filters traffic between two network segments at the data link layer (layer 2) of the OSI model to form a single network. This breaks the network's collision domain but maintains a unified broadcast domain. Network segmentation breaks down a large, congested network into an aggregation of smaller, more efficient networks. 

Bridges come in three basic types:
  1. Local bridges: Directly connect LANs
  2. Remote bridges: Can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, largely have been replaced with routers.
  3. Wireless bridges: Can be used to join LANs or connect remote devices to LANs.

Switches

A network switch is a device that forwards and filters OSI layer 2 datagrams (frames) between ports based on the destination MAC address in each frame. A switch is distinct from a hub in that it only forwards the frames to the physical ports involved in the communication rather than all ports connected. It can be thought of as a multi-port bridge. It learns to associate physical ports to MAC addresses by examining the source addresses of received frames. If an unknown destination is targeted, the switch broadcasts to all ports but the source. Switches normally have numerous ports, facilitating a star topology for devices, and cascading additional switches.
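
The learn-then-forward behavior described above can be modeled in a few lines. This is a conceptual sketch, not a real switch implementation: ports are plain integers and frames are reduced to their source and destination addresses.

```python
class LearningSwitch:
    """Toy model of MAC learning: associate source addresses with ports,
    forward to a known port, otherwise flood all ports but the source."""

    def __init__(self, num_ports: int):
        self.ports = list(range(num_ports))
        self.table: dict[str, int] = {}   # MAC address -> port

    def handle_frame(self, in_port: int, src: str, dst: str) -> list[int]:
        self.table[src] = in_port         # learn where the source lives
        if dst in self.table:
            return [self.table[dst]]      # forward only to the known port
        return [p for p in self.ports if p != in_port]  # unknown dst: flood

sw = LearningSwitch(num_ports=4)
print(sw.handle_frame(0, src="aa", dst="bb"))  # unknown dst: flood -> [1, 2, 3]
print(sw.handle_frame(1, src="bb", dst="aa"))  # "aa" was learned on port 0 -> [0]
```

The second frame is forwarded to a single port because the first frame taught the switch where "aa" is attached; this is exactly what distinguishes a switch from a hub.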

Routers

A typical home or small office router showing the ADSL telephone line and Ethernet network cable connections
 
A router is an internetworking device that forwards packets between networks by processing the routing information included in the packet or datagram (Internet protocol information from layer 3). The routing information is often processed in conjunction with the routing table (or forwarding table). A router uses its routing table to determine where to forward packets. A destination in a routing table can include a "null" interface, also known as the "black hole" interface because data can go into it, however, no further processing is done for said data, i.e. the packets are dropped.

Modems

Modems (modulator-demodulators) are used to connect network nodes via wire not originally designed for digital network traffic, or over wireless links. To do this, one or more carrier signals are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission. Modems are commonly used on telephone lines, using Digital Subscriber Line technology.

Firewalls

A firewall is a network device for controlling network security and access rules. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in cyber attacks.

Network structure

Network topology is the layout or organizational hierarchy of interconnected nodes of a computer network. Different network topologies can affect throughput, but reliability is often more critical. With many technologies, such as bus networks, a single failure can cause the network to fail entirely. In general, the more interconnections there are, the more robust the network is, but the more expensive it is to install.

Common layouts

Common network topologies

Common layouts are:
  • Bus network: all nodes are connected to a common medium along this medium. This was the layout used in the original Ethernet, called 10BASE5 and 10BASE2.
  • Star network: all nodes are connected to a special central node. This is the typical layout found in a wireless LAN, where each wireless client connects to the central wireless access point.
  • Ring network: each node is connected to its left and right neighbour node, such that all nodes are connected and each node can reach each other node by traversing nodes left- or rightwards. The Fiber Distributed Data Interface (FDDI) made use of such a topology.
  • Mesh network: each node is connected to an arbitrary number of neighbours in such a way that there is at least one traversal from any node to any other.
  • Fully connected network: each node is connected to every other node in the network.
  • Tree network: nodes are arranged hierarchically.
Note that the physical layout of the nodes in a network may not necessarily reflect the network topology. As an example, with FDDI, the network topology is a ring (actually two counter-rotating rings), but the physical topology is often a star, because all neighboring connections can be routed via a central physical location.

Overlay network

A sample overlay network
 
An overlay network is a virtual computer network that is built on top of another network. Nodes in the overlay network are connected by virtual or logical links. Each link corresponds to a path, perhaps through many physical links, in the underlying network. The topology of the overlay network may (and often does) differ from that of the underlying one. For example, many peer-to-peer networks are overlay networks. They are organized as nodes of a virtual system of links that run on top of the Internet.

Overlay networks have been around since the invention of networking when computer systems were connected over telephone lines using modems, before any data network existed. 

The most striking example of an overlay network is the Internet itself. The Internet itself was initially built as an overlay on the telephone network. Even today, each Internet node can communicate with virtually any other through an underlying mesh of sub-networks of wildly different topologies and technologies. Address resolution and routing are the means that allow mapping of a fully connected IP overlay network to its underlying network.

Another example of an overlay network is a distributed hash table, which maps keys to nodes in the network. In this case, the underlying network is an IP network, and the overlay network is a table (actually a map) indexed by keys. 
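
The key-to-node mapping of a distributed hash table is often built on consistent hashing. The following is a minimal sketch of that idea, not any production DHT protocol (such as Chord or Kademlia): nodes and keys are hashed onto one ring, and a key belongs to the first node at or after its hash. The node names are hypothetical.

```python
import hashlib
from bisect import bisect_left

class ToyDHT:
    """Minimal consistent-hashing sketch: keys and nodes share one hash
    ring; a key is owned by the first node at or after the key's hash."""

    def __init__(self, nodes: list[str]):
        self.ring = sorted((self._hash(n), n) for n in nodes)

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.sha1(value.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        hashes = [h for h, _ in self.ring]
        i = bisect_left(hashes, self._hash(key)) % len(self.ring)  # wrap the ring
        return self.ring[i][1]

dht = ToyDHT(["node-a", "node-b", "node-c"])
owner = dht.node_for("some-file.txt")
print(owner)
```

Because only keys between a departed node and its predecessor change owner, adding or removing a node moves a small fraction of the keys, which is why real DHTs scale to large overlay networks.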

Overlay networks have also been proposed as a way to improve Internet routing, such as through quality of service guarantees to achieve higher-quality streaming media. Previous proposals such as IntServ, DiffServ, and IP Multicast have not seen wide acceptance largely because they require modification of all routers in the network. On the other hand, an overlay network can be incrementally deployed on end-hosts running the overlay protocol software, without cooperation from Internet service providers. The overlay network has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes that a message traverses before it reaches its destination. 

For example, Akamai Technologies manages an overlay network that provides reliable, efficient content delivery (a kind of multicast). Academic research includes end system multicast, resilient routing and quality of service studies, among others.

Communication protocols

The TCP/IP model or Internet layering scheme and its relation to common protocols often layered on top of it.
 
When a router is present, message flows go down through protocol layers, across to the router, up the stack inside the router, and back down again, and are sent on to the final destination, where they climb back up the stack.
Message flows (A-B) in the presence of a router (R), red flows are effective communication paths, black paths are across the actual network links.
 
A communication protocol is a set of rules for exchanging information over a network. In a protocol stack (also see the OSI model), each protocol leverages the services of the protocol layer below it, until the lowest layer controls the hardware which sends information across the media. The use of protocol layering is today ubiquitous across the field of computer networking. An important example of a protocol stack is HTTP (the World Wide Web protocol) running over TCP over IP (the Internet protocols) over IEEE 802.11 (the Wi-Fi protocol). This stack is used between the wireless router and the home user's personal computer when the user is surfing the web.
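
The "each layer carries the one above as payload" idea behind a protocol stack can be shown with successive encapsulation. The headers below are placeholder byte strings standing in for the real binary TCP, IP, and 802.11 headers; only the nesting order reflects the stack described above.

```python
def encapsulate(payload: bytes, headers: list[bytes]) -> bytes:
    """Wrap application data in successive lower-layer headers,
    innermost layer first (HTTP inside TCP inside IP inside 802.11)."""
    frame = payload
    for header in headers:
        frame = header + frame   # each layer carries the layer above as payload
    return frame

# Hypothetical placeholder headers; real ones are binary structures.
http_request = b"GET / HTTP/1.1\r\n\r\n"
frame = encapsulate(http_request, [b"[TCP]", b"[IP]", b"[802.11]"])
print(frame)  # b'[802.11][IP][TCP]GET / HTTP/1.1\r\n\r\n'
```

The receiver reverses the process, with each layer stripping its own header and handing the remaining payload up the stack.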

Communication protocols have various characteristics. They may be connection-oriented or connectionless, they may use circuit mode or packet switching, and they may use hierarchical addressing or flat addressing. 

There are many communication protocols, a few of which are described below.

IEEE 802

IEEE 802 is a family of IEEE standards dealing with local area networks and metropolitan area networks. The complete IEEE 802 protocol suite provides a diverse set of networking capabilities. The protocols have a flat addressing scheme. They operate mostly at levels 1 and 2 of the OSI model.
For example, MAC bridging (IEEE 802.1D) deals with the routing of Ethernet packets using a Spanning Tree Protocol. IEEE 802.1Q describes VLANs, and IEEE 802.1X defines a port-based Network Access Control protocol, which forms the basis for the authentication mechanisms used in VLANs (but it is also found in WLANs) – it is what the home user sees when the user has to enter a "wireless access key".
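
The loop-free active topology that spanning-tree bridging produces can be approximated with a breadth-first traversal. This sketch is not the actual 802.1D algorithm (which elects a root bridge and exchanges BPDUs); it only shows the end result, that redundant links in a physical loop are left inactive. The link tuples and root choice are hypothetical.

```python
from collections import deque

def spanning_tree(links: list[tuple[str, str]], root: str) -> set[tuple[str, str]]:
    """BFS sketch of what STP achieves: keep a loop-free subset of links."""
    adj: dict[str, list[str]] = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    active, seen, frontier = set(), {root}, deque([root])
    while frontier:
        node = frontier.popleft()
        for nbr in adj[node]:
            if nbr not in seen:            # first path to a node wins;
                seen.add(nbr)              # any later link would form a loop
                active.add((node, nbr))
                frontier.append(nbr)
    return active

links = [("A", "B"), ("B", "C"), ("C", "A")]   # three bridges in a physical loop
tree = spanning_tree(links, root="A")
print(len(tree))  # 2: one redundant link is left out of the active topology
```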

Ethernet

Ethernet, sometimes simply called LAN, is a family of protocols used in wired LANs, described by a set of standards together called IEEE 802.3 published by the Institute of Electrical and Electronics Engineers.

Wireless LAN

Wireless LAN, also widely known as WLAN or Wi-Fi, is probably the most well-known member of the IEEE 802 protocol family for home users today. It is standardized by IEEE 802.11 and shares many properties with wired Ethernet.

Internet Protocol Suite

The Internet Protocol Suite, also called TCP/IP, is the foundation of all modern networking. It offers connectionless as well as connection-oriented services over an inherently unreliable network traversed by datagram transmission at the Internet Protocol (IP) level. At its core, the protocol suite defines the addressing, identification, and routing specifications for Internet Protocol Version 4 (IPv4) and for IPv6, the next generation of the protocol with a much enlarged addressing capability.

SONET/SDH

Synchronous optical networking (SONET) and Synchronous Digital Hierarchy (SDH) are standardized multiplexing protocols that transfer multiple digital bit streams over optical fiber using lasers. They were originally designed to transport circuit mode communications from a variety of different sources, primarily to support real-time, uncompressed, circuit-switched voice encoded in PCM (Pulse-Code Modulation) format. However, due to its protocol neutrality and transport-oriented features, SONET/SDH also was the obvious choice for transporting Asynchronous Transfer Mode (ATM) frames.

Asynchronous Transfer Mode

Asynchronous Transfer Mode (ATM) is a switching technique for telecommunication networks. It uses asynchronous time-division multiplexing and encodes data into small, fixed-sized cells. This differs from other protocols such as the Internet Protocol Suite or Ethernet that use variable sized packets or frames. ATM has similarity with both circuit and packet switched networking. This makes it a good choice for a network that must handle both traditional high-throughput data traffic, and real-time, low-latency content such as voice and video. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the actual data exchange begins.
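
The segmentation of variable-length messages into fixed 53-byte cells (5-byte header, 48-byte payload) can be sketched as follows. The header layout here is a deliberate simplification: a real ATM header packs VPI/VCI and other fields into bit fields, which this toy version does not attempt.

```python
CELL_SIZE = 53                        # every ATM cell is 53 bytes
HEADER_SIZE = 5                       # 5-byte header, 48-byte payload
PAYLOAD_SIZE = CELL_SIZE - HEADER_SIZE

def segment(data: bytes, vci: int) -> list[bytes]:
    """Split a variable-length message into fixed-size ATM-style cells,
    padding the final cell. The header format is a simplification."""
    cells = []
    for i in range(0, len(data), PAYLOAD_SIZE):
        chunk = data[i:i + PAYLOAD_SIZE].ljust(PAYLOAD_SIZE, b"\x00")
        header = vci.to_bytes(2, "big") + b"\x00\x00\x00"  # toy header with VCI
        cells.append(header + chunk)
    return cells

cells = segment(b"x" * 100, vci=42)
print(len(cells), len(cells[0]))  # 3 cells of 53 bytes each
```

Fixed-size cells are what let ATM hardware bound queueing delay per cell, which is why it suits low-latency voice and video alongside bulk data.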

While the role of ATM is diminishing in favor of next-generation networks, it still plays a role in the last mile, which is the connection between an Internet service provider and the home user.

Cellular standards

Geographic scale

A network can be characterized by its physical capacity or its organizational purpose. Use of the network, including user authorization and access rights, differs accordingly.
Nanoscale network
A nanoscale communication network has key components implemented at the nanoscale including message carriers and leverages physical principles that differ from macroscale communication mechanisms. Nanoscale communication extends communication to very small sensors and actuators such as those found in biological systems and also tends to operate in environments that would be too harsh for classical communication.
Personal area network
A personal area network (PAN) is a computer network used for communication among computers and other information technology devices close to one person. Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and even video game consoles. A PAN may include wired and wireless devices. The reach of a PAN typically extends to 10 meters. A wired PAN is usually constructed with USB and FireWire connections, while technologies such as Bluetooth and infrared communication typically form a wireless PAN.
Local area network
A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as a home, school, office building, or closely positioned group of buildings. Each computer or device on the network is a node. Wired LANs are most likely based on Ethernet technology. Newer standards such as ITU-T G.hn also provide a way to create a wired LAN using existing wiring, such as coaxial cables, telephone lines, and power lines.

The defining characteristics of a LAN, in contrast to a wide area network (WAN), include higher data transfer rates, limited geographic range, and lack of reliance on leased lines to provide connectivity. Current Ethernet or other IEEE 802.3 LAN technologies operate at data transfer rates up to 100 Gbit/s, standardized by IEEE in 2010. Currently, 400 Gbit/s Ethernet is being developed. 

A LAN can be connected to a WAN using a router.
Home area network
A home area network (HAN) is a residential LAN used for communication between digital devices typically deployed in the home, usually a small number of personal computers and accessories, such as printers and mobile computing devices. An important function is the sharing of Internet access, often a broadband service through a cable TV or digital subscriber line (DSL) provider.
Storage area network
A storage area network (SAN) is a dedicated network that provides access to consolidated, block level data storage. SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the devices appear like locally attached devices to the operating system. A SAN typically has its own network of storage devices that are generally not accessible through the local area network by other devices. The cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption across both enterprise and small to medium-sized business environments.
Campus area network
A campus area network (CAN) is made up of an interconnection of LANs within a limited geographical area. The networking equipment (switches, routers) and transmission media (optical fiber, copper plant, Cat5 cabling, etc.) are almost entirely owned by the campus tenant / owner (an enterprise, university, government, etc.).

For example, a university campus network is likely to link a variety of campus buildings to connect academic colleges or departments, the library, and student residence halls.
Backbone network
A backbone network is part of a computer network infrastructure that provides a path for the exchange of information between different LANs or sub-networks. A backbone can tie together diverse networks within the same building, across different buildings, or over a wide area.

For example, a large company might implement a backbone network to connect departments that are located around the world. The equipment that ties together the departmental networks constitutes the network backbone. When designing a network backbone, network performance and network congestion are critical factors to take into account. Normally, the backbone network's capacity is greater than that of the individual networks connected to it. 

Another example of a backbone network is the Internet backbone, which is the set of wide area networks (WANs) and core routers that tie together all networks connected to the Internet.
Metropolitan area network
A metropolitan area network (MAN) is a large computer network that usually spans a city or a large campus.
Wide area network
A wide area network (WAN) is a computer network that covers a large geographic area such as a city, country, or spans even intercontinental distances. A WAN uses a communications channel that combines many types of media such as telephone lines, cables, and air waves. A WAN often makes use of transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI reference model: the physical layer, the data link layer, and the network layer.
Enterprise private network
An enterprise private network is a network that a single organization builds to interconnect its office locations (e.g., production sites, head offices, remote offices, shops) so they can share computer resources.
Virtual private network
A virtual private network (VPN) is an overlay network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network when this is the case. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features. 

A VPN may have best-effort performance, or may have a defined service level agreement (SLA) between the VPN customer and the VPN service provider. Generally, a VPN has a topology more complex than point-to-point.
Global area network
A global area network (GAN) is a network used for supporting mobile communications across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off user communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs.

Organizational scope

Networks are typically managed by the organizations that own them. Private enterprise networks may use a combination of intranets and extranets. They may also provide network access to the Internet, which has no single owner and permits virtually unlimited global connectivity.

Intranet

An intranet is a set of networks that are under the control of a single administrative entity. The intranet uses the IP protocol and IP-based tools such as web browsers and file transfer applications. The administrative entity limits use of the intranet to its authorized users. Most commonly, an intranet is the internal LAN of an organization. A large intranet typically has at least one web server to provide users with organizational information. An intranet is also anything behind the router on a local area network.

Extranet

An extranet is a network that is also under the administrative control of a single organization, but supports a limited connection to a specific external network. For example, an organization may provide access to some aspects of its intranet to share data with its business partners or customers. These other entities are not necessarily trusted from a security standpoint. Network connection to an extranet is often, but not always, implemented via WAN technology.

Internetwork

An internetwork is the connection of multiple computer networks via a common routing technology using routers.

Internet

Partial map of the Internet based on the January 15, 2005 data found on opte.org. Each line is drawn between two nodes, representing two IP addresses. The length of the lines is indicative of the delay between those two nodes. This graph represents less than 30% of the Class C networks reachable.
 
The Internet is the largest example of an internetwork. It is a global system of interconnected governmental, academic, corporate, public, and private computer networks. It is based on the networking technologies of the Internet Protocol Suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the United States Department of Defense. The Internet is also the communications backbone underlying the World Wide Web (WWW). 

Participants in the Internet use several hundred documented, and often standardized, protocols compatible with the Internet Protocol Suite, together with an addressing system (IP addresses) administered by the Internet Assigned Numbers Authority and address registries. Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths.

Darknet

A darknet is an overlay network, typically running on the Internet, that is only accessible through specialized software. A darknet is an anonymizing network where connections are made only between trusted peers — sometimes called "friends" (F2F) — using non-standard protocols and ports.
Darknets are distinct from other distributed peer-to-peer networks as sharing is anonymous (that is, IP addresses are not publicly shared), and therefore users can communicate with little fear of governmental or corporate interference.

Routing

Routing calculates good paths through a network for information to take. For example, from node 1 to node 6 the best routes are likely to be 1-8-7-6 or 1-8-10-6, as these paths travel over the thickest (highest-capacity) links.
 
Routing is the process of selecting network paths to carry network traffic. Routing is performed for many kinds of networks, including circuit-switched networks and packet-switched networks.

In packet switched networks, routing directs packet forwarding (the transit of logically addressed network packets from their source toward their ultimate destination) through intermediate nodes. Intermediate nodes are typically network hardware devices such as routers, bridges, gateways, firewalls, or switches. General-purpose computers can also forward packets and perform routing, though they are not specialized hardware and may suffer from limited performance. The routing process usually directs forwarding on the basis of routing tables, which maintain a record of the routes to various network destinations. Thus, constructing routing tables, which are held in the router's memory, is very important for efficient routing. 

There are usually multiple routes that can be taken, and to choose between them, different elements can be considered to decide which routes get installed into the routing table, such as (sorted by priority):
  • Prefix length: where longer subnet masks are preferred (regardless of whether the routes come from the same routing protocol or from different routing protocols)
  • Metric: where a lower metric/cost is preferred (only valid within one and the same routing protocol)
  • Administrative distance: where a lower distance is preferred (only valid between different routing protocols)
Most routing algorithms use only one network path at a time. Multipath routing techniques enable the use of multiple alternative paths. 
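The prefix-length rule above can be sketched in a few lines of Python using the standard ipaddress module. The table entries and next-hop addresses here are hypothetical, and metric and administrative distance are deliberately left out to keep the sketch focused on longest-prefix matching.

```python
import ipaddress

# Hypothetical routing table: (destination prefix, next hop).
# Real routers also compare metric and administrative distance;
# this sketch applies only the first rule: the longest prefix wins.
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"),  "192.0.2.1"),
    (ipaddress.ip_network("10.1.0.0/16"), "192.0.2.2"),
    (ipaddress.ip_network("10.1.2.0/24"), "192.0.2.3"),
    (ipaddress.ip_network("0.0.0.0/0"),   "192.0.2.254"),  # default route
]

def next_hop(destination: str) -> str:
    """Return the next hop for the longest matching prefix."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routing_table if addr in net]
    # Longer subnet masks (larger prefixlen) are preferred.
    best = max(matches, key=lambda m: m[0].prefixlen)
    return best[1]

print(next_hop("10.1.2.99"))  # matched by the /24 entry
print(next_hop("10.9.9.9"))   # matched by the /8 entry
print(next_hop("8.8.8.8"))    # falls through to the default route
```

Because every address matches the 0.0.0.0/0 entry, the default route acts as a catch-all that is only chosen when nothing more specific applies.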

Routing, in a more narrow sense of the term, is often contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Structured addresses allow a single routing table entry to represent the route to a group of devices. In large networks, structured addressing (routing, in the narrow sense) outperforms unstructured addressing (bridging). Routing has become the dominant form of addressing on the Internet. Bridging is still widely used within localized environments.

Network service

Network services are applications hosted by servers on a computer network, to provide some functionality for members or users of the network, or to help the network itself to operate. 

The World Wide Web, E-mail, printing and network file sharing are examples of well-known network services. Network services such as the Domain Name System (DNS) map human-readable names to IP addresses (people remember names like “nm.lan” better than numbers like “210.121.67.18”), while DHCP ensures that the equipment on the network has a valid IP address.

Services are usually based on a service protocol that defines the format and sequencing of messages between clients and servers of that network service.
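As a minimal illustration of the name-resolution service, Python's standard socket module asks the system resolver, which in turn consults DNS or local host tables, for an IPv4 address. "localhost" is used here because it resolves without a network round trip; a name like the text's "nm.lan" would require a DNS server that knows about it.

```python
import socket

def resolve(hostname: str) -> str:
    """Ask the system resolver (backed by DNS or local host
    tables) for an IPv4 address for the given name."""
    return socket.gethostbyname(hostname)

# "localhost" is defined in local host tables on virtually every
# system, so this works even without network access.
print(resolve("localhost"))  # typically 127.0.0.1
```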

Network performance

Quality of service

Depending on the installation requirements, network performance is usually measured by the quality of service of a telecommunications product. The parameters that affect this typically include throughput, jitter, bit error rate and latency.

The following list gives examples of network performance measures for a circuit-switched network and one type of packet-switched network, viz. ATM:
  • Circuit-switched networks: In circuit switched networks, network performance is synonymous with the grade of service. The number of rejected calls is a measure of how well the network is performing under heavy traffic loads. Other types of performance measures can include the level of noise and echo.
  • ATM: In an Asynchronous Transfer Mode (ATM) network, performance can be measured by line rate, quality of service (QoS), data throughput, connect time, stability, technology, modulation technique and modem enhancements.
There are many ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modeled instead of measured. For example, state transition diagrams are often used to model queuing performance in a circuit-switched network. The network planner uses these diagrams to analyze how the network performs in each state, ensuring that the network is optimally designed.
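As a rough sketch of how such measures are computed, the snippet below derives throughput, mean latency, and jitter (taken here as the standard deviation of latency, one common definition) from a handful of hypothetical per-packet measurements. The numbers are illustrative only.

```python
import statistics

# Hypothetical per-packet measurements from a test transfer:
# (one-way latency in ms, payload size in bytes).
samples = [(20.1, 1500), (22.4, 1500), (19.8, 1500), (35.0, 1500), (21.2, 1500)]
duration_s = 0.5  # wall-clock duration of the transfer

latencies = [lat for lat, _ in samples]
total_bytes = sum(size for _, size in samples)

throughput_kbps = total_bytes * 8 / duration_s / 1000   # bits per second -> kbit/s
mean_latency = statistics.mean(latencies)
jitter = statistics.stdev(latencies)  # variation in latency

print(f"throughput: {throughput_kbps:.0f} kbit/s")
print(f"latency: {mean_latency:.1f} ms, jitter: {jitter:.1f} ms")
```

Note how the single 35.0 ms outlier dominates the jitter figure even though it barely moves the mean; this is why latency and jitter are reported separately.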

Network congestion

Network congestion occurs when a link or node is carrying so much data that its quality of service deteriorates. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of these latter two is that incremental increases in offered load lead either to only a small increase in network throughput, or to an actual reduction in network throughput.

Network protocols that use aggressive retransmissions to compensate for packet loss tend to keep systems in a state of network congestion—even after the initial load is reduced to a level that would not normally induce network congestion. Thus, networks using these protocols can exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse.

Modern networks use congestion control, congestion avoidance and traffic control techniques to try to avoid congestion collapse. These include: exponential backoff in protocols such as 802.11's CSMA/CA and the original Ethernet, window reduction in TCP, and fair queueing in devices such as routers. Another method to avoid the negative effects of network congestion is implementing priority schemes, so that some packets are transmitted with higher priority than others. Priority schemes do not solve network congestion by themselves, but they help to alleviate the effects of congestion for some services. An example of this is 802.1p. A third method to avoid network congestion is the explicit allocation of network resources to specific flows. One example of this is the use of Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T G.hn standard, which provides high-speed (up to 1 Gbit/s) Local area networking over existing home wires (power lines, phone lines and coaxial cables). 

For the Internet, RFC 2914 addresses the subject of congestion control in detail.

Network resilience

Network resilience is "the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation."

Security

Network security

Network security consists of provisions and policies adopted by the network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and its network-accessible resources. Network security is the authorization of access to data in a network, which is controlled by the network administrator. Users are assigned an ID and password that allows them access to information and programs within their authority. Network security is used on a variety of computer networks, both public and private, to secure daily transactions and communications among businesses, government agencies and individuals.

Network surveillance

Network surveillance is the monitoring of data being transferred over computer networks such as the Internet. The monitoring is often done surreptitiously and may be done by or at the behest of governments, by corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent agency.

Computer and network surveillance programs are widespread today, and almost all Internet traffic is or could potentially be monitored for clues to illegal activity.

Surveillance is very useful to governments and law enforcement to maintain social control, recognize and monitor threats, and prevent/investigate criminal activity. With the advent of programs such as the Total Information Awareness program, technologies such as high speed surveillance computers and biometrics software, and laws such as the Communications Assistance For Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens.

However, many civil rights and privacy groups—such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union—have expressed concern that increasing surveillance of citizens may lead to a mass surveillance society, with limited political and personal freedoms. Fears such as this have led to numerous lawsuits such as Hepting v. AT&T. The hacktivist group Anonymous has hacked into government websites in protest of what it considers "draconian surveillance".

End to end encryption

End-to-end encryption (E2EE) is a digital communications paradigm of uninterrupted protection of data traveling between two communicating parties. It involves the originating party encrypting data so only the intended recipient can decrypt it, with no dependency on third parties. End-to-end encryption prevents intermediaries, such as Internet providers or application service providers, from discovering or tampering with communications. End-to-end encryption generally protects both confidentiality and integrity.

Examples of end-to-end encryption include HTTPS for web traffic, PGP for email, OTR for instant messaging, ZRTP for telephony, and TETRA for radio.

Typical server-based communications systems do not include end-to-end encryption. These systems can only guarantee protection of communications between clients and servers, not between the communicating parties themselves. Examples of non-E2EE systems are Google Talk, Yahoo Messenger, Facebook, and Dropbox. Some such systems, for example LavaBit and SecretInk, have even described themselves as offering "end-to-end" encryption when they do not. Some systems that normally offer end-to-end encryption have turned out to contain a back door that subverts negotiation of the encryption key between the communicating parties, for example Skype or Hushmail.

The end-to-end encryption paradigm does not directly address risks at the communications endpoints themselves, such as the technical exploitation of clients, poor quality random number generators, or key escrow. E2EE also does not address traffic analysis, which relates to things such as the identities of the end points and the times and quantities of messages that are sent.
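The key idea, two endpoints deriving a shared secret with no dependency on third parties, can be illustrated with a toy Diffie-Hellman exchange in Python. The parameters below (a Mersenne prime modulus and generator 5) are chosen for brevity, not security; real E2EE systems use vetted DH groups or elliptic curves, plus authentication to rule out man-in-the-middle attacks.

```python
import hashlib
import secrets

# Toy parameters: NOT secure for real use.
P = 2**521 - 1  # a Mersenne prime, easy to write down
G = 5

a = secrets.randbelow(P - 3) + 2  # Alice's private exponent
b = secrets.randbelow(P - 3) + 2  # Bob's private exponent

A = pow(G, a, P)  # Alice sends A over the (untrusted) network
B = pow(G, b, P)  # Bob sends B over the (untrusted) network

# Each side combines its own private exponent with the other's
# public value; both arrive at the same secret, which an
# eavesdropper seeing only A and B cannot feasibly compute.
alice_secret = pow(B, a, P)
bob_secret = pow(A, b, P)
assert alice_secret == bob_secret

# Hash the shared secret down to a fixed-size symmetric session key.
session_key = hashlib.sha256(alice_secret.to_bytes(66, "big")).hexdigest()
print("shared session key:", session_key[:16], "...")
```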

SSL/TLS

The introduction and rapid growth of e-commerce on the world wide web in the mid-1990s made it obvious that some form of authentication and encryption was needed. Netscape took the first shot at a new standard. At the time, the dominant web browser was Netscape Navigator. Netscape created a standard called secure socket layer (SSL). SSL requires a server with a certificate. When a client requests access to an SSL-secured server, the server sends a copy of the certificate to the client. The SSL client checks this certificate (all web browsers come with an exhaustive list of CA root certificates preloaded), and if the certificate checks out, the server is authenticated and the client negotiates a symmetric-key cipher for use in the session. The session is now in a very secure encrypted tunnel between the SSL server and the SSL client.
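The client side of this handshake can be sketched with Python's standard ssl module. `create_default_context()` loads the platform's CA root certificates and enables both certificate verification and hostname checking; the host name in the commented usage example is illustrative, and actually connecting requires network access.

```python
import socket
import ssl

# The default context requires the server to present a certificate
# that chains to a preloaded CA root, mirroring the check described above.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname

def open_tls(host: str, port: int = 443) -> ssl.SSLSocket:
    """Connect, verify the server certificate, and negotiate a
    symmetric cipher for the session."""
    sock = socket.create_connection((host, port))
    # server_hostname enables SNI and hostname verification.
    return context.wrap_socket(sock, server_hostname=host)

# Usage (needs network access; host name is illustrative):
#   conn = open_tls("example.org")
#   print(conn.cipher())  # the negotiated symmetric cipher suite
```

If the certificate does not check out, `wrap_socket` raises an `ssl.SSLCertVerificationError` instead of silently proceeding, which is the behavior the handshake description above relies on.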

Views of networks

Users and network administrators typically have different views of their networks. Users can share printers and some servers from a workgroup, which usually means they are in the same geographic location and are on the same LAN, whereas a network administrator is responsible for keeping that network up and running. A community of interest is less tied to a single local area; it is better thought of as a set of arbitrarily located users who share a set of servers and possibly also communicate via peer-to-peer technologies.

Network administrators can see networks from both physical and logical perspectives. The physical perspective involves geographic locations, physical cabling, and the network elements (e.g., routers, bridges and application layer gateways) that interconnect via the transmission media. Logical networks, called subnets in the TCP/IP architecture, map onto one or more transmission media. For example, a common practice in a campus of buildings is to make a set of LAN cables in each building appear to be a common subnet, using virtual LAN (VLAN) technology.

Both users and administrators are aware, to varying extents, of the trust and scope characteristics of a network. Again using TCP/IP architectural terminology, an intranet is a community of interest under private administration usually by an enterprise, and is only accessible by authorized users (e.g. employees). Intranets do not have to be connected to the Internet, but generally have a limited connection. An extranet is an extension of an intranet that allows secure communications to users outside of the intranet (e.g. business partners, customers).

Unofficially, the Internet is the set of users, enterprises, and content providers that are interconnected by Internet Service Providers (ISP). From an engineering viewpoint, the Internet is the set of subnets, and aggregates of subnets, which share the registered IP address space and exchange information about the reachability of those IP addresses using the Border Gateway Protocol. Typically, the human-readable names of servers are translated to IP addresses, transparently to users, via the directory function of the Domain Name System (DNS).

Over the Internet, there can be business-to-business (B2B), business-to-consumer (B2C) and consumer-to-consumer (C2C) communications. When money or sensitive information is exchanged, the communications are apt to be protected by some form of communications security mechanism. Intranets and extranets can be securely superimposed onto the Internet, without any access by general Internet users and administrators, using secure Virtual Private Network (VPN) technology.

Communication

From Wikipedia, the free encyclopedia

Communication (from Latin communicare, meaning "to share") is the act of conveying meanings from one entity or group to another through the use of mutually understood signs, symbols, and semiotic rules.

The main steps inherent to all communication are the formation of communicative intent, message composition and encoding, transmission of the encoded message as a signal, and the reception, decoding, and interpretation of that message by the recipient.
The scientific study of communication spans subfields such as information theory, communication studies, and biosemiotics.
The channel of communication can be visual, auditory, tactile (such as in Braille) and haptic, olfactory, electromagnetic, or biochemical.

Human communication is unique for its extensive use of abstract language. Development of civilization has been closely linked with progress in telecommunication.

Non-verbal

Nonverbal communication describes the processes of conveying a type of information in the form of non-linguistic representations. Examples of nonverbal communication include haptic communication, chronemic communication, gestures, body language, facial expressions, eye contact, and how one dresses. Nonverbal communication also relates to the intent of a message. Examples of intent are voluntary, intentional movements like shaking a hand or winking, as well as involuntary, such as sweating. Speech also contains nonverbal elements known as paralanguage, e.g. rhythm, intonation, tempo, and stress. It affects communication most at the subconscious level and establishes trust. Likewise, written texts include nonverbal elements such as handwriting style, the spatial arrangement of words and the use of emoticons to convey emotion.

Nonverbal communication demonstrates one of Paul Watzlawick's laws: you cannot not communicate. Once proximity has formed awareness, living creatures begin interpreting any signals received. Some of the functions of nonverbal communication in humans are to complement and illustrate, to reinforce and emphasize, to replace and substitute, to control and regulate, and to contradict the denotative message.

Nonverbal cues are heavily relied on to express communication and to interpret others' communication and can replace or substitute verbal messages. However, non-verbal communication is ambiguous. When verbal messages contradict non-verbal messages, observation of non-verbal behavior is relied on to judge another's attitudes and feelings, rather than assuming the truth of the verbal message alone.

There are several reasons as to why non-verbal communication plays a vital role in communication:
"Non-verbal communication is omnipresent." Non-verbal elements are included in every single communication act. To have total communication, all non-verbal channels such as the body, face, voice, appearance, touch, distance, timing, and other environmental forces must be engaged during face-to-face interaction. Written communication can also have non-verbal attributes. E-mails and web chats allow an individual the option to change text font colours, stationery, emoticons, and capitalization in order to capture non-verbal cues in a verbal medium. 

"Non-verbal behaviors are multifunctional." Many different non-verbal channels are engaged at the same time in communication acts and allow the chance for simultaneous messages to be sent and received. 

"Non-verbal behaviors may form a universal language system." Smiling, crying, pointing, caressing, and glaring are non-verbal behaviors that are used and understood by people regardless of nationality. Such non-verbal signals allow the most basic form of communication when verbal communication is not effective due to language barriers.

Verbal

Verbal communication is the spoken or written conveyance of a message. Human language can be defined as a system of symbols (sometimes known as lexemes) and the grammars (rules) by which the symbols are manipulated. The word "language" also refers to common properties of languages. Language learning normally occurs most intensively during human childhood. Most of the thousands of human languages use patterns of sound or gesture for symbols which enable communication with others around them. Languages tend to share certain properties, although there are exceptions. There is no defined line between a language and a dialect. Constructed languages such as Esperanto, programming languages, and various mathematical formalisms are not necessarily restricted to the properties shared by human languages. 

As previously mentioned, language can be characterized as symbolic. Charles Ogden and I.A. Richards developed the Triangle of Meaning model to explain the relationship between the symbol (the word), the referent (the thing it describes), and the meaning (the thought associated with the word and the thing).

The properties of language are governed by rules. Language follows phonological rules (sounds that appear in a language), syntactic rules (arrangement of words and punctuation in a sentence), semantic rules (the agreed upon meaning of words), and pragmatic rules (meaning derived upon context).

The meanings attached to words can be literal, otherwise known as denotative, relating to the topic being discussed; or the meanings can take context and relationships into account, otherwise known as connotative, relating to the feelings, history, and power dynamics of the communicators.

Contrary to popular belief, signed languages of the world (e.g., American Sign Language) are considered verbal communication because their sign vocabulary, grammar, and other linguistic structures exhibit the same properties as spoken languages. There are, however, nonverbal elements to signed languages, such as the speed, intensity, and size of the signs that are made. A signer might sign "yes" in response to a question, or they might sign a sarcastically large, slow "yes" to convey a different nonverbal meaning. The sign "yes" is the verbal message, while the other movements add nonverbal meaning to the message.

Written communication and its historical development

Over time the forms of and ideas about communication have evolved through the continuing progression of technology. Advances include communications psychology and media psychology, an emerging field of study.

The progression of written communication can be divided into three "information communication revolutions":
  1. Written communication first emerged through the use of pictographs. The pictograms were made in stone, hence written communication was not yet mobile. Pictograms began to develop standardized and simplified forms.
  2. The next step occurred when writing began to appear on paper, papyrus, clay, wax, and other media with commonly shared writing systems, leading to adaptable alphabets. Communication became mobile.
  3. The final stage is characterized by the transfer of information through controlled waves of electromagnetic radiation (i.e., radio, microwave, infrared) and other electronic signals.
Communication is thus a process by which meaning is assigned and conveyed in an attempt to create shared understanding. Gregory Bateson called it "the replication of tautologies in the universe." This process, which requires a vast repertoire of skills in interpersonal processing, listening, observing, speaking, questioning, analyzing, gesturing, and evaluating, enables collaboration and cooperation.

Business

Business communication is used for a wide variety of activities including, but not limited to: strategic communications planning, media relations, public relations (which can include social media, broadcast and written communications, and more), brand management, reputation management, speech-writing, customer-client relations, and internal/employee communications.

Companies with limited resources may choose to engage in only a few of these activities, while larger organizations may employ a full spectrum of communications. Since it is difficult to develop such a broad range of skills, communications professionals often specialize in one or two of these areas but usually have at least a working knowledge of most of them. By far, the most important qualifications communications professionals can possess are excellent writing ability, good 'people' skills, and the capacity to think critically and strategically.

Political

Communication is one of the most relevant tools in political strategies, including persuasion and propaganda. In mass media research and online media research, the effort of the strategist is that of getting a precise decoding, avoiding "message reactance", that is, message refusal. The reaction to a message is also described in terms of the audience's approach to the message, as follows:
  • In "radical reading" the audience rejects the meanings, values, and viewpoints built into the text by its makers. Effect: message refusal.
  • In "dominant reading", the audience accepts the meanings, values, and viewpoints built into the text by its makers. Effect: message acceptance.
  • In "subordinate reading" the audience accepts, by and large, the meanings, values, and worldview built into the text by its makers. Effect: obedience to the message.
Holistic approaches are used by communication campaign leaders and communication strategists in order to examine all the options, "actors" and channels that can generate change in the semiotic landscape, that is, change in perceptions, change in credibility, change in the "memetic background", change in the image of movements, of candidates, players and managers as perceived by key influencers that can have a role in generating the desired "end-state".

The modern political communication field is highly influenced by the framework and practices of "information operations" doctrines that derive their nature from strategic and military studies. According to this view, what is really relevant is the concept of acting on the Information Environment. The information environment is the aggregate of individuals, organizations, and systems that collect, process, disseminate, or act on information. This environment consists of three interrelated dimensions, which continuously interact with individuals, organizations, and systems. These dimensions are known as physical, informational, and cognitive.

Family

Family communication is the study of the communication perspective in a broadly defined family, characterized by intimacy and trusting relationships. The main goal of family communication is to understand the interactions of the family and the patterns of behavior of family members in different circumstances. Open and honest communication creates an atmosphere that allows family members to express their differences as well as love and admiration for one another. It also helps them to understand one another's feelings. 

Family communication study looks at topics such as family rules, family roles or family dialectics and how those factors could affect the communication between family members. Researchers develop theories to understand communication behaviors. Family communication study also digs deep into certain time periods of family life such as marriage, parenthood or divorce and how communication stands in those situations. It is important for family members to understand communication as a trusted practice that leads to a well-constructed family.

Interpersonal

In simple terms, interpersonal communication is the communication between one person and another (or others). It is often referred to as face-to-face communication between two (or more) people. Both verbal and nonverbal communication, or body language, play a part in how one person understands another. In verbal interpersonal communication there are two types of messages being sent: a content message and a relational message. Content messages are messages about the topic at hand and relational messages are messages about the relationship itself. This means that relational messages come across in how one says something and it demonstrates a person's feelings, whether positive or negative, towards the individual they are talking to, indicating not only how they feel about the topic at hand, but also how they feel about their relationship with the other individual.

There are many different aspects of interpersonal communication including:
  • Audiovisual Perception of Communication Problems. This concept follows the idea that the form our words take changes based on the stress level or urgency of the situation. It also explores the idea that stuttering during speech signals to the audience that there is a problem or that the situation is more stressful.
  • The Attachment Theory. This is the combined work of John Bowlby and Mary Ainsworth (Ainsworth & Bowlby, 1991). This theory follows the relationship that builds between a mother and child, and the impact it has on their relationships with others.
  • Emotional Intelligence and Triggers. Emotional intelligence focuses on the ability to monitor one's own emotions as well as those of others. Emotional triggers focus on events or people that tend to set off intense, emotional reactions within individuals.
  • Attribution Theory. This is the study of how individuals explain what causes different events and behaviors.
  • The Power of Words (Verbal communications). Verbal communication focuses heavily on the power of words, and how those words are said. It takes into consideration tone, volume, and choice of words.
  • Nonverbal Communication. It focuses heavily on the setting that the words are conveyed in, as well as the physical tone of the words.
  • Ethics in Personal Relations. This concerns a space of mutual responsibility between two individuals; it is about giving and receiving in a relationship. This theory is explored by Dawn J. Lipthrott in the article What IS Relationship? What is Ethical Partnership?
  • Deception in Communication. This concept explores the idea that everyone lies and how lying can impact relationships. This theory is explored by James Hearn in his article Interpersonal Deception Theory: Ten Lessons for Negotiators.
  • Conflict in Couples. This focuses on the impact that social media has on relationships, as well as how to communicate through conflict. This theory is explored by Amanda Lenhart and Maeve Duggan in their paper Couples, the Internet, and Social Media

Barriers to effectiveness

Barriers to effective communication can retard or distort the message or intention of the message being conveyed. This may result in failure of the communication process or cause an effect that is undesirable. These include filtering, selective perception, information overload, emotions, language, silence, communication apprehension, gender differences and political correctness.

This also includes a lack of expressing "knowledge-appropriate" communication, which occurs when a person uses ambiguous or complex legal words, medical jargon, or descriptions of a situation or environment that is not understood by the recipient.
  • Physical barriers – Physical barriers are often due to the nature of the environment. An example of this is the natural barrier which exists if staff is located in different buildings or on different sites. Likewise, poor or outdated equipment, particularly the failure of management to introduce new technology, may also cause problems. Staff shortages are another factor which frequently causes communication difficulties for an organization.
  • System design – System design faults refer to problems with the structures or systems in place in an organization. Examples might include an organizational structure which is unclear and therefore makes it confusing to know whom to communicate with. Other examples could be inefficient or inappropriate information systems, a lack of supervision or training, and a lack of clarity in roles and responsibilities which can lead to staff being uncertain about what is expected of them.
  • Attitudinal barriers – Attitudinal barriers come about as a result of problems with staff in an organization. They may be brought about, for example, by poor management, lack of consultation with employees, personality conflicts which can result in people delaying or refusing to communicate, the personal attitudes of individual employees due to lack of motivation or dissatisfaction at work, insufficient training to enable them to carry out particular tasks, or simply resistance to change due to entrenched attitudes and ideas.
  • Ambiguity of words/phrases – Words that sound the same but have different meanings can convey a different message altogether. Hence the communicator must ensure that the receiver receives the intended meaning. It is better to avoid such words by using alternatives whenever possible.
  • Individual linguistic ability – The use of jargon, difficult or inappropriate words in communication can prevent the recipients from understanding the message. Poorly explained or misunderstood messages can also result in confusion. However, research in communication has shown that confusion can lend legitimacy to research when persuasion fails.
  • Physiological barriers – These may result from individuals' personal discomfort, caused—for example—by ill health, poor eyesight or hearing difficulties.
  • Bypassing – This happens when the communicators (sender and receiver) do not attach the same symbolic meanings to their words: the sender expresses one thought with a word, but the receiver takes it in a different meaning. For example: "ASAP", "rest room".
  • Technological multi-tasking and absorbency – With a rapid increase in technologically-driven communication in the past several decades, individuals are increasingly faced with condensed communication in the form of e-mail, text, and social updates. This has, in turn, led to a notable change in the way younger generations communicate and perceive their own self-efficacy to communicate and connect with others. With the ever-constant presence of another "world" in one's pocket, individuals are multi-tasking both physically and cognitively as constant reminders of something else happening somewhere else bombard them. Though perhaps too new of an advancement to yet see long-term effects, this is a notion currently explored by such figures as Sherry Turkle.
  • Fear of being criticized – This is a major factor that prevents good communication. Simple practice can improve our communication skills and make us effective communicators. For example, read an article from the newspaper or collect some news from the television and present it in front of the mirror. This will not only boost your confidence but also improve your language and vocabulary.
  • Gender barriers – Most communicators, whether aware of it or not, often have a set agenda. This is very notable among the different genders. For example, many women are found to be more critical in addressing conflict. It has also been noted that men are more likely to withdraw from conflict in comparison to women. This breakdown and comparison not only shows that there are many factors in communication between two specific genders, but also room for improvement as well as established guidelines for all.

Cultural aspects

Cultural differences exist within countries (tribal/regional differences, dialects etc.), between religious groups and in organisations or at an organisational level – where companies, teams and units may have different expectations, norms and idiolects. Families and family groups may also experience the effect of cultural barriers to communication within and between different family members or groups. For example, words, colours and symbols have different meanings in different cultures. In most parts of the world, nodding your head means agreement and shaking your head means disagreement, though in some parts of the world these gestures are reversed.

Communication is to a great extent influenced by culture and cultural variables. Understanding the cultural aspects of communication means having knowledge of different cultures in order to communicate effectively across them. Cultural aspects of communication are of great relevance in today's world, which is now a global village thanks to globalisation. Cultural aspects of communication are the cultural differences which influence communication across borders. The impact of cultural differences on the components of communication is explained below:
  • Verbal communication refers to a form of communication which uses spoken and written words for expressing and transferring views and ideas. Language is the most important tool of verbal communication, and it is the area where cultural differences play their role. All countries have different languages, and a better understanding of a different culture requires knowledge of its language.
  • Non-verbal communication is a very wide concept that includes all the other forms of communication which do not use written or spoken words. Non-verbal communication takes the following forms:
    • Paralinguistics covers the vocal elements of communication other than the actual language, such as tone, pitch and vocal cues. It also includes sounds from the throat, and all of these are greatly influenced by cultural differences across borders.
    • Proxemics deals with the concept of the space element in communication. Proxemics explains four zones of space, namely intimate, personal, social and public. This concept differs between cultures, as the permissible space varies across countries.
    • Artifactics studies the non-verbal signals or communication which emerge from personal accessories such as dress or fashion accessories, and it varies with culture as people of different countries follow different dress codes.
    • Chronemics deals with the time aspects of communication, including the importance given to time. Some issues explaining this concept are pauses, silences and response lag during an interaction. This aspect of communication is also influenced by cultural differences, as it is well known that there is a great difference in the value given to time by different cultures.
    • Kinesics mainly deals with body language such as postures, gestures, head nods, leg movements etc. In different countries, the same gestures and postures are used to convey different messages. Sometimes even a particular kinesic indicating something good in one country may have a negative meaning in another culture.
So in order to have effective communication across the world it is desirable to have knowledge of the cultural variables affecting communication.

According to Michael Walsh and Ghil'ad Zuckermann, Western conversational interaction is typically "dyadic", between two particular people, where eye contact is important and the speaker controls the interaction; and "contained" in a relatively short, defined time frame. However, traditional Aboriginal conversational interaction is "communal", broadcast to many people, eye contact is not important, the listener controls the interaction; and "continuous", spread over a longer, indefinite time frame.

Nonhuman

Every information exchange between living organisms, i.e. transmission of signals involving a living sender and receiver, can be considered a form of communication; even primitive creatures such as corals are competent to communicate. Nonhuman communication also includes cell signaling, cellular communication, and chemical transmissions between primitive organisms like bacteria and within the plant and fungal kingdoms.

Animals

The broad field of animal communication encompasses most of the issues in ethology. Animal communication can be defined as any behavior of one animal that affects the current or future behavior of another animal. The study of animal communication, called zoosemiotics (distinguishable from anthroposemiotics, the study of human communication), has played an important part in the development of ethology, sociobiology, and the study of animal cognition. Animal communication, and indeed the understanding of the animal world in general, is a rapidly growing field. Even in the 21st century so far, a great share of prior understanding related to diverse fields such as personal symbolic name use, animal emotions, animal culture and learning, and even sexual conduct, long thought to be well understood, has been revolutionized.

Plants and fungi

Communication is observed within the plant organism, i.e. within plant cells and between plant cells, between plants of the same or related species, and between plants and non-plant organisms, especially in the root zone. Plant roots communicate with rhizosphere bacteria, fungi, and insects within the soil. Recent research has shown that most of the microorganism–plant communication processes are neuron-like. Plants also communicate via volatiles when exposed to herbivore attack, thus warning neighboring plants. In parallel they produce other volatiles to attract parasites which attack these herbivores.

Fungi communicate to coordinate and organize their growth and development, such as the formation of mycelia and fruiting bodies. Fungi communicate with their own and related species as well as with non-fungal organisms in a great variety of symbiotic interactions, especially with bacteria, unicellular eukaryotes, plants and insects, through biochemicals of biotic origin. The biochemicals trigger the fungal organism to react in a specific manner, while the same chemical molecules do not trigger a reaction if they are not part of biotic messages. This implies that fungal organisms can differentiate between molecules taking part in biotic messages and similar molecules that are irrelevant in the situation. So far five different primary signalling molecules are known to coordinate different behavioral patterns such as filamentation, mating, growth, and pathogenicity. Behavioral coordination and production of signaling substances are achieved through interpretation processes that enable the organism to distinguish between self and non-self, recognize a biotic indicator or a biotic message from similar, related, or non-related species, and even filter out "noise", i.e. similar molecules without biotic content.

Bacteria quorum sensing

Communication is not a tool used only by humans, plants and animals; it is also used by microorganisms like bacteria. The process is called quorum sensing. Through quorum sensing, bacteria are able to sense the density of cells and regulate gene expression accordingly. This can be seen in both Gram-positive and Gram-negative bacteria. This was first observed by Fuqua et al. in marine microorganisms like V. harveyi and V. fischeri.
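
The logic of quorum sensing can be sketched as a simple threshold rule. The sketch below is a hypothetical illustration: the threshold value and the assumption that autoinducer concentration is proportional to cell density are simplifications, not figures taken from the source.

```python
# Toy quorum-sensing model: bacteria secrete an autoinducer whose
# concentration tracks cell density; once the concentration crosses a
# threshold, density-dependent genes switch on (e.g. bioluminescence
# in V. fischeri). The threshold here is an arbitrary illustration.

def quorum_genes_on(cell_density: float, threshold: float = 1e7) -> bool:
    """Return True when the population is dense enough to activate
    density-dependent gene expression."""
    autoinducer = cell_density  # assume concentration proportional to density
    return autoinducer >= threshold

print(quorum_genes_on(1e5))  # sparse culture: genes off -> False
print(quorum_genes_on(1e8))  # dense culture: genes on -> True
```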

Models

Shannon and Weaver Model of Communication
 
Communication major dimensions scheme
 
Interactional Model of Communication
 
Berlo's Sender-Message-Channel-Receiver Model of Communication
 
Transactional model of communication
 
Communication code scheme
 
Linear Communication Model
 
The first major model for communication was introduced by Claude Shannon and Warren Weaver for Bell Laboratories in 1949. The original model was designed to mirror the functioning of radio and telephone technologies. Their initial model consisted of three primary parts: sender, channel, and receiver. The sender was the part of a telephone a person spoke into, the channel was the telephone itself, and the receiver was the part of the phone where one could hear the other person. Shannon and Weaver also recognized that often there is static that interferes with one listening to a telephone conversation, which they deemed noise. 

In a simple model, often referred to as the transmission model or standard view of communication, information or content (e.g. a message in natural language) is sent in some form (as spoken language) from an emitter (emisor in the picture)/ sender/ encoder to a destination/ receiver/ decoder. This common conception of communication simply views communication as a means of sending and receiving information. The strengths of this model are simplicity, generality, and quantifiability. Claude Shannon and Warren Weaver structured this model based on the following elements:
  1. An information source, which produces a message.
  2. A transmitter, which encodes the message into signals.
  3. A channel, to which signals are adapted for transmission.
  4. A noise source, which distorts the signal while it propagates through the channel.
  5. A receiver, which 'decodes' (reconstructs) the message from the signal.
  6. A destination, where the message arrives.
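The six elements above can be illustrated as a toy pipeline in Python. This is a sketch for intuition only, not part of Shannon and Weaver's formalism: a transmitter encodes text into bits, a noise source randomly flips some of them in the channel, and a receiver reconstructs the message at the destination.

```python
import random

def transmit(message: str, noise_prob: float, seed: int = 0) -> str:
    """Toy Shannon-Weaver pipeline: source -> transmitter -> noisy channel
    -> receiver -> destination."""
    rng = random.Random(seed)
    # Transmitter: encode the message into a signal (here, a list of bits).
    bits = [int(b) for ch in message for b in format(ord(ch), "08b")]
    # Channel + noise source: each bit may be flipped with probability noise_prob.
    received = [b ^ 1 if rng.random() < noise_prob else b for b in bits]
    # Receiver: decode (reconstruct) the message from the signal.
    chars = []
    for i in range(0, len(received), 8):
        byte = received[i:i + 8]
        chars.append(chr(int("".join(map(str, byte)), 2)))
    return "".join(chars)  # Destination: where the message arrives.

print(transmit("hello", 0.0))  # noiseless channel: the message arrives intact
print(transmit("hello", 0.2))  # noisy channel: the message is likely corrupted
```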
Shannon and Weaver argued that there were three levels of problems for communication within this theory.
  • The technical problem: how accurately can the message be transmitted?
  • The semantic problem: how precisely is the meaning 'conveyed'?
  • The effectiveness problem: how effectively does the received meaning affect behavior?
Daniel Chandler critiques the transmission model by stating:
  • It assumes communicators are isolated individuals.
  • No allowance for differing purposes.
  • No allowance for differing interpretations.
  • No allowance for unequal power relations.
  • No allowance for situational contexts.
In 1960, David Berlo expanded on Shannon and Weaver's (1949) linear model of communication and created the SMCR Model of Communication. The Sender-Message-Channel-Receiver Model of communication separated the model into clear parts and has been expanded upon by other scholars. 

Communication is usually described along a few major dimensions: message (what type of things are communicated), source / emisor / sender / encoder (by whom), form (in which form), channel (through which medium), and destination / receiver / target / decoder (to whom). Wilbur Schramm (1954) also indicated that we should examine the impact that a message has (both desired and undesired) on the target of the message. Between parties, communication includes acts that confer knowledge and experiences, give advice and commands, and ask questions. These acts may take many forms, in one of the various manners of communication. The form depends on the abilities of the group communicating. Together, communication content and form make messages that are sent towards a destination. The target can be oneself, another person or being, or another entity (such as a corporation or group of beings). 

Communication can be seen as processes of information transmission with three levels of semiotic rules:
  1. Pragmatic (concerned with the relations between signs/expressions and their users)
  2. Semantic (study of relationships between signs and symbols and what they represent) and
  3. Syntactic (formal properties of signs and symbols).
Therefore, communication is social interaction where at least two interacting agents share a common set of signs and a common set of semiotic rules. This commonly held rule in some sense ignores autocommunication, including intrapersonal communication via diaries or self-talk, both secondary phenomena that followed the primary acquisition of communicative competences within social interactions. 

In light of these weaknesses, Barnlund (2008) proposed a transactional model of communication. The basic premise of the transactional model of communication is that individuals are simultaneously engaging in the sending and receiving of messages. 

In a slightly more complex form a sender and a receiver are linked reciprocally. This second attitude of communication, referred to as the constitutive model or constructionist view, focuses on how an individual communicates as the determining factor of the way the message will be interpreted. Communication is viewed as a conduit; a passage in which information travels from one individual to another and this information becomes separate from the communication itself. A particular instance of communication is called a speech act. The sender's personal filters and the receiver's personal filters may vary depending upon different regional traditions, cultures, or gender; which may alter the intended meaning of message contents. In the presence of "communication noise" on the transmission channel (air, in this case), reception and decoding of content may be faulty, and thus the speech act may not achieve the desired effect. One problem with this encode-transmit-receive-decode model is that the processes of encoding and decoding imply that the sender and receiver each possess something that functions as a codebook, and that these two code books are, at the very least, similar if not identical. Although something like code books is implied by the model, they are nowhere represented in the model, which creates many conceptual difficulties.

Theories of coregulation describe communication as a creative and dynamic continuous process, rather than a discrete exchange of information. Canadian media scholar Harold Innis theorized that people use different types of media to communicate, and that which one they choose will offer different possibilities for the shape and durability of society. His famous example is ancient Egypt, looking at the ways the Egyptians built themselves out of two media with very different properties: stone and papyrus. Papyrus is what he called "space binding": it made possible the transmission of written orders across space and empires, enabling the waging of distant military campaigns and colonial administration. Stone, by contrast, is "time binding": through the construction of temples and pyramids, rulers could sustain their authority from generation to generation, and through these media they could shape and change communication in their society.

Noise

In any communication model, noise is interference with the decoding of messages sent over a channel by an encoder. There are many examples of noise:
  • Environmental noise. Noise that physically disrupts communication, such as standing next to loudspeakers at a party, or the noise from a construction site next to a classroom making it difficult to hear the professor.
  • Physiological-impairment noise. Physical maladies that prevent effective communication, such as actual deafness or blindness preventing messages from being received as they were intended.
  • Semantic noise. Different interpretations of the meanings of certain words. For example, the word "weed" can be interpreted as an undesirable plant in a yard, or as a euphemism for marijuana.
  • Syntactical noise. Mistakes in grammar can disrupt communication, such as abrupt changes in verb tense during a sentence.
  • Organizational noise. Poorly structured communication can prevent the receiver from accurate interpretation. For example, unclear and badly stated directions can make the receiver even more lost.
  • Cultural noise. Stereotypical assumptions can cause misunderstandings, such as unintentionally offending a non-Christian person by wishing them a "Merry Christmas".
  • Psychological noise. Certain attitudes can also make communication difficult. For instance, great anger or sadness may cause someone to lose focus on the present moment. Disorders such as autism may also severely hamper effective communication.
To cope with communication noise, redundancy and acknowledgement must often be used. Acknowledgements are messages from the addressee informing the originator that his/her communication has been received and understood. Message repetition and feedback about the message received are necessary in the presence of noise to reduce the probability of misunderstanding. Disambiguation is the attempt to reduce noise and wrong interpretations when the semantic value or meaning of a sign is subject to noise, or in the presence of multiple meanings, which makes sense-making difficult. Disambiguation attempts to decrease the likelihood of misunderstanding. It is also a fundamental skill in communication processes activated by counselors, psychotherapists, interpreters, and in coaching sessions based on colloquium. In information technology, the disambiguation process and the automatic disambiguation of the meanings of words and sentences have been an interest and concern since the earliest days of computer treatment of language.
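
The effect of redundancy on noise can be shown with a small simulation (a hypothetical sketch, not from the source): each bit is sent several times over a noisy channel and the receiver takes a majority vote, which sharply lowers the probability of misunderstanding compared with a single transmission.

```python
import random
from collections import Counter

def send_with_redundancy(bit: int, repeats: int, flip_prob: float, rng) -> int:
    """Send one bit `repeats` times over a channel that flips each copy with
    probability `flip_prob`; the receiver decodes by majority vote."""
    received = [bit ^ 1 if rng.random() < flip_prob else bit
                for _ in range(repeats)]
    return Counter(received).most_common(1)[0][0]

rng = random.Random(42)
trials = 10_000
# Misunderstanding rate with no redundancy vs. five-fold repetition:
errors_1 = sum(send_with_redundancy(1, 1, 0.2, rng) != 1 for _ in range(trials))
errors_5 = sum(send_with_redundancy(1, 5, 0.2, rng) != 1 for _ in range(trials))
print(errors_1 / trials)  # close to the raw flip probability (~0.2)
print(errors_5 / trials)  # much lower: majority vote absorbs isolated flips
```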

As academic discipline

The academic discipline that deals with processes of human communication is communication studies. The discipline encompasses a range of topics, from face-to-face conversation to mass media outlets such as television broadcasting. Communication studies also examines how messages are interpreted through the political, cultural, economic, semiotic, hermeneutic, and social dimensions of their contexts. Statistics, as a quantitative approach to communication science, has also been incorporated into research on communication science in order to help substantiate claims.

Extracting functional mitochondria using microfluidics devices

January 16, 2019 by Thamarasee Jeewandara
 
a) Cells are introduced into the cross-junction of the microchannel. The stress applied on the cell is optimized to disrupt the cell membrane and release subcellular components, while maintaining the integrity of mitochondria.
 
Mitochondria are dynamic, bioenergetic intracellular organelles responsible for energy production via ATP synthesis during respiration. They are involved in key cellular metabolic tasks that regulate vital physiological responses of cells, including cell signaling, cell differentiation and cell death. Defective mitochondria are linked to several critical human genetic diseases, including neurodegenerative disorders, cancer and cardiovascular disease.
 
The detailed characterization of functional mitochondria remains relatively unexplored due to a lack of effective organelle extraction methods. For instance, the extraction process must sustain sufficient functionality of the organelle ex vivo to illuminate its cytosolic functions in the presence of the cytoskeleton and other subcellular organelles. Since mitochondria grow in a complex reticular network within the cell and undergo structural alterations, their intracellular characterization is further complicated. As a result, in vitro analysis remains the mainstream method: mitochondria are separately extracted so that their intrinsic properties can be understood without the interference of other subcellular organelles. 

In a recent study, now published in Microsystems & Nanoengineering, Habibur Rahman and colleagues at the Department of Biomedical Engineering explored the possibility of controlling hydrodynamic stress for efficient mitochondrial extraction. For this, they used a cross-junction microfluidic geometry at the microscale to selectively disrupt the cell membrane while securing the mitochondrial membrane's integrity. 

3D geometry of the cross-slot microfluidics channel. (a) Overall geometry and the boundary conditions of the model. (b) Meshing of the elements as zoomed in the cross-slot region. Credit: Microsystems and Nanoengineering, doi: https://doi.org/10.1038/s41378-018-0037-y
 
Advances in microfluidics have demonstrated the advantages of on-chip laboratory procedures with significantly reduced sample size and increased experimental reproducibility. Hydrodynamic stress produced in microfluidic chips can be used to open cellular or nuclear membranes transiently during intracellular gene delivery. The potential of such techniques has rarely been examined for extracting subcellular organelles since the constrained geometries of microchannels can cause subcellular component clogging in the micromachines.
 
The authors optimized the experimental conditions of operation based on previous studies to effectively shred cell membranes while retaining intact mitochondria in model mammalian cell lines. The model cell lines of interest were human embryonic kidney cells (HEK293), mouse muscle cells (C2C12) and neuroblastoma cells (SH-SY5Y).
 
In the working principle of the proposed microscale cell shredder, the scientists exploited the difference in elastic modulus between the mitochondrial membrane and the cellular membrane to disrupt the cell while leaving the mitochondrial membrane intact. An increased stress level in the system could disrupt cell membranes with higher elastic moduli (as seen with the neuroblastoma cell line). The study compared the protein yield and the concentration of extracted functional mitochondria using the proposed method vs. commercially available kits for a range of cell concentrations.

Cell disruption and protein extraction efficiency using the microscale cell shredder, the Dounce Homogenizer and Qiagen Mitochondria Isolation Kit.
 
The findings showed the proposed microscale cell shredder method was more efficient than the commercial kits by yielding approximately 40 percent more functional mitochondria. The scientists were able to preserve the structural integrity of the extracted organelles even at low cell concentrations. The method could rapidly process a limited quantity of samples (200 µl).
 
These outcomes constitute the first demonstration of extracting intact and functional mitochondria using microscale hydrodynamic stress. The possibility of processing a low concentration and small sample size is favorable for clinical investigations of mitochondrial disease. To test the stress exerted by the designed cross-junction, the researchers first used a COMSOL Multiphysics simulation model. Thereafter, Rahman et al. experimentally determined the volumetric flow rate for three model cell lines. During experimental cell membrane disruption under mean shear stress (16.4 Pa, for a 60 µL/min flow rate), subcellular organelles were released and detected through increased mitochondria-positive signals.
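
As a rough sanity check on the stress scale reported above, the mean wall shear stress in a wide rectangular microchannel can be approximated by the parallel-plate formula τ ≈ 6μQ/(wh²). The channel dimensions below are illustrative assumptions, not the paper's actual geometry, chosen only to show that a 60 µL/min flow can plausibly produce stresses in the ~16 Pa range; the actual device relies on extensional stress at a cross-junction, which this simple formula does not capture.

```python
def mean_wall_shear_stress(q_ul_per_min: float, width_m: float,
                           height_m: float,
                           viscosity_pa_s: float = 1e-3) -> float:
    """Parallel-plate estimate of mean wall shear stress in a wide
    rectangular microchannel: tau ~= 6 * mu * Q / (w * h^2).
    Viscosity defaults to that of water (~1e-3 Pa*s)."""
    q_m3_per_s = q_ul_per_min * 1e-9 / 60.0  # convert uL/min to m^3/s
    return 6.0 * viscosity_pa_s * q_m3_per_s / (width_m * height_m ** 2)

# Hypothetical 50 um x 85 um channel at 60 uL/min (dimensions assumed):
print(round(mean_wall_shear_stress(60.0, 50e-6, 85e-6), 1))  # -> 16.6
```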
 
The scientists compared the capacity of the miniaturized cell shredder with that of two commercial methods: the Dounce homogenizer (mechanical cell disruption) and the Qproteome mitochondria isolation kit (chemical cell disruption). To determine the number of functional mitochondria extracted, the scientists used MitoTracker, a fluorescent dye that stains mitochondria, during flow cytometric analysis. The results showed that the microscale cell shredder was able to extract 40 percent more functional mitochondria than the commercial methods for both HEK 293 and C2C12 cells.

Disruption of neuroblastoma cells (SH-SY5Y) and the subsequent mitochondrial extraction. a Total protein yield and b concentrations of functional mitochondria obtained from the three extraction methods.
 
Rahman et al. conducted the citrate synthase assay to determine mitochondrial integrity through enzymatic activity of damaged mitochondria. As before, compared to the commercial kits, mitochondrial integrity was higher for those extracted using the microscale shredder in HEK293 and C2C12 cells. 

The study demonstrated the importance of membrane stiffness by validating the proposed concept to disrupt neuroblastoma cell membranes (SH-SY5Y). Since the SH-SY5Y cell membrane had a higher elastic modulus than both HEK293 and C2C12 cell lines, the scientists had to optimize the volumetric flow rate in the microscale shredder for effectively disrupting SH-SY5Y cell membranes. Again, compared with the commercial kit extractions, using the proposed method delivered a significantly higher concentration of protein and functional mitochondria for the cell line of interest. 

A necking section is included in the channel design of the microscale cell shredder to ensure the cells are focused laterally to the center of the flow stream in the microfluidics bioreactor. Credit: Microsystems and Nanoengineering, doi: https://doi.org/10.1038/s41378-018-0037-y 
 
In this way, Rahman et al. investigated the possibility of disrupting the cell while retaining the integrity of mitochondrial membranes in diverse mammalian model cell lines. They determined the optimal extensional stress and flow rate inside a microfluidic cross-junction bioreactor, based on the Young's modulus of the model cell line of interest. During channel design, the scientists included a necking section in the microfluidic bioreactor, which was manufactured using soft lithography.
 
The proposed microfluidic microscale cell shredder demonstrated, for the first time, a superior capability for extracting functional mitochondria and proteins by controlling hydrodynamic stress, compared with commercially available cell organelle extraction kits. The experiments were feasible even with minute quantities of samples (200 µl volume, containing 10⁴ cells/mL) for potential clinical applications. Rahman et al. were able to faithfully replicate the protocol across three cell lines. The experimental work can be translated to a clinical setting to understand mitochondrial-dysfunction-related disorders in depth.

More information: Md Habibur Rahman et al. Demarcating the membrane damage for the extraction of functional mitochondria, Microsystems and Nanoengineering (2018). DOI: 10.1038/s41378-018-0037-y

Joyeeta Rahman et al. Mitochondrial medicine in the omics era, The Lancet (2018). DOI: 10.1016/S0140-6736(18)30727-X

Dong Qin et al. Soft lithography for micro- and nanoscale patterning, Nature Protocols (2010). DOI: 10.1038/nprot.2009.234

Young Bok Bae et al. Microfluidic assessment of mechanical cell damage by extensional stress, Lab on a Chip (2015). DOI: 10.1039/c5lc01006c

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...