A rendering of a small standard cell with three metal layers (dielectric
has been removed). The sand-colored structures are metal interconnect,
with the vertical pillars being contacts, typically plugs of tungsten.
The reddish structures are polysilicon gates, and the solid at the
bottom is the crystalline silicon bulk.
Cell-based methodology – the general class to which standard
cells belong – makes it possible for one designer to focus on the
high-level (logical function) aspect of digital design, while another
designer focuses on the implementation (physical) aspect. Along with semiconductor manufacturing
advances, standard-cell methodology has helped designers scale ASICs
from comparatively simple single-function ICs (of several thousand
gates) to complex multi-million-gate system-on-a-chip (SoC) devices.
Construction of a standard cell
A standard cell is a group of transistor and interconnect structures that provides a boolean logic function (e.g., AND, OR, XOR, XNOR, inverters) or a storage function (flip-flop or latch).
The simplest cells are direct representations of the elemental NAND,
NOR, and XOR boolean functions, although cells of much greater complexity
are commonly used (such as a 2-bit full adder or a muxed D-input flip-flop). The cell's boolean logic function is called its logical view: functional behavior is captured in the form of a truth table or Boolean algebra equation (for combinational logic), or a state transition table (for sequential logic).
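To make the notion of a logical view concrete, here is a minimal Python sketch (not taken from any particular EDA tool) that captures a 2-input NAND cell's functional behavior as a truth table:

```python
# Illustrative sketch: the "logical view" of a 2-input NAND standard cell
# expressed as a truth table. Cell and signal names are examples only.
from itertools import product

def nand2(a: int, b: int) -> int:
    """Boolean function implemented by the NAND2 cell."""
    return 0 if (a and b) else 1

# Enumerate the truth table that characterizes the cell's combinational behavior.
for a, b in product((0, 1), repeat=2):
    print(f"A={a} B={b} -> Y={nand2(a, b)}")
```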
Usually, the initial design of a standard cell is developed at the transistor level, in the form of a transistor netlist or schematic
view. The netlist is a nodal description of transistors, of their
connections to each other, and of their terminals (ports) to the
external environment. A schematic view may be generated with a number of
different computer-aided design (CAD) or electronic design automation (EDA) programs that provide a graphical user interface (GUI) for this netlist generation process. Designers use additional CAD programs such as SPICE
to simulate the electronic behavior of the netlist, by declaring input
stimulus (voltage or current waveforms) and then calculating the
circuit's time domain (analog) response. The simulations verify whether
the netlist implements the desired function and predict other pertinent
parameters, such as power consumption or signal propagation delay.
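As an illustration of what a transistor-level netlist records, the following hypothetical Python sketch describes a CMOS inverter as a nodal list of devices and ports; the device names, dimensions, and field layout are invented for the example and are not tied to any real process design kit:

```python
# Hypothetical sketch: a transistor-level netlist of a CMOS inverter captured
# as a nodal description, the same information a schematic/netlist view holds.
inverter_netlist = {
    "ports": ["A", "Y", "VDD", "VSS"],
    "devices": [
        # (name, type, drain, gate, source, bulk, width_um, length_um)
        ("MP1", "pmos", "Y", "A", "VDD", "VDD", 1.0, 0.13),
        ("MN1", "nmos", "Y", "A", "VSS", "VSS", 0.5, 0.13),
    ],
}

def nodes(netlist):
    """Collect every electrical node referenced by the devices."""
    return {term for dev in netlist["devices"] for term in dev[2:6]}

print(sorted(nodes(inverter_netlist)))   # ['A', 'VDD', 'VSS', 'Y']
```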
Since the logical and netlist views are only useful for abstract
(algebraic) simulation, and not device fabrication, the physical
representation of the standard cell must be designed too. Also called
the layout view, this is the lowest level of design abstraction
in common design practice. From a manufacturing perspective, the
standard cell's VLSI layout is the most important view, as it is closest
to an actual "manufacturing blueprint" of the standard cell. The layout
is organized into base layers, which correspond to the different structures of the transistor devices, and interconnect wiring layers and via layers, which join together the terminals of the transistor formations. The interconnect wiring layers are usually numbered, with dedicated via
layers defining the connections between each pair of adjacent wiring layers.
Non-manufacturing layers may also be present in a layout for purposes
of design automation, but many layers used explicitly for place and route (PNR) CAD programs are often included in a separate but similar abstract view. The abstract view often contains much less information than the layout and may be recognizable as a Library Exchange Format (LEF) file or an equivalent.
After a layout is created, additional CAD tools are often used to
perform a number of common validations. A Design Rule Check (DRC) is
done to verify that the design meets foundry and other layout
requirements. A Parasitic EXtraction (PEX) is then performed to generate a PEX-netlist with parasitic
properties from the layout. The nodal connections of that netlist are
then compared to those of the schematic netlist with a Layout Vs Schematic (LVS) procedure to verify that the connectivity models are equivalent.
The PEX-netlist may then be simulated again (since it contains
parasitic properties) to achieve more accurate timing, power, and noise
models. These models are often characterized (contained) in the Synopsys Liberty format, although Verilog models may be used as well.
Finally, powerful place and route (PNR) tools may be used to pull everything together and synthesize (generate) Very Large Scale Integration (VLSI) layouts, in an automated fashion, from higher level design netlists and floor-plans.
Additionally, a number of other CAD tools may be used to validate
other aspects of the cell views and models. And other files may be
created to support various tools that utilize the standard cells for a
plethora of other reasons. All of these files that are created to
support the use of all of the standard-cell variations are collectively
known as a standard-cell library.
For a typical Boolean function, there are many different
functionally equivalent transistor netlists. Likewise, for a typical
netlist, there are many different layouts that fit the netlist's
performance parameters. The designer's challenge is to minimize the
manufacturing cost of the standard cell's layout (generally by
minimizing the circuit's die area), while still meeting the cell's speed
and power performance requirements. Consequently, integrated circuit layout is a highly labor-intensive job, despite the existence of design tools to aid this process.
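The trade-off described above can be caricatured in a few lines of Python; the candidate cells and their area, delay, and power numbers are made up, but the selection logic (the smallest area that still meets the constraints) mirrors the designer's goal:

```python
# Toy illustration: pick the smallest candidate implementation that still
# meets the speed and power targets. All figures are invented for the example.
candidates = [
    # (name, area_um2, delay_ps, power_uW)
    ("NAND2_X1", 1.0, 45, 0.8),
    ("NAND2_X2", 1.6, 30, 1.4),
    ("NAND2_X4", 2.9, 22, 2.7),
]

def pick_cell(max_delay_ps, max_power_uW):
    feasible = [c for c in candidates if c[2] <= max_delay_ps and c[3] <= max_power_uW]
    return min(feasible, key=lambda c: c[1]) if feasible else None

print(pick_cell(max_delay_ps=35, max_power_uW=2.0))  # ('NAND2_X2', 1.6, 30, 1.4)
```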
Library
A standard-cell library is a collection of low-level electronic logic functions
such as AND, OR, INVERT, flip-flops, latches, and buffers. These cells
are realized as fixed-height, variable-width full-custom cells. The key
aspect with these libraries is that they are of a fixed height, which
enables them to be placed in rows, easing the process of automated
digital layout. The cells are typically optimized full-custom layouts,
which minimize delays and area.
A typical standard-cell library contains two main components:
Library Database - Consists of a number of views often including
layout, schematic, symbol, abstract, and other logical or simulation
views. From this, various information may be captured in a number of
formats including the Cadence LEF format, and the Synopsys Milkyway
format, which contain reduced information about the cell layouts,
sufficient for automated "Place and Route" tools.
Timing Abstract - Generally in Liberty format, to provide functional definitions, timing, power, and noise information for each cell (a rough sketch of this per-cell information appears after this list).
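As a rough, hypothetical illustration of the kind of per-cell data a timing abstract carries (function, area, pin capacitances, delay arcs), the Python dictionary below sketches one inverter entry; the field names are invented and do not follow the actual Liberty syntax:

```python
# Sketch of per-cell characterization data; all names and values are placeholders.
inv_x1 = {
    "function": "Y = !A",
    "area_um2": 1.2,
    "pins": {"A": {"capacitance_fF": 1.1}, "Y": {"max_load_fF": 60.0}},
    "arcs": {("A", "Y"): {"delay_ps": 18.0, "slew_ps": 25.0}},
    "leakage_nW": 0.9,
}
```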
A standard-cell library may also contain additional supporting components and views beyond these two.
An example of composing larger functions from the library's cells is a simple XOR logic gate, which can be formed from OR, INVERT, and AND gates.
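A quick Python sketch of that composition, checking the identity a XOR b = (a OR b) AND NOT (a AND b) over all input combinations:

```python
# Composing XOR from OR, INVERT and AND, as mentioned above.
def xor_from_basic_gates(a: int, b: int) -> int:
    return (a or b) and not (a and b)

# Verify against Python's built-in XOR for every input combination.
assert all(xor_from_basic_gates(a, b) == (a ^ b) for a in (0, 1) for b in (0, 1))
```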
Application of standard cell
Strictly
speaking, a 2-input NAND or NOR function is sufficient to form any
arbitrary Boolean function set. But in modern ASIC design, standard-cell
methodology is practiced with a sizable library (or libraries) of
cells. The library usually contains multiple implementations of the
same logic function, differing in area and speed.
This variety enhances the efficiency of automated synthesis, place,
and route (SPR) tools. Indirectly, it also gives the designer greater
freedom to perform implementation trade-offs (area vs. speed vs. power
consumption). A complete group of standard-cell descriptions is commonly
called a technology library.
Commercially available electronic design automation (EDA)
tools use the technology libraries to automate synthesis, placement,
and routing of a digital ASIC. The technology library is developed and
distributed by the foundry
operator. The library (along with a design netlist format) is the basis
for exchanging design information between different phases of the SPR
process.
Synthesis
Using the technology library's cell logical view, the Logic Synthesis tool performs the process of mathematically transforming the ASIC's register-transfer level
(RTL) description into a technology-dependent netlist. This process is
analogous to a software compiler converting a high-level C-program
listing into a processor-dependent assembly-language listing.
The netlist is the standard-cell representation of the ASIC
design, at the logical view level. It consists of instances of the
standard-cell library gates, and port connectivity between gates.
Proper synthesis techniques ensure mathematical equivalency between the
synthesized netlist and original RTL description. The netlist contains
no unmapped RTL statements or declarations.
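A toy Python sketch of technology mapping, assuming hypothetical cell names such as AND2_X1 and OR2_X1: the RTL expression y = (a & b) | c is rewritten as library-cell instances and then checked for equivalence against the original expression:

```python
# Very small sketch of technology mapping; the cell names are illustrative.
# RTL:  assign y = (a & b) | c;
netlist = [
    # (instance, cell,      connections)
    ("U1", "AND2_X1", {"A": "a", "B": "b", "Y": "n1"}),
    ("U2", "OR2_X1",  {"A": "n1", "B": "c", "Y": "y"}),
]

def evaluate(netlist, inputs):
    """Evaluate the mapped netlist to confirm it matches the RTL expression."""
    values = dict(inputs)
    ops = {"AND2_X1": lambda a, b: a & b, "OR2_X1": lambda a, b: a | b}
    for _inst, cell, conn in netlist:
        values[conn["Y"]] = ops[cell](values[conn["A"]], values[conn["B"]])
    return values["y"]

assert all(evaluate(netlist, {"a": a, "b": b, "c": c}) == ((a & b) | c)
           for a in (0, 1) for b in (0, 1) for c in (0, 1))
```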
Similarly, a high-level synthesis tool transforms C-level models (SystemC, ANSI C/C++) into a technology-dependent netlist.
Placement
The placement
tool starts the physical implementation of the ASIC. With a 2-D
floorplan provided by the ASIC designer, the placer tool assigns
locations for each gate in the netlist. The resulting placed gates
netlist contains the physical location of each of the netlist's
standard-cells, but retains an abstract description of how the gates'
terminals are wired to each other.
Typically the standard cells have a constant size in at least one dimension that allows them to be lined up in rows on the integrated circuit.
The chip will consist of a huge number of rows (with power and ground
running next to each row) with each row filled with the various cells
making up the actual design. Placers obey certain rules: Each gate is
assigned a unique (exclusive) location on the die map. A given gate is
placed once, and may not occupy or overlap the location of any other
gate.
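A minimal Python sketch of these placement rules, with arbitrary example dimensions: fixed-height cells are packed left to right in a row so that each occupies an exclusive, non-overlapping location:

```python
# Minimal sketch of row-based placement; dimensions are arbitrary example values.
ROW_HEIGHT = 2.0  # every standard cell shares this height

def place_in_row(widths, row_y=0.0):
    """Pack cells left-to-right in one row; each gets an exclusive x-interval."""
    placements, x = [], 0.0
    for name, w in widths:
        placements.append((name, x, row_y, w, ROW_HEIGHT))
        x += w  # the next cell starts where this one ends, so cells never overlap
    return placements

for cell in place_in_row([("U1", 1.0), ("U2", 1.6), ("U3", 0.8)]):
    print(cell)
```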
Routing
Using the placed-gates netlist and the layout view of the library, the router
adds both signal connect lines and power supply lines. The fully
routed physical netlist contains the listing of gates from synthesis,
the placement of each gate from placement, and the drawn interconnects
from routing.
DRC/LVS
Simulated lithographic and other fabrication defects visible in small standard-cell metal interconnects.
Design rule check (DRC) and layout versus schematic (LVS) are verification processes. Reliable device fabrication at modern deep-submicrometer nodes (0.13 μm
and below) requires strict observance of transistor spacing, metal
layer thickness, and power density rules. DRC exhaustively compares the
physical netlist against a set of "foundry design rules" (from the
foundry operator), then flags any observed violations.
The LVS process confirms that the layout has the same structure
as the associated schematic; this is typically the final step in the
layout process.
The LVS tool takes as an input a schematic diagram and the extracted
view from a layout. It then generates a netlist from each one and
compares them. Nodes, ports, and device sizing are all compared. If
they are the same, LVS passes and the designer can continue. LVS tends
to consider transistor fingers to be the same as an extra-wide
transistor. Thus, 4 transistors (each 1 μm wide) in parallel, a 4-finger
1 μm transistor, or a 4 μm transistor are viewed the same by the LVS
tool.
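The finger/width equivalence can be sketched in a few lines of Python (a simplification of what an LVS tool actually does): parallel devices sharing the same terminals are compared by their summed width:

```python
# Sketch of the width-equivalence rule described above: widths of parallel
# devices (or fingers) with identical terminals are summed before comparison.
def effective_width(devices):
    """devices: list of (gate, source, drain, width_um) with identical terminals."""
    return sum(w for *_terms, w in devices)

four_parallel = [("A", "VSS", "Y", 1.0)] * 4   # four 1 um devices in parallel
single_wide   = [("A", "VSS", "Y", 4.0)]        # one 4 um device
assert effective_width(four_parallel) == effective_width(single_wide) == 4.0
```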
The functional behavior captured in .lib files is derived from SPICE models and added as an attribute of the .lib file.
In semiconductor design, standard cells are verified to be design
rule check (DRC) and layout versus schematic (LVS) compliant. This
compliance significantly enhances the efficiency of the design process,
leading to reduced turnaround times for designers. By ensuring that
these cells meet critical verification standards, designers can
streamline the integration of these components into larger chip designs,
facilitating a smoother and faster development cycle.
Other cell-based methodologies
"Standard cell" falls into a more general class of design automation flows called cell-based design. Structured ASICs, FPGAs, and CPLDs
are variations on cell-based design. From the designer's standpoint,
all share the same input front end: an RTL description of the design.
The three techniques, however, differ substantially in the details of
the SPR flow (Synthesize, Place-and-Route) and physical implementation.
Complexity measure
For digital standard-cell designs, for instance in CMOS, a common technology-independent metric for complexity measure is gate equivalents (GE).
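A GE figure is typically computed as the design's total cell area divided by the area of a reference 2-input NAND cell from the same library; the sketch below uses placeholder numbers:

```python
# Gate-equivalent count from total area and the reference NAND2 area.
# Both figures below are placeholders for illustration only.
nand2_area_um2 = 1.1
design_area_um2 = 45_000.0
gate_equivalents = design_area_um2 / nand2_area_um2
print(f"{gate_equivalents:.0f} GE")   # ~40909 GE
```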
Layout view of a simple CMOS operational amplifier (inputs are to the left and the compensation capacitor is to the right). The metal layer is coloured blue; green and brown are N- and P-doped Si, the polysilicon is red, and vias are crosses.
Engineer using an early IC-design workstation to analyze a section of a circuit design cut on rubylith, circa 1979.
IC design can be divided into the broad categories of digital and analog IC design. Digital IC design produces components such as microprocessors, FPGAs, memories (RAM, ROM, and flash) and digital ASICs.
Digital design focuses on logical correctness, maximizing circuit
density, and placing circuits so that clock and timing signals are
routed efficiently. Analog IC design also has specializations in power
IC design and RF IC design. Analog IC design is used in the design of op-amps, linear regulators, phase locked loops, oscillators and active filters. Analog design is more concerned with the physics of the semiconductor devices such as gain, matching, power dissipation, and resistance.
Fidelity of analog signal amplification and filtering is usually
critical, and as a result analog ICs use larger area active devices than
digital designs and are usually less dense in circuitry.
Modern ICs are enormously complicated. An average desktop computer chip, as of 2015, has over 1 billion transistors. The rules
for what can and cannot be manufactured are also extremely complex.
Common IC processes of 2015 have more than 500 rules. Furthermore, since
the manufacturing process itself is not completely predictable,
designers must account for its statistical
nature. The complexity of modern IC design, as well as market pressure
to produce designs rapidly, has led to the extensive use of automated design tools
in the IC design process. The design of some processors has become
complicated enough to be difficult to fully test, and this has caused
problems at large cloud providers. In short, the design of an IC using EDA software
is the design, test, and verification of the instructions that the IC
is to carry out. Artificial intelligence has been demonstrated in chip design for creating chip layouts, that is, the placement of standard cells and macro blocks on a chip.
Fundamentals
Integrated circuit design involves the creation of electronic components, such as transistors, resistors, capacitors and the interconnection of these components onto a piece of semiconductor, typically silicon. A method to isolate the individual components formed in the substrate
is necessary since the substrate silicon is conductive and often forms
an active region of the individual components. The two common methods
are p-n junction isolation and dielectric isolation.
Attention must be given to power dissipation of transistors and
interconnect resistances and current density of the interconnect, contacts and vias since ICs contain very tiny devices compared to discrete components, where such concerns are less of an issue. Electromigration in metallic interconnect and ESD
damage to the tiny components are also of concern. Finally, the
physical layout of certain circuit subblocks is typically critical, in
order to achieve the desired speed of operation, to segregate noisy
portions of an IC from quiet portions, to balance the effects of heat
generation across the IC, or to facilitate the placement of connections to circuitry outside the IC.
RTL design: This step converts the user specification (what the user wants the chip to do) into a register transfer level
(RTL) description. The RTL describes the exact behavior of the digital
circuits on the chip, as well as the interconnections to inputs and
outputs.
Physical circuit design: This step takes the RTL, and a library of available logic gates (standard cell library), and creates a chip design. This step involves the use of an IC layout editor,
layout and floor planning, figuring out which gates to use, defining
places for them, and wiring (clock timing synthesis, routing) them
together.
Note that the RTL design step is responsible for the chip
doing the right thing. The physical design step does not affect
the functionality at all (if done correctly) but determines how fast the
chip operates and how much it costs.
A standard cell normally represents a single logic gate, a diode or simple logic components such as flip-flops, or logic gates with multiple inputs.
The use of standard cells allows the chip's design to be split into
logical and physical levels. A fabless company normally works only
on the logical design of a chip, determining how cells are connected and
what the chip does, while following design rules from the foundry where the
chip will be made. The physical design of the chip, the cells themselves,
is normally done by the foundry; it covers the physics of the transistor
devices and how they are connected to
form a logic gate. Standard cells allow chips to be designed and
modified more quickly to respond to market demands, but this comes at
the cost of lower transistor density in the chip and thus larger die
sizes.
Foundries supply libraries of standard cells to fabless
companies, for design purposes and to allow manufacturing of their
designs using the foundry's facilities. A Process design kit
(PDK) may be provided by the foundry and it may include the standard
cell library as well as the specifications of the cells, and tools to
verify the fabless company's design against the design rules specified
by the foundry as well as simulate it using the foundry's cells. PDKs
may be provided under non-disclosure agreements. Macros/Macrocells/Macro
blocks, Macrocell arrays
and IP blocks have greater functionality than standard cells, and are
used similarly. There are soft macros and hard macros. Standard cells
are usually placed in standard-cell rows.
Design lifecycle
The integrated circuit
(IC) development process starts with defining product requirements,
progresses through architectural definition, implementation, bringup and
finally production. The various phases of the integrated circuit
development process are described below. Although the phases are
presented here in a straightforward fashion, in reality there is iteration and these steps may occur multiple times.
Architecture
The architecture
defines the fundamental structure, goals and principles of the product.
It defines high level concepts and the intrinsic value proposition of
the product. Architecture teams take into account many variables and
interface with many groups. People creating the architecture generally
have a significant amount of experience dealing with systems in the area
for which the architecture is being created. The work product of the
architecture phase is an architectural specification.
Micro-architecture
The
micro-architecture is a step closer to the hardware. It implements the
architecture and defines specific mechanisms and structures for
achieving that implementation. The result of the micro-architecture
phase is a micro-architecture specification which describes the methods
used to implement the architecture.
Implementation
In
the implementation phase the design itself is created using the
micro-architectural specification as the starting point. This involves
low level definition and partitioning, writing code, entering schematics and verification. This phase ends with a design reaching tapeout.
Bringup
After
a design is created, taped-out and manufactured, actual hardware,
'first silicon', is received which is taken into the lab where it goes
through bringup. Bringup is the process of powering, testing and characterizing the design in the lab. Numerous tests
are performed starting from very simple tests such as ensuring that the
device will power on to much more complicated tests which try to stress
the part in various ways. The result of the bringup phase is
documentation of characterization data (how well the part performs to spec) and errata (unexpected behavior).
Productization
Productization
is the task of taking a design from engineering into mass production
manufacturing. Although a design may have successfully met the
specifications of the product in the lab during the bringup phase there
are many challenges that product engineers face when trying to
mass-produce those designs. The IC
must be ramped up to production volumes with an acceptable yield. The
goal of the productization phase is to reach mass production volumes at
an acceptable cost.
Sustaining
Once
a design is mature and has reached mass production it must be
sustained. The process must be continually monitored and problems dealt
with quickly to avoid a significant impact on production volumes. The
goal of sustaining is to maintain production volumes and continually
reduce costs until the product reaches end of life.
Design process
Microarchitecture and system-level design
The
initial chip design process begins with system-level design and
microarchitecture planning. Within IC design companies, management and
often analytics will draft a proposal for a design team to start the
design of a new chip to fit into an industry segment. Upper-level
designers will meet at this stage to decide how the chip will operate
functionally. This step is where an IC's functionality and design are
decided. IC designers will map out the functional requirements,
verification testbenches, and testing methodologies for the whole
project, and will then turn the preliminary design into a system-level
specification that can be simulated with simple models using languages
like C++ and MATLAB and emulation tools. For pure and new designs, the
system design stage is where an Instruction set
and operation is planned out, and in most chips existing instruction
sets are modified for newer functionality. Design at this stage is often
captured in statements such as "encodes in the MP3 format" or "implements IEEE floating-point arithmetic".
At later stages in the design process, each of these innocent-looking
statements expands to hundreds of pages of textual documentation.
RTL design
Upon agreement of a system design, RTL designers then implement the functional models in a hardware description language like Verilog, SystemVerilog, or VHDL.
Using digital design components like adders, shifters, and state
machines as well as computer architecture concepts like pipelining,
superscalar execution, and branch prediction,
RTL designers will break a functional description into hardware models
of components on the chip working together. Each of the simple
statements described in the system design can easily turn into thousands
of lines of RTL
code, which is why it is extremely difficult to verify that the RTL
will do the right thing in all the possible cases that the user may
throw at it.
To reduce the number of functionality bugs, a separate hardware
verification group will take the RTL and design testbenches and systems
to check that the RTL actually is performing the same steps under many
different conditions, classified as the domain of functional verification. Many techniques are used, none of them perfect but all of them useful – extensive logic simulation, formal methods, hardware emulation, lint-like code checking, code coverage, and so on. Verification such as that done by emulators can be carried out in FPGAs or special processors,
and emulation replaced simulation. Simulation was initially done by
modeling the logic gates in a chip, but later the RTL was
simulated instead. Simulation is still used when creating analog chip designs.
Prototyping platforms based on FPGAs are used to run software on prototypes of the
chip design while it is under development, but they are slower to
iterate on or modify and cannot be used to visualize hardware signals as
they would appear in the finished design.
A tiny error here can make the whole chip useless, or worse. The famous Pentium FDIV bug
caused the results of a division to be wrong by at most 61 parts per
million, in cases that occurred very infrequently. No one even noticed
it until the chip had been in production for months. Yet Intel was forced to offer to replace, for free, every chip sold until they could fix the bug, at a cost of $475 million (US).
Physical design
Physical design steps within the digital design flow
RTL is only a behavioral model of the functionality the chip is supposed
to provide. It has no link to the physical
aspects of how the chip will operate in real life on the materials,
physics, and electrical engineering side. For this reason, the next step
in the IC design process, physical design
stage, is to map the RTL into actual geometric representations of all
electronics devices, such as capacitors, resistors, logic gates, and
transistors that will go on the chip.
The main steps of physical design are listed below. In practice
there is not a straightforward progression - considerable iteration is
required to ensure all objectives are met simultaneously. This is a
difficult problem in its own right, called design closure.
Logic synthesis: The RTL is mapped into a gate-level netlist in the target technology of the chip.
Floorplanning:
The RTL of the chip is assigned to gross regions of the chip,
input/output (I/O) pins are assigned and large objects (arrays, cores,
etc.) are placed.
Placement: The gates in the netlist are assigned to nonoverlapping locations on the die area.
Logic/placement refinement: Iterative logical and placement transformations to close performance and power constraints.
Design for manufacturability:
The design is modified, where possible, to make it as easy and
efficient as possible to produce. This is achieved by adding extra vias
or adding dummy metal/diffusion/poly layers wherever possible while
complying with the design rules set by the foundry.
Before
the advent of the microprocessor and software-based design tools,
analog ICs were designed using hand calculations and process kit parts.
These ICs were low complexity circuits, for example, op-amps,
usually involving no more than ten transistors and few connections. An
iterative trial-and-error process and "overengineering" of device size
was often necessary to achieve a manufacturable IC. Reuse of proven
designs allowed progressively more complicated ICs to be built upon
prior knowledge. When inexpensive computer processing became available
in the 1970s, computer programs were written to simulate circuit designs
with greater accuracy than practical by hand calculation. The first
circuit simulator for analog ICs was called SPICE
(Simulation Program with Integrated Circuits Emphasis). Computerized
circuit simulation tools enable greater IC design complexity than hand
calculations can achieve, making the design of analog ASICs practical.
As many functional constraints must be considered in analog
design, manual design is still widespread today, in contrast to digital
design which is highly automated, including automated routing and
synthesis. As a result, modern design flows for analog circuits are characterized by two different design styles – top-down and bottom-up.
The top-down design style makes use of optimization-based tools similar
to conventional digital flows. Bottom-up procedures re-use “expert
knowledge” with the result of solutions previously conceived and
captured in a procedural description, imitating an expert's decisions. Examples are cell generators, such as PCells.
Coping with variability
A
challenge most critical to analog IC design involves the variability of
the individual devices built on the semiconductor chip. Unlike
board-level circuit design, which permits the designer to select devices
that have each been tested and binned according to value, the device
values on an IC can vary widely in ways that are not controllable by the
designer. For example, some IC resistors can vary ±20% and the β of an
integrated BJT
can vary from 20 to 100. In the latest CMOS processes, β of vertical
PNP transistors can even go below 1. To add to the design challenge,
device properties often vary between each processed semiconductor wafer.
Device properties can even vary significantly across each individual IC
due to doping gradients.
The underlying cause of this variability is that many semiconductor
devices are highly sensitive to uncontrollable random variances in the
process. Slight changes to the amount of diffusion time, uneven doping
levels, etc. can have large effects on device properties.
Some design techniques used to reduce the effects of the device variation are:
Using the ratios of resistors, which do match closely, rather than absolute resistor values (a numeric sketch of this follows the list).
Using devices with matched geometrical shapes so they have matched variations.
Making devices large so that statistical variations become an insignificant fraction of the overall device property.
Segmenting large devices, such as resistors, into parts and interweaving them to cancel variations.
Using common centroid device layout to cancel variations in devices which must match closely (such as the transistor differential pair of an op amp).
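The ratio-matching point above can be illustrated numerically: a resistive divider's output depends only on the resistor ratio, so a shared absolute error cancels while an unmatched error does not (the values below are arbitrary):

```python
# Numeric sketch of ratio matching: a shared +20% error on both resistors
# cancels, while the same error on only one resistor shifts the output.
def divider(vin, r1, r2):
    return vin * r2 / (r1 + r2)

nominal   = divider(1.0, 10e3, 10e3)   # 0.5 V
matched   = divider(1.0, 12e3, 12e3)   # both +20%: still 0.5 V
unmatched = divider(1.0, 12e3, 10e3)   # only R1 +20%: ~0.4545 V
print(nominal, matched, round(unmatched, 4))
```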
In the 2000s, researchers started to propose a type of on-chip interconnection in the form of packet-switching networks in order to address the scalability issues of bus-based design. Earlier research proposed designs that route data packets instead of routing wires. Then, the concept of "network on chip" was proposed in 2002. NoCs improve the scalability of systems-on-chip and the power efficiency of complex SoCs compared to other communication subsystem designs. They are an emerging technology, with projections for large growth in the near future as multicore computer architectures become more common.
The
topology determines the physical layout and connections between nodes
and channels. The message traverses hops, and each hop's channel length
depends on the topology. The topology significantly influences both latency
and power consumption. Furthermore, since the topology determines the
number of alternative paths between nodes, it affects the network
traffic distribution, and hence the network bandwidth and performance achieved.
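For example, in a 2-D mesh topology with minimal routing, the hop count between two tiles is simply their Manhattan distance, which feeds directly into latency; a tiny Python sketch:

```python
# Hop count between two tiles in a 2-D mesh with minimal (dimension-ordered)
# routing: the Manhattan distance between source and destination coordinates.
def mesh_hops(src, dst):
    (sx, sy), (dx, dy) = src, dst
    return abs(sx - dx) + abs(sy - dy)

print(mesh_hops((0, 0), (3, 2)))  # 5 hops
```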
Benefits
Traditionally, ICs have been designed with dedicated point-to-point connections, with one wire dedicated to each signal. This results in a dense network topology. For large designs, in particular, this has several limitations from a physical design viewpoint. It requires power quadratic in the number of interconnections. The wires occupy much of the area of the chip, and in nanometer CMOS technology, interconnects dominate both performance and dynamic power dissipation, as signal propagation in wires across the chip requires multiple clock cycles. This also allows more parasitic capacitance, resistance and inductance to accrue on the circuit. (See Rent's rule for a discussion of wiring requirements for point-to-point connections).
Sparsity and locality of interconnections in the communications subsystem yield several improvements over traditional bus-based and crossbar-based systems.
Parallelism and scalability
The wires in the links of the network-on-chip are shared by many signals. A high level of parallelism is achieved, because all data links in the NoC can operate simultaneously on different data packets. Therefore, as the complexity of integrated systems keeps growing, a NoC provides enhanced performance (such as throughput) and scalability in comparison with previous communication architectures (e.g., dedicated point-to-point signal wires, shared buses, or segmented buses with bridges). The algorithms must be designed in such a way that they offer large parallelism and can hence utilize the potential of NoC.
Current research
WiNoC in the 3D-chiplet
Some researchers think that NoCs need to support quality of service (QoS), namely achieve the various requirements in terms of throughput, end-to-end delays, fairness, and deadlines. Real-time computation, including audio and video playback, is one
reason for providing QoS support. However, current system
implementations like VxWorks, RTLinux or QNX are able to achieve sub-millisecond real-time computing without special hardware.
This may indicate that for many real-time applications the service quality of existing on-chip interconnect infrastructure is sufficient, and dedicated hardware logic
would be necessary to achieve microsecond precision, a degree that is
rarely needed in practice for end users (sound or video jitter needs only
a latency guarantee of tenths of milliseconds). Another motivation for
NoC-level quality of service (QoS) is to support multiple concurrent users sharing resources of a single chip multiprocessor in a public cloud computing infrastructure. In such instances, hardware QoS logic enables the service provider to make contractual guarantees on the level of service that a user receives, a feature that may be deemed desirable by some corporate or government clients.
Many challenging research problems remain to be solved at all
levels, from the physical link level through the network level, and all
the way up to the system architecture and application software. The
first dedicated research symposium on networks on chip was held at Princeton University, in May 2007. The second IEEE International Symposium on Networks-on-Chip was held in April 2008 at Newcastle University.
Research has been conducted on integrated optical waveguides and devices comprising an optical network on a chip (ONoC).
A possible way to increase the performance of a NoC is to use wireless communication channels between chiplets, termed a wireless network on chip (WiNoC).
Side benefits
In
a multi-core system, connected by NoC, coherency messages and cache
miss requests have to pass switches. Accordingly, switches can be
augmented with simple tracking and forwarding elements to detect which
cache blocks will be requested in the future by which cores. Then, the
forwarding elements multicast any requested block to all the cores that
may request the block in the future. This mechanism reduces cache miss
rate.
Benchmarks
NoC
development and studies require comparing different proposals and
options. NoC traffic patterns are under development to help such
evaluations. Existing NoC benchmarks include NoCBench and MCSL NoC
Traffic Patterns.
Interconnect processing unit
An interconnect processing unit (IPU) is an on-chip communication network with hardware and software components which jointly implement key functions of different system-on-chip programming models through a set of communication and synchronization primitives and provide low-level platform services to enable advanced features in modern heterogeneous applications on a single die.
The nodes of a computer network can include personal computers, servers, networking hardware, or other specialized or general-purpose hosts. They are identified by network addresses and may have hostnames.
Hostnames serve as memorable labels for the nodes and are rarely
changed after initial assignment. Network addresses serve for locating
and identifying the nodes by communication protocols such as the Internet Protocol.
Computer networking may be considered a branch of computer science, computer engineering, and telecommunications,
since it relies on the theoretical and practical application of the
related disciplines. Computer networking was influenced by a wide array
of technological developments and historical milestones.
In the late 1950s, a network of computers was built for the U.S. military Semi-Automatic Ground Environment (SAGE) radar system using the Bell 101 modem. It was the first commercial modem for computers, released by AT&T Corporation in 1958. The modem allowed digital data to be transmitted over regular unconditioned telephone lines at a speed of 110 bits per second (bit/s).
In 1959, Anatoly Kitov
proposed to the Central Committee of the Communist Party of the Soviet
Union a detailed plan for the re-organization of the control of the
Soviet armed forces and of the Soviet economy on the basis of a network
of computing centers. Kitov's proposal was rejected, as later was the 1962 OGAS economy management network project.
In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1965, Western Electric introduced the first widely used telephone switch that implemented computer control in the switching fabric.
Throughout the 1960s, Paul Baran and Donald Davies independently invented the concept of packet switching for data communication between computers over a network. Baran's work addressed adaptive routing of message blocks across a
distributed network, but did not include routers with software switches,
nor the idea that users, rather than the network itself, would provide
the reliability. Davies' hierarchical network design included high-speed routers, communication protocols and the essence of the end-to-end principle. The NPL network, a local area network at the National Physical Laboratory (United Kingdom), pioneered the implementation of the concept in 1968-69 using 768 kbit/s links. Both Baran's and Davies' inventions were seminal contributions that influenced the development of computer networks.
In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s
circuits between the University of California at Los Angeles, the
Stanford Research Institute, the University of California at Santa
Barbara, and the University of Utah. Designed principally by Bob Kahn, the network's routing, flow control, software design and network control were developed by the IMP team working for Bolt Beranek & Newman. In the early 1970s, Leonard Kleinrock
carried out mathematical work to model the performance of
packet-switched networks, which underpinned the development of the
ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today.
In 1972, commercial services were first deployed on experimental public data networks in Europe.
In 1973, the French CYCLADES network, directed by Louis Pouzin
was the first to make the hosts responsible for the reliable delivery
of data, rather than this being a centralized service of the network
itself.
In 1974, Vint Cerf and Bob Kahn published their seminal 1974 paper on internetworking, A Protocol for Packet Network Intercommunication. Later that year, Cerf, Yogen Dalal, and Carl Sunshine wrote the first Transmission Control Protocol (TCP) specification, RFC675, coining the term Internet as a shorthand for internetworking.
In July 1976, Metcalfe and Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and in December 1977, together with Butler Lampson and Charles P. Thacker, they received U.S. patent 4063220A for their invention.
Public data networks in Europe, North America and Japan began using X.25 in the late 1970s and interconnected with X.75. This underlying infrastructure was used for expanding TCP/IP networks in the 1980s.
In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices.
In 1977, the first long-distance fiber network was deployed by GTE in Long Beach, California.
In 1979, Robert Metcalfe pursued making Ethernet an open standard.
In 1980, Ethernet was upgraded from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, which was developed by Ron Crane, Bob Garner, Roy Ogus, and Yogen Dalal.
In 1995, the transmission speed capacity for Ethernet increased from
10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission
speeds of 1 Gbit/s. Subsequently, higher speeds of up to 400 Gbit/s were
added (as of 2018). The scaling of Ethernet has been a contributing factor to its continued use.
Use
Computer
networks enhance how users communicate with each other by using various
electronic methods like email, instant messaging, online chat, voice
and video calls, and video conferencing. Networks also enable the
sharing of computing resources. For example, a user can print a document
on a shared printer or use shared storage devices. Additionally,
networks allow for the sharing of files and information, giving
authorized users access to data stored on other computers. Distributed computing leverages resources from multiple computers across a network to perform tasks collaboratively.
Packets consist of two types of data: control information and
user data (payload). The control information provides data the network
needs to deliver the user data, for example, source and destination network addresses, error detection codes, and sequencing information. Typically, control information is found in packet headers and trailers, with payload data in between.
With packets, the bandwidth of the transmission medium can be better shared among users than if the network were circuit switched.
When one user is not sending packets, the link can be filled with
packets from other users, and so the cost can be shared, with relatively
little interference, provided the link is not overused. Often the route
a packet needs to take through a network is not immediately available.
In that case, the packet is queued and waits until a link is free.
The physical link technologies of packet networks typically limit the size of packets to a certain maximum transmission unit
(MTU). A longer message may be fragmented before it is transferred and
once the packets arrive, they are reassembled to construct the original
message.
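A toy Python sketch of that fragmentation and reassembly (with an unrealistically small MTU for readability):

```python
# Illustrative sketch: a message longer than the MTU is split into offset-tagged
# fragments and reassembled in order on arrival.
MTU = 5  # unrealistically small, just for the example

def fragment(message: bytes, mtu: int = MTU):
    return [(i, message[i:i + mtu]) for i in range(0, len(message), mtu)]

def reassemble(fragments):
    return b"".join(part for _offset, part in sorted(fragments))

packets = fragment(b"hello, packet-switched world")
assert reassemble(packets) == b"hello, packet-switched world"
```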
The physical or geographic locations of network nodes and links
generally have relatively little effect on a network, but the topology
of interconnections of a network can significantly affect its throughput
and reliability. With many technologies, such as bus or star networks, a
single failure can cause the network to fail entirely. In general, the
more interconnections there are, the more robust the network is; but the
more expensive it is to install. Therefore, most network diagrams are arranged by their network topology which is the map of logical interconnections of network hosts.
Common topologies are:
Bus network: all nodes are connected to a common medium and communicate along this medium. This was the layout used in the original Ethernet, called 10BASE5 and 10BASE2. This is still a common topology on the data link layer, although modern physical layer variants use point-to-point links instead, forming a star or a tree.
Star network: all nodes are connected to a special central node. This is the typical layout found in a small switched Ethernet LAN, where each client connects to a central network switch, and logically in a wireless LAN, where each wireless client associates with the central wireless access point.
Ring network:
each node is connected to its left and right neighbor node, such that
all nodes are connected and that each node can reach each other node by
traversing nodes left- or rightwards. Token ring networks, and the Fiber Distributed Data Interface (FDDI), made use of such a topology.
Mesh network:
each node is connected to an arbitrary number of neighbors in such a
way that there is at least one traversal from any node to any other.
Tree network:
nodes are arranged hierarchically. This is the natural topology for a
larger Ethernet network with multiple switches and without redundant
meshing.
The physical layout of the nodes in a network may not necessarily reflect the network topology. As an example, with FDDI,
the network topology is a ring, but the physical topology is often a
star, because all neighboring connections can be routed via a central
physical location. Physical layout is not completely irrelevant,
however, as common ducting and equipment locations can represent single
points of failure due to issues like fires, power failures and flooding.
Overlay network
A sample overlay network
An overlay network
is a virtual network that is built on top of another network. Nodes in
the overlay network are connected by virtual or logical links. Each link
corresponds to a path, perhaps through many physical links, in the
underlying network. The topology of the overlay network may (and often
does) differ from that of the underlying one. For example, many peer-to-peer networks are overlay networks. They are organized as nodes of a virtual system of links that run on top of the Internet.
Overlay networks have been used since the early days of
networking, back when computers were connected via telephone lines using
modems, even before data networks were developed.
The most striking example of an overlay network is the Internet
itself. The Internet itself was initially built as an overlay on the telephone network.
Even today, each Internet node can communicate with virtually any other
through an underlying mesh of sub-networks of wildly different
topologies and technologies. Address resolution and routing are the means that allow mapping of a fully connected IP overlay network to its underlying network.
Another example of an overlay network is a distributed hash table,
which maps keys to nodes in the network. In this case, the underlying
network is an IP network, and the overlay network is a table (actually a
map) indexed by keys.
Overlay networks have also been proposed as a way to improve Internet routing, such as through quality of service guarantees to achieve higher-quality streaming media. Previous proposals such as IntServ, DiffServ, and IP multicast have not seen wide acceptance largely because they require modification of all routers in the network.
On the other hand, an overlay network can be incrementally deployed on
end-hosts running the overlay protocol software, without cooperation
from Internet service providers.
The overlay network has no control over how packets are routed in the
underlying network between two overlay nodes, but it can control, for
example, the sequence of overlay nodes that a message traverses before
it reaches its destination.
For example, Akamai Technologies manages an overlay network that provides reliable, efficient content delivery (a kind of multicast). Academic research includes end system multicast, resilient routing and quality of service studies, among others.
The transmission media (often referred to in the literature as the physical medium) used to link devices to form a computer network include electrical cable, optical fiber, and free space. In the OSI model, the software to handle the media is defined at layers 1 and 2 — the physical layer and the data link layer.
A widely adopted family that uses copper and fiber media in local area network
(LAN) technology is collectively known as Ethernet. The media and
protocol standards that enable communication between networked devices
over Ethernet are defined by IEEE 802.3. Wireless LAN standards use radio waves, others use infrared signals as a transmission medium. Power line communication uses a building's power cabling to transmit data.
Wired
Fiber-optic cables are used to transmit light from one computer/network node to another.
The following classes of wired technologies are used in computer networking.
Coaxial cable
is widely used for cable television systems, office buildings, and
other work-sites for local area networks. Transmission speed ranges from
200 million bits per second to more than 500 million bits per second.
Twisted pair cabling is used for wired Ethernet
and other standards. It typically consists of 4 pairs of copper cabling
that can be utilized for both voice and data transmission. The use of
two wires twisted together helps to reduce crosstalk and electromagnetic induction.
The transmission speed ranges from 2 Mbit/s to 10 Gbit/s. Twisted pair
cabling comes in two forms: unshielded twisted pair (UTP) and shielded
twisted-pair (STP). Each form comes in several category ratings, designed for use in various scenarios.
2007 map showing submarine optical fiber telecommunication cables around the world
An optical fiber is a glass fiber. It carries pulses of light that represent data via lasers and optical amplifiers.
Some advantages of optical fibers over metal wires are very low
transmission loss and immunity to electrical interference. Using dense wave division multiplexing,
optical fibers can simultaneously carry multiple streams of data on
different wavelengths of light, which greatly increases the rate at which
data can be sent, up to trillions of bits per second. Optical fibers can
be used for long runs of cable carrying very high data rates, and are
used for undersea communications cables to interconnect continents. There are two basic types of fiber optics, single-mode optical fiber (SMF) and multi-mode optical fiber
(MMF). Single-mode fiber has the advantage of being able to sustain a
coherent signal for dozens or even a hundred kilometers. Multimode fiber
is cheaper to terminate but is limited to a few hundred or even only a
few dozens of meters, depending on the data rate and cable grade.
Wireless
Computers are very often connected to networks using wireless links.
Network connections can be established wirelessly using radio or other electromagnetic means of communication.
Terrestrial microwave –
Terrestrial microwave communication uses Earth-based transmitters and
receivers resembling satellite dishes. Terrestrial microwaves are in the
low gigahertz range, which limits all communications to line-of-sight.
Relay stations are spaced approximately 40 miles (64 km) apart.
Communications satellites –
Satellites also communicate via microwave. The satellites are stationed
in space, typically in geosynchronous orbit 35,400 km (22,000 mi) above
the equator. These Earth-orbiting systems are capable of receiving and
relaying voice, data, and TV signals.
Cellular networks
use several radio communications technologies. The systems divide the
region covered into multiple geographic areas. Each area is served by a
low-power transceiver.
Radio and spread spectrum technologies –
Wireless LANs use a high-frequency radio technology similar to digital
cellular. Wireless LANs use spread spectrum technology to enable
communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi.
The last two cases have a large round-trip delay time, which gives slow two-way communication but does not prevent sending large amounts of information (they can have high throughput).
Apart from any physical transmission media, networks are built from additional basic system building blocks, such as network interface controllers, repeaters, hubs, bridges, switches, routers, modems, and firewalls. Any particular piece of equipment will frequently contain multiple building blocks and so may perform multiple functions.
An ATM network interface in the form of an accessory card. Many network interfaces are built in.
A network interface controller (NIC) is computer hardware that connects the computer to the network media
and has the ability to process low-level network information. For
example, the NIC may have a connector for plugging in a cable, or an
aerial for wireless transmission and reception, and the associated
circuitry.
In Ethernet networks, each NIC has a unique Media Access Control (MAC) address—usually stored in the controller's permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets.
The three most significant octets are reserved to identify NIC
manufacturers. These manufacturers, using only their assigned prefixes,
uniquely assign the three least-significant octets of every Ethernet
interface they produce.
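The split between the manufacturer prefix (OUI) and the manufacturer-assigned part can be sketched in Python; the example address is arbitrary:

```python
# Sketch of the address split described above: the first three octets of an
# Ethernet MAC address identify the manufacturer, the last three are assigned
# by that manufacturer.
def split_mac(mac: str):
    octets = mac.split(":")
    assert len(octets) == 6, "an Ethernet MAC address has six octets"
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, device_part = split_mac("00:1a:2b:3c:4d:5e")
print(oui, device_part)   # 00:1a:2b 3c:4d:5e
```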
A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise and regenerates it. The signal is retransmitted
at a higher power level, or to the other side of obstruction so that
the signal can cover longer distances without degradation. In most
twisted-pair Ethernet configurations, repeaters are required for cable
that runs longer than 100 meters. With fiber optics, repeaters can be
tens or even hundreds of kilometers apart.
Repeaters work on the physical layer of the OSI model but still
require a small amount of time to regenerate the signal. This can cause a
propagation delay
that affects network performance and may affect proper function. As a
result, many network architectures limit the number of repeaters used in
a network, e.g., the Ethernet 5-4-3 rule.
An Ethernet repeater with multiple ports is known as an Ethernet hub.
In addition to reconditioning and distributing network signals, a
repeater hub assists with collision detection and fault isolation for
the network. Hubs and repeaters in LANs have been largely obsoleted by
modern network switches.
Network bridges and network switches are distinct from a hub in that
they only forward frames to the ports involved in the communication
whereas a hub forwards to all ports.
Bridges only have two ports but a switch can be thought of as a
multi-port bridge. Switches normally have numerous ports, facilitating a
star topology for devices, and for cascading additional switches.
Bridges and switches operate at the data link layer (layer 2) of the OSI model and bridge traffic between two or more network segments to form a single local network. Both are devices that forward frames of data between ports based on the destination MAC address in each frame.
They learn the association of physical ports to MAC addresses by
examining the source addresses of received frames and only forward the
frame when necessary. If an unknown destination MAC is targeted, the
device broadcasts the request to all ports except the source, and
discovers the location from the reply.
Bridges and switches divide the network's collision domain but
maintain a single broadcast domain. Network segmentation through
bridging and switching helps break down a large, congested network into
an aggregation of smaller, more efficient networks.
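A minimal Python sketch of this learning-and-forwarding behavior (a simplification that ignores timeouts, VLANs, and loop prevention):

```python
# A switch records which port each source MAC was seen on, forwards a frame
# only to the learned port, and floods when the destination is unknown.
class LearningSwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def handle_frame(self, in_port: int, src_mac: str, dst_mac: str):
        self.mac_table[src_mac] = in_port           # learn the source location
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]        # forward to the known port
        return [p for p in range(self.num_ports) if p != in_port]  # flood

sw = LearningSwitch(4)
print(sw.handle_frame(0, "aa:aa", "bb:bb"))  # unknown destination: flood [1, 2, 3]
print(sw.handle_frame(1, "bb:bb", "aa:aa"))  # learned destination: [0]
```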
A typical home or small office router showing the ADSL telephone line and Ethernet network cable connections
A router is an internetworking device that forwards packets between
networks by processing the addressing or routing information included in
the packet. The routing information is often processed in conjunction
with the routing table.
A router uses its routing table to determine where to forward packets
and does not require broadcasting packets, which would be inefficient for very
big networks.
Modems (modulator-demodulator) are used to connect network nodes via
wire not originally designed for digital network traffic, or for
wireless. To do this one or more carrier signals are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission. Early modems modulated audio signals sent over a standard voice telephone line. Modems are still commonly used for telephone lines, using a digital subscriber line technology and cable television systems using DOCSIS technology.
This is an image of a firewall separating a private network from a public network
A firewall is a network device or software for controlling network
security and access rules. Firewalls are inserted in connections between
secure internal networks and potentially insecure external networks
such as the Internet. Firewalls are typically configured to reject
access requests from unrecognized sources while allowing actions from
recognized ones. The vital role firewalls play in network security grows
in parallel with the constant increase in cyber attacks.
Communication protocols
The TCP/IP model and its relation to common protocols used at different layers of the model.
Message flows between two devices (A-B) at the four layers of the TCP/IP model in the presence of a router (R). Red flows are effective communication paths, black paths are across the actual network links.
A communication protocol is a set of rules for exchanging information over a network. Communication protocols have various characteristics. They may be connection-oriented or connectionless, they may use circuit mode or packet switching, and they may use hierarchical addressing or flat addressing.
In a protocol stack,
often constructed per the OSI model, communications functions are
divided up into protocol layers, where each layer leverages the services
of the layer below it until the lowest layer controls the hardware that
sends information across the media. The use of protocol layering is
ubiquitous across the field of computer networking. An important example
of a protocol stack is HTTP (the World Wide Web protocol) running over TCP over IP (the Internet protocols) over IEEE 802.11 (the Wi-Fi protocol). This stack is used between the wireless router and the home user's personal computer when the user is surfing the web.
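The division of labor between layers can be seen in a short Python sketch: the program formulates an HTTP request (application layer) and hands it to a TCP connection (transport layer) that the operating system carries over IP, while the link layer, whether Wi-Fi or Ethernet, stays invisible below the socket interface. The host example.com is only a placeholder.

    import socket

    # HTTP (application) over TCP (transport) over IP (internet);
    # the link layer is handled by the OS and network hardware.
    host = "example.com"   # placeholder host
    with socket.create_connection((host, 80), timeout=5) as sock:   # TCP over IP
        request = (
            f"GET / HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n"
            f"\r\n"
        )
        sock.sendall(request.encode("ascii"))    # HTTP request rides on TCP
        response = b""
        while chunk := sock.recv(4096):
            response += chunk
    print(response.split(b"\r\n", 1)[0])         # e.g. b'HTTP/1.1 200 OK'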
There are many communication protocols, a few of which are described below.
The Internet protocol suite,
also called TCP/IP, is the foundation of all modern networking. It
offers connectionless and connection-oriented services over an inherently unreliable network traversed by datagram transmission using the Internet Protocol (IP). At its core, the protocol suite defines the
addressing, identification, and routing specifications for Internet Protocol Version 4 (IPv4) and for IPv6,
the next generation of the protocol with a much enlarged addressing
capability. The Internet protocol suite is the defining set of protocols
for the Internet.
IEEE 802
IEEE 802
is a family of IEEE standards dealing with local area networks and
metropolitan area networks. The complete IEEE 802 protocol suite
provides a diverse set of networking capabilities. The protocols have a
flat addressing scheme. They operate mostly at layers 1 and 2 of the OSI
model.
For example, MAC bridging (IEEE 802.1D) deals with the routing of Ethernet packets using a Spanning Tree Protocol. IEEE 802.1Q describes VLANs, and IEEE 802.1X defines a port-based network access control protocol, which forms the basis for the authentication mechanisms used in VLANs (but it is also found in WLANs) – it is what the home user sees when the user has to enter a "wireless access key".
Ethernet
Ethernet is a family of technologies used in wired LANs. It is described by a set of standards together called IEEE 802.3 published by the Institute of Electrical and Electronics Engineers.
Wireless LAN
Wireless LAN based on the IEEE 802.11 standards, also widely known as WLAN or Wi-Fi, is probably the most well-known member of the IEEE 802 protocol family for home users today. IEEE 802.11 shares many properties with wired Ethernet.
SONET/SDH
Synchronous optical networking (SONET) and Synchronous Digital Hierarchy (SDH) are standardized multiplexing
protocols that transfer multiple digital bit streams over optical fiber
using lasers. They were originally designed to transport circuit mode
communications from a variety of different sources, primarily to support
circuit-switched digital telephony.
However, due to its protocol neutrality and transport-oriented features, SONET/SDH was also the obvious choice for transporting Asynchronous Transfer Mode (ATM) frames.
Asynchronous Transfer Mode
Asynchronous Transfer Mode (ATM) is a switching technique for telecommunication networks. It uses asynchronous time-division multiplexing and encodes data into small, fixed-sized cells. This differs from other protocols such as the Internet protocol suite or Ethernet that use variable-sized packets or frames.
ATM has similarities with both circuit and packet switched networking.
This makes it a good choice for a network that must handle both
traditional high-throughput data traffic, and real-time, low-latency content such as voice and video. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the actual data exchange begins.
ATM still plays a role in the last mile, which is the connection between an Internet service provider and the home user.
Routing
calculates good paths through a network for information to take. For
example, from node 1 to node 6 the best routes are likely to be 1-8-7-6,
1-8-10-6 or 1-9-10-6, as these are the shortest routes.
Routing
is the process of selecting network paths to carry network traffic.
Routing is performed for many kinds of networks, including circuit
switching networks and packet switched networks.
In packet-switched networks, routing protocols direct packet forwarding
through intermediate nodes. Intermediate nodes are typically network
hardware devices such as routers, bridges, gateways, firewalls, or
switches. General-purpose computers can also forward packets and perform routing, although, lacking specialized hardware, they may offer limited performance. The routing process
directs forwarding on the basis of routing tables,
which maintain a record of the routes to various network destinations.
Most routing algorithms use only one network path at a time. Multipath routing techniques enable the use of multiple alternative paths.
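To make path selection concrete, the sketch below runs Dijkstra's algorithm over a small hypothetical graph with unit edge costs, arranged so that, as in the figure caption above, the shortest routes from node 1 to node 6 pass through nodes 7, 8, 9 and 10; the graph itself is invented for illustration.

    import heapq

    # Hypothetical topology with unit edge costs.
    graph = {
        1: [8, 9], 8: [1, 7, 10], 9: [1, 10],
        7: [8, 6], 10: [8, 9, 6], 6: [7, 10],
    }

    def shortest_path(graph, start, goal):
        """Dijkstra's algorithm (unit costs); returns one shortest path."""
        queue = [(0, start, [start])]
        visited = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for neighbor in graph[node]:
                if neighbor not in visited:
                    heapq.heappush(queue, (cost + 1, neighbor, path + [neighbor]))
        return None

    print(shortest_path(graph, 1, 6))   # (3, [1, 8, 7, 6])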
Routing can be contrasted with bridging in its assumption that network addresses
are structured and that similar addresses imply proximity within the
network. Structured addresses allow a single routing table entry to
represent the route to a group of devices. In large networks, the
structured addressing used by routers outperforms unstructured
addressing used by bridging. Structured IP addresses are used on the
Internet. Unstructured MAC addresses are used for bridging on Ethernet
and similar local area networks.
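The advantage of structured addresses can be demonstrated with Python's standard ipaddress module: several contiguous prefixes (invented here for illustration) collapse into a single routing-table entry, whereas flat MAC addresses would each require an entry of their own.

    import ipaddress

    # Four contiguous /26 subnets (illustrative addresses) ...
    subnets = [ipaddress.ip_network(f"203.0.113.{i}/26") for i in (0, 64, 128, 192)]

    # ... aggregate into one /24, so a single routing-table entry
    # represents the route to the whole group of devices.
    aggregated = list(ipaddress.collapse_addresses(subnets))
    print(aggregated)   # [IPv4Network('203.0.113.0/24')]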
Geographic scale
Networks may be characterized by many properties or features, such as
physical capacity, organizational purpose, user authorization, access
rights, and others. Another distinct classification method is that of
the physical extent or geographic scale.
Nanoscale network
A nanoscale network
has key components implemented at the nanoscale, including message
carriers, and leverages physical principles that differ from macroscale
communication mechanisms. Nanoscale communication extends communication
to very small sensors and actuators such as those found in biological
systems and also tends to operate in environments that would be too
harsh for other communication techniques.
Personal area network
A personal area network
(PAN) is a computer network used for communication among computers and other information technology devices close to one person. Some
examples of devices that are used in a PAN are personal computers,
printers, fax machines, telephones, PDAs, scanners, and video game
consoles. A PAN may include wired and wireless devices. The reach of a
PAN typically extends to 10 meters. A wired PAN is usually constructed with USB and FireWire connections while technologies such as Bluetooth and infrared communication typically form a wireless PAN.
Local area network
A local area network
(LAN) is a network that connects computers and devices in a limited
geographical area such as a home, school, office building, or closely
positioned group of buildings. Wired LANs are most commonly based on
Ethernet technology. Other networking technologies such as ITU-T G.hn also provide a way to create a wired LAN using existing wiring, such as coaxial cables, telephone lines, and power lines.
A LAN can be connected to a wide area network (WAN) using a router. The defining characteristics of a LAN, in contrast to a WAN, include higher data transfer rates, limited geographic range, and lack of reliance on leased lines to provide connectivity. Current Ethernet or other IEEE 802.3 LAN technologies operate at data transfer rates up to and in excess of 100 Gbit/s, standardized by IEEE in 2010.
Home area network
A home area network
(HAN) is a residential LAN used for communication between digital
devices typically deployed in the home, usually a small number of
personal computers and accessories, such as printers and mobile
computing devices. An important function is the sharing of Internet
access, often a broadband service through a cable Internet access or digital subscriber line (DSL) provider.
Storage area network
A storage area network
(SAN) is a dedicated network that provides access to consolidated,
block-level data storage. SANs are primarily used to make storage
devices, such as disk arrays, tape libraries, and optical jukeboxes,
accessible to servers so that the storage appears as locally attached
devices to the operating system. A SAN typically has its own network of
storage devices that are generally not accessible through the local area
network by other devices. The cost and complexity of SANs dropped in
the early 2000s to levels allowing wider adoption across both enterprise
and small to medium-sized business environments.
Campus area network
A campus area network
(CAN) is made up of an interconnection of LANs within a limited
geographical area. The networking equipment (switches, routers) and
transmission media (optical fiber, Cat5 cabling, etc.) are almost entirely owned by the campus tenant or owner (an enterprise, university, government, etc.).
For example, a university campus network is likely to link a
variety of campus buildings to connect academic colleges or departments,
the library, and student residence halls.
Backbone network
A backbone network
is part of a computer network infrastructure that provides a path for
the exchange of information between different LANs or subnetworks. A
backbone can tie together diverse networks within the same building,
across different buildings, or over a wide area. When designing a
network backbone, network performance and network congestion
are critical factors to take into account. Normally, the backbone
network's capacity is greater than that of the individual networks
connected to it.
For example, a large company might implement a backbone network
to connect departments that are located around the world. The equipment
that ties together the departmental networks constitutes the network
backbone. Another example of a backbone network is the Internet backbone, a massive, global system of fiber-optic cable and optical networking that carries the bulk of data between wide area networks (WANs) and metro, regional, national, and transoceanic networks.
Metropolitan area network
A metropolitan area network (MAN) is a large computer network that interconnects users with computer resources in a geographic region of the size of a metropolitan area.
Wide area network
A wide area network
(WAN) is a computer network that covers a large geographic area such as a city or country, or even spans intercontinental distances. A WAN uses a
communications channel that combines many types of media such as
telephone lines, cables, and airwaves. A WAN often makes use of
transmission facilities provided by common carriers,
such as telephone companies. WAN technologies generally function at the
lower three layers of the OSI model: the physical layer, the data link layer, and the network layer.
Enterprise private network
An enterprise private network
is a network that a single organization builds to interconnect its
office locations (e.g., production sites, head offices, remote offices,
shops) so they can share computer resources.
Virtual private network
A virtual private network (VPN) is an overlay network
in which some of the links between nodes are carried by open
connections or virtual circuits in some larger network (e.g., the
Internet) instead of by physical wires. The data link layer protocols of
the virtual network are said to be tunneled through the larger network.
One common application is secure communications through the public
Internet, but a VPN need not have explicit security features, such as
authentication or content encryption. VPNs, for example, can be used to
separate the traffic of different user communities over an underlying
network with strong security features.
A VPN may offer best-effort performance or be covered by a defined service level agreement (SLA) between the VPN customer and the VPN service provider.
Global area network
A global area network
(GAN) is a network used for supporting mobile users across an arbitrary
number of wireless LANs, satellite coverage areas, etc. The key
challenge in mobile communications is handing off communications from
one local coverage area to the next. In IEEE Project 802, this involves a
succession of terrestrial wireless LANs.
Organizational scope
Networks
are typically managed by the organizations that own them. Private
enterprise networks may use a combination of intranets and extranets.
They may also provide network access to the Internet, which has no
single owner and permits virtually unlimited global connectivity.
Intranet
An intranet
is a set of networks that are under the control of a single
administrative entity. An intranet typically uses the Internet Protocol
and IP-based tools such as web browsers and file transfer applications.
The administrative entity limits the use of the intranet to its
authorized users. Most commonly, an intranet is the internal LAN of an
organization. A large intranet typically has at least one web server to
provide users with organizational information.
Extranet
An extranet
is a network that is under the administrative control of a single
organization but supports a limited connection to a specific external
network. For example, an organization may provide access to some aspects
of its intranet to share data with its business partners or customers.
These other entities are not necessarily trusted from a security
standpoint. The network connection to an extranet is often, but not
always, implemented via WAN technology.
Internet
Partial map of the Internet based on 2005 data. Each line is drawn between two nodes, representing two IP addresses. The length of the lines indicates the delay between those two nodes.
An internetwork is the connection of multiple computer networks of different types to form a single computer network, using higher-layer network protocols and connecting the networks together with routers.
The Internet
is the largest example of an internetwork. It is a global system of
interconnected governmental, academic, corporate, public, and private
computer networks. It is based on the networking technologies of the
Internet protocol suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the United States Department of Defense. The Internet utilizes copper communications and an optical networking backbone to enable the World Wide Web (WWW), the Internet of things, video transfer, and a broad range of information services.
Participants on the Internet use a diverse array of several hundred documented, and often standardized, protocols compatible with the Internet protocol suite and the IP addressing system
administered by the Internet Assigned Numbers Authority and address registries. Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths.
Darknet
A darknet
is an overlay network, typically running on the Internet, that is only
accessible through specialized software. It is an anonymizing network
where connections are made only between trusted peers — sometimes called
friends (F2F) — using non-standard protocols and ports.
Darknets are distinct from other distributed peer-to-peer networks as sharing
is anonymous (that is, IP addresses are not publicly shared), and
therefore users can communicate with little fear of governmental or
corporate interference.
Network service
Network services are applications hosted by servers on a computer network, to provide some functionality for members or users of the network, or to help the network itself to operate.
Services are usually based on a service protocol that defines the format and sequencing of messages between clients and servers of that network service.
Network delay
Network delay is a design and performance characteristic of a telecommunications network. It specifies the latency for a bit of data to travel across the network from one communication endpoint
to another. Delay may differ slightly, depending on the location of the
specific pair of communicating endpoints. Engineers usually report both
the maximum and average delay, and they divide the delay into several
components, the sum of which is the total delay:
Processing delay – time it takes a router to process the packet header
Queuing delay – time the packet spends in routing queues
Transmission delay – time it takes to push the packet's bits onto the link
Propagation delay – time for a signal to propagate through the media
A certain minimum level of delay is experienced by signals due to the time it takes to transmit a packet serially through a link. This delay is extended by more variable levels of delay due to network congestion. IP network delays can range from less than a microsecond to several hundred milliseconds.
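A back-of-the-envelope calculation shows the relative size of these components for a single hop; the link speed, distance, and packet size below are illustrative assumptions, not measurements.

    # Rough, illustrative delay calculation for one link.
    packet_size_bits = 1500 * 8         # a full Ethernet frame, in bits
    link_rate_bps    = 1_000_000_000    # assumed 1 Gbit/s link
    distance_m       = 100_000          # assumed 100 km of fiber
    signal_speed_mps = 2e8              # roughly 2/3 the speed of light, in fiber

    transmission_delay = packet_size_bits / link_rate_bps   # time to push the bits onto the link
    propagation_delay  = distance_m / signal_speed_mps      # time for the signal to travel

    print(f"transmission: {transmission_delay * 1e6:.1f} us")   # 12.0 us
    print(f"propagation:  {propagation_delay * 1e6:.1f} us")    # 500.0 us

Processing and queuing delay are added on top of these and vary with router load, which is one reason engineers quote both average and maximum figures.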
In circuit-switched networks, network performance is synonymous with the grade of service. The number of rejected calls is a measure of how well the network is performing under heavy traffic loads.[81] Other types of performance measures can include the level of noise and echo.
In an Asynchronous Transfer Mode
(ATM) network, performance can be measured by line rate, quality of
service (QoS), data throughput, connect time, stability, technology,
modulation technique, and modem enhancements.
There are many ways to measure the performance of a network, as
each network is different in nature and design. Performance can also be
modeled instead of measured. For example, state transition diagrams
are often used to model queuing performance in a circuit-switched
network. The network planner uses these diagrams to analyze how the
network performs in each state, ensuring that the network is optimally
designed.
Network congestion
Network congestion
occurs when a link or node is subjected to a greater data load than it
is rated for, resulting in a deterioration of its quality of service.
When networks are congested and queues become too full, packets have to
be discarded, and participants must rely on retransmission to maintain reliable communications. Typical effects of congestion include queueing delay, packet loss or the blocking of new connections. A consequence of these latter two is that incremental increases in offered load lead either to only a small increase in the network throughput or to a potential reduction in network throughput.
Network protocols
that use aggressive retransmissions to compensate for packet loss tend
to keep systems in a state of network congestion even after the initial
load is reduced to a level that would not normally induce network
congestion. Thus, networks using these protocols can exhibit two stable
states under the same level of load. The stable state with low
throughput is known as congestive collapse.
Modern networks use congestion control, congestion avoidance and traffic control
techniques where endpoints typically slow down or sometimes even stop
transmission entirely when the network is congested to try to avoid
congestive collapse. Specific techniques include: exponential backoff in protocols such as 802.11's CSMA/CA and the original Ethernet, window reduction in TCP, and fair queueing in devices such as routers.
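A simplified sketch of the exponential backoff idea is shown below; the slot time and retry cap are illustrative values, not taken from any particular standard.

    import random

    def backoff_delay(attempt, slot_time=0.001, max_exponent=10):
        """Pick a random wait before retrying; the window doubles with every
        failed attempt, so senders back off harder as congestion persists.
        The slot time and cap are illustrative, not from a standard."""
        exponent = min(attempt, max_exponent)
        slots = random.randint(0, 2 ** exponent - 1)
        return slots * slot_time

    for attempt in range(1, 6):
        print(f"attempt {attempt}: window up to {(2 ** attempt - 1) * 0.001:.3f}s, "
              f"chose {backoff_delay(attempt):.4f}s")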
Another method to avoid the negative effects of network congestion is implementing quality of service
priority schemes allowing selected traffic to bypass congestion.
Priority schemes do not solve network congestion by themselves, but they
help to alleviate the effects of congestion for critical services. A
third method to avoid network congestion is the explicit allocation of
network resources to specific flows. One example of this is the use of
Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T G.hn home networking standard.
For the Internet, RFC 2914 addresses the subject of congestion control in detail.
Network resilience
Network resilience is "the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation."
Network security
Network security consists of provisions and policies adopted by the network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and its network-accessible resources.[85]
Network security is used on a variety of computer networks, both public
and private, to secure daily transactions and communications among
businesses, government agencies, and individuals.
Network surveillance
Network surveillance
is the monitoring of data being transferred over computer networks such
as the Internet. The monitoring is often done surreptitiously and may
be done by or at the behest of governments, by corporations, criminal
organizations, or individuals. It may or may not be legal and may or may
not require authorization from a court or other independent agency.
Computer and network surveillance programs are widespread today,
and almost all Internet traffic is or could potentially be monitored for
clues to illegal activity.
End-to-end encryption (E2EE) is a digital communications paradigm of uninterrupted protection of data traveling between two communicating parties. It involves the originating party encrypting
data so only the intended recipient can decrypt it, with no dependency
on third parties. End-to-end encryption prevents intermediaries, such as
Internet service providers or application service providers, from reading or tampering with communications. End-to-end encryption generally protects both confidentiality and integrity.
Typical server-based
communications systems do not include end-to-end encryption. These
systems can only guarantee the protection of communications between clients and servers, not between the communicating parties themselves. Examples of non-E2EE systems are Google Talk, Yahoo Messenger, Facebook, and Dropbox.
The end-to-end encryption paradigm does not directly address
risks at the endpoints of the communication themselves, such as the technical exploitation of clients, poor quality random number generators, or key escrow. E2EE also does not address traffic analysis, which relates to things such as the identities of the endpoints and the times and quantities of messages that are sent.
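A minimal sketch of the end-to-end idea, using public-key cryptography from the third-party PyNaCl library (chosen here only for illustration), shows that any server relaying the message sees ciphertext only; the names Alice and Bob are the usual placeholders.

    # Sketch of end-to-end encryption with public-key cryptography (PyNaCl).
    # Only Bob's private key can decrypt what is encrypted to Bob's public key,
    # so relaying servers learn nothing about the plaintext.
    from nacl.public import PrivateKey, Box

    alice_private = PrivateKey.generate()
    bob_private = PrivateKey.generate()

    # Each party shares only its public key with the other.
    alice_box = Box(alice_private, bob_private.public_key)   # Alice encrypts to Bob
    ciphertext = alice_box.encrypt(b"meet at noon")

    # Intermediaries forward only ciphertext; Bob decrypts at his own endpoint.
    bob_box = Box(bob_private, alice_private.public_key)
    print(bob_box.decrypt(ciphertext))   # b'meet at noon'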
The introduction and rapid growth of e-commerce on the World Wide Web
in the mid-1990s made it obvious that some form of authentication and
encryption was needed. Netscape took the first shot at a new standard. At the time, the dominant web browser was Netscape Navigator.
Netscape created a standard called secure socket layer (SSL). SSL
requires a server with a certificate. When a client requests access to
an SSL-secured server, the server sends a copy of the certificate to the
client. The SSL client checks this certificate (all web browsers come with an extensive list of trusted root certificates preloaded), and if the certificate checks out, the server is authenticated and the client negotiates a symmetric-key cipher for use in the session. The session now takes place over an encrypted tunnel between the SSL server and the SSL client.
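The modern descendant of that handshake can be exercised from Python's standard ssl module; the sketch below, with example.com as a placeholder host, relies on the interpreter's bundled store of trusted root certificates to validate the server before the encrypted session is used.

    import socket
    import ssl

    host = "example.com"   # placeholder host for illustration

    context = ssl.create_default_context()   # loads the trusted root certificates

    with socket.create_connection((host, 443)) as raw_sock:
        # The TLS handshake: the server's certificate is verified, then a
        # symmetric cipher is negotiated to protect the rest of the session.
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            print(tls_sock.version())                  # e.g. 'TLSv1.3'
            print(tls_sock.cipher())                   # negotiated cipher suite
            print(tls_sock.getpeercert()["subject"])   # identity named in the certificate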
Views of networks
Users
and network administrators typically have different views of their
networks. Users can share printers and some servers from a workgroup,
which usually means they are in the same geographic location and are on
the same LAN, whereas a network administrator is responsible for
keeping that network up and running. A community of interest is less tied to a local area and should be thought of as a set of arbitrarily located users who share a set of servers, and who possibly also communicate via peer-to-peer technologies.
Network administrators can see networks from both physical and
logical perspectives. The physical perspective involves geographic
locations, physical cabling, and the network elements (e.g., routers,
bridges and application-layer gateways) that interconnect via the transmission media. Logical networks, called subnets in the TCP/IP architecture,
map onto one or more transmission media. For example, a common practice
in a campus of buildings is to make a set of LAN cables in each
building appear to be a common subnet, using VLANs.
Users and administrators are aware, to varying extents, of a
network's trust and scope characteristics. Again using TCP/IP
architectural terminology, an intranet
is a community of interest under private administration usually by an
enterprise, and is only accessible by authorized users (e.g. employees). Intranets do not have to be connected to the Internet, but generally have a limited connection. An extranet
is an extension of an intranet that allows secure communications to
users outside of the intranet (e.g. business partners, customers).
Unofficially, the Internet is the set of users, enterprises, and
content providers that are interconnected by Internet Service Providers
(ISP). From an engineering viewpoint, the Internet is the set of
subnets, and aggregates of subnets, that share the registered IP address space and exchange information about the reachability of those IP addresses using the Border Gateway Protocol. Typically, the human-readable names of servers are translated to IP addresses, transparently to users, via the directory function of the Domain Name System (DNS).
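That translation is visible from the standard resolver interface of most languages; for example, in Python (example.com is a placeholder name):

    import socket

    # Ask the system's DNS resolver for the addresses behind a host name.
    host = "example.com"   # placeholder name
    addresses = {info[4][0] for info in socket.getaddrinfo(host, None)}
    print(addresses)       # the IPv4/IPv6 addresses the name resolves to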
Over the Internet, there can be business-to-business, business-to-consumer and consumer-to-consumer communications. When money or sensitive information is exchanged, the communications are apt to be protected by some form of communications security
mechanism. Intranets and extranets can be securely superimposed onto
the Internet, without any access by general Internet users and
administrators, using secure VPN technology.