
Wednesday, June 29, 2022

History of the Internet

The history of the Internet has its origin in information theory and the efforts to build and interconnect computer networks that arose from research and development in the United States and involved international collaboration, particularly with researchers in the United Kingdom and France.

Fundamental theoretical work on information theory was developed by Harry Nyquist and Ralph Hartley in the 1920s. Information theory, as enunciated by Claude Shannon in the 1940s, provided a firm theoretical underpinning to understand the tradeoffs between signal-to-noise ratios, bandwidth and error-free transmission in the presence of noise in telecommunications technology. This was one of the three key developments, along with advances in transistor technology (specifically MOS transistors) and laser technology, that made possible the rapid growth of telecommunication bandwidth over the next half-century.

Computer science was an emerging discipline in the late 1950s that began to consider time-sharing between computer users, and later, the possibility of achieving this over wide area networks. Independently, Paul Baran proposed a distributed network based on data in message blocks in the early 1960s, and Donald Davies conceived of packet switching in 1965 at the National Physical Laboratory (NPL), proposing a national commercial data network in the UK.

The Advanced Research Projects Agency (ARPA) of the U.S. Department of Defense awarded contracts in 1969 for the development of the ARPANET project, directed by Robert Taylor and managed by Lawrence Roberts. ARPANET adopted the packet switching technology proposed by Davies and Baran, underpinned by mathematical work in the early 1970s by Leonard Kleinrock at UCLA. The network was built by Bolt, Beranek, and Newman.

Several early packet-switched networks emerged in the 1970s which researched and provided data networking. ARPA projects and international working groups led to the development of protocols for internetworking, in which multiple separate networks could be joined into a network of networks, which produced various standards. Bob Kahn, at ARPA, and Vint Cerf, at Stanford University, published research in 1974 that evolved into the Transmission Control Protocol (TCP) and Internet Protocol (IP), the two protocols of the Internet protocol suite. The design included concepts from the French CYCLADES project directed by Louis Pouzin.

In the early 1980s, the National Science Foundation (NSF) funded national supercomputing centers at several universities in the United States, and provided interconnectivity in 1986 with the NSFNET project, thus creating network access to these supercomputer sites for research and academic organizations in the United States. International connections to NSFNET, the emergence of architecture such as the Domain Name System, and the adoption of TCP/IP internationally on existing networks marked the beginnings of the Internet. Commercial Internet service providers (ISPs) emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. Limited private connections to parts of the Internet by officially commercial entities emerged in several American cities by late 1989 and 1990. The optical backbone of the NSFNET was decommissioned in 1995, removing the last restrictions on the use of the Internet to carry commercial traffic, as traffic transitioned to optical networks managed by Sprint, MCI and AT&T.

Research at CERN in Switzerland by the British computer scientist Tim Berners-Lee in 1989–90 resulted in the World Wide Web, linking hypertext documents into an information system, accessible from any node on the network. The dramatic expansion of capacity of the Internet with the advent of wave division multiplexing (WDM) and the roll out of fiber optic cables in the mid-1990s had a revolutionary impact on culture, commerce, and technology. This made possible the rise of near-instant communication by electronic mail, instant messaging, voice over Internet Protocol (VoIP) telephone calls, video chat, and the World Wide Web with its discussion forums, blogs, social networking services, and online shopping sites. Increasing amounts of data are transmitted at higher and higher speeds over fiber-optic networks operating at 1 Gbit/s, 10 Gbit/s, and 800 Gbit/s by 2019. The Internet's takeover of the global communication landscape was rapid in historical terms: it only communicated 1% of the information flowing through two-way telecommunications networks in the year 1993, 51% by 2000, and more than 97% of the telecommunicated information by 2007. The Internet continues to grow, driven by ever greater amounts of online information, commerce, entertainment, and social networking services. However, the future of the global network may be shaped by regional differences.

Foundations

Precursors

Data communication

The concept of data communication – transmitting data between two different places through an electromagnetic medium such as radio or an electric wire – pre-dates the introduction of the first computers. Such communication systems were typically limited to point-to-point communication between two end devices. Semaphore lines, telegraph systems and telex machines can be considered early precursors of this kind of communication. The telegraph in the late 19th century was the first fully digital communication system.

Information theory

Fundamental theoretical work on information theory was developed by Harry Nyquist and Ralph Hartley in the 1920s. Information theory, as enunciated by Claude Shannon in 1948, provided a firm theoretical underpinning to understand the trade-offs between signal-to-noise ratio, bandwidth, and error-free transmission in the presence of noise in telecommunications technology. This was one of the three key developments, along with advances in transistor technology (specifically MOS transistors) and laser technology, that made possible the rapid growth of telecommunication bandwidth over the next half-century.

Computers

Early computers in the 1940s had a central processing unit and user terminals. As the technology evolved in the 1950s, new systems were devised to allow communication over longer distances (for terminals) or with higher speed (for interconnection of local devices) that were necessary for the mainframe computer model. These technologies made it possible to exchange data (such as files) between remote computers. However, the point-to-point communication model was limited, as it did not allow for direct communication between any two arbitrary systems; a physical link was necessary. The technology was also considered vulnerable for strategic and military use because there were no alternative paths for the communication in case of a broken link.

Inspiration for networking and interaction with computers

The earliest computers were connected directly to terminals used by an individual user. Christopher Strachey, who became Oxford University's first Professor of Computation, filed a patent application for time-sharing in February 1959. In June that year, he gave a paper "Time Sharing in Large Fast Computers" at the UNESCO Information Processing Conference in Paris where he passed the concept on to J. C. R. Licklider. Licklider, vice president at Bolt Beranek and Newman, Inc., went on to propose a computer network in his January 1960 paper Man-Computer Symbiosis:

A network of such centers, connected to one another by wide-band communication lines [...] the functions of present-day libraries together with anticipated advances in information storage and retrieval and symbiotic functions suggested earlier in this paper

In August 1962, Licklider and Welden Clark published the paper "On-Line Man-Computer Communication" which was one of the first descriptions of a networked future.

In October 1962, Licklider was hired by Jack Ruina as director of the newly established Information Processing Techniques Office (IPTO) within DARPA, with a mandate to interconnect the United States Department of Defense's main computers at Cheyenne Mountain, the Pentagon, and SAC HQ. There he formed an informal group within DARPA to further computer research. He began by writing memos in 1963 describing a distributed network to the IPTO staff, whom he called "Members and Affiliates of the Intergalactic Computer Network".

Although he left the IPTO in 1964, five years before the ARPANET went live, it was his vision of universal networking that provided the impetus for one of his successors, Robert Taylor, to initiate the ARPANET development. Licklider later returned to lead the IPTO in 1973 for two years.

Packet switching

The issue of connecting separate physical networks to form one logical network was the first of many problems. Early networks used message-switched systems that required rigid routing structures prone to a single point of failure. In the 1960s, Paul Baran of the RAND Corporation produced a study of survivable networks for the U.S. military in the event of nuclear war. Information transmitted across Baran's network would be divided into what he called "message blocks". Independently, Donald Davies (National Physical Laboratory, UK) proposed and put into practice a local area network based on what he called packet switching, the term that would ultimately be adopted.

Packet switching is a rapid store and forward networking design that divides messages up into arbitrary packets, with routing decisions made per-packet. It provides better bandwidth utilization and response times than the traditional circuit-switching technology used for telephony, particularly on resource-limited interconnection links.
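
As a rough illustration of the idea (a minimal sketch, not any historical implementation), the following Python snippet splits a message into small numbered packets and reassembles it at the destination even when the packets arrive out of order; the packet size and the message are arbitrary choices.

```python
# Minimal sketch of packetizing: split a message into independently
# deliverable packets and rebuild it regardless of arrival order.
PACKET_SIZE = 8  # bytes of payload per packet (arbitrary for illustration)

def packetize(message: bytes):
    """Split a message into (offset, payload) packets."""
    return [(i, message[i:i + PACKET_SIZE])
            for i in range(0, len(message), PACKET_SIZE)]

def reassemble(packets):
    """Rebuild the message from packets that may arrive in any order."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"Packets may take different routes through the network."
packets = packetize(message)
packets.reverse()  # simulate out-of-order arrival
assert reassemble(packets) == message
```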

Networks that led to the Internet

NPL network

Following discussions with J. C. R. Licklider in 1965, Donald Davies became interested in data communications for computer networks. Later that year, at the National Physical Laboratory in the United Kingdom, Davies designed and proposed a national commercial data network based on packet switching. The following year, he described the use of an "Interface computer" to act as a router. The proposal was not taken up nationally, but he produced a design for a local network to serve the needs of NPL and prove the feasibility of packet switching using high-speed data transmission. To deal with packet permutations (due to dynamically updated route preferences) and datagram losses (unavoidable when fast sources send to slow destinations), he assumed that "all users of the network will provide themselves with some kind of error control", thus inventing what came to be known as the end-to-end principle. He and his team were among the first to use the term 'protocol' in a data communication context in 1967. The network's development was described at a 1968 conference.

By 1968, Davies had begun building the Mark I packet-switched network to meet the needs of the multidisciplinary laboratory and prove the technology under operational conditions. The NPL local network and the ARPANET were the first two networks in the world to use packet switching, and the NPL network was the first to use high-speed links. Many other packet switching networks built in the 1970s were similar "in nearly all respects" to Davies' original 1965 design. The NPL team carried out simulation work on packet networks, including datagram networks, and research into internetworking and computer network security. The Mark II version which operated from 1973 used a layered protocol architecture. In 1976, 12 computers and 75 terminal devices were attached, and more were added until the network was replaced in 1986.

ARPANET

Robert Taylor was promoted to the head of the Information Processing Techniques Office (IPTO) at Defense Advanced Research Projects Agency (DARPA) in 1966. He intended to realize Licklider's ideas of an interconnected networking system. As part of the IPTO's role, three network terminals had been installed: one for System Development Corporation in Santa Monica, one for Project Genie at University of California, Berkeley, and one for the Compatible Time-Sharing System project at Massachusetts Institute of Technology (MIT). To Taylor, the need for networking became obvious from the waste of resources apparent to him:

For each of these three terminals, I had three different sets of user commands. So if I was talking online with someone at S.D.C. and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them.... I said, oh man, it's obvious what to do: If you have these three terminals, there ought to be one terminal that goes anywhere you want to go where you have interactive computing. That idea is the ARPAnet.

Bringing in Larry Roberts from MIT in January 1967, he initiated a project to build such a network. Roberts and Thomas Merrill had been researching computer time-sharing over wide area networks (WANs). Wide area networks emerged during the 1950s and became established during the 1960s. At the first ACM Symposium on Operating Systems Principles in October 1967, Roberts presented a proposal for the "ARPA net", based on Wesley Clark's proposal to use Interface Message Processors to create a message switching network. At the conference, Roger Scantlebury presented Donald Davies' work on packet switching for data communications and mentioned the work of Paul Baran at RAND. Roberts incorporated the packet switching concepts into the ARPANET design and upgraded the proposed communications speed from 2.4 kbps to 50 kbps. Leonard Kleinrock subsequently developed the mathematical theory behind the performance of this technology, building on his earlier work on queueing theory.

ARPA awarded the contract to build the network to Bolt Beranek & Newman, and the first ARPANET link was established between the University of California, Los Angeles (UCLA) and the Stanford Research Institute at 22:30 hours on October 29, 1969.

"We set up a telephone connection between us and the guys at SRI ...", Kleinrock ... said in an interview: "We typed the L and we asked on the phone,

"Do you see the L?"
"Yes, we see the L," came the response.
We typed the O, and we asked, "Do you see the O."
"Yes, we see the O."
Then we typed the G, and the system crashed ...

Yet a revolution had begun" ....

35 Years of the Internet, 1969–2004. Stamp of Azerbaijan, 2004.

By December 1969, a four-node network was connected by adding the University of Utah and the University of California, Santa Barbara. In the same year, Taylor helped fund ALOHAnet, a system designed by professor Norman Abramson and others at the University of Hawaii at Manoa that transmitted data by radio between seven computers on four islands in Hawaii. The software for establishing links between network sites in the ARPANET was the Network Control Program (NCP), completed c. 1970.

ARPANET development was centered around the Request for Comments (RFC) process, still used today for proposing and distributing Internet protocols and systems. RFC 1, entitled "Host Software", was written by Steve Crocker from the University of California, Los Angeles, and published on April 7, 1969. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing.

Early international collaborations on ARPANET were sparse. Connections were made in 1973 to the Norwegian Seismic Array (NORSAR), via a satellite link at the Tanum Earth Station in Sweden, and to Peter Kirstein's research group at University College London which provided a gateway to British academic networks. By 1981, the number of hosts had grown to 213. ARPANET became the technical core of what would become the Internet, and a primary tool in developing the technologies used.

Merit Network

The Merit Network was formed in 1966 as the Michigan Educational Research Information Triad to explore computer networking between three of Michigan's public universities as a means to help the state's educational and economic development. With initial support from the State of Michigan and the National Science Foundation (NSF), the packet-switched network was first demonstrated in December 1971 when an interactive host-to-host connection was made between the IBM mainframe computer systems at the University of Michigan in Ann Arbor and Wayne State University in Detroit. In October 1972, connections to the CDC mainframe at Michigan State University in East Lansing completed the triad. Over the next several years, in addition to host-to-host interactive connections, the network was enhanced to support terminal-to-host connections, host-to-host batch connections (remote job submission, remote printing, batch file transfer), interactive file transfer, gateways to the Tymnet and Telenet public data networks, X.25 host attachments, gateways to X.25 data networks, Ethernet-attached hosts, and eventually TCP/IP; additional public universities in Michigan also joined the network. All of this set the stage for Merit's role in the NSFNET project starting in the mid-1980s.

CYCLADES

The CYCLADES packet switching network was a French research network designed and directed by Louis Pouzin. He developed the network to explore alternatives to the early ARPANET design and to support internetworking research. First demonstrated in 1973, it was the first network to implement the end-to-end principle conceived by Donald Davies and to make the hosts responsible for reliable delivery of data, rather than the network itself, using unreliable datagrams. Concepts implemented in this network influenced the TCP/IP architecture.

X.25 and public data networks

Based on international research initiatives, particularly the contributions of Rémi Després, packet switching network standards were developed by the International Telegraph and Telephone Consultative Committee (ITU-T) in the form of X.25 and related standards. X.25 is built on the concept of virtual circuits emulating traditional telephone connections. In 1974, X.25 formed the basis for the SERCnet network between British academic and research sites, which later became JANET. The initial ITU Standard on X.25 was approved in March 1976.

The British Post Office, Western Union International and Tymnet collaborated to create the first international packet switched network, referred to as the International Packet Switched Service (IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong, and Australia by 1981. By the 1990s it provided a worldwide networking infrastructure.

Unlike ARPANET, X.25 was commonly available for business use. Telenet offered its Telemail electronic mail service, which was also targeted to enterprise use rather than the general email system of the ARPANET.

The first public dial-in networks used asynchronous TTY terminal protocols to reach a concentrator operated in the public network. Some networks, such as Telenet and CompuServe, used X.25 to multiplex the terminal sessions into their packet-switched backbones, while others, such as Tymnet, used proprietary protocols. In 1979, CompuServe became the first service to offer electronic mail capabilities and technical support to personal computer users. The company broke new ground again in 1980 as the first to offer real-time chat with its CB Simulator. Other major dial-in networks were America Online (AOL) and Prodigy that also provided communications, content, and entertainment features. Many bulletin board system (BBS) networks also provided on-line access, such as FidoNet which was popular amongst hobbyist computer users, many of them hackers and amateur radio operators.

UUCP and Usenet

In 1979, two students at Duke University, Tom Truscott and Jim Ellis, originated the idea of using Bourne shell scripts to transfer news and messages on a serial line UUCP connection with nearby University of North Carolina at Chapel Hill. Following the public release of the software in 1980, the mesh of UUCP hosts forwarding Usenet news rapidly expanded. UUCPnet, as it would later be named, also created gateways and links between FidoNet and dial-up BBS hosts. UUCP networks spread quickly due to the lower costs involved, the ability to use existing leased lines, X.25 links or even ARPANET connections, and the lack of strict use policies compared to later networks like CSNET and Bitnet. All connections were local. By 1981 the number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984.

Sublink Network, operating since 1987 and officially founded in Italy in 1989, based its interconnectivity upon UUCP to redistribute mail and newsgroup messages throughout its Italian nodes (about 100 at the time), owned both by private individuals and small companies. Sublink Network was possibly one of the first examples of Internet technology advancing through popular diffusion.

1973–1989: Merging the networks and creating the Internet

Map of the TCP/IP test network in February 1982

TCP/IP

First Internet demonstration, linking the ARPANET, PRNET, and SATNET on November 22, 1977

With so many different network methods, something was needed to unify them. Steve Crocker had formed a "Networking Working Group" at University of California, Los Angeles in 1969. Louis Pouzin initiated the CYCLADES project in 1971, building on the work of Donald Davies; Pouzin coined the term catenet for concatenated network. An International Networking Working Group formed in 1972; active members included Vint Cerf from Stanford University, Alex McKenzie from BBN, Donald Davies and Roger Scantlebury from NPL, and Louis Pouzin and Hubert Zimmermann from IRIA. Later that year, Bob Kahn of DARPA recruited Vint Cerf to work with him on the problem. By 1973, these groups had worked out a fundamental reformulation, where the differences between network protocols were hidden by using a common internetwork protocol, and instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible.

Kahn and Cerf published their ideas in May 1974, which incorporated concepts implemented by Louis Pouzin and Hubert Zimmermann in the CYCLADES network. The specification of the resulting protocol, the Transmission Control Program, was published as RFC 675 by the Network Working Group in December 1974. It contains the first attested use of the term internet, as a shorthand for internetwork. This software was monolithic in design using two simplex communication channels for each user session.

With the role of the network reduced to a core of functionality, it became possible to exchange traffic with other networks independently from their detailed characteristics, thereby solving the fundamental problems of internetworking. DARPA agreed to fund development of prototype software. Testing began in 1975 through concurrent implementations at Stanford, BBN and University College London. After several years of work, the first demonstration of a gateway between the Packet Radio network (PRNET) in the SF Bay area and the ARPANET was conducted by the Stanford Research Institute. On November 22, 1977, a three network demonstration was conducted including the ARPANET, the SRI's Packet Radio Van on the Packet Radio Network and the Atlantic Packet Satellite Network (SATNET).

The software was redesigned as a modular protocol stack, using full-duplex channels; between 1976 and 1977, Yogen Dalal and Robert Metcalfe, among others, proposed separating TCP's routing and transmission control functions into two discrete layers, which led to the splitting of the Transmission Control Program into the Transmission Control Protocol (TCP) and the Internet Protocol (IP) in version 3 in 1978. Originally referred to as IP/TCP, version 4 was described in IETF publications RFC 791 (September 1981), RFC 792 and RFC 793. It was installed on SATNET in 1982 and the ARPANET in January 1983 after the DoD made it standard for all military computer networking. This resulted in a networking model that became known informally as TCP/IP. It was also referred to as the Department of Defense (DoD) model, DARPA model, or ARPANET model. Cerf credits his graduate students Yogen Dalal, Carl Sunshine, Judy Estrin, Richard Karp, and Gérard Le Lann with important work on the design and testing. DARPA sponsored or encouraged the development of TCP/IP implementations for many operating systems.

Nonetheless, for a period in the late 1980s and early 1990s, engineers, organizations and nations were polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks.

Decomposition of the quad-dotted IPv4 address representation to its binary value
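
The decomposition the caption above refers to can be reproduced in a few lines of Python; the address used here is an arbitrary example and the helper function is purely illustrative.

```python
# Convert a dotted-quad IPv4 address into its four 8-bit binary octets,
# i.e. the underlying 32-bit value the notation represents.
def ipv4_to_binary(address: str) -> str:
    octets = [int(part) for part in address.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    return ".".join(f"{o:08b}" for o in octets)

print(ipv4_to_binary("172.16.254.1"))  # 10101100.00010000.11111110.00000001
```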

From ARPANET to NSFNET

BBN Technologies TCP/IP Internet map of early 1986.

After the ARPANET had been up and running for several years, ARPA looked for another agency to hand off the network to; ARPA's primary mission was funding cutting-edge research and development, not running a communications utility. Eventually, in July 1975, the network was turned over to the Defense Communications Agency, also part of the Department of Defense. In 1983, the U.S. military portion of the ARPANET was broken off as a separate network, the MILNET. MILNET subsequently became the unclassified but military-only NIPRNET, in parallel with the SECRET-level SIPRNET and JWICS for TOP SECRET and above. NIPRNET does have controlled security gateways to the public Internet.

The networks based on the ARPANET were government funded and therefore restricted to noncommercial uses such as research; unrelated commercial use was strictly forbidden. This initially restricted connections to military sites and universities. During the 1980s, the connections expanded to more educational institutions, which began to form networks of fiber optic lines, and to a growing number of companies, such as Digital Equipment Corporation and Hewlett-Packard, which were participating in research projects or providing services to those who were. Data transmission speeds depended upon the type of connection, the slowest being analog telephone lines and the fastest using optical networking technology.

Several other branches of the U.S. government, the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), and the Department of Energy (DOE) became heavily involved in Internet research and started development of a successor to ARPANET. In the mid-1980s, all three of these branches developed the first Wide Area Networks based on TCP/IP. NASA developed the NASA Science Network, NSF developed CSNET and DOE evolved the Energy Sciences Network or ESNet.

T3 NSFNET Backbone, c. 1992

NASA developed the TCP/IP based NASA Science Network (NSN) in the mid-1980s, connecting space scientists to data and information stored anywhere in the world. In 1989, the DECnet-based Space Physics Analysis Network (SPAN) and the TCP/IP-based NASA Science Network (NSN) were brought together at NASA Ames Research Center creating the first multiprotocol wide area network called the NASA Science Internet, or NSI. NSI was established to provide a totally integrated communications infrastructure to the NASA scientific community for the advancement of earth, space and life sciences. As a high-speed, multiprotocol, international network, NSI provided connectivity to over 20,000 scientists across all seven continents.

In 1981 NSF supported the development of the Computer Science Network (CSNET). CSNET connected with ARPANET using TCP/IP, and ran TCP/IP over X.25, but it also supported departments without sophisticated network connections, using automated dial-up mail exchange.

In 1986, the NSF created NSFNET, a 56 kbit/s backbone to support the NSF-sponsored supercomputing centers. The NSFNET also provided support for the creation of regional research and education networks in the United States, and for the connection of university and college campus networks to the regional networks. The use of NSFNET and the regional networks was not limited to supercomputer users and the 56 kbit/s network quickly became overloaded. NSFNET was upgraded to 1.5 Mbit/s in 1988 under a cooperative agreement with the Merit Network in partnership with IBM, MCI, and the State of Michigan. The existence of NSFNET and the creation of Federal Internet Exchanges (FIXes) allowed the ARPANET to be decommissioned in 1990.

NSFNET was expanded and upgraded in 1991 to dedicated fiber, optical lasers and optical amplifier systems capable of delivering T3 start-up speeds of 45 Mbit/s. However, the T3 transition by MCI took longer than expected, allowing Sprint to establish a coast-to-coast long-distance commercial Internet service. When NSFNET was decommissioned in 1995, its optical networking backbones were handed off to several commercial Internet service providers, including MCI, PSINet and Sprint. As a result, when the handoff was complete, Sprint and its Washington DC Network Access Points began to carry Internet traffic, and by 1996, Sprint was the world's largest carrier of Internet traffic.

The research and academic community continues to develop and use advanced networks such as Internet2 in the United States and JANET in the United Kingdom.

Transition towards the Internet

The term "internet" was reflected in the first RFC published on the TCP protocol (RFC 675: Internet Transmission Control Program, December 1974) as a short form of internetworking, when the two terms were used interchangeably. In general, an internet was a collection of networks linked by a common protocol. In the time period when the ARPANET was connected to the newly formed NSFNET project in the late 1980s, the term was used as the name of the network, Internet, being the large and global TCP/IP network.

Opening the Internet and the fiber optic backbone to corporate and consumer use increased demand for network capacity. The expense and delay of laying new fiber led providers to test a fiber bandwidth expansion alternative that had been pioneered in the late 1970s by Optelecom using “interactions between light and matter, such as lasers and optical devices used for optical amplification and wave mixing”. This technology became known as wave division multiplexing (WDM). Bell Labs deployed a 4-channel WDM system in 1995. To develop a mass capacity (dense) WDM system, Optelecom and its former head of Light Systems Research, David R. Huber, formed a new venture, Ciena Corp., that deployed the world's first dense WDM system on the Sprint fiber network in June 1996. This was referred to as the real start of optical networking.

As interest in networking grew by needs of collaboration, exchange of data, and access of remote computing resources, the Internet technologies spread throughout the rest of the world. The hardware-agnostic approach in TCP/IP supported the use of existing network infrastructure, such as the International Packet Switched Service (IPSS) X.25 network, to carry Internet traffic.

Many sites unable to link directly to the Internet created simple gateways for the transfer of electronic mail, the most important application of the time. Sites with only intermittent connections used UUCP or FidoNet and relied on the gateways between these networks and the Internet. Some gateway services went beyond simple mail peering, such as allowing access to File Transfer Protocol (FTP) sites via UUCP or mail.

Finally, routing technologies were developed for the Internet to remove the remaining centralized routing aspects. The Exterior Gateway Protocol (EGP) was replaced by a new protocol, the Border Gateway Protocol (BGP). This provided a meshed topology for the Internet and reduced the centric architecture which ARPANET had emphasized. In 1994, Classless Inter-Domain Routing (CIDR) was introduced to support better conservation of address space which allowed use of route aggregation to decrease the size of routing tables.
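
As a small illustration of the aggregation idea behind CIDR (a sketch of the addressing concept, not of BGP itself), the Python snippet below uses the standard library's ipaddress module to collapse four contiguous example prefixes into a single shorter announcement; the prefixes are arbitrary documentation addresses.

```python
import ipaddress

# Four contiguous /26 prefixes that a router could otherwise carry
# as separate entries in its routing table.
routes = [
    ipaddress.ip_network("198.51.100.0/26"),
    ipaddress.ip_network("198.51.100.64/26"),
    ipaddress.ip_network("198.51.100.128/26"),
    ipaddress.ip_network("198.51.100.192/26"),
]

# CIDR-style aggregation: the four entries collapse into one /24.
aggregated = list(ipaddress.collapse_addresses(routes))
print(aggregated)  # [IPv4Network('198.51.100.0/24')]
```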

Optical networking

To address the need for transmission capacity beyond that provided by radio, satellite and analog copper telephone lines, engineers developed optical communications systems based on fiber optic cables powered by lasers and optical amplifier techniques.

The concept of lasing arose from a 1917 paper by Albert Einstein, “On the Quantum Theory of Radiation.” Einstein expanded upon a dialog with Max Planck on how atoms absorb and emit light, part of a thought process that, with input from Erwin Schrödinger, Werner Heisenberg and others, gave rise to Quantum Mechanics. Specifically, in his quantum theory, Einstein mathematically determined that light could be generated not only by spontaneous emission, such as the light emitted by an incandescent light or the Sun, but also by stimulated emission.

Forty years later, on November 13, 1957, Columbia University physics student Gordon Gould first realized how to make light by stimulated emission through a process of optical amplification. He coined the term LASER for this technology—Light Amplification by Stimulated Emission of Radiation. Using Gould's light amplification method (patented as “Optically Pumped Laser Amplifier”), Theodore Maiman made the first working laser on May 16, 1960.

Gould co-founded Optelecom, Inc. in 1973 to commercialize his inventions in optical fiber telecommunications, just as Corning Glass was producing the first commercial fiber optic cable in small quantities. Optelecom configured its own fiber lasers and optical amplifiers into the first commercial optical communication systems, which it delivered to Chevron and the US Army Missile Defense. Three years later, GTE deployed the first optical telephone system in 1977 in Long Beach, California. By the early 1980s, optical networks powered by lasers, LED and optical amplifier equipment supplied by Bell Labs, NTT and Pirelli were used by select universities and long-distance telephone providers.

TCP/IP goes global (1980s)

CERN, the European Internet, the link to the Pacific and beyond

In early 1982, NORSAR and Peter Kirstein's group at University College London (UCL) left the ARPANET and began to use TCP/IP over SATNET. UCL provided access between the Internet and academic networks in the UK.

Between 1984 and 1988 CERN began installation and operation of TCP/IP to interconnect its major internal computer systems, workstations, PCs and an accelerator control system. CERN continued to operate a limited self-developed system (CERNET) internally and several incompatible (typically proprietary) network protocols externally. There was considerable resistance in Europe towards more widespread use of TCP/IP, and the CERN TCP/IP intranets remained isolated from the Internet until 1989 when a transatlantic connection to Cornell University was established.

In 1988, the first international connections to NSFNET were established by France's INRIA, and Piet Beertema at the Centrum Wiskunde & Informatica (CWI) in the Netherlands. Daniel Karrenberg, from CWI, visited Ben Segal, CERN's TCP/IP coordinator, looking for advice about the transition of EUnet, the European side of the UUCP Usenet network (much of which ran over X.25 links), over to TCP/IP. The previous year, Segal had met with Len Bosack from the then still small company Cisco about purchasing some TCP/IP routers for CERN, and Segal was able to give Karrenberg advice and forward him on to Cisco for the appropriate hardware. This expanded the European portion of the Internet across the existing UUCP networks. The NORDUnet connection to NSFNET was in place soon after, providing open access for university students in Denmark, Finland, Iceland, Norway, and Sweden. In January 1989, CERN opened its first external TCP/IP connections. This coincided with the creation of Réseaux IP Européens (RIPE), initially a group of IP network administrators who met regularly to carry out coordination work together. Later, in 1992, RIPE was formally registered as a cooperative in Amsterdam.

In 1991, JANET, the UK national research and education network, adopted Internet Protocol on the existing network. The same year, Dai Davies introduced Internet technology into the pan-European NREN, EuropaNet, which was built on the X.25 protocol. The European Academic and Research Network (EARN) and RARE adopted IP around the same time, and the European Internet backbone EBONE became operational in 1992.

At the same time as the rise of internetworking in Europe, ad hoc networking to ARPA and between Australian universities formed, based on various technologies such as X.25 and UUCPNet. These were limited in their connection to the global networks, due to the cost of making individual international UUCP dial-up or X.25 connections. In 1989, Australian universities joined the push towards using IP protocols to unify their networking infrastructures. AARNet was formed in 1989 by the Australian Vice-Chancellors' Committee and provided a dedicated IP-based network for Australia. New Zealand's first international Internet connection was established the same year.

In May 1982 South Korea set up a two-node domestic TCP/IP network, adding a third node the following year. Japan, which had built the UUCP-based network JUNET in 1984, connected to NSFNET in 1989 marking the spread of the Internet to Asia. It hosted the annual meeting of the Internet Society, INET'92, in Kobe. Singapore developed TECHNET in 1990, and Thailand gained a global Internet connection between Chulalongkorn University and UUNET in 1992.

The early global "digital divide" emerges

Fixed broadband Internet subscriptions in 2012
as a percentage of a country's population
 
Mobile broadband Internet subscriptions in 2012
as a percentage of a country's population

While developed countries with technological infrastructures were joining the Internet, developing countries began to experience a digital divide separating them from the Internet. On an essentially continental basis, they are building organizations for Internet resource administration and sharing operational experience, as more and more transmission facilities go into place.

Africa

At the beginning of the 1990s, African countries relied upon X.25 IPSS and 2400 baud modem UUCP links for international and internetwork computer communications.

In August 1995, InfoMail Uganda, Ltd., a privately held firm in Kampala now known as InfoCom, and NSN Network Services of Avon, Colorado, sold in 1997 and now known as Clear Channel Satellite, established Africa's first native TCP/IP high-speed satellite Internet services. The data connection was originally carried by a C-Band RSCC Russian satellite which connected InfoMail's Kampala offices directly to NSN's MAE-West point of presence using a private network from NSN's leased ground station in New Jersey. InfoCom's first satellite connection was just 64 kbit/s, serving a Sun host computer and twelve US Robotics dial-up modems.

In 1996, a USAID funded project, the Leland Initiative, started work on developing full Internet connectivity for the continent. Guinea, Mozambique, Madagascar and Rwanda gained satellite earth stations in 1997, followed by Ivory Coast and Benin in 1998.

Africa is building an Internet infrastructure. AFRINIC, headquartered in Mauritius, manages IP address allocation for the continent. As in the other Internet regions, there is an operational forum, the Internet Community of Operational Networking Specialists.

There are many programs to provide high-performance transmission plant, and the western and southern coasts have undersea optical cable. High-speed cables join North Africa and the Horn of Africa to intercontinental cable systems. Undersea cable development is slower for East Africa; the original joint effort between New Partnership for Africa's Development (NEPAD) and the East Africa Submarine System (Eassy) has broken off and may become two efforts.

Asia and Oceania

The Asia Pacific Network Information Centre (APNIC), headquartered in Australia, manages IP address allocation for the continent. APNIC sponsors an operational forum, the Asia-Pacific Regional Internet Conference on Operational Technologies (APRICOT).

South Korea's first Internet system, the System Development Network (SDN), began operation on 15 May 1982. SDN was connected to the rest of the world in August 1983 using UUCP (Unix-to-Unix Copy); connected to CSNET in December 1984; and formally connected to the U.S. Internet in 1990. VDSL, a last mile technology developed in the 1990s by NextLevel Communications, connected corporate and consumer copper-based telephone lines to the Internet in South Korea.

In 1991, the People's Republic of China saw its first TCP/IP college network, Tsinghua University's TUNET. The PRC went on to make its first global Internet connection in 1994, between the Beijing Electro-Spectrometer Collaboration and Stanford University's Linear Accelerator Center. However, China went on to implement its own digital divide by implementing a country-wide content filter.

Latin America

As with the other regions, the Latin American and Caribbean Internet Addresses Registry (LACNIC) manages the IP address space and other resources for its area. LACNIC, headquartered in Uruguay, operates DNS root, reverse DNS, and other key services.

1989–2004: Rise of the global Internet, Web 1.0

Initially, as with its predecessor networks, the system that would evolve into the Internet was primarily for government and government body use. Although commercial use was forbidden, the exact definition of commercial use was unclear and subjective. UUCPNet and the X.25 IPSS had no such restrictions, which would eventually see the official barring of UUCPNet use of ARPANET and NSFNET connections. (Some UUCP links still remained connected to these networks, however, as administrators turned a blind eye to their operation.)

Number of Internet hosts worldwide: 1969–present
Source: Internet Systems Consortium.

As a result, during the late 1980s, the first Internet service provider (ISP) companies were formed. Companies like PSINet, UUNET, Netcom, and Portal Software were formed to provide service to the regional research networks and provide alternate network access, UUCP-based email and Usenet News to the public. The first commercial dialup ISP in the United States was The World, which opened in 1989.

In 1992, the U.S. Congress passed the Scientific and Advanced-Technology Act, 42 U.S.C. § 1862(g), which allowed NSF to support access by the research and education communities to computer networks which were not used exclusively for research and education purposes, thus permitting NSFNET to interconnect with commercial networks. This caused controversy within the research and education community, who were concerned commercial use of the network might lead to an Internet that was less responsive to their needs, and within the community of commercial network providers, who felt that government subsidies were giving an unfair advantage to some organizations.

By 1990, ARPANET's goals had been fulfilled, new networking technologies exceeded the original scope, and the project came to a close. New network service providers including PSINet, Alternet, CERFNet, ANS CO+RE, and many others were offering network access to commercial customers. NSFNET was no longer the de facto backbone and exchange point of the Internet. The Commercial Internet eXchange (CIX), Metropolitan Area Exchanges (MAEs), and later Network Access Points (NAPs) were becoming the primary interconnections between many networks. The final restrictions on carrying commercial traffic ended on April 30, 1995, when the National Science Foundation ended its sponsorship of the NSFNET Backbone Service. NSF provided initial support for the NAPs and interim support to help the regional research and education networks transition to commercial ISPs. NSF also sponsored the very high speed Backbone Network Service (vBNS), which continued to provide support for the supercomputing centers and research and education in the United States.

Use in wider society

Stamped envelope of Russian Post issued in 1993 with stamp and graphics dedicated to first Russian underwater digital optic cable laid in 1993 by Rostelecom from Kingisepp to Copenhagen

During the first decade or so of the public Internet, the immense changes it would eventually enable in the 2000s were still nascent. To provide context for this period: mobile cellular devices ("smartphones" and other cellular devices), which today provide near-universal access, were used for business and were not a routine household item owned by parents and children worldwide. Social media in the modern sense had yet to come into existence, laptops were bulky, and most households did not have computers. Data rates were slow and most people lacked means to record or digitize video; media storage was transitioning slowly from analog tape to digital optical discs (DVD and, to an extent still, floppy disc to CD). Enabling technologies used from the early 2000s such as PHP, modern JavaScript and Java, technologies such as AJAX, HTML 4 (and its emphasis on CSS), and various software frameworks, which enabled and simplified the speed of web development, largely awaited invention and their eventual widespread adoption.

The Internet was widely used for mailing lists, emails, e-commerce and early popular online shopping (Amazon and eBay for example), online forums and bulletin boards, and personal websites and blogs, and use was growing rapidly, but by more modern standards the systems used were static and lacked widespread social engagement. A number of events in the early 2000s were needed for it to develop gradually from a communications technology into a key part of global society's infrastructure.

Typical design elements of these "Web 1.0" era websites included: static pages instead of dynamic HTML; content served from filesystems instead of relational databases; pages built using Server Side Includes or CGI instead of a web application written in a dynamic programming language; HTML 3.2-era structures such as frames and tables to create page layouts; online guestbooks; overuse of GIF buttons and similar small graphics promoting particular items; and HTML forms sent via email. (Support for server-side scripting was rare on shared servers, so the usual feedback mechanism was via email, using mailto forms and the visitor's email program.)
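
As a loose sketch of how such CGI-driven pages worked (a hypothetical example; sites of the period more commonly used Perl or C, and the form field name here is invented), the script below is a complete Python CGI program: the web server runs it once per request and returns whatever it prints to the browser.

```python
#!/usr/bin/env python3
# Minimal "Web 1.0"-style CGI script: read a query-string field and
# emit an HTML page. Headers are separated from the body by a blank line.
import os
from urllib.parse import parse_qs

query = parse_qs(os.environ.get("QUERY_STRING", ""))
name = query.get("name", ["visitor"])[0]  # hypothetical form field

print("Content-Type: text/html")
print()
print(f"<html><body><h1>Guestbook</h1><p>Hello, {name}!</p></body></html>")
```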

During the period 1997 to 2001, the first speculative investment bubble related to the Internet took place, in which "dot-com" companies (referring to the ".com" top-level domain used by businesses) were propelled to exceedingly high valuations as investors rapidly stoked stock values, followed by a market crash: the first dot-com bubble. However, this only temporarily slowed enthusiasm and growth, which quickly recovered and continued to grow.

With the call to Web 2.0 following soon afterward, the period of the Internet up to around 2004–2005 was retrospectively named and described by some as Web 1.0.

IPv6

IPv4 uses 32-bit addresses, which limits the address space to 2^32 addresses, i.e. 4,294,967,296 addresses. The last available blocks of IPv4 addresses were assigned in January 2011. IPv4 is being replaced by its successor, called "IPv6", which uses 128-bit addresses, providing 2^128 addresses, i.e. 340,282,366,920,938,463,463,374,607,431,768,211,456 (about 3.4×10^38). This is a vastly increased address space. The shift to IPv6 is expected to take many years, decades, or perhaps longer, to complete, since there were approximately four billion devices addressed with IPv4 when the shift began.
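
The address-space arithmetic above, and the two address formats, can be checked with a short Python snippet using the standard ipaddress module; the example addresses are arbitrary.

```python
import ipaddress

print(2 ** 32)    # 4294967296 possible IPv4 addresses
print(2 ** 128)   # 340282366920938463463374607431768211456 possible IPv6 addresses

v4 = ipaddress.ip_address("203.0.113.7")   # example IPv4 address (32 bits)
v6 = ipaddress.ip_address("2001:db8::1")   # example IPv6 address (128 bits)
print(v4.version, v6.version)              # 4 6
```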

2005–present: Web 2.0, global ubiquity, social media

The changes that would propel the Internet into its place as a social system took place during a relatively short period of no more than five years, from around 2005 to 2010. They included:

  • The call to "Web 2.0" in 2004 (first suggested in 1999),
  • Accelerating adoption and commoditization among households of, and familiarity with, the necessary hardware (such as computers).
  • Accelerating storage technology and data access speeds – hard drives emerged, took over from far smaller, slower floppy discs, and grew from megabytes to gigabytes (and by around 2010, terabytes), RAM from hundreds of kilobytes to gigabytes as typical amounts on a system, and Ethernet, the enabling technology for TCP/IP, moved from common speeds of kilobits to tens of megabits per second, to gigabits per second.
  • High speed Internet and wider coverage of data connections, at lower prices, allowing larger traffic rates, more reliable simpler traffic, and traffic from more locations,
  • The gradually accelerating perception of the ability of computers to create new means and approaches to communication, the emergence of social media and websites such as Twitter and Facebook to their later prominence, and global collaborations such as Wikipedia (which existed before but gained prominence as a result),
  • The mobile revolution, which provided access to the Internet to much of human society of all ages, in their daily lives, and allowed them to share, discuss, and continually update, inquire, and respond.
  • Non-volatile RAM rapidly grew in size and reliability, and decreased in price, becoming a commodity capable of enabling high levels of computing activity on these small handheld devices as well as solid-state drives (SSD).
  • An emphasis on power efficient processor and device design, rather than purely high processing power; one of the beneficiaries of this was ARM, a British company which had focused since the 1980s on powerful but low cost simple microprocessors. ARM architecture rapidly gained dominance in the market for mobile and embedded devices.

The term "Web 2.0" describes websites that emphasize user-generated content (including user-to-user interaction), usability, and interoperability. It first appeared in a January 1999 article called "Fragmented Future" written by Darcy DiNucci, a consultant on electronic information design, where she wrote:

"The Web we know now, which loads into a browser window in essentially static screenfuls, is only an embryo of the Web to come. The first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo might develop. The Web will be understood not as screenfuls of text and graphics but as a transport mechanism, the ether through which interactivity happens. It will [...] appear on your computer screen, [...] on your TV set [...] your car dashboard [...] your cell phone [...] hand-held game machines [...] maybe even your microwave oven."

The term resurfaced during 2002–2004, and gained prominence in late 2004 following presentations by Tim O'Reilly and Dale Dougherty at the first Web 2.0 Conference. In their opening remarks, John Battelle and Tim O'Reilly outlined their definition of the "Web as Platform", where software applications are built upon the Web as opposed to upon the desktop. The unique aspect of this migration, they argued, is that "customers are building your business for you". They argued that the activities of users generating content (in the form of ideas, text, videos, or pictures) could be "harnessed" to create value.

Web 2.0 does not refer to an update to any technical specification, but rather to cumulative changes in the way Web pages are made and used. Web 2.0 describes an approach in which sites focus substantially upon allowing users to interact and collaborate with each other in a social media dialogue as creators of user-generated content in a virtual community, in contrast to Web sites where people are limited to the passive viewing of content. Examples of Web 2.0 include social networking services, blogs, wikis, folksonomies, video sharing sites, hosted services, Web applications, and mashups. Terry Flew, in his 3rd edition of New Media, described what he believed to characterize the differences between Web 1.0 and Web 2.0:

"[The] move from personal websites to blogs and blog site aggregation, from publishing to participation, from web content as the outcome of large up-front investment to an ongoing and interactive process, and from content management systems to links based on tagging (folksonomy)".

This era saw several household names gain prominence through their community-oriented operation – YouTube, Twitter, Facebook, Reddit and Wikipedia being some examples.

The mobile revolution

The process of change that generally coincided with "Web 2.0" was itself greatly accelerated and transformed only a short time later by the increasing growth in mobile devices. This mobile revolution meant that computers in the form of smartphones became something many people used, took with them everywhere, communicated with, used for photographs and videos they instantly shared or to shop or seek information "on the move" – and used socially, as opposed to items on a desk at home or just used for work.

Location-based services, services using location and other sensor information, and crowdsourcing (frequently but not always location based) became common, with posts tagged by location, or websites and services becoming location aware. Mobile-targeted websites (such as "m.website.com") became common, designed especially for the new devices used. Netbooks, ultrabooks, widespread 4G and Wi-Fi, and mobile chips capable of running at nearly the power of desktops from not many years before on far lower power usage, became enablers of this stage of Internet development, and the term "App" emerged (short for "Application program" or "Program"), as did the "App store".

This "mobile revolution" has allowed for people to have a nearly unlimited amount of information at their fingertips. With the ability to access the internet from cell phones came a change in the way we consume media. In fact, looking at media consumption statistics, over half of media consumption between those aged 18 and 34 were using a smartphone.

Networking in outer space

The first Internet link into low Earth orbit was established on January 22, 2010, when astronaut T. J. Creamer posted the first unassisted update to his Twitter account from the International Space Station, marking the extension of the Internet into space. (Astronauts at the ISS had used email and Twitter before, but these messages had been relayed to the ground through a NASA data link before being posted by a human proxy.) This personal Web access, which NASA calls the Crew Support LAN, uses the space station's high-speed Ku band microwave link. To surf the Web, astronauts can use a station laptop computer to control a desktop computer on Earth, and they can talk to their families and friends on Earth using Voice over IP equipment.

Communication with spacecraft beyond Earth orbit has traditionally been over point-to-point links through the Deep Space Network. Each such data link must be manually scheduled and configured. In the late 1990s NASA and Google began working on a new network protocol, Delay-tolerant networking (DTN), which automates this process, allows networking of spaceborne transmission nodes, and takes into account that spacecraft can temporarily lose contact because they move behind the Moon or planets, or because space weather disrupts the connection. Under such conditions, DTN retransmits data packets instead of dropping them, as the standard TCP/IP Internet Protocol does. NASA conducted the first field test of what it calls the "deep space internet" in November 2008. Testing of DTN-based communications between the International Space Station and Earth (now termed Disruption-Tolerant Networking) has been ongoing since March 2009, and is scheduled to continue until March 2014.

This network technology is supposed to ultimately enable missions that involve multiple spacecraft where reliable inter-vessel communication might take precedence over vessel-to-Earth downlinks. According to a February 2011 statement by Google's Vint Cerf, the so-called "Bundle protocols" have been uploaded to NASA's EPOXI mission spacecraft (which is in orbit around the Sun) and communication with Earth has been tested at a distance of approximately 80 light seconds.

Internet governance

As a globally distributed network of voluntarily interconnected autonomous networks, the Internet operates without a central governing body. Each constituent network chooses the technologies and protocols it deploys from the technical standards that are developed by the Internet Engineering Task Force (IETF). However, successful interoperation of many networks requires certain parameters that must be common throughout the network. For managing such parameters, the Internet Assigned Numbers Authority (IANA) oversees the allocation and assignment of various technical identifiers. In addition, the Internet Corporation for Assigned Names and Numbers (ICANN) provides oversight and coordination for the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System.

NIC, InterNIC, IANA, and ICANN

The IANA function was originally performed by USC Information Sciences Institute (ISI), and it delegated portions of this responsibility with respect to numeric network and autonomous system identifiers to the Network Information Center (NIC) at Stanford Research Institute (SRI International) in Menlo Park, California. ISI's Jonathan Postel managed the IANA, served as RFC Editor and performed other key roles until his premature death in 1998.

As the early ARPANET grew, hosts were referred to by names, and a HOSTS.TXT file would be distributed from SRI International to each host on the network. As the network grew, this became cumbersome. A technical solution came in the form of the Domain Name System, created by ISI's Paul Mockapetris in 1983. The Defense Data Network—Network Information Center (DDN-NIC) at SRI handled all registration services, including the top-level domains (TLDs) of .mil, .gov, .edu, .org, .net, .com and .us, root nameserver administration and Internet number assignments under a United States Department of Defense contract. In 1991, the Defense Information Systems Agency (DISA) awarded the administration and maintenance of DDN-NIC (managed by SRI up until this point) to Government Systems, Inc., who subcontracted it to the small private-sector Network Solutions, Inc.
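Today a hostname is resolved through the distributed DNS hierarchy rather than a centrally distributed HOSTS.TXT file. A minimal sketch using Python's standard library (example.com is a documentation-reserved name used purely for illustration):

    import socket

    # Ask the operating system's resolver, which in turn queries DNS,
    # for the addresses associated with a hostname.
    for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 80,
                                                        proto=socket.IPPROTO_TCP):
        print(family.name, sockaddr[0])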

The increasing cultural diversity of the Internet also posed administrative challenges for centralized management of the IP addresses. In October 1992, the Internet Engineering Task Force (IETF) published RFC 1366, which described the "growth of the Internet and its increasing globalization" and set out the basis for an evolution of the IP registry process, based on a regionally distributed registry model. This document stressed the need for a single Internet number registry to exist in each geographical region of the world (which would be of "continental dimensions"). Registries would be "unbiased and widely recognized by network providers and subscribers" within their region. The RIPE Network Coordination Centre (RIPE NCC) was established as the first RIR in May 1992. The second RIR, the Asia Pacific Network Information Centre (APNIC), was established in Tokyo in 1993, as a pilot project of the Asia Pacific Networking Group.

Since at this point in history most of the growth on the Internet was coming from non-military sources, it was decided that the Department of Defense would no longer fund registration services outside of the .mil TLD. In 1993 the U.S. National Science Foundation, after a competitive bidding process in 1992, created the InterNIC to manage the allocations of addresses and management of the address databases, and awarded the contract to three organizations. Registration Services would be provided by Network Solutions; Directory and Database Services would be provided by AT&T; and Information Services would be provided by General Atomics.

Over time, after consultation with the IANA, the IETF, RIPE NCC, APNIC, and the Federal Networking Council (FNC), the decision was made to separate the management of domain names from the management of IP numbers. Following the examples of RIPE NCC and APNIC, it was recommended that management of the IP address space then administered by the InterNIC should be under the control of those that use it, specifically the ISPs, end-user organizations, corporate entities, universities, and individuals. As a result, the American Registry for Internet Numbers (ARIN) was established in December 1997 as an independent, not-for-profit corporation by direction of the National Science Foundation and became the third Regional Internet Registry.

In 1998, both the IANA and remaining DNS-related InterNIC functions were reorganized under the control of ICANN, a California non-profit corporation contracted by the United States Department of Commerce to manage a number of Internet-related tasks. As these tasks involved technical coordination for two principal Internet name spaces (DNS names and IP addresses) created by the IETF, ICANN also signed a memorandum of understanding with the IAB to define the technical work to be carried out by the Internet Assigned Numbers Authority. The management of Internet address space remained with the regional Internet registries, which collectively were defined as a supporting organization within the ICANN structure. ICANN provides central coordination for the DNS system, including policy coordination for the split registry/registrar system, with competition among registry service providers to serve each top-level domain and multiple competing registrars offering DNS services to end-users.

Internet Engineering Task Force

The Internet Engineering Task Force (IETF) is the largest and most visible of several loosely related ad-hoc groups that provide technical direction for the Internet, including the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), and the Internet Research Task Force (IRTF).

The IETF is a loosely self-organized group of international volunteers who contribute to the engineering and evolution of Internet technologies. It is the principal body engaged in the development of new Internet standard specifications. Much of the work of the IETF is organized into Working Groups. Standardization efforts of the Working Groups are often adopted by the Internet community, but the IETF does not control or patrol the Internet.

The IETF grew out of quarterly meetings with U.S. government-funded researchers, starting in January 1986. Non-government representatives were invited by the fourth IETF meeting in October 1986. The concept of Working Groups was introduced at the fifth meeting in February 1987. The seventh meeting, in July 1987, was the first with more than one hundred attendees. In 1992, the Internet Society, a professional membership society, was formed and the IETF began to operate under it as an independent international standards body. The first IETF meeting outside of the United States was held in Amsterdam, the Netherlands, in July 1993. Today, the IETF meets three times per year and attendance has been as high as approximately 2,000 participants. Typically one in three IETF meetings is held in Europe or Asia. The proportion of non-US attendees is typically around 50%, even at meetings held in the United States.

The IETF is not a legal entity, has no governing board, no members, and no dues. The closest status resembling membership is being on an IETF or Working Group mailing list. IETF volunteers come from all over the world and from many different parts of the Internet community. The IETF works closely with and under the supervision of the Internet Engineering Steering Group (IESG) and the Internet Architecture Board (IAB). The Internet Research Task Force (IRTF) and the Internet Research Steering Group (IRSG), peer activities to the IETF and IESG under the general supervision of the IAB, focus on longer-term research issues.

Request for Comments

Request for Comments (RFCs) are the main documentation for the work of the IAB, IESG, IETF, and IRTF. RFC 1, "Host Software", was written by Steve Crocker at UCLA in April 1969, well before the IETF was created. Originally they were technical memos documenting aspects of ARPANET development and were edited by Jon Postel, the first RFC Editor.

RFCs cover a wide range of information, from proposed standards, draft standards, full standards, best practices, and experimental protocols to history and other informational topics. RFCs can be written by individuals or informal groups of individuals, but many are the product of a more formal Working Group. Drafts are submitted to the IESG either by individuals or by the Working Group Chair. An RFC Editor, appointed by the IAB, separate from IANA, and working in conjunction with the IESG, receives drafts from the IESG and edits, formats, and publishes them. Once an RFC is published, it is never revised. If the standard it describes changes or its information becomes obsolete, the revised standard or updated information will be re-published as a new RFC that "obsoletes" the original.

The Internet Society

The Internet Society (ISOC) is an international, nonprofit organization founded during 1992 "to assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". With offices near Washington, DC, USA, and in Geneva, Switzerland, ISOC has a membership base comprising more than 80 organizational and more than 50,000 individual members. Members also form "chapters" based on either common geographical location or special interests. There are currently more than 90 chapters around the world.

ISOC provides financial and organizational support to and promotes the work of the standards-setting bodies for which it is the organizational home: the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), and the Internet Research Task Force (IRTF). ISOC also promotes understanding and appreciation of the Internet model of open, transparent processes and consensus-based decision-making.

Globalization and Internet governance in the 21st century

Since the 1990s, the Internet's governance and organization have been of global importance to governments, commerce, civil society, and individuals. The organizations which held control of certain technical aspects of the Internet were the successors of the old ARPANET oversight and the current decision-makers in the day-to-day technical aspects of the network. While recognized as the administrators of certain aspects of the Internet, their roles and their decision-making authority are limited and subject to increasing international scrutiny and increasing objections. These objections led ICANN to end its relationship with the University of Southern California in 2000 and, in September 2009, to gain autonomy from the US government through the ending of its longstanding agreements, although some contractual obligations with the U.S. Department of Commerce continued. Finally, on October 1, 2016, ICANN ended its contract with the United States Department of Commerce National Telecommunications and Information Administration (NTIA), allowing oversight to pass to the global Internet community.

The IETF, with financial and organizational support from the Internet Society, continues to serve as the Internet's ad-hoc standards body and issues Request for Comments.

In November 2005, the World Summit on the Information Society, held in Tunis, called for an Internet Governance Forum (IGF) to be convened by the United Nations Secretary-General. The IGF opened an ongoing, non-binding conversation among stakeholders representing governments, the private sector, civil society, and the technical and academic communities about the future of Internet governance. The first IGF meeting was held in October/November 2006, with follow-up meetings held annually thereafter. Since WSIS, the term "Internet governance" has been broadened beyond narrow technical concerns to include a wider range of Internet-related policy issues.

Tim Berners-Lee, inventor of the web, had become concerned about threats to the web's future, and in November 2009 at the IGF in Washington DC he launched the World Wide Web Foundation (WWWF) to campaign to make the web a safe and empowering tool for the good of humanity, with access for all. In November 2019 at the IGF in Berlin, Berners-Lee and the WWWF went on to launch the Contract for the Web, a campaign initiative to persuade governments, companies and citizens to commit to nine principles to stop "misuse", with the warning: "If we don't act now - and act together - to prevent the web being misused by those who want to exploit, divide and undermine, we are at risk of squandering" (its potential for good).

Politicization of the Internet

Due to its prominence and immediacy as an effective means of mass communication, the Internet has also become more politicized as it has grown. This has led, in turn, to discourses and activities that would once have taken place in other ways migrating to being mediated by the Internet.

Examples include political activities such as public protest and canvassing of support and votes, but also:

  • The spreading of ideas and opinions;
  • Recruitment of followers, and "coming together" of members of the public, for ideas, products, and causes;
  • Providing and widely distributing and sharing information that might be deemed sensitive or that relates to whistleblowing (and efforts by specific countries to prevent this by censorship);
  • Criminal activity and terrorism (and resulting law enforcement use, together with its facilitation by mass surveillance);
  • Politically motivated fake news.

Net neutrality

On April 23, 2014, the Federal Communications Commission (FCC) was reported to be considering a new rule that would permit Internet service providers to offer content providers a faster track to send content, thus reversing its earlier net neutrality position. A possible solution to net neutrality concerns may be municipal broadband, according to Professor Susan Crawford, a legal and technology expert at Harvard Law School. On May 15, 2014, the FCC decided to consider two options regarding Internet services: first, permit fast and slow broadband lanes, thereby compromising net neutrality; and second, reclassify broadband as a telecommunications service, thereby preserving net neutrality. On November 10, 2014, President Obama recommended the FCC reclassify broadband Internet service as a telecommunications service in order to preserve net neutrality. On January 16, 2015, Republicans presented legislation, in the form of a U.S. Congress HR discussion draft bill, that made concessions to net neutrality but prohibited the FCC from accomplishing the goal or enacting any further regulation affecting Internet service providers (ISPs). On January 31, 2015, AP News reported that the FCC would present the notion of applying ("with some caveats") Title II (common carrier) of the Communications Act of 1934 to the Internet in a vote expected on February 26, 2015. Adoption of this notion would reclassify Internet service from an information service to a telecommunications service and, according to Tom Wheeler, chairman of the FCC, ensure net neutrality. The FCC was expected to enforce net neutrality in its vote, according to The New York Times.

On February 26, 2015, the FCC ruled in favor of net neutrality by applying Title II (common carrier) of the Communications Act of 1934 and Section 706 of the Telecommunications Act of 1996 to the Internet. The FCC chairman, Tom Wheeler, commented, "This is no more a plan to regulate the Internet than the First Amendment is a plan to regulate free speech. They both stand for the same concept."

On March 12, 2015, the FCC released the specific details of the net neutrality rules. On April 13, 2015, the FCC published the final rule on its new "Net Neutrality" regulations.

On December 14, 2017, the FCC repealed its March 12, 2015 net neutrality rules by a 3–2 vote.

Use and culture

Email and Usenet

E-mail has often been called the killer application of the Internet. It predates the Internet, and was a crucial tool in creating it. Email started in 1965 as a way for multiple users of a time-sharing mainframe computer to communicate. Although the history is undocumented, among the first systems to have such a facility were the System Development Corporation (SDC) Q32 and the Compatible Time-Sharing System (CTSS) at MIT.

The ARPANET computer network made a large contribution to the evolution of electronic mail. Experimental inter-system mail transfer began on the ARPANET shortly after its creation. In 1971 Ray Tomlinson created what was to become the standard Internet electronic mail addressing format, using the @ sign to separate mailbox names from host names.

A number of protocols were developed to deliver messages among groups of time-sharing computers over alternative transmission systems, such as UUCP and IBM's VNET email system. Email could be passed this way between a number of networks, including ARPANET, BITNET and NSFNET, as well as to hosts connected directly to other sites via UUCP. See the history of the SMTP protocol.
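As a rough modern illustration of message delivery, the sketch below hands a message to an SMTP relay using Python's standard library; the relay hostname and addresses are placeholders rather than real systems, and early ARPANET mail predated SMTP itself.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.com"   # mailbox name @ host name, per Tomlinson's format
    msg["To"] = "bob@example.org"
    msg["Subject"] = "Hello over SMTP"
    msg.set_content("A short test message.")

    # Hand the message to a (hypothetical) relay, which forwards it toward
    # the destination host named after the @ sign.
    with smtplib.SMTP("mail.example.com", 25) as smtp:
        smtp.send_message(msg)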

In addition, UUCP allowed the publication of text files that could be read by many others. The News software developed by Steve Daniel and Tom Truscott in 1979 was used to distribute news and bulletin board-like messages. This quickly grew into discussion groups, known as newsgroups, on a wide range of topics. On ARPANET and NSFNET similar discussion groups would form via mailing lists, discussing both technical issues and more culturally focused topics (such as science fiction, discussed on the sflovers mailing list).

During the early years of the Internet, email and similar mechanisms were also fundamental in allowing people to access resources that were otherwise unavailable to them because of the absence of online connectivity. UUCP was often used to distribute files through the 'alt.binaries' groups. Also, FTP e-mail gateways allowed people who lived outside the US and Europe to download files using FTP commands written inside email messages. The file was encoded, broken into pieces and sent by email; the receiver had to reassemble and decode it later, and it was the only way for people living overseas to download items such as early Linux versions using the slow dial-up connections available at the time. After the popularization of the Web and the HTTP protocol, such tools were slowly abandoned.
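The gateway mechanism amounted to encoding a binary file as text, splitting it into mail-sized pieces, and reversing the process at the receiving end. Below is a small sketch of that idea, using base64 in place of the uuencoding typical of the period; the chunk size and payload are arbitrary.

    import base64

    def split_for_email(data: bytes, chunk_size: int = 45000):
        """Encode a file as text and cut it into email-sized pieces."""
        encoded = base64.b64encode(data).decode("ascii")
        return [encoded[i:i + chunk_size] for i in range(0, len(encoded), chunk_size)]

    def reassemble(chunks):
        """Receiver side: join the pieces and decode the original bytes."""
        return base64.b64decode("".join(chunks))

    payload = b"pretend this is a source tarball"
    pieces = split_for_email(payload, chunk_size=16)
    assert reassemble(pieces) == payload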

File sharing

Resource or file sharing has been an important activity on computer networks from well before the Internet was established and was supported in a variety of ways including bulletin board systems (1978), Usenet (1980), Kermit (1981), and many others. The File Transfer Protocol (FTP) for use on the Internet was standardized in 1985 and is still in use today. A variety of tools were developed to aid the use of FTP by helping users discover files they might want to transfer, including the Wide Area Information Server (WAIS) in 1991, Gopher in 1991, Archie in 1991, Veronica in 1992, Jughead in 1993, Internet Relay Chat (IRC) in 1988, and eventually the World Wide Web (WWW) in 1991 with Web directories and Web search engines.

In 1999, Napster became the first peer-to-peer file sharing system. Napster used a central server for indexing and peer discovery, but the storage and transfer of files was decentralized. A variety of peer-to-peer file sharing programs and services with different levels of decentralization and anonymity followed, including: Gnutella, eDonkey2000, and Freenet in 2000, FastTrack, Kazaa, Limewire, and BitTorrent in 2001, and Poisoned in 2003.
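Napster's hybrid design, a central index with decentralized transfers, can be sketched as follows. The class and peer names are invented for illustration and do not reflect Napster's actual protocol.

    from collections import defaultdict

    class CentralIndex:
        """Central server that only records which peers hold which files;
        the files themselves are exchanged directly between peers."""

        def __init__(self):
            self.files_to_peers = defaultdict(set)

        def register(self, peer, filenames):
            for name in filenames:
                self.files_to_peers[name].add(peer)

        def search(self, filename):
            # The index answers "who has it"; the download is peer-to-peer.
            return self.files_to_peers.get(filename, set())

    index = CentralIndex()
    index.register("peer-a", ["song.mp3"])
    index.register("peer-b", ["song.mp3", "other.mp3"])
    print(index.search("song.mp3"))   # {'peer-a', 'peer-b'}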

All of these tools are general purpose and can be used to share a wide variety of content, but sharing of music files, software, and later movies and videos have been major uses. And while some of this sharing is legal, large portions are not. Lawsuits and other legal actions caused Napster in 2001, eDonkey2000 in 2005, Kazaa in 2006, and Limewire in 2010 to shut down or refocus their efforts. The Pirate Bay, founded in Sweden in 2003, continues to operate despite a trial and appeal in 2009 and 2010 that resulted in jail terms and large fines for several of its founders. File sharing remains contentious and controversial, with charges of theft of intellectual property on one hand and charges of censorship on the other.

File hosting services

File hosting allowed people to expand beyond their computer's hard drives and "host" their files on a server. Most file hosting services offer free storage, as well as larger amounts of storage for a fee. These services have greatly expanded the Internet for business and personal use.

Google Drive, launched on April 24, 2012, has become the most popular file hosting service. Google Drive allows users to store, edit, and share files with themselves and other users. Not only does this application allow for file editing, hosting, and sharing, it also provides access to Google's own free office programs, such as Google Docs, Google Slides, and Google Sheets. The application has served as a useful tool for university professors and students, as well as anyone in need of cloud storage.

Dropbox, released in June 2007, is a similar file hosting service that allows users to keep all of their files in a folder on their computer, which is synced with Dropbox's servers. This differs from Google Drive in that it is not web-browser based. Dropbox now focuses on keeping workers and their files in sync.

Mega, which has over 200 million users, is an encrypted storage and communication service that offers users free and paid storage, with an emphasis on privacy. As three of the largest file hosting services, Google Drive, Dropbox, and Mega represent the core ideas and values of these services.

Online piracy

The earliest form of online piracy began with a P2P (peer-to-peer) music sharing service named Napster, launched in 1999. Sites like LimeWire, The Pirate Bay, and BitTorrent allowed anyone to engage in online piracy, sending ripples through the media industry and changing it as a whole.

Mobile phones and the Internet

Total global mobile data traffic reached 588 exabytes during 2020, a 150-fold increase from 3.86 exabytes/year in 2010. Most recently, smartphones accounted for 95% of this mobile data traffic, with video accounting for 66% by type of data. Mobile traffic travels by radio frequency to the closest cell phone tower and its base station, where the radio signal is converted into an optical signal that is transmitted over high-capacity optical networking systems that convey the information to data centers. The optical backbones enable much of this traffic as well as a host of emerging mobile services including the Internet of things, 3-D virtual reality, gaming and autonomous vehicles. The most popular mobile phone application is texting, with 2.1 trillion messages logged in 2020. The texting phenomenon began on December 3, 1992, when Neil Papworth sent the first text message, "Merry Christmas", over a commercial cell phone network to the CEO of Vodafone.

The first mobile phone with Internet connectivity was the Nokia 9000 Communicator, launched in Finland in 1996. The viability of Internet services on mobile phones was limited until prices came down from that model's level and network providers started to develop systems and services conveniently accessible on phones. NTT DoCoMo in Japan launched the first mobile Internet service, i-mode, in 1999, and this is considered the birth of mobile phone Internet services. In 2001, the mobile phone email system by Research in Motion (now BlackBerry Limited) for their BlackBerry product was launched in America. To make efficient use of the small screen, tiny keypad and one-handed operation typical of mobile phones, a specific document and networking model was created for mobile devices, the Wireless Application Protocol (WAP). Most mobile device Internet services operate using WAP. The growth of mobile phone services was initially a primarily Asian phenomenon, with Japan, South Korea and Taiwan all soon finding the majority of their Internet users accessing resources by phone rather than by PC. Developing countries followed, with India, South Africa, Kenya, the Philippines, and Pakistan all reporting that the majority of their domestic users accessed the Internet from a mobile phone rather than a PC. European and North American use of the Internet was influenced by a large installed base of personal computers, and the growth of mobile phone Internet access was more gradual, but had reached national penetration levels of 20–30% in most Western countries. The cross-over occurred in 2008, when more Internet access devices were mobile phones than personal computers. In many parts of the developing world, the ratio is as much as 10 mobile phone users to one PC user.

Growth in demand

Global Internet traffic continues to grow at a rapid rate, rising 23% from 2020 to 2021, when the number of active Internet users reached 4.66 billion people, representing half of the global population. Demand for data, and the capacity to satisfy it, were forecast to increase to 717 terabits per second in 2021. This capacity stems from the optical amplification and WDM systems that are the common basis of virtually every metro, regional, national, international and submarine telecommunications network. These optical networking systems have been installed throughout the 5 billion kilometers of fiber optic lines deployed around the world. Continued growth in traffic is expected for the foreseeable future, driven by a combination of new users, increased mobile phone adoption, machine-to-machine connections, connected homes, 5G devices and the burgeoning requirement for cloud and Internet services such as Amazon, Facebook, Apple Music and YouTube.

Historiography

There are nearly insurmountable problems in supplying a historiography of the Internet's development. The process of digitization represents a twofold challenge both for historiography in general and, in particular, for historical communication research. A sense of the difficulty in documenting early developments that led to the Internet can be gathered from the following quote:

"The Arpanet period is somewhat well documented because the corporation in charge – BBN – left a physical record. Moving into the NSFNET era, it became an extraordinarily decentralized process. The record exists in people's basements, in closets. ... So much of what happened was done verbally and on the basis of individual trust."

— Doug Gale (2007)

Multiple myeloma

  • Other names: Plasma cell myeloma, myelomatosis, Kahler's disease, myeloma
  • Image: An artist's 3D depiction of myeloma cells producing monoclonal proteins of varying types
  • Specialty: Hematology and oncology
  • Symptoms: Bone pain, fatigue
  • Complications: Amyloidosis, kidney problems, bone fractures, hyperviscosity syndrome, infections, anemia
  • Duration: Long term
  • Causes: Unknown
  • Risk factors: Obesity
  • Diagnostic method: Blood or urine tests, bone marrow biopsy, medical imaging
  • Treatment: Steroids, chemotherapy, thalidomide, stem cell transplant, bisphosphonates, radiation therapy
  • Prognosis: Five-year survival rate 54%; life expectancy 6 years (USA)
  • Frequency: 488,200 (affected during 2015)
  • Deaths: 101,100 (2015)

Multiple myeloma (MM), also known as plasma cell myeloma and simply myeloma, is a cancer of plasma cells, a type of white blood cell that normally produces antibodies. Often, no symptoms are noticed initially. As it progresses, bone pain, anemia, kidney dysfunction, and infections may occur. Complications may include amyloidosis.

The cause of multiple myeloma is unknown. Risk factors include obesity, radiation exposure, family history, and certain chemicals. Multiple myeloma may develop from monoclonal gammopathy of undetermined significance that progresses to smoldering myeloma. The abnormal plasma cells produce abnormal antibodies, which can cause kidney problems and overly thick blood. The plasma cells can also form a mass in the bone marrow or soft tissue. When one tumor is present, it is called a plasmacytoma; more than one is called multiple myeloma. Multiple myeloma is diagnosed based on blood or urine tests finding abnormal antibodies, bone marrow biopsy finding cancerous plasma cells, and medical imaging finding bone lesions. Another common finding is high blood calcium levels.

Multiple myeloma is considered treatable, but generally incurable. Remissions may be brought about with steroids, chemotherapy, targeted therapy, and stem cell transplant. Bisphosphonates and radiation therapy are sometimes used to reduce pain from bone lesions.

Globally, multiple myeloma affected 488,000 people and resulted in 101,100 deaths in 2015. In the United States, it develops in 6.5 per 100,000 people per year and 0.7% of people are affected at some point in their lives. It usually occurs around the age of 60 and is more common in men than women. It is uncommon before the age of 40. Without treatment, the median survival in the prechemotherapy era was about 7 months. After the introduction of chemotherapy, prognosis improved significantly with a median survival of 24 to 30 months and a 10-year survival rate of 3%. Even further improvements in prognosis have occurred because of the introduction of newer biologic therapies and better salvage options, with median survivals now exceeding 60 to 90 months. With current treatments, survival is usually 4–5 years. The five-year survival rate is about 54%. The word myeloma is from the Greek myelo- meaning "marrow" and -oma meaning "tumor".

Signs and symptoms

Because many organs can be affected by myeloma, the symptoms and signs vary greatly. Fatigue and bone pain are the most common symptoms at presentation. The CRAB criteria encompass the most common signs of multiple myeloma:

  • Calcium: serum calcium >0.25 mmol/l (>1 mg/dl) higher than the upper limit of normal or >2.75 mmol/l (>11 mg/dl)
  • Renal insufficiency: creatinine clearance <40 ml per minute or serum creatinine >177 µmol/l (>2 mg/dl)
  • Anemia: hemoglobin value >2 g/dl below the lower limit of normal, or a hemoglobin value <10 g/dl
  • Bone lesions: osteolytic lesions on skeletal radiography, CT, or PET/CT
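As a purely illustrative restatement of the thresholds listed above (not a clinical decision tool; the simplified cut-offs follow the text), the CRAB findings could be expressed as:

    def crab_findings(calcium_mmol_l, creatinine_clearance_ml_min,
                      hemoglobin_g_dl, lytic_lesions_present):
        """Return which of the simplified CRAB criteria quoted above are met."""
        findings = []
        if calcium_mmol_l > 2.75:
            findings.append("C: hypercalcemia")
        if creatinine_clearance_ml_min < 40:
            findings.append("R: renal insufficiency")
        if hemoglobin_g_dl < 10:
            findings.append("A: anemia")
        if lytic_lesions_present:
            findings.append("B: bone lesions")
        return findings

    print(crab_findings(2.9, 55, 9.2, True))
    # ['C: hypercalcemia', 'A: anemia', 'B: bone lesions']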

Bone pain

Illustration showing the most common site of bone lesions in vertebrae

Bone pain affects almost 70% of people with multiple myeloma and is one of the most common symptoms. Myeloma bone pain usually involves the spine and ribs, and worsens with activity. Persistent, localized pain may indicate a pathological bone fracture. Involvement of the vertebrae may lead to spinal cord compression or kyphosis. Myeloma bone disease is due to the overexpression of receptor activator for nuclear factor κ B ligand (RANKL) by bone marrow stroma. RANKL activates osteoclasts, which resorb bone. The resultant bone lesions are lytic (cause breakdown) in nature, and are best seen in plain radiographs, which may show "punched-out" resorptive lesions (including the "raindrop" appearance of the skull on radiography). The breakdown of bone also leads to the release of calcium ions into the blood, leading to hypercalcemia and its associated symptoms.

Anemia

The anemia found in myeloma is usually normocytic and normochromic. It results from the replacement of normal bone marrow by infiltrating tumor cells and inhibition of normal red blood cell production (hematopoiesis) by cytokines.

Impaired kidney function

Impaired kidney function may develop, either acutely or chronically, and with any degree of severity. The most common cause of kidney failure in multiple myeloma is proteins secreted by the malignant cells. Myeloma cells produce monoclonal proteins of varying types, most commonly immunoglobulins (antibodies) and free light chains, resulting in abnormally high levels of these proteins in the blood. Depending on the size of these proteins, they may be excreted through the kidneys. Kidneys can be damaged by the effects of these proteins or light chains. Increased bone resorption leads to hypercalcemia and causes nephrocalcinosis, thereby contributing to kidney failure. Amyloidosis is a distant third cause. People with amyloidosis have high levels of amyloid protein that can be excreted through the kidneys and cause damage to the kidneys and other organs.

Light chains produce myriad effects that can manifest as the Fanconi syndrome (type II kidney tubular acidosis).

Infection

The most common infections are pneumonias and pyelonephritis. Common pneumonia pathogens include S. pneumoniae, S. aureus, and K. pneumoniae, while common pathogens causing pyelonephritis include E. coli and other Gram-negative organisms. The greatest risk period for the occurrence of infection is in the initial few months after the start of chemotherapy. The increased risk of infection is due to immune deficiency. Although the total immunoglobulin level is typically elevated in multiple myeloma, the majority of the antibodies are ineffective monoclonal antibodies from the clonal plasma cell. A selected group of people with documented hypogammaglobulinemia may benefit from replacement immunoglobulin therapy to reduce the risk of infection.

Neurological symptoms

Some symptoms (e.g., weakness, confusion, and fatigue) may be due to anemia or hypercalcemia. Headache, visual changes, and retinopathy may be the result of hyperviscosity of the blood depending on the properties of the paraprotein. Finally, radicular pain, loss of bowel or bladder control (due to involvement of spinal cord leading to cord compression) or carpal tunnel syndrome, and other neuropathies (due to infiltration of peripheral nerves by amyloid) may occur. It may give rise to paraplegia in late-presenting cases.

When the disease is well-controlled, neurological symptoms may result from current treatments, some of which may cause peripheral neuropathy, manifesting itself as numbness or pain in the hands, feet, and lower legs.

Mouth

The initial symptoms may involve pain, numbness, swelling, expansion of the jaw, tooth mobility, and radiolucency. Multiple myeloma in the mouth can mimic common teeth problems like periapical abscess or periodontal abscess, gingivitis, periodontitis, or other gingival enlargement or masses.

Cause

The cause of multiple myeloma is generally unknown.

Risk factors

Studies have reported a familial predisposition to myeloma. Hyperphosphorylation of a number of proteins—the paratarg proteins—a tendency that is inherited in an autosomal dominant manner, appears to be a common mechanism in these families. This tendency is more common in African-American people with myeloma and may contribute to the higher rates of myeloma in this group.

Epstein–Barr virus

Rarely, Epstein–Barr virus (EBV) is associated with multiple myeloma, particularly in individuals who have an immunodeficiency due to, e.g., HIV/AIDS, organ transplantation, or a chronic inflammatory condition such as rheumatoid arthritis. EBV-positive multiple myeloma is classified by the World Health Organization (2016) as one form of the Epstein–Barr virus-associated lymphoproliferative diseases and termed Epstein–Barr virus-associated plasma cell myeloma. EBV-positive disease is more common in the plasmacytoma form rather than the multiple myeloma form of plasma cell cancer. Tissues involved in EBV+ disease typically show foci of EBV+ cells with the appearance of rapidly proliferating immature or poorly differentiated plasma cells. The cells express products of EBV genes such as EBER1 and EBER2. While EBV contributes to the development and/or progression of most Epstein–Barr virus-associated lymphoproliferative diseases, its role in multiple myeloma is not known. However, people who are EBV-positive with localized plasmacytoma(s) are more likely to progress to multiple myeloma compared to people with EBV-negative plasmacytoma(s). This suggests that EBV may have a role in the progression of plasmacytomas to systemic multiple myeloma.

Pathophysiology

B lymphocytes start in the bone marrow and move to the lymph nodes. As they progress, they mature and display different proteins on their cell surfaces. When they are activated to secrete antibodies, they are known as plasma cells.

Multiple myeloma develops in B lymphocytes after they have left the part of the lymph node known as the germinal center. The normal cell type most closely associated with MM cells is generally taken to be either an activated memory B cell or the precursor to plasma cells, the plasmablast.

The immune system keeps the proliferation of B cells and the secretion of antibodies under tight control. When chromosomes and genes are damaged, often through rearrangement, this control is lost. Often, a promoter gene moves (or translocates) to a chromosome, where it stimulates an antibody gene to overproduction.

A chromosomal translocation between the immunoglobulin heavy chain gene (on chromosome 14, locus q32) and an oncogene (often 11q13, 4p16.3, 6p21, 16q23 and 20q11) is frequently observed in people with multiple myeloma. This mutation results in dysregulation of the oncogene which is thought to be an important initiating event in the pathogenesis of myeloma. The result is a proliferation of a plasma cell clone and genomic instability that leads to further mutations and translocations. The chromosome 14 abnormality is observed in about 50% of all cases of myeloma. Deletion of (parts of) chromosome 13 is also observed in about 50% of cases.

Production of cytokines (especially IL-6) by the plasma cells causes much of their localised damage, such as osteoporosis, and creates a microenvironment in which the malignant cells thrive. Angiogenesis (the generation of new blood vessels) is increased.

The produced antibodies are deposited in various organs, leading to kidney failure, polyneuropathy, and various other myeloma-associated symptoms.

Epigenetic

In a study that investigated the DNA methylation profile of multiple myeloma cells and normal plasma cells, a gradual demethylation from stem cells to plasma cells was observed. The observed methylation pattern of CpG within intronic regions with enhancer-related chromatin marks in multiple myeloma is similar to that of undifferentiated precursor and stem cells. These results may represent a de novo epigenetic reprogramming in multiple myeloma, leading to the acquisition of a methylation pattern related to stemness. Other studies have identified a multiple myeloma-specific gene silencing pattern associated with the polycomb repressive complex 2 (PRC2). Increased expression of the PRC2 subunit EZH2 has been described as a common feature in multiple myeloma, resulting in an accumulation and redistribution of histone H3 lysine 27 trimethylation that advances with disease severity.

Genetics

Chromosomal abnormalities commonly found in this disease, like trisomy of multiple odd-numbered chromosomes, t(11;14), and del(13q), are not associated with a worse prognosis. However, about 25% of patients with newly diagnosed disease have abnormalities associated with a worse prognosis, like t(4;14), t(14;16), and del(17p). Other less common abnormalities associated with a worse prognosis include t(14;20) and ≥4 copies of 1q.

Associated genetic mutations include ATM, BRAF, CCND1, DIS3, FAM46C, KRAS, NRAS and TP53.

Development

The genetic and epigenetic changes occur progressively. The initial change, often involving one chromosome 14 translocation, establishes a clone of bone marrow plasma cells that causes the asymptomatic disorder MGUS, a premalignant disorder characterized by increased numbers of plasma cells in the bone marrow or the circulation of a myeloma protein immunoglobulin. Further genetic or epigenetic changes produce a new clone of bone marrow plasma cells, usually descended from the original clone, that causes the more serious, but still asymptomatic, premalignant disorder smoldering multiple myeloma. This myeloma is characterized by a rise in the number of bone marrow plasma cells or levels of the circulating myeloma protein above that seen in MGUS.

Subsequent genetic and epigenetic changes lead to a new, more aggressive clone of plasma cells, which cause further rises in the level of the circulating myeloma protein, further rises in the number of bone marrow plasma cells, or the development of one or more of a specific set of "CRAB" symptoms, which are the basis for diagnosing malignant multiple myeloma and treating the disease.

In a small percentage of multiple myeloma cases, further genetic and epigenetic changes lead to the development of a plasma cell clone that moves from the bone marrow into the circulatory system, invades distant tissues, and thereby causes the most malignant of all plasma cell dyscrasias, plasma cell leukemia. Thus, a fundamental genetic instability in plasma cells or their precursors leads to the progression:

Monoclonal gammopathy of undetermined significance → smoldering multiple myeloma → multiple myeloma → plasma cell leukemia

Being asymptomatic, monoclonal gammopathy of undetermined significance and smoldering multiple myeloma are typically diagnosed fortuitously by detecting a myeloma protein on serum protein electrophoresis tests done for other purposes. MGUS is a relatively stable condition, afflicting 3% of people aged 50 and 5% of people aged 70; it progresses to multiple myeloma at a rate of 0.5–1% of cases per year; smoldering multiple myeloma does so at a rate of 10% per year for the first 5 years, but the rate then falls sharply to 3% per year for the next 5 years and thereafter to 1% per year.
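As a worked example of what those annual rates imply, the cumulative chance of progression of smoldering multiple myeloma can be computed as below. This is a simplification that ignores competing mortality and assumes the quoted rates hold exactly.

    def cumulative_progression_risk(years):
        """Cumulative progression risk using the rates quoted above:
        10%/year for years 1-5, 3%/year for years 6-10, 1%/year thereafter."""
        no_progression = 1.0
        for year in range(1, years + 1):
            rate = 0.10 if year <= 5 else 0.03 if year <= 10 else 0.01
            no_progression *= (1 - rate)
        return 1 - no_progression

    for y in (5, 10, 15):
        print(f"{y} years: {cumulative_progression_risk(y):.0%}")
    # roughly 41% at 5 years, 49% at 10 years, 52% at 15 years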

Overall, some 2–4% of multiple myeloma cases eventually progress to plasma cell leukemia.

Diagnosis

Blood tests

Serum protein electropherogram showing a paraprotein (peak in the gamma zone) in a person with multiple myeloma

The globulin level may be normal in established disease. A doctor may request protein electrophoresis of the blood and urine, which might show the presence of a paraprotein (monoclonal protein, or M protein) band, with or without reduction of the other (normal) immunoglobulins (known as immune paresis). One type of paraprotein is the Bence Jones protein, which is a urinary paraprotein composed of free light chains. Quantitative measurements of the paraprotein are necessary to establish a diagnosis and to monitor the disease. The paraprotein is an abnormal immunoglobulin produced by the tumor clone.

In theory, multiple myeloma can produce all classes of immunoglobulin, but IgG paraproteins are most common, followed by IgA and IgM. IgD and IgE myeloma are very rare. In addition, light and/or heavy chains (the building blocks of antibodies) may be secreted in isolation: κ- or λ-light chains or any of the five types of heavy chains (α-, γ-, δ-, ε- or μ-heavy chains). People without evidence of a monoclonal protein may have "nonsecretory" myeloma (not producing immunoglobulins); this represents about 3% of all people with multiple myeloma.

Additional findings may include a raised calcium level (when osteoclasts are breaking down bone, releasing it into the bloodstream) and a raised serum creatinine level due to reduced kidney function, which is mainly caused by casts of paraprotein deposited in the kidney, although the casts may also contain complete immunoglobulins, Tamm-Horsfall protein and albumin.

Other useful laboratory tests include quantitative measurement of IgA, IgG, and IgM to look for immune paresis, and beta-2 microglobulin, which provides prognostic information. On peripheral blood smear, the rouleaux formation of red blood cells is commonly seen, though this is not specific.

The recent introduction of a commercial immunoassay for measurement of free light chains potentially offers an improvement in monitoring disease progression and response to treatment, particularly where the paraprotein is difficult to measure accurately by electrophoresis (for example in light chain myeloma, or where the paraprotein level is very low). Initial research also suggests that measurement of free light chains may also be used, in conjunction with other markers, for assessment of the risk of progression from MGUS to multiple myeloma.

This assay, the serum free light chain assay, has recently been recommended by the International Myeloma Working Group for the screening, diagnosis, prognosis, and monitoring of plasma cell dyscrasias.

Histopathology

A bone marrow biopsy is usually performed to estimate the percentage of bone marrow occupied by plasma cells. This percentage is used in the diagnostic criteria for myeloma. Immunohistochemistry (staining particular cell types using antibodies against surface proteins) can detect plasma cells that express immunoglobulin in the cytoplasm and occasionally on the cell surface; myeloma cells are often CD56, CD38, CD138, and CD319 positive and CD19, CD20, and CD45 negative. Flow cytometry is often used to establish the clonal nature of the plasma cells, which will generally express only kappa or lambda light chain. Cytogenetics may also be performed in myeloma for prognostic purposes, including a myeloma-specific fluorescent in situ hybridization and virtual karyotype.

The plasma cells seen in multiple myeloma have several possible morphologies. First, they could have the appearance of a normal plasma cell, a large cell two or three times the size of a peripheral lymphocyte. Because they are actively producing antibodies, the Golgi apparatus typically produces a light-colored area adjacent to the nucleus, called a perinuclear halo. The single nucleus (containing a single nucleolus with vesicular nuclear chromatin) is eccentric, displaced by an abundant cytoplasm. Other common morphologies seen, but which are not usual in normal plasma cells, include:

  • Bizarre cells, which are multinucleated
  • Mott cells, containing multiple clustered cytoplasmic droplets or other inclusions (sometimes confused with Auer rods, commonly seen in myeloid blasts)
  • Flame cells, having a fiery red cytoplasm

Historically, CD138 has been used to isolate myeloma cells for diagnostic purposes. However, this antigen disappears rapidly ex vivo. Recently, the surface antigen CD319 (SLAMF7) was discovered to be considerably more stable and to allow robust isolation of malignant plasma cells from delayed or even cryopreserved samples.

The prognosis varies widely depending upon various risk factors. The Mayo Clinic has developed a risk-stratification model termed Mayo Stratification for Myeloma and Risk-adapted Therapy (mSMART), which divides people into high-risk and standard-risk categories. People with deletion of chromosome 13 or hypodiploidy by conventional cytogenetics, t(4;14), t(14;16), t(14;20) or 17p- by molecular genetic studies, or with a high plasma cell labeling index (3% or more) are considered to have high-risk myeloma.

Medical imaging

The diagnostic examination of a person with suspected multiple myeloma typically includes a skeletal survey. This is a series of X-rays of the skull, axial skeleton, and proximal long bones. Myeloma activity sometimes appears as "lytic lesions" (with local disappearance of normal bone due to resorption), and on the skull X-ray as "punched-out lesions" (raindrop skull). Lesions may also be sclerotic, which is seen as radiodense. Overall, the radiodensity of myeloma is between −30 and 120 Hounsfield units (HU). Magnetic resonance imaging is more sensitive than simple X-rays in the detection of lytic lesions, and may supersede a skeletal survey, especially when vertebral disease is suspected. Occasionally, a CT scan is performed to measure the size of soft-tissue plasmacytomas. Bone scans are typically not of any additional value in the workup of people with myeloma (no new bone formation; lytic lesions not well visualized on bone scan).

Diagnostic criteria

In 2003, the IMWG agreed on diagnostic criteria for symptomatic myeloma, asymptomatic myeloma, and MGUS, which were subsequently updated in 2009:

  • Symptomatic myeloma (all three criteria must be met):
    1. Clonal plasma cells >10% on bone marrow biopsy or (in any quantity) in a biopsy from other tissues (plasmacytoma)
    2. A monoclonal protein (myeloma protein) of more than 3 g/dl in either serum or urine (except in cases of true nonsecretory myeloma)
    3. Evidence of end-organ damage felt related to the plasma cell disorder (related organ or tissue impairment, CRAB):
      • HyperCalcemia (corrected calcium >2.75 mmol/l, >11 mg/dl)
      • Renal failure (kidney insufficiency) attributable to myeloma
      • Anemia (hemoglobin <10 g/dl)
      • Bone lesions (lytic lesions or osteoporosis with compression fractures)

Note: Recurrent infections alone in a person who has none of the CRAB features are not sufficient to make the diagnosis of myeloma. People who lack CRAB features but have evidence of amyloidosis should be considered as having amyloidosis and not myeloma. CRAB-like abnormalities are common with numerous diseases, and these abnormalities must be felt to be directly attributable to the related plasma cell disorder, and every attempt must be made to rule out other underlying causes of anemia, kidney failure, etc.

In 2014, the IMWG updated their criteria further to include biomarkers of malignancy. These biomarkers are >60% clonal plasma cells, a serum involved/uninvolved free light chain ratio ≥ 100 (the concentration of the involved free light chain must be ≥ 100 mg/l) and more than one focal lesion ≥ 5 mm by MRI. Together, these biomarkers and the CRAB criteria are known as myeloma-defining events (MDEs). A person must have >10% clonal plasma cells and any MDE to be diagnosed with myeloma. The biomarker criteria were added so that people with smouldering multiple myeloma at high risk of developing multiple myeloma could be diagnosed before organ damage occurred and would therefore have a better prognosis.

  • Asymptomatic/smoldering myeloma:
    1. Serum M protein >30 g/l (3 g/dl) or
    2. Clonal plasma cells >10% on bone marrow biopsy and
    3. No myeloma-related organ or tissue impairment
  • Monoclonal gammopathy of undetermined significance (MGUS):
    1. Serum paraprotein <30 g/l (3 g/dl) and
    2. Clonal plasma cells <10% on bone marrow biopsy and
    3. No myeloma-related organ or tissue impairment or a related B-cell lymphoproliferative disorder

Related conditions include solitary plasmacytoma (a single tumor of plasma cells, typically treated with irradiation), plasma cell dyscrasia (where only the antibodies produce symptoms, e.g., AL amyloidosis), and POEMS syndrome (peripheral neuropathy, organomegaly, endocrinopathy, monoclonal plasma cell disorder, and skin changes).

Staging

In multiple myeloma, staging helps with prognostication but does not guide treatment decisions. The Durie-Salmon staging system was used historically and was replaced by the International Staging System (ISS), published by the International Myeloma Working Group in 2005. The revised ISS (R-ISS) was published in 2015 and incorporates cytogenetics and lactate dehydrogenase (LDH).

  • Stage I: β2 microglobulin (β2M) < 3.5 mg/L, albumin ≥ 3.5 g/dL, normal cytogenetics, no elevated LDH
  • Stage II: Not classified under Stage I or Stage III
  • Stage III: β2M ≥ 5.5 mg/L and either elevated LDH or high-risk cytogenetics [t(4;14), t(14;16), and/or del(17p)]
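The three stages above can be restated as a small rule set; the sketch below is illustrative only and not a clinical tool, and the parameter names are invented for readability.

    def r_iss_stage(beta2_microglobulin_mg_l, albumin_g_dl,
                    high_risk_cytogenetics, elevated_ldh):
        """Apply the R-ISS rules listed above."""
        if (beta2_microglobulin_mg_l < 3.5 and albumin_g_dl >= 3.5
                and not high_risk_cytogenetics and not elevated_ldh):
            return "Stage I"
        if beta2_microglobulin_mg_l >= 5.5 and (high_risk_cytogenetics or elevated_ldh):
            return "Stage III"
        return "Stage II"   # anything not classified as Stage I or Stage III

    print(r_iss_stage(2.8, 4.0, False, False))  # Stage I
    print(r_iss_stage(6.1, 3.2, True, False))   # Stage III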

Prevention

The risk of multiple myeloma can be reduced slightly by maintaining a normal body weight.

Treatment

Treatment is indicated in myeloma with symptoms. If there are no symptoms, but a paraprotein typical of myeloma and a diagnostic bone marrow are present without end-organ damage, treatment is usually deferred or restricted to clinical trials. Treatment for multiple myeloma is focused on decreasing the clonal plasma cell population and consequently decreasing the symptoms of the disease.

Chemotherapy

Initial

Initial treatment of multiple myeloma depends on the person's age and other illnesses present.

The preferred treatment for those under the age of 65 is high-dose chemotherapy, commonly with bortezomib-based regimens and lenalidomide–dexamethasone, followed by a stem cell transplant. A 2016 study concluded that stem cell transplant is the preferred treatment of multiple myeloma. There are two types of stem cell transplants used to treat multiple myeloma. In autologous hematopoietic stem-cell transplantation (ASCT), the patient's own stem cells are collected from the patient's blood. The patient is given high-dose chemotherapy, and the patient's stem cells are then transplanted back into the patient. The process is not curative, but does prolong overall survival and complete remission. In allogeneic stem-cell transplantation, a healthy donor's stem cells are transplanted into the affected person. Allogeneic stem-cell transplantation has the potential for a cure, but is used in a very small percentage of people (and in the relapsed setting, not as part of initial treatment). Furthermore, a 5–10% treatment-associated mortality rate is associated with allogeneic stem-cell transplant.

People over age 65 and people with significant concurrent illnesses often cannot tolerate stem-cell transplantation. For these people, the standard of care has been chemotherapy with melphalan and prednisone. Recent studies among this population suggest improved outcomes with new chemotherapy regimens, e.g., with bortezomib. Treatment with bortezomib, melphalan, and prednisone had an estimated overall survival of 83% at 30 months, lenalidomide plus low-dose dexamethasone an 82% survival at 2 years, and melphalan, prednisone, and lenalidomide had a 90% survival at 2 years. Head-to-head studies comparing these regimens have not been performed as of 2008.

There is support for continuous therapies with multiple drug combinations of antimyeloma drugs bortezomib, lenalidomide and thalidomide as initial treatment for transplant-ineligible multiple myeloma. Further clinical studies are required to determine the potential harms of these drugs and the effect on the person's quality of life. A 2009 review noted, "Deep venous thrombosis and pulmonary embolism are the major side effects of thalidomide and lenalidomide. Lenalidomide causes more myelosuppression, and thalidomide causes more sedation. Chemotherapy-induced peripheral neuropathy and thrombocytopenia are major side effects of bortezomib."

Treatment of related hyperviscosity syndrome may be required to prevent neurologic symptoms or kidney failure.

Maintenance

Most people, including those treated with ASCT, relapse after initial treatment. Maintenance therapy using a prolonged course of low-toxicity medications is often used to prevent relapse. A 2017 meta-analysis showed that post-ASCT maintenance therapy with lenalidomide improved progression-free survival and overall survival in people at standard risk. A 2012 clinical trial showed that people with intermediate- and high-risk disease benefit from a bortezomib-based maintenance regimen.

Relapse

Reasons for relapse include disease evolution, either from the selective pressure applied by treatment or from de novo mutations, and/or disease that was inadequately represented in the initial biopsy. Relapse within the first 18 months of diagnosis is considered functional high-risk multiple myeloma. Depending on the person's condition, the prior treatment modalities used and the duration of remission, options for relapsed disease include retreatment with the original agent, use of other agents (such as melphalan, cyclophosphamide, thalidomide, or dexamethasone, alone or in combination), and a second ASCT.

Later in the course of the disease, it becomes refractory (resistant) to formerly effective treatment. This stage is referred to as relapsed/refractory multiple myeloma (RRMM). Treatment modalities commonly used to treat RRMM include dexamethasone, proteasome inhibitors (e.g. bortezomib and carfilzomib), immunomodulatory imide drugs (e.g. thalidomide, lenalidomide, and pomalidomide), and certain monoclonal antibodies (e.g. against CD38 and CD319). Survival expectancy has risen in recent years, and new treatments are under development.

Kidney failure in multiple myeloma can be acute (reversible) or chronic (irreversible). Acute kidney failure typically resolves when the calcium and paraprotein levels are brought under control. Treatment of chronic kidney failure is dependent on the type of kidney failure and may involve dialysis.

Several newer options are approved for the management of advanced disease:

  • belantamab mafodotin — a monoclonal antibody against B-cell maturation antigen (BCMA), also known as CD269, indicated for the treatment of adults with relapsed or refractory multiple myeloma who have received at least four prior therapies including an anti-CD38 monoclonal antibody, a proteasome inhibitor, and an immunomodulatory agent.
  • carfilzomib—a proteasome inhibitor that is indicated:
    • as a single agent in people who have received one or more lines of therapy
    • in combination with dexamethasone or with lenalidomide and dexamethasone in people who have received one to three lines of therapy
  • daratumumab—a monoclonal antibody against CD38 indicated in people who have received at least three prior lines of therapy including a proteasome inhibitor and an immunomodulatory agent or who are double refractory to a proteasome inhibitor and an immunomodulatory agent
  • elotuzumab—an immunostimulatory humanized monoclonal antibody against SLAMF7 (also known as CD319) indicated in combination with lenalidomide and dexamethasone in people who have received one to three prior therapies
  • isatuximab—a monoclonal antibody against CD38 indicated in combination with pomalidomide and dexamethasone for the treatment of adults with multiple myeloma who have received at least two prior therapies including lenalidomide and a proteasome inhibitor.
  • ixazomib—an orally available proteasome inhibitor indicated in combination with lenalidomide and dexamethasone in people who have received at least one prior therapy
  • panobinostat—an orally available histone deacetylase inhibitor used in combination with bortezomib and dexamethasone in people who have received at least two prior chemotherapy regimens, including bortezomib and an immunomodulatory agent
  • selinexor—an orally available selective inhibitor of nuclear export indicated in combination with dexamethasone in people who have received at least four prior therapies and whose disease does not respond to at least two proteasome inhibitors, two immunomodulatory agents and an anti-CD38 monoclonal antibody
  • idecabtagene vicleucel—the first cell-based gene therapy approved by the FDA (in 2021) for the treatment of adults with relapsed or refractory multiple myeloma who have received at least four prior therapies

Stem cell transplant

Stem cell transplant can be used to treat multiple myeloma. Stem cell transplants carry a risk of graft-versus-host disease (GvHD). Mesenchymal stromal cells (MSCs) given for therapeutic reasons may reduce all-cause mortality and may increase the complete response of acute and chronic GvHD, but the evidence is very uncertain. The evidence suggests that MSCs given prophylactically result in little to no difference in all-cause mortality, in relapse of malignant disease, or in the incidence of acute GvHD, but that they reduce the incidence of chronic GvHD.

Gene therapy

Ciltacabtagene autoleucel (Carvykti) was approved for medical use in the United States in February 2022. Ciltacabtagene autoleucel is indicated for the treatment of adults with relapsed or refractory multiple myeloma after four or more prior lines of therapy, including a proteasome inhibitor, an immunomodulatory agent, and an anti-CD38 monoclonal antibody.

Other measures

In addition to direct treatment of the plasma cell proliferation, bisphosphonates (e.g., pamidronate or zoledronic acid) are routinely administered to prevent fractures; they have also been observed to have a direct antitumor effect even in people without known skeletal disease. If needed, red blood cell transfusions or erythropoietin can be used for management of anemia.

Side effects

Chemotherapy and stem cell transplants can cause unwanted bleeding and may require platelet transfusions. In people undergoing chemotherapy or a stem cell transplant, platelet transfusions given to prevent bleeding had differing effects on the number of participants with a bleeding event, the number of days on which bleeding occurred, mortality secondary to bleeding, and the number of platelet transfusions required, depending on how they were used (therapeutically, according to a threshold, at different dose schedules, or prophylactically).

Supportive treatment

Adding physical exercise to the standard treatment for adults with haematological malignancies such as multiple myeloma may result in little to no difference in mortality, quality of life, or physical functioning. Exercise may result in a slight reduction in depression, and aerobic exercise probably reduces fatigue. The evidence is very uncertain about the effect on serious adverse events.

Palliative care

Multiple national cancer treatment guidelines recommend early palliative care for people with advanced multiple myeloma at the time of diagnosis and for anyone who has significant symptoms.

Palliative care is appropriate at any stage of multiple myeloma and can be provided alongside curative treatment. In addition to addressing symptoms of cancer, palliative care helps manage unwanted side effects, such as pain and nausea related to treatments.

Teeth

Oral prophylaxis, hygiene instruction, and elimination of sources of infection within the mouth before beginning cancer treatment can reduce the risk of infectious complications. Before starting bisphosphonate therapy, the person's dental health should be evaluated to assess risk factors for medication-related osteonecrosis of the jaw (MRONJ). If any symptoms or radiographic signs of MRONJ appear, such as jaw pain, a loose tooth, or mucosal swelling, early referral to an oral surgeon is recommended. Dental extractions should be avoided during the active period of treatment; affected teeth should instead be managed with nonsurgical root canal treatment.

Prognosis

Overall, the 5-year survival rate is around 54% in the United States. With high-dose therapy followed by ASCT, the median survival was estimated in 2003 to be about 4.5 years, compared to a median of around 3.5 years with "standard" therapy.

The international staging system can help to predict survival, with a median survival (in 2005) of 62 months for stage-1 disease, 45 months for stage-2 disease, and 29 months for stage-3 disease. The median age at diagnosis is 69 years.

Genetic testing

SNP array karyotyping can detect copy number alterations of prognostic significance that may be missed by a targeted FISH panel.

Epidemiology

Map: Deaths from lymphomas and multiple myeloma per million persons in 2012 (ranging from 0–13 to 122–184 across countries).

Map: Age-standardized deaths from lymphomas and multiple myeloma per 100,000 inhabitants in 2004 (ranging from less than 1.8 to more than 19.8 across countries).

Globally, multiple myeloma affected 488,000 people and resulted in 101,100 deaths in 2015, up from 49,000 deaths in 1990.

United States

In the United States in 2016, an estimated 30,330 new cases and 12,650 deaths were reported. These estimates are based on projections using data from 2011, which put the number of people living with the disease at 83,367, the incidence at 6.1 new cases per 100,000 people per year, and the mortality at 3.4 deaths per 100,000 people per year.
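
As a rough illustration of how such per-100,000 rates relate to national counts, the sketch below multiplies the 2011 rates by an assumed US population of about 312 million. The population figure is an assumption for illustration, and the published 2016 projections incorporate additional modelling (such as population growth and age structure), so this is only an order-of-magnitude check.

```python
# Illustrative arithmetic only: converting per-100,000 rates into approximate national counts.
# The US population figure is an assumption for illustration, not taken from the text.
us_population_2011 = 312_000_000   # assumed approximate US population in 2011

incidence_per_100k = 6.1           # new cases per 100,000 people per year (from the text)
mortality_per_100k = 3.4           # deaths per 100,000 people per year (from the text)

implied_new_cases = incidence_per_100k / 100_000 * us_population_2011
implied_deaths = mortality_per_100k / 100_000 * us_population_2011

print(f"Implied new cases per year: {implied_new_cases:,.0f}")  # roughly 19,000
print(f"Implied deaths per year:    {implied_deaths:,.0f}")     # roughly 10,600
```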

Multiple myeloma is the second-most prevalent blood cancer (10%) after non-Hodgkin's lymphoma. It represents about 1.8% of all new cancers and 2.1% of all cancer deaths.

Multiple myeloma affects slightly more men than women. African Americans and native Pacific Islanders have the highest reported number of new cases of this disease in the United States and Asians the lowest. Results of one study found the number of new cases of myeloma to be 9.5 cases per 100,000 African Americans and 4.1 cases per 100,000 Caucasian Americans. Among African Americans, myeloma is one of the top-10 causes of cancer death.

UK

Myeloma is the 17th-most common cancer in the UK: around 4,800 people were diagnosed with the disease in 2011. It is the 16th-most common cause of cancer death: around 2,700 people died of it in 2012.
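
For comparison with the per-100,000 figures quoted for other countries, these UK counts can be converted into crude rates; a minimal sketch, assuming a UK population of roughly 63 million at the time (an assumption not stated in the text):

```python
# Illustrative arithmetic only: converting annual counts into crude rates per 100,000.
# The UK population figure is an assumption for illustration, not taken from the text.
uk_population = 63_000_000   # assumed approximate UK population, early 2010s

new_cases_2011 = 4_800       # diagnoses in 2011 (from the text)
deaths_2012 = 2_700          # deaths in 2012 (from the text)

crude_incidence_per_100k = new_cases_2011 / uk_population * 100_000
crude_mortality_per_100k = deaths_2012 / uk_population * 100_000

print(f"Crude incidence: {crude_incidence_per_100k:.1f} per 100,000 per year")  # about 7.6
print(f"Crude mortality: {crude_mortality_per_100k:.1f} per 100,000 per year")  # about 4.3
```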

Other animals

Multiple myeloma has been diagnosed in dogs, cats, and horses.

In dogs, multiple myeloma accounts for around 8% of all haemopoietic tumors. It occurs in older dogs and is not particularly associated with either sex, and no breeds appear overrepresented in the case reviews that have been conducted. Diagnosis in dogs is usually delayed because the initial clinical signs are nonspecific and variable; it usually involves bone-marrow studies, X-rays, and plasma-protein studies. In dogs, protein studies usually reveal the monoclonal gammaglobulin elevation to be IgA or IgG in roughly equal numbers of cases; in rare cases the globulin elevation is IgM, which is referred to as Waldenström's macroglobulinemia. The prognosis for initial control and return to good quality of life in dogs is good; 43% of dogs started on a combination chemotherapy protocol achieved complete remission. Long-term survival is the norm, with a median of 540 days reported. The disease eventually recurs, becoming resistant to available therapies. Complications such as kidney failure, sepsis, or pain can lead to the animal's death, frequently by euthanasia.
