ARPANET

From Wikipedia, the free encyclopedia

ARPANET
ARPANET logical map, March 1977
Type: Data
Location: United States, United Kingdom, Norway
Protocols: 1822 protocol, NCP, TCP/IP
Operator: From 1975, Defense Communications Agency
Established: 1969
Closed: 1990
Commercial: No
Funding: From 1966, Advanced Research Projects Agency (ARPA)
ARPANET access points in the 1970s

The Advanced Research Projects Agency Network (ARPANET) was the first wide-area packet-switched network with distributed control and one of the first networks to implement the TCP/IP protocol suite. Both technologies became the technical foundation of the Internet. The ARPANET was established by the Advanced Research Projects Agency (ARPA) of the United States Department of Defense.

Building on the ideas of J. C. R. Licklider, Bob Taylor initiated the ARPANET project in 1966 to enable access to remote computers. Taylor appointed Larry Roberts as program manager. Roberts made the key decisions about the network design. He incorporated Donald Davies’ concepts and designs for packet switching, and sought input from Paul Baran. ARPA awarded the contract to build the network to Bolt Beranek & Newman who developed the first protocol for the network. Roberts engaged Leonard Kleinrock at UCLA to develop mathematical methods for analyzing the packet network technology.

The first computers were connected in 1969 and the Network Control Program was implemented in 1970. The network was declared operational in 1971. Further software development enabled remote login, file transfer and email. The network expanded rapidly and operational control passed to the Defense Communications Agency in 1975.

Internetworking research in the early 1970s led by Bob Kahn at DARPA and Vint Cerf at Stanford University and later DARPA formulated the Transmission Control Program, which incorporated concepts from the French CYCLADES project. As this work progressed, a protocol was developed by which multiple separate networks could be joined into a network of networks. Version 4 of TCP/IP was installed in the ARPANET for production use in January 1983 after the Department of Defense made it standard for all military computer networking.

Access to the ARPANET was expanded in 1981, when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In the early 1980s, the NSF funded the establishment of national supercomputing centers at several universities, and provided network access and network interconnectivity with the NSFNET project in 1986. The ARPANET was formally decommissioned in 1990, after partnerships with the telecommunication and computer industry had assured private sector expansion and future commercialization of an expanded world-wide network, known as the Internet.

History

Inspiration

Historically, voice and data communications were based on methods of circuit switching, as exemplified in the traditional telephone network, wherein each telephone call is allocated a dedicated, end-to-end electronic connection between the two communicating stations. The connection is established by switching systems that connect multiple intermediate call legs between these systems for the duration of the call.

The traditional model of the circuit-switched telecommunication network was challenged in the early 1960s by Paul Baran at the RAND Corporation, who had been researching systems that could sustain operation during partial destruction, such as by nuclear war. He developed the theoretical model of distributed adaptive message block switching. However, the telecommunication establishment rejected the development in favor of existing models. Donald Davies at the United Kingdom's National Physical Laboratory (NPL) independently arrived at a similar concept in 1965.

The earliest ideas for a computer network intended to allow general communications among computer users were formulated by computer scientist J. C. R. Licklider of Bolt Beranek and Newman (BBN), in April 1963, in memoranda discussing the concept of the "Intergalactic Computer Network". Those ideas encompassed many of the features of the contemporary Internet. In October 1963, Licklider was appointed head of the Behavioral Sciences and Command and Control programs at the Defense Department's Advanced Research Projects Agency (ARPA). He convinced Ivan Sutherland and Bob Taylor that this network concept was very important and merited development, although Licklider left ARPA before any contracts were assigned for development.

Sutherland and Taylor continued their interest in creating the network, in part, to allow ARPA-sponsored researchers at various corporate and academic locales to utilize computers provided by ARPA, and, in part, to quickly distribute new software and other computer science results. Taylor had three computer terminals in his office, each connected to separate computers, which ARPA was funding: one for the System Development Corporation (SDC) Q-32 in Santa Monica, one for Project Genie at the University of California, Berkeley, and another for Multics at the Massachusetts Institute of Technology. Taylor recalls the circumstance: "For each of these three terminals, I had three different sets of user commands. So, if I was talking online with someone at S.D.C., and I wanted to talk to someone I knew at Berkeley, or M.I.T., about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them. I said, "Oh Man!", it's obvious what to do: If you have these three terminals, there ought to be one terminal that goes anywhere you want to go. That idea is the ARPANET".

Donald Davies' work caught the attention of ARPANET developers at the Symposium on Operating Systems Principles in October 1967. Having coined the term packet switching, he gave the first public presentation on the subject in August 1968 and incorporated the technique into the NPL network in England. The NPL network and ARPANET were the first two networks in the world to use packet switching, and were themselves interconnected in 1973. Roberts said the ARPANET and other packet switching networks built in the 1970s were similar "in nearly all respects" to Davies' original 1965 design.

Creation

In February 1966, Bob Taylor successfully lobbied ARPA's Director Charles M. Herzfeld to fund a network project. Herzfeld redirected funds in the amount of one million dollars from a ballistic missile defense program to Taylor's budget. Taylor hired Larry Roberts as a program manager in the ARPA Information Processing Techniques Office in January 1967 to work on the ARPANET.

Roberts asked Frank Westervelt to explore the initial design questions for a network. In April 1967, ARPA held a design session on technical standards. The initial standards for identification and authentication of users, transmission of characters, and error checking and retransmission procedures were discussed. Roberts' proposal was that all mainframe computers would connect to one another directly. The other investigators were reluctant to dedicate these computing resources to network administration. Wesley Clark proposed minicomputers should be used as an interface to create a message switching network. Roberts modified the ARPANET plan to incorporate Clark's suggestion and named the minicomputers Interface Message Processors (IMPs).

The plan was presented at the inaugural Symposium on Operating Systems Principles in October 1967. Donald Davies' work on packet switching and the NPL network, presented by a colleague (Roger Scantlebury), came to the attention of the ARPA investigators at this conference. Roberts applied Davies' concept of packet switching for the ARPANET, and sought input from Paul Baran. The NPL network was using line speeds of 768 kbit/s, and the proposed line speed for the ARPANET was upgraded from 2.4 kbit/s to 50 kbit/s.

By mid-1968, Roberts and Barry Wessler wrote a final version of the Interface Message Processor (IMP) specification, based on a Stanford Research Institute (SRI) report that ARPA had commissioned to provide detailed specifications describing the ARPANET communications network. Roberts gave a report to Taylor on 3 June, who approved it on 21 June. After approval by ARPA, a Request for Quotation (RFQ) was issued to 140 potential bidders. Most computer science companies regarded the ARPA proposal as outlandish, and only twelve submitted bids to build a network; of the twelve, ARPA regarded only four as top-rank contractors. At year's end, ARPA considered only two contractors, and awarded the contract to build the network to Bolt, Beranek and Newman Inc. (BBN) in January 1969.

The initial, seven-person BBN team was much aided by the technical specificity of their response to the ARPA RFQ, and thus quickly produced the first working system. This team was led by Frank Heart and included Robert Kahn and Dave Walden. The BBN-proposed network closely followed Roberts' ARPA plan: a network composed of small computers called Interface Message Processors (or IMPs), similar to the later concept of routers, that functioned as gateways interconnecting local resources. At each site, the IMPs performed store-and-forward packet switching functions, and were interconnected with leased lines via telecommunication data sets (modems), with initial data rates of 56 kbit/s. The host computers were connected to the IMPs via custom serial communication interfaces. The system, including the hardware and the packet switching software, was designed and installed in nine months. The BBN team continued to interact with the NPL team, with meetings between them taking place in the U.S. and the U.K.

The first-generation IMPs were built by BBN Technologies using a ruggedized version of the Honeywell DDP-516 computer, configured with 24 kB of expandable magnetic-core memory, and a 16-channel Direct Multiplex Control (DMC) direct memory access unit. The DMC established custom interfaces with each of the host computers and modems. In addition to the front-panel lamps, the DDP-516 computer also featured a special set of 24 indicator lamps showing the status of the IMP communication channels. Each IMP could support up to four local hosts, and could communicate with up to six remote IMPs via early Digital Signal 0 leased telephone lines. The network connected one computer in Utah with three in California. Later, the Department of Defense allowed the universities to join the network for sharing hardware and software resources.

Debate on design goals

According to Charles Herzfeld, ARPA Director (1965–1967):

The ARPANET was not started to create a Command and Control System that would survive a nuclear attack, as many now claim. To build such a system was, clearly, a major military need, but it was not ARPA's mission to do this; in fact, we would have been severely criticized had we tried. Rather, the ARPANET came out of our frustration that there were only a limited number of large, powerful research computers in the country, and that many research investigators, who should have access to them, were geographically separated from them.

Nonetheless, according to Stephen J. Lukasik, who as deputy director (1967–1970) and Director of DARPA (1970–1975) was "the person who signed most of the checks for Arpanet's development":

The goal was to exploit new computer technologies to meet the needs of military command and control against nuclear threats, achieve survivable control of US nuclear forces, and improve military tactical and management decision making.

The ARPANET incorporated distributed computation, and frequent re-computation, of routing tables. This increased the survivability of the network in the face of significant interruption. Automatic routing was technically challenging at the time. The ARPANET was designed to survive subordinate-network losses, the principal reason being that the switching nodes and network links were unreliable, even without any nuclear attacks.

The Internet Society agrees with Herzfeld in a footnote in their online article, A Brief History of the Internet:

It was from the RAND study that the false rumor started, claiming that the ARPANET was somehow related to building a network resistant to nuclear war. This was never true of the ARPANET, but was an aspect of the earlier RAND study of secure communication. The later work on internetworking did emphasize robustness and survivability, including the capability to withstand losses of large portions of the underlying networks.

Paul Baran, the first to put forward a theoretical model for communication using packet switching, conducted the RAND study referenced above. Though the ARPANET did not exactly share Baran's project's goal, he said his work did contribute to the development of the ARPANET. Minutes taken by Elmer Shapiro of Stanford Research Institute at the ARPANET design meeting of 9–10 October 1967 indicate that a version of Baran's routing method ("hot potato") may be used, consistent with the NPL team's proposal at the Symposium on Operating System Principles in Gatlinburg.

Implementation

The first four nodes were designated as a testbed for developing and debugging the 1822 protocol, which was a major undertaking. While they were connected electronically in 1969, network applications were not possible until the Network Control Program was implemented in 1970 enabling the first two host-host protocols, remote login (Telnet) and file transfer (FTP) which were specified and implemented between 1969 and 1973. The network was declared operational in 1971. Network traffic began to grow once email was established at the majority of sites by around 1973.

Initial four hosts

First ARPANET IMP log: the first message ever sent via the ARPANET, 10:30 pm PST on 29 October 1969 (6:30 UTC on 30 October 1969). This IMP Log excerpt, kept at UCLA, describes setting up a message transmission from the UCLA SDS Sigma 7 Host computer to the SRI SDS 940 Host computer.

The first four IMPs were installed at the University of California, Los Angeles (UCLA), the Stanford Research Institute (SRI), the University of California, Santa Barbara (UCSB), and the University of Utah.

The first successful host to host connection on the ARPANET was made between Stanford Research Institute (SRI) and UCLA, by SRI programmer Bill Duvall and UCLA student programmer Charley Kline, at 10:30 pm PST on 29 October 1969 (6:30 UTC on 30 October 1969). Kline connected from UCLA's SDS Sigma 7 Host computer (in Boelter Hall room 3420) to the Stanford Research Institute's SDS 940 Host computer. Kline typed the command "login," but initially the SDS 940 crashed after he typed two characters. About an hour later, after Duvall adjusted parameters on the machine, Kline tried again and successfully logged in. Hence, the first two characters successfully transmitted over the ARPANET were "lo". The first permanent ARPANET link was established on 21 November 1969, between the IMP at UCLA and the IMP at the Stanford Research Institute. By 5 December 1969, the initial four-node network was established.

Elizabeth Feinler created the first Resource Handbook for ARPANET in 1969, which led to the development of the ARPANET directory. The directory, built by Feinler and a team, made it possible to navigate the ARPANET.

Growth and evolution

ARPA network map 1973

Roberts engaged Howard Frank to consult on the topological design of the network. Frank made recommendations to increase throughput and reduce costs in a scaled-up network. By March 1970, the ARPANET reached the East Coast of the United States, when an IMP at BBN in Cambridge, Massachusetts was connected to the network. Thereafter, the ARPANET grew: 9 IMPs by June 1970 and 13 IMPs by December 1970, then 18 by September 1971 (when the network included 23 university and government hosts); 29 IMPs by August 1972, and 40 by September 1973. By June 1974, there were 46 IMPs, and in July 1975, the network numbered 57 IMPs. By 1981, the number was 213 host computers, with another host connecting approximately every twenty days.

Support for inter-IMP circuits of up to 230.4 kbit/s was added in 1970, although considerations of cost and IMP processing power meant this capability was not actively used.

Larry Roberts saw the ARPANET and NPL projects as complementary and sought in 1970 to connect them via a satellite link. Peter Kirstein's research group at University College London (UCL) was subsequently chosen in 1971 in place of NPL for the UK connection. In June 1973, a transatlantic satellite link connected ARPANET to the Norwegian Seismic Array (NORSAR), via the Tanum Earth Station in Sweden, and onward via a terrestrial circuit to a TIP at UCL. UCL provided a gateway for an interconnection with the NPL network, the first interconnected network, and subsequently the SRCnet, the forerunner of UK's JANET network.

1971 saw the start of the use of the non-ruggedized (and therefore significantly lighter) Honeywell 316 as an IMP. It could also be configured as a Terminal Interface Processor (TIP), which provided terminal-server support for up to 63 ASCII serial terminals through a multi-line controller in place of one of the hosts. The 316 featured a greater degree of integration than the 516, which made it less expensive and easier to maintain. The 316 was configured with 40 kB of core memory for a TIP. In 1973, core memory was increased to 32 kB for the IMPs and 56 kB for the TIPs.

In 1975, BBN introduced IMP software running on the Pluribus multi-processor. These appeared in a few sites. In 1981, BBN introduced IMP software running on its own C/30 processor product.

Network performance

In 1968, Roberts contracted with Kleinrock to measure the performance of the network and find areas for improvement. Building on his earlier work on queueing theory, Kleinrock specified mathematical models of the performance of packet-switched networks, which underpinned the development of the ARPANET as it expanded rapidly in the early 1970s.

Operation

Internetworking demonstration, linking the ARPANET, PRNET, and SATNET in 1977

The ARPANET was a research project that was communications-oriented, rather than user-oriented in design. Nonetheless, in the summer of 1975, the ARPANET was declared "operational". The Defense Communications Agency took control since ARPA was intended to fund advanced research. At about this time, the first ARPANET encryption devices were deployed to support classified traffic.

The transatlantic connectivity with NORSAR and UCL later evolved into the SATNET. The ARPANET, SATNET and PRNET were interconnected in 1977.

The ARPANET Completion Report, published in 1981 jointly by BBN and ARPA, concludes that:

 ... it is somewhat fitting to end on the note that the ARPANET program has had a strong and direct feedback into the support and strength of computer science, from which the network, itself, sprang.

CSNET, expansion

Access to the ARPANET was expanded in 1981, when the National Science Foundation (NSF) funded the Computer Science Network (CSNET).

Adoption of TCP/IP

The DoD made TCP/IP standard for all military computer networking in 1980. NORSAR and University College London left the ARPANET and began using TCP/IP over SATNET in early 1982.

On January 1, 1983, known as flag day, TCP/IP protocols became the standard for the ARPANET, replacing the earlier Network Control Program.

MILNET, phasing out

In September 1984, work was completed on restructuring the ARPANET, giving U.S. military sites their own Military Network (MILNET) for unclassified defense department communications. Both networks carried unclassified information, and were connected at a small number of controlled gateways which would allow total separation in the event of an emergency. MILNET was part of the Defense Data Network (DDN).

Separating the civil and military networks reduced the 113-node ARPANET by 68 nodes. After MILNET was split away, the ARPANET would continue to be used as an Internet backbone for researchers, but would be slowly phased out.

Decommissioning

In 1985, the National Science Foundation (NSF) funded the establishment of national supercomputing centers at several universities, and provided network access and network interconnectivity with the NSFNET project in 1986. NSFNET became the Internet backbone for government agencies and universities.

The ARPANET project was formally decommissioned in 1990. The original IMPs and TIPs were phased out as the ARPANET was shut down after the introduction of the NSFNet, but some IMPs remained in service as late as July 1990.

In the wake of the decommissioning of the ARPANET on 28 February 1990, Vinton Cerf wrote the following lamentation, entitled "Requiem of the ARPANET":

It was the first, and being first, was best,
but now we lay it down to ever rest.
Now pause with me a moment, shed some tears.
For auld lang syne, for love, for years and years
of faithful service, duty done, I weep.
Lay down thy packet, now, O friend, and sleep.

-Vinton Cerf

Legacy

ARPANET in a broader context

The ARPANET was related to many other research projects, which either influenced the ARPANET design, or which were ancillary projects or spun out of the ARPANET.

Senator Al Gore authored the High Performance Computing and Communication Act of 1991, commonly referred to as "The Gore Bill", after hearing the 1988 concept for a National Research Network submitted to Congress by a group chaired by Leonard Kleinrock. The bill was passed on 9 December 1991 and led to the National Information Infrastructure (NII) which Gore called the information superhighway.

Inter-networking protocols developed by ARPA and implemented on the ARPANET paved the way for future commercialization of a new world-wide network, known as the Internet.

The ARPANET project was honored with two IEEE Milestones, both dedicated in 2009.

Software and protocols

IMP functionality

Because it was never a goal for the ARPANET to support IMPs from vendors other than BBN, the IMP-to-IMP protocol and message format were not standardized. However, the IMPs did nonetheless communicate amongst themselves to perform link-state routing, to do reliable forwarding of messages, and to provide remote monitoring and management functions to ARPANET's Network Control Center. Initially, each IMP had a 6-bit identifier, and supported up to 4 hosts, which were identified with a 2-bit index. An ARPANET host address, therefore, consisted of both the port index on its IMP and the identifier of the IMP, which was written with either port/IMP notation or as a single byte; for example, the address of MIT-DMG (notable for hosting development of Zork) could be written as either 1/6 or 70. An upgrade in early 1976 extended the host and IMP numbering to 8-bit and 16-bit, respectively.
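The packed single-byte address form described above can be sketched as follows. This is a minimal illustration, not BBN's actual IMP code: the 2-bit port index occupies the high bits and the 6-bit IMP number the low bits, which reproduces the MIT-DMG example of 1/6 written as 70:

```python
def encode_arpanet_address(port: int, imp: int) -> int:
    """Pack a 2-bit host port index and a 6-bit IMP number into one byte."""
    assert 0 <= port < 4 and 0 <= imp < 64
    return (port << 6) | imp

def decode_arpanet_address(addr: int) -> tuple[int, int]:
    """Recover (port, imp) from a packed one-byte ARPANET host address."""
    return (addr >> 6) & 0x3, addr & 0x3F

# The MIT-DMG example from the text: port 1 on IMP 6.
assert encode_arpanet_address(1, 6) == 70
assert decode_arpanet_address(70) == (1, 6)
```

The 1976 upgrade to 8-bit ports and 16-bit IMP numbers simply widened these fields; the same pack/unpack idea applies.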

In addition to primary routing and forwarding responsibilities, the IMP ran several background programs, titled TTY, DEBUG, PARAMETER-CHANGE, DISCARD, TRACE, and STATISTICS. These were given host numbers in order to be addressed directly and provided functions independently of any connected host. For example, "TTY" allowed an on-site operator to send ARPANET packets manually via the teletype connected directly to the IMP.

1822 protocol

The starting point for host-to-host communication on the ARPANET in 1969 was the 1822 protocol, which defined the transmission of messages to an IMP. The message format was designed to work unambiguously with a broad range of computer architectures. An 1822 message essentially consisted of a message type, a numeric host address, and a data field. To send a data message to another host, the transmitting host formatted a data message containing the destination host's address and the data message being sent, and then transmitted the message through the 1822 hardware interface. The IMP then delivered the message to its destination address, either by delivering it to a locally connected host, or by delivering it to another IMP. When the message was ultimately delivered to the destination host, the receiving IMP would transmit a Ready for Next Message (RFNM) acknowledgement to the sending host's IMP.
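The three essential fields of an 1822 message (type, destination host address, data) can be sketched as below. The field widths here are simplified for illustration; the real 1822 leader had additional fields and different bit layouts:

```python
import struct

def build_1822_message(msg_type: int, dest_addr: int, data: bytes) -> bytes:
    """Illustrative encoding: 1-byte message type, 1-byte packed host
    address, then the data field. Not the actual 1822 leader layout."""
    return struct.pack("!BB", msg_type, dest_addr) + data

def parse_1822_message(raw: bytes) -> tuple[int, int, bytes]:
    """Split an illustrative message back into (type, address, data)."""
    msg_type, dest_addr = struct.unpack("!BB", raw[:2])
    return msg_type, dest_addr, raw[2:]

msg = build_1822_message(0, 70, b"hello")
assert parse_1822_message(msg) == (0, 70, b"hello")
```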

Network Control Program

Unlike modern Internet datagrams, 1822 messages were transmitted reliably by the network, which informed the host computer when it lost a message; the contemporary IP is unreliable, whereas TCP is reliable. Nonetheless, the 1822 protocol proved inadequate for handling multiple connections among different applications residing in a host computer. This problem was addressed with the Network Control Program (NCP), which provided a standard method to establish reliable, flow-controlled, bidirectional communications links among different processes in different host computers. The NCP interface allowed application software to connect across the ARPANET by implementing higher-level communication protocols, an early example of the protocol layering concept later incorporated in the OSI model.

NCP was developed under the leadership of Stephen D. Crocker, then a graduate student at UCLA. Crocker created and led the Network Working Group (NWG) which was made up of a collection of graduate students at universities and research laboratories sponsored by ARPA to carry out the development of the ARPANET and the software for the host computers that supported applications. The various application protocols such as TELNET for remote time-sharing access, File Transfer Protocol (FTP) and rudimentary electronic mail protocols were developed and eventually ported to run over the TCP/IP protocol suite or replaced in the case of email by the Simple Mail Transfer Protocol.

TCP/IP

Steve Crocker formed a "Networking Working Group" in 1969 with Vint Cerf, who also joined an International Networking Working Group in 1972. These groups considered how to interconnect packet switching networks with different specifications, that is, internetworking. Stephen J. Lukasik directed DARPA to focus on internetworking research in the early 1970s. Research led by Bob Kahn at DARPA and Vint Cerf at Stanford University and later DARPA resulted in the formulation of the Transmission Control Program, which incorporated concepts from the French CYCLADES project directed by Louis Pouzin. Its specification was written by Cerf with Yogen Dalal and Carl Sunshine in December 1974 (RFC 675). The following year, testing began through concurrent implementations at Stanford, BBN and University College London. At first a monolithic design, the software was redesigned as a modular protocol stack in version 3 in 1978. Version 4 was installed in the ARPANET for production use in January 1983, replacing NCP. The development of the complete Internet protocol suite by 1989, as outlined in RFC 1122 and RFC 1123, and partnerships with the telecommunication and computer industry laid the foundation for the adoption of TCP/IP as a comprehensive protocol suite and the core component of the emerging Internet.

Network applications

NCP provided a standard set of network services that could be shared by several applications running on a single host computer. This led to the evolution of application protocols that operated, more or less, independently of the underlying network service, and permitted independent advances in the underlying protocols.

Telnet was developed in 1969 beginning with RFC 15, extended in RFC 855.

The original specification for the File Transfer Protocol was written by Abhay Bhushan and published as RFC 114 on 16 April 1971. By 1973, the File Transfer Protocol (FTP) specification had been defined (RFC 354) and implemented, enabling file transfers over the ARPANET.

In 1971, Ray Tomlinson, of BBN sent the first network e-mail (RFC 524, RFC 561). Within a few years, e-mail came to represent a very large part of the overall ARPANET traffic.

The Network Voice Protocol (NVP) specifications were defined in 1977 (RFC 741), and implemented. But, because of technical shortcomings, conference calls over the ARPANET never worked well; the contemporary Voice over Internet Protocol (packet voice) was decades away.

Password protection

The Purdy Polynomial hash algorithm was developed for the ARPANET to protect passwords in 1971 at the request of Larry Roberts, then head of ARPA's Information Processing Techniques Office. It computed a polynomial of degree 2^24 + 17 modulo the 64-bit prime p = 2^64 − 59. The algorithm was later used by Digital Equipment Corporation (DEC) to hash passwords in the VMS operating system and is still being used for this purpose.
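The scheme can be sketched as a sparse polynomial evaluated modulo the prime mentioned above. The coefficients below are placeholders, not the actual Purdy or VMS constants; only the modulus 2^64 − 59 and the leading exponent 2^24 + 17 come from the text:

```python
P = 2**64 - 59    # the 64-bit prime modulus from the text
N0 = 2**24 + 17   # degree of the leading term

# Placeholder low-degree coefficients -- the real constants differ.
COEFFS = [3141592653, 2718281829, 1414213562, 1732050808, 2236067977]

def purdy_style_hash(x: int) -> int:
    """Evaluate x^N0 + c0*x^4 + c1*x^3 + c2*x^2 + c3*x + c4 (mod P)."""
    tail = 0
    for c in COEFFS:
        tail = (tail * x + c) % P      # Horner's rule for the tail
    return (pow(x, N0, P) + tail) % P  # fast modular exponentiation
```

The high-degree leading term is what makes inverting the hash expensive while keeping evaluation cheap via modular exponentiation, which is the essential idea behind Purdy's construction.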

Rules and etiquette

Because of its government funding, certain forms of traffic were discouraged or prohibited.

Leonard Kleinrock claims to have committed the first illegal act on the Internet, having sent a request for return of his electric razor after a meeting in England in 1973. At the time, use of the ARPANET for personal reasons was unlawful.

In 1978, against the rules of the network, Gary Thuerk of Digital Equipment Corporation (DEC) sent out the first mass email to approximately 400 potential clients via the ARPANET. He claims that this resulted in $13 million worth of sales in DEC products, and highlighted the potential of email marketing.

A 1982 handbook on computing at MIT's AI Lab stated regarding network etiquette:

It is considered illegal to use the ARPANet for anything which is not in direct support of Government business ... personal messages to other ARPANet subscribers (for example, to arrange a get-together or check and say a friendly hello) are generally not considered harmful ... Sending electronic mail over the ARPANet for commercial profit or political purposes is both anti-social and illegal. By sending such messages, you can offend many people, and it is possible to get MIT in serious trouble with the Government agencies which manage the ARPANet.

In popular culture

  • Computer Networks: The Heralds of Resource Sharing, a 30-minute documentary film featuring Fernando J. Corbató, J. C. R. Licklider, Lawrence G. Roberts, Robert Kahn, Frank Heart, William R. Sutherland, Richard W. Watson, John R. Pasta, Donald W. Davies, and economist George W. Mitchell.
  • "Scenario", an episode of the U.S. television sitcom Benson (season 6, episode 20—dated February 1985), was the first incidence of a popular TV show directly referencing the Internet or its progenitors. The show includes a scene in which the ARPANET is accessed.
  • There is an electronic music artist known as "Arpanet", Gerald Donald, one of the members of Drexciya. The artist's 2002 album Wireless Internet features commentary on the expansion of the internet via wireless communication, with songs such as NTT DoCoMo, dedicated to the mobile communications giant based in Japan.
  • Thomas Pynchon mentions the ARPANET in his 2009 novel Inherent Vice, which is set in Los Angeles in 1970, and in his 2013 novel Bleeding Edge.
  • The 1993 television series The X-Files featured the ARPANET in a season 5 episode, titled "Unusual Suspects". John Fitzgerald Byers offers to help Susan Modeski (known as Holly ... "just like the sugar") by hacking into the ARPANET to obtain sensitive information.
  • In the spy-drama television series The Americans, a Russian scientist defector offers access to ARPANET to the Russians in a plea to not be repatriated (Season 2 Episode 5 "The Deal"). Episode 7 of Season 2 is named 'ARPANET' and features Russian infiltration to bug the network.
  • In the television series Person of Interest, main character Harold Finch hacked the ARPANET in 1980 using a homemade computer during his first efforts to build a prototype of the Machine. This corresponds with the real-life virus that occurred in October of that year and temporarily halted ARPANET functions. The ARPANET hack was first discussed in the episode "2πR", where a computer science teacher called it the most famous hack in history and one that was never solved. Finch later mentioned it to person of interest Caleb Phipps, and his role was first indicated when he revealed that it had been done by "a kid with a homemade computer", a detail which Phipps, who had researched the hack, had never heard before.
  • In the third season of the television series Halt and Catch Fire, the character Joe MacMillan explores the potential commercialization of the ARPANET.

Lunar water

From Wikipedia, the free encyclopedia
 
Diffuse reflection spectra of lunar regolith samples extracted at depths of 118 and 184 cm by the 1976 Soviet probe Luna 24, showing minima near 3, 5, and 6 µm, the valence-vibration bands for water molecules.
 
These images show a very young lunar crater on the far side, as imaged by the Moon Mineralogy Mapper aboard Chandrayaan-1
 
The image shows the distribution of surface ice at the Moon's south pole (left) and north pole (right) as viewed by NASA's Moon Mineralogy Mapper (M3) spectrometer onboard India's Chandrayaan-1 orbiter

Lunar water is water that is present on the Moon. Diffuse water molecules can persist at the Moon's sunlit surface, as discovered by NASA's SOFIA observatory in 2020. Water vapor is gradually decomposed by sunlight, with the hydrogen and oxygen lost to outer space. Scientists have found water ice in the cold, permanently shadowed craters at the Moon's poles. Water molecules are also present in the extremely thin lunar atmosphere.

Water (H2O) and the chemically related hydroxyl group (-OH) exist in forms chemically bound as hydrates and hydroxides to lunar minerals (rather than as free water), and evidence strongly suggests that this is the case, at low concentrations, for much of the Moon's surface. Adsorbed water is calculated to exist in surface material at trace concentrations of 10 to 1000 parts per million. Inconclusive evidence of free water ice at the lunar poles accumulated during the second half of the 20th century from a variety of observations suggesting the presence of bound hydrogen.
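For a sense of scale, those trace concentrations translate into very small absolute amounts of water. A minimal sketch of the unit conversion (the 10–1000 ppm range is from the text; the regolith bulk density of ~1.5 t/m³ is an assumed illustrative value, not a figure from the article):

```python
# Convert trace water concentrations (parts per million by mass) into
# absolute water mass per tonne and per cubic metre of lunar regolith.
# The 10-1000 ppm range is from the text; the density is an assumption.

REGOLITH_DENSITY_T_PER_M3 = 1.5  # assumed bulk density, tonnes per m^3

def water_g_per_tonne(ppm: float) -> float:
    """1 ppm by mass is 1 g of water per metric tonne (10^6 g) of regolith."""
    return ppm  # parts per 10^6 of a 10^6 g tonne maps 1:1 to grams

def water_g_per_m3(ppm: float) -> float:
    """Grams of water per cubic metre, given the assumed bulk density."""
    return water_g_per_tonne(ppm) * REGOLITH_DENSITY_T_PER_M3

print(water_g_per_m3(10), water_g_per_m3(1000))  # 15.0 1500.0
```

So even at the top of the quoted range, a cubic metre of regolith would hold only about 1.5 kg of water.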

On 18 August 1976, the Soviet Luna 24 probe landed at Mare Crisium, took samples of the lunar regolith from depths of 118, 143, and 184 cm, and returned them to Earth. In February 1978, laboratory analysis showed that these samples contained 0.1% water by mass. Spectral measurements showed minima near 3, 5, and 6 µm, the distinctive valence-vibration bands for water molecules, with intensities two to three times larger than the noise level.

On 24 September 2009, it was reported that NASA's Moon Mineralogy Mapper (M3) spectrometer onboard India's Chandrayaan-1 probe had detected absorption features near 2.8–3.0 μm on the surface of the Moon. For silicate bodies, such features are typically attributed to hydroxyl- and/or water-bearing materials. Earlier, on 14 November 2008, India had released the Moon Impact Probe from the Chandrayaan-1 orbiter to impact the Shackleton crater, which helped confirm the presence of water ice. In August 2018, NASA confirmed that M3 data showed water ice is present on the surface at the Moon's poles. Water on the sunlit surface of the Moon was confirmed by NASA on October 26, 2020.

Water may have been delivered to the Moon over geological timescales by the regular bombardment of water-bearing comets, asteroids, and meteoroids or continuously produced in situ by the hydrogen ions (protons) of the solar wind impacting oxygen-bearing minerals.

The search for the presence of lunar water has attracted considerable attention and motivated several recent lunar missions, largely because of water's usefulness in rendering long-term lunar habitation feasible.

History of observations

20th century

Apollo Program

The possibility of ice in the floors of polar lunar craters was first suggested in 1961 by Caltech researchers Kenneth Watson, Bruce C. Murray, and Harrison Brown. Although trace amounts of water were found in lunar rock samples collected by Apollo astronauts, this was assumed to be a result of contamination, and the majority of the lunar surface was generally assumed to be completely dry. However, a 2008 study of lunar rock samples revealed evidence of water molecules trapped in volcanic glass beads.

The first direct evidence of water vapor near the Moon was obtained by the Apollo 14 ALSEP Suprathermal Ion Detector Experiment, SIDE, on March 7, 1971. A series of bursts of water vapor ions were observed by the instrument mass spectrometer at the lunar surface near the Apollo 14 landing site.

Luna 24

In February 1978, Soviet scientists M. Akhmanova, B. Dement'ev, and M. Markov of the Vernadsky Institute of Geochemistry and Analytical Chemistry published a paper claiming a fairly definitive detection of water. Their study showed that the samples returned to Earth by the 1976 Soviet probe Luna 24 contained about 0.1% water by mass, as seen in infrared absorption spectroscopy (at about 3 μm wavelength), at a detection level about 10 times above the threshold.

Clementine
Composite image of the Moon's south polar region, captured by NASA's Clementine probe over two lunar days. Permanently shadowed areas could harbour water ice.

Proposed evidence of water ice on the Moon came in 1994 from the United States military's Clementine probe. In an investigation known as the 'bistatic radar experiment', Clementine used its transmitter to beam radio waves into the dark regions of the south pole of the Moon. Echoes of these waves were detected by the large dish antennas of the Deep Space Network on Earth. The magnitude and polarisation of these echoes were consistent with an icy rather than rocky surface, but the results were inconclusive, and their significance has been questioned. Earth-based radar measurements were used to identify the areas that are in permanent shadow and hence have the potential to harbour lunar ice: estimates of the total extent of shadowed areas poleward of 87.5 degrees latitude are 1,030 and 2,550 square kilometres (400 and 980 sq mi) for the north and south poles, respectively. Subsequent computer simulations encompassing additional terrain suggested that an area of up to 14,000 square kilometres (5,400 sq mi) might be in permanent shadow.

Lunar Prospector

The Lunar Prospector probe, launched in 1998, employed a neutron spectrometer to measure the amount of hydrogen in the lunar regolith near the polar regions. It was able to determine hydrogen abundance and location to within 50 parts per million and detected enhanced hydrogen concentrations at the lunar north and south poles. These were interpreted as indicating significant amounts of water ice trapped in permanently shadowed craters, but could also be due to the presence of the hydroxyl radical (OH) chemically bound to minerals. Based on data from Clementine and Lunar Prospector, NASA scientists have estimated that, if surface water ice is present, the total quantity could be of the order of 1–3 cubic kilometres (0.24–0.72 cu mi). In July 1999, at the end of its mission, the Lunar Prospector probe was deliberately crashed into Shoemaker crater, near the Moon's south pole, in the hope that detectable quantities of water would be liberated. However, spectroscopic observations from ground-based telescopes did not reveal the spectral signature of water.
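The 1–3 cubic kilometre figure above can be turned into an approximate mass with one assumption: the density of water ice (~920 kg/m³, a standard textbook value, not a number from the article):

```python
# Back-of-envelope mass of 1-3 km^3 of water ice, the volume range
# estimated from Clementine and Lunar Prospector data (per the text).
# The ice density is an assumed standard value.

ICE_DENSITY_KG_M3 = 920.0  # assumed density of water ice
M3_PER_KM3 = 1e9           # cubic metres per cubic kilometre

def ice_mass_tonnes(volume_km3: float) -> float:
    """Mass in metric tonnes of a given volume of ice."""
    return volume_km3 * M3_PER_KM3 * ICE_DENSITY_KG_M3 / 1000.0

print(f"{ice_mass_tonnes(1):.2e} to {ice_mass_tonnes(3):.2e} tonnes")
```

That works out to roughly a billion to a few billion tonnes, which is the same order of magnitude as the 600 million tonne Mini-SAR estimate discussed later in this article.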

Cassini–Huygens

Further suspicions about the existence of water on the Moon were generated by inconclusive data produced by the Cassini–Huygens mission, which passed the Moon in 1999.

21st century

Deep Impact

In 2005, observations of the Moon by the Deep Impact spacecraft produced inconclusive spectroscopic data suggestive of water on the Moon. In 2006, observations with the Arecibo planetary radar showed that some of the near-polar Clementine radar returns, previously claimed to be indicative of ice, might instead be associated with rocks ejected from young craters. If true, this would indicate that the neutron results from Lunar Prospector were primarily from hydrogen in forms other than ice, such as trapped hydrogen molecules or organics. Nevertheless, the interpretation of the Arecibo data does not exclude the possibility of water ice in permanently shadowed craters. In June 2009, NASA's Deep Impact spacecraft, by then redesignated EPOXI, made further confirmatory bound-hydrogen measurements during another lunar flyby.

Kaguya

As part of its lunar mapping programme, Japan's Kaguya probe, launched in September 2007 for a 19-month mission, carried out gamma-ray spectrometry observations from orbit to measure the abundances of various elements on the Moon's surface. Its high-resolution imaging sensors failed to detect any signs of water ice in permanently shaded craters around the south pole of the Moon, and it ended its mission by crashing into the lunar surface so that the content of its ejecta plume could be studied.

Chang'e 1

The People's Republic of China's Chang'e 1 orbiter, launched in October 2007, took the first detailed photographs of some polar areas where water ice is likely to be found.

Chandrayaan-1
Direct evidence of lunar water in the Moon's atmosphere, from the output profile of Chandrayaan-1's Chandra's Altitudinal Composition Explorer (CHACE)
 
Image of the Moon taken by the Moon Mineralogy Mapper. Blue shows the spectral signature of hydroxide, green shows the brightness of the surface as measured by reflected infrared radiation from the Sun and red shows a mineral called pyroxene.

India's ISRO spacecraft Chandrayaan-1 released the Moon Impact Probe (MIP), which impacted Shackleton Crater, near the lunar south pole, at 20:31 on 14 November 2008, releasing subsurface debris that was analysed for the presence of water ice. During its 25-minute descent, the impact probe's Chandra's Altitudinal Composition Explorer (CHACE) recorded evidence of water in 650 mass spectra gathered in the thin atmosphere above the Moon's surface, as well as hydroxyl absorption lines in reflected sunlight.

On September 25, 2009, NASA declared that data from its M3 instrument confirmed the existence of hydrogen over large areas of the Moon's surface, albeit in low concentrations and in the form of the hydroxyl group (·OH) chemically bound to soil. This supports earlier evidence from spectrometers aboard the Deep Impact and Cassini probes. On the Moon, the feature is seen as a widely distributed absorption that appears strongest at cooler high latitudes and at several fresh feldspathic craters. The general lack of correlation of this feature in sunlit M3 data with neutron spectrometer hydrogen abundance data suggests that the formation and retention of OH and H2O is an ongoing surficial process. OH/H2O production processes may feed polar cold traps and make the lunar regolith a candidate source of volatiles for human exploration.

Although the M3 results are consistent with recent findings of other NASA instruments onboard Chandrayaan-1, the discovered water molecules in the Moon's polar regions are not consistent with the presence of thick deposits of nearly pure water ice within a few meters of the lunar surface, though they do not rule out the presence of small (<~10 cm (3.9 in)), discrete pieces of ice mixed in with the regolith. Additional analysis with M3, published in 2018, provided more direct evidence of water ice near the surface within 20° latitude of both poles. In addition to observing reflected light from the surface, scientists used M3's near-infrared absorption capabilities in the permanently shadowed areas of the polar regions to find absorption spectra consistent with ice. At the north pole region, the water ice is scattered in patches, while it is more concentrated in a single body around the south pole. Because these polar regions never reach high temperatures (greater than 373 kelvin), it was postulated that the poles act as cold traps where vaporized water collects on the Moon.

In March 2010, it was reported that the Mini-SAR on board Chandrayaan-1 had discovered more than 40 permanently darkened craters near the Moon's north pole that are hypothesized to contain an estimated 600 million metric tonnes of water-ice. The radar's high CPR is not uniquely diagnostic of either roughness or ice; the science team must take into account the environment of the occurrences of high CPR signal to interpret its cause. The ice must be relatively pure and at least a couple of meters thick to give this signature. The estimated amount of water ice potentially present is comparable to the quantity estimated from the previous mission of Lunar Prospector's neutron data.

Lunar Reconnaissance Orbiter and Lunar Crater Observation and Sensing Satellite

On October 9, 2009, the Centaur upper stage of the LCROSS mission's Atlas V carrier rocket was directed to impact Cabeus crater at 11:31 UTC, followed shortly by NASA's Lunar Crater Observation and Sensing Satellite (LCROSS) spacecraft, which flew through the ejecta plume. LCROSS detected a significant amount of the hydroxyl group in the material thrown up from the south polar crater by the impactor; this may be attributed to water-bearing materials, including what appears to be "near pure crystalline water-ice" mixed into the regolith. What was actually detected was the chemical group hydroxyl (·OH), which is suspected to be from water but could also be from hydrates, which are inorganic salts containing chemically bound water molecules. The nature, concentration, and distribution of this material requires further analysis; chief mission scientist Anthony Colaprete has stated that the ejecta appears to include a range of fine-grained particulates of near pure crystalline water ice. A later definitive analysis found the concentration of water to be "5.6 ± 2.9% by mass".

The Mini-RF instrument on board the Lunar Reconnaissance Orbiter (LRO) observed the plume of debris from the impact of the LCROSS orbiter, and it was concluded that the water ice must be in the form of small (< ~10 cm), discrete pieces of ice distributed throughout the regolith, or as thin coating on ice grains. This, coupled with monostatic radar observations, suggest that the water ice present in the permanently shadowed regions of lunar polar craters is unlikely to be present in the form of thick, pure ice deposits.

The data acquired by the Lunar Exploration Neutron Detector (LEND) instrument onboard LRO show several regions where the epithermal neutron flux from the surface is suppressed, which is indicative of enhanced hydrogen content. Further analysis of LEND data suggests that water content in the polar regions is not directly determined by the illumination conditions of the surface, as illuminated and shadowed regions do not manifest any significant difference in the estimated water content. According to the observations by this instrument alone, "the permanent low surface temperature of the cold traps is not a necessary and sufficient condition for enhancement of water content in the regolith."

LRO laser altimeter's examination of the Shackleton crater at the lunar south pole suggests up to 22% of the surface of that crater is covered in ice.

Melt inclusions in Apollo 17 samples

In May 2011, Erik Hauri et al. reported 615–1410 ppm water in melt inclusions in lunar sample 74220, the famous high-titanium "orange glass soil" of volcanic origin collected during the Apollo 17 mission in 1972. The inclusions were formed during explosive eruptions on the Moon approximately 3.7 billion years ago.

This concentration is comparable with that of magma in Earth's upper mantle. While of considerable selenological interest, this announcement affords little comfort to would-be lunar colonists. The sample originated many kilometers below the surface, and the inclusions are so difficult to access that it took 39 years to detect them with a state-of-the-art ion microprobe instrument.

Stratospheric Observatory for Infrared Astronomy

In October 2020, astronomers reported the detection of molecular water on the sunlit surface of the Moon by several independent scientific teams, including the Stratospheric Observatory for Infrared Astronomy (SOFIA). The estimated abundance is about 100 to 400 ppm, distributed over a small latitude range, likely a result of local geology rather than a global phenomenon. It was suggested that the detected water is stored within glasses or in voids between grains sheltered from the harsh lunar environment, allowing it to remain on the lunar surface. Using data from the Lunar Reconnaissance Orbiter, it was shown that besides the large, permanently shadowed regions in the Moon's polar regions, there are many unmapped cold traps, substantially augmenting the areas where ice may accumulate. Approximately 10–20% of the permanent cold-trap area for water is contained in "micro cold traps" found in shadows at scales from 1 km down to 1 cm, for a total area of ~40,000 km2, about 60% of which is in the south; a majority of cold traps for water ice are found at latitudes >80° because of permanent shadows.

On October 26, 2020, in a paper published in Nature Astronomy, a team of scientists using SOFIA, an infrared telescope mounted inside a 747 jumbo jet, reported observations showing unambiguous evidence of water on parts of the Moon where the sun shines. "This discovery reveals that water might be distributed across the lunar surface and not limited to the cold shadowed places near the lunar poles," said Paul Hertz, the director of NASA's astrophysics division.

PRIME-1

A dedicated on-site experiment by NASA, dubbed PRIME-1, is slated to land on the Moon in December 2022 near Shackleton Crater at the lunar south pole. The mission will drill for water ice.

Lunar Trailblazer

Slated to launch as a ride-along mission in 2025, the Lunar Trailblazer satellite is part of NASA's Small Innovative Missions for Planetary Exploration (SIMPLEx) program. The satellite carries two instruments—a high-resolution spectrometer, which will detect and map different forms of water, and a thermal mapper. The mission's primary objectives are to characterize the form of lunar water, how much is present and where; determine how lunar volatiles change and move over time; measure how much and what form of water exists in permanently shadowed regions of the Moon; and to assess how differences in the reflectivity and temperature of lunar surfaces affect the concentration of lunar water.

Possible water cycle

Production

Lunar water has two potential origins: water-bearing comets (and other bodies) striking the Moon, and in situ production. It has been theorized that the latter may occur when hydrogen ions (protons) in the solar wind chemically combine with the oxygen atoms present in lunar minerals (oxides, silicates, etc.) to produce small amounts of water trapped in the minerals' crystal lattices or as hydroxyl groups, potential water precursors. (This mineral-bound or mineral-surface water must not be confused with water ice.)

The hydroxyl surface groups (X–OH) formed by the reaction of protons (H+) with oxygen atoms accessible at the oxide surface (X=O) could be further converted into water molecules (H2O) adsorbed onto the oxide mineral's surface. The mass balance of a chemical rearrangement supposed at the oxide surface could be schematically written as follows:

2 X–OH → X=O + X + H2O

or,

2 X–OH → X–O–X + H2O

where "X" represents the oxide surface.

The formation of one water molecule requires the presence of two adjacent hydroxyl groups, or a cascade of successive reactions of one oxygen atom with two protons. This could constitute a limiting factor and decrease the probability of water production if the proton density per surface unit is too low.
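The two schematic rearrangements above can be checked mechanically for atom balance. A small sketch that counts X, O, and H atoms on each side, treating "X" as an opaque surface site:

```python
# Verify atom balance for the two schematic surface reactions in the text:
#   2 X-OH -> X=O + X + H2O
#   2 X-OH -> X-O-X + H2O
# "X" stands for an arbitrary oxide-surface site.

from collections import Counter

def count(species_list):
    """Sum atom counts over a list of species given as Counters."""
    total = Counter()
    for species in species_list:
        total.update(species)
    return total

X_OH   = Counter({"X": 1, "O": 1, "H": 1})  # surface hydroxyl X-OH
X_eq_O = Counter({"X": 1, "O": 1})          # X=O surface oxygen
X      = Counter({"X": 1})                  # bare surface site
X_O_X  = Counter({"X": 2, "O": 1})          # bridging X-O-X
H2O    = Counter({"H": 2, "O": 1})          # released water molecule

lhs = count([X_OH, X_OH])                   # 2 X-OH
assert lhs == count([X_eq_O, X, H2O])       # first rearrangement balances
assert lhs == count([X_O_X, H2O])           # second rearrangement balances
print("both rearrangements conserve X, O, and H")
```

Both bookkeeping checks pass, confirming that each written rearrangement conserves surface sites, oxygen, and hydrogen.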

Trapping

Solar radiation would normally strip any free water or water ice from the lunar surface, splitting it into its constituent elements, hydrogen and oxygen, which then escape to space. However, because of the very slight axial tilt of the Moon's spin axis to the ecliptic plane (1.5°), some deep craters near the poles never receive any sunlight and are permanently shadowed (see, for example, Shackleton crater and Whipple crater). The temperature in these regions never rises above about 100 K (about −170 °C), and any water that eventually ended up in these craters could remain frozen and stable for extremely long periods of time, perhaps billions of years, depending on the stability of the orientation of the Moon's axis.
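The claimed stability over billions of years can be made plausible with a simple Hertz–Knudsen free-sublimation estimate. In the sketch below, the vapor pressure of water ice at 100 K is an assumed illustrative value (real extrapolations at such low temperatures are highly uncertain), and the ice density is a standard assumed value; the point is only that the sublimation flux becomes astronomically small:

```python
# Hertz-Knudsen free-sublimation flux: J = P / sqrt(2 * pi * m * k_B * T)
# ASSUMPTIONS (illustrative, not from the text):
#   - vapor pressure of water ice at 100 K taken as ~1e-17 Pa (very uncertain)
#   - ice density of 920 kg/m^3
import math

K_B = 1.380649e-23                  # Boltzmann constant, J/K
M_MOLECULE = 18.015 * 1.66054e-27   # mass of one H2O molecule, kg
P_VAP = 1e-17                       # Pa, assumed vapor pressure at 100 K
T = 100.0                           # K, cold-trap temperature from the text
RHO_ICE = 920.0                     # kg/m^3, assumed ice density
SECONDS_PER_YEAR = 3.156e7

flux = P_VAP / math.sqrt(2 * math.pi * M_MOLECULE * K_B * T)  # molecules/m^2/s
recession_m_per_year = flux * M_MOLECULE / RHO_ICE * SECONDS_PER_YEAR
years_per_metre = 1.0 / recession_m_per_year

print(f"~{years_per_metre:.0e} years to sublimate a 1 m ice layer")
```

Under these assumptions the recession rate is far below a metre per billion years, consistent with ice surviving in cold traps over geological time.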

While the ice deposits may be thick, they are most likely mixed with the regolith, possibly in a layered formation.

Transport

Although free water cannot persist in illuminated regions of the Moon, any such water produced there by the action of the solar wind on lunar minerals might, through a process of evaporation and condensation, migrate to permanently cold polar areas and accumulate there as ice, perhaps in addition to any ice brought by comet impacts.

The hypothetical mechanism of water transport / trapping (if any) remains unknown: indeed lunar surfaces directly exposed to the solar wind where water production occurs are too hot to allow trapping by water condensation (and solar radiation also continuously decomposes water), while no (or much less) water production is expected in the cold areas not directly exposed to the Sun. Given the expected short lifetime of water molecules in illuminated regions, a short transport distance would in principle increase the probability of trapping. In other words, water molecules produced close to a cold, dark polar crater should have the highest probability of surviving and being trapped.

To what extent, and at what spatial scale, direct proton exchange (protolysis) and proton surface diffusion directly occurring at the naked surface of oxyhydroxide minerals exposed to space vacuum (see surface diffusion and self-ionization of water) could also play a role in the mechanism of the water transfer towards the coldest point is presently unknown and remains a conjecture.

Liquid water

The temperature and pressure of the Moon's interior increase with depth

Between 4 and 3.5 billion years ago, the Moon could have had sufficient atmosphere and liquid water on its surface. Warm and pressurized regions in the Moon's interior might still contain liquid water.

Uses

The presence of large quantities of water on the Moon would be an important factor in rendering lunar habitation cost-effective since transporting water (or hydrogen and oxygen) from Earth would be prohibitively expensive. If future investigations find the quantities to be particularly large, water ice could be mined to provide liquid water for drinking and plant propagation, and the water could also be split into hydrogen and oxygen by solar panel-equipped electric power stations or a nuclear generator, providing breathable oxygen as well as the components of rocket fuel. The hydrogen component of the water ice could also be used to draw out the oxides in the lunar soil and harvest even more oxygen.
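The split into breathable oxygen and rocket-fuel components follows from the fixed stoichiometry of water electrolysis, 2 H2O → 2 H2 + O2. A minimal sketch using standard molar masses (the per-tonne figures are plain arithmetic, not mission data):

```python
# Mass yield of hydrogen and oxygen from electrolysis of water:
#   2 H2O -> 2 H2 + O2
# The per-kilogram split is fixed by stoichiometry; molar masses are
# standard values.

M_H = 1.008     # g/mol, hydrogen
M_O = 15.999    # g/mol, oxygen
M_H2O = 2 * M_H + M_O

def electrolysis_yield(water_kg: float):
    """Return (hydrogen_kg, oxygen_kg) produced from a mass of water."""
    hydrogen = water_kg * (2 * M_H) / M_H2O
    oxygen = water_kg * M_O / M_H2O
    return hydrogen, oxygen

h2, o2 = electrolysis_yield(1000.0)  # one tonne of water
print(f"{h2:.0f} kg H2, {o2:.0f} kg O2")  # roughly 112 kg H2, 888 kg O2
```

Roughly 89% of the water mass becomes oxygen, which is convenient for life support, since oxygen is also the heavier component of hydrogen/oxygen rocket propellant.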

Analysis of lunar ice would also provide scientific information about the impact history of the Moon and the abundance of comets and asteroids in the early Inner Solar System.

Ownership

The hypothetical discovery of usable quantities of water on the Moon may raise legal questions about who owns the water and who has the right to exploit it. The United Nations Outer Space Treaty does not prevent the exploitation of lunar resources, but it does prevent the appropriation of the Moon by individual nations and is generally interpreted as barring countries from claiming ownership of lunar resources. However, most legal experts agree that the ultimate test of the question will arise through precedents of national or private activity.

The Moon Treaty specifically stipulates that exploitation of lunar resources is to be governed by an "international regime", but that treaty has only been ratified by a few nations, and primarily those with no independent spaceflight capabilities.

Luxembourg and the US have granted their citizens the right to mine and own space resources, including the resources of the Moon. As of the last year of the Trump presidency, the US executive explicitly opposed the Moon Treaty.

Tribute

On 13 November 2009, the discovery of water on the Moon was celebrated with a Google Doodle.

Deep sea mining

From Wikipedia, the free encyclopedia


Deep sea mining is a growing subfield of experimental seabed mining that involves the retrieval of minerals and deposits from the ocean floor at depths of 200 meters or greater. As of 2021, the majority of marine mining efforts were limited to shallow coastal waters, where sand, tin, and diamonds are more readily accessible. Three types of deep sea mining have generated great interest: polymetallic nodule mining, polymetallic sulphide mining, and the mining of cobalt-rich ferromanganese crusts. The majority of proposed deep sea mining sites are near polymetallic nodules or active and extinct hydrothermal vents at 1,400 to 3,700 metres (4,600 to 12,100 ft) below the ocean's surface. The vents create globular or massive sulfide deposits, which contain valuable metals such as silver, gold, copper, manganese, cobalt, and zinc. The deposits are mined using either hydraulic pumps or bucket systems that take ore to the surface to be processed.

Marine minerals include sea-dredged and seabed minerals. Sea-dredged minerals are normally extracted by dredging operations within coastal zones, to maximum sea depths of about 200 m. Minerals normally extracted from these depths include sand, silt, and mud for construction purposes, mineral-rich sands such as ilmenite, and diamonds.

As with all mining operations, deep sea mining raises questions about its potential environmental impact, and there is a growing debate about whether it should be allowed at all. Environmental advocacy groups such as Greenpeace and the Deep Sea Mining Campaign have argued that seabed mining should not be permitted in most of the world's oceans because of the potential for damage to deep sea ecosystems and pollution by heavy-metal-laden plumes. Prominent environmental activists and state leaders have also called for moratoriums or total bans because of the potential for devastating environmental impacts. Some anti-seabed-mining campaigns have won the support of large industry, including some technology giants and large car companies, even though these same companies will be increasingly reliant on the metals that seabed minerals can provide. Some scientists argue that seabed mining should not go ahead because relatively little is known about the biodiversity of the deep ocean environment. Individual countries with significant deposits of seabed minerals within their large exclusive economic zones (EEZs) are making their own decisions about seabed mining, exploring ways of undertaking it without causing too much damage to the deep ocean environment, or deciding not to develop seabed mines.

As of 2021, there was no commercial mining of seabed minerals. However, the International Seabed Authority has granted numerous exploration licences to mining companies operating, for example, within the Clarion Clipperton Zone. There is potential for mining at a range of scales within the oceans, from small to very large. Mining of seabed minerals would involve a range of robotic mining machines, as well as surface ships and metal refineries at onshore locations. One vision for the post-fossil-fuel world relies on wind farms, solar energy, electric cars, and improved battery technologies: these use a high volume and wide range of metallic commodities, including 'green' or 'critical' metals, many of which are in relatively short supply. Seabed mining could provide a near-term source of many of these metals, though critics argue it only worsens the fundamental problems posed by extraction.

Mining sites

Deep sea mining is a relatively new mineral-retrieval process, still undergoing research, that takes place on the ocean floor. Ocean mining sites are usually around large areas of polymetallic nodules or active and extinct hydrothermal vents at about 3,000–6,500 meters below the ocean's surface. The vents create sulfide deposits, which contain precious metals such as silver, gold, copper, manganese, cobalt, and zinc. The deposits are mined using either hydraulic pumps or bucket systems that take ore to the surface to be processed.

Types of minerals

Seabed minerals are mostly located between 1 and 6 km beneath the ocean surface and comprise three main types:

  • Polymetallic or seabed massive sulfide deposits form in active oceanic tectonic settings such as island arcs, back-arcs, and mid-ocean ridge environments. These deposits are associated with hydrothermal activity and hydrothermal vents, mostly at sea depths of between 1 and 4 km. Polymetallic sulfide minerals are rich in copper, gold, lead, silver, and other metals. They are found within the Mid-Atlantic Ridge system, around Papua New Guinea, Solomon Islands, Vanuatu, and Tonga, and in other similar ocean environments around the world.
  • Polymetallic or manganese nodules are found between 4 and 6 km beneath the sea surface, largely within abyssal plain environments. Manganese and related hydroxides precipitate from ocean water or sediment pore water around a nucleus, which may be a shark's tooth or a quartz grain, forming potato-shaped nodules some 4–14 cm in diameter. They accrete very slowly, at rates of 1–15 mm per million years. Polymetallic/manganese nodules are rich in many elements including rare earths, cobalt, nickel, copper, molybdenum, lithium, and yttrium. The largest deposits of polymetallic nodules occur in the Pacific Ocean between Mexico and Hawaii in an area called the Clarion Clipperton Fracture Zone. The Cook Islands contain the world's fourth-largest polymetallic nodule deposit in an area called the South Penrhyn basin, close to the Manihiki Plateau.
  • Cobalt-rich crusts (CRCs) form on sediment-free rock surfaces around oceanic seamounts, ocean plateaus, and other elevated topographic features within the ocean. The deposits are found at depths of 600–7,000 m beneath sea level and form 'carpets' of polymetallic-rich layers about 30 cm thick at the surface of the elevated features. Crusts are rich in a range of metals including cobalt, tellurium, nickel, copper, platinum, zirconium, tungsten, and rare earth elements. They are found in many parts of all oceans, such as seamounts in the Atlantic and Indian Oceans, as well as around countries such as the Federated States of Micronesia, Marshall Islands, and Kiribati.

The deep sea contains many different resources available for extraction, including silver, gold, copper, manganese, cobalt, and zinc. These raw materials are found in various forms on the sea floor.

Example of a manganese nodule found on the sea floor
 
Minerals and related depths
  • Polymetallic (manganese) nodules: average depth 4,000–6,000 m; nickel, copper, cobalt, and manganese
  • Manganese crusts: average depth 800–2,400 m; mainly cobalt, with some vanadium, molybdenum, and platinum
  • Sulfide deposits: average depth 1,400–3,700 m; copper, lead, and zinc, with some gold and silver

Diamonds are also mined from the seabed by De Beers and others. Nautilus Minerals Inc. planned to mine offshore waters in Papua New Guinea but the project never got off the ground due to company financial troubles. Neptune Minerals holds tenements in Japan, Papua New Guinea, Solomon Islands, Vanuatu, Fiji, Tonga, and New Zealand and intends to explore and mine these areas at a later date.

Cobalt-rich ferromanganese formations are found at various depths between 400 and 7,000 meters below sea level. These formations are a type of manganese crust deposit. The rock substrates consist of layered iron and manganese (Fe-Mn oxyhydroxide) deposits that host the mineralization.

Cobalt-rich ferromanganese formations exist in two categories depending on the depositional environment: (1) hydrogenetic cobalt-rich ferromanganese crusts and (2) hydrothermal crusts and encrustations. Temperature, depth, and the source of the seawater are the variables that shape how the formations grow. Hydrothermal crusts precipitate quickly, near 1600–1800 mm per million years (mm/Ma), and grow in hydrothermal fluids at approximately 200 °C. Hydrogenetic crusts grow much more slowly, at 1–5 mm/Ma, but have higher concentrations of critical metals.
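The three-orders-of-magnitude gap between the two growth regimes can be made concrete by computing how long each would take to build the roughly 30 cm crust thickness mentioned earlier. The rates are from the text; constant growth is a simplifying assumption:

```python
# Time to accrete a 300 mm (30 cm) ferromanganese crust at the quoted
# growth rates, in millions of years (Ma). Constant growth is assumed.

THICKNESS_MM = 300  # ~30 cm crust thickness, from the text

def growth_time_ma(rate_mm_per_ma: float) -> float:
    """Millions of years needed to accrete the full crust thickness."""
    return THICKNESS_MM / rate_mm_per_ma

hydrothermal = growth_time_ma(1600)  # fast end of the hydrothermal range
hydrogenetic = growth_time_ma(1)     # slow end of the hydrogenetic range
print(f"hydrothermal: ~{hydrothermal:.2f} Ma, hydrogenetic: up to {hydrogenetic:.0f} Ma")
```

A hydrothermal crust could in principle reach that thickness in well under a million years, while a slow hydrogenetic crust would need on the order of hundreds of millions of years.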

Submarine seamount provinces, linked to hotspots and seafloor spreading, vary in depth along the ocean floor. These seamounts show a characteristic distribution that connects them to cobalt-rich ferromanganese formations. In the Western Pacific, a study conducted at depths of 1,500–3,500 m below sea level found that cobalt crusts are concentrated on seamount sections that slope at less than 20°. High-grade cobalt crust in the Western Pacific correlated with latitude and longitude, with a high-grade region within 150°E–140°W and 30°S–30°N.

Polymetallic sulphides are resources available for extraction from seafloor massive sulfide deposits, which form on and within the seafloor where mineralized water discharges from hydrothermal vents. The hot, mineral-rich water precipitates its dissolved metals when it is released from the vents and meets cold seawater. The stockwork zones beneath the chimney structures of hydrothermal vents can be highly mineralized.

Polymetallic nodules (manganese nodules) are found on abyssal plains in a range of sizes, some as large as 15 cm across. The Clarion–Clipperton Zone (CCZ) is a well-known area of occurrence. Nodules are recorded to have average growth rates of roughly 10–20 mm/Ma.
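
To put those growth rates in perspective, a minimal sketch (assuming steady radial growth, which simplifies how nodules actually accrete) estimates the age of a nodule at the large end of the size range:

```python
# Approximate nodule age from the 10-20 mm/Ma growth rates quoted above,
# assuming steady radial growth (real nodules grow intermittently).

def nodule_age_ma(diameter_mm: float, rate_mm_per_ma: float) -> float:
    """Approximate age in millions of years for a nodule grown radially."""
    return (diameter_mm / 2) / rate_mm_per_ma

d = 150  # a 15 cm nodule, the large end of the size range mentioned above
print(f"{nodule_age_ma(d, 20):.2f}-{nodule_age_ma(d, 10):.2f} Ma")  # 3.75-7.50 Ma
```

Even under this simplified model, a single large nodule represents millions of years of accretion, underlining why nodule fields are treated as non-renewable on any practical timescale.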

The Clarion–Clipperton Zone hosts the largest known untapped nickel resource: polymetallic (manganese) nodules sitting exposed on the seafloor. These nodules require no drilling or typical surface mining techniques to collect. Nickel, cobalt, copper and manganese make up nearly 100% of each nodule's metal content, so processing generates no toxic tailings. Polymetallic nodules in the Clarion–Clipperton Zone are currently being studied as a source of battery metals.

Deep sea mining efforts

Over the past decade, a new phase of deep sea mining has begun. Rising demand for precious metals in Japan, China, Korea and India has pushed these countries to search for new sources. Interest has recently shifted toward hydrothermal vents, rather than scattered nodules, as the source of metals. The transition toward an electricity-based information and transportation infrastructure currently under way in Western societies further pushes demand for these metals. The revived interest in mining phosphorus nodules from the seafloor stems from the importance of phosphorus-based artificial fertilizers for world food production: a growing world population pushes the need for artificial fertilizers or for greater incorporation of organic systems within agricultural infrastructure.

The world's first "large-scale" mining of hydrothermal vent mineral deposits was carried out by Japan in August–September 2017. The Japan Oil, Gas and Metals National Corporation (JOGMEC) carried out the operation using the research vessel Hakurei. The mining took place at the Izena hole/cauldron vent field within the hydrothermally active back-arc basin known as the Okinawa Trough, which contains 15 confirmed vent fields according to the InterRidge Vents Database.

A deep sea mining venture in Papua New Guinea, the Solwara 1 Project, was granted a mining permit to begin extracting a high-grade copper-gold resource from a weakly active hydrothermal vent. This controversial project generated an enormous backlash from the community and environmental activists. The Solwara 1 Project was located at 1,600 metres water depth in the Bismarck Sea, New Ireland Province. Using remotely operated underwater vehicle (ROV) technology developed by UK-based Soil Machine Dynamics, Nautilus Minerals Inc. was the first company of its kind to announce plans for full-scale undersea excavation of mineral deposits. However, a dispute with the government of Papua New Guinea delayed production and operations until early 2018. In September 2019, it was announced that the project had collapsed as Nautilus Minerals Inc. went into administration and its major creditors sought to recoup the millions of dollars they had sunk into the project. The Prime Minister of Papua New Guinea called the project a "total failure", sparking calls for a deep sea mining moratorium from his Pacific counterparts.

An additional site being explored as a potential deep sea mining area is the Clarion-Clipperton Fracture Zone (CCZ). The CCZ stretches over 4.5 million square kilometers of the northern Pacific Ocean between Hawaii and Mexico. Scattered across the abyssal plain are trillions of polymetallic nodules, potato-sized rocklike deposits containing minerals such as manganese, nickel, copper, zinc, cobalt, and others. Development of technologies to collect polymetallic nodules in the CCZ began in the 1970s, when oil, gas and mining majors including Shell, Rio Tinto (Kennecott) and Sumitomo conducted pilot test work, recovering over ten thousand tons of nodules. Polymetallic nodules are also abundant in the Central Indian Ocean Basin and the Peru Basin. Mining claims registered with the International Seabed Authority (ISA) are mostly located in the CCZ, most commonly in the manganese nodule province. The ISA has entered into 18 contracts with private companies and national governments to explore the suitability of polymetallic nodule mining in the CCZ.

In 2019, the government of the Cook Islands passed two legislative bills pertaining to deep sea mining in the country's EEZ. The Sea Bed Minerals (SBM) Act of 2019 was passed to "enable the effective and responsible management of the seabed minerals of the Cook Islands in a way that also...seeks to maximize the benefits of seabed minerals for present and future generations of Cook Islanders." The Sea Bed Minerals (Exploration) Regulations and the Sea Bed Minerals Amendment Act were passed by Parliament in 2020 and 2021, respectively. As much as 12 billion tons of polymetallic nodules are spread across the ocean floor in the Cook Islands' EEZ. The nodules found there contain cobalt, nickel, manganese, titanium, and rare earth elements.

On November 10, 2020, the Chinese submersible Fendouzhe reached the bottom of the Mariana Trench at 10,909 meters (35,790 feet). It did not surpass the record of American undersea explorer Victor Vescovo, who claimed 10,927 meters (35,853 feet) in May 2019. Ye Cong, the submersible's chief designer, said the seabed is abundant in resources and that a "treasure map" of the deep sea can be made.

Extraction methods

Recent technological advancements have given rise to the use of remotely operated vehicles (ROVs) to collect mineral samples from prospective mine sites. Using drills and other cutting tools, the ROVs obtain samples to be analyzed for precious materials. Once a site has been located, a mining ship or station is set up to mine the area.

There are two predominant forms of mineral extraction being considered for full-scale operations: the continuous-line bucket (CLB) system and the hydraulic suction system. The CLB system is the preferred method of nodule collection. It operates much like a conveyor belt, running from the sea floor to the surface of the ocean, where a ship or mining platform extracts the desired minerals and returns the tailings to the ocean. Hydraulic suction mining lowers a pipe to the seafloor that transfers nodules up to the mining ship; another pipe from the ship to the seafloor returns the tailings to the area of the mining site.

In recent years, the most promising mining areas have been the Central and Eastern Manus Basin around Papua New Guinea and the crater of Conical Seamount to the east. These locations have shown promising amounts of gold in the area's sulfide deposits (an average of 26 parts per million). The relatively shallow water depth of 1050 m, along with the close proximity of a gold processing plant makes for an excellent mining site.

The deep sea mining project value chain can be divided according to where value is actually added. During the prospecting, exploration and resource-assessment phases, value is added to intangible assets; in the extraction, processing and distribution phases, value increases with product processing. There is an intermediate phase, the pilot mining test, which can be considered an inevitable step in the shift from "resources" to "reserves" classification, where the actual value starts.

The exploration phase involves operations such as locating, seabed scanning and sampling, using technologies such as echo sounders, side-scan sonars, deep-towed photography, ROVs and AUVs. Resource valuation incorporates examination of the data in the context of potential mining feasibility.

The product-processing part of the value chain involves operations such as actual mining (extraction), vertical transport, storage, offloading, transport and metallurgical processing into final products. Unlike the exploration phase, value increases after each operation on the processed material, which is eventually delivered to the metal market. Logistics involves technologies analogous to those applied in land-based mines. The same holds for metallurgical processing, although the rich, polymetallic composition that distinguishes marine minerals from their land analogs requires special treatment of the deposit. Environmental monitoring and impact assessment cover the temporal and spatial discharges of the mining system, if they occur, sediment plumes, disturbance to the benthic environment and analysis of the regions affected by seafloor machines. This step involves examining disturbances near the seafloor as well as near the surface. Observations include baseline comparisons to support quantitative impact assessments, for ensuring the sustainability of the mining process.

Small scale mining of the deep sea floor is being developed off the coast of Papua New Guinea using robotic techniques, but the obstacles are formidable.

Environmental impacts

As with all mining operations, deep sea mining raises questions about potential environmental damages to the surrounding areas. Because deep sea mining is a relatively new field, the complete consequences of full-scale mining operations are unknown. However, experts are certain that removal of parts of the sea floor will result in disturbances to the benthic layer, increased toxicity of the water column, and sediment plumes from tailings. Removing parts of the sea floor disturbs the habitat of benthic organisms, possibly, depending on the type of mining and location, causing permanent disturbances. Aside from direct impact of mining the area, leakage, spills, and corrosion could alter the mining area's chemical makeup.

Among the impacts of deep sea mining, it is theorized that sediment plumes could have the greatest impact. Plumes are caused when the tailings from mining (usually fine particles) are dumped back into the ocean, creating a cloud of particles floating in the water. Two types of plumes occur: near-bottom plumes and surface plumes. Near-bottom plumes occur when the tailings are pumped back down to the mining site. The floating particles increase the turbidity, or cloudiness, of the water, clogging filter-feeding apparatuses used by benthic organisms. Surface plumes cause a more serious problem. Depending on the size of the particles and water currents the plumes could spread over vast areas. The plumes could impact zooplankton and light penetration, in turn affecting the food web of the area. Further research has been conducted by the Massachusetts Institute of Technology to investigate how these plumes travel through water and how their ecological impact could be mitigated. This research is used to contribute to the work of the International Seabed Authority, the body which is mandated to develop, implement and enforce rules for deep-sea mining activities within its area of responsibility, in gaining a full understanding of the environmental impacts.

Many opponents to deep sea mining efforts point to the threats of grave and irreversible damage it could cause to fragile deep sea ecosystems. For this reason, organizations Fauna and Flora International and World Wide Fund for Nature, broadcaster David Attenborough, and companies BMW, Google, Volvo Cars and Samsung have called for a global moratorium on deep sea mining.

Marine life

Research shows that polymetallic nodule fields are hotspots of abundance and diversity for a highly vulnerable abyssal fauna. Because deep sea mining is a relatively new field, the complete consequences of full-scale mining operations on this ecosystem are unknown. However, some researchers have said they believe that removal of parts of the sea floor will result in disturbances to the benthic layer, increased toxicity of the water column and sediment plumes from tailings. Removing parts of the sea floor could disturb the habitat of benthic organisms, with unknown long-term effects. Preliminary studies on seabed disturbances from mining-related activities have indicated that it takes decades for the seabed to recover from minor disturbances. Minerals targeted by seabed mining activities take millions of years to regenerate, if they do so at all. Aside from the direct impact of mining the area, some researchers and environmental activists have raised concerns about leakage, spills and corrosion that could alter the mining area’s chemical makeup.

Polymetallic nodule fields form some of the few areas of hard substrate on the pelagic red-clay bottom, attracting macrofauna. In 2013, researchers from the University of Hawaii at Manoa conducted a baseline study of benthic communities in the CCZ, assessing a 350-square-mile area with a remotely operated vehicle (ROV). They found that the area surveyed contained one of the most diverse megafaunal communities recorded on the abyssal plain. The megafauna (species greater than 0.78 inches) surveyed included glass sponges, anemones, eyeless fish, sea stars, psychropotes, amphipods, and isopods. Macrofauna (species greater than 0.5 mm) showed very high local species diversity, with 80–100 macrofaunal species per square meter. The highest species diversity was found living amongst the polymetallic nodules. In a follow-up survey, researchers identified over 1,000 species, 90% of them previously unknown and over 50% of them dependent on the polymetallic nodules for survival; all were identified in areas demarcated for potential seabed mining. Many scientists believe that seabed mining is poised to irreparably harm fragile abyssal plain habitats. Despite these potential impacts, research suggests that the loss of biomass involved in deep sea mining would be significantly smaller than the expected loss from land ore mining. It is estimated that continued land ore mining will lead to a loss of 568 megatons of biomass (approximately the mass of the entire human population), whereas deep sea mining is projected to cause a loss of 42 megatons. In addition, land ore mining is projected to cause the loss of 47 trillion megafauna organisms, whereas deep sea mining is expected to cause the loss of 3 trillion.
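
The biomass figures above can be compared directly. The small sketch below works through the arithmetic, using the article's own estimates (these are projections, not measurements):

```python
# Comparing the projected biomass losses quoted above (these figures are the
# article's estimates, not independent measurements).

land_loss_mt = 568       # megatons of biomass lost to continued land ore mining
sea_loss_mt = 42         # megatons projected for deep sea mining
land_megafauna = 47e12   # individual megafauna organisms, land ore mining
sea_megafauna = 3e12     # individual megafauna organisms, deep sea mining

biomass_ratio = land_loss_mt / sea_loss_mt
megafauna_ratio = land_megafauna / sea_megafauna
print(f"Biomass loss ratio (land/sea): {biomass_ratio:.1f}x")      # ~13.5x
print(f"Megafauna loss ratio (land/sea): {megafauna_ratio:.1f}x")  # ~15.7x
```

By these estimates, land ore mining's projected losses exceed deep sea mining's by roughly an order of magnitude on both measures, though the two habitats and their recovery rates differ in ways raw totals do not capture.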

A rare species called the scaly-foot snail, also known as the sea pangolin, has become the first species to be declared endangered because of the threat of deep sea mining.

Sediment Plumes

Sediment plumes, described above, may be deep sea mining's most significant impact. A study conducted in Portmán Bay (Murcia, Spain) revealed that sediment plumes carry concentrations of metals that can accumulate in the tissues of shellfish and persist for several hours after initial mining activity. Mine tailing deposits and resuspension plume sites produced the worst environmental conditions in their area, compared with sites just beyond the tailing deposits, leaving significant ecotoxicological impacts on fauna within a short period of time. The accumulation of toxic metals in an organism, known as bioaccumulation, works its way up the food web, causing detrimental health effects in larger organisms and ultimately humans.

Noise and Light Pollution

Deep sea mining efforts will increase ambient noise in normally quiet pelagic environments. Anthropogenic noise is known to affect deep sea fish species and marine mammals; impacts include behavioral changes, communication difficulties, and temporary or permanent hearing damage.

The areas where deep sea mining may take place are normally devoid of sunlight and anthropogenic light sources, and mining operations that employ floodlighting would drastically increase light levels. Previous studies show that deep sea shrimp found at hydrothermal vents suffered permanent retinal damage when exposed to floodlights from crewed submersibles. Behavioral effects could include altered vertical migration patterns and an impaired ability to communicate and detect prey. Each source of pollution contributes to the alteration of ecosystems beyond the point of immediate recovery.

Laws and regulations

The international law–based regulations on deep sea mining are contained in the United Nations Convention on the Law of the Sea (UNCLOS), negotiated from 1973 to 1982, which came into force in 1994. The convention set up the International Seabed Authority (ISA), which regulates nations' deep sea mining ventures outside each nation's Exclusive Economic Zone (a 200-nautical-mile (370 km) area surrounding coastal nations). The ISA requires nations interested in mining to explore two mining sites of equal value and turn one over to the ISA, along with a transfer of mining technology over a 10- to 20-year period. This seemed reasonable at the time because it was widely believed that nodule mining would be extremely profitable. However, these strict requirements led some industrialized countries to refuse to sign the initial treaty in 1982.

The United States abides by the Deep Seabed Hard Mineral Resources Act, which was originally written in 1980. This legislation is largely recognized as one of the main concerns the US has with ratifying UNCLOS.

Deep sea mining within the EEZs of nation states falls under the jurisdiction of national laws. Despite extensive exploration both within and outside of EEZs, only a few countries, notably New Zealand, have established legal and institutional frameworks for the future development of deep seabed mining.

Papua New Guinea was the first country to approve a permit for the exploration of minerals in the deep seabed. Solwara 1 was awarded its licence and environmental permits despite three independent reviews of the mine's environmental impact statement finding significant gaps and flaws in the underlying science.

The ISA has recently arranged a workshop in Australia where scientific experts, industry representatives, legal specialists and academics worked towards improving existing regulations and ensuring that development of seabed minerals does not cause serious and permanent damage to the marine environment.

A call for a moratorium on deep sea mining was adopted at the global biodiversity summit in 2021. Some argue that deep sea mining is needed to produce electric vehicles and batteries, but according to Jessica Battle, an expert on ocean policy and governance: "We can decarbonize through innovation, redesigning, reducing, reusing, and recycling."

Controversy

An article in the Harvard Environmental Law Review in April 2018 argued that "the 'new global gold rush' of deep sea mining shares many features with past resource scrambles – including a general disregard for environmental and social impacts, and the marginalisation of indigenous peoples and their rights". The Foreshore and Seabed Act (2004) ignited fierce indigenous opposition in New Zealand: its claiming of the seabed for the Crown, in order to open it up to mining, conflicted with Māori claims to their customary lands, and Māori protested the Act as a "sea grab." After an investigation by the UN Commission on Human Rights upheld charges of discrimination, the Act was repealed and replaced with the Marine and Coastal Area Bill (2011). However, conflicts between indigenous sovereignty and seabed mining continue. Organizations such as the Deep Sea Mining Campaign and the Alliance of Solwara Warriors, comprising 20 communities in the Bismarck and Solomon Seas, are seeking to ban seabed mining in Papua New Guinea, where the Solwara 1 project was set to occur, and across the Pacific. They argue primarily that decision-making about deep sea mining has not adequately addressed free, prior and informed consent from affected communities and has not adhered to the precautionary principle, a rule proposed by the 1982 UN World Charter for Nature that informs the ISA regulatory framework for mineral exploitation of the deep sea.

History

In the 1960s, the prospect of deep sea mining was raised by the publication of J. L. Mero's Mineral Resources of the Sea. The book claimed that nearly limitless supplies of cobalt, nickel and other metals could be found throughout the planet's oceans. Mero stated that these metals occurred in deposits of manganese nodules, which appear as lumps of compressed flowers on the seafloor at depths of about 5,000 m. Some nations, including France, Germany and the United States, sent out research vessels in search of nodule deposits. Initial estimates of deep sea mining's viability turned out to be greatly exaggerated. This overestimate, coupled with depressed metal prices, led to the near abandonment of nodule mining by 1982. From the 1960s to 1984, an estimated US$650 million had been spent on the venture, with little to no return.

Deep sea mining equipment vendors

  • https://deme-gsr.com
  • https://impossiblemining.com
  • https://allseas.com/activities/deep-seapolymetallicnodulecollection/
