Superconducting logic refers to a class of logic circuits or logic gates that use the unique properties of superconductors, including zero-resistance wires, ultrafast Josephson junction switches, and quantization of magnetic flux (fluxoid). As of 2023, superconducting computing is a form of cryogenic computing, as superconductive electronic circuits require cooling to cryogenic temperatures for operation, typically below 10 kelvin. Often superconducting computing is applied to quantum computing, with an important application known as superconducting quantum computing.
Superconducting digital logic circuits use single flux quanta (SFQ), also known as magnetic flux quanta,
to encode, process, and transport data. SFQ circuits are made up of
active Josephson junctions and passive elements such as inductors,
resistors, transformers, and transmission lines. Whereas voltages and
capacitors are important in semiconductor logic circuits such as CMOS, currents and inductors are most important in SFQ logic circuits. Power can be supplied by either direct current or alternating current, depending on the SFQ logic family.
Fundamental concepts
The primary advantage of superconducting computing is improved power efficiency over conventional CMOS
technology. Much of the power consumed, and heat dissipated, by
conventional processors comes from moving information between logic
elements rather than the actual logic operations. Because
superconductors have zero electrical resistance,
little energy is required to move bits within the processor. This is
expected to result in power consumption savings of a factor of 500 for
an exascale computer. For comparison, a 2014 estimate put the power consumption of a 1 exaFLOPS computer built in CMOS logic at some 500 megawatts of electrical power. Superconducting logic can be an attractive option for ultrafast CPUs,
where switching times are measured in picoseconds and operating
frequencies approach 770 GHz. However, since transferring information between the processor and the
outside world does still dissipate energy, superconducting computing was
seen as well-suited for computation-intensive tasks where the data
largely stays in the cryogenic environment, rather than big data applications where large amounts of information are streamed from outside the processor.
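The projected savings reduce to simple arithmetic on the figures above; a minimal sketch (whether the factor of 500 already includes cryogenic cooling overhead is not specified here):

```python
# Figures from the text: a projected ~500x power savings for an
# exascale superconducting computer, versus a 2014 estimate of
# ~500 MW for a 1 exaFLOPS CMOS machine.
cmos_power_mw = 500.0
savings_factor = 500.0

superconducting_power_mw = cmos_power_mw / savings_factor
print(superconducting_power_mw)  # 1.0 MW
```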
As superconducting logic supports standard digital machine
architectures and algorithms, the existing knowledge base for CMOS
computing will still be useful in constructing superconducting
computers. Given the reduced heat dissipation, it may enable
innovations such as three-dimensional stacking of components. However, because the circuits require inductors, it is harder to reduce their size. As of 2014, devices using niobium as the superconducting material operating at 4 K
were considered state-of-the-art. Important challenges for the field
were reliable cryogenic memory, as well as moving from research on
individual components to large-scale integration.
Josephson junction count is a measure of superconducting circuit or device complexity, similar to the transistor count used for semiconductor integrated circuits.
History
Superconducting computing research has been pursued by the U.S. National Security Agency since the mid-1950s. However, progress could not keep up with the increasing performance
of standard CMOS technology. As of 2016 there are no commercial
superconducting computers, although research and development continues.
Research in the mid-1950s to early 1960s focused on the cryotron invented by Dudley Allen Buck,
but the liquid-helium temperatures and the slow switching time between
superconducting and resistive states caused this research to be
abandoned. In 1962 Brian Josephson established the theory behind the Josephson effect,
and within a few years IBM had fabricated the first Josephson junction.
IBM invested heavily in this technology from the mid-1960s to 1983. By the mid-1970s IBM had constructed a superconducting quantum interference device using these junctions, mainly working with lead-based
junctions and later switching to lead/niobium junctions. In 1980 the
Josephson computer revolution was announced by IBM through the cover
page of the May issue of Scientific American. One reason cited to
justify such a large-scale investment was that Moore's law,
enunciated in 1965, was expected to slow down and reach a plateau
'soon'. Instead, Moore's law kept its validity, while the costs of
improving superconducting devices were borne essentially by IBM
alone, which, however big, could not compete with the worldwide
semiconductor industry and its nearly limitless resources. Thus, the
program was shut down in 1983 because the technology was not
considered competitive with standard semiconductor technology. Founded
by researchers with this IBM program, HYPRES developed and
commercialized superconductor integrated circuits from its commercial
superconductor foundry in Elmsford, New York. The Japanese Ministry of International Trade and Industry funded a superconducting research effort from 1981 to 1989 that produced the ETL-JC1, which was a 4-bit machine with 1,000 bits of RAM.
In 1983, Bell Labs created niobium/aluminum oxide Josephson junctions that were more reliable and easier to fabricate. In 1985, the rapid single flux quantum (RSFQ) logic scheme, which offered improved speed and energy efficiency, was developed by researchers at Moscow State University.
These advances led to the United States' Hybrid Technology
Multi-Threaded project, started in 1997, which sought to beat
conventional semiconductors to the petaflop computing scale. The
project was abandoned in 2000, however, and the first conventional
petaflop computer was constructed in 2008. After 2000, attention turned
to superconducting quantum computing. The 2011 introduction of reciprocal quantum logic by Quentin Herr of Northrop Grumman, as well as energy-efficient rapid single flux quantum by Hypres, were seen as major advances.
Rapid single flux quantum (RSFQ)
Rapid single flux quantum (RSFQ) superconducting logic was developed in the Soviet Union in the 1980s. Information is carried by the presence or absence of a single flux quantum (SFQ). The Josephson junctions are critically damped,
typically by addition of an appropriately sized shunt resistor, to make
them switch without a hysteresis. Clocking signals are provided to
logic gates by separately distributed SFQ voltage pulses.
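The critical-damping condition above can be sketched numerically with the Stewart–McCumber parameter βc = 2πIcR²C/Φ0, which should be near or below 1 for non-hysteretic switching; the junction parameters below are assumed, illustrative values, not figures from the text:

```python
import math

PHI_0 = 2.067833848e-15  # magnetic flux quantum, in webers

def critical_shunt_resistance(i_c, c):
    """Shunt resistance giving Stewart-McCumber parameter beta_c = 1
    (critical damping, i.e. non-hysteretic switching) for a junction
    with critical current i_c (A) and capacitance c (F), from
    beta_c = 2*pi*i_c*R**2*c / PHI_0."""
    return math.sqrt(PHI_0 / (2 * math.pi * i_c * c))

# Assumed, illustrative junction parameters: I_c = 0.1 mA, C = 0.2 pF
r_shunt = critical_shunt_resistance(1e-4, 2e-13)  # roughly 4 ohms
```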
Power is provided by bias currents distributed using resistors
that can consume more than 10 times as much static power as the
dynamic power used for computation. The simplicity of using resistors to
distribute currents can be an advantage in small circuits and RSFQ
continues to be used for many applications where energy efficiency is
not of critical importance.
RSFQ has been used to build specialized circuits for
high-throughput and numerically intensive applications, such as
communications receivers and digital signal processing.
Josephson junctions in RSFQ circuits are biased in parallel.
Therefore, the total bias current grows linearly with the Josephson
junction count. This currently presents the major limitation on the
integration scale of RSFQ circuits, which does not exceed a few tens of
thousands of Josephson junctions per circuit.
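Because the junctions are biased in parallel, total supply current scales linearly with junction count; a minimal sketch (the per-junction bias value is an assumption chosen for illustration):

```python
def total_bias_current(junction_count, bias_per_junction=1e-4):
    """RSFQ junctions are biased in parallel, so the supply current
    grows linearly with junction count. The default 0.1 mA
    per-junction bias is an assumed, illustrative figure."""
    return junction_count * bias_per_junction

# A few tens of thousands of junctions already imply amps of bias current:
print(total_bias_current(30_000))  # 3.0 A
```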
LR-RSFQ
Reducing
the resistor (R) used to distribute currents in traditional RSFQ
circuits and adding an inductor (L) in series can reduce the static
power dissipation and improve energy efficiency.
Low Voltage RSFQ (LV-RSFQ)
Reducing the bias voltage in traditional RSFQ circuits can reduce the static power dissipation and improve energy efficiency.
Energy-Efficient Single Flux Quantum Technology (ERSFQ/eSFQ)
Efficient
rapid single flux quantum (ERSFQ) logic was developed to eliminate the
static power losses of RSFQ by replacing bias resistors with sets of
inductors and current-limiting Josephson junctions.
Efficient single flux quantum (eSFQ) logic is also powered by
direct current, but differs from ERSFQ in the size of the bias current
limiting inductor and how the limiting Josephson junctions are
regulated.
Reciprocal Quantum Logic (RQL)
Reciprocal Quantum Logic (RQL) was developed to fix some of the problems of RSFQ logic. RQL uses reciprocal pairs of SFQ pulses to encode a logical '1'. Both power and clock are provided by multi-phase alternating current signals. RQL gates do not use resistors to distribute power and thus dissipate negligible static power.
Major RQL gates include: AndOr, AnotB, Set/Reset (with nondestructive readout), which together form a universal logic set and provide memory capabilities.
Adiabatic quantum flux parametron (AQFP)
Adiabatic quantum flux parametron (AQFP) logic was developed for energy-efficient
operation and is powered by alternating current.
On January 13, 2021, it was announced that a 2.5 GHz prototype
AQFP-based processor called MANA (Monolithic Adiabatic iNtegration
Architecture) had achieved an energy efficiency that was 80 times that
of traditional semiconductor processors, even accounting for the
cooling.
Superconducting quantum computing
Superconducting quantum computing is a promising implementation of quantum information technology that involves nanofabricated superconducting electrodes coupled through Josephson junctions. In a superconducting electrode, the phase and the charge are conjugate variables.
There exist three families of superconducting qubits, depending on
whether the charge, the phase, or neither of the two are good quantum
numbers. These are respectively termed charge qubits, flux qubits, and hybrid qubits.
A Connection Machine CM-2 (1987) and accompanying DataVault on display at the Mimms Museum of Technology and Art in Roswell, Georgia. The CM-2 used the same casing as the CM-1.
Danny Hillis and Sheryl Handler founded Thinking Machines Corporation (TMC) in Waltham, Massachusetts,
in 1983, moving in 1984 to Cambridge, MA. At TMC, Hillis assembled a
team to develop what would become the CM-1 Connection Machine, a design
for a massively parallel hypercube-based arrangement of thousands of microprocessors, springing from his PhD thesis work at MIT in Electrical Engineering and Computer Science (1985). The dissertation won the ACM Distinguished Dissertation prize in 1985, and was presented as a monograph that overviewed the philosophy,
architecture, and software for the first Connection Machine, including
information on its data routing between central processing unit (CPU) nodes, its memory handling, and the programming language Lisp applied in the parallel machine. Very early concepts contemplated just over a million processors, each connected in a 20-dimensional hypercube, which was later scaled down.
Thinking Machines CM-2 at the Computer History Museum in Mountain View, California. One of the face plates has been partly removed to show the circuit boards inside.
Each CM-1 microprocessor has its own 4 kilobits of random-access memory (RAM), and the hypercube-based
array of them was designed to perform the same operation on multiple
data points simultaneously, i.e., to execute tasks in single
instruction, multiple data (SIMD)
fashion. The CM-1, depending on the configuration, has as many as
65,536 individual processors, each extremely simple, processing one bit at a time. CM-1 and its successor CM-2 take the form of a cube 1.5 meters on a side, divided equally into eight smaller cubes. Each subcube contains 16 printed circuit boards and a main processor called a sequencer. Each circuit board contains 32 chips. Each chip contains a router, 16 processors, and 16 RAMs. The CM-1 as a whole has a 12-dimensional hypercube-based routing network (connecting the 2^12 chips), a main RAM, and an input-output processor (a channel controller).
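The packaging hierarchy described above multiplies out exactly to the full machine:

```python
# Packaging hierarchy of the CM-1, as described in the text
subcubes = 8
boards_per_subcube = 16
chips_per_board = 32
processors_per_chip = 16

chips = subcubes * boards_per_subcube * chips_per_board
processors = chips * processors_per_chip

print(chips)       # 4096 chips, i.e. 2**12, hence the 12-dimensional hypercube
print(processors)  # 65536 single-bit processors
```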
Each router contains five buffers to store the data being transmitted
when a clear channel is not available. The engineers had originally
calculated that seven buffers per chip would be needed, but this made
the chip slightly too large to build. Nobel Prize-winning physicist Richard Feynman
had previously calculated that five buffers would be enough, using a
differential equation involving the average number of 1 bits in an
address. They resubmitted the design of the chip with only five buffers,
and when they put the machine together, it worked fine. Each chip is
connected to a switching device called a nexus. The CM-1 uses Feynman's algorithm for computing logarithms that he had developed at Los Alamos National Laboratory for the Manhattan Project.
It is well suited to the CM-1, using, as it did, only shifting and
adding, with a small table shared by all the processors. Feynman also
discovered that the CM-1 would compute the Feynman diagrams for quantum chromodynamics (QCD) calculations faster than an expensive special-purpose machine developed at Caltech.
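The shift-and-add style of logarithm computation can be illustrated by greedily factoring x into terms of the form (1 + 2⁻ᵏ), whose logarithms come from a small shared table; this is a sketch of the general technique, not Feynman's actual CM-1 routine, and the bit width is an arbitrary choice:

```python
import math

NBITS = 32
# Small shared lookup table: ln(1 + 2**-k) for k = 1..NBITS
LOG_TABLE = [math.log(1 + 2.0 ** -k) for k in range(1, NBITS + 1)]

def shift_add_log(x):
    """Approximate ln(x) for x in [1, 2) using only shifts, adds,
    comparisons, and table lookups (a sketch of shift-and-add
    logarithm computation; not Feynman's exact algorithm)."""
    assert 1.0 <= x < 2.0
    y = 1.0        # running product of accepted factors (1 + 2**-k)
    result = 0.0   # running sum of the corresponding table entries
    for k in range(1, NBITS + 1):
        candidate = y + y * 2.0 ** -k   # y * (1 + 2**-k): a shift and an add
        if candidate <= x:
            y = candidate
            result += LOG_TABLE[k - 1]
    return result
```

Multiplying by (1 + 2⁻ᵏ) is just a k-bit shift plus an add in fixed-point hardware, which is why the method suited the CM-1's one-bit processors.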
To improve its commercial viability, TMC launched the CM-2 in 1987, adding Weitek 3132 floating-point numeric coprocessors
and more RAM to the system. Thirty-two of the original one-bit
processors shared each numeric processor. The CM-2 can be configured
with up to 512 MB of RAM, and a redundant array of independent disks (RAID) hard disk system, called a DataVault, of up to 25 GB. Two later variants of the CM-2 were also produced, the smaller CM-2a with either 4096 or 8192 single-bit processors, and the faster CM-200.
The light panels of FROSTBURG, a CM-5, on display at the National Cryptologic Museum. The panels were used to check the usage of the processing nodes, and to run diagnostics.
Due to its origins in AI research, the software for the CM-1/2/200 single-bit processor was influenced by the Lisp programming language and a version of Common Lisp, *Lisp (spoken: Star-Lisp), was implemented on the CM-1. Other early languages included Karl Sims'
IK and Cliff Lasser's URDU. Much system utility software for the CM-1/2
was written in *Lisp. Many applications for the CM-2, however, were
written in C*, a data-parallel superset of ANSI C.
With the CM-5, announced in 1991, TMC switched from the
CM-2's hypercubic architecture of simple processors to a new and
different multiple instruction, multiple data (MIMD) architecture based on a fat tree network of reduced instruction set computing (RISC) SPARC processors. To make programming easier, it was made to simulate a SIMD design. The later CM-5E replaces the SPARC processors with faster SuperSPARCs. A CM-5 was the fastest computer in the world in 1993 according to the TOP500 list, running 1024 cores with Rpeak of 131.0 GFLOPS, and for several years many of the top 10 fastest computers were CM-5s.
Visual design
The CM-5 LED panels could show randomly generated moving patterns that served purely as eye candy, as seen in Jurassic Park.
Connection Machines were noted for their striking visual design. The CM-1 and CM-2 design teams were led by Tamiko Thiel. The physical form of the CM-1, CM-2, and CM-200 chassis was a cube-of-cubes, referencing the machine's internal 12-dimensional hypercube network, with the red light-emitting diodes (LEDs), by default indicating the processor status, visible through the doors of each cube.
By default, when a processor is executing an instruction, its LED
is on. In a SIMD program, the goal is to have as many processors as
possible working on the program at the same time – indicated by
having all LEDs steadily on. Those unfamiliar with the use of the LEDs wanted
to see the LEDs blink – or even spell out messages to visitors. The
result is that finished programs often have superfluous operations to
blink the LEDs.
The CM-5, in plan view, had a staircase-like shape, and also had
large panels of red blinking LEDs. Prominent sculptor-architect Maya Lin contributed to the CM-5 design.
Surviving examples
Permanent exhibits
The very first CM-1 is on permanent display in the Computer History Museum, Mountain View, California, which also has two other CM-1s and a CM-5.
There is a decommissioned CM-1 or CM-2 on display in the main building of the Karlsruhe Institute of Technology computer science department. Students have converted it into a Bluetooth-controlled LED matrix display which can be used to play games or display art.
Several parts of a CM-1 are in the collection of the Smithsonian Institution National Museum of American History, though it may not be a complete example.
The Living Computers: Museum + Labs in Seattle displayed a CM-2 with flashing LEDs prior to its closing in 2020. It is possible this machine is now in private hands, though it is not listed among the objects auctioned by Christie's.
Private collections
As of 2007, a preserved CM-2a was owned by the Corestore, a type of online-only museum.
References in popular culture
A CM-5 was featured in the film Jurassic Park in the control room for the island (instead of a Cray X-MP supercomputer as in the novel). Two banks could be seen in the control room: one bank of four units, and a single unit off to the right of the set.
The computer mainframes in Fallout 3 were inspired heavily by the CM-5.
Cyberpunk 2077 features numerous CM-1/CM-2 style units in various portions of the game.
The b-side to Clock DVA's 1989 single "The Hacker" is titled "The Connection Machine" in reference to the CM-1.
An important intermediate in industrial chemistry,
nitric oxide forms in combustion systems and can be generated by
lightning in thunderstorms. In mammals, including humans, nitric oxide
is a signaling molecule in many physiological and pathological processes. It was proclaimed the "Molecule of the Year" in 1992. The 1998 Nobel Prize in Physiology or Medicine was awarded for discovering nitric oxide's role as a cardiovascular signalling molecule. Its impact extends beyond biology, with applications in medicine, such as the development of sildenafil (Viagra), and in industry, including semiconductor manufacturing.
Nitric oxide (NO) was first identified by Joseph Priestley in the late 18th century, originally seen as merely a toxic byproduct of combustion and an environmental pollutant. Its biological significance was later uncovered in the 1980s when researchers Robert F. Furchgott, Louis J. Ignarro, and Ferid Murad discovered its critical role as a vasodilator in the cardiovascular system, a breakthrough that earned them the 1998 Nobel Prize in Physiology or Medicine.
Physical properties
Electronic configuration
The ground-state electronic configuration of NO in united-atom notation is 1σ² 2σ² 3σ² 4σ*² 1π⁴ 5σ² 2π*¹.
The first two orbitals are actually pure atomic 1sO and 1sN
from oxygen and nitrogen respectively and therefore are usually not
noted in the united-atom notation. Orbitals noted with an asterisk are
antibonding. The ordering of 5σ and 1π according to their binding
energies is subject to discussion. Removal of a 1π electron leads to six
states whose energies span a range starting at a lower level than that of
a 5σ electron and extending to a higher level. This is due to the
different orbital momentum couplings between a 1π and a 2π electron.
The lone electron in the 2π orbital makes NO a doublet (X2Π) in its ground state, whose degeneracy is split in the fine structure from spin–orbit coupling with a total momentum J = 3/2 or J = 1/2.
Dipole
The dipole moment of NO has been measured experimentally as 0.15740 D and is oriented from O to N (−NO+) due to the transfer of negative electronic charge from oxygen to nitrogen.
Reactions
With di- and triatomic molecules
Upon condensing to a neat liquid, nitric oxide dimerizes to colorless dinitrogen dioxide
(O=N–N=O), but the association is weak and reversible. The N–N
distance in crystalline NO is 218 pm, nearly twice the N–O distance.
Condensation in a highly polar environment instead gives the red
alternant isomer O=N–O+=N−.
Since the heat of formation of •NO is endothermic, NO can be decomposed to the elements. Catalytic converters in cars exploit this reaction:
2 •NO → N2 + O2
Nitric oxide rarely sees organic chemistry use. Most reactions with
it produce complex mixtures of salts, separable only through careful recrystallization.
The addition of a nitric oxide moiety to another molecule is often referred to as nitrosylation. The Traube reaction is the addition of two equivalents of nitric oxide onto an enolate, giving a diazeniumdiolate (also called a nitrosohydroxylamine). The product can undergo a subsequent retro-aldol reaction, giving an overall process similar to the haloform reaction. For example, nitric oxide reacts with acetone and an alkoxide to form a diazeniumdiolate on each α position, with subsequent loss of methyl acetate as a by-product:
This reaction, which was discovered around 1898, remains of interest in nitric oxide prodrug research. Nitric oxide can also react directly with sodium methoxide, ultimately forming sodium formate and nitrous oxide by way of an N-methoxydiazeniumdiolate.
Sufficiently basic secondary amines undergo a Traube-like reaction to give NONOates. However, very few nucleophiles undergo the Traube reaction, either failing to form an adduct with NO or immediately decomposing with release of nitrous oxide.
Nitric oxide reacts with transition metals to give complexes called metal nitrosyls. The most common bonding mode of nitric oxide is the terminal linear type (M−NO). Alternatively, nitric oxide can serve as a one-electron pseudohalide.
In such complexes, the M−N−O group is characterized by an angle between
120° and 140°. The NO group can also bridge between metal centers
through the nitrogen atom in a variety of geometries.
The uncatalyzed endothermic reaction of oxygen (O2) and nitrogen (N2),
which is effected at high temperature (>2000 °C) by lightning, has
not been developed into a practical commercial synthesis (see Birkeland–Eyde process):
N2 + O2 → 2 •NO
Laboratory methods
In the laboratory, nitric oxide is conveniently generated by reduction of dilute nitric acid with copper:
8 HNO3 + 3 Cu → 3 Cu(NO3)2 + 4 H2O + 2 •NO
The iron(II) sulfate route is simple and has been used in undergraduate laboratory experiments.
So-called NONOate
compounds are also used for nitric oxide generation, especially in
biological laboratories. However, other Traube adducts may decompose to
instead give nitrous oxide.
Detection and assay
Nitric oxide (white) in conifer cells, visualized using DAF-2 DA (diaminofluorescein diacetate)
Nitric oxide concentration can be determined using a chemiluminescent reaction involving ozone. A sample containing nitric oxide is mixed with a large quantity of ozone. The nitric oxide reacts with the ozone to produce oxygen and nitrogen dioxide, accompanied with emission of light (chemiluminescence):
•NO + O3 → •NO2 + O2 + hν
which can be measured with a photodetector. The amount of light produced is proportional to the amount of nitric oxide in the sample.
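Because the emitted light is proportional to the amount of NO, a single-point calibration against a known standard suffices; a sketch in which the standard concentration and instrument counts are invented for illustration:

```python
def no_concentration(sample_counts, cal_counts, cal_ppb):
    """Chemiluminescent NO assay: light intensity I is proportional
    to [NO], so [NO]_sample = [NO]_standard * I_sample / I_standard."""
    return cal_ppb * sample_counts / cal_counts

# Hypothetical calibration: a 50 ppb NO standard reads 12500 counts/s,
# and the unknown sample reads 5000 counts/s.
print(no_concentration(5000.0, 12500.0, 50.0))  # 20.0 ppb
```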
Other methods of testing include electroanalysis
(amperometric approach), where ·NO reacts with an electrode to induce a
current or voltage change. The detection of NO radicals in biological
tissues is particularly difficult due to the short lifetime and
concentration of these radicals in tissues. One of the few practical
methods is spin trapping of nitric oxide with iron-dithiocarbamate complexes and subsequent detection of the mono-nitrosyl-iron complex with electron paramagnetic resonance (EPR).
•NO participates in ozone layer depletion. Nitric oxide reacts with stratospheric ozone to form O2 and nitrogen dioxide:
•NO + O3 → •NO2 + O2
This reaction is also utilized to measure concentrations of •NO in control volumes.
Precursor to NO2
As seen in the acid deposition section, nitric oxide can transform into nitrogen dioxide (this can happen with the hydroperoxyl radical, HO2•, or diatomic oxygen, O2). Symptoms of short-term nitrogen dioxide exposure include nausea, dyspnea and headache. Long-term effects could include impaired immune and respiratory function.
NO is a gaseous signaling molecule. It is a key vertebrate biological messenger, playing a role in a variety of biological processes. It is a bioproduct in almost all types of organisms, including bacteria, plants, fungi, and animal cells.
Nitric oxide, an endothelium-derived relaxing factor (EDRF), is biosynthesized endogenously from L-arginine, oxygen, and NADPH by various nitric oxide synthase (NOS) enzymes. Reduction of inorganic nitrate may also make nitric oxide. One of the main enzymatic targets of nitric oxide is guanylyl cyclase. The binding of nitric oxide to the heme region of the enzyme leads to activation, in the presence of iron. Nitric oxide is highly reactive (having a lifetime of a few seconds),
yet diffuses freely across membranes. These attributes make nitric oxide
ideal for a transient paracrine (between adjacent cells) and autocrine (within a single cell) signaling molecule. Once nitric oxide is converted to nitrates and nitrites by oxygen and water, cell signaling is deactivated.
Liquid
nitric oxide is very sensitive to detonation even in the absence of
fuel, and can be initiated as readily as nitroglycerin. Detonation of
the endothermic liquid oxide close to its boiling point (−152 °C or
−241.6 °F or 121.1 K) generated a 100 kbar pulse and fragmented the test
equipment. It is the simplest molecule that is capable of detonation in
all three phases. The liquid oxide is sensitive and may explode during
distillation, and this has been the cause of industrial accidents. Gaseous nitric oxide detonates at about 2,300 metres per second
(8,300 km/h; 5,100 mph), but as a solid it can reach a detonation
velocity of 6,100 metres per second (22,000 km/h; 13,600 mph).
Two CubeSats orbiting around Earth after being deployed from the ISS Kibō module's Small Satellite Orbital Deployer
A satellite or an artificial satellite is an object, typically a spacecraft, placed into orbit around a celestial body. They have a variety of uses, including communication relay, weather forecasting, navigation (GPS), broadcasting, scientific research, and Earth observation. Additional military uses are reconnaissance, early warning,
signals intelligence and, potentially, weapon delivery. Other
satellites include the final rocket stages that place satellites in
orbit and formerly useful satellites that later become defunct.
Spaceships become satellites by accelerating or decelerating to reach orbital velocities, occupying an orbit high enough to avoid orbital decay due to drag in the presence of an atmosphere and above their Roche limit. Satellites are spacecraft launched from the surface into space by launch systems. Satellites can then change or maintain their orbit by propulsion, usually by chemical or ion thrusters. As of 2018, about 90% of the satellites orbiting the Earth are in low Earth orbit or geostationary orbit;
geostationary means the satellites stay still in the sky (relative to a
fixed point on the ground). Some imaging satellites choose a Sun-synchronous orbit because they can scan the entire globe with similar lighting. As the number of satellites and amount of space debris around Earth increases, the threat of collision has become more severe. An orbiter is a spacecraft that is designed to perform an orbital insertion, entering orbit around an astronomical body from another, and as such becoming an artificial satellite. A small number of satellites orbit other bodies (such as the Moon, Mars, and the Sun) or many bodies at once (two for a halo orbit, three for a Lissajous orbit).
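The orbital velocities involved can be sketched with the circular-orbit relation v = √(μ/r); the constants are standard values and the altitudes are example choices:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m

def circular_orbit_speed(altitude_m):
    """Speed of a circular orbit at the given altitude above the
    mean Earth radius, in m/s: v = sqrt(mu / r)."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

print(circular_orbit_speed(400e3))      # roughly 7.7 km/s, typical LEO
print(circular_orbit_speed(35_786e3))   # roughly 3.1 km/s, geostationary
```

The geostationary case also shows why those satellites appear fixed: at that altitude the orbital period works out to one sidereal day.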
The first artificial satellite launched into the Earth's orbit was the Soviet Union's Sputnik 1,
on October 4, 1957. As of December 31, 2022, there are 6,718
operational satellites in the Earth's orbit, of which 4,529 belong to
the United States (3,996 commercial), 590 belong to China, 174 belong to
Russia, and 1,425 belong to other nations.
In 1903, Konstantin Tsiolkovsky (1857–1935) published Exploring Space Using Jet Propulsion Devices, which was the first academic treatise on the use of rocketry to launch spacecraft. He calculated the orbital speed required for a minimal orbit, and inferred that a multi-stage rocket fueled by liquid propellants could achieve this.
Herman Potočnik explored the idea of using orbiting spacecraft for detailed peaceful and military observation of the ground in his 1928 book, The Problem of Space Travel. He described how the special conditions of space could be useful for scientific experiments. The book described geostationary satellites (first put forward by Konstantin Tsiolkovsky)
and discussed the communication between them and the ground using
radio, but fell short of the idea of using satellites for mass
broadcasting and as telecommunications relays.
In a 1945 Wireless World article, English science fiction writer Arthur C. Clarke described in detail the possible use of communications satellites for mass communications. He suggested that three geostationary satellites would provide coverage over the entire planet.
In May 1946, the United States Air Force's Project RAND released the Preliminary Design of an Experimental World-Circling Spaceship,
which stated "A satellite vehicle with appropriate instrumentation can
be expected to be one of the most potent scientific tools of the
Twentieth Century." The United States had been considering launching orbital satellites since 1945 under the Bureau of Aeronautics of the United States Navy.
Project RAND eventually released the report, but considered the
satellite to be a tool for science, politics, and propaganda, rather
than a potential military weapon.
In February 1954, Project RAND released "Scientific Uses for a Satellite Vehicle", by R. R. Carhart. This expanded on potential scientific uses for satellite vehicles and
was followed in June 1955 with "The Scientific Use of an Artificial
Satellite", by H. K. Kallmann and W. W. Kellogg.
The first artificial satellite was Sputnik 1, launched by the Soviet Union on 4 October 1957 under the Sputnik program, with Sergei Korolev as chief designer. Sputnik 1 helped to identify the density of high atmospheric layers through measurement of its orbital change and provided data on radio-signal distribution in the ionosphere. The unanticipated announcement of Sputnik 1's success precipitated the Sputnik crisis in the United States and ignited the so-called Space Race within the Cold War.
In the context of activities planned for the International Geophysical Year (1957–1958), the White House announced on 29 July 1955 that the U.S. intended to launch satellites by the spring of 1958. This became known as Project Vanguard. On 31 July, the Soviet Union announced its intention to launch a satellite by the fall of 1957.
Sputnik 2 was launched on 3 November 1957 and carried the first living passenger into orbit, a dog named Laika. The dog was sent without possibility of return.
In early 1955, after being pressured by the American Rocket Society, the National Science Foundation, and the International Geophysical Year, the Army and Navy worked on Project Orbiter with two competing programs. The army used the Jupiter C rocket, while the civilian–Navy program used the Vanguard rocket to launch a satellite. Explorer 1 became the United States' first artificial satellite, on 31 January 1958. The information sent back from its radiation detector led to the discovery of the Earth's Van Allen radiation belts. The TIROS-1 spacecraft, launched on April 1, 1960, as part of NASA's Television Infrared Observation Satellite (TIROS) program, sent back the first television footage of weather patterns to be taken from space.
While Canada was the third country to build a satellite which was launched into space, it was launched aboard an American rocket from an American spaceport. The same goes for Australia, whose launch of the first satellite involved a donated U.S. Redstone rocket and American support staff as well as a joint launch facility with the United Kingdom. The first Italian satellite San Marco 1 was launched on 15 December 1964 on a U.S. Scout rocket from Wallops Island (Virginia, United States) with an Italian launch team trained by NASA. In similar occasions, almost all further first national satellites were launched by foreign rockets.
France was the third country to launch a satellite on its own rocket. On 26 November 1965, the Astérix or A-1 (initially conceptualized as FR.2 or FR-2), was put into orbit by a Diamant A rocket launched from the CIEES site at Hammaguir, Algeria. With Astérix, France became the sixth country to have an artificial satellite.
Later satellite development
Early satellites were built to unique designs. With advancements in technology, multiple satellites began to be built on single model platforms called satellite buses. The first standardized satellite bus design was the HS-333 geosynchronous (GEO) communication satellite, launched in 1972. FreeFlyer, first released in 1997, is a commercial off-the-shelf software application for satellite mission analysis, design, and operations.
After the late 2010s, and especially after the advent and operational fielding of large satellite internet constellations, during which the number of active satellites on orbit more than doubled over a period of five years, the companies building the constellations began to propose regular planned deorbiting of older satellites once they reach end of life, as part of the regulatory process of obtaining a launch license. The largest artificial satellite ever is the International Space Station.
By the early 2000s, and particularly after the advent of CubeSats and increased launches of microsats, frequently placed in the lower altitudes of low Earth orbit (LEO), satellites began to be designed more often to break up and burn up entirely in the atmosphere. For example, SpaceX's Starlink satellites, the first large satellite internet constellation to exceed 1,000 active satellites on orbit in 2020, are designed to be 100% demisable and burn up completely on atmospheric reentry at the end of their life, or in the event of an early satellite failure.
Japan's space agency (JAXA) and NASA plan to send a wooden satellite prototype called LignoSat into orbit in the summer of 2024. They have been working on this project for a few years and sent the first wood samples into space in 2021 to test the material's resilience to space conditions.
Most satellites use chemical or ion propulsion to adjust or maintain their orbit, coupled with reaction wheels to control their three axes of rotation, or attitude. Satellites close to Earth are affected the most by variations in the Earth's magnetic and gravitational fields and the Sun's radiation pressure; satellites that are further away are affected more by the gravitational fields of other bodies, such as the Moon and the Sun. Satellites utilize ultra-white reflective coatings to prevent damage from UV radiation. Without orbit and orientation control, satellites in orbit would be unable to communicate with ground stations on Earth.
Chemical thrusters on satellites usually use monopropellant (one-part) or bipropellant (two-part) fuels that are hypergolic, meaning they combust spontaneously on contact with each other or with a catalyst. The most commonly used propellant mixtures on satellites are hydrazine-based monopropellants or monomethylhydrazine–dinitrogen tetroxide bipropellants. Ion thrusters on satellites are usually Hall-effect thrusters, which generate thrust by accelerating positive ions through a negatively charged grid. Ion propulsion is more propellant-efficient than chemical propulsion, but its thrust is very small (around 0.5 N or 0.1 lbf) and thus requires a longer burn time. The thrusters usually use xenon because it is inert, easily ionized, has a high atomic mass, and can be stored as a high-pressure liquid.
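The trade-off between low-thrust ion propulsion and higher-thrust chemical propulsion can be sketched with a simple impulse calculation. The satellite mass, the size of the velocity correction, and the 10 N chemical thrust below are illustrative assumptions; only the ~0.5 N ion thrust figure comes from the text above.

```python
# Rough comparison of burn time needed for a small orbital correction,
# assuming constant thrust and negligible propellant mass change.
# impulse = thrust * time = mass * delta_v

def burn_time_s(mass_kg: float, delta_v_ms: float, thrust_n: float) -> float:
    """Time required to impart delta_v at a given constant thrust."""
    return mass_kg * delta_v_ms / thrust_n

MASS = 500.0    # kg, assumed satellite mass
DELTA_V = 2.0   # m/s, assumed station-keeping correction

ion = burn_time_s(MASS, DELTA_V, 0.5)    # ~0.5 N ion thruster
chem = burn_time_s(MASS, DELTA_V, 10.0)  # assumed 10 N chemical thruster

print(f"ion burn:      {ion:.0f} s")   # 2000 s
print(f"chemical burn: {chem:.0f} s")  # 100 s
```

The same correction that a chemical thruster finishes in minutes takes an ion thruster tens of minutes, which is the "longer burn time" trade-off described above.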
Most satellites use solar panels to generate power, and a few in deep space with limited sunlight use radioisotope thermoelectric generators. Slip rings attach the solar panels to the satellite and allow them to rotate so that they stay perpendicular to the sunlight and generate the most power. All satellites with solar panels must also have batteries, because sunlight is blocked inside the launch vehicle and during the night portion of the orbit. The most common types of batteries for satellites are lithium-ion, and in the past, nickel–hydrogen.
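As a rough illustration of why batteries are sized around the eclipse portion of the orbit, the sketch below estimates the capacity a small LEO satellite might need; every figure (eclipse duration, power draw, depth of discharge) is an assumed example value, not data from the text.

```python
# Battery sizing sketch for riding through eclipse each orbit.
# All numbers are illustrative assumptions.
ECLIPSE_MIN = 35           # assumed minutes in Earth's shadow per orbit
LOAD_W = 200               # assumed average power draw in watts
DEPTH_OF_DISCHARGE = 0.3   # keep Li-ion cycling shallow for long life

# Energy drawn from the battery during one eclipse, in watt-hours.
energy_needed_wh = LOAD_W * ECLIPSE_MIN / 60

# Oversize the battery so each eclipse only uses a fraction of capacity.
battery_capacity_wh = energy_needed_wh / DEPTH_OF_DISCHARGE

print(f"per-eclipse energy: {energy_needed_wh:.0f} Wh")
print(f"battery capacity:   {battery_capacity_wh:.0f} Wh")
```

Keeping the depth of discharge shallow is what lets the battery survive the thousands of charge cycles a LEO satellite accumulates, one per roughly 90-minute orbit.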
Earth observation satellites are designed to monitor and survey the Earth, a practice called remote sensing. Most Earth observation satellites are placed in low Earth orbit for high data resolution, though some are placed in geostationary orbit for uninterrupted coverage. Some satellites are placed in a Sun-synchronous orbit to have consistent lighting and obtain a total view of the Earth. Depending on their function, these satellites might carry a normal camera, radar, lidar, a photometer, or atmospheric instruments. Earth observation satellite data is most used in archaeology, cartography, environmental monitoring, meteorology, and reconnaissance applications. As of 2021, there are over 950 Earth observation satellites, with the largest number operated by Planet Labs.
Weather satellites monitor clouds, city lights, fires, effects of pollution, auroras, sand and dust storms, snow cover, ice mapping, boundaries of ocean currents, and energy flows. Environmental monitoring satellites can detect changes in the Earth's vegetation, atmospheric trace gas content, sea state, ocean color, and ice fields. By monitoring vegetation changes over time, droughts can be tracked by comparing the current vegetation state to its long-term average. Anthropogenic emissions can be monitored by evaluating data on tropospheric NO2 and SO2.
The radio waves used for telecommunications links travel by line of sight
and so are obstructed by the curve of the Earth. The purpose of
communications satellites is to relay the signal around the curve of the
Earth allowing communication between widely separated geographical
points. Communications satellites use a wide range of radio and microwave frequencies. To avoid signal interference, international organizations maintain regulations specifying which frequency ranges, or "bands", particular organizations are allowed to use; this allocation of bands minimizes the risk of interference.
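The line-of-sight limitation described above can be quantified with the geometric horizon relation d = sqrt(2Rh + h²), where R is the Earth's radius and h the antenna or satellite height. The 550 km altitude below is an assumed example of a typical LEO altitude, not a figure from the text.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, metres

def horizon_distance_m(height_m: float) -> float:
    """Line-of-sight distance to the horizon from a given height,
    using the geometric relation d = sqrt(2*R*h + h^2)."""
    return math.sqrt(2 * EARTH_RADIUS_M * height_m + height_m ** 2)

# A 100 m tall ground mast can see only a few tens of kilometres...
print(f"mast:      {horizon_distance_m(100) / 1000:.0f} km")
# ...while a satellite at an assumed 550 km LEO altitude can see a
# ground footprint thousands of kilometres across.
print(f"satellite: {horizon_distance_m(550_000) / 1000:.0f} km")
```

This is why a relay high above the curve of the Earth can connect ground stations that could never reach each other directly.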
When an Earth observation satellite or a communications satellite is
deployed for military or intelligence purposes, it is known as a spy
satellite or reconnaissance satellite.
Their uses include early missile warning, nuclear explosion
detection, electronic reconnaissance, and optical or radar imaging
surveillance.
Navigational satellites are satellites that use radio time signals
transmitted to enable mobile receivers on the ground to determine their
exact location. The relatively clear line of sight between the
satellites and receivers on the ground, combined with ever-improving
electronics, allows satellite navigation systems to measure location to
accuracies on the order of a few meters in real time.
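The link between timing and position accuracy follows directly from range = c × travel time; this minimal sketch shows why nanosecond-scale clock errors translate into metre-scale position errors.

```python
# A receiver measures its distance to each satellite from the signal's
# travel time, so any timing error maps directly to a range error.
C = 299_792_458  # speed of light in vacuum, m/s

def range_error_m(timing_error_s: float) -> float:
    """Range error produced by a given clock/timing error."""
    return C * timing_error_s

print(f"{range_error_m(1e-6):.0f} m")  # 1 microsecond -> ~300 m error
print(f"{range_error_m(1e-8):.0f} m")  # 10 nanoseconds -> ~3 m error
```

Achieving the few-metre accuracies mentioned above therefore requires the satellites to carry atomic clocks and the system to correct for relativistic and atmospheric delays.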
Astronomical satellites are satellites used for observation of distant planets, galaxies, and other outer space objects.
Experimental
Tether satellites are satellites that are connected to another satellite by a thin cable called a tether. Recovery satellites are satellites that provide recovery of reconnaissance, biological, space-production, and other payloads from orbit to Earth. Biosatellites are satellites designed to carry living organisms, generally for scientific experimentation. Space-based solar power satellites are proposed satellites that would collect energy from sunlight and transmit it for use on Earth or other places.
Since the mid-2000s, satellites have been hacked by militant organizations to broadcast propaganda and to pilfer classified information from military communication networks. For testing purposes, satellites in low Earth orbit have been destroyed by ballistic missiles launched from the Earth. Russia, the United States, China, and India have demonstrated the ability to destroy satellites. In 2007, the Chinese military shot down an aging weather satellite, followed by the US Navy shooting down a defunct spy satellite in February 2008. On 18 November 2015, after two failed attempts, Russia successfully carried out a flight test of an anti-satellite missile known as Nudol. On 27 March 2019, India shot down a live test satellite at 300 km altitude in 3 minutes, becoming the fourth country with the capability to destroy live satellites.
Environmental impact
The environmental impact of satellites is not currently well understood; they were previously assumed to be benign because satellite launches were rare. However, the exponential increase and projected growth of satellite launches are bringing the issue into consideration. The main issues are resource use and the release of pollutants into the atmosphere, which can happen at different stages of a satellite's lifetime.
Resource use
Resource use is difficult to monitor and quantify for satellites and launch vehicles due to their commercially sensitive nature. However, aluminium is a preferred metal in satellite construction due to its light weight and relative cheapness, and typically constitutes around 40% of a satellite's mass. Mining and refining aluminium have numerous negative environmental impacts, and it is one of the most carbon-intensive metals. Satellite manufacturing also requires rare elements such as lithium, gold, and gallium, some of which have significant environmental consequences linked to their mining and processing and/or are in limited supply. Launch vehicles require larger amounts of raw materials to manufacture, and the booster stages are usually dropped into the ocean after fuel exhaustion and not normally recovered. To give an idea of the quantity of material often left in the ocean, the two empty boosters used for an Ariane 5, composed mainly of steel, weighed around 38 tons each.
Launches
Rocket launches release numerous pollutants into every layer of the atmosphere, especially the region above the tropopause, where the byproducts of combustion can reside for extended periods. These pollutants can include black carbon, CO2, nitrogen oxides (NOx), aluminium, and water vapour, though the mix of pollutants depends on rocket design and fuel type. The amount of greenhouse gases emitted by rockets is considered trivial; at around 0.01%, it is significantly less than that of the aviation industry, which itself accounts for 2–3% of total global greenhouse gas emissions yearly.
Rocket emissions in the stratosphere
and their effects are only beginning to be studied and it is likely
that the impacts will be more critical than emissions in the
troposphere. The stratosphere includes the ozone layer, and pollutants emitted from rockets can contribute to ozone depletion in a number of ways. Radicals such as NOx, HOx, and ClOx deplete stratospheric O3 through intermolecular reactions and can have significant impacts even in trace amounts. However, it is currently understood that launch rates would need to increase tenfold to match the impact of regulated ozone-depleting substances. Whilst emissions of water vapour are largely deemed inert, H2O is the source gas for HOx and can also contribute to ozone loss through the formation of ice particles. Black carbon particles emitted by rockets can absorb solar radiation in the stratosphere and warm the surrounding air, which can then affect the circulatory dynamics of the stratosphere. Both warming and changes in circulation can then cause depletion of the ozone layer.
Operational
Low earth orbit satellites
Several pollutants are released in the upper atmospheric layers during the orbital lifetime of LEO satellites. Orbital decay is caused by atmospheric drag, so the platform occasionally needs repositioning to keep the satellite in the correct orbit. To do this, nozzle-based systems use a chemical propellant to create thrust. In most cases the propellant is hydrazine, which releases ammonia, hydrogen, and nitrogen gas into the upper atmosphere. The environment of the outer atmosphere also degrades exterior materials: atomic oxygen in the upper atmosphere oxidises hydrocarbon-based polymers such as Kapton, Teflon, and Mylar, which are used to insulate and protect the satellite, emitting gases like CO2 and CO into the atmosphere.
Night sky
Given the current surge in satellites in the sky, soon hundreds of
satellites may be clearly visible to the human eye at dark sites. It is
estimated that the overall diffuse brightness of the night sky has increased by up to 10% above natural levels. This has the potential to confuse organisms, such as insects and night-migrating birds, that use celestial patterns for migration and orientation. The impact this might have is currently unclear. The visibility of man-made objects in the night sky may also affect people's connection with the world, nature, and culture.
Ground-based infrastructure
At all points of a satellite's lifetime, its movement and processes
are monitored on the ground through a network of facilities. The
environmental cost of the infrastructure as well as day-to-day
operations is likely to be quite high, but quantification requires further investigation.
Degeneration
Particular threats arise from uncontrolled deorbits. When satellites reach end of life in a controlled manner, they are intentionally deorbited or moved to a graveyard orbit further from Earth in order to reduce space debris. Physical collection or removal is not economical, or even currently possible. Moving satellites to a graveyard orbit is also unsustainable because they remain there for hundreds of years, leading to further pollution of space and future issues with space debris.
When a satellite deorbits, much of it is destroyed by heat during re-entry into the atmosphere, introducing more material and pollutants into the atmosphere. Concerns have been expressed about the potential damage to the ozone layer and the possibility of increasing the Earth's albedo, which would reduce warming but also amount to accidental geoengineering of the Earth's climate. After deorbiting, around 70% of satellites end up in the ocean and are rarely recovered.
Mitigation
Using wood as an alternative material has been posited in order to
reduce pollution and debris from satellites that reenter the atmosphere.
Interference
Collision threat
The growth of all tracked objects in space over time
Space debris poses dangers to spacecraft (including satellites) in or crossing geocentric orbits and has the potential to drive a Kessler syndrome, which could curtail humanity's ability to conduct space endeavors in the future.
With the increase in the number of satellite constellations, such as SpaceX's Starlink, the astronomical community, including the IAU, reports that orbital pollution is increasing significantly. A report from the SATCON1 workshop in 2020 concluded that the effects of large satellite constellations can severely affect some astronomical research efforts and lists six ways to mitigate harm to astronomy. The IAU is establishing a centre (CPS) to coordinate or aggregate measures to mitigate such detrimental effects.
Radio interference
Due to the low received signal strength of satellite transmissions, they are prone to jamming
by land-based transmitters. Such jamming is limited to the geographical
area within the transmitter's range. GPS satellites are potential
targets for jamming, but satellite phone and television signals have also been subjected to jamming.
Also, it is very easy to transmit a carrier radio signal to a
geostationary satellite and thus interfere with the legitimate uses of
the satellite's transponder. It is common for Earth stations to transmit
at the wrong time or on the wrong frequency in commercial satellite
space, and dual-illuminate the transponder, rendering the frequency
unusable. Satellite operators now have sophisticated monitoring tools
and methods that enable them to pinpoint the source of any carrier and
manage the transponder space effectively.
Regulation
Issues such as space debris and radio and light pollution are increasing in magnitude, while progress in national and international regulation lags behind.