Radar

From Wikipedia, the free encyclopedia

A long-range radar antenna, known as ALTAIR, used to detect and track space objects in conjunction with ABM testing at the Ronald Reagan Test Site on Kwajalein Atoll.
 
Israeli military radar is typical of the type of radar used for air traffic control. The antenna rotates at a steady rate, sweeping the local airspace with a narrow vertical fan-shaped beam, to detect aircraft at all altitudes.

Radar (radio detection and ranging) is a detection system that uses radio waves to determine the distance (ranging), angle, and radial velocity of objects relative to the site. It can be used to detect aircraft, ships, spacecraft, guided missiles, motor vehicles, weather formations, and terrain. A radar system consists of a transmitter producing electromagnetic waves in the radio or microwaves domain, a transmitting antenna, a receiving antenna (often the same antenna is used for transmitting and receiving) and a receiver and processor to determine properties of the object(s). Radio waves (pulsed or continuous) from the transmitter reflect off the object and return to the receiver, giving information about the object's location and speed.

Radar was developed secretly for military use by several countries in the period before and during World War II. A key development was the cavity magnetron in the United Kingdom, which allowed the creation of relatively small systems with sub-meter resolution. The term RADAR was coined in 1940 by the United States Navy as an acronym for "radio detection and ranging". The term radar has since entered English and other languages as a common noun, losing all capitalization. During RAF radar courses in 1954–55 at Yatesbury Training Camp, "radio azimuth direction and ranging" was suggested as an alternative expansion. The modern uses of radar are highly diverse, including air and terrestrial traffic control, radar astronomy, air-defense systems, antimissile systems, marine radars to locate landmarks and other ships, aircraft anti-collision systems, ocean surveillance systems, outer space surveillance and rendezvous systems, meteorological precipitation monitoring, altimetry and flight control systems, guided missile target locating systems, self-driving cars, and ground-penetrating radar for geological observations. High-tech radar systems are associated with digital signal processing and machine learning, and are capable of extracting useful information from very high noise levels.

Other systems similar to radar make use of other parts of the electromagnetic spectrum. One example is LIDAR, which uses predominantly infrared light from lasers rather than radio waves. With the emergence of driverless vehicles, radar is expected to assist the automated platform in monitoring its environment, helping to prevent collisions and other incidents.

History

First experiments

As early as 1886, German physicist Heinrich Hertz showed that radio waves could be reflected from solid objects. In 1895, Alexander Popov, a physics instructor at the Imperial Russian Navy school in Kronstadt, developed an apparatus using a coherer tube for detecting distant lightning strikes. The next year, he added a spark-gap transmitter. In 1897, while testing this equipment for communicating between two ships in the Baltic Sea, he took note of an interference beat caused by the passage of a third vessel. In his report, Popov wrote that this phenomenon might be used for detecting objects, but he did nothing more with this observation.

The German inventor Christian Hülsmeyer was the first to use radio waves to detect "the presence of distant metallic objects". In 1904, he demonstrated the feasibility of detecting a ship in dense fog, but not its distance from the transmitter. He obtained a patent for his detection device in April 1904 and later a patent for a related amendment for estimating the distance to the ship. He also obtained a British patent on 23 September 1904 for a full radar system, that he called a telemobiloscope. It operated on a 50 cm wavelength and the pulsed radar signal was created via a spark-gap. His system already used the classic antenna setup of horn antenna with parabolic reflector and was presented to German military officials in practical tests in Cologne and Rotterdam harbour but was rejected.

In 1915, Robert Watson-Watt used radio technology to provide advance warning to airmen and during the 1920s went on to lead the U.K. research establishment to make many advances using radio techniques, including the probing of the ionosphere and the detection of lightning at long distances. Through his lightning experiments, Watson-Watt became an expert on the use of radio direction finding before turning his inquiry to shortwave transmission. Requiring a suitable receiver for such studies, he told the "new boy" Arnold Frederic Wilkins to conduct an extensive review of available shortwave units. Wilkins would select a General Post Office model after noting its manual's description of a "fading" effect (the common term for interference at the time) when aircraft flew overhead.

Across the Atlantic in 1922, after placing a transmitter and receiver on opposite sides of the Potomac River, U.S. Navy researchers A. Hoyt Taylor and Leo C. Young discovered that ships passing through the beam path caused the received signal to fade in and out. Taylor submitted a report, suggesting that this phenomenon might be used to detect the presence of ships in low visibility, but the Navy did not immediately continue the work. Eight years later, Lawrence A. Hyland at the Naval Research Laboratory (NRL) observed similar fading effects from passing aircraft; this revelation led to a patent application as well as a proposal for further intensive research on radio-echo signals from moving targets to take place at NRL, where Taylor and Young were based at the time.

Similarly, in the UK, L. S. Alder took out a secret provisional patent for Naval radar in 1928. W.A.S. Butement and P. E. Pollard developed a breadboard test unit, operating at 50 cm (600 MHz) and using pulsed modulation which gave successful laboratory results. In January 1931, a writeup on the apparatus was entered in the Inventions Book maintained by the Royal Engineers. This is the first official record in Great Britain of the technology that was used in Coastal Defence and was incorporated into Chain Home as Chain Home (low).

Just before World War II

Experimental radar antenna, US Naval Research Laboratory, Anacostia, D. C., late 1930s

Before the Second World War, researchers in the United Kingdom, France, Germany, Italy, Japan, the Netherlands, the Soviet Union, and the United States, independently and in great secrecy, developed technologies that led to the modern version of radar. Australia, Canada, New Zealand, and South Africa followed prewar Great Britain's radar development, and Hungary generated its radar technology during the war.

In France in 1934, following systematic studies on the split-anode magnetron, the research branch of the Compagnie Générale de Télégraphie Sans Fil (CSF) headed by Maurice Ponte with Henri Gutton, Sylvain Berline and M. Hugon, began developing an obstacle-locating radio apparatus, aspects of which were installed on the ocean liner Normandie in 1935.

During the same period, Soviet military engineer P.K. Oshchepkov, in collaboration with the Leningrad Electrotechnical Institute, produced an experimental apparatus, RAPID, capable of detecting an aircraft within 3 km of a receiver. The Soviets produced their first mass production radars RUS-1 and RUS-2 Redut in 1939 but further development was slowed following the arrest of Oshchepkov and his subsequent gulag sentence. In total, only 607 Redut stations were produced during the war. The first Russian airborne radar, Gneiss-2, entered into service in June 1943 on Pe-2 dive bombers. More than 230 Gneiss-2 stations were produced by the end of 1944. The French and Soviet systems, however, featured continuous-wave operation that did not provide the full performance ultimately synonymous with modern radar systems.

Full radar evolved as a pulsed system, and the first such elementary apparatus was demonstrated in December 1934 by the American Robert M. Page, working at the Naval Research Laboratory. The following year, the United States Army successfully tested a primitive surface-to-surface radar to aim coastal battery searchlights at night. This design was followed by a pulsed system demonstrated in May 1935 by Rudolf Kühnhold and the firm GEMA [de] in Germany and then another in June 1935 by an Air Ministry team led by Robert Watson-Watt in Great Britain.

The first workable unit built by Robert Watson-Watt and his team

In 1935, Watson-Watt was asked to judge recent reports of a German radio-based death ray and turned the request over to Wilkins. Wilkins returned a set of calculations demonstrating the system was basically impossible. When Watson-Watt then asked what such a system might do, Wilkins recalled the earlier report about aircraft causing radio interference. This revelation led to the Daventry Experiment of 26 February 1935, using a powerful BBC shortwave transmitter as the source and their GPO receiver setup in a field while a bomber flew around the site. When the plane was clearly detected, Hugh Dowding, the Air Member for Supply and Research, was very impressed with their system's potential, and funds were immediately provided for further operational development. Watson-Watt's team patented the device in GB593017.

A Chain Home tower in Great Baddow, Essex, United Kingdom
 
Memorial plaque commemorating Robert Watson-Watt and Arnold Wilkins

Development of radar greatly expanded on 1 September 1936 when Watson-Watt became Superintendent of a new establishment under the British Air Ministry, Bawdsey Research Station located in Bawdsey Manor, near Felixstowe, Suffolk. Work there resulted in the design and installation of aircraft detection and tracking stations called "Chain Home" along the East and South coasts of England in time for the outbreak of World War II in 1939. This system provided the vital advance information that helped the Royal Air Force win the Battle of Britain; without it, significant numbers of fighter aircraft, which Great Britain did not have available, would always need to be in the air to respond quickly. If enemy aircraft detection had relied solely on the observations of ground-based individuals, Great Britain might have lost the Battle of Britain. The radar formed part of the "Dowding system" for collecting reports of enemy aircraft and coordinating the response.

Given all required funding and development support, the team produced working radar systems in 1935 and began deployment. By 1936, the first five Chain Home (CH) systems were operational and by 1940 stretched across the entire UK including Northern Ireland. Even by standards of the era, CH was crude; instead of broadcasting and receiving from an aimed antenna, CH broadcast a signal floodlighting the entire area in front of it, and then used one of Watson-Watt's own radio direction finders to determine the direction of the returned echoes. This fact meant CH transmitters had to be much more powerful and have better antennas than competing systems but allowed its rapid introduction using existing technologies.

During World War II

A key development was the cavity magnetron in the UK, which allowed the creation of relatively small systems with sub-meter resolution. Britain shared the technology with the U.S. during the 1940 Tizard Mission.

In April 1940, Popular Science showed an example of a radar unit using the Watson-Watt patent in an article on air defence. Also, in late 1941 Popular Mechanics had an article in which a U.S. scientist speculated about the British early warning system on the English east coast and came close to what it was and how it worked. Watson-Watt was sent to the U.S. in 1941 to advise on air defense after Japan's attack on Pearl Harbor. Alfred Lee Loomis organized the secret MIT Radiation Laboratory at Massachusetts Institute of Technology, Cambridge, Massachusetts which developed microwave radar technology in the years 1941–45. Later, in 1943, Page greatly improved radar with the monopulse technique that was used for many years in most radar applications.

The war precipitated research to find better resolution, more portability, and more features for radar, including complementary navigation systems like Oboe used by the RAF's Pathfinder.

Applications

Commercial marine radar antenna. The rotating antenna radiates a vertical fan-shaped beam.
 

The information provided by radar includes the bearing and range (and therefore position) of the object from the radar scanner. It is thus used in many different fields where the need for such positioning is crucial. The first use of radar was for military purposes: to locate air, ground and sea targets. This evolved in the civilian field into applications for aircraft, ships, and automobiles.

In aviation, aircraft can be equipped with radar devices that warn of aircraft or other obstacles in or approaching their path, display weather information, and give accurate altitude readings. The first commercial device fitted to aircraft was a 1938 Bell Lab unit on some United Air Lines aircraft. Aircraft can land in fog at airports equipped with radar-assisted ground-controlled approach systems in which the plane's position is observed on precision approach radar screens by operators who thereby give radio landing instructions to the pilot, maintaining the aircraft on a defined approach path to the runway. Military fighter aircraft are usually fitted with air-to-air targeting radars, to detect and target enemy aircraft. In addition, larger specialized military aircraft carry powerful airborne radars to observe air traffic over a wide region and direct fighter aircraft towards targets.

Marine radars are used to measure the bearing and distance of ships to prevent collision with other ships, to navigate, and to fix their position at sea when within range of shore or other fixed references such as islands, buoys, and lightships. In port or in harbour, vessel traffic service radar systems are used to monitor and regulate ship movements in busy waters.

Meteorologists use radar to monitor precipitation and wind. It has become the primary tool for short-term weather forecasting and watching for severe weather such as thunderstorms, tornadoes, winter storms, precipitation types, etc. Geologists use specialized ground-penetrating radars to map the composition of Earth's crust. Police forces use radar guns to monitor vehicle speeds on the roads. Smaller radar systems are used to detect human movement. Examples are breathing pattern detection for sleep monitoring and hand and finger gesture detection for computer interaction. Automatic door opening, light activation and intruder sensing are also common.

Principles

Radar signal

3D Doppler Radar Spectrum showing a Barker Code of 13
 

A radar system has a transmitter that emits radio waves known as radar signals in predetermined directions. When these signals contact an object they are usually reflected or scattered in many directions, although some of them will be absorbed and penetrate into the target. Radar signals are reflected especially well by materials of considerable electrical conductivity—such as most metals, seawater, and wet ground. This makes the use of radar altimeters possible in certain cases. The radar signals that are reflected back towards the radar receiver are the desirable ones that make radar detection work. If the object is moving either toward or away from the transmitter, there will be a slight change in the frequency of the radio waves due to the Doppler effect.

Radar receivers are usually, but not always, in the same location as the transmitter. The reflected radar signals captured by the receiving antenna are usually very weak. They can be strengthened by electronic amplifiers. More sophisticated methods of signal processing are also used in order to recover useful radar signals.

The weak absorption of radio waves by the medium through which they pass is what enables radar sets to detect objects at relatively long ranges—ranges at which other electromagnetic wavelengths, such as visible light, infrared light, and ultraviolet light, are too strongly attenuated. Weather phenomena, such as fog, clouds, rain, falling snow, and sleet, that block visible light are usually transparent to radio waves. Certain radio frequencies that are absorbed or scattered by water vapour, raindrops, or atmospheric gases (especially oxygen) are avoided when designing radars, except when their detection is intended.

Illumination

Radar relies on its own transmissions rather than light from the Sun or the Moon, or from electromagnetic waves emitted by the target objects themselves, such as infrared radiation (heat). This process of directing artificial radio waves towards objects is called illumination, although radio waves are invisible to the human eye as well as optical cameras.

Reflection

Brightness can indicate reflectivity as in this 1960 weather radar image (of Hurricane Abby). The radar's frequency, pulse form, polarization, signal processing, and antenna determine what it can observe.

If electromagnetic waves travelling through one material meet another material, having a different dielectric constant or diamagnetic constant from the first, the waves will reflect or scatter from the boundary between the materials. This means that a solid object in air or in a vacuum, or a significant change in atomic density between the object and what is surrounding it, will usually scatter radar (radio) waves from its surface. This is particularly true for electrically conductive materials such as metal and carbon fibre, making radar well-suited to the detection of aircraft and ships. Radar absorbing material, containing resistive and sometimes magnetic substances, is used on military vehicles to reduce radar reflection. This is the radio equivalent of painting something a dark colour so that it cannot be seen by the eye at night.

Radar waves scatter in a variety of ways depending on the size (wavelength) of the radio wave and the shape of the target. If the wavelength is much shorter than the target's size, the wave will bounce off in a way similar to the way light is reflected by a mirror. If the wavelength is much longer than the size of the target, the target may not be visible because of poor reflection. Low-frequency radar technology is dependent on resonances for detection, but not identification, of targets. This is described by Rayleigh scattering, an effect that creates Earth's blue sky and red sunsets. When the two length scales are comparable, there may be resonances. Early radars used very long wavelengths that were larger than the targets and thus received a vague signal, whereas many modern systems use shorter wavelengths (a few centimetres or less) that can image objects as small as a loaf of bread.

Short radio waves reflect from curves and corners in a way similar to glint from a rounded piece of glass. The most reflective targets for short wavelengths have 90° angles between the reflective surfaces. A corner reflector consists of three flat surfaces meeting like the inside corner of a box. The structure will reflect waves entering its opening directly back to the source. They are commonly used as radar reflectors to make otherwise difficult-to-detect objects easier to detect. Corner reflectors on boats, for example, make them more detectable to avoid collision or during a rescue. For similar reasons, objects intended to avoid detection will not have inside corners or surfaces and edges perpendicular to likely detection directions, which leads to "odd" looking stealth aircraft. These precautions do not totally eliminate reflection because of diffraction, especially at longer wavelengths. Half wavelength long wires or strips of conducting material, such as chaff, are very reflective but do not direct the scattered energy back toward the source. The extent to which an object reflects or scatters radio waves is called its radar cross section.

Radar range equation

The power Pr returning to the receiving antenna is given by the equation:

Pr = (Pt Gt Ar σ F⁴) / ((4π)² Rt² Rr²)

where

  • Pt = transmitter power
  • Gt = gain of the transmitting antenna
  • Ar = effective aperture (area) of the receiving antenna; this can also be expressed as Ar = Gr λ² / (4π), where
  • λ = transmitted wavelength
  • Gr = gain of receiving antenna
  • σ = radar cross section, or scattering coefficient, of the target
  • F = pattern propagation factor
  • Rt = distance from the transmitter to the target
  • Rr = distance from the target to the receiver.

In the common case where the transmitter and the receiver are at the same location, Rt = Rr and the term Rt² Rr² can be replaced by R⁴, where R is the range. This yields:

Pr = (Pt Gt Ar σ F⁴) / ((4π)² R⁴)

This shows that the received power declines as the fourth power of the range, which means that the received power from distant targets is relatively very small.
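As an illustrative sketch of the fourth-power fall-off, the monostatic form of the range equation can be evaluated directly. The function name and the power, gain, wavelength, and range values below are arbitrary examples, not figures from the text:

```python
import math

def received_power(pt, gt, gr, wavelength, rcs, r, f=1.0):
    """Monostatic radar range equation (Rt = Rr = r).

    pt: transmitter power (W); gt, gr: linear antenna gains;
    wavelength: m; rcs: radar cross section sigma (m^2);
    r: range (m); f: pattern propagation factor (1 = free space).
    """
    ar = gr * wavelength**2 / (4 * math.pi)  # effective receive aperture
    return pt * gt * ar * rcs * f**4 / ((4 * math.pi)**2 * r**4)

# Doubling the range divides the received power by 2^4 = 16:
p_near = received_power(1e6, 1000, 1000, 0.1, 1.0, 50e3)
p_far = received_power(1e6, 1000, 1000, 0.1, 1.0, 100e3)
print(round(p_near / p_far, 6))  # 16.0
```

This is why doubling a radar's detection range requires roughly sixteen times the transmit power, all else being equal.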

Additional filtering and pulse integration modifies the radar equation slightly for pulse-Doppler radar performance, which can be used to increase detection range and reduce transmit power.

The equation above with F = 1 is a simplification for transmission in a vacuum without interference. The propagation factor accounts for the effects of multipath and shadowing and depends on the details of the environment. In a real-world situation, pathloss effects should also be considered.

Doppler effect

Change of wavelength caused by motion of the source.

Frequency shift is caused by motion that changes the number of wavelengths between the reflector and the radar. This can degrade or enhance radar performance depending upon how it affects the detection process. As an example, Moving Target Indication can interact with Doppler to produce signal cancellation at certain radial velocities, which degrades performance.

Sea-based radar systems, semi-active radar homing, active radar homing, weather radar, military aircraft, and radar astronomy rely on the Doppler effect to enhance performance. This produces information about target velocity during the detection process. This also allows small objects to be detected in an environment containing much larger nearby slow moving objects.

Doppler shift depends upon whether the radar configuration is active or passive. Active radar transmits a signal that is reflected back to the receiver. Passive radar depends upon the object sending a signal to the receiver.

The Doppler frequency shift for active radar is as follows, where FD is the Doppler frequency, FT is the transmit frequency, VR is the radial velocity, and C is the speed of light:

FD = 2 FT (VR / C).

Passive radar is applicable to electronic countermeasures and radio astronomy as follows:

FD = FT (VR / C).

Only the radial component of the velocity is relevant. When the reflector is moving at a right angle to the radar beam, it has no radial velocity relative to the radar; vehicles and weather moving parallel to the radar beam produce the maximum Doppler frequency shift.
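A minimal Python sketch of the active-radar relation FD = 2 FT (VR / C). The 10 GHz transmit frequency and 300 m/s target speed are made-up example values:

```python
import math

C = 3.0e8  # speed of light in m/s (rounded)

def doppler_shift_active(f_transmit, v_radial):
    """Active-radar Doppler shift: FD = 2 * FT * (VR / C)."""
    return 2.0 * f_transmit * v_radial / C

def radial_velocity(speed, angle_deg):
    """Component of the target's speed along the radar beam."""
    return speed * math.cos(math.radians(angle_deg))

# A 300 m/s target flying straight down the beam of a 10 GHz radar:
print(doppler_shift_active(10e9, radial_velocity(300, 0)))  # 20000.0 Hz
# The same target crossing at a right angle produces essentially no shift:
print(abs(doppler_shift_active(10e9, radial_velocity(300, 90))) < 1e-6)  # True
```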

When the transmit frequency (FT) is pulsed, using a pulse repetition frequency of FR, the resulting frequency spectrum will contain harmonic frequencies above and below FT, separated by FR. As a result, the Doppler measurement is only non-ambiguous if the Doppler frequency shift is less than half of FR, called the Nyquist frequency, since the returned frequency otherwise cannot be distinguished from shifting of a harmonic frequency above or below, thus requiring:

|FD| < FR / 2

Or when substituting FD with 2 FT (VR / C) and solving for VR:

|VR| < C FR / (4 FT)

As an example, a Doppler weather radar with a pulse rate of 2 kHz and a transmit frequency of 1 GHz can reliably measure weather speeds of at most 150 m/s (340 mph), and thus cannot reliably determine the radial velocity of aircraft moving at 1,000 m/s (2,200 mph).
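The worked example can be checked numerically. This sketch (the function name is illustrative) computes the largest unambiguous radial speed from the PRF and transmit frequency:

```python
C = 3.0e8  # speed of light in m/s (rounded)

def max_unambiguous_velocity(f_transmit, prf):
    """Largest radial speed measurable without Doppler aliasing.

    From |FD| < FR / 2 and FD = 2 * FT * (VR / C):
    |VR| < C * FR / (4 * FT).
    """
    return C * prf / (4.0 * f_transmit)

# The weather-radar example from the text: 2 kHz PRF, 1 GHz transmit frequency.
print(max_unambiguous_velocity(f_transmit=1e9, prf=2e3))  # 150.0 m/s
```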

Polarization

In all electromagnetic radiation, the electric field is perpendicular to the direction of propagation, and the electric field direction is the polarization of the wave. For a transmitted radar signal, the polarization can be controlled to yield different effects. Radars use horizontal, vertical, linear, and circular polarization to detect different types of reflections. For example, circular polarization is used to minimize the interference caused by rain. Linear polarization returns usually indicate metal surfaces. Random polarization returns usually indicate a fractal surface, such as rocks or soil, and are used by navigation radars.

Limiting factors

Beam path and range

Echo heights above ground

The height of echoes above ground is given approximately by:

H = √(r² + (ke ae)² + 2 r ke ae sin θe) − ke ae + ha

where:
  r : distance radar–target
  ke : 4/3 (effective Earth-radius factor accounting for standard atmospheric refraction)
  ae : Earth radius
  θe : elevation angle above the radar horizon
  ha : height of the feedhorn above ground

A radar beam follows a linear path in a vacuum but a somewhat curved path in the atmosphere due to variation in the refractive index of air; the resulting limit on detection distance is called the radar horizon. Even when the beam is emitted parallel to the ground, it rises above the ground as the Earth's surface curves away below the horizon. Furthermore, the signal is attenuated by the medium the beam crosses, and the beam disperses.
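The echo-height relation with the standard 4/3 Earth-radius model can be sketched directly. The function name and the 100 km example range are illustrative; this shows how a horizontally emitted beam ends up well above the ground at long range:

```python
import math

def echo_height(r, elev_deg, ha=0.0, ae=6.371e6, ke=4/3):
    """Approximate echo height above ground for the 4/3-Earth model.

    r: slant range to the target (m); elev_deg: elevation angle above
    the radar horizon (degrees); ha: feedhorn height (m);
    ae: Earth radius (m); ke: effective Earth-radius factor.
    """
    kea = ke * ae
    theta = math.radians(elev_deg)
    return math.sqrt(r * r + kea * kea + 2 * r * kea * math.sin(theta)) - kea + ha

# A beam emitted parallel to the ground (0 degrees elevation) is already
# hundreds of metres above the surface 100 km out:
print(round(echo_height(100e3, 0.0)))  # about 590 m
```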

The maximum range of conventional radar can be limited by a number of factors:

  • Line of sight, which depends on the height above the ground. Without a direct line of sight, the path of the beam is blocked.
  • The maximum non-ambiguous range, which is determined by the pulse repetition frequency. The maximum non-ambiguous range is the distance the pulse can travel to and return from before the next pulse is emitted.
  • Radar sensitivity and the power of the return signal as computed in the radar equation. This component includes factors such as the environmental conditions and the size (or radar cross section) of the target.

Noise

Signal noise is an internal source of random variations in the signal, which is generated by all electronic components.

Reflected signals decline rapidly as distance increases, so noise introduces a radar range limitation. The noise floor and signal to noise ratio are two different measures of performance that affect range performance. Reflectors that are too far away produce too little signal to exceed the noise floor and cannot be detected. Detection requires a signal that exceeds the noise floor by at least the signal to noise ratio.

Noise typically appears as random variations superimposed on the desired echo signal received in the radar receiver. The lower the power of the desired signal, the more difficult it is to discern it from the noise. Noise figure is a measure of the noise produced by a receiver compared to an ideal receiver, and this needs to be minimized.

Shot noise is produced by electrons in transit across a discontinuity, which occurs in all detectors. Shot noise is the dominant source in most receivers. There will also be flicker noise caused by electron transit through amplification devices, which is reduced using heterodyne amplification. Another reason for heterodyne processing is that, for fixed fractional bandwidth, the instantaneous bandwidth increases linearly with frequency. This allows improved range resolution. The one notable exception to heterodyne (downconversion) radar systems is ultra-wideband radar. Here a single cycle, or transient wave, is used, similar to UWB communications.

Noise is also generated by external sources, most importantly the natural thermal radiation of the background surrounding the target of interest. In modern radar systems, the internal noise is typically about equal to or lower than the external noise. An exception is if the radar is aimed upwards at clear sky, where the scene is so "cold" that it generates very little thermal noise. The thermal noise is given by kB T B, where T is temperature, B is bandwidth (post matched filter) and kB is Boltzmann's constant. There is an appealing intuitive interpretation of this relationship in a radar. Matched filtering allows the entire energy received from a target to be compressed into a single bin (be it a range, Doppler, elevation, or azimuth bin). On the surface it would appear that then within a fixed interval of time one could obtain perfect, error free, detection. To do this one simply compresses all energy into an infinitesimal time slice. What limits this approach in the real world is that, while time is arbitrarily divisible, current is not. The quantum of electrical energy is an electron, and so the best one can do is match filter all energy into a single electron. Since the electron is moving at a certain temperature (Planck spectrum) this noise source cannot be further eroded. We see then that radar, like all macro-scale entities, is profoundly impacted by quantum theory.
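The kB T B expression can be evaluated directly. In this sketch the 1 MHz bandwidth is an arbitrary example and 290 K is the conventional reference temperature, not a figure from the text:

```python
import math

def thermal_noise_power(bandwidth_hz, temp_k=290.0):
    """Thermal noise floor kB * T * B in watts."""
    kb = 1.380649e-23  # Boltzmann's constant, J/K
    return kb * temp_k * bandwidth_hz

def watts_to_dbm(p_watts):
    """Convert power in watts to decibels relative to one milliwatt."""
    return 10.0 * math.log10(p_watts / 1e-3)

# A 1 MHz post-matched-filter bandwidth at the 290 K reference temperature:
print(round(watts_to_dbm(thermal_noise_power(1e6)), 1))  # about -114.0 dBm
```

Narrowing the matched-filter bandwidth lowers this floor proportionally, which is one reason pulse compression improves detection.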

Noise is random and target signals are not. Signal processing can take advantage of this phenomenon to reduce the noise floor using two strategies. The kind of signal integration used with moving target indication can improve noise by up to √2 for each stage. The signal can also be split among multiple filters for pulse-Doppler signal processing, which reduces the noise floor by the number of filters. These improvements depend upon coherence.

Interference

Radar systems must overcome unwanted signals in order to focus on the targets of interest. These unwanted signals may originate from internal and external sources, both passive and active. The ability of the radar system to overcome these unwanted signals defines its signal-to-noise ratio (SNR). SNR is defined as the ratio of the signal power to the noise power within the desired signal; it compares the level of a desired target signal to the level of background noise (atmospheric noise and noise generated within the receiver). The higher a system's SNR the better it is at discriminating actual targets from noise signals.
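SNR as defined here is a simple power ratio, usually quoted in decibels. A minimal sketch with made-up power levels:

```python
import math

def snr_db(signal_power_w, noise_power_w):
    """Signal-to-noise ratio in decibels: 10 * log10(Psignal / Pnoise)."""
    return 10.0 * math.log10(signal_power_w / noise_power_w)

# An echo 100 times stronger than the background noise:
print(snr_db(4e-13, 4e-15))  # 20.0 dB
```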

Clutter

Clutter refers to radio frequency (RF) echoes returned from targets which are uninteresting to the radar operators. Such targets include natural objects such as ground, sea, precipitation (such as rain, snow or hail, when the radar is not being tasked for meteorological purposes), sand storms, animals (especially birds), atmospheric turbulence, and other atmospheric effects, such as ionosphere reflections, meteor trails, and hail spikes. Clutter may also be returned from man-made objects such as buildings and, intentionally, by radar countermeasures such as chaff.

Some clutter may also be caused by a long radar waveguide between the radar transceiver and the antenna. In a typical plan position indicator (PPI) radar with a rotating antenna, this will usually be seen as a "sun" or "sunburst" in the center of the display as the receiver responds to echoes from dust particles and misguided RF in the waveguide. Adjusting the timing between when the transmitter sends a pulse and when the receiver stage is enabled will generally reduce the sunburst without affecting the accuracy of the range, since most sunburst is caused by a diffused transmit pulse reflected before it leaves the antenna. Clutter is considered a passive interference source, since it only appears in response to radar signals sent by the radar.

Clutter is detected and neutralized in several ways. Clutter tends to appear static between radar scans; on subsequent scan echoes, desirable targets will appear to move, and all stationary echoes can be eliminated. Sea clutter can be reduced by using horizontal polarization, while rain is reduced with circular polarization (meteorological radars wish for the opposite effect, and therefore use linear polarization to detect precipitation). Other methods attempt to increase the signal-to-clutter ratio.

Clutter either moves with the wind or is stationary. Two common strategies to improve measurement or performance in a clutter environment are:

  • Moving target indication, which integrates successive pulses and
  • Doppler processing, which uses filters to separate clutter from desirable signals.

The most effective clutter reduction technique is pulse-Doppler radar. Doppler separates clutter from aircraft and spacecraft using a frequency spectrum, so individual signals can be separated from multiple reflectors located in the same volume using velocity differences. This requires a coherent transmitter. Another technique uses a moving target indicator that subtracts the receive signal from two successive pulses using phase to reduce signals from slow moving objects. This can be adapted for systems that lack a coherent transmitter, such as time-domain pulse-amplitude radar.

Constant false alarm rate, a form of automatic gain control (AGC), is a method that relies on clutter returns far outnumbering echoes from targets of interest. The receiver's gain is automatically adjusted to maintain a constant level of overall visible clutter. While this does not help detect targets masked by stronger surrounding clutter, it does help to distinguish strong target sources. In the past, radar AGC was electronically controlled and affected the gain of the entire radar receiver. As radars evolved, AGC became computer-software controlled and affected the gain with greater granularity in specific detection cells.
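The idea behind constant false alarm rate can be sketched as a cell-averaging CFAR detector: each range cell's detection threshold is set from the average power of its neighbouring training cells, so the threshold rides up and down with the local clutter level. This is a simplified illustration; the guard/training window sizes and the scale factor are assumptions, not values from the text:

```python
def ca_cfar(power, guard=2, train=8, scale=5.0):
    """Cell-averaging CFAR: flag cells whose power exceeds
    scale * (mean of the surrounding training cells).
    Guard cells around the cell under test are excluded so a
    strong target does not raise its own threshold."""
    detections = []
    n = len(power)
    for i in range(n):
        training = [power[j]
                    for j in range(i - guard - train, i + guard + train + 1)
                    if 0 <= j < n and abs(j - i) > guard]
        if power[i] > scale * sum(training) / len(training):
            detections.append(i)
    return detections

# uniform clutter with one strong echo in range cell 25
returns = [1.0] * 50
returns[25] = 50.0
hits = ca_cfar(returns)
```

Note the design choice: because the threshold is relative to local clutter, a uniformly noisy scene produces a roughly constant rate of false alarms, which is the property the name describes.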

Radar multipath echoes from a target cause ghosts to appear.

Clutter may also originate from multipath echoes from valid targets caused by ground reflection, atmospheric ducting or ionospheric reflection/refraction (e.g., anomalous propagation). This clutter type is especially bothersome since it appears to move and behave like other normal (point) targets of interest. In a typical scenario, an aircraft echo is reflected from the ground below, appearing to the receiver as an identical target below the correct one. The radar may try to unify the targets, reporting the target at an incorrect height, or eliminating it on the basis of jitter or a physical impossibility. Terrain bounce jamming exploits this response by amplifying the radar signal and directing it downward. These problems can be overcome by incorporating a ground map of the radar's surroundings and eliminating all echoes which appear to originate below ground or above a certain height. Monopulse can be improved by altering the elevation algorithm used at low elevation. In newer air traffic control radar equipment, algorithms are used to identify the false targets by comparing the current pulse returns to those adjacent, as well as calculating return improbabilities.

Jamming

Radar jamming refers to radio frequency signals originating from sources outside the radar, transmitting in the radar's frequency and thereby masking targets of interest. Jamming may be intentional, as with an electronic warfare tactic, or unintentional, as with friendly forces operating equipment that transmits using the same frequency range. Jamming is considered an active interference source, since it is initiated by elements outside the radar and in general unrelated to the radar signals.

Jamming is problematic to radar since the jamming signal only needs to travel one way (from the jammer to the radar receiver) whereas the radar echoes travel two ways (radar-target-radar) and are therefore significantly reduced in power by the time they return to the radar receiver, in accordance with the inverse-square law. Jammers can therefore be much less powerful than the radars they jam and still effectively mask targets along the line of sight from the jammer to the radar (mainlobe jamming). Jammers also affect radars along other lines of sight through the radar receiver's sidelobes (sidelobe jamming).

Mainlobe jamming can generally only be reduced by narrowing the mainlobe solid angle and cannot fully be eliminated when directly facing a jammer which uses the same frequency and polarization as the radar. Sidelobe jamming can be overcome by reducing receiving sidelobes in the radar antenna design and by using an omnidirectional antenna to detect and disregard non-mainlobe signals. Other anti-jamming techniques are frequency hopping and polarization.

Radar signal processing

Distance measurement

Transit time

Pulse radar: The round-trip time for the radar pulse to get to the target and return is measured. The distance is proportional to this time.

One way to obtain a distance measurement is based on the time-of-flight: transmit a short pulse of radio signal (electromagnetic radiation) and measure the time it takes for the reflection to return. The distance is one-half the round trip time multiplied by the speed of the signal. The factor of one-half comes from the fact that the signal has to travel to the object and back again. Since radio waves travel at the speed of light, accurate distance measurement requires high-speed electronics. In most cases, the receiver does not detect the return while the signal is being transmitted. Through the use of a duplexer, the radar switches between transmitting and receiving at a predetermined rate. A similar effect imposes a maximum range as well. In order to maximize range, longer times between pulses should be used, referred to as a pulse repetition time, or its reciprocal, pulse repetition frequency.

These two effects tend to be at odds with each other, and it is not easy to combine both good short range and good long range in a single radar. This is because the short pulses needed for a good minimum range broadcast have less total energy, making the returns much smaller and the target harder to detect. This could be offset by using more pulses, but this would shorten the maximum range. So each radar uses a particular type of signal. Long-range radars tend to use long pulses with long delays between them, and short range radars use smaller pulses with less time between them. As electronics have improved many radars now can change their pulse repetition frequency, thereby changing their range. The newest radars fire two pulses during one cell, one for short range (about 10 km (6.2 mi)) and a separate signal for longer ranges (about 100 km (62 mi)).
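The short-range and long-range limits described above follow directly from the speed of light; a minimal sketch (the PRF and pulse width below are illustrative values, not from the text):

```python
C = 299_792_458.0  # speed of light, m/s

def max_unambiguous_range_m(prf_hz):
    """An echo must return before the next pulse is sent:
    R_max = c / (2 * PRF)."""
    return C / (2 * prf_hz)

def min_range_m(pulse_width_s):
    """The receiver is blanked while the pulse is transmitted:
    R_min = c * tau / 2."""
    return C * pulse_width_s / 2

# e.g. a 1500 Hz PRF gives roughly 100 km of unambiguous range,
# while a 1 microsecond pulse blinds the radar inside about 150 m
```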

Distance may also be measured as a function of time. The radar mile is the time it takes for a radar pulse to travel one nautical mile, reflect off a target, and return to the radar antenna. Since a nautical mile is defined as 1,852 m, dividing this distance by the speed of light (299,792,458 m/s) and multiplying the result by 2 (for the round trip) yields a radar mile of approximately 12.36 μs.
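The radar-mile figure can be checked directly:

```python
C = 299_792_458.0          # speed of light, m/s
NAUTICAL_MILE_M = 1852.0   # definition of the nautical mile

# round trip: out to a target one nautical mile away and back
radar_mile_s = 2 * NAUTICAL_MILE_M / C   # about 12.36 microseconds
```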

Frequency modulation

Continuous wave (CW) radar. Using frequency modulation allows range to be extracted.

Another form of distance measuring radar is based on frequency modulation. In these systems, the frequency of the transmitted signal is changed over time. Since the signal takes a finite time to travel to and from the target, the received signal is a different frequency than what the transmitter is broadcasting at the time the reflected signal arrives back at the radar. By comparing the frequency of the two signals, the difference can be easily measured. This was easily accomplished with very high accuracy even with 1940s electronics. A further advantage is that the radar can operate effectively at relatively low frequencies. This was important in the early development of this type, when high-frequency signal generation was difficult or expensive.

This technique can be used in continuous wave radar and is often found in aircraft radar altimeters. In these systems a "carrier" radar signal is frequency modulated in a predictable way, typically varying up and down with a sine wave or sawtooth pattern at audio frequencies. The signal is then sent out from one antenna and received on another, typically located on the bottom of the aircraft, and the signal can be continuously compared using a simple beat frequency modulator that produces an audio frequency tone from the returned signal and a portion of the transmitted signal.

The frequency shift seen on the received signal is proportional to the time delay between the radar and the reflector, and thus directly proportional to the distance travelled. That distance can be displayed on an instrument, and it may also be available via the transponder. This signal processing is similar to that used in speed detecting Doppler radar. Example systems using this approach are AZUSA, MISTRAM, and UDOP.
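For a linear (sawtooth) sweep, this proportionality has a simple closed form; a sketch with assumed sweep parameters (not values from the text):

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range_m(beat_hz, sweep_bandwidth_hz, sweep_time_s):
    """Linear-FM (sawtooth) sweep: the beat frequency between the
    transmitted and received signals gives the range as
    R = c * f_b * T / (2 * B)."""
    return C * beat_hz * sweep_time_s / (2 * sweep_bandwidth_hz)

# assumed sweep: 100 MHz of bandwidth over 1 ms; a 10 kHz beat tone
# then corresponds to a reflector roughly 15 m away
range_m = fmcw_range_m(10e3, 100e6, 1e-3)
```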

Terrestrial radar uses low-power FM signals that cover a larger frequency range. The multiple reflections are analyzed mathematically for pattern changes with multiple passes creating a computerized synthetic image. Doppler effects are used which allows slow moving objects to be detected as well as largely eliminating "noise" from the surfaces of bodies of water.

Pulse compression

The two techniques outlined above both have their disadvantages. The pulse timing technique has an inherent tradeoff in that the accuracy of the distance measurement is inversely related to the length of the pulse, while the energy, and thus detection range, is directly related. Increasing power for longer range while maintaining accuracy demands extremely high peak power, with 1960s early warning radars often operating in the tens of megawatts. The continuous wave methods spread this energy out in time and thus require much lower peak power compared to pulse techniques, but require some method of allowing the sent and received signals to operate at the same time, often demanding two separate antennas.

The introduction of new electronics in the 1960s allowed the two techniques to be combined. It starts with a longer pulse that is also frequency modulated. Spreading the broadcast energy out in time means lower peak energies can be used, with modern examples typically on the order of tens of kilowatts. On reception, the signal is sent into a system that delays different frequencies by different times. The resulting output is a much shorter pulse that is suitable for accurate distance measurement, while also compressing the received energy into a much higher energy peak and thus improving the signal-to-noise ratio. The technique is largely universal on modern large radars.
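The compression step can be illustrated with a toy matched filter: correlating a received linear-FM (chirp) pulse against a conjugated copy of itself collapses the long pulse into a sharp peak at the true delay. This is a simplified sketch; the pulse length and delay are arbitrary choices:

```python
import cmath

N = 64
# linear-FM (chirp) pulse: instantaneous frequency sweeps with time
pulse = [cmath.exp(1j * cmath.pi * (t * t) / N) for t in range(N)]

# the echo sits in a longer receive window, delayed by 100 samples
delay = 100
rx = [0j] * 256
for t in range(N):
    rx[delay + t] += pulse[t]

# matched filter: correlate rx against the conjugated pulse; the
# 64-sample pulse is "compressed" into one sharp peak at the delay
out = []
for k in range(len(rx) - N + 1):
    acc = 0j
    for t in range(N):
        acc += rx[k + t] * pulse[t].conjugate()
    out.append(abs(acc))

peak = max(range(len(out)), key=lambda k: out[k])
```

At the true delay every term of the correlation adds in phase (the peak magnitude equals the pulse length), while elsewhere the swept phases largely cancel, which is exactly the energy-concentration effect described above.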

Speed measurement

Speed is the change in distance to an object with respect to time. Thus the existing system for measuring distance, combined with a memory capacity to see where the target last was, is enough to measure speed. At one time the memory consisted of a user making grease pencil marks on the radar screen and then calculating the speed using a slide rule. Modern radar systems perform the equivalent operation faster and more accurately using computers.

If the transmitter's output is coherent (phase synchronized), there is another effect that can be used to make almost instant speed measurements (no memory is required), known as the Doppler effect. Most modern radar systems use this principle in Doppler radar and pulse-Doppler radar systems (weather radar, military radar). The Doppler effect is only able to determine the relative speed of the target along the line of sight from the radar to the target. Any component of target velocity perpendicular to the line of sight cannot be determined by using the Doppler effect alone, but it can be determined by tracking the target's azimuth over time.

It is possible to make a Doppler radar without any pulsing, known as a continuous-wave radar (CW radar), by sending out a very pure signal of a known frequency. CW radar is ideal for determining the radial component of a target's velocity. CW radar is typically used by traffic enforcement to measure vehicle speed quickly and accurately where the range is not important.
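The radial-velocity relationship is the two-way Doppler shift, f_d = 2·v·f0/c. A sketch using the 24.150 GHz K-band traffic-radar frequency listed in the band table later in the article (the 30 m/s vehicle speed is an assumed example):

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift_hz(radial_speed_mps, carrier_hz):
    """Two-way Doppler shift for a target closing at radial_speed_mps:
    f_d = 2 * v * f0 / c."""
    return 2 * radial_speed_mps * carrier_hz / C

def radial_speed_mps(doppler_hz, carrier_hz):
    """Invert the shift back to a radial speed."""
    return doppler_hz * C / (2 * carrier_hz)

# a car closing at 30 m/s (~108 km/h) seen by a 24.150 GHz radar gun
# produces a shift of a few kilohertz, easily measured as an audio tone
shift = doppler_shift_hz(30.0, 24.15e9)
```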

When using a pulsed radar, the variation between the phase of successive returns gives the distance the target has moved between pulses, and thus its speed can be calculated. Other mathematical developments in radar signal processing include time-frequency analysis (Weyl Heisenberg or wavelet), as well as the chirplet transform which makes use of the change of frequency of returns from moving targets ("chirp").

Pulse-Doppler signal processing

Pulse-Doppler signal processing. The Range Sample axis represents individual samples taken in between each transmit pulse. The Range Interval axis represents each successive transmit pulse interval during which samples are taken. The Fast Fourier Transform process converts time-domain samples into frequency domain spectra. This is sometimes called the bed of nails.

Pulse-Doppler signal processing includes frequency filtering in the detection process. The space between each transmit pulse is divided into range cells or range gates. Each cell is filtered independently much like the process used by a spectrum analyzer to produce the display showing different frequencies. Each different distance produces a different spectrum. These spectra are used to perform the detection process. This is required to achieve acceptable performance in hostile environments involving weather, terrain, and electronic countermeasures.
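The per-cell filtering can be sketched as a discrete Fourier transform taken across successive pulses for one range gate: each output bin corresponds to a narrow band of Doppler frequencies, i.e. a radial-velocity interval. This is a toy illustration; the 16-pulse burst and the target's phase progression are assumptions:

```python
import cmath

def doppler_spectrum(pulse_samples):
    """DFT across successive pulses for one range gate; each bin k
    collects energy from reflectors at one Doppler frequency."""
    n = len(pulse_samples)
    return [abs(sum(pulse_samples[m] * cmath.exp(-2j * cmath.pi * k * m / n)
                    for m in range(n)))
            for k in range(n)]

# a target whose echo phase advances 1/8 of a cycle per pulse lands
# cleanly in Doppler bin 2 of a 16-pulse burst
samples = [cmath.exp(2j * cmath.pi * 2 * m / 16) for m in range(16)]
spectrum = doppler_spectrum(samples)
```

Stationary clutter concentrates in bin 0, which is why filtering by Doppler bin separates moving targets from clutter in the same range cell.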

The primary purpose is to measure both the amplitude and frequency of the aggregate reflected signal from multiple distances. This is used with weather radar to measure radial wind velocity and precipitation rate in each different volume of air. This is linked with computing systems to produce a real-time electronic weather map. Aircraft safety depends upon continuous access to accurate weather radar information that is used to prevent injuries and accidents. Weather radar uses a low PRF. Coherency requirements are not as strict as those for military systems because individual signals ordinarily do not need to be separated. Less sophisticated filtering is required, and range ambiguity processing is not normally needed with weather radar in comparison with military radar intended to track air vehicles.

The alternate purpose is "look-down/shoot-down" capability required to improve military air combat survivability. Pulse-Doppler is also used for ground based surveillance radar required to defend personnel and vehicles. Pulse-Doppler signal processing increases the maximum detection distance using less radiation in close proximity to aircraft pilots, shipboard personnel, infantry, and artillery. Reflections from terrain, water, and weather produce signals much larger than aircraft and missiles, which allows fast moving vehicles to hide using nap-of-the-earth flying techniques and stealth technology to avoid detection until an attack vehicle is too close to destroy. Pulse-Doppler signal processing incorporates more sophisticated electronic filtering that safely eliminates this kind of weakness. This requires the use of medium pulse-repetition frequency with phase coherent hardware that has a large dynamic range. Military applications require medium PRF which prevents range from being determined directly, and range ambiguity resolution processing is required to identify the true range of all reflected signals. Radial movement is usually linked with Doppler frequency to produce a lock signal that cannot be produced by radar jamming signals. Pulse-Doppler signal processing also produces audible signals that can be used for threat identification.

Reduction of interference effects

Signal processing is employed in radar systems to reduce the radar interference effects. Signal processing techniques include moving target indication, Pulse-Doppler signal processing, moving target detection processors, correlation with secondary surveillance radar targets, space-time adaptive processing, and track-before-detect. Constant false alarm rate and digital terrain model processing are also used in clutter environments.

Plot and track extraction

A Track algorithm is a radar performance enhancement strategy. Tracking algorithms provide the ability to predict future position of multiple moving objects based on the history of the individual positions being reported by sensor systems.

Historical information is accumulated and used to predict future position for use with air traffic control, threat estimation, combat system doctrine, gun aiming, and missile guidance. Position data is accumulated by radar sensors over the span of a few minutes.

There are four common track algorithms.

Radar video returns from aircraft can be subjected to a plot extraction process whereby spurious and interfering signals are discarded. A sequence of target returns can be monitored through a device known as a plot extractor.

The non-relevant real time returns can be removed from the displayed information and a single plot displayed. In some radar systems, or alternatively in the command and control system to which the radar is connected, a radar tracker is used to associate the sequence of plots belonging to individual targets and estimate the targets' headings and speeds.

Engineering

Radar components

A radar's components are:

  • A transmitter that generates the radio signal with an oscillator such as a klystron or a magnetron and controls its duration by a modulator.
  • A waveguide that links the transmitter and the antenna.
  • A duplexer that serves as a switch between the antenna and the transmitter or the receiver for the signal when the antenna is used in both situations.
  • A receiver. Knowing the shape of the desired received signal (a pulse), an optimal receiver can be designed using a matched filter.
  • A display processor to produce signals for human readable output devices.
  • An electronic section that controls all those devices and the antenna to perform the radar scan ordered by software.
  • A link to end user devices and displays.

Antenna design

AS-3263/SPS-49(V) antenna (US Navy)

Radio signals broadcast from a single antenna will spread out in all directions, and likewise a single antenna will receive signals equally from all directions. This leaves the radar with the problem of deciding where the target object is located.

Early systems tended to use omnidirectional broadcast antennas, with directional receiver antennas which were pointed in various directions. For instance, the first system to be deployed, Chain Home, used two straight antennas at right angles for reception, each on a different display. The maximum return would be detected with an antenna at right angles to the target, and a minimum with the antenna pointed directly at it (end on). The operator could determine the direction to a target by rotating the antenna so one display showed a maximum while the other showed a minimum. One serious limitation with this type of solution is that the broadcast is sent out in all directions, so the amount of energy in the direction being examined is a small part of that transmitted. To get a reasonable amount of power on the "target", the transmitting aerial should also be directional.

Parabolic reflector

Surveillance radar antenna
 

More modern systems use a steerable parabolic "dish" to create a tight broadcast beam, typically using the same dish as the receiver. Such systems often combine two radar frequencies in the same antenna in order to allow automatic steering, or radar lock.

Parabolic reflectors can be either symmetric parabolas or spoiled parabolas: Symmetric parabolic antennas produce a narrow "pencil" beam in both the X and Y dimensions and consequently have a higher gain. The NEXRAD Pulse-Doppler weather radar uses a symmetric antenna to perform detailed volumetric scans of the atmosphere. Spoiled parabolic antennas produce a narrow beam in one dimension and a relatively wide beam in the other. This feature is useful if target detection over a wide range of angles is more important than target location in three dimensions. Most 2D surveillance radars use a spoiled parabolic antenna with a narrow azimuthal beamwidth and wide vertical beamwidth. This beam configuration allows the radar operator to detect an aircraft at a specific azimuth but at an indeterminate height. Conversely, so-called "nodder" height finding radars use a dish with a narrow vertical beamwidth and wide azimuthal beamwidth to detect an aircraft at a specific height but with low azimuthal precision.

Types of scan

  • Primary Scan: A scanning technique where the main antenna aerial is moved to produce a scanning beam, examples include circular scan, sector scan, etc.
  • Secondary Scan: A scanning technique where the antenna feed is moved to produce a scanning beam, examples include conical scan, unidirectional sector scan, lobe switching, etc.
  • Palmer Scan: A scanning technique that produces a scanning beam by moving the main antenna and its feed. A Palmer Scan is a combination of a Primary Scan and a Secondary Scan.
  • Conical scanning: The radar beam is rotated in a small circle around the "boresight" axis, which is pointed at the target.

Slotted waveguide

Slotted waveguide antenna

Applied similarly to the parabolic reflector, the slotted waveguide is moved mechanically to scan and is particularly suitable for non-tracking surface scan systems, where the vertical pattern may remain constant. Owing to its lower cost and less wind exposure, shipboard, airport surface, and harbour surveillance radars now use this approach in preference to a parabolic antenna.

Phased array

Phased array: Not all radar antennas must rotate to scan the sky.
 

Another method of steering is used in a phased array radar.

Phased array antennas are composed of evenly spaced similar antenna elements, such as aerials or rows of slotted waveguide. Each antenna element or group of antenna elements incorporates a discrete phase shift that produces a phase gradient across the array. For example, array elements producing a 5 degree phase shift for each wavelength across the array face will produce a beam pointed 5 degrees away from the centerline perpendicular to the array face. Signals travelling along that beam will be reinforced. Signals offset from that beam will be cancelled. The amount of reinforcement is antenna gain. The amount of cancellation is side-lobe suppression.
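The relationship between the per-element phase step and the beam direction can be sketched for a uniform linear array (the element spacing and phase step below are assumed example values):

```python
import math

def steering_angle_deg(phase_step_rad, element_spacing_m, wavelength_m):
    """Beam direction of a uniform linear array, from the standard
    relation sin(theta) = (phase_step / 2*pi) * (wavelength / spacing)."""
    s = (phase_step_rad / (2 * math.pi)) * (wavelength_m / element_spacing_m)
    return math.degrees(math.asin(s))

# half-wavelength element spacing with a quarter-cycle phase step
# per element steers the beam 30 degrees off the array normal
```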

Phased array radars have been in use since the earliest years of radar in World War II (Mammut radar), but electronic device limitations led to poor performance. Phased array radars were originally used for missile defence (see for example Safeguard Program). They are the heart of the ship-borne Aegis Combat System and the Patriot Missile System. The massive redundancy associated with having a large number of array elements increases reliability at the expense of gradual performance degradation that occurs as individual phase elements fail. To a lesser extent, phased array radars have been used in weather surveillance. As of 2017, NOAA plans to implement a national network of multi-function phased array radars throughout the United States within 10 years, for meteorological studies and flight monitoring.

Phased array antennas can be built to conform to specific shapes, like missiles, infantry support vehicles, ships, and aircraft.

As the price of electronics has fallen, phased array radars have become more common. Almost all modern military radar systems are based on phased arrays, where the small additional cost is offset by the improved reliability of a system with no moving parts. Traditional moving-antenna designs are still widely used in roles where cost is a significant factor such as air traffic surveillance and similar systems.

Phased array radars are valued for use in aircraft since they can track multiple targets. The first aircraft to use a phased array radar was the B-1B Lancer. The first fighter aircraft to use phased array radar was the Mikoyan MiG-31. The MiG-31M's SBI-16 Zaslon Passive electronically scanned array radar was considered to be the world's most powerful fighter radar, until the AN/APG-77 Active electronically scanned array was introduced on the Lockheed Martin F-22 Raptor.

Phased-array interferometry or aperture synthesis techniques, using an array of separate dishes that are phased into a single effective aperture, are not typical for radar applications, although they are widely used in radio astronomy. Because of the thinned array curse, such multiple aperture arrays, when used in transmitters, result in narrow beams at the expense of reducing the total power transmitted to the target. In principle, such techniques could increase spatial resolution, but the lower power means that this is generally not effective.

Aperture synthesis by post-processing motion data from a single moving source, on the other hand, is widely used in space and airborne radar systems.

Frequency bands

Antennas generally have to be sized similar to the wavelength of the operational frequency, normally within an order of magnitude. This provides a strong incentive to use shorter wavelengths as this will result in smaller antennas. Shorter wavelengths also result in higher resolution due to diffraction, meaning the shaped reflector seen on most radars can also be made smaller for any desired beamwidth.
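The beamwidth-versus-size tradeoff is commonly approximated as θ ≈ k·λ/D, with k around 70 for a typical parabolic dish. The constant and the example numbers below are rule-of-thumb assumptions, not figures from the text:

```python
C = 299_792_458.0  # speed of light, m/s

def beamwidth_deg(freq_hz, dish_diameter_m, k=70.0):
    """Approximate half-power beamwidth of a circular aperture:
    theta ~ k * lambda / D degrees, with k ~ 70 as a common
    rule of thumb for parabolic dishes."""
    wavelength_m = C / freq_hz
    return k * wavelength_m / dish_diameter_m

# an assumed 10 m dish at S-band (3 GHz) gives a beam of about 0.7 deg;
# the same beamwidth at X-band needs a dish only a quarter the size
```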

Opposing the move to smaller wavelengths are a number of practical issues. For one, the electronics needed to produce high-power very short wavelengths were generally more complex and expensive than those needed for longer wavelengths, or did not exist at all. Another issue is that the radar equation's effective aperture term means that a given antenna (or reflector) size is more efficient at longer wavelengths. Additionally, shorter wavelengths may interact with molecules or raindrops in the air, scattering the signal. Very long wavelengths also have additional diffraction effects that make them suitable for over-the-horizon radars. For this reason, a wide variety of wavelengths are used in different roles.

The traditional band names originated as code-names during World War II and are still in military and aviation use throughout the world. They have been adopted in the United States by the Institute of Electrical and Electronics Engineers and internationally by the International Telecommunication Union. Most countries have additional regulations to control which parts of each band are available for civilian or military use.

Other users of the radio spectrum, such as the broadcasting and electronic countermeasures industries, have replaced the traditional military designations with their own systems.

Radar frequency bands
Band name Frequency range Wavelength range Notes
HF 3–30 MHz 10–100 m Coastal radar systems, over-the-horizon (OTH) radars; 'high frequency'
VHF 30–300 MHz 1–10 m Very long range, ground penetrating; 'very high frequency'. Early radar systems generally operated in VHF as suitable electronics had already been developed for broadcast radio. Today this band is heavily congested and no longer suitable for radar due to interference.
P < 300 MHz > 1 m 'P' for 'previous', applied retrospectively to early radar systems; essentially HF + VHF. Often used for remote sensing because of good vegetation penetration.
UHF 300–1000 MHz 0.3–1 m Very long range (e.g. ballistic missile early warning), ground penetrating, foliage penetrating; 'ultra high frequency'. Efficiently produced and received at very high energy levels, and also reduces the effects of nuclear blackout, making them useful in the missile detection role.
L 1–2 GHz 15–30 cm Long range air traffic control and surveillance; 'L' for 'long'. Widely used for long range early warning radars as they combine good reception qualities with reasonable resolution.
S 2–4 GHz 7.5–15 cm Moderate range surveillance, Terminal air traffic control, long-range weather, marine radar; 'S' for 'sentimetric', its code-name during WWII. Less efficient than L, but offering higher resolution, making them especially suitable for long-range ground controlled interception tasks.
C 4–8 GHz 3.75–7.5 cm Satellite transponders; a compromise (hence 'C') between X and S bands; weather; long range tracking
X 8–12 GHz 2.5–3.75 cm Missile guidance, marine radar, weather, medium-resolution mapping and ground surveillance; in the United States the narrow range 10.525 GHz ±25 MHz is used for airport radar; short-range tracking. Named X band because the frequency was a secret during WW2. Diffraction off raindrops during heavy rain limits the range in the detection role and makes this suitable only for short-range roles or those that deliberately detect rain.
Ku 12–18 GHz 1.67–2.5 cm High-resolution, also used for satellite transponders, frequency under K band (hence 'u')
K 18–24 GHz 1.11–1.67 cm From German kurz, meaning 'short'. Limited use due to absorption by water vapour at 22 GHz, so Ku and Ka on either side used instead for surveillance. K-band is used for detecting clouds by meteorologists, and by police for detecting speeding motorists. K-band radar guns operate at 24.150 ± 0.100 GHz.
Ka 24–40 GHz 0.75–1.11 cm Mapping, short range, airport surveillance; frequency just above K band (hence 'a') Photo radar, used to trigger cameras which take pictures of license plates of cars running red lights, operates at 34.300 ± 0.100 GHz.
mm 40–300 GHz 1.0–7.5 mm Millimetre band, subdivided as below. Oxygen in the air is an extremely effective attenuator around 60 GHz, as are other molecules at other frequencies, leading to the so-called propagation window at 94 GHz. Even in this window the attenuation is higher than that due to water at 22.2 GHz. This makes these frequencies generally useful only for short-range highly specific radars, like power line avoidance systems for helicopters or use in space where attenuation is not a problem. Multiple letters are assigned to these bands by different groups. These are from Baytron, a now defunct company that made test equipment.
V 40–75 GHz 4.0–7.5 mm Very strongly absorbed by atmospheric oxygen, which resonates at 60 GHz.
W 75–110 GHz 2.7–4.0 mm Used as a visual sensor for experimental autonomous vehicles, high-resolution meteorological observation, and imaging.
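The lettered bands in the table can be looked up mechanically; a sketch using the band edges above (the P and mm rows are omitted here since they overlap the lettered bands):

```python
# (band name, low edge in Hz, high edge in Hz), from the table above
BANDS = [
    ("HF", 3e6, 30e6), ("VHF", 30e6, 300e6), ("UHF", 300e6, 1e9),
    ("L", 1e9, 2e9), ("S", 2e9, 4e9), ("C", 4e9, 8e9),
    ("X", 8e9, 12e9), ("Ku", 12e9, 18e9), ("K", 18e9, 24e9),
    ("Ka", 24e9, 40e9), ("V", 40e9, 75e9), ("W", 75e9, 110e9),
]

def band_name(freq_hz):
    """Return the letter band containing freq_hz, or None."""
    for name, lo, hi in BANDS:
        if lo <= freq_hz < hi:
            return name
    return None
```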

Modulators

Modulators act to provide the waveform of the RF-pulse. There are two different radar modulator designs:

  • High voltage switch for non-coherent keyed power-oscillators. These modulators consist of a high voltage pulse generator formed from a high voltage supply, a pulse forming network, and a high voltage switch such as a thyratron. They generate short pulses of power to feed, e.g., the magnetron, a special type of vacuum tube that converts DC (usually pulsed) into microwaves. This technology is known as pulsed power. In this way, the transmitted pulse of RF radiation is kept to a defined and usually very short duration.
  • Hybrid mixers,[51] fed by a waveform generator and an exciter for a complex but coherent waveform. This waveform can be generated by low power/low-voltage input signals. In this case the radar transmitter must be a power-amplifier, e.g., a klystron or a solid state transmitter. In this way, the transmitted pulse is intrapulse-modulated and the radar receiver must use pulse compression techniques.

Coolant

Coherent microwave amplifiers operating above 1,000 watts microwave output, like travelling wave tubes and klystrons, require liquid coolant. The electron beam must contain 5 to 10 times more power than the microwave output, which can produce enough heat to generate plasma. This plasma flows from the collector toward the cathode. The same magnetic focusing that guides the electron beam forces the plasma into the path of the electron beam but flowing in the opposite direction. This introduces FM modulation which degrades Doppler performance. To prevent this, liquid coolant with minimum pressure and flow rate is required, and deionized water is normally used in most high power surface radar systems that utilize Doppler processing.

Coolanol (silicate ester) was used in several military radars in the 1970s. However, it is hygroscopic, leading to hydrolysis and formation of highly flammable alcohol. The loss of a U.S. Navy aircraft in 1978 was attributed to a silicate ester fire. Coolanol is also expensive and toxic. The U.S. Navy has instituted a program named Pollution Prevention (P2) to eliminate or reduce the volume and toxicity of waste, air emissions, and effluent discharges. Because of this, Coolanol is used less often today.

Regulations

Radar (also: RADAR) is defined by article 1.100 of the International Telecommunication Union's (ITU) ITU Radio Regulations (RR) as:

A radiodetermination system based on the comparison of reference signals with radio signals reflected, or retransmitted, from the position to be determined. Each radiodetermination system shall be classified by the radiocommunication service in which it operates permanently or temporarily. Typical radar utilizations are primary radar and secondary radar, these might operate in the radiolocation service or the radiolocation-satellite service.

Configurations

Radars come in a variety of configurations, varying in the emitter, the receiver, the antenna, the wavelength, the scan strategy, etc.

Tuesday, April 26, 2022

Scientific management

From Wikipedia, the free encyclopedia

Frederick Taylor (1856–1915), leading proponent of scientific management

Scientific management is a theory of management that analyzes and synthesizes workflows. Its main objective is improving economic efficiency, especially labor productivity. It was one of the earliest attempts to apply science to the engineering of processes to management. Scientific management is sometimes known as Taylorism after its pioneer, Frederick Winslow Taylor.

Taylor began the theory's development in the United States during the 1880s and 1890s within manufacturing industries, especially steel. Its peak of influence came in the 1910s; Taylor died in 1915 and by the 1920s, scientific management was still influential but had entered into competition and syncretism with opposing or complementary ideas.

Although scientific management as a distinct theory or school of thought was obsolete by the 1930s, most of its themes are still important parts of industrial engineering and management today. These include: analysis; synthesis; logic; rationality; empiricism; work ethic; efficiency and elimination of waste; standardization of best practices; disdain for tradition preserved merely for its own sake or to protect the social status of particular workers with particular skill sets; the transformation of craft production into mass production; and knowledge transfer between workers and from workers into tools, processes, and documentation.

Name

Taylor's own names for his approach initially included "shop management" and "process management". However, "scientific management" came to national attention in 1910 when crusading attorney Louis Brandeis (then not yet a Supreme Court justice) popularized the term. Brandeis had sought a consensus term for the approach with the help of practitioners like Henry L. Gantt and Frank B. Gilbreth. Brandeis then used the consensus term "scientific management" when he argued before the Interstate Commerce Commission (ICC) that a proposed increase in railroad rates was unnecessary despite an increase in labor costs; he alleged scientific management would overcome railroad inefficiencies. (The ICC ruled against the rate increase, but also dismissed as insufficiently substantiated the concept that the railroads were necessarily inefficient.) Taylor recognized the nationally known term "scientific management" as another good name for the concept, and adopted it in the title of his influential 1911 monograph.

History

The Midvale Steel Company, "one of America's great armor plate making plants," was the birthplace of scientific management. In 1877, at age 22, Frederick W. Taylor started as a clerk in Midvale, but advanced to foreman in 1880. As foreman, Taylor was "constantly impressed by the failure of his [team members] to produce more than about one-third of [what he deemed] a good day's work". Taylor determined to discover, by scientific methods, how long it should take men to perform each given piece of work; and it was in the fall of 1882 that he started to put the first features of scientific management into operation.

Horace Bookwalter Drury, in his 1918 work, Scientific management: A History and Criticism, identified seven other leaders in the movement, most of whom learned of and extended scientific management from Taylor's efforts:

  • Henry L. Gantt (1861–1919)
  • Carl G. Barth (1860–1939)
  • Horace K. Hathaway (1878–1944)
  • Morris L. Cooke (1872–1960)
  • Sanford E. Thompson (1867–1949)
  • Frank B. Gilbreth (1868–1924). Gilbreth's independent work on "motion study" is on record as early as 1885; after meeting Taylor in 1906 and being introduced to scientific management, Gilbreth devoted his efforts to introducing scientific management into factories. Gilbreth and his wife Dr Lillian Moller Gilbreth (1878–1972) performed micro-motion studies using stop-motion cameras as well as developing the profession of industrial/organizational psychology.
  • Harrington Emerson (1853–1931) began determining what industrial plants' products and costs were compared to what they ought to be in 1895. Emerson did not meet Taylor until December 1900, and the two never worked together.

Emerson's testimony in late 1910 to the Interstate Commerce Commission brought the movement to national attention and instigated serious opposition. Emerson contended the railroads might save $1,000,000 a day by paying greater attention to efficiency of operation. By January 1911, a leading railroad journal had begun a series of articles denying that the railroads were inefficiently managed.

When steps were taken to introduce scientific management at the government-owned Rock Island Arsenal in early 1911, it was opposed by Samuel Gompers, founder and President of the American Federation of Labor (an alliance of craft unions). When a subsequent attempt was made to introduce the bonus system into the government's Watertown Arsenal foundry during the summer of 1911, the entire force walked out for a few days. Congressional investigations followed, resulting in a ban on the use of time studies and pay premiums in Government service.

Taylor's death in 1915 at age 59 left the movement without its original leader. In management literature today, the term "scientific management" mostly refers to the work of Taylor and his disciples ("classical", implying "no longer current, but still respected for its seminal value") in contrast to newer, improved iterations of efficiency-seeking methods. Today, task-oriented optimization of work tasks is nearly ubiquitous in industry.

Pursuit of economic efficiency

Flourishing in the late 19th and early 20th century, scientific management built on earlier pursuits of economic efficiency. While it was prefigured in the folk wisdom of thrift, it favored empirical methods to determine efficient procedures rather than perpetuating established traditions. Thus it was followed by a profusion of successors in applied science, including time and motion study, the Efficiency Movement (which was a broader cultural echo of scientific management's impact on business managers specifically), Fordism, operations management, operations research, industrial engineering, management science, manufacturing engineering, logistics, business process management, business process reengineering, lean manufacturing, and Six Sigma. There is a fluid continuum linking scientific management with the later fields, and the different approaches often display a high degree of compatibility.

Taylor rejected the notion, which was universal in his day and still held today, that the trades, including manufacturing, were resistant to analysis and could only be performed by craft production methods. In the course of his empirical studies, Taylor examined various kinds of manual labor. For example, most bulk materials handling was manual at the time; material handling equipment as we know it today was mostly not developed yet. He looked at shoveling in the unloading of railroad cars full of ore; lifting and carrying in the moving of iron pigs at steel mills; the manual inspection of bearing balls; and others. He discovered many concepts that were not widely accepted at the time. For example, by observing workers, he decided that labor should include rest breaks so that the worker has time to recover from fatigue, either physical (as in shoveling or lifting) or mental (as in the ball inspection case). Workers were allowed to take more rests during work, and productivity increased as a result.

Subsequent forms of scientific management were articulated by Taylor's disciples, such as Henry Gantt; other engineers and managers, such as Benjamin S. Graham; and other theorists, such as Max Weber. Taylor's work also contrasts with other efforts, including those of Henri Fayol and those of Frank Gilbreth, Sr. and Lillian Moller Gilbreth (whose views originally shared much with Taylor's but later diverged in response to Taylorism's inadequate handling of human relations).

Soldiering

Scientific management requires a high level of managerial control over employee work practices and entails a higher ratio of managerial workers to laborers than previous management methods.[citation needed] Such detail-oriented management may cause friction between workers and managers.

Taylor observed that some workers were more talented than others, and that even smart ones were often unmotivated. He observed that most workers who are forced to perform repetitive tasks tend to work at the slowest rate that goes unpunished. This slow rate of work has been observed in many industries and many countries and has been called by various terms. Taylor used the term "soldiering", a term that reflects the way conscripts may approach following orders, and observed that, when paid the same amount, workers will tend to do the amount of work that the slowest among them does. Taylor describes soldiering as "the greatest evil with which the working-people ... are now afflicted".

This reflects the idea that workers have a vested interest in their own well-being, and do not benefit from working above the defined rate of work when it will not increase their remuneration. He, therefore, proposed that the work practice that had been developed in most work environments was crafted, intentionally or unintentionally, to be very inefficient in its execution. He posited that time and motion studies combined with rational analysis and synthesis could uncover one best method for performing any particular task, and that prevailing methods were seldom equal to these best methods. Crucially, Taylor himself prominently acknowledged that if each employee's compensation was linked to their output, their productivity would go up. Thus his compensation plans usually included piece rates. In contrast, some later adopters of time and motion studies ignored this aspect and tried to get large productivity gains while passing little or no compensation gains to the workforce, which contributed to resentment against the system.
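Taylor's best-known compensation device, the differential piece rate, can be sketched as follows. The standard and the two rates below are hypothetical illustrations, not Taylor's actual figures; the structure, though, is his: a worker who meets the time-study standard earns the higher per-piece rate on all pieces, so meeting the standard produces a sharp jump in daily pay.

```python
def differential_piece_rate(pieces, standard=50, low_rate=0.35, high_rate=0.50):
    """Taylor-style differential piece rate (illustrative figures).

    Workers who meet the time-study standard earn the higher per-piece
    rate on ALL pieces produced; those who fall short earn the lower
    rate on all pieces."""
    rate = high_rate if pieces >= standard else low_rate
    return pieces * rate

# One piece below standard vs. exactly at standard:
print(differential_piece_rate(49))  # low rate applied to all 49 pieces
print(differential_piece_rate(50))  # high rate applied to all 50 pieces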

Productivity, automation, and unemployment

A machinist at the Tabor Company, a firm where Frederick Taylor's consultancy was applied to practice, about 1905

Taylorism led to productivity increases, meaning fewer workers or working hours were needed to produce the same amount of goods. In the short term, productivity increases like those achieved by Taylor's efficiency techniques can cause considerable disruption. Labor relations often become contentious over whether the financial benefits will accrue to owners in the form of increased profits, or workers in the form of increased wages. As a result of decomposition and documentation of manufacturing processes, companies employing Taylor's methods might be able to hire lower-skill workers, enlarging the pool of workers and thus lowering wages and job security.

In the long term, most economists consider productivity increases as a benefit to the economy overall, and necessary to improve the standard of living for consumers in general. By the time Taylor was doing his work, improvements in agricultural productivity had freed up a large portion of the workforce for the manufacturing sector, allowing those workers in turn to buy new types of consumer goods instead of working as subsistence farmers. In later years, increased manufacturing efficiency would free up large sections of the workforce for the service sector. If captured as profits or wages, the money generated by more-productive companies would be spent on new goods and services; if free market competition forces prices down close to the cost of production, consumers effectively capture the benefits and have more money to spend on new goods and services. Either way, new companies and industries spring up to profit from increased demand, and due to freed-up labor are able to hire workers. But the long-term benefits are no guarantee that individual displaced workers will be able to get new jobs that pay as well as or better than their old jobs, as this may require access to education or job training, or moving to a different part of the country where new industries are growing. Inability to obtain new employment due to mismatches like these is known as structural unemployment, and economists debate to what extent this is happening in the long term, if at all, as well as the impact on income inequality for those who do find jobs.

Though not foreseen by early proponents of scientific management, detailed decomposition and documentation of an optimal production method also makes automation of the process easier, especially physical processes that would later use industrial control systems and numerical control. Widespread economic globalization also creates opportunity for work to be outsourced to lower-wage areas, with knowledge transfer made easier if an optimal method is already clearly documented. Especially when wages or wage differentials are high, automation and offshoring can result in significant productivity gains and similar questions of who benefits and whether or not technological unemployment is persistent. Because automation is often best suited to tasks that are repetitive and boring, and can also be used for tasks that are dirty, dangerous, and demeaning, proponents believe that in the long run it will free up human workers for more creative, safer, and more enjoyable work.

Taylorism and unions

The early history of labor relations with scientific management in the U.S. was described by Horace Bookwalter Drury:

...for a long time there was thus little or no direct [conflict] between scientific management and organized labor... [However] One of the best known experts once spoke to us with satisfaction of the manner in which, in a certain factory where there had been a number of union men, the labor organization had, upon the introduction of scientific management, gradually disintegrated.

...From 1882 (when the system was started) until 1911, a period of approximately thirty years, there was not a single strike under it, and this in spite of the fact that it was carried on primarily in the steel industry, which was subject to a great many disturbances. For instance, in the general strike in Philadelphia, one man only went out at the Tabor plant [managed by Taylor], while at the Baldwin Locomotive shops across the street two thousand struck.

...Serious opposition may be said to have been begun in 1911, immediately after certain testimony presented before the Interstate Commerce Commission [by Harrington Emerson] revealed to the country the strong movement setting towards scientific management. National labor leaders, wide-awake as to what might happen in the future, decided that the new movement was a menace to their organization, and at once inaugurated an attack... centered about the installation of scientific management in the government arsenal at Watertown.

In 1911, organized labor erupted with strong opposition to scientific management, including from Samuel Gompers, founder and president of the American Federation of Labor (AFL).

Once the time-and-motion men had completed their studies of a particular task, the workers had very little opportunity for further thinking, experimenting, or suggestion-making. Taylorism was criticized for turning the worker into an "automaton" or "machine", making work monotonous and unfulfilling by doing one small and rigidly defined piece of work instead of using complex skills with the whole production process done by one person. "The further 'progress' of industrial development... increased the anomic or forced division of labor," the opposite of what Taylor thought would be the effect. Some workers also complained about being made to work at a faster pace and producing goods of lower quality.

TRADE UNION OBJECTIONS TO SCIENTIFIC MANAGEMENT: ...It intensifies the modern tendency toward specialization of the work and the task... displaces skilled workers and... weakens the bargaining strength of the workers through specialization of the task and the destruction of craft skill. ...leads to over-production and the increase of unemployment... looks upon the worker as a mere instrument of production and reduces him to a semi-automatic attachment to the machine or tool... tends to undermine the worker's health, shortens his period of industrial activity and earning power, and brings on premature old age. — Scientific Management and Labor, Robert F. Hoxie, 1915 report to the Commission on Industrial Relations

Owing to [application of "scientific management"] in part in government arsenals, and a strike by the union molders against some of its features as they were introduced in the foundry at the Watertown Arsenal, "scientific management" received much publicity. The House of Representatives appointed a committee, consisting of William B. Wilson, William C. Redfield and John Q. Tilson to investigate the system as it had been applied in the Watertown Arsenal. In its report to Congress this committee sustained Labor's contention that the system forced abnormally high speed upon workmen, that its disciplinary features were arbitrary and harsh, and that the use of a stop-watch and the payment of a bonus were injurious to the worker's manhood and welfare. At a succeeding session of Congress a measure [HR 8665 by Clyde Howard Tavenner] was passed which prohibited the further use of the stop-watch and the payment of a premium or bonus to workmen in government establishments. — John P. Frey. "Scientific Management and Labor". The American Federationist. XXII (4): 257 (April 1916)

The Watertown Arsenal in Massachusetts provides an example of the application and repeal of the Taylor system in the workplace, due to worker opposition. In the early 20th century, neglect in the Watertown shops included overcrowding, dim lighting, lack of tools and equipment, and questionable management strategies in the eyes of the workers. Frederick W. Taylor and Carl G. Barth visited Watertown in April 1909 and reported on their observations at the shops. Their conclusion was to apply the Taylor system of management to the shops to produce better results. Efforts to install the Taylor system began in June 1909. Over the years of time study and trying to improve the efficiency of workers, criticisms began to evolve. Workers complained of having to compete with one another, feeling strained and resentful, and feeling excessively tired after work. There is, however, no evidence that the times enforced were unreasonable. In June 1913, employees of the Watertown Arsenal petitioned to abolish the practice of scientific management there. A number of magazine writers inquiring into the effects of scientific management found that the "conditions in shops investigated contrasted favorably with those in other plants".

A committee of the U.S. House of Representatives investigated and reported in 1912, concluding that scientific management did provide some useful techniques and offered valuable organizational suggestions, but that it also gave production managers a dangerously high level of uncontrolled power. After an attitude survey of the workers revealed a high level of resentment and hostility towards scientific management, the Senate banned Taylor's methods at the arsenal.

Taylor had a largely negative view of unions, and believed they only led to decreased productivity. Efforts to resolve conflicts with workers included methods of scientific collectivism, making agreements with unions, and the personnel management movement.

Relationship to Fordism

It is often assumed that Fordism derives from Taylor's work. Taylor apparently made this assumption himself when visiting the Ford Motor Company's Michigan plants not too long before he died, but it is likely that the methods at Ford were evolved independently, and that any influence from Taylor's work was indirect at best. Charles E. Sorensen, a principal of the company during its first four decades, disclaimed any connection at all. There was a belief at Ford, which remained dominant until Henry Ford II took over the company in 1945, that the world's experts were worthless, because if Ford had listened to them, it would have failed to attain its great successes. Henry Ford felt that he had succeeded in spite of, not because of, experts, who had tried to stop him in various ways (disagreeing about price points, production methods, car features, business financing, and other issues). Sorensen thus was dismissive of Taylor and lumped him into the category of useless experts. Sorensen held the New England machine tool vendor Walter Flanders in high esteem and credits him for the efficient floorplan layout at Ford, claiming that Flanders knew nothing about Taylor. Flanders may have been exposed to the spirit of Taylorism elsewhere, and may have been influenced by it, but he did not cite it when developing his production technique. Regardless, the Ford team apparently did independently invent modern mass production techniques in the period of 1905–1915, and they themselves were not aware of any borrowing from Taylorism. Perhaps it is only possible with hindsight to see the zeitgeist that (indirectly) connected the budding Fordism to the rest of the efficiency movement during the decade of 1905–1915.

Adoption in planned economies

Scientific management appealed to managers of planned economies because central economic planning relies on the idea that the expenses that go into economic production can be precisely predicted and can be optimized by design.

Soviet Union

By 1913 Vladimir Lenin wrote that the "most widely discussed topic today in Europe, and to some extent in Russia, is the 'system' of the American engineer, Frederick Taylor"; Lenin decried it as merely a "'scientific' system of sweating" more work from laborers. Again in 1914, Lenin derided Taylorism as "man's enslavement by the machine". However, after the Russian Revolutions brought him to power, Lenin wrote in 1918 that the "Russian is a bad worker [who must] learn to work. The Taylor system... is a combination of the refined brutality of bourgeois exploitation and a number of the greatest scientific achievements in the field of analysing mechanical motions during work, the elimination of superfluous and awkward motions, the elaboration of correct methods of work, the introduction of the best system of accounting and control, etc. The Soviet Republic must at all costs adopt all that is valuable in the achievements of science and technology in this field."

In the Soviet Union, Taylorism was advocated by Aleksei Gastev and nauchnaia organizatsia truda (the movement for the scientific organization of labor). It found support in both Vladimir Lenin and Leon Trotsky. Gastev continued to promote this system of labor management until his arrest and execution in 1939. In the 1920s and 1930s, the Soviet Union enthusiastically embraced Fordism and Taylorism, importing American experts in both fields as well as American engineering firms to build parts of its new industrial infrastructure. The concepts of the Five Year Plan and the centrally planned economy can be traced directly to the influence of Taylorism on Soviet thinking. As scientific management was believed to epitomize American efficiency, Joseph Stalin even claimed that "the combination of the Russian revolutionary sweep with American efficiency is the essence of Leninism."

Sorensen was one of the consultants who brought American know-how to the USSR during this era, before the Cold War made such exchanges unthinkable. As the Soviet Union developed and grew in power, both sides, the Soviets and the Americans, chose to ignore or deny the contribution that American ideas and expertise had made: the Soviets because they wished to portray themselves as creators of their own destiny and not indebted to a rival, and the Americans because they did not wish to acknowledge their part in creating a powerful communist rival. Anti-communism had always enjoyed widespread popularity in America, and anti-capitalism in Russia, but after World War II, they precluded any admission by either side that technologies or ideas might be either freely shared or clandestinely stolen.

East Germany

Photograph of East German machine tool builders in 1953, from the German Federal Archives. The workers are discussing standards specifying how each task should be done and how long it should take.

By the 1950s, scientific management had grown dated, but its goals and practices remained attractive and were also being adopted by the German Democratic Republic as it sought to increase efficiency in its industrial sectors. Workers engaged in a state-planned instance of process improvement, pursuing the same goals that were contemporaneously pursued in capitalist societies, as in the Toyota Production System.

Criticism of rigor

Taylor believed that the scientific method of management included the calculations of exactly how much time it takes a man to do a particular task, or his rate of work. Critics of Taylor complained that such a calculation relies on certain arbitrary, non-scientific decisions such as what constituted the job, which men were timed, and under which conditions. Any of these factors are subject to change, and therefore can produce inconsistencies. Some dismiss so-called "scientific management" or Taylorism as pseudoscience. Others are critical of the representativeness of the workers Taylor selected to take his measurements.
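The kind of calculation at issue can be illustrated with the classic time-study formula as it was later standardized in industrial engineering (the rating and allowance values below are hypothetical, and the formula is the later convention rather than necessarily Taylor's exact procedure). The critics' point is visible in the code: the performance rating and the rest allowance are judgment calls, not measurements, so two raters timing the same worker can certify different "scientific" standards.

```python
def standard_time(observed_minutes, rating=1.0, allowance=0.15):
    """Classic time-study formula (illustrative values).

    normal time   = observed time x performance rating
    standard time = normal time x (1 + allowance for rest and delays)

    Both `rating` and `allowance` are set by the observer's judgment,
    which is the crux of the "non-scientific decisions" criticism."""
    normal = observed_minutes * rating
    return normal * (1 + allowance)

# Same stopwatch reading of 4.0 minutes, two different raters:
print(standard_time(4.0, rating=0.9))   # rater judges the worker fast
print(standard_time(4.0, rating=1.1))   # rater judges the worker slow
```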

Variations of scientific management after Taylorism

In the 1900s

Taylorism was one of the first attempts to systematically treat management and process improvement as a scientific problem, and Taylor is considered a founder of modern industrial engineering. Taylorism may have been the first "bottom-up" method and found a lineage of successors that have many elements in common. Later methods took a broader approach, measuring not only productivity but quality. With the advancement of statistical methods, quality assurance and quality control began in the 1920s and 1930s. During the 1940s and 1950s, the body of knowledge for doing scientific management evolved into operations management, operations research, and management cybernetics. In the 1980s total quality management became widely popular, growing from quality control techniques. In the 1990s "re-engineering" went from a simple word to a mystique. Today's Six Sigma and lean manufacturing could be seen as new kinds of scientific management, although their evolutionary distance from the original is so great that the comparison might be misleading. In particular, Shigeo Shingo, one of the originators of the Toyota Production System, believed that this system and Japanese management culture in general should be seen as a kind of scientific management. These newer methods are all based on systematic analysis rather than relying on tradition and rule of thumb.

Other thinkers, even in Taylor's own time, also proposed considering the individual worker's needs, not just the needs of the process. Critics said that in Taylorism, "the worker was taken for granted as a cog in the machinery." James Hartness published The Human Factor in Works Management in 1912, while Frank Gilbreth and Lillian Moller Gilbreth offered their own alternatives to Taylorism. The human relations school of management (founded by the work of Elton Mayo) evolved in the 1930s as a counterpoint or complement of scientific management. Taylorism focused on the organization of the work process, and human relations helped workers adapt to the new procedures. Modern definitions of "quality control" like ISO-9000 include not only clearly documented and optimized manufacturing tasks, but also consideration of human factors like expertise, motivation, and organizational culture. The Toyota Production System, from which lean manufacturing in general is derived, includes "respect for people" and teamwork as core principles.

Peter Drucker saw Frederick Taylor as the creator of knowledge management, because the aim of scientific management was to produce knowledge about how to improve work processes. Although the typical application of scientific management was manufacturing, Taylor himself advocated scientific management for all sorts of work, including the management of universities and government. For example, Taylor believed scientific management could be extended to "the work of our salesmen". Shortly after his death, his acolyte Harlow S. Person began to lecture corporate audiences on the possibility of using Taylorism for "sales engineering" (Person was talking about what is now called sales process engineering—engineering the processes that salespeople use—not about what we call sales engineering today.) This was a watershed insight in the history of corporate marketing.

In the 2000s

Google's methods of increasing productivity and output can be seen to be influenced by Taylorism as well. The Silicon Valley company is a forerunner in applying behavioral science (compare Daniel Pink's motivators of purpose, mastery, and autonomy) to increase knowledge worker productivity. As in classic scientific management, and in approaches like lean management, leaders facilitate and empower teams to continuously improve their standards and values. Leading high-tech companies also use the concept of nudge management to increase the productivity of employees, and more and more business leaders are starting to make use of this new scientific management.

Today's militaries employ all of the major goals and tactics of scientific management, if not under that name. Of the key points, all but wage incentives for increased output are used by modern military organizations. Wage incentives instead appear in the form of skill bonuses for enlistments.

Scientific management has had an important influence in sports, where stop watches and motion studies rule the day. (Taylor himself enjoyed sports, especially tennis and golf. He and a partner won a national championship in doubles tennis. He invented improved tennis racquets and improved golf clubs, although other players liked to tease him for his unorthodox designs, and they did not catch on as replacements for the mainstream implements).

Modern human resources can be seen to have begun in the scientific management era, most notably in the writings of Katherine M. H. Blackford.

Practices descended from scientific management are currently used in offices and in medicine (e.g. managed care) as well.

In countries with a post-industrial economy, manufacturing jobs are relatively few, with most workers in the service sector. One approach to efficiency in information work is called digital Taylorism, which uses software to monitor the performance of employees who use computers all day.

Computational complexity theory

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Computational_complexity_theory ...