
Friday, August 18, 2023

Ensemble forecasting

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Ensemble_forecasting

Top: Weather Research and Forecasting model simulation of Hurricane Rita tracks. Bottom: The spread of National Hurricane Center multi-model ensemble forecast.

Ensemble forecasting is a method used in or within numerical weather prediction. Instead of making a single forecast of the most likely weather, a set (or ensemble) of forecasts is produced. This set of forecasts aims to give an indication of the range of possible future states of the atmosphere. Ensemble forecasting is a form of Monte Carlo analysis. The multiple simulations are conducted to account for the two usual sources of uncertainty in forecast models: (1) the errors introduced by the use of imperfect initial conditions, amplified by the chaotic nature of the evolution equations of the atmosphere, which is often referred to as sensitive dependence on initial conditions; and (2) errors introduced because of imperfections in the model formulation, such as the approximate mathematical methods to solve the equations. Ideally, the verified future atmospheric state should fall within the predicted ensemble spread, and the amount of spread should be related to the uncertainty (error) of the forecast. In general, this approach can be used to make probabilistic forecasts of any dynamical system, and not just for weather prediction.
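
As a rough illustration of the Monte Carlo idea, the sketch below (in Python, using the Lorenz-63 system as a toy stand-in for the real atmosphere) runs an ensemble of forecasts from slightly perturbed initial conditions and shows the spread growing with lead time. The model, perturbation size and ensemble size are illustrative assumptions, not any operational centre's configuration.

    import numpy as np

    def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
        """One forward-Euler step of the Lorenz-63 equations (a toy chaotic 'atmosphere')."""
        x, y, z = state
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return state + dt * np.array([dx, dy, dz])

    def run_forecast(initial_state, n_steps=1500):
        state = initial_state.copy()
        trajectory = [state.copy()]
        for _ in range(n_steps):
            state = lorenz63_step(state)
            trajectory.append(state.copy())
        return np.array(trajectory)

    rng = np.random.default_rng(0)
    analysis = np.array([1.0, 1.0, 1.05])      # best estimate of the current state
    n_members = 20
    perturbation_size = 1e-3                   # assumed initial-condition uncertainty

    # Ensemble: members started from slightly perturbed copies of the analysis
    members = [run_forecast(analysis + perturbation_size * rng.standard_normal(3))
               for _ in range(n_members)]
    members = np.stack(members)                # shape (members, time, 3)

    ensemble_mean = members.mean(axis=0)
    ensemble_spread = members.std(axis=0)      # grows with lead time in a chaotic system
    print("spread in x at steps 0, 500, 1500:",
          ensemble_spread[[0, 500, 1500], 0])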

Instances

Today ensemble predictions are commonly made at most of the major operational weather prediction facilities worldwide.

Experimental ensemble forecasts are made at a number of universities, such as the University of Washington, and ensemble forecasts in the US are also generated by the US Navy and Air Force. There are various ways of viewing the data, such as spaghetti plots, ensemble means, or postage-stamp plots, where a number of different results from the model runs can be compared.

History

As proposed by Edward Lorenz in 1963, it is impossible for long-range forecasts—those made more than two weeks in advance—to predict the state of the atmosphere with any degree of skill owing to the chaotic nature of the fluid dynamics equations involved. Furthermore, existing observation networks have limited spatial and temporal resolution (for example, over large bodies of water such as the Pacific Ocean), which introduces uncertainty into the true initial state of the atmosphere. While a set of equations, known as the Liouville equations, exists to determine the initial uncertainty in the model initialization, the equations are too complex to run in real-time, even with the use of supercomputers. The practical importance of ensemble forecasts derives from the fact that in a chaotic and hence nonlinear system, the rate of growth of forecast error is dependent on starting conditions. An ensemble forecast therefore provides a prior estimate of state-dependent predictability, i.e. an estimate of the types of weather that might occur, given inevitable uncertainties in the forecast initial conditions and in the accuracy of the computational representation of the equations. These uncertainties limit forecast model accuracy to about six days into the future. The first operational ensemble forecasts were produced for sub-seasonal timescales in 1985. However, it was realised that the philosophy underpinning such forecasts was also relevant on shorter timescales – timescales where predictions had previously been made by purely deterministic means.

Edward Epstein recognized in 1969 that the atmosphere could not be completely described with a single forecast run due to inherent uncertainty, and proposed a stochastic dynamic model that produced means and variances for the state of the atmosphere. Although these Monte Carlo simulations showed skill, in 1974 Cecil Leith revealed that they produced adequate forecasts only when the ensemble probability distribution was a representative sample of the probability distribution in the atmosphere. It was not until 1992 that ensemble forecasts began being prepared by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction (NCEP).

Methods for representing uncertainty

There are two main sources of uncertainty that must be accounted for when making an ensemble weather forecast: initial condition uncertainty and model uncertainty.

Initial condition uncertainty

Initial condition uncertainty arises due to errors in the estimate of the starting conditions for the forecast, both due to limited observations of the atmosphere, and uncertainties involved in using indirect measurements, such as satellite data, to measure the state of atmospheric variables. Initial condition uncertainty is represented by perturbing the starting conditions between the different ensemble members. This explores the range of starting conditions consistent with our knowledge of the current state of the atmosphere, together with its past evolution. There are a number of ways to generate these initial condition perturbations. The ECMWF model, the Ensemble Prediction System (EPS), uses a combination of singular vectors and an ensemble of data assimilations (EDA) to simulate the initial probability density. The singular vector perturbations are more active in the extra-tropics, while the EDA perturbations are more active in the tropics. The NCEP ensemble, the Global Ensemble Forecasting System, uses a technique known as vector breeding.
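
The breeding technique mentioned above can be sketched in a toy setting: a perturbed forecast is run alongside a control, the difference is rescaled to a fixed amplitude after each short cycle, and the rescaled difference perturbs the next cycle, so the perturbation aligns itself with fast-growing directions. The Python sketch below uses the same toy Lorenz-63 model as above, with invented amplitudes and cycle lengths; it is not the NCEP implementation.

    import numpy as np

    def step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
        """Forward-Euler step of the toy Lorenz-63 model."""
        x, y, z = state
        return state + dt * np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

    def advance(state, n_steps):
        for _ in range(n_steps):
            state = step(state)
        return state

    rng = np.random.default_rng(1)
    control = np.array([1.0, 1.0, 1.05])
    bred = control + 1e-3 * rng.standard_normal(3)   # arbitrary seed perturbation
    target_amplitude = 1e-3                          # assumed rescaling amplitude
    cycle_length = 50                                # steps between rescalings (illustrative)

    # Breeding cycle: grow the perturbation with the full model, then rescale it.
    for _ in range(20):
        control = advance(control, cycle_length)
        bred = advance(bred, cycle_length)
        difference = bred - control                  # the "bred vector"
        difference *= target_amplitude / np.linalg.norm(difference)
        bred = control + difference                  # re-seed the next cycle

    print("bred vector (points toward fast-growing directions):", difference)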

Model uncertainty

Model uncertainty arises due to the limitations of the forecast model. The process of representing the atmosphere in a computer model involves many simplifications such as the development of parametrisation schemes, which introduce errors into the forecast. Several techniques to represent model uncertainty have been proposed.

Perturbed parameter schemes

When developing a parametrisation scheme, many new parameters are introduced to represent simplified physical processes. These parameters may be very uncertain. For example, the 'entrainment coefficient' represents the turbulent mixing of dry environmental air into a convective cloud, and so represents a complex physical process using a single number. In a perturbed parameter approach, uncertain parameters in the model's parametrisation schemes are identified and their value changed between ensemble members. While in probabilistic climate modelling, such as climateprediction.net, these parameters are often held constant globally and throughout the integration, in modern numerical weather prediction it is more common to stochastically vary the value of the parameters in time and space. The degree of parameter perturbation can be guided using expert judgement, or by directly estimating the degree of parameter uncertainty for a given model.
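
A schematic of the perturbed-parameter idea, in Python: each ensemble member receives its own value of an uncertain parameter, either held fixed for the whole integration (as in probabilistic climate ensembles) or varied stochastically in time. The parameter, its nominal value and its uncertainty are invented for illustration and do not come from any operational scheme.

    import numpy as np

    rng = np.random.default_rng(2)
    n_members = 20

    # Nominal value and assumed uncertainty for an illustrative, entrainment-like
    # coefficient (numbers are invented).
    nominal_value = 1.0
    spread_factor = 0.3

    # Fixed-per-member perturbation: each member keeps its own value throughout.
    member_params = nominal_value * np.exp(spread_factor * rng.standard_normal(n_members))

    # Stochastically varying alternative (closer to modern NWP practice): the value
    # is redrawn with temporal correlation as the forecast proceeds.
    def stochastic_parameter(n_steps, autocorrelation=0.95):
        values = np.empty(n_steps)
        values[0] = nominal_value
        for t in range(1, n_steps):
            noise = spread_factor * np.sqrt(1 - autocorrelation**2) * rng.standard_normal()
            values[t] = nominal_value + autocorrelation * (values[t-1] - nominal_value) + noise
        return values

    print("fixed per-member values:", np.round(member_params[:5], 2))
    print("time-varying values for one member:", np.round(stochastic_parameter(5), 2))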

Stochastic parametrisations

A traditional parametrisation scheme seeks to represent the average effect of the sub grid-scale motion (e.g. convective clouds) on the resolved scale state (e.g. the large scale temperature and wind fields). A stochastic parametrisation scheme recognises that there may be many sub-grid scale states consistent with a particular resolved scale state. Instead of predicting the most likely sub-grid scale motion, a stochastic parametrisation scheme represents one possible realisation of the sub-grid. It does this by including random numbers in the equations of motion. This samples from the probability distribution assigned to uncertain processes. Stochastic parametrisations have significantly improved the skill of weather forecasting models, and are now used in operational forecasting centres worldwide. Stochastic parametrisations were first developed at the European Centre for Medium Range Weather Forecasts.
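
A toy illustration, in Python, of injecting random numbers into a parametrised tendency, loosely in the spirit of multiplicative schemes such as ECMWF's SPPT but not an implementation of any operational scheme; the tendency and noise level are invented.

    import numpy as np

    rng = np.random.default_rng(3)

    def deterministic_tendency(temperature):
        """Stand-in for a parametrised process, e.g. convective heating (invented form)."""
        return -0.1 * (temperature - 290.0)

    def stochastic_tendency(temperature, noise_std=0.3):
        """Multiply the parametrised tendency by a random factor to sample sub-grid uncertainty."""
        random_factor = 1.0 + noise_std * rng.standard_normal()
        return random_factor * deterministic_tendency(temperature)

    # Two members starting from the same state diverge because each sees a
    # different realisation of the sub-grid process.
    dt = 1.0
    members = [300.0, 300.0]
    for _ in range(24):
        members = [T + dt * stochastic_tendency(T) for T in members]

    print("member states after 24 steps:", [round(T, 2) for T in members])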

Multi model ensembles

When many different forecast models are used to try to generate a forecast, the approach is termed multi-model ensemble forecasting. This method of forecasting can improve forecasts when compared to a single model-based approach. When the models within a multi-model ensemble are adjusted for their various biases, this process is known as "superensemble forecasting". This type of forecast significantly reduces errors in model output. When models of different physical processes are combined, such as combinations of atmospheric, ocean and wave models, the multi-model ensemble is called a hyper-ensemble.
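
A minimal sketch of the bias-corrected combination idea behind superensemble forecasting, on synthetic data: each model's bias is estimated over a training period, removed, and the corrected forecasts are combined with inverse-error-variance weights. Operational superensembles are usually built with a regression against past observations; this simplified variant, with invented biases and error levels, is only meant to show why correcting and weighting the models can beat a plain average.

    import numpy as np

    rng = np.random.default_rng(4)
    n_train, n_test = 200, 50
    truth = 15.0 + 5.0 * rng.standard_normal(n_train + n_test)   # synthetic "observations"

    # Three synthetic models, each with its own bias and error level (invented numbers).
    biases = np.array([2.0, -1.5, 0.5])
    error_std = np.array([1.0, 1.5, 2.0])
    forecasts = (truth[None, :] + biases[:, None]
                 + error_std[:, None] * rng.standard_normal((3, n_train + n_test)))

    # Plain multi-model ensemble: simple average of the raw model forecasts.
    plain_mean = forecasts[:, n_train:].mean(axis=0)

    # Superensemble-style combination: remove each model's bias estimated on the
    # training period, then weight by inverse error variance on that period.
    train_errors = forecasts[:, :n_train] - truth[:n_train]
    bias_hat = train_errors.mean(axis=1)
    weights = 1.0 / train_errors.var(axis=1)
    weights /= weights.sum()
    corrected = forecasts[:, n_train:] - bias_hat[:, None]
    superensemble = (weights[:, None] * corrected).sum(axis=0)

    obs = truth[n_train:]
    print("RMSE, plain multi-model mean:", round(np.sqrt(np.mean((plain_mean - obs)**2)), 2))
    print("RMSE, bias-corrected weighted mean:", round(np.sqrt(np.mean((superensemble - obs)**2)), 2))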

Probability assessment

The ensemble forecast is usually evaluated by comparing the ensemble average of the individual forecasts for one forecast variable to the observed value of that variable (the "error"). This is combined with consideration of the degree of agreement between various forecasts within the ensemble system, as represented by their overall standard deviation or "spread". Ensemble spread can be visualised through tools such as spaghetti diagrams, which show the dispersion of one quantity on prognostic charts for specific time steps in the future. Another tool where ensemble spread is used is a meteogram, which shows the dispersion in the forecast of one quantity for one specific location. It is common for the ensemble spread to be too small, such that the observed atmospheric state falls outside of the ensemble forecast. This can lead the forecaster to be overconfident in their forecast. This problem becomes particularly severe for forecasts of the weather about 10 days in advance, particularly if model uncertainty is not accounted for in the forecast.
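
Concretely, the "error" and "spread" referred to above can be computed from an ensemble as follows (synthetic numbers, Python):

    import numpy as np

    rng = np.random.default_rng(5)
    n_members = 50

    # Synthetic ensemble forecast of one variable (e.g. 2 m temperature) and a verifying observation.
    ensemble = 18.0 + 1.5 * rng.standard_normal(n_members)
    observation = 19.2

    ensemble_mean = ensemble.mean()
    error = ensemble_mean - observation      # error of the ensemble mean
    spread = ensemble.std(ddof=1)            # standard deviation across members

    print(f"ensemble mean {ensemble_mean:.2f}, error {error:+.2f}, spread {spread:.2f}")
    # If |error| regularly exceeds the spread over many cases, the ensemble is under-dispersive.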

Reliability and resolution (calibration and sharpness)

The spread of the ensemble forecast indicates how confident the forecaster can be in his or her prediction. When ensemble spread is small and the forecast solutions are consistent within multiple model runs, forecasters generally have more confidence in the forecast. When the spread is large, this indicates more uncertainty in the prediction. Ideally, a spread-skill relationship should exist, whereby the spread of the ensemble is a good predictor of the expected error in the ensemble mean. If the forecast is reliable, the observed state will behave as if it is drawn from the forecast probability distribution. Reliability (or calibration) can be evaluated by comparing the standard deviation of the error in the ensemble mean with the forecast spread: for a reliable forecast, the two should match, both at different forecast lead times and for different locations.

The reliability of forecasts of a specific weather event can also be assessed. For example, if 30 of 50 members indicated greater than 1 cm rainfall during the next 24 h, the probability of exceeding 1 cm could be estimated to be 60%. The forecast would be considered reliable if, considering all the situations in the past when a 60% probability was forecast, the rainfall actually exceeded 1 cm on 60% of those occasions. In practice, the probabilities generated from operational weather ensemble forecasts are not highly reliable, though with a set of past forecasts (reforecasts or hindcasts) and observations, the probability estimates from the ensemble can be adjusted to ensure greater reliability.
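
The member-counting example, and the reliability check over past cases, can be written out as follows; the synthetic rainfall values and the perfectly reliable toy forecast system are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(6)

    # Probability of an event from one ensemble: fraction of members exceeding the threshold.
    rainfall_members = rng.gamma(shape=1.2, scale=8.0, size=50)   # synthetic 24 h rainfall (mm)
    probability = np.mean(rainfall_members > 10.0)                # e.g. 30/50 -> 0.6
    print("forecast probability of > 10 mm:", probability)

    # Reliability check over many past cases: among cases where ~60% was forecast,
    # the event should have occurred on roughly 60% of occasions.
    n_cases = 5000
    forecast_probs = rng.uniform(0, 1, n_cases)
    # Synthetic "observations" generated to be consistent with the forecast probabilities,
    # i.e. a perfectly reliable system (real ensembles usually are not).
    event_occurred = rng.uniform(0, 1, n_cases) < forecast_probs

    in_bin = (forecast_probs > 0.55) & (forecast_probs < 0.65)
    print("observed frequency when ~60% was forecast:",
          round(event_occurred[in_bin].mean(), 2))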

Another desirable property of ensemble forecasts is resolution. This is an indication of how much the forecast deviates from the climatological event frequency – provided that the ensemble is reliable, increasing this deviation will increase the usefulness of the forecast. This forecast quality can also be considered in terms of sharpness, or how small the spread of the forecast is. The key aim of a forecaster should be to maximise sharpness, while maintaining reliability. Forecasts at long leads will inevitably not be particularly sharp (have particularly high resolution), for the inevitable (albeit usually small) errors in the initial condition will grow with increasing forecast lead until the expected difference between two model states is as large as the difference between two random states from the forecast model's climatology.

Calibration of ensemble forecasts

If ensemble forecasts are to be used for predicting probabilities of observed weather variables they typically need calibration in order to create unbiased and reliable forecasts. For forecasts of temperature one simple and effective method of calibration is linear regression, often known in this context as model output statistics. The linear regression model takes the ensemble mean as a predictor for the real temperature, ignores the distribution of ensemble members around the mean, and predicts probabilities using the distribution of residuals from the regression. In this calibration setup the value of the ensemble in improving the forecast is then that the ensemble mean typically gives a better forecast than any single ensemble member would, and not because of any information contained in the width or shape of the distribution of the members in the ensemble around the mean. However, in 2004, a generalisation of linear regression (now known as Nonhomogeneous Gaussian regression) was introduced that uses a linear transformation of the ensemble spread to give the width of the predictive distribution, and it was shown that this can lead to forecasts with higher skill than those based on linear regression alone. This proved for the first time that information in the shape of the distribution of the members of an ensemble around the mean, in this case summarized by the ensemble spread, can be used to improve forecasts relative to linear regression. Whether or not linear regression can be beaten by using the ensemble spread in this way varies, depending on the forecast system, forecast variable and lead time.
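
A compact sketch of the two calibration approaches on synthetic data, in Python: model output statistics as a linear regression on the ensemble mean with a constant residual spread, and a nonhomogeneous-Gaussian-regression-style fit in which the predictive standard deviation depends on the ensemble spread. The data-generating process and the exact spread parametrisation are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(7)
    n_cases = 1000

    # Synthetic training data in which the ensemble spread really does carry information
    # about the error of the ensemble mean (so the NGR-style fit has something to exploit).
    spread_true = 0.5 + rng.gamma(2.0, 0.5, n_cases)
    ens_mean = 10.0 + 3.0 * rng.standard_normal(n_cases)
    obs = 1.0 + 0.9 * ens_mean + spread_true * rng.standard_normal(n_cases)
    ens_spread = spread_true * (1 + 0.1 * rng.standard_normal(n_cases))

    # Model output statistics (MOS): linear regression on the ensemble mean; the
    # predictive width is the constant standard deviation of the regression residuals.
    a, b = np.polyfit(ens_mean, obs, 1)
    residual_std = np.std(obs - (a * ens_mean + b), ddof=2)

    # NGR-style fit: same mean model, but the predictive standard deviation is a
    # function of the ensemble spread, fitted by maximum likelihood.
    def negative_log_likelihood(params):
        alpha, beta, gamma, delta = params
        mu = alpha + beta * ens_mean
        sigma = np.sqrt(gamma**2 + (delta * ens_spread)**2)   # positive by construction
        return -np.sum(norm.logpdf(obs, loc=mu, scale=sigma))

    fit = minimize(negative_log_likelihood, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead")
    alpha, beta, gamma, delta = fit.x

    case = 0
    print(f"MOS: N({a*ens_mean[case]+b:.2f}, sd={residual_std:.2f})")
    print(f"NGR: N({alpha+beta*ens_mean[case]:.2f}, "
          f"sd={np.sqrt(gamma**2 + (delta*ens_spread[case])**2):.2f})")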

Predicting the size of forecast changes

In addition to being used to improve predictions of uncertainty, the ensemble spread can also be used as a predictor for the likely size of changes in the mean forecast from one forecast to the next. This works because, in some ensemble forecast systems, narrow ensembles tend to precede small changes in the mean, while wide ensembles tend to precede larger changes in the mean. This has applications in the trading industries, for whom understanding the likely sizes of future forecast changes can be important.

Co-ordinated research

The Observing System Research and Predictability Experiment (THORPEX) is a 10-year international research and development programme to accelerate improvements in the accuracy of one-day to two-week high impact weather forecasts for the benefit of society, the economy and the environment. It establishes an organizational framework that addresses weather research and forecast problems whose solutions will be accelerated through international collaboration among academic institutions, operational forecast centres and users of forecast products.

One of its key components is the THORPEX Interactive Grand Global Ensemble (TIGGE), a World Weather Research Programme to accelerate improvements in the accuracy of 1-day to 2-week high-impact weather forecasts for the benefit of humanity. Centralized archives of ensemble model forecast data, from many international centers, are used to enable extensive data sharing and research.

Human fertilization

From Wikipedia, the free encyclopedia
 
Human fertilization
Sperm about to enter the ovum with acrosomal head ready to penetrate the zona pellucida and fertilize the egg
 
Illustration depicting ovulation and fertilization

Human fertilization is the union of a human egg and sperm, occurring primarily in the ampulla of the fallopian tube. The result of this union leads to the production of a fertilized egg called a zygote, initiating embryonic development. Scientists discovered the dynamics of human fertilization in the nineteenth century.

The process of fertilization involves a sperm fusing with an ovum. The most common sequence begins with ejaculation during copulation, follows with ovulation, and finishes with fertilization. Various exceptions to this sequence are possible, including artificial insemination, in vitro fertilization, external ejaculation without copulation, or copulation shortly after ovulation. Upon encountering the secondary oocyte, the acrosome of the sperm produces enzymes which allow it to burrow through the outer shell called the zona pellucida of the egg. The sperm's plasma membrane then fuses with the egg's plasma membrane and their nuclei fuse, triggering the sperm head to disconnect from its flagellum as the egg travels down the fallopian tube to reach the uterus.

In vitro fertilization (IVF) is a process by which egg cells are fertilized by sperm outside the womb, in vitro.

History

In Antiquity, Aristotle depicted the formation of new individuals occurring through fusion of male and female fluids, with form and function emerging gradually, in a mode called by him as epigenetic. He proposed that during conception the female supplied the material, or the body, while the male provided the soul.

Galen argued that during conception the male and the female both produced a seed. However, the male seed was thicker and more powerful than the female seed because it had a natural heat that the female seed lacked.

Hippocrates theorized that fertilization occurred when the male and female seed came together, and the sex of the child would be determined by the strength of the seed. If both parents had strong seed, the child would be male. If both parents had weak seed the child would be female. If one parent produced strong seed and the other produced weak seed, the more numerous type of sperm would determine the sex. Hippocrates did not assign the production of strong or weak seed to a specific sex; both males and females could produce strong or weak seed.

Sperm and oocyte meet

Ampulla

Fertilization occurs in the ampulla of the fallopian tube, the section that curves around the ovary. Capacitated sperm are attracted to progesterone, which is secreted from the cumulus cells surrounding the oocyte. Progesterone binds to the CatSper receptor on the sperm membrane and increases intracellular calcium levels, causing hyperactive motility. The sperm will continue to swim towards higher concentrations of progesterone, effectively guiding them to the oocyte. Around 200 of the 200 million spermatozoa reach the ampulla.

Sperm preparation

The sperm entering the ovum, using acrosomal enzymes to dissolve the gelatinous envelope (zona pellucida) of the oocyte.

At the beginning of the process, the sperm undergoes a series of changes, as freshly ejaculated sperm is unable or poorly able to fertilize. The sperm must undergo capacitation in the female's reproductive tract, which increases its motility and hyperpolarizes its membrane, preparing it for the acrosome reaction, the enzymatic penetration of the egg's tough membrane, the zona pellucida, which surrounds the oocyte.

Corona radiata

The sperm binds through the corona radiata, a layer of follicle cells on the outside of the secondary oocyte. The corona radiata sends out chemicals that attract the sperm in the fallopian tube to the oocyte. It lies above the zona pellucida, a membrane of glycoproteins that surrounds the oocyte. 

Cone of attraction and perivitelline membrane

Where the spermatozoan is about to pierce, the yolk (ooplasm) is drawn out into a conical elevation, termed the cone of attraction or reception cone. Once the spermatozoon has entered, the peripheral portion of the yolk changes into a membrane, the perivitelline membrane, which prevents the passage of additional spermatozoa.

Zona pellucida and acrosome reaction

After binding to the corona radiata the sperm reaches the zona pellucida, which is an extracellular matrix of glycoproteins. A ZP3 glycoprotein on the zona pellucida binds to a receptor on the cell surface of the sperm head. This binding triggers the acrosome to burst, releasing acrosomal enzymes that help the sperm penetrate through the thick zona pellucida layer surrounding the oocyte, ultimately gaining access to the egg's cell membrane.

Some sperm cells consume their acrosome prematurely on the surface of the egg cell, facilitating the penetration by other sperm cells. As a population, mature haploid sperm cells have on average 50% genome similarity, so the premature acrosomal reactions aid fertilization by a member of the same cohort. It may be regarded as a mechanism of kin selection.

Recent studies have shown that the egg is not passive during this process; it too appears to undergo changes that help facilitate this interaction.

Fusion

Fertilization and implantation in humans.

Cortical reaction

After the sperm enters the cytoplasm of the oocyte, the tail and the outer coating of the sperm disintegrate. The fusion of sperm and oocyte membranes causes the cortical reaction to occur. Cortical granules inside the secondary oocyte fuse with the plasma membrane of the cell, causing enzymes inside these granules to be expelled by exocytosis to the zona pellucida. This in turn causes the glycoproteins in the zona pellucida to cross-link with each other — i.e. the enzymes cause the ZP2 to hydrolyse into ZP2f — making the whole matrix hard and impermeable to sperm. This prevents fertilization of an egg by more than one sperm.

Fusion of genetic material

Preparation

In preparation for the fusion of their genetic material both the oocyte and the sperm undergo transformations as a reaction to the fusion of cell membranes.

The oocyte completes its second meiotic division. This results in a mature haploid ovum and the release of a polar body. The nucleus of the oocyte is called a pronucleus in this process, to distinguish it from the nuclei that are the result of fertilization.

Drawing of an ovum

The sperm's tail and mitochondria degenerate with the formation of the male pronucleus. This is why all mitochondria in humans are of maternal origin. Still, a considerable amount of RNA from the sperm is delivered to the resulting embryo and likely influences embryo development and the phenotype of the offspring.

Fusion

The sperm nucleus then fuses with the ovum, enabling fusion of their genetic material.

Blocks of polyspermy

When the sperm enters the perivitelline space, a sperm-specific protein, Izumo, on the sperm head binds to Juno receptors on the oocyte membrane. Once bound, two blocks to polyspermy occur. After approximately 40 minutes, the other Juno receptors on the oocyte are lost from the membrane, so that it is no longer fusogenic. Additionally, the cortical reaction occurs, during which ovastacin binds and cleaves ZP2 receptors on the zona pellucida. These two blocks to polyspermy are what prevent the zygote from having too much DNA.

Replication

The pronuclei migrate toward the center of the oocyte, rapidly replicating their DNA as they do so to prepare the zygote for its first mitotic division.

Mitosis

Usually 23 chromosomes from the spermatozoon and 23 chromosomes from the egg cell fuse (approximately half of spermatozoa carry an X chromosome and the other half a Y chromosome). Their membranes dissolve, leaving no barriers between the male and female chromosomes. During this dissolution, a mitotic spindle forms between them. The spindle captures the chromosomes before they disperse in the egg cytoplasm. Upon subsequently undergoing mitosis (which includes pulling of chromatids towards centrioles in anaphase) the cell gathers genetic material from the male and female together. Thus, the first mitosis of the union of sperm and oocyte is the actual fusion of their chromosomes.

Each of the two daughter cells resulting from that mitosis has one replica of each chromatid that was replicated in the previous stage. Thus, they are genetically identical.

Fertilization age

Fertilization is the event most commonly used to mark the beginning point of life, in descriptions of prenatal development of the embryo or fetus. The resultant age is known as fertilization age, fertilizational age, conceptional age, embryonic age, fetal age or (intrauterine) developmental (IUD) age.

Gestational age, in contrast, takes the beginning of the last menstrual period (LMP) as the start point. By convention, gestational age is calculated by adding 14 days to fertilization age and vice versa. Fertilization, though, usually occurs within a day of ovulation, which, in turn, occurs on average 14.6 days after the beginning of the preceding menstruation (LMP). There is also considerable variability in this interval, with a 95% prediction interval for ovulation of 9 to 20 days after menstruation even for an average woman who has a mean LMP-to-ovulation time of 14.6 days. In a reference group representing all women, the 95% prediction interval of the LMP-to-ovulation interval is 8.2 to 20.5 days.
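
The day-count convention can be written out as a small worked example (the dates are arbitrary):

    from datetime import date, timedelta

    # Convention described above: gestational age counts from the last menstrual period (LMP),
    # fertilization age counts from fertilization, taken here as 14 days later.
    lmp = date(2023, 1, 1)                      # arbitrary example date
    fertilization = lmp + timedelta(days=14)    # average LMP-to-ovulation interval rounded to 14 days
    today = date(2023, 4, 1)

    gestational_age = (today - lmp).days
    fertilization_age = (today - fertilization).days
    print(f"gestational age {gestational_age} days = fertilization age {fertilization_age} days + 14")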

The average time to birth has been estimated to be 268 days (38 weeks and two days) from ovulation, with a standard deviation of 10 days or coefficient of variation of 3.7%.

Fertilization age is sometimes used postnatally (after birth) as well to estimate various risk factors. For example, it is a better predictor than postnatal age for risk of intraventricular hemorrhage in premature babies treated with extracorporeal membrane oxygenation.

Diseases affecting human fertility

Various disorders can arise from defects in the fertilization process, whether in the contact between the sperm and egg or in the state of health of the biological parent carrying the zygote. The following are a few of the diseases that can occur during the process.

  • Polyspermy results from multiple sperm fertilizing an egg, leading to an offset number of chromosomes within the embryo. Polyspermy, while physiologically possible in some species of vertebrates and invertebrates, is a lethal condition for the human zygote.
  • Polycystic ovary syndrome is a condition in which the woman does not produce enough follicle-stimulating hormone and produces androgens in excess. This results in ovulation being delayed or absent.
  • Autoimmune disorders can lead to complications in implantation of the egg in the uterus, which may be the immune system’s attack response to an established embryo on the uterine wall.
  • Cancer can severely damage the reproductive organs, reducing fertility, and may lead to birth defects or miscarriages.
  • Endocrine system disorders affect human fertility by decreasing the body’s ability to produce the level of hormones needed to successfully carry a zygote. Examples of these disorders include diabetes, adrenal disorders, and thyroid disorders.
  • Endometriosis is a condition that affects women in which the tissue normally produced in the uterus proceeds to grow outside of the uterus. This leads to extreme amounts of pain and discomfort and may result in an irregular menstrual cycle.

Precision-guided munition

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Precision-guided_munition
Afghan Air Force GBU-58 guided bomb strikes a Taliban compound in Farah Province, Afghanistan

A precision-guided munition (PGM, smart weapon, smart munition, smart bomb) is a guided munition intended to precisely hit a specific target, to minimize collateral damage and increase lethality against intended targets. During the First Gulf War guided munitions accounted for only 9% of weapons fired, but accounted for 75% of all successful hits. Despite guided weapons generally being used on more difficult targets, they were still 35 times more likely to destroy their targets per weapon dropped.

Because the damage effects of explosive weapons decrease with distance due to an inverse cube law, even modest improvements in accuracy (hence reduction in miss distance) enable a target to be attacked with fewer or smaller bombs. Thus, even if some guided bombs miss, fewer air crews are put at risk and the harm to civilians and the amount of collateral damage may be reduced.
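
Under the inverse cube assumption stated above, the charge needed for a constant effect at the target scales with the cube of the miss distance, so halving the miss distance allows roughly one eighth of the explosive. A small worked example (illustrative numbers):

    # If the damage effect at the target scales roughly as charge_mass / miss_distance**3,
    # then holding the effect constant requires charge_mass proportional to miss_distance**3.
    def relative_charge_needed(miss_distance_m, reference_distance_m=30.0):
        """Charge needed for the same effect, relative to a weapon that misses by the reference distance."""
        return (miss_distance_m / reference_distance_m) ** 3

    for miss in (30.0, 15.0, 5.0):
        print(f"miss distance {miss:>4.0f} m -> {relative_charge_needed(miss):.3f} x the charge")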

The advent of precision-guided munitions resulted in the renaming of older, low-technology bombs as "unguided bombs", "dumb bombs", or "iron bombs".

Types

A laser-guided GBU-24 (BLU-109 warhead variant) strikes its target

Recognizing the difficulty of hitting moving ships during the Spanish Civil War, the Germans were first to develop steerable munitions, using radio control or wire guidance. The U.S. tested TV-guided (GB-4), semi-active radar-guided (Bat), and infrared-guided (Felix) weapons.

Inertial-guided

The CBU-107 Passive Attack Weapon is an air-dropped guided bomb containing metal penetrator rods of various sizes. It was designed to attack targets where an explosive effect may be undesirable, such as fuel storage tanks or chemical weapon stockpiles in civilian areas.

Radio-controlled

The Germans were first to introduce PGMs in combat, with KG 100 deploying the 3,100 lb (1,400 kg) MCLOS-guidance Fritz X armored glide bomb, guided by the Kehl-Straßburg radio guidance system, to successfully attack the Italian battleship Roma in 1943, and the similarly Kehl-Straßburg MCLOS-guided Henschel Hs 293 rocket-boosted glide bomb (also in use since 1943, but only against lightly armored or unarmored ship targets).

The closest Allied equivalents, both unpowered designs, were the 1,000 lb (450 kg) VB-1 AZON (from "AZimuth ONly" control), used in both Europe and the CBI theater, and the US Navy's Bat, primarily used in the Pacific Theater of World War II — the Navy's Bat was more advanced than either German PGM ordnance design or the USAAF's VB-1 AZON, in that it had its own on-board, autonomous radar seeker system to direct it to a target. In addition, the U.S. tested the rocket-propelled Gargoyle, which never entered service. Japanese PGMs, with the exception of the anti-ship, air-launched, rocket-powered, human-piloted Yokosuka MXY-7 Ohka "kamikaze" flying bomb, did not see combat in World War II.

Prior to the war, the British experimented with radio-controlled remotely guided planes laden with explosives, such as Larynx. The United States Army Air Forces used similar techniques with Operation Aphrodite, but had few successes; the German Mistel (Mistletoe) "parasite aircraft" was no more effective, guided by the human pilot flying the single-engined fighter mounted above the unmanned, explosive-laden twin-engined "flying bomb" below it, released in the Mistel's attack dive from the fighter.

The U.S. programs restarted in the Korean War. In the 1960s, the electro-optical bomb (or camera bomb) was reintroduced. They were equipped with television cameras and flare sights, by which the bomb would be steered until the flare superimposed the target. The camera bombs transmitted a "bomb's eye view" of the target back to a controlling aircraft. An operator in this aircraft then transmitted control signals to steerable fins fitted to the bomb. Such weapons were used increasingly by the USAF in the last few years of the Vietnam War because the political climate was increasingly intolerant of civilian casualties, and because it was possible to strike difficult targets (such as bridges) effectively with a single mission; the Thanh Hoa Bridge, for instance, was attacked repeatedly with iron bombs, to no effect, only to be dropped in one mission with PGMs.

Although not as popular as the newer JDAM and JSOW weapons, or even the older laser-guided bomb systems, weapons like the AGM-62 Walleye TV guided bomb are still being used, in conjunction with the AAW-144 Data Link Pod, on US Navy F/A-18 Hornets.

Infrared-guided/electro-optical

In World War II, the U.S. National Defense Research Committee developed the VB-6 Felix, which used infrared to home on ships. While it entered production in 1945, it was never employed operationally. The first successful electro optical guided munition was the AGM-62 Walleye during the Vietnam war. It was a family of large glide bombs which could automatically track targets using contrast differences in the video feed. The original concept was created by engineer Norman Kay while tinkering with televisions as a hobby. It was based on a device which could track objects on a television screen and place a "blip" on them to indicate where it was aiming. The first test of the weapon on 29 January 1963 was a success, with the weapon making a direct hit on the target. It served successfully for three decades until the 1990s.

The Raytheon Maverick is the most common electro-optical guided missile. As a heavy anti-tank missile it has among its various marks guidance systems such as electro-optical (AGM-65A), imaging infrared (AGM-65D), and laser homing (AGM-65E). The first two, by guiding themselves based on the visual or IR scene of the target, are fire-and-forget in that the pilot can release the weapon and it will guide itself to the target without further input, which allows the delivery aircraft to manoeuvre to escape return fire. The Pakistani NESCOM H-2 and H-4 MUPSOW are electro-optical (IR imaging and television guided) drop-and-forget precision-guided glide bombs. The Israeli Elbit Opher is also an IR imaging "drop and forget" guided bomb that has been reported to be considerably cheaper than laser-homing bombs and can be used by any aircraft, not requiring specialized wiring for a laser designator or for another aircraft to illuminate the target. During NATO's air campaign in 1999 in Kosovo, the new Italian AF AMX employed the Opher.

Laser-guided

BOLT-117, the world's first laser-guided bomb

In 1962, the US Army began research into laser guidance systems and by 1967 the USAF had conducted a competitive evaluation leading to full development of the world's first laser-guided bomb, the BOLT-117, in 1968. All such bombs work in much the same way, relying on the target being illuminated, or "painted," by a laser target designator on the ground or on an aircraft. They have the significant disadvantage of not being usable in poor weather where the target illumination cannot be seen, or where a target designator cannot get near the target. The laser designator sends its beam in a coded series of pulses so the bomb cannot be confused by an ordinary laser, and also so multiple designators can operate in reasonable proximity.

Originally the project began as a surface-to-air missile seeker developed by Texas Instruments. When TI executive Glenn E. Penisten attempted to sell the new technology to the Air Force, they inquired whether it could instead be used as a ground-attack system to overcome the problems they were having with bombing accuracy in Vietnam. After six attempts the weapon improved accuracy from 148 to 10 ft (50 to 3 m) and greatly exceeded the design requirements. The system was sent to Vietnam and performed well; because targeting pods did not yet exist, the bombs had to be aimed using a hand-held laser from the back seat of an F-4 Phantom. Eventually over 28,000 were dropped during the war.

Diagram showing the operation of a laser-guided ammunition round. From a CIA report, 1986.

Laser-guided weapons did not become commonplace until the advent of the microchip. They made their practical debut in Vietnam, where on 13 May 1972 they were used in the second successful attack on the Thanh Hóa Bridge ("Dragon's Jaw"). This structure had previously been the target of 800 American sorties (using unguided weapons) and was partially destroyed in each of two successful attacks, the other being on 27 April 1972 using Walleyes.

They were used, though not on a large scale, by the British forces during the 1982 Falklands War. The first large-scale use of smart weapons came in the early 1990s during Operation Desert Storm when they were used by coalition forces against Iraq. Even so, most of the air-dropped ordnance used in that war was "dumb," although the percentages are biased by the large use of various (unguided) cluster bombs. Laser-guided weapons were used in large numbers during the 1999 Kosovo War, but their effectiveness was often reduced by the poor weather conditions prevalent in the southern Balkans.

There are two basic families of laser-guided bombs in American (and American-sphere) service: the Paveway II and the Paveway III. The Paveway III guidance system is more aerodynamically efficient and so has a longer range; however, it is more expensive. Paveway II 500 lb (230 kg) LGBs (such as GBU-12) are a cheaper lightweight PGM suitable for use against vehicles and other small targets, while a Paveway III 2,000 lb (910 kg) penetrator (such as GBU-24) is a more expensive weapon suitable for use against high-value targets. GBU-12s were used to great effect in the first Gulf War, dropped from F-111F aircraft to destroy Iraqi armored vehicles in a process informally referred to by pilots as "tank plinking."

It is composed of a Mark 83 bomb fitted with a Paveway guidance kit and two Mk 78 solid propellant rockets that fire upon launch.
The notable novelty is that the system does not use aerodynamic flight control (e.g. tail fins), but impulse steering with mini-thrusters. It has been dubbed the Russian concept of impulse corrections (RCIC).
  • The Roketsan Cirit is a Turkish laser guided missile.
  • Cirit is a 2.8 in (70 mm) guided missile system fitted with a semi-active laser homing seeker. The seeker and guidance section is attached to a purpose-built warhead with Class 5 Insensitive Munition (IM) properties. The multipurpose warhead combines armour-piercing capability with enhanced behind-armour anti-personnel and incendiary effects. The engine is of reduced-smoke design, with IM properties. It is connected to the rear section by a roll bearing that enables it to rotate in flight. There are four small stabilising surfaces at the very rear of the missile, in front of the exhaust nozzle, that ensure stable flight. Roketsan has developed a new launch pod and a new canister in which Cirit is delivered as an all-up round. The Cirit has a maximum effective guided range of 5.0 mi (8 km), with a high probability of hit on a 9.8 ft × 9.8 ft (3 m × 3 m) target at this range.

Radar-guided

The Lockheed-Martin Hellfire II light-weight anti-tank weapon in one mark uses the radar on the Boeing AH-64D Apache Longbow to provide fire-and-forget guidance for that weapon.

Satellite-guided

An F-22 releases a JDAM from its center internal bay while flying at supersonic speed
HOPE/HOSBO of the Luftwaffe with a combination of GPS/INS and electro-optical guidance

Lessons learned during the first Gulf War showed the value of precision munitions, yet they also highlighted the difficulties in employing them—specifically when visibility of the ground or target from the air was degraded. The problem of poor visibility does not affect satellite-guided weapons such as Joint Direct Attack Munition (JDAM) and Joint Stand-Off Weapon (JSOW), which make use of the United States' GPS system for guidance. These weapons can be employed in all weather conditions, without any need for ground support. Because it is possible to jam GPS, the guidance package reverts to inertial navigation in the event of GPS signal loss. Inertial navigation is significantly less accurate; the JDAM achieves a published Circular Error Probable (CEP) of 43 ft (13 m) under GPS guidance, but typically only 98 ft (30 m) under inertial guidance (with free fall times of 100 seconds or less).
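
The practical meaning of the two CEP figures can be seen with the usual simplification behind CEP, a circular normal error distribution, for which the probability of landing within a radius R is 1 - 2^(-(R/CEP)^2). The target radius below is an invented example:

    def probability_within(radius_m, cep_m):
        """P(impact within radius) for a circular normal error distribution with the given CEP."""
        return 1.0 - 2.0 ** (-(radius_m / cep_m) ** 2)

    target_radius_m = 20.0   # illustrative target size
    for label, cep in (("GPS guidance, CEP 13 m", 13.0), ("inertial fallback, CEP 30 m", 30.0)):
        print(f"{label}: P(within {target_radius_m:.0f} m) = {probability_within(target_radius_m, cep):.2f}")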

The Griffin conversion kit consists of a front "seeker" section and a set of steerable tailplanes. The resulting guided munition features "trajectory shaping", which allows the bomb to fall along a variety of trajectories – from a shallow angle to a vertical top attack profile. IAI publishes a circular error probable figure for the weapon of 5 metres.
KAB-500S-E. Russian GLONASS-Guided Bomb
  • The GBU-57A/B Massive Ordnance Penetrator (MOP) is a U.S. Air Force precision-guided, 30,000-pound (14,000 kg) "bunker buster" bomb. It is substantially larger than the deepest-penetrating bunker busters previously available, the 5,000-pound (2,300 kg) GBU-28 and GBU-37.
  • The SMKB (Smart-MK-Bomb) is a Brazilian guidance kit that turns a standard 500-pound (230 kg) Mk 82 or 1,000-pound (450 kg) Mk 83 into a precision-guided weapon, respectively called SMKB-82 and SMKB-83. The kit provides an extended range of up to 31 mi (50 km); the weapons are guided by an integrated inertial guidance system coupled to three satellite networks (GPS, Galileo and GLONASS), with a wireless link handling the flow of data between the aircraft and the munition.
  • FT PGB is a family of Chinese satellite- and inertially-guided munitions.
  • LS PGB is a family of Chinese GPS+INS or laser guided munitions.

The precision of these weapons is dependent both on the precision of the measurement system used for location determination and the precision in setting the coordinates of the target. The latter critically depends on intelligence information, not all of which is accurate. According to a CIA report, the accidental United States bombing of the Chinese embassy in Belgrade during Operation Allied Force by NATO aircraft was attributed to faulty target information. However, if the targeting information is accurate, satellite-guided weapons are significantly more likely to achieve a successful strike in any given weather conditions than any other type of precision-guided munition.
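
A common simplification is to treat the weapon's own navigation error and the target-coordinate error as independent and combine them in quadrature; the numbers below are invented, and real error budgets are more involved:

    import math

    def total_cep(navigation_cep_m, target_location_error_m):
        """Combine independent error sources by root-sum-square (a common simplification)."""
        return math.sqrt(navigation_cep_m**2 + target_location_error_m**2)

    # With accurate target coordinates the weapon's own CEP dominates; with poor intelligence
    # the targeting error dominates no matter how precise the guidance is.
    print(total_cep(13.0, 5.0))    # good coordinates  -> ~13.9 m
    print(total_cep(13.0, 100.0))  # wrong coordinates -> ~100.8 m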

Advanced guidance concepts

Responding to after-action reports from pilots who employed laser or satellite guided weapons, Boeing developed a Laser JDAM (LJDAM) to provide both types of guidance in a single kit. Based on the existing Joint Direct Attack Munition configurations, a laser guidance package is added to a GPS/INS-guided weapon to increase its overall accuracy. Raytheon has developed the Enhanced Paveway family, which adds GPS/INS guidance to their Paveway family of laser-guidance packages. These "hybrid" laser and GPS guided weapons permit the carriage of fewer weapons types, while retaining mission flexibility, because these weapons can be employed equally against moving and fixed targets, or targets of opportunity. For instance, a typical weapons load on an F-16 flying in the Iraq War included a single 2,000-pound (910 kg) JDAM and two 1,000-pound (450 kg) LGBs. With LJDAM, and the new GBU-39 Small Diameter Bomb (SDB), these same aircraft can carry more bombs if necessary, and have the option of satellite or laser guidance for each weapon release.

The U.S. Navy leads development of a new 155 mm (6.1 in) artillery round called the Moving Target Artillery Round (MTAR), capable of destroying moving targets in GPS-denied environments. The Office of Naval Research (ONR), the Naval Surface Warfare Center Dahlgren Division (NSWC Dahlgren), and the U.S. Army Research Laboratory (ARL) have been coordinating MTAR, with final development scheduled for 2019.
Key features of the MTAR shell include extended range against moving targets, precision guidance and navigation without GPS, subsystem modularity, subsystem maturity, weapon system compatibility, restricted altitude, all-weather capability, reduced time of flight, and affordability. The new munition is intended for the Army or Marine Corps M777A1 howitzer, the M109A6 Paladin, and M109A7 Paladin Integrated Management (PIM) self-propelled 155 mm (6.1 in) artillery systems. The shell would also be used with the Navy's Advanced Gun System (AGS) aboard the Zumwalt-class destroyer, and with other future naval gun systems.
  • Precision Guidance Kit – Modernization (PGK-M)
The U.S. Army is planning for GPS-denied environments with the new Precision Guidance Kit – Modernization (PGK-M). An enhancement of previous technologies, PGK-M will give U.S. forces the ability to continue launching precision strikes when GPS is compromised by the enemy.
Picatinny Arsenal engineers are leading the development of a GPS alternative using image navigation for precision guidance of munitions, under the Armament Research, Development and Engineering Center (ARDEC). Other research partners include Draper Labs, U.S. Army Research Laboratory, Air Force Research Laboratory and the Aviation and Missile Research, Development, and Engineering Center.
The enhanced munition can navigate to a desired location, through a reference image used by the technology to reach the target. The PGK-M includes a collection of ad hoc software programmable radio networks, various kinds of wave-relay connectivity technologies and navigational technology.
  • PBK-500U Drel is a Russian guided jamming-resistant stealth glide bomb.

Cannon and mortar-launched guided projectiles

A cannon-launched guided projectile (CLGP) is fired from artillery, a ship's cannon, or armored vehicles. Several agencies and organizations sponsored CLGP programs. In the 1970s the United States Navy sponsored the Deadeye program, a laser-guided shell for its 5 in (127 mm) guns, and a program to mate a Paveway guidance system to an 8 in (203 mm) shell for the 8"/55 caliber Mark 71 gun. Other Navy efforts include the BTERM, ERGM, and LRLAP shells.

STRIX is fired like a conventional mortar round. The round contains an infrared imaging sensor that it uses to guide itself onto any tank or armoured fighting vehicle in the vicinity where it lands. The seeker is designed to ignore targets that are already burning.

Guided small arms

Precision-guided small arms prototypes have been developed which use a laser designator to guide an electronically actuated bullet to a target. Another system in development uses a laser range finder to trigger an explosive small arms shell in proximity to a target. The U.S. Army plans to use such devices in the future.

In 2008 the EXACTO program began under DARPA to develop a "fire and forget" smart sniper rifle system including a guided smart bullet and improved scope. The exact technologies of this smart bullet have not been released. EXACTO was test fired in 2014 and 2015 and results showing the bullet alter course to correct its path to its target were released.

In 2012 Sandia National Laboratories announced a self-guided bullet prototype that could track a target illuminated with a laser designator. The bullet is capable of updating its position 30 times a second and hitting targets over a mile away.

In mid-2016, Russia revealed it was developing a similar "smart bullet" weapon designed to hit targets at a distance of up to 6 mi (10 km).

Pike is a precision-guided mini-missile fired from an underslung grenade launcher.

Air burst grenade launchers are a type of precision-guided weapons. Such grenade launchers can preprogram their grenades using a fire-control system to explode in the air above or beside the enemy.

Neurophilosophy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Neurophilosophy ...