
Saturday, May 3, 2025

SNP genotyping

From Wikipedia, the free encyclopedia

SNP genotyping is the measurement of genetic variations of single nucleotide polymorphisms (SNPs) between members of a species. It is a form of genotyping, which is the measurement of more general genetic variation. SNPs are one of the most common types of genetic variation. An SNP is a single base pair mutation at a specific locus, usually consisting of two alleles (where the rare allele frequency is > 1%). SNPs have been found to be involved in the etiology of many human diseases and are becoming of particular interest in pharmacogenetics. Because SNPs are conserved during evolution, they have been proposed as markers for use in quantitative trait loci (QTL) analysis and in association studies in place of microsatellites. The use of SNPs is being extended in the HapMap project, which aims to provide the minimal set of SNPs needed to genotype the human genome. SNPs can also provide a genetic fingerprint for use in identity testing. Growing interest in SNPs has been reflected in the rapid development of a diverse range of SNP genotyping methods.

Hybridization-based methods

Several applications have been developed that interrogate SNPs by hybridizing complementary DNA probes to the SNP site. The challenge of this approach is reducing cross-hybridization between the allele-specific probes. This challenge is generally overcome by manipulating the hybridization stringency conditions.
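One way to see how stringency can be tuned is to estimate a probe's melting temperature. The sketch below uses the simple Wallace rule (Tm ≈ 2(A+T) + 4(G+C) °C), which is only a rough guide for short oligos; the probe sequence is invented for illustration, and real assay design relies on nearest-neighbor models and empirical optimization.

```python
def wallace_tm(probe: str) -> int:
    """Rough melting-temperature estimate (degrees C) for a short oligo
    via the Wallace rule: Tm = 2*(A+T) + 4*(G+C).
    Reasonable only for probes shorter than ~14 nt."""
    probe = probe.upper()
    at = probe.count("A") + probe.count("T")
    gc = probe.count("G") + probe.count("C")
    return 2 * at + 4 * gc

# A hypothetical 12-mer allele-specific probe:
print(wallace_tm("ACGTACGTACGT"))  # 36
```

The hybridization or wash temperature is then chosen between the Tm of the perfectly matched duplex and the lower Tm of the mismatched one, so that only the matched probe stays bound.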

Dynamic allele-specific hybridization

Dynamic allele-specific hybridization (DASH) genotyping takes advantage of the differences in the melting temperature of DNA that result from the instability of mismatched base pairs. The process can be extensively automated and relies on a few simple principles.

In the first step, a genomic segment is amplified and attached to a bead through a PCR reaction with a biotinylated primer. In the second step, the amplified product is attached to a streptavidin column and washed with NaOH to remove the unbiotinylated strand. An allele-specific oligonucleotide is then added in the presence of a molecule that fluoresces when bound to double-stranded DNA. The intensity is then measured as temperature is increased until the melting temperature (Tm) can be determined. A SNP will result in a lower than expected Tm.
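The Tm determination in the last step can be sketched numerically: Tm is taken where fluorescence falls fastest as temperature rises, i.e. at the peak of −dF/dT. The melt-curve data below are synthetic and purely illustrative.

```python
def melt_tm(temps, fluorescence):
    """Estimate Tm as the temperature at which fluorescence drops
    fastest (peak of -dF/dT), using finite differences."""
    best_i, best_drop = 1, 0.0
    for i in range(1, len(temps)):
        drop = (fluorescence[i - 1] - fluorescence[i]) / (temps[i] - temps[i - 1])
        if drop > best_drop:
            best_i, best_drop = i, drop
    # Report the midpoint of the steepest interval
    return (temps[best_i] + temps[best_i - 1]) / 2

# Synthetic melt curve: fluorescence collapses around 67 degrees C.
temps = [60, 62, 64, 66, 68, 70, 72]
fluor = [100, 98, 90, 60, 25, 10, 8]
print(melt_tm(temps, fluor))  # 67.0
```

A probe-target duplex carrying a mismatch would show the same collapse several degrees lower; that Tm shift is exactly what DASH measures.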

Because DASH genotyping measures a quantifiable change in Tm, it is capable of measuring all types of mutations, not just SNPs. Other benefits of DASH include its ability to work with label-free probes and its simple design and performance conditions.

Molecular beacons

SNP detection through molecular beacons makes use of a specifically engineered single-stranded oligonucleotide probe. The oligonucleotide is designed such that there are complementary regions at each end and a probe sequence located in between. This design allows the probe to take on a hairpin, or stem-loop, structure in its natural, isolated state. Attached to one end of the probe is a fluorophore and to the other end a fluorescence quencher. Because of the stem-loop structure of the probe, the fluorophore is close to the quencher, thus preventing the molecule from emitting any fluorescence. The molecule is also engineered such that only the probe sequence is complementary to the genomic DNA that will be used in the assay (Abravaya et al. 2003).

If the probe sequence of the molecular beacon encounters its target genomic DNA during the assay, it will anneal and hybridize. Because of the length of the probe sequence, the hairpin segment of the probe will be denatured in favour of forming a longer, more stable probe-target hybrid. This conformational change separates the fluorophore from the quencher, allowing the molecule to fluoresce.

If, on the other hand, the probe sequence encounters a target sequence with as little as one non-complementary nucleotide, the molecular beacon will preferentially stay in its natural hairpin state and no fluorescence will be observed, as the fluorophore remains quenched.

The unique design of these molecular beacons allows for a simple diagnostic assay to identify SNPs at a given location. If one molecular beacon is designed to match the wild-type allele and another to match a mutant of the allele, the two can be used to identify the genotype of an individual. If only the first probe's fluorophore wavelength is detected during the assay, the individual is homozygous for the wild-type allele. If only the second probe's wavelength is detected, the individual is homozygous for the mutant allele. Finally, if both wavelengths are detected, both molecular beacons must be hybridizing to their complements, so the individual must carry both alleles and be heterozygous.
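The two-beacon readout described above reduces to a simple decision rule. The sketch below thresholds each beacon's fluorescence against a hypothetical background level; real instruments normalize signals and cluster many samples rather than thresholding a single one.

```python
def call_genotype(wt_signal: float, mut_signal: float,
                  threshold: float = 1.0) -> str:
    """Call a genotype from the fluorescence of two molecular beacons:
    one matching the wild-type allele, one the mutant allele.
    `threshold` is an assumed background cutoff in arbitrary units."""
    wt = wt_signal > threshold
    mut = mut_signal > threshold
    if wt and mut:
        return "heterozygous"
    if wt:
        return "homozygous wild-type"
    if mut:
        return "homozygous mutant"
    return "no call"  # neither beacon fluoresced; the assay failed

print(call_genotype(8.2, 0.3))  # homozygous wild-type
print(call_genotype(7.9, 6.5))  # heterozygous
```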

SNP microarrays

In high-density oligonucleotide SNP arrays, hundreds of thousands of probes are arrayed on a small chip, allowing for many SNPs to be interrogated simultaneously. Because SNP alleles differ in only one nucleotide and because it is difficult to achieve optimal hybridization conditions for all probes on the array, the target DNA has the potential to hybridize to mismatched probes. This is addressed somewhat by using several redundant probes to interrogate each SNP. Probes are designed to have the SNP site in several different locations as well as containing mismatches to the SNP allele. By comparing the differential amount of hybridization of the target DNA to each of these redundant probes, it is possible to determine specific homozygous and heterozygous alleles. Although oligonucleotide microarrays have comparatively lower specificity and sensitivity, the scale of SNPs that can be interrogated is a major benefit. The Affymetrix Human SNP 5.0 GeneChip performs a genome-wide assay that can genotype over 500,000 human SNPs (Affymetrix 2007).

Enzyme-based methods

A broad range of enzymes including DNA ligase, DNA polymerase and nucleases have been employed to generate high-fidelity SNP genotyping methods.

Restriction fragment length polymorphism

Restriction fragment length polymorphism (RFLP) is considered to be the simplest and earliest method to detect SNPs. SNP-RFLP makes use of the many different restriction endonucleases and their high affinity for unique and specific restriction sites. By performing a digestion on a genomic sample and determining fragment lengths through a gel assay, it is possible to ascertain whether or not the enzymes cut the expected restriction sites. A failure to cut the genomic sample results in an identifiably larger-than-expected fragment, implying that there is a mutation at the restriction site that is protecting it from nuclease activity.
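The interpretation step can be illustrated by predicting fragment lengths from a complete digestion. This toy sketch cuts at the start of each recognition-site occurrence (a real enzyme cuts at an enzyme-specific offset inside the site, which merely shifts the split point); EcoRI's GAATTC site is used as the example, and the sequences are invented.

```python
def fragment_lengths(sequence: str, site: str):
    """Predict fragment lengths from a complete digestion, cutting at
    the 5' edge of every occurrence of the recognition site."""
    cut_positions = []
    i = sequence.find(site)
    while i != -1:
        cut_positions.append(i)
        i = sequence.find(site, i + 1)
    bounds = [0] + cut_positions + [len(sequence)]
    return [b - a for a, b in zip(bounds, bounds[1:]) if b > a]

# A SNP inside the site (GAATTC -> GAGTTC) abolishes the cut,
# leaving one larger-than-expected fragment.
print(fragment_lengths("AAAGAATTCGGGG", "GAATTC"))  # [3, 10]
print(fragment_lengths("AAAGAGTTCGGGG", "GAATTC"))  # [13]
```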

The combined factors of the high complexity of most eukaryotic genomes, the requirement for specific endonucleases, the fact that the exact mutation cannot necessarily be resolved in a single experiment, and the slow nature of gel assays make RFLP a poor choice for high throughput analysis.

PCR-based methods

Tetra-primer amplification refractory mutation system PCR, or ARMS-PCR, employs two pairs of primers to amplify two alleles in one PCR reaction. The primers are designed such that the two inner primers overlap the SNP site, each matching perfectly only one of the possible alleles. The method rests on the observation that, under appropriate conditions, oligonucleotides with a mismatched 3'-residue will not function as primers in PCR. As a result, if a given allele is present in the reaction, the primer specific to that allele will produce a product, while the primer specific to the alternative allele will not. The two primer pairs are also designed such that their PCR products differ significantly in length, allowing the bands to be easily distinguished by gel electrophoresis or melting temperature analysis. In examining the results, if a genomic sample is homozygous, the resulting PCR products come from the inner primer that matches the SNP allele paired with the opposite-strand outer primer, as well as from the two outer primers. If the genomic sample is heterozygous, products result from the inner primer of each allele with its respective outer-primer counterpart, in addition to the outer-primer product.
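Reading the resulting gel then amounts to mapping the observed band sizes to a genotype. All sizes and the matching tolerance below are hypothetical design values, not taken from any published assay.

```python
def arms_genotype(bands, allele1_size=180, allele2_size=260,
                  control_size=400, tol=10):
    """Interpret tetra-primer ARMS-PCR band sizes (bp). The control
    band from the two outer primers should always appear; the
    allele-specific bands reveal the genotype."""
    def present(size):
        return any(abs(b - size) <= tol for b in bands)
    if not present(control_size):
        return "reaction failed"  # no outer-primer product at all
    a1, a2 = present(allele1_size), present(allele2_size)
    if a1 and a2:
        return "heterozygous"
    if a1:
        return "homozygous allele 1"
    if a2:
        return "homozygous allele 2"
    return "no call"

print(arms_genotype([398, 182]))       # homozygous allele 1
print(arms_genotype([402, 178, 263]))  # heterozygous
```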

An alternative strategy is to run multiple qPCR reactions with different primer sets that target each allele separately. Well-designed primers will amplify their target SNP at a much earlier cycle than the other SNPs. This allows more than two alleles to be distinguished, although an individual qPCR reaction is required for each SNP. To achieve high enough specificity, the primer sequence may require placement of an artificial mismatch near its 3'-end, which is an approach generally known as Taq-MAMA.

Flap endonuclease

Flap endonuclease (FEN) is an endonuclease that catalyzes structure-specific cleavage. This cleavage is highly sensitive to mismatches and can be used to interrogate SNPs with a high degree of specificity.

In the basic Invader assay, a FEN called cleavase is combined with two specific oligonucleotide probes that, together with the target DNA, can form a tripartite structure recognized by cleavase.[7] The first probe, called the Invader oligonucleotide, is complementary to the 3’ end of the target DNA. The last base of the Invader oligonucleotide is a non-matching base that overlaps the SNP nucleotide in the target DNA. The second probe is an allele-specific probe which is complementary to the 5’ end of the target DNA, but also extends past the 3’ side of the SNP nucleotide. The allele-specific probe contains a base complementary to the SNP nucleotide. If the target DNA contains the desired allele, the Invader and allele-specific probes bind to the target DNA, forming the tripartite structure. This structure is recognized by cleavase, which cleaves and releases the 3’ end of the allele-specific probe. If the SNP nucleotide in the target DNA is not complementary to the allele-specific probe, the correct tripartite structure is not formed and no cleavage occurs. The Invader assay is usually coupled with a fluorescence resonance energy transfer (FRET) system to detect the cleavage event. In this setup, a quencher molecule is attached to the 3’ end and a fluorophore to the 5’ end of the allele-specific probe. If cleavage occurs, the fluorophore is separated from the quencher molecule, generating a detectable signal.

Only minimal cleavage occurs with mismatched probes, making the Invader assay highly specific. However, in its original format, only one SNP allele could be interrogated per reaction sample, and a large amount of target DNA was required to generate a detectable signal in a reasonable time frame. Several developments have extended the original Invader assay. By carrying out secondary FEN cleavage reactions, the Serial Invasive Signal Amplification Reaction (SISAR) allows both SNP alleles to be interrogated in a single reaction. The SISAR Invader assay also requires less target DNA, improving the sensitivity of the original Invader assay. The assay has also been adapted in several ways for use in a high-throughput format. In one platform, the allele-specific probes are anchored to microspheres. When cleavage by FEN generates a detectable fluorescent signal, the signal is measured using flow cytometry. The sensitivity of flow cytometry eliminates the need for PCR amplification of the target DNA (Rao et al. 2003). These high-throughput platforms have not progressed beyond the proof-of-principle stage, and so far the Invader system has not been used in any large-scale SNP genotyping projects.

Primer extension

Primer extension is a two-step process that first involves the hybridization of a probe to the bases immediately upstream of the SNP nucleotide, followed by a ‘mini-sequencing’ reaction in which DNA polymerase extends the hybridized primer by adding a base that is complementary to the SNP nucleotide. This incorporated base is detected and determines the SNP allele (Goelet et al. 1999; Syvanen 2001). Because primer extension is based on the highly accurate DNA polymerase enzyme, the method is generally very reliable. Primer extension is able to genotype most SNPs under very similar reaction conditions, making it highly flexible as well. The primer extension method is used in a number of assay formats. These formats use a wide range of detection techniques, including MALDI-TOF mass spectrometry (see Sequenom) and ELISA-like methods.

Generally, there are two main approaches, which use the incorporation of either fluorescently labeled dideoxynucleotides (ddNTPs) or fluorescently labeled deoxynucleotides (dNTPs). With ddNTPs, probes hybridize to the target DNA immediately upstream of the SNP nucleotide, and a single ddNTP complementary to the SNP allele is added to the 3’ end of the probe (the missing 3'-hydroxyl of the dideoxynucleotide prevents further nucleotides from being added). Each ddNTP is labeled with a different fluorescent signal, allowing for the detection of all four alleles in the same reaction. With dNTPs, allele-specific probes have 3’ bases which are complementary to each of the SNP alleles being interrogated. If the target DNA contains an allele complementary to the probe's 3’ base, the target DNA will completely hybridize to the probe, allowing DNA polymerase to extend from the 3’ end of the probe. This is detected by the incorporation of the fluorescently labeled dNTPs onto the end of the probe. If the target DNA does not contain an allele complementary to the probe's 3’ base, the hybrid will have a mismatch at the 3’ end of the probe and DNA polymerase will not be able to extend from it. The benefit of the second approach is that several labeled dNTPs may be incorporated into the growing strand, allowing for increased signal. However, in rare cases DNA polymerase can extend from mismatched 3’ probes, giving a false-positive result.

A different approach is used by Sequenom's iPLEX SNP genotyping method, which uses a MassARRAY mass spectrometer. Extension probes are designed in such a way that 40 different SNP assays can be amplified and analyzed in a single PCR cocktail. The extension reaction uses ddNTPs as above, but the detection of the SNP allele depends on the actual mass of the extension product rather than on a fluorescent molecule. This method is suited to low- to medium-high-throughput applications and is not intended for whole-genome scanning.

The flexibility and specificity of primer extension make it amenable to high throughput analysis. Primer extension probes can be arrayed on slides allowing for many SNPs to be genotyped at once. Broadly referred to as arrayed primer extension (APEX), this technology has several benefits over methods based on differential hybridization of probes. Comparatively, APEX methods have greater discriminating power than methods using this differential hybridization, as it is often impossible to obtain the optimal hybridization conditions for the thousands of probes on DNA microarrays (usually this is addressed by having highly redundant probes). However, the same density of probes cannot be achieved in APEX methods, which translates into lower output per run.

Illumina Incorporated's Infinium assay is an example of a whole-genome genotyping pipeline based on the primer extension method. In the Infinium assay, over 100,000 SNPs can be genotyped. The assay uses hapten-labelled nucleotides in a primer extension reaction. The hapten label is recognized by antibodies, which in turn are coupled to a detectable signal (Gunderson et al. 2006).

APEX-2 is an arrayed primer extension genotyping method which is able to identify hundreds of SNPs or mutations in parallel using efficient homogeneous multiplex PCR (up to 640-plex) and four-color single-base extension on a microarray. The multiplex PCR requires two oligonucleotides per SNP/mutation generating amplicons that contain the tested base pair. The same oligonucleotides are used in the following step as immobilized single-base extension primers on a microarray (Krjutskov et al. 2008).

5’-nuclease

Taq DNA polymerase's 5’-nuclease activity is used in the TaqMan assay for SNP genotyping. The TaqMan assay is performed concurrently with a PCR reaction, and the results can be read in real time as the PCR reaction proceeds (McGuigan & Ralston 2002). The assay requires forward and reverse PCR primers that amplify a region including the SNP polymorphic site. Allele discrimination is achieved using FRET combined with one or two allele-specific probes that hybridize to the SNP polymorphic site. The probes have a fluorophore linked to their 5’ end and a quencher molecule linked to their 3’ end. While the probe is intact, the quencher remains in close proximity to the fluorophore, suppressing the fluorophore's signal. During the PCR amplification step, if the allele-specific probe is perfectly complementary to the SNP allele, it will bind to the target DNA strand and then be degraded by the 5’-nuclease activity of the Taq polymerase as it extends the DNA from the PCR primers. The degradation of the probe results in the separation of the fluorophore from the quencher molecule, generating a detectable signal. If the allele-specific probe is not perfectly complementary, it will have a lower melting temperature and will not bind as efficiently. This prevents the nuclease from acting on the probe (McGuigan & Ralston 2002).

Since the TaqMan assay is based on PCR, it is relatively simple to implement. The TaqMan assay can be multiplexed by combining the detection of up to seven SNPs in one reaction. However, since each SNP requires a distinct probe, the TaqMan assay is limited by how close together the SNPs can be situated. The scale of the assay can be drastically increased by performing many simultaneous reactions in microtitre plates. Generally, TaqMan is limited to applications that involve interrogating a small number of SNPs, since optimal probes and reaction conditions must be designed for each SNP (Syvanen 2001).

Oligonucleotide Ligation Assay

DNA ligase catalyzes the ligation of the 3' end of a DNA fragment to the 5' end of a directly adjacent DNA fragment. This mechanism can be used to interrogate a SNP by hybridizing two probes directly over the SNP polymorphic site, whereby ligation can occur only if the probes are perfectly complementary to the target DNA. In the oligonucleotide ligation assay, two probes are designed: an allele-specific probe, which hybridizes to the target DNA so that its 3' base is situated directly over the SNP nucleotide, and a second probe, which hybridizes to the template upstream (downstream in the complementary strand) of the SNP polymorphic site and provides a 5' end for the ligation reaction. If the allele-specific probe matches the target DNA, it will fully hybridize to the target DNA and ligation can occur. Ligation does not generally occur in the presence of a mismatched 3' base. Ligated or unligated products can be detected by gel electrophoresis, MALDI-TOF mass spectrometry or, for large-scale applications, capillary electrophoresis. With appropriate sequences and tags on the oligonucleotides, high-throughput sequence data can be generated from the ligated products and genotypes determined (Curry et al., 2012). The use of large numbers of sample indexes allows high-throughput sequence data on hundreds of SNPs in thousands of samples to be generated in a small portion of a high-throughput sequencing run. This is a massive genotyping by sequencing technology (MGST).
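The allele-discrimination step can be sketched as a string check: ligation is possible only when the two probes sit directly adjacent on the template and the allele-specific probe's 3' base matches it. For simplicity the probes here are written in template (sense) coordinates rather than as their actual complementary sequences, and all sequences are invented.

```python
def ola_ligates(template: str, allele_probe: str, common_probe: str) -> bool:
    """Return True if the allele-specific probe ends exactly at the
    junction with the common probe and matches the template there,
    so that DNA ligase could seal the nick."""
    junction = template.find(common_probe)  # 5' end of the common probe
    if junction < len(allele_probe):
        return False                        # probes cannot sit adjacently
    return template[junction - len(allele_probe):junction] == allele_probe

# The allele probe's 3'-terminal base sits over the SNP position:
print(ola_ligates("ACGTGACCTA", "ACGTG", "ACCTA"))  # True  (matched allele)
print(ola_ligates("ACGTGACCTA", "ACGTC", "ACCTA"))  # False (3' mismatch)
```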

Other post-amplification methods based on physical properties of DNA

The characteristic DNA properties of melting temperature and single stranded conformation have been used in several applications to distinguish SNP alleles. These methods very often achieve high specificity but require highly optimized conditions to obtain the best possible results.

Single strand conformation polymorphism

Single-stranded DNA (ssDNA) folds into a tertiary structure. The conformation is sequence dependent, and most single base pair mutations will alter the shape of the structure. When applied to a gel, the tertiary shape will determine the mobility of the ssDNA, providing a mechanism to differentiate between SNP alleles. This method first involves PCR amplification of the target DNA. The double-stranded PCR products are denatured using heat and formamide to produce ssDNA. The ssDNA is applied to a non-denaturing electrophoresis gel and allowed to fold into a tertiary structure. Differences in DNA sequence will alter the tertiary conformation and be detected as a difference in the ssDNA strand mobility (Costabile et al. 2006). This method is widely used because it is technically simple, relatively inexpensive and uses commonly available equipment. However, compared to other SNP genotyping methods, the sensitivity of this assay is lower. The ssDNA conformation is highly dependent on temperature, and the ideal temperature is generally not apparent in advance; very often the assay is carried out at several different temperatures. There is also a restriction on fragment length, because sensitivity drops when sequences longer than 400 bp are used (Costabile et al. 2006).

Temperature gradient gel electrophoresis

The temperature gradient gel electrophoresis (TGGE) or temperature gradient capillary electrophoresis (TGCE) method is based on the principle that partially denatured DNA is more restricted and travels more slowly in a porous material such as a gel. This property allows for the separation of DNA by melting temperature. To adapt these methods for SNP detection, two fragments are used: the target DNA, which contains the SNP polymorphic site being interrogated, and an allele-specific DNA sequence, referred to as the normal DNA fragment. The normal fragment is identical to the target DNA except potentially at the SNP polymorphic site, which is unknown in the target DNA. The fragments are denatured and then reannealed. If the target DNA has the same allele as the normal fragment, homoduplexes will form that have the same melting temperature. When run on the gel with a temperature gradient, only one band will appear. If the target DNA has a distinct allele, four products will form following the reannealing step: homoduplexes consisting of target DNA, homoduplexes consisting of normal DNA, and two heteroduplexes, each consisting of a strand of target DNA hybridized with a normal DNA strand. These four products will have distinct melting temperatures and will appear as four bands in the denaturing gel.

Denaturing high performance liquid chromatography

Denaturing high performance liquid chromatography (DHPLC) uses reversed-phase HPLC to interrogate SNPs. The key to DHPLC is the solid phase, which has differential affinity for single- and double-stranded DNA. In DHPLC, DNA fragments are denatured by heating and then allowed to reanneal. The melting temperature of the reannealed DNA fragments determines the length of time they are retained in the column. Using PCR, two fragments are generated: target DNA containing the SNP polymorphic site and an allele-specific DNA sequence, referred to as the normal DNA fragment. This normal fragment is identical to the target DNA except potentially at the SNP polymorphic site, which is unknown in the target DNA. The fragments are denatured and then allowed to gradually reanneal. The reannealed products are added to the DHPLC column. If the SNP allele in the target DNA matches the normal DNA fragment, only identical homoduplexes will form during the reannealing step. If the target DNA contains a different SNP allele than the normal DNA fragment, heteroduplexes of the target DNA and normal DNA containing a mismatched polymorphic site will form in addition to homoduplexes. The mismatched heteroduplexes will have a different melting temperature than the homoduplexes and will not be retained in the column as long. This generates a chromatogram pattern distinct from the pattern that would be generated if the target DNA and normal DNA fragments were identical. The eluted DNA is detected by UV absorption.

DHPLC is easily automated as no labeling or purification of the DNA fragments is needed. The method is also relatively fast and has a high specificity. One major drawback of DHPLC is that the column temperature must be optimized for each target in order to achieve the right degree of denaturation.

High-resolution melting of the entire amplicon

High-resolution melting analysis is conceptually the simplest PCR-based method: the same thermodynamic properties that allow the gel techniques to work apply here, monitored in real time. A fluorimeter tracks the post-PCR denaturation of the entire dsDNA amplicon. Primers are designed to flank the site of interest, and a double-strand-specific dye, included in the PCR mix, intercalates into the PCR product; in essence, the entire amplicon becomes a probe. The primers can either be positioned very close to either side of the SNP in question (small-amplicon genotyping; Liew, 2004) or used to amplify a larger region (100–400 bp in length) for scanning purposes. For simple genotyping of a SNP, it is easier to keep the amplicon small to minimize the chance of mistaking one SNP for another. The melting temperature (Tm) of the entire amplicon is determined, and on the better instruments most homozygotes differ sufficiently in Tm to be genotyped. Heterozygotes are even easier to differentiate because they generate heteroduplexes (refer to the gel-based explanations above), which broaden the melt transition and usually give two discernible peaks. Amplicon melting using a fluorescently labeled primer has been described (Gundry et al., 2003) but is less practical than using double-strand-specific dyes because of the cost of the fluorogenic primer.
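The heterozygote call described above comes down to counting peaks in the negative-derivative melt curve (−dF/dT): heteroduplexes broaden the transition and add a second, lower-temperature peak. The sketch below uses synthetic curves and a hypothetical minimum-drop threshold.

```python
def melt_peaks(temps, fluor, min_drop=2.0):
    """Count local maxima in -dF/dT that exceed `min_drop`.
    One peak suggests a homozygote; two suggest a heterozygote."""
    d = [(fluor[i - 1] - fluor[i]) / (temps[i] - temps[i - 1])
         for i in range(1, len(temps))]
    peaks = 0
    for i in range(1, len(d) - 1):
        if d[i] > d[i - 1] and d[i] >= d[i + 1] and d[i] >= min_drop:
            peaks += 1
    return peaks

temps = list(range(60, 82, 2))                               # 60..80 C
homozygote   = [100, 99, 97, 90, 70, 40, 20, 10, 6, 5, 4]    # one transition
heterozygote = [100, 95, 80, 70, 68, 66, 55, 35, 15, 6, 5]   # two transitions
print(melt_peaks(temps, homozygote))    # 1
print(melt_peaks(temps, heterozygote))  # 2
```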

Scanning of larger amplicons is based on the same principles as outlined above. However, both the melting temperature and the overall shape of the melting curve become informative. For amplicons longer than about 150 bp there are often more than two melting peaks, each of which can vary depending on the DNA template composition. Numerous investigators have been able to eliminate the majority of their sequencing through melt-based scanning, allowing accurate locus-based genotyping of large numbers of individuals. Many investigators have found scanning for mutations using high-resolution melting a viable and practical way to study entire genes.

Use of DNA mismatch-binding proteins

DNA mismatch-binding proteins can distinguish single nucleotide mismatches and thus facilitate differential analysis of SNPs. For example, MutS protein from Thermus aquaticus binds different single nucleotide mismatches with different affinities and can be used in capillary electrophoresis to differentiate all six sets of mismatches (Drabovich & Krylov 2006).

SNPlex

SNPlex is a proprietary genotyping platform sold by Applied Biosystems.

Surveyor nuclease assay

Surveyor nuclease is a mismatch endonuclease enzyme that recognizes all base substitutions and small insertions/deletions (indels), and cleaves the 3′ side of mismatched sites in both DNA strands.

Sequencing

Next-generation sequencing technologies such as pyrosequencing produce reads of fewer than 250 bases, which limits their ability to sequence whole genomes. However, their ability to generate results in real time and their potential to be massively scaled up make them a viable option for sequencing small regions to perform SNP genotyping. Compared to other SNP genotyping methods, sequencing is particularly suited to identifying multiple SNPs in a small region, such as the highly polymorphic major histocompatibility complex region of the genome.

Friday, May 2, 2025

Intelligence analysis

From Wikipedia, the free encyclopedia

Overview

Analysis is part of the Intelligence Process or Cycle

Intelligence analysis is a way of reducing the ambiguity of highly ambiguous situations. Many analysts prefer the middle-of-the-road explanation, rejecting high- or low-probability explanations. Analysts may apply their own standard of proportionality to the opponent's risk acceptance, rejecting the possibility that the opponent may take an extreme risk to achieve what the analyst regards as a minor gain. The analyst must avoid the special cognitive traps for intelligence analysis, such as projecting what she or he wants the opponent to think, and using available information to justify that conclusion. Being aware that one's enemies may try to confuse is a relevant factor, especially in the areas of intelligence cycle security and its subdiscipline, counterintelligence. During World War II, the German word for one counterintelligence art was Funkspiel, or radio game: not a game in the sense of playing fields, but something that draws from game theory and seeks to confuse one's opponents.

A set of problem-solving talents is essential for analysts. Since the other side may be hiding its intentions, the analyst must be tolerant of ambiguity, of false leads, and of partial information far more fragmentary than that which faces the experimental scientist. According to Dick Heuer, in an experiment in which analyst behavior was studied, the process is one of incremental refinement: "with test subjects in the experiment demonstrating that initial exposure to blurred stimuli interferes with accurate perception even after more and better information becomes available...the experiment suggests that an analyst who starts observing a potential problem situation at an early and unclear stage is at a disadvantage as compared with others, such as policymakers, whose first exposure may come at a later stage when more and better information is available."

The receipt of information in small increments over time also facilitates assimilation of this information into the analyst's existing views. One item of information may not be sufficient to prompt the analyst to change a previous view. The cumulative message inherent in many pieces of information may be significant but is attenuated when this information is not examined as a whole. The Intelligence Community's review of its performance before the 1973 Yom Kippur War noted [in the only declassified paragraph]:

The problem of incremental analysis—especially as it applies to the current intelligence process—was also at work in the period preceding hostilities. Analysts, according to their own accounts, were often proceeding on the basis of the day's take, hastily comparing it with material received the previous day. They then produced in 'assembly line fashion' items which may have reflected perceptive intuition but which [did not] accrue from a systematic consideration of an accumulated body of integrated evidence.

Writers on analysis have suggested reasons why analysts come to incorrect conclusions: they fall into cognitive traps for intelligence analysis. While avoiding the opposite trap of deferring decisions by always wanting more information, analysts also need to recognize that they can always learn more about the opponent.

Analytic tradecraft

Intelligence reflects a progressive refinement of data and information

The body of specific methods for intelligence analysis is generally referred to as analytic tradecraft. The academic disciplines examining the art and science of intelligence analysis are most routinely referred to as "Intelligence Studies", exemplified by institutions such as the Joint Military Intelligence College, the University of Pittsburgh Graduate School of Public and International Affairs (Security and Intelligence Studies major), and the Mercyhurst College Institute for Intelligence Studies. The goals of the Analytic Tradecraft Notes of the Central Intelligence Agency's Directorate of Intelligence (DI) include the following:

Pursuit of expertise in analytic tradecraft is a central element of this plan. Our tradecraft enables analysts to provide "value-added" to consumers of intelligence by ensuring:

  • Dedication to objectivity – tough-minded evaluation of information and explicit defense of judgments – which enhances our credibility with consumers dealing with complex and sensitive policy issues.
  • Delivery of our products to the right people in time to be useful in their decision-making, and using feedback and tasking from them to drive the collection of the basic intelligence that we need to produce our analysis.

Analytic tradecraft skills also serve as "force multipliers", helping us provide top-quality analysis:

  • The feedback our customers give us on our customized analysis clarifies for the analyst what questions most need answering.
  • Employing rules for evaluating information and making judgments helps analysts manage the deluge of information, discern trends, and identify attempts at deception.
  • Tradecraft standards can be used to iron out differences among experts who have complementary substantive specialties. Their interaction enhances teamwork, which allows the [Directorate of Intelligence] to be more productive.

On January 2, 2015, the Office of the Director of National Intelligence (ODNI) issued Intelligence Community Directive (ICD) 203, which "establishe[d] Intelligence Community (IC) Analytic Standards that govern the production and evaluation of analytic products; articulates the responsibility of intelligence analysts to strive for excellence, integrity, and rigor in their analytic thinking and work practices..."

Setting goals for an intelligence analysis

Stating the objective from the consumer's standpoint is an excellent starting point for goal-setting:

Ambassador Robert D. Blackwill ... seized the attention of the class of some 30 [intelligence community managers] by asserting that as a policy official he never read ... analytic papers. Why? "Because they were nonadhesive." As Blackwill explained, they were written by people who did not know what he was trying to do and, so, could not help him get it done: "When I was working at State on European affairs, for example, on certain issues I was the Secretary of State. DI analysts did not know that—that I was one of a handful of key decision makers on some very important matters."

More charitably, he now characterizes his early periods of service at the NSC Staff and in State Department bureaus as ones of "mutual ignorance"

"DI analysts did not have the foggiest notion of what I did; and I did not have a clue as to what they could or should do."

Blackwill explained how he used his time efficiently, which rarely involved reading general CIA reports. "I read a lot. Much of it was press. You have to know how issues are coming across politically to get your job done. Also, cables from overseas for preparing agendas for meetings and sending and receiving messages from my counterparts in foreign governments. Countless versions of policy drafts from those competing for the President's blessing. And dozens of phone calls. Many are a waste of time but have to be answered, again, for policy and political reasons.

"One more minute, please, on what I did not find useful. This is important. My job description called for me to help prepare the President for making policy decisions, including at meetings with foreign counterparts and other officials...Do you think that after I have spent long weeks shaping the agenda, I have to be told a day or two before the German foreign minister visits Washington why he is coming?"

Be bold and honest

Weasel-wording is problematic in intelligence analysis; still, some things truly are uncertain. Arguably, when uncertainties are given with probabilities or at least some quantification of likelihood, they become less a case of weasel wording and more a case of reflecting reality as it is best understood.

While a good analyst must be able to consider, thoughtfully, alternative viewpoints, an analyst must be willing to stand by his or her position. This is especially important in specialized areas, where the analyst may be the only one who reads every field report and every technical observation on a subject.

"Believe in your own professional judgments. Always be willing to listen to alternative conclusions or other points of view, but stand your ground if you really believe the intelligence supports a certain conclusion. Just because someone is your boss, is a higher grade, or has been around longer than you does not mean he or she knows more about your account than you do. You are the one who reads the traffic every day and who studies the issue". At the same time, Watanabe observes, "It is better to be mistaken than wrong". Unwillingness to be wrong is also a disease of the highest policymaker levels, which is why there needs to be a delicately balanced relationship, built on trust, between a policymaker and his closest intelligence advisors.

"Being an intelligence analyst is not a popularity contest...But your job is to pursue the truth. I recall a colleague who forwarded an analysis that called into question the wisdom behind several new US weapon systems. This analysis caused criticism of the CIA, of his office, and of himself. He stood his ground, however; the Agency supported him, and eventually he was proven right. He did not make a lot of friends, but he did his job."

Intelligence analysts are expected both to support policymakers' opinions and to give them reality checks. The most effective products have several common features:

  • Opportunities and dangers for interests of the analyst's country, especially unexpected developments that may require a reaction.
  • Motives, objectives, strengths, and vulnerabilities of adversaries, allies, and other actors.
  • Direct and indirect sources of friendly parties' leverage on foreign players and issues.
  • Tactical alternatives for advancing stated national policy goals.

Reality checking is not to be underestimated. In World War II, the Allies launched an air offensive against a target system that they did not really understand: the V-1 cruise missile. Their rationale for attacking ("if the enemy apparently valued it, then it must be worth attacking") may have made sense when aircraft and pilots were plentiful, but it might not apply to current situations, at least not until analysts rule out the possibility that the target system is a decoy. If the threat is real, it might be warranted to defer attack until a massive one can be delivered.

Agreement on content

The analytic process must be interactive with the customer to succeed. For example, the first IMINT of Soviet missiles during the Cuban Missile Crisis was verified and quickly taken to the President and Secretary of Defense. The highest level of authority immediately requested more detail, but also wanted a perspective on the Soviet strategy, which was not available from photography.

Photographs convey information, not necessarily intentions

As the White House requested more CIA and Navy support for photography, it simultaneously searched for HUMINT and SIGINT from Cuba, as well as diplomatic HUMINT. Until John F. Kennedy was briefed by excellent briefers, such as Dino Brugioni, he probably did not understand the capabilities of IMINT.

Frequently, the intelligence service will organize the production process and its output to mirror the customer organization. Government production by the single-source intelligence agencies is largely organized geographically or topically, to meet the needs of all-source country, region, or topic analysts in the finished-intelligence producing agencies.

In terms of intended use by the customer, both business and government producers may generate intelligence to be applied in the current, estimative, operational, research, science and technology, or warning context. Serendipity plays a role here, because the collected and analyzed information may meet any or all of these criteria.

A good example is warning intelligence. Military and political analysts are always watching for predefined indications that an emergency, such as the outbreak of war or a political coup, is imminent. When an indicator is observed, policymakers are alerted, and a crisis team is often convened with the mission of providing time-sensitive intelligence on the situation to all relevant customers.

Orienting oneself to the consumers

Experienced analysts recommend seeing oneself as a specialist on a team, with 5–10 key players. Learn something about each of them, both in terms of how they express themselves, and how you can reinforce their strengths and support their weaknesses. The analyst must constantly ask himself, "what do they want/need to know? How do they prefer to have it presented? Are they still trying to select the best course of action, or have they committed and now need to know the obstacles and vulnerabilities on their chosen path?"

Others on the team may know how to handle the likely challenges. The analyst's contribution is in recognizing the unlikely, or in providing connections that are not obvious. Consumers must get information in a timely manner, not after they have committed to a decision they might not have made had even rough information been available sooner.

Sometimes, when the producer is struggling with how to meet the needs of both internal and external customers, the solution is to create two different types of products, one for each type of customer. An internal product might contain detail of sources, collection methods, and analytic techniques, while an external product is more like journalism. Remember that journalists always address:

  1. Who
  2. What
  3. When
  4. Where
  5. Why

"How" is often relevant to journalists, but, in intelligence, may wander into that delicate area of sources and methods, appropriate only for internal audiences. The external consumer needs to know more of potential actions. Actions exist in three phases:

  1. The decision to act
  2. The action
  3. Disengagement from the action

Internal products contain details about the sources and methods used to generate the intelligence, while external products emphasize actionable target information. Similarly, the producer adjusts the product content and tone to the customer's level of expertise.

Orienting oneself to peers

Even in professional sports, where there are strict anti-fraternization rules on the playing field, players often have deep friendships with counterparts on opposing teams. They might have been on a college team together, or are simply aware that the team they oppose today might be the team to which they might be traded tomorrow. If a technique is personal, rather than a proprietary idea of a coach, one professional might be quite willing to show a nominal opponent how he does some maneuver. Watanabe observed

If you are examining a problem and there is no intelligence available, or the available intelligence is insufficient, be aggressive in pursuing collection and in energizing collectors. ... As an analyst, you have the advantage of knowing both what the consumer needs to know (sometimes better than the consumer knows himself) and which collectors can obtain the needed intelligence.

Aggressively pursue collection of information you need. In the Intelligence Community, we have the unique ability to bring substantial collection resources to bear in order to collect information on important issues. An analyst needs to understand the general capabilities and limitations of collection systems...If the analyst is in a technical discipline, the analyst might have an insight about a collection system that the operators have not considered ... If you are not frequently tasking collectors and giving them feedback on their reporting, you are failing to do an important part of your job.

Peers, both consumer and analyst, also have a psychological context. Johnston suggests the three major components of that context are:

  1. socialization
  2. degree of risk taking or risk aversion
  3. organizational-historical context

Devlin observes that while traditional logical work does not consider socialization, work on extending logic into the real world of intelligence requires it. "The first thing to note, and this is crucial, is that the process by which an agent attaches meaning to a symbol always takes place in a context, indeed generally several contexts, and is always dependent on those contexts. An analytic study of the way that people interpret symbols comes down to an investigation of the mechanism captured by the diagram:

[agent] + [symbol] + [context] + ... + [context] → [interpretation]
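Devlin's diagram lends itself to a tiny illustration. This is a minimal sketch, with invented symbols, contexts, and meanings (none of it from Devlin's own work), of how the same symbol can resolve to different interpretations depending on the stack of contexts an agent brings to it:

```python
# Minimal illustration of Devlin's diagram:
#   [agent] + [symbol] + [context] + ... + [context] -> [interpretation]
# The symbol, contexts, and meanings below are invented examples.

def interpret(symbol, contexts):
    """Resolve a symbol by consulting contexts from most to least specific."""
    meanings = {
        ("exercise", "military"): "scheduled troop maneuvers",
        ("exercise", "medical"): "physical activity",
    }
    for ctx in contexts:  # earlier contexts take precedence
        if (symbol, ctx) in meanings:
            return meanings[(symbol, ctx)]
    return "uninterpreted"

print(interpret("exercise", ["military", "medical"]))  # -> scheduled troop maneuvers
print(interpret("exercise", ["medical"]))              # -> physical activity
```

The point of the sketch is only that interpretation is a function of the whole context stack, not of the symbol alone.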

Things that are true about contexts include:

  1. Contexts are pervasive
  2. Contexts are primary
  3. Contexts perpetuate
  4. Contexts proliferate
  5. Contexts are potentially pernicious

The discipline of critical discourse analysis can help organize the context. Michael Crichton, giving examples of physicians communicating with other physicians, points out that laymen have trouble following such discourses not only because specialized vocabulary is in use, but because the discourse takes place in an extremely high context. One physician may ask a question about some diagnostic test, and the other will respond with a result from an apparently unrelated test. The shared context was that the first test looked for evidence of a specific disease, while the answer cited a test result that ruled out the disease. The disease itself was never named but, in the trained context, was perfectly obvious to the participants in the discourse.

Intelligence analysis is also extremely high context. Whether the subject is political behavior or weapons capabilities, the analysts and consumers share a great deal of context. Intelligence consumers express great frustration with generic papers that waste their time by giving them context they already have internalized.

Organizing what you have

Collection processes provide analysts with assorted kinds of information, some important and some irrelevant, some true and some false (with many shades in between), and some requiring further preprocessing before they can be used in analysis. Raw information reports use a standard code for the presumed reliability of the source and of the information. The U.S. Intelligence Community uses formal definitions for the kinds of information it handles:

  • Fact: Verified information; something known to exist or to have happened. Example: a confirmed inventory of a resource of one's own service.
  • Information: The content of reports, research, and analytic reflection on an intelligence issue that helps analysts and their consumers evaluate the likelihood that something is factual and thereby reduces uncertainty.
  • Direct information: Information relating to an intelligence issue under scrutiny, the details of which can, as a rule, be considered factual because of the nature of the source, the source's direct access to the information, and the concrete and readily verifiable character of the contents. Examples: COMINT or OSINT quoting what a foreign official said; IMINT providing a count of the number of ships at a pier; HUMINT from a US diplomatic officer who directly observed an event.
  • Indirect information: Information relating to an intelligence issue, the details of which may or may not be factual, the doubt reflecting some combination of the source's questionable reliability, the source's lack of direct access, and the complex character of the contents. Examples: HUMINT from a reliable agent, citing secondhand what an informant said a government official said; OSINT providing a foreign government document that gives the number of ships at a pier; indirect OSINT from a US embassy officer; COMINT containing a foreign official's report to his government about something he cannot confirm but states with some probability.
  • Direct data: Organized information that provides context for evaluating the likelihood that a matter under scrutiny is factual. Example: a chronology of events based on observations by US officers.
  • Indirect data: Organized information that provides context for evaluating the likelihood that a matter under scrutiny is factual. Example: a chronology based on reports from a liaison intelligence service.
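The "standard code for the presumed reliability of the source and of the information" mentioned above is typically a two-part rating in the style of the NATO/Admiralty system, which grades source reliability A–F and information credibility 1–6. A minimal sketch (the combined rating shown is arbitrary):

```python
# Sketch of a two-part reliability rating in the style of the
# NATO/Admiralty system: source reliability A-F, information credibility 1-6.

SOURCE = {"A": "reliable", "B": "usually reliable", "C": "fairly reliable",
          "D": "not usually reliable", "E": "unreliable", "F": "cannot be judged"}
INFO = {1: "confirmed", 2: "probably true", 3: "possibly true",
        4: "doubtful", 5: "improbable", 6: "cannot be judged"}

def describe(rating):
    """Expand a combined rating such as 'B2' into its two components."""
    src, cred = rating[0], int(rating[1])
    return f"{SOURCE[src]} source; information {INFO[cred]}"

print(describe("B2"))  # -> usually reliable source; information probably true
```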

Collation describes the process of organizing raw data, interpolating known data, evaluating the value of data, and putting forward working hypotheses. The simplest approaches often are an excellent start. With due regard for protecting documents and information, a great deal can be done with pieces of paper, a whiteboard, a table, and perhaps a corkboard. Maps often are vital adjuncts, especially maps that can be written upon.

There are automated equivalents of all of these functions, and each analyst will strike a personal balance between manual and machine-assisted methods. Unquestionably, when quantitative methods such as modeling and simulation are appropriate, the analyst will want computer assistance, and possibly consultation from experts in methodology. When combining maps and imagery, especially different kinds of imagery, a geographic information system is usually needed to normalize coordinate systems, scale, and magnification, and to suppress certain details while adding others.

Outlining, possibly in a word processing program, or using visualization tools such as mind maps, can give structure, as can file folders and index cards. Databases, with statistical techniques such as correlation, factor analysis, and time-series analysis, can give insight.

Mind-map showing a wide range of nonhierarchical relationships

Some analysts speak of a Zen-like state in which they allow the data to "speak" to them. Others may meditate, or even seek insight in dreams, hoping for an insight such as that given to August Kekulé in a daydream that resolved one of the fundamental structural problems of organic chemistry.

Krizan took her criteria from an earlier source: regardless of its form or setting, an effective collation method will have the following attributes:

  1. Be impersonal. It should not depend on the memory of one analyst; another person knowledgeable in the subject should be able to carry out the operation.
  2. Not become the "master" of the analyst or an end in itself.
  3. Be free of bias in integrating the information.
  4. Be receptive to new data without extensive alteration of the collating criterion.

Semantic maps are related to mind maps, but are more amenable to computer discovery of relationships.

Semantic network; compare formalism to mind map

The more interactive the relationship between producer and consumer becomes, the more important certain tools will be:

  • Collaboration tools. These include all media: voice, video, instant messaging, electronic whiteboards, and shared document markup
  • Databases. Not only will these need to be interoperable, they need to reflect different models, when appropriate, such as the semantic web. There may no longer be a clear line between databases and web applications.
  • Analytic tools. These will cover a wide range of pattern recognition and knowledge organization.

The nature of analysis

An analysis should open with a summary of the key characteristics of the topic, followed by the key variables and choices. Increasingly deep analysis can explain the internal dynamics of the matter being studied and eventually support prediction, known as estimation.

The purpose of intelligence analysis is to reveal to a specific decision maker the underlying significance of selected target information. Analysts should begin with confirmed facts, apply expert knowledge to produce plausible but less certain findings, and even forecast, when the forecast is appropriately qualified. Analysts should not, however, engage in fortunetelling that has no basis in fact.

Food chain in intelligence analysis: the bigger the "fish", the more unlikely it is

The mnemonic "Four Fs Minus One" (facts, findings, and forecasts, but not fortunetelling) may serve as a reminder of how to apply this criterion. Whenever the intelligence information allows, and the customer's validated needs demand it, the intelligence analyst will extend the thought process as far along the food chain as possible: to the third "F", but not beyond to the fourth.

Types of reasoning

Objectivity is the intelligence analyst's primary asset in creating intelligence that meets the Four Fs Minus One criterion. To produce intelligence objectively, the analyst must employ a process tailored to the nature of the problem. Four basic types of reasoning apply to intelligence analysis: induction, deduction, abduction and the scientific method.

Induction: seeking causality

The induction process is one of discovering relationships among the phenomena under study. It may come from human pattern recognition ability, looking at a seemingly random set of events, perhaps writing them on cards and shuffling them until a pattern emerges.

An analyst might notice that when Country X's command post with call sign ABC sends out a message on frequency 1 between Thursday and Saturday, an air unit moves to a training range within one week. The acknowledgement takes one day, so the analyst should recommend intensified COMINT monitoring of the appropriate frequencies between Friday and Sunday. Another kind of causality could come from interviews, in which soldiers might describe the things that warn them of an impending attack, or how the ground might look when an improvised explosive device has been emplaced.

While induction, for human beings, is usually not at a fully rational level, do not discount the potential role of software that uses statistical or logical techniques for finding patterns. Induction is subtly different from intuition: there usually is a pattern that induction recognizes, and this pattern may be applicable to other situations.
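The call-sign pattern described above can also be hunted for mechanically. Here is a hedged sketch (all events invented) that measures how often a candidate trigger is followed by a follow-on event within a window:

```python
# Sketch of simple inductive pattern-finding: how often does trigger event A
# precede follow-on event B within a given window? All dates are invented.
from datetime import date, timedelta

msgs  = [date(2024, 1, 4), date(2024, 2, 1), date(2024, 3, 7)]   # trigger events
moves = [date(2024, 1, 9), date(2024, 2, 6), date(2024, 3, 20)]  # follow-on events

def support(triggers, follow_ons, window_days=7):
    """Fraction of triggers followed by a follow-on within the window."""
    window = timedelta(days=window_days)
    hits = sum(1 for t in triggers
               if any(t < f <= t + window for f in follow_ons))
    return hits / len(triggers)

print(support(msgs, moves))  # here, 2 of 3 triggers are confirmed within 7 days
```

A rule whose support holds up across many observations is a candidate pattern; whether it is doctrine or one commander's habit is the deductive question taken up next.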

Deduction: applying the general

Deduction is the classic process of reasoning from the general to the specific, a process made memorable by Sherlock Holmes: "How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth?" Deduction can be used to validate a hypothesis by working from premises to conclusion.

The pattern of air maneuvers described above may be a general pattern, or it may be purely General X's personal command style. Analysts need to look at variables, such as personalities, to learn whether a pattern is truly general doctrine, or simply idiosyncratic.

Not all intelligence officers regard this as a desirable approach. At his confirmation hearing for CIA Director, Gen. Michael V. Hayden said he believes that intelligence analysis should be done by "induction", under which "all the data" are gathered and general conclusions determined, rather than by "deduction", under which you have a conclusion and seek out the data that support it.

Trained intuition

Analysts need to harness trained intuition: the recognition that one has come to a spontaneous insight. The steps leading there may not be apparent, although it is well to validate the intuition with the facts and tools that are available.

Polish cryptanalysts were first reading German Enigma ciphers in 1932, although the commercial version may have been broken by the British cryptanalyst Dillwyn Knox in the 1920s. Poland gave critical information to the French and British in 1939, and British cryptanalysis was in routine production by 1940. The Enigma, with German military enhancements, was quite powerful for a mechanical encryption device, and it might not have been broken as easily had the Germans been more careful about operating procedures. Throughout the war, Germany introduced enhancements, but never realized the British were reading the traffic almost as fast as the Germans themselves.

Ultimately, no code is unbreakable, including Enigma's, if security is compromised

US cryptanalysts had broken several Japanese diplomatic ciphers, but, without ever seeing the PURPLE machine until after the war, they deduced the logic. Purple was actually mechanically simpler than Enigma, but the U.S. Army team struggled with a mechanical reproduction until Leo Rosen had the unexplained insight that the critical building block in the Purple machine was a telephone-type stepping switch rather than the rotor used in Enigma and in more advanced U.S. and UK machines. Rosen, Frank Rowlett, and others of the team recognized Rosen's insight as based on nothing but a communication engineer's intuition.

Experienced analysts, and sometimes less experienced ones, will have an intuition about some improbable event in a target country, will collect more data, and perhaps will send out collection requests within their authority. These intuitions pay off just often enough that wise managers of analysts, unless the situation is absolutely critical, allow them a certain amount of freedom to explore.

Scientific method

Astronomers and nuclear physicists, at opposite ends of the continuum from macroscopic to microscopic, share a method: they infer behavior consistent with a hypothesis not by measuring phenomena to which they have no direct access, but by measuring phenomena that can be measured and that the hypothesis suggests will be affected by the mechanism of interest. Other scientists may be able to set up direct experiments, as in chemistry or biology. If the experimental results match the expected outcome, the hypothesis is validated; if not, the analyst must develop a new hypothesis and appropriate experimental methods.

In intelligence analysis, the analyst rarely has direct access to the observable subject, but gathers information indirectly. Even when the intelligence subject at hand is a technical one, analysts must remain aware that the other side may be presenting deliberately deceptive information.

From these gathered data, the analyst may proceed with the scientific method by generating tentative explanations for a subject event or phenomenon. Next, each hypothesis is examined for plausibility and compared against newly acquired information, in a continual process toward reaching a conclusion. Often the intelligence analyst tests several hypotheses at the same time, whereas the scientist usually focuses on one at a time. Furthermore, intelligence analysts cannot usually experiment directly upon the subject matter as in science, but must generate fictional scenarios and rigorously test them through methods of analysis suggested below.

Methods of analysis

As opposed to types of reasoning, which are ways the analyst drafts the product, the following methods are ways of validating the analyst's results of reasoning. Structured analytic techniques are used to help challenge judgments, identify mental mindsets, overcome biases, stimulate creativity, and manage uncertainty. Examples include the key assumptions check, analysis of competing hypotheses, Devil's advocacy, Red Team Analysis, and Alternative Futures/Scenarios analysis, among others.

Opportunity analysis

Opportunity analysis identifies for policy officials opportunities or vulnerabilities that the customer's organization can exploit to advance a policy, as well as dangers that could undermine a policy. Lawyers apply the test cui bono (who benefits?) in a rather similar way.

To make the best use of opportunity analysis, there needs to be a set of objectives for one's own country, preferably with some flexibility in them. The next step is to examine personalities and groups in the target country to see if there are any with a commonality of interest. Even though the different sides might want the same thing, it is entirely possible that one or the other has deal-breaking conditions. If that is the case, ways to smooth the conflict need to be identified, or no more work should be spent on that alternative.

Conversely, if there are elements that would be utterly opposed to the objectives of one's side, ways of neutralizing those elements need to be explored. They may have vulnerabilities that could render them impotent, or there may be a reward, not a shared opportunity, that would make them cooperate.

Linchpin analysis

Linchpin analysis proceeds from information that is certain, or has a high probability of being certain. In mathematics and physics, a similar problem formulation, which constrains the solution by certain known or impossible conditions, is the boundary value problem.

By starting from knowns (and impossibilities), the analyst has a powerful technique for showing consumers, peers, and managers that a problem has been both thoroughly studied and constrained to reality. Linchpin analysis was introduced to the CIA by Doug MacEachin, deputy director for Intelligence (1993–1996), as one of the "muscular" terms he pressed as an alternative to academic language, which was unpopular with many analysts. He substituted "linchpins" for the hypotheses driving key variables, and required that these linchpins be made explicit, so that policymakers could be aware of coverage and of changes in assumptions.

This method is an "anchoring tool" that seeks to reduce the hazard of self-inflicted intelligence error as well as policymaker misinterpretation. It forces use of the checkpoints listed below, to be used when drafting reports:

  1. Identify the main uncertain factors or key variables judged likely to drive the outcome of the issue, forcing systematic attention to the range of and relationships among factors at play.
  2. Determine the linchpin premises or working assumptions about the drivers. This encourages testing of the key subordinate judgments that hold the estimative conclusion together.
  3. Marshal findings and reasoning in defense of the linchpins, as the premises that warrant the conclusion are subject to debate as well as error.
  4. Address the circumstances under which unexpected developments could occur. What indicators or patterns of development could emerge to signal that the linchpins were unreliable? And what triggers or dramatic internal and external events could reverse the expected momentum?

Analysis of competing hypotheses

Dick Heuer spent years in the CIA Directorate of Operations (DO) as well as the DI, and worked on methodology of analysis both in his later years and after retirement. Some of his key conclusions, coming from both experience and an academic background in philosophy, include:

  1. The mind is poorly "wired" to deal effectively with both inherent uncertainty (the natural fog surrounding complex, indeterminate intelligence issues) and induced uncertainty (the man-made fog fabricated by denial and deception operations).
  2. Even increased awareness of cognitive and other "unmotivated" biases, such as the tendency to see information confirming an already-held judgment more vividly than one sees "disconfirming" information, does little by itself to help analysts deal effectively with uncertainty.
  3. Tools and techniques that gear the analyst's mind to apply higher levels of critical thinking can substantially improve analysis on complex issues on which information is incomplete, ambiguous, and often deliberately distorted. Key examples of such intellectual devices include techniques for structuring information, challenging assumptions, and exploring alternative interpretations.
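The structuring techniques Heuer advocates can be made concrete with a miniature analysis-of-competing-hypotheses matrix. Following ACH practice, hypotheses are ranked by how little evidence is inconsistent with them, not by how much is consistent. The hypotheses and scores below are invented:

```python
# Miniature ACH-style matrix: each piece of evidence is scored against each
# hypothesis (+1 consistent, 0 neutral, -1 inconsistent). Hypotheses are then
# ranked by inconsistency count. All hypotheses and scores are invented.

evidence_scores = {
    "H1: routine exercise":    [+1, -1, -1, 0],
    "H2: preparation to move": [+1, +1, 0, +1],
    "H3: deception":           [0, -1, +1, -1],
}

def rank(matrix):
    """ACH ranks hypotheses by least inconsistency, not most consistency."""
    inconsistency = {h: sum(1 for s in scores if s < 0)
                     for h, scores in matrix.items()}
    return sorted(inconsistency, key=inconsistency.get)

print(rank(evidence_scores)[0])  # H2 survives best: nothing contradicts it
```

Even this toy version forces the analyst to record which evidence argues against each hypothesis, which is precisely the discipline Heuer argues the unaided mind lacks.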

In 1980, he wrote the article "Perception: Why Can't We See What Is There to Be Seen?", which suggests to Davis that Heuer's ideas were compatible with linchpin analysis. Given the difficulties inherent in the human processing of complex information, a prudent management system should:

  1. Encourage products that (a) clearly delineate their assumptions and chains of inference and (b) specify the degree and source of the uncertainty involved in the conclusions.
  2. Emphasize procedures that expose and elaborate alternative points of view—analytic debates, devil's advocates, interdisciplinary brainstorming, competitive analysis, intra-office peer review of production, and elicitation of outside expertise.

According to Heuer, analysts construct a reality based on objective information, filtered through complex mental processes that determine which information is attended to, how it is organized, and the meaning attributed to it. What people perceive, how readily they perceive it, and how they process this information after receiving it are all strongly influenced by past experience, education, cultural values, role requirements, and organizational norms, as well as by the specifics of the information received. To understand how an analysis was produced, one must understand the mental models used to create it, and keep those models in mind when evaluating it. Analysts need to be comfortable with challenge, refinement, and renewed challenge. To return to linchpin analysis, the boundary conditions give places to challenge and test, reducing ambiguity.

More challenge, according to Heuer, is more important than more information. He wanted better analysis applied to less information, rather than the reverse. Given the immense volumes of information that modern collection systems produce, the mind is the limiting factor. Mirror-imaging is one of Heuer's favorite examples of a cognitive trap, in which the analyst substitutes his own mindset for that of the target. "To see the options faced by foreign leaders as these leaders see them", according to Heuer, "one must understand [the foreign leaders'] values and assumptions and even their misperceptions and misunderstandings. ... Too frequently, foreign behavior appears 'irrational' or 'not in their own best interest.'" Projecting American values creates models that are inappropriate for the foreign leader.

A significant problem during the Vietnam War was that Secretary of Defense Robert S. McNamara, an expert in statistical decision-making, assumed that Ho Chi Minh, Võ Nguyên Giáp, and other North Vietnamese officials would approach decision-making as he did. For example, in McNamara's thinking, if the United States did not attack SA-2 anti-aircraft missile sites, the enemy would interpret that as "restraint" and not use the missiles against U.S. aircraft. The North Vietnamese leadership, not privy to McNamara's thinking, were unaware of the "signaling" and did their best to shoot down U.S. aircraft with those missiles.

Heuer's answer was to make the challenge of Analysis of Competing Hypotheses (ACH) the core of analysis. In ACH, competing hypotheses about the foreign leader's assumptions are tested against one another, which reduces mirror-imaging even if it does not produce the precise answer. The best use of information, in this context, is to challenge the assumption the analyst likes best.

One of the key motivations for ACH, according to Heuer, is to avoid rejecting deception out of hand, because the situation looks straightforward. Heuer observed that good deception looks real. "Rejecting a plausible but unproven hypothesis too early tends to bias the subsequent analysis, because one does not then look for the evidence that might support it. The possibility of deception should not be rejected until it is disproved or, at least, until a systematic search for evidence has been made and none has been found."

The steps in ACH are:

  1. Identify the possible hypotheses to be considered. Use a group of analysts with different perspectives to brainstorm the possibilities.
  2. Make a list of significant evidence and arguments for and against each hypothesis.
  3. Prepare a matrix with hypotheses across the top and evidence down the side. Analyze the "diagnosticity" of the evidence and arguments—that is, identify which items are most helpful in judging the relative likelihood of the hypotheses.
  4. Refine the matrix. Reconsider the hypotheses and delete evidence and arguments that have no diagnostic value.
  5. Draw tentative conclusions about the relative likelihood of each hypothesis. Proceed by trying to disprove the hypotheses rather than prove them.
  6. Analyze how sensitive your conclusion is to a few critical items of evidence. Consider the consequences for your analysis if that evidence were wrong, misleading, or subject to a different interpretation.
  7. Report conclusions. Discuss the relative likelihood of all the hypotheses, not just the most likely one.
  8. Identify milestones for future observation that may indicate events are taking a different course than expected.
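The ACH steps above can be sketched as a small matrix computation. The following is a minimal illustration, not Heuer's own notation: the hypotheses, evidence items, and consistency ratings are invented placeholders (borrowing the competitor-pricing question used later in this article), and the scoring simply counts inconsistent evidence per hypothesis, reflecting step 5's emphasis on disproving rather than proving.

```python
# Minimal sketch of an ACH matrix (hypotheses across the top, evidence
# down the side). All hypotheses, evidence, and ratings are illustrative.
# Ratings: "C" = consistent, "I" = inconsistent, "N" = neutral/not applicable.
ratings = {
    "E1: competitor cut production costs":   {"H1: lower prices": "C", "H2: hold prices": "I", "H3: raise prices": "I"},
    "E2: competitor lost market share":      {"H1: lower prices": "C", "H2: hold prices": "N", "H3: raise prices": "I"},
    "E3: competitor announced premium line": {"H1: lower prices": "N", "H2: hold prices": "C", "H3: raise prices": "I"},
}

def is_diagnostic(evidence_ratings):
    """Step 3-4: evidence that rates every hypothesis identically cannot
    help discriminate among them and should be dropped from the matrix."""
    return len(set(evidence_ratings.values())) > 1

def inconsistency_score(matrix, hypothesis):
    """Step 5: rank hypotheses by how much evidence contradicts them;
    the hypothesis with the fewest inconsistencies is the most viable."""
    return sum(1 for ev in matrix.values() if ev[hypothesis] == "I")

# Refine the matrix, then score each hypothesis.
diagnostic = {e: r for e, r in ratings.items() if is_diagnostic(r)}
hypotheses = ["H1: lower prices", "H2: hold prices", "H3: raise prices"]
scores = {h: inconsistency_score(diagnostic, h) for h in hypotheses}

for h, s in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{h}: {s} inconsistent item(s)")
```

Note that the output ranks hypotheses by surviving evidence against them, not by supporting evidence; per step 6, the analyst would then ask how the ranking shifts if any single rating were wrong.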

Keith Devlin has been researching the use of mathematics and formal logic in implementing Heuer's ACH paradigm.

Analogy

Analogy is common in technical analysis, but similar engineering characteristics do not necessarily mean that the other side has the same employment doctrine for an otherwise similar system. Sometimes an analogy is valid only for a time: the MiG-25 aircraft was designed as a Soviet counter to the perceived threat of the high-altitude, supersonic B-70 bomber. The Soviets could have canceled the MiG-25 program when the US changed doctrine to low-altitude penetration and canceled the B-70, but they continued building the MiG-25.

One of the Soviet variants was a high-speed, high-altitude reconnaissance aircraft (MiG-25RB), which, for a time, was thought comparable to the US SR-71 aircraft. Additional data points, however, showed that the analogy between the SR-71 and MiG-25RB was incomplete. HUMINT revealed that a single Mach 3.2 flight of the MiG wrecked the engines beyond hope of repair, and the cost of replacement was prohibitive. The SR-71, by contrast, could make repeated flights with the same engines. The dissimilarity in engine life was not only expensive, but meant that the MiG-25RB could operate only from bases with the capability to change engines.

Top speed MiG-25RB reconnaissance flights damaged its engines beyond repair

The United States had applied "reverse engineering" to the MiG, essentially asking "if we had an aircraft with such capabilities, what would we do with it?" In the fighter-interceptor role, however, the US gives the pilot considerable flexibility in tactics, whereas the Soviets had a doctrine of tight ground control. The aircraft was too inflexible for American fighter tactics, but made sense for the Soviets as an interceptor that could make one pass at a penetrating bomber, using an extremely powerful radar to burn through jamming for final targeting.

Many of these assumptions fell apart after Viktor Belenko flew his MiG-25 to the West, where TECHINT analysts could examine the aircraft, and doctrinal specialists could interview Belenko.

The analytic process

Analysts should follow a series of sequential steps:

Define the problem

Policy makers will have questions based on their intelligence requirements. Sometimes the questions are clear and can easily be addressed by the analyst. Sometimes, however, clarification is required because of vagueness, the multiple layers of bureaucracy between customer and analyst, or time constraints. Just as analysts must try to understand the thinking of the adversary, they must also know the thinking of their customers and allies.

Generate hypotheses

Once the problem is defined, the analyst is able to generate reasonable hypotheses based on the question. For example, a business may want to know whether a competitor will lower their prices in the next quarter. From this problem, two obvious hypotheses are:

  1. The competitor will lower prices or
  2. The competitor will not lower prices.

However, with a little brainstorming, additional hypotheses may become apparent. Perhaps the competitor will offer discounts to long-term customers, or perhaps they will even raise prices. At this point, no hypothesis should be discarded.

Determine information needs and gather information

In intelligence, collection usually refers to that step in the formal intelligence cycle. In many cases, the information needed by the analyst is either already available or is already being sought by collection assets (such as spies or imagery satellites). If not, the analyst may request collection on the subject or, if this is not possible, identify the information gap in the final product. The analyst will generally also research other sources of information, such as open sources (public records, press reporting), historical records, and various databases.

Evaluate sources

Information used for military, commercial, state, and other forms of intelligence analysis has often been obtained from individuals or organizations that are actively seeking to keep it secret, or that may provide misleading information. Adversaries do not want to be analyzed correctly by competitors. This withholding of information is known as counterintelligence, and it makes intelligence very different from similar fields of research, such as science and history, where information may be misleading, incomplete, or wrong, but where the subject of investigation rarely denies the researcher access. The analyst must therefore evaluate incoming information for reliability (has the source reported accurate information in the past?), credibility (does the source plausibly have access to the information claimed? Has the source lied in the past?), and possible denial and deception (even a credible and reliable source may have been fooled).
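One way to make the three evaluation checks concrete is to record them alongside each incoming report. The sketch below is purely illustrative: the field names, the 0.5 reliability threshold, and the notion of counting "deception indicators" are all assumptions invented for this example, not any agency's actual grading scheme.

```python
# Illustrative record of the three source-evaluation checks described above:
# reliability (track record), credibility (access and honesty), and
# possible denial and deception. All fields and thresholds are invented.
from dataclasses import dataclass

@dataclass
class SourceReport:
    source_id: str
    past_reports: int          # reports previously received from this source
    past_confirmed: int        # how many were later corroborated
    has_claimed_access: bool   # could the source plausibly know this?
    caught_lying_before: bool
    deception_indicators: int  # e.g. report fits adversary's preferred story

    def reliability(self):
        """Has the source reported accurate information in the past?"""
        if self.past_reports == 0:
            return None  # untested source: no track record to judge
        return self.past_confirmed / self.past_reports

    def flags(self):
        """Collect reasons for caution; a checklist, not a verdict."""
        warnings = []
        r = self.reliability()
        if r is None or r < 0.5:
            warnings.append("weak or unproven track record")
        if not self.has_claimed_access or self.caught_lying_before:
            warnings.append("credibility in doubt")
        if self.deception_indicators > 0:
            warnings.append("possible denial and deception")
        return warnings

report = SourceReport("S-17", past_reports=8, past_confirmed=7,
                      has_claimed_access=True, caught_lying_before=False,
                      deception_indicators=1)
print(report.flags())
```

The point of structuring the checks this way is that a source can score well on reliability and credibility and still carry a deception flag, matching the caveat above that even a good source may have been fooled.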

Evaluate (test) hypotheses

All hypotheses must be rigorously tested. Methods such as Analysis of Competing Hypotheses or link charts are key. It is essential to triage which may be valid, which fail readily, and which require more information to assess.

Be especially alert to cognitive and cultural biases both inside and outside the organization. Recent scholarship on the sociology of knowledge raises important caveats.

As Jones and Silberzahn documented in the 2013 volume Constructing Cassandra: Reframing Intelligence Failure at the CIA, 1947–2001, while hypotheses are essential to sorting "signals" from "noise" in raw intelligence data, the variety, types, and boundaries of the hypotheses an intelligence organization entertains are a function of the collective culture and identity of the intelligence producer. Often, these hypotheses are shaped not merely by the cognitive biases of individual analysts, but by complex social mechanisms both inside and outside the analytic unit. After many strategic surprises, "Cassandras" – analysts or outsiders who offered warnings but whose hypotheses were ignored or sidelined – are discovered. Careful analysts should therefore recognize the key role that their own and their organization's identity and culture play in accepting or rejecting hypotheses at each step of the analysis.

Production and packaging

Once hypotheses have been evaluated, the intelligence product must be created for the consumer. Three key features of the intelligence product are:

  • Timeliness. Timeliness includes not only the amount of time required to deliver the product, but also the usefulness of the product to the customer at a given moment.
  • Scope. Scope involves the level of detail or comprehensiveness of the material contained in the product.
  • Periodicity. Periodicity describes the schedule of product initiation and generation.

Government intelligence products are typically packaged as highly structured written and oral presentations, including electrical messages, hardcopy reports, and briefings. Many organizations also generate video intelligence products, especially in the form of live daily "newscasts", or canned documentary presentations.

Analysts should understand the relationship between their own organization and the consumer's. There may be times when, although the ultimate consumer and the originating analyst simply want to pass information, a manager in either chain of command insists on a polished format.

Peer review

Peer review is essential to assess and confirm accuracy. "Coordination with peers is necessary ... If you think you are right, and the coordinator disagrees, let the assessment reflect that difference of opinion and use a footnote, called a reclama inside the U.S. intelligence community, if necessary. But never water down your assessment to a lowest common denominator just to obtain coordination. When everyone agrees on an issue, something probably is wrong." As an example, following the collapse of the Soviet Union, there was an almost unanimous belief that large numbers of Russian ballistic missile specialists would flood into the Third World and aid missile programs in other states (the so-called brain drain). As it turned out, there was no mass departure of Russian missile specialists, but Russian expertise was supplied to other states in ways that had been ignored due to the overemphasis on the brain drain.

In large intelligence establishments, analysts have peers at other agencies. The practical amount of coordination, even inside one's own agency, will depend on the secure collaboration tools available (wikis, analyst webpages, email), the schedules and availability of the other analysts, any restrictions on dissemination of the material, and the analyst's ability to play nicely with others. Extremely specialized issues may have very few people who can meaningfully review them.

An intelligence community document, as opposed to a spot report from a single agency, is expected to be coordinated and reviewed. For example, in reports on the Iraqi WMD program, a field report that aluminum tubes were on order might have been received both at the geographic desk and at the Counterproliferation Center, and someone might have thought the tubes were for use in uranium separation centrifuges. It has been reported that some analysts thought they might be used for rocket casings, which apparently was the correct interpretation. The question needs to be asked: did the original analyst contact a technical specialist in separation centrifuges, perhaps at Department of Energy intelligence?

Such an analyst might have mentioned that while aluminum has been used, maraging steel is the material of choice for Zippe-type centrifuges. The alternative, the Helikon vortex separation process, has no moving parts and thus less demand on the tubes, but takes much more energy. If the Helikon had been under consideration, the consultation could have gone farther, perhaps to IMINT analysts familiar with power generation in the area or infrared MASINT specialists who could look for the thermal signature of power generation or the cascade itself. Both Zippe and Helikon techniques take a great deal of energy, and often have been placed near hydroelectric dam power plants so power will be nearby.

Customer feedback and production evaluation

The production phase of the intelligence process does not end with delivering the product to the customer. Rather, it continues in the same manner in which it began: with interaction between producer and customer. For the product to be useful, the analyst and policymaker need to hear feedback from one another, and they refine both analysis and requirements.

Feedback procedures between producers and customers include key questions: Is the product usable? Is it timely? Was it in fact used? Did the product meet expectations? If not, why not? What next? The answers lead to refined production, greater use of intelligence by decision makers, and further feedback sessions. Thus, the production of intelligence generates more requirements in an iterative process.

Never forget the end user

Effective intelligence analysis is ultimately tailored to the end user. William Donovan, the head of the World War II OSS, began to get FDR's ear because he gave vividly illustrated, well-organized briefings that would be common today but were unprecedented at the time. Today, there is a danger of becoming too entranced with the presentation and not enough with its subject. There is also a delicate balance between covering the subjects that interest high officials, and what they want to hear declared true about them, and what the analysts believe is essential.

Most consumers do not care how attractive a report looks or whether the format is correct. I have lost count of the number of times consumers have told me they do not care if an assessment has a CIA seal on it, if it is in the proper format, or even if it has draft stamped all over it; they just want the assessment in their hands as soon as possible, at least in time to help make a decision. Unfortunately, a number of mid-level managers get overly worried about form, and wise top-level intelligence officials make sure that does not happen.

At the same time, analysts must always be wary of mirroring the desires, attitudes, and views of intelligence consumers. They must raise awkward facts and ask probing questions, even if this makes the decision-maker's job harder.
