Protein design

From Wikipedia, the free encyclopedia

Protein design is the rational design of new protein molecules, either to create novel activity, behavior, or purpose, or to advance basic understanding of protein function. Proteins can be designed from scratch (de novo design) or by making calculated variants of a known protein structure and its sequence (termed protein redesign). Rational protein design approaches predict amino-acid sequences that will fold to specific structures. These predicted sequences can then be validated experimentally through methods such as peptide synthesis, site-directed mutagenesis, or artificial gene synthesis.

Rational protein design dates back to the mid-1970s. Recently, however, there have been numerous examples of successful rational design of water-soluble and even transmembrane peptides and proteins, in part due to a better understanding of the different factors contributing to protein structure stability and the development of better computational methods.

Overview and history

The goal in rational protein design is to predict amino acid sequences that will fold to a specific protein structure. Although the number of possible protein sequences is vast, growing exponentially with the size of the protein chain, only a subset of them will fold reliably and quickly to one native state. Protein design involves identifying novel sequences within this subset. The native state of a protein is the conformational free energy minimum for the chain. Thus, protein design is the search for sequences that have the chosen structure as a free energy minimum. In a sense, it is the reverse of protein structure prediction. In design, a tertiary structure is specified, and a sequence that will fold to it is identified. Hence, it is also termed inverse folding. Protein design is then an optimization problem: using some scoring criteria, an optimized sequence that will fold to the desired structure is chosen.

When the first proteins were rationally designed during the 1970s and 1980s, the sequence for these was optimized manually based on analyses of other known proteins, the sequence composition, amino acid charges, and the geometry of the desired structure. The first designed proteins are attributed to Bernd Gutte, who designed a reduced version of a known catalyst, bovine ribonuclease, and tertiary structures consisting of beta-sheets and alpha-helices, including a binder of DDT. Urry and colleagues later designed elastin-like fibrous peptides based on rules on sequence composition. Richardson and coworkers designed a 79-residue protein with no sequence homology to a known protein. In the 1990s, the advent of powerful computers, libraries of amino acid conformations, and force fields developed mainly for molecular dynamics simulations enabled the development of structure-based computational protein design tools. Following the development of these computational tools, great success has been achieved over the last 30 years in protein design. The first protein successfully designed completely de novo was achieved by Stephen Mayo and coworkers in 1997, and, shortly after, in 1999 Peter S. Kim and coworkers designed dimers, trimers, and tetramers of unnatural right-handed coiled coils. In 2003, David Baker's laboratory designed a full protein to a fold never seen before in nature. Later, in 2008, Baker's group computationally designed enzymes for two different reactions. In 2010, one of the most powerful broadly neutralizing antibodies was isolated from patient serum using a computationally designed protein probe. In 2024, Baker received one half of the Nobel Prize in Chemistry for his advancement of computational protein design, with the other half being shared by Demis Hassabis and John Jumper of DeepMind for protein structure prediction.
Due to these and other successes (e.g., see examples below), protein design has become one of the most important tools available for protein engineering. There is great hope that the design of new proteins, small and large, will have uses in biomedicine and bioengineering.

Underlying models of protein structure and function

Protein design programs use computer models of the molecular forces that drive proteins in their in vivo environments. To make the problem tractable, these forces are simplified by protein design models. Although protein design programs vary greatly, they all have to address four main modeling questions: what is the target structure of the design, what flexibility is allowed on the target structure, which sequences are included in the search, and which force field will be used to score sequences and structures.

Target structure

The Top7 protein was one of the first proteins designed for a fold that had never been seen before in nature

Protein function is heavily dependent on protein structure, and rational protein design uses this relationship to design function by designing proteins that have a target structure or fold. Thus, by definition, in rational protein design the target structure or ensemble of structures must be known beforehand. This contrasts with other forms of protein engineering, such as directed evolution, where a variety of methods are used to find proteins that achieve a specific function, and with protein structure prediction where the sequence is known, but the structure is unknown.

Most often, the target structure is based on a known structure of another protein. However, novel folds not seen in nature have been made increasingly possible. Peter S. Kim and coworkers designed trimers and tetramers of unnatural coiled coils, which had not been seen before in nature. The protein Top7, developed in David Baker's lab, was designed completely using protein design algorithms, to a completely novel fold. More recently, Baker and coworkers developed a series of principles to design ideal globular-protein structures based on protein folding funnels that bridge between secondary structure prediction and tertiary structures. These principles, which build on both protein structure prediction and protein design, were used to design five different novel protein topologies.

Sequence space

FSD-1 (shown in blue, PDB id: 1FSV) was the first de novo computational design of a full protein. The target fold was that of the zinc finger in residues 33–60 of the structure of protein Zif268 (shown in red, PDB id: 1ZAA). The designed sequence had very little sequence identity with any known protein sequence.

In rational protein design, proteins can be redesigned from the sequence and structure of a known protein, or completely from scratch in de novo protein design. In protein redesign, most of the residues in the sequence are maintained as their wild-type amino acid, while a few are allowed to mutate. In de novo design, the entire sequence is designed anew, based on no prior sequence.

Both de novo designs and protein redesigns can establish rules on the sequence space: the specific amino acids that are allowed at each mutable residue position. For example, the composition of the surface of the RSC3 probe to select HIV-broadly-neutralizing antibodies was restricted based on evolutionary data and charge balancing. Many of the earliest attempts at protein design were heavily based on empirical rules on the sequence space. Moreover, the design of fibrous proteins usually follows strict rules on the sequence space. Collagen-based designed proteins, for example, are often composed of Gly-Pro-X repeating patterns. The advent of computational techniques allows designing proteins with no human intervention in sequence selection.

Structural flexibility

Common protein design programs use rotamer libraries to simplify the conformational space of protein side chains. This animation loops through all the rotamers of the isoleucine amino acid based on the Penultimate Rotamer Library (total of 7 rotamers).

In protein design, the target structure (or structures) of the protein is known. However, a rational protein design approach must model some flexibility on the target structure in order to increase the number of sequences that can be designed for that structure and to minimize the chance of a sequence folding to a different structure. For example, when redesigning a position occupied by one small amino acid (such as alanine) in the tightly packed core of a protein, very few mutants would be predicted by a rational design approach to fold to the target structure if the surrounding side chains are not allowed to repack.

Thus, an essential parameter of any design process is the amount of flexibility allowed for both the side-chains and the backbone. In the simplest models, the protein backbone is kept rigid while some of the protein side-chains are allowed to change conformations. However, side-chains can have many degrees of freedom in their bond lengths, bond angles, and χ dihedral angles. To simplify this space, protein design methods use rotamer libraries that assume ideal values for bond lengths and bond angles, while restricting χ dihedral angles to a few frequently observed low-energy conformations termed rotamers.

Rotamer libraries are derived from the statistical analysis of many protein structures. Backbone-independent rotamer libraries describe all rotamers regardless of the backbone conformation around the side chain. Backbone-dependent rotamer libraries, in contrast, weight each rotamer by how likely it is to appear given the protein backbone arrangement around the side chain. Most protein design programs use one conformation (e.g., the modal value of the rotamer's dihedral angles) or several points in the region described by the rotamer; the OSPREY protein design program, in contrast, models the entire continuous region.

Although rational protein design must preserve the general backbone fold of a protein, allowing some backbone flexibility can significantly increase the number of sequences that fold to the structure while maintaining its general fold. Backbone flexibility is especially important in protein redesign because sequence mutations often result in small changes to the backbone structure. Moreover, backbone flexibility can be essential for more advanced applications of protein design, such as binding prediction and enzyme design. Some models of backbone flexibility in protein design include small and continuous global backbone movements, discrete backbone samples around the target fold, backrub motions, and protein loop flexibility.

Energy function

Comparison of various potential energy functions. The most accurate energy functions are those that use quantum mechanical calculations, but these are too slow for protein design. On the other extreme, heuristic energy functions are based on statistical terms and are very fast. In the middle are molecular mechanics energy functions that are physically based but are not as computationally expensive as quantum mechanical simulations.

Rational protein design techniques must be able to discriminate sequences that will be stable under the target fold from those that would prefer other low-energy competing states. Thus, protein design requires accurate energy functions that can rank and score sequences by how well they fold to the target structure. At the same time, however, these energy functions must consider the computational challenges behind protein design. One of the most challenging requirements for successful design is an energy function that is both accurate and simple for computational calculations.

The most accurate energy functions are those based on quantum mechanical simulations. However, such simulations are too slow and typically impractical for protein design. Instead, many protein design algorithms use either physics-based energy functions adapted from molecular mechanics simulation programs, knowledge-based energy functions, or a hybrid mix of both. The trend has been toward using more physics-based potential energy functions.

Physics-based energy functions, such as AMBER and CHARMM, are typically derived from quantum mechanical simulations and from experimental data from thermodynamics, crystallography, and spectroscopy. These energy functions typically simplify the physical energy function and make it pairwise decomposable, meaning that the total energy of a protein conformation can be calculated by adding the pairwise energy between each atom pair, which makes them attractive for optimization algorithms. Physics-based energy functions typically model an attractive-repulsive Lennard-Jones term between atoms and a pairwise Coulombic electrostatics term between non-bonded atoms.

Water-mediated hydrogen bonds play a key role in protein–protein binding. One such interaction is shown between residues D457, S365 in the heavy chain of the HIV-broadly-neutralizing antibody VRC01 (green) and residues N58 and Y59 in the HIV envelope protein GP120 (purple).

Statistical potentials, in contrast to physics-based potentials, have the advantage of being fast to compute, of implicitly accounting for complex effects, and of being less sensitive to small changes in the protein structure. These energy functions derive energy values from the frequency of appearance of structural features in a database of protein structures.

Protein design, however, has requirements that molecular mechanics force fields cannot always meet. Molecular mechanics force fields, which have been used mostly in molecular dynamics simulations, are optimized for the simulation of single sequences, but protein design searches through many conformations of many sequences. Thus, molecular mechanics force fields must be tailored for protein design. In practice, protein design energy functions often incorporate both statistical terms and physics-based terms. For example, the Rosetta energy function, one of the most-used energy functions, incorporates physics-based energy terms originating in the CHARMM energy function, and statistical energy terms, such as rotamer probability and knowledge-based electrostatics. Typically, energy functions are highly customized between laboratories, and specifically tailored for every design.

Challenges for effective design energy functions

Water makes up most of the molecules surrounding proteins and is the main driver of protein structure. Thus, modeling the interaction between water and protein is vital in protein design. The number of water molecules that interact with a protein at any given time is huge, and each one has a large number of degrees of freedom and interaction partners. To keep this tractable, protein design programs model most of these water molecules as a continuum, capturing both the hydrophobic effect and solvation polarization.

Individual water molecules can sometimes have a crucial structural role in the core of proteins, and in protein–protein or protein–ligand interactions. Failing to model such waters can result in mispredictions of the optimal sequence of a protein–protein interface. As an alternative, water molecules can be added to rotamers.


As an optimization problem

This animation illustrates the complexity of a protein design search, which typically compares all the rotamer conformations from all possible mutations at all residues. In this example, the residues Phe36 and His106 are allowed to mutate to, respectively, the amino acids Tyr and Asn. Phe and Tyr have 4 rotamers each in the rotamer library, while Asn and His have 7 and 8 rotamers, respectively (from Richardson's penultimate rotamer library). The animation loops through all (4 + 4) × (7 + 8) = 120 possibilities. The structure shown is that of myoglobin, PDB id: 1mbn.

The goal of protein design is to find a protein sequence that will fold to a target structure. A protein design algorithm must, thus, search all the conformations of each sequence, with respect to the target fold, and rank sequences according to the lowest-energy conformation of each one, as determined by the protein design energy function. Thus, a typical input to the protein design algorithm is the target fold, the sequence space, the structural flexibility, and the energy function, while the output is one or more sequences that are predicted to fold stably to the target structure.

The number of candidate protein sequences, however, grows exponentially with the number of protein residues; for example, there are 20^100 protein sequences of length 100. Furthermore, even if amino acid side-chain conformations are limited to a few rotamers (see Structural flexibility), this results in an exponential number of conformations for each sequence. Thus, for a 100-residue protein in which each amino acid has exactly 10 rotamers, a search algorithm that searches this space will have to search over 200^100 protein conformations.

The most common energy functions can be decomposed into pairwise terms between rotamers and amino acid types, which casts the problem as a combinatorial one, and powerful optimization algorithms can be used to solve it. In those cases, the total energy of each conformation belonging to each sequence can be formulated as a sum of individual and pairwise terms between residue positions. If a designer is interested only in the best sequence, the protein design algorithm only requires the lowest-energy conformation of the lowest-energy sequence. In these cases, the amino acid identity of each rotamer can be ignored and all rotamers belonging to different amino acids can be treated the same. Let ri be a rotamer at residue position i in the protein chain, and E(ri) the potential energy between the internal atoms of the rotamer. Let E(ri, rj) be the potential energy between ri and rotamer rj at residue position j. Then, we define the optimization problem as one of finding the conformation of minimum energy (ET):

    ET = Σi E(ri) + Σi Σj>i E(ri, rj)    (1)

The problem of minimizing ET is an NP-hard problem. Even though the class of problems is NP-hard, in practice many instances of protein design can be solved exactly or optimized satisfactorily through heuristic methods.
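
To make Equation (1) concrete, here is a minimal Python sketch, with entirely made-up rotamer names and energy values, that evaluates the pairwise-decomposable energy ET and finds the GMEC of a toy two-residue problem by exhaustive enumeration:

```python
from itertools import product

# Hypothetical singleton energies E(ri) for the rotamers at two residue positions.
E_single = {
    0: {"A1": -1.0, "A2": 0.5},   # rotamers available at residue 0
    1: {"B1": -0.5, "B2": -2.0},  # rotamers available at residue 1
}
# Hypothetical pairwise energies E(ri, rj) between rotamers at positions 0 and 1.
E_pair = {
    ("A1", "B1"): 0.2, ("A1", "B2"): 1.5,
    ("A2", "B1"): -0.3, ("A2", "B2"): -1.0,
}

def total_energy(assignment):
    """ET = sum of singleton terms plus the pairwise term (Equation (1))."""
    e = sum(E_single[i][r] for i, r in enumerate(assignment))
    return e + E_pair[(assignment[0], assignment[1])]

# Exhaustive search over all rotamer combinations yields the GMEC.
best = min(product(E_single[0], E_single[1]), key=total_energy)
print(best, total_energy(best))
```

On this toy instance the exhaustive search returns the assignment with the lowest total energy; real design problems are far too large for enumeration, which is why the specialized algorithms described in the next section exist.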

Algorithms

Several algorithms have been developed specifically for the protein design problem. These algorithms can be divided into two broad classes: exact algorithms, such as dead-end elimination, that lack runtime guarantees but guarantee the quality of the solution; and heuristic algorithms, such as Monte Carlo, that are faster than exact algorithms but have no guarantees on the optimality of the results. Exact algorithms guarantee that the optimization process produced the optimal solution according to the protein design model. Thus, if the predictions of exact algorithms fail when they are experimentally validated, then the source of error can be attributed to the energy function, the allowed flexibility, the sequence space, or the target structure (e.g., if it cannot be designed for).

Some protein design algorithms are listed below. Although these algorithms address only the most basic formulation of the protein design problem, Equation (1), many of the extensions that improve modeling, such as greater structural flexibility (e.g., protein backbone flexibility) or sophisticated energy terms, are built atop these algorithms, even though such extensions change the optimization goal. For example, Rosetta Design incorporates sophisticated energy terms and backbone flexibility using Monte Carlo as the underlying optimization algorithm. OSPREY's algorithms build on the dead-end elimination algorithm and A* to incorporate continuous backbone and side-chain movements. Thus, these algorithms provide a good perspective on the different kinds of algorithms available for protein design.

In 2020, scientists reported the development of an AI-based process using genome databases for the evolution-based design of novel proteins. They used deep learning to identify design rules.[24][25] In 2022, a study reported deep learning software that can design proteins that contain pre-specified functional sites.

With mathematical guarantees

Dead-end elimination

The dead-end elimination (DEE) algorithm reduces the search space of the problem iteratively by removing rotamers that can be provably shown not to be part of the global minimum energy conformation (GMEC). On each iteration, the dead-end elimination algorithm compares all possible pairs of rotamers at each residue position, and removes each rotamer r′i that can be shown to always be of higher energy than another rotamer ri and is thus not part of the GMEC:

    E(r′i) + Σj≠i min_rj E(r′i, rj)  >  E(ri) + Σj≠i max_rj E(ri, rj)

Other powerful extensions to the dead-end elimination algorithm include the pairs elimination criterion, and the generalized dead-end elimination criterion. This algorithm has also been extended to handle continuous rotamers with provable guarantees.

Although the dead-end elimination algorithm runs in polynomial time on each iteration, it cannot guarantee convergence. If, after a certain number of iterations, the dead-end elimination algorithm does not prune any more rotamers, then either rotamers have to be merged or another search algorithm must be used to search the remaining search space. In such cases, dead-end elimination acts as a pre-filtering algorithm to reduce the search space, while other algorithms, such as A*, Monte Carlo, linear programming, or FASTER, are used to search the remaining search space.
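
As a sketch of the simplest (singles) DEE criterion, with invented rotamer names and energies: a rotamer rp at position i can be pruned when some competitor rc at the same position satisfies E(rp) + Σj min_rj E(rp, rj) > E(rc) + Σj max_rj E(rc, rj), i.e., rp's best case is still worse than rc's worst case:

```python
# Toy two-position problem with hypothetical energies (not from a real force field).
E_single = {0: {"a": 0.0, "b": 5.0}, 1: {"x": 0.0, "y": 0.0}}
E_pair = {
    (0, "a", 1, "x"): -1.0, (0, "a", 1, "y"): 0.0,
    (0, "b", 1, "x"): 1.0,  (0, "b", 1, "y"): 2.0,
}

def pair(i, ri, j, rj):
    """Look up E(ri, rj), normalizing the key so position order is i < j."""
    if i > j:
        i, ri, j, rj = j, rj, i, ri
    return E_pair[(i, ri, j, rj)]

def dee_pruned(i, rp):
    """True if rotamer rp at position i is provably not part of the GMEC."""
    for rc in E_single[i]:
        if rc == rp:
            continue
        # Prune rp if even its best-case energy exceeds rc's worst-case energy.
        lhs = E_single[i][rp]
        rhs = E_single[i][rc]
        for j in E_single:
            if j == i:
                continue
            lhs += min(pair(i, rp, j, rj) for rj in E_single[j])
            rhs += max(pair(i, rc, j, rj) for rj in E_single[j])
        if lhs > rhs:
            return True
    return False

print([r for r in E_single[0] if dee_pruned(0, r)])
```

Here rotamer "b" at position 0 is pruned: even paired with its most favorable neighbor it cannot beat rotamer "a" paired with its least favorable one, so "b" cannot appear in the GMEC.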

Branch and bound

The protein design conformational space can be represented as a tree, where the protein residues are ordered in an arbitrary way, and the tree branches at each of the rotamers in a residue. Branch and bound algorithms use this representation to efficiently explore the conformation tree: At each branching, branch and bound algorithms bound the conformation space and explore only the promising branches.

A popular search algorithm for protein design is the A* search algorithm. For each partial tree path, A* computes a score that provably lower-bounds the energy of the best full conformation extending that path. Each partial conformation is added to a priority queue, and at each iteration the partial path with the lowest lower bound is popped from the queue and expanded. The algorithm stops once a full conformation has been enumerated and guarantees that this conformation is optimal.

The A* score f in protein design consists of two parts, f = g + h. g is the exact energy of the rotamers that have already been assigned in the partial conformation, and h is a lower bound on the energy of the rotamers that have not yet been assigned. Where d is the index of the last assigned residue in the partial conformation and n is the number of residues, each part is defined as:

    g = Σi≤d [ E(ri) + Σj<i E(ri, rj) ]

    h = Σi>d min_ri [ E(ri) + Σj≤d E(ri, rj) + Σd<j<i min_rj E(ri, rj) ]
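
The following Python sketch runs A* over a toy three-residue rotamer tree using an f = g + h bound of this form; all rotamer names and energies are invented, and the result is checked against brute-force enumeration:

```python
import heapq
from itertools import product

# Toy instance: three residue positions with two rotamers each (hypothetical energies).
rotamers = {0: ["a", "b"], 1: ["x", "y"], 2: ["p", "q"]}
E1 = {(0, "a"): 0.0, (0, "b"): 1.0, (1, "x"): 0.5,
      (1, "y"): 0.0, (2, "p"): 0.0, (2, "q"): 0.3}
E2 = {((0, "a"), (1, "x")): 0.4, ((0, "a"), (1, "y")): 1.0,
      ((0, "b"), (1, "x")): 0.0, ((0, "b"), (1, "y")): 0.2,
      ((0, "a"), (2, "p")): 0.1, ((0, "a"), (2, "q")): 0.0,
      ((0, "b"), (2, "p")): 0.6, ((0, "b"), (2, "q")): 0.0,
      ((1, "x"), (2, "p")): 0.0, ((1, "x"), (2, "q")): 0.7,
      ((1, "y"), (2, "p")): 0.2, ((1, "y"), (2, "q")): 0.0}

def pairE(i, ri, j, rj):
    key = ((i, ri), (j, rj)) if i < j else ((j, rj), (i, ri))
    return E2[key]

def g(partial):
    """Exact energy of the rotamers assigned so far."""
    e = sum(E1[(i, r)] for i, r in enumerate(partial))
    return e + sum(pairE(i, partial[i], j, partial[j])
                   for i in range(len(partial)) for j in range(i))

def h(partial):
    """Admissible lower bound on the energy of the unassigned residues."""
    d, e = len(partial), 0.0
    for i in range(d, len(rotamers)):
        e += min(E1[(i, ri)]
                 + sum(pairE(j, partial[j], i, ri) for j in range(d))
                 + sum(min(pairE(j2, rj2, i, ri) for rj2 in rotamers[j2])
                       for j2 in range(d, i))
                 for ri in rotamers[i])
    return e

def astar():
    """Expand the partial conformation with the lowest f = g + h; because h
    never overestimates, the first full conformation popped is the GMEC."""
    pq = [(h(()), ())]
    while pq:
        f, partial = heapq.heappop(pq)
        if len(partial) == len(rotamers):
            return f, partial
        for r in rotamers[len(partial)]:
            child = partial + (r,)
            heapq.heappush(pq, (g(child) + h(child), child))

best_e, best_conf = astar()
brute = min(g(c) for c in product(*rotamers.values()))
print(best_conf, best_e, brute)
```

Because the heuristic h is a true lower bound, A* can return the optimal conformation while typically expanding only a fraction of the tree that brute force would visit.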

Integer linear programming

The problem of optimizing ET (Equation (1)) can be easily formulated as an integer linear program (ILP). One of the most powerful formulations uses binary variables qi, indicating whether rotamer ri is chosen, and qij, indicating whether the edge between rotamers ri and rj is part of the final solution, and constrains the solution to have exactly one rotamer for each residue and one pairwise interaction for each pair of residues:

    min  Σ over rotamers ri:  E(ri) qi  +  Σ over rotamer pairs (ri, rj):  E(ri, rj) qij

s.t.

    Σ over rotamers ri at position p:  qi = 1        for every residue position p
    Σ over rotamers rj at position p′:  qij = qi     for every rotamer ri and every other position p′
    qi, qij ∈ {0, 1}

ILP solvers, such as CPLEX, can compute the exact optimal solution for large instances of protein design problems. These solvers use a linear programming relaxation of the problem, where qi and qij are allowed to take continuous values, in combination with a branch and cut algorithm to search only a small portion of the conformation space for the optimal solution. ILP solvers have been shown to solve many instances of the side-chain placement problem.

Message-passing based approximations to the linear programming dual

ILP solvers depend on linear programming (LP) algorithms, such as Simplex- or barrier-based methods, to perform the LP relaxation at each branch. These LP algorithms were developed as general-purpose optimization methods and are not optimized for the protein design problem (Equation (1)). In consequence, the LP relaxation becomes the bottleneck of ILP solvers when the problem size is large. Recently, several alternatives based on message-passing algorithms have been designed specifically for the optimization of the LP relaxation of the protein design problem. These algorithms can approximate either the dual or the primal instances of the integer program, but in order to maintain guarantees on optimality they are most useful when used to approximate the dual of the protein design problem, because approximating the dual guarantees that no solutions are missed. Message-passing-based approximations include the tree-reweighted max-product message-passing algorithm and the message-passing linear programming algorithm.

Optimization algorithms without guarantees

Monte Carlo and simulated annealing

Monte Carlo is one of the most widely used algorithms for protein design. In its simplest form, a Monte Carlo algorithm selects a residue at random, and in that residue a randomly chosen rotamer (of any amino acid) is evaluated. The new energy of the protein, Enew, is compared against the old energy, Eold, and the new rotamer is accepted with probability:

    P = min( 1, e^(−(Enew − Eold)/(βT)) )

where β is the Boltzmann constant and the temperature T can be chosen such that in the initial rounds it is high and it is slowly annealed to overcome local minima.
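
A minimal simulated-annealing sketch in Python, using invented rotamer energies and a fixed random seed, implements this Metropolis acceptance rule; for simplicity the Boltzmann factor βT is collapsed into a single effective temperature t:

```python
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

# Toy two-residue problem with hypothetical energies.
E_single = {0: {"A1": -1.0, "A2": 0.5}, 1: {"B1": -0.5, "B2": -2.0}}
E_pair = {("A1", "B1"): 0.2, ("A1", "B2"): 1.5,
          ("A2", "B1"): -0.3, ("A2", "B2"): -1.0}

def energy(conf):
    return E_single[0][conf[0]] + E_single[1][conf[1]] + E_pair[(conf[0], conf[1])]

def anneal(steps=2000, t_start=5.0, t_end=0.01):
    conf = ["A1", "B1"]
    e = energy(conf)
    best, best_e = list(conf), e
    for s in range(steps):
        t = t_start * (t_end / t_start) ** (s / steps)  # geometric cooling schedule
        i = random.randrange(2)                    # pick a residue at random
        new = list(conf)
        new[i] = random.choice(list(E_single[i]))  # propose a random rotamer there
        e_new = energy(new)
        # Metropolis criterion: always accept downhill moves; accept uphill
        # moves with probability exp(-(Enew - Eold)/t).
        if e_new <= e or random.random() < math.exp(-(e_new - e) / t):
            conf, e = new, e_new
            if e < best_e:
                best, best_e = list(conf), e
    return best, best_e

best, best_e = anneal()
print(best, best_e)
```

The high starting temperature lets the chain escape local minima early on, while the low final temperature makes late moves nearly greedy; on this tiny instance the sampler reliably visits the global minimum.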

FASTER

The FASTER algorithm uses a combination of deterministic and stochastic criteria to optimize amino acid sequences. FASTER first uses DEE to eliminate rotamers that are not part of the optimal solution. Then, a series of iterative steps optimize the rotamer assignment.

Belief propagation

In belief propagation for protein design, the algorithm exchanges messages that describe the belief that each residue has about the probability of each rotamer in neighboring residues. The algorithm updates messages on every iteration and iterates until convergence or until a fixed number of iterations; convergence is not guaranteed in protein design. The message mi→j(rj) that a residue i sends to every rotamer rj at the neighboring residue j is defined as:

    mi→j(rj) = max_ri [ e^(−E(ri) − E(ri, rj)) × Πk∈N(i)\j mk→i(ri) ]

Both max-product and sum-product belief propagation have been used to optimize protein design.
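
The message update can be illustrated with min-sum message passing (the max-product rule expressed in the negative-log, i.e., energy, domain) on a two-residue chain, where a single pass is exact; all rotamer names and energies below are invented:

```python
# Toy energies for a two-residue chain (hypothetical values).
E_single = {0: {"a": 0.0, "b": 1.0}, 1: {"x": 0.5, "y": 0.0}}
E_pair = {("a", "x"): 0.4, ("a", "y"): 1.0, ("b", "x"): 0.0, ("b", "y"): 0.2}

# Message from residue 1 to residue 0: for each rotamer r0, residue 1's best
# (lowest-energy) choice over its own rotamers r1.
msg_1_to_0 = {r0: min(E_single[1][r1] + E_pair[(r0, r1)] for r1 in E_single[1])
              for r0 in E_single[0]}

# The belief at residue 0 combines its local energy with the incoming message.
belief_0 = {r0: E_single[0][r0] + msg_1_to_0[r0] for r0 in E_single[0]}
best_r0 = min(belief_0, key=belief_0.get)

# Backtrack the minimizing rotamer at residue 1 given the choice at residue 0.
best_r1 = min(E_single[1], key=lambda r1: E_single[1][r1] + E_pair[(best_r0, r1)])
print(best_r0, best_r1, belief_0[best_r0])
```

On a chain (a tree), one such pass recovers the minimum-energy assignment exactly; on the loopy interaction graphs of real proteins, messages must be iterated and, as noted above, convergence is not guaranteed.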

Applications and examples of designed proteins

Enzyme design

The design of new enzymes is an application of protein design with enormous bioengineering and biomedical implications. In general, designing a protein structure is different from designing an enzyme, because the design of enzymes must consider the many states involved in the catalytic mechanism. However, protein design is a prerequisite of de novo enzyme design because, at the very least, the design of catalysts requires a scaffold into which the catalytic mechanism can be inserted.

Great progress in de novo enzyme design, and redesign, was made in the first decade of the 21st century. In three major studies, David Baker and coworkers de novo designed enzymes for the retro-aldol reaction, a Kemp-elimination reaction, and for the Diels-Alder reaction. Furthermore, Stephen Mayo and coworkers developed an iterative method to design the most efficient known enzyme for the Kemp-elimination reaction. Also, in the laboratory of Bruce Donald, computational protein design was used to switch the specificity of one of the protein domains of the nonribosomal peptide synthetase that produces Gramicidin S, from its natural substrate phenylalanine to other noncognate substrates including charged amino acids; the redesigned enzymes had activities close to those of the wild-type.

Semi-rational design

Semi-rational design is a purposeful modification method based on a certain understanding of the sequence, structure, and catalytic mechanism of enzymes. This method sits between irrational design and rational design: it uses known information and means to perform evolutionary modification on the specific functions of the target enzyme. The characteristic of semi-rational design is that it does not rely solely on random mutation and screening, but combines the concept of directed evolution. It creates a library of random mutants with diverse sequences through mutagenesis, error-prone PCR, DNA recombination, and site-saturation mutagenesis. At the same time, it uses the understanding of enzymes and design principles to purposefully screen out mutants with desired characteristics.

The methodology of semi-rational design emphasizes the in-depth understanding of enzymes and the control of the evolutionary process. It allows researchers to use known information to guide the evolutionary process, thereby improving efficiency and success rate. This method plays an important role in protein function modification because it can combine the advantages of irrational design and rational design, and can explore unknown space and use known knowledge for targeted modification.

Semi-rational design has a wide range of applications, including but not limited to enzyme optimization, modification of drug targets, evolution of biocatalysts, etc. Through this method, researchers can more effectively improve the functional properties of proteins to meet specific biotechnology or medical needs. Although this method has high requirements for information and technology and is relatively difficult to implement, with the development of computing technology and bioinformatics, the application prospects of semi-rational design in protein engineering are becoming more and more broad.

Design for affinity

Protein–protein interactions are involved in most biological processes. Many of the hardest-to-treat diseases, such as Alzheimer's disease, many forms of cancer (e.g., TP53), and human immunodeficiency virus (HIV) infection, involve protein–protein interactions. Thus, to treat such diseases, it is desirable to design protein or protein-like therapeutics that bind one of the partners of the interaction and thus disrupt the disease-causing interaction. This requires designing protein therapeutics for affinity toward their partner.

Protein–protein interactions can be designed using protein design algorithms because the principles that rule protein stability also rule protein–protein binding. Protein–protein interaction design, however, presents challenges not commonly present in protein design. One of the most important challenges is that, in general, the interfaces between proteins are more polar than protein cores, and binding involves a tradeoff between desolvation and hydrogen bond formation. To overcome this challenge, Bruce Tidor and coworkers developed a method to improve the affinity of antibodies by focusing on electrostatic contributions. They found that, for the antibodies designed in the study, reducing the desolvation costs of the residues in the interface increased the affinity of the binding pair.

Scoring binding predictions

Protein design energy functions must be adapted to score binding predictions because binding involves a trade-off between the lowest-energy conformations of the free proteins (EP and EL) and the lowest-energy conformation of the bound complex (EPL):

    ΔEbind = EPL − EP − EL

The K* algorithm approximates the binding constant of a sequence by including conformational entropy in the free energy calculation. The K* algorithm considers only the lowest-energy conformations of the free and bound complexes (denoted by the sets P, L, and PL) to approximate the partition function of each complex:

    K* = qPL / (qP × qL),  where  qX = Σc∈X e^(−Ec/RT)
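
A small Python sketch of this style of approximation, with invented conformational ensembles and energies in kcal/mol, computes the partition function of each ensemble as a Boltzmann-weighted sum and takes the ratio:

```python
import math

RT = 0.593  # kcal/mol at roughly 298 K

# Hypothetical low-energy ensembles for the free protein P, the free ligand L,
# and the bound complex PL (energies in kcal/mol).
E_P = [-1.0, -0.5]
E_L = [-0.3, -0.1]
E_PL = [-4.0, -3.2, -2.9]

def partition(energies):
    """Boltzmann-weighted sum over an ensemble's low-energy conformations."""
    return sum(math.exp(-e / RT) for e in energies)

# The binding constant is approximated as a ratio of partition functions.
K_star = partition(E_PL) / (partition(E_P) * partition(E_L))
print(K_star)
```

Summing over an ensemble rather than taking only each state's single lowest-energy conformation is what lets this score account for conformational entropy: a complex with many low-energy bound conformations scores better than one with a single deep minimum of the same depth.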

Design for specificity

The design of protein–protein interactions must be highly specific because proteins can interact with a large number of proteins; successful design requires selective binders. Thus, protein design algorithms must be able to distinguish between on-target (or positive design) and off-target binding (or negative design). One of the most prominent examples of design for specificity is the design of specific bZIP-binding peptides by Amy Keating and coworkers for 19 out of the 20 bZIP families; 8 of these peptides were specific for their intended partner over competing peptides. Further, positive and negative design was also used by Anderson and coworkers to predict mutations in the active site of a drug target that conferred resistance to a new drug; positive design was used to maintain wild-type activity, while negative design was used to disrupt binding of the drug. Recent computational redesign by Costas Maranas and coworkers was also capable of experimentally switching the cofactor specificity of Candida boidinii xylose reductase from NADPH to NADH.

Protein resurfacing

Protein resurfacing consists of designing a protein's surface while preserving the overall fold, core, and boundary regions of the protein intact. Protein resurfacing is especially useful for altering the binding of a protein to other proteins. One of the most important applications of protein resurfacing was the design of the RSC3 probe to select broadly neutralizing HIV antibodies at the NIH Vaccine Research Center. First, residues outside of the binding interface between the gp120 HIV envelope protein and the previously discovered b12 antibody were selected for design. Then, the sequence space was selected based on evolutionary information, solubility, similarity with the wild type, and other considerations. The RosettaDesign software was then used to find optimal sequences in the selected sequence space. RSC3 was later used to discover the broadly neutralizing antibody VRC01 in the serum of a long-term HIV-infected non-progressor individual.
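The residue-selection step of a resurfacing protocol might be sketched as follows. The accessibility values and the 0.25 cutoff are illustrative assumptions, not parameters from the RSC3 work:

```python
# Sketch of the resurfacing setup described above: restrict design to exposed
# surface positions, keeping core and interface residues fixed at wild type.
residues = [
    # (position, relative_solvent_accessibility, in_binding_interface)
    (101, 0.05, False),  # buried core -> keep wild type
    (102, 0.60, False),  # exposed surface -> designable
    (103, 0.55, True),   # exposed but in the interface -> keep wild type
    (104, 0.40, False),  # exposed surface -> designable
]

RSA_CUTOFF = 0.25  # illustrative threshold for "surface" residues

designable = [
    pos for pos, rsa, interface in residues
    if rsa >= RSA_CUTOFF and not interface
]
print(designable)  # [102, 104]
```

Only the designable positions would then be handed to a design program such as RosettaDesign, with the remaining positions constrained to their wild-type identities.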

Design of globular proteins

Globular proteins are proteins that contain a hydrophobic core and a hydrophilic surface. Globular proteins often assume a stable structure, unlike fibrous proteins, which have multiple conformations. The three-dimensional structure of globular proteins is typically easier to determine through X-ray crystallography and nuclear magnetic resonance than that of fibrous or membrane proteins, which makes globular proteins more attractive for protein design than the other types of proteins. Most successful protein designs have involved globular proteins. Both RSD-1 and Top7 were de novo designs of globular proteins. Five more protein structures were designed, synthesized, and verified in 2012 by the Baker group. These new proteins serve no biological function, but the structures are intended to act as building blocks that can be expanded to incorporate functional active sites. The structures were found computationally by using new heuristics based on analyzing the connecting loops between parts of the sequence that specify secondary structures.

Design of membrane proteins

Several transmembrane proteins have been successfully designed, along with many other membrane-associated peptides and proteins. Recently, Costas Maranas and his coworkers developed an automated tool to redesign the pore size of Outer Membrane Porin Type-F (OmpF) from E. coli to any desired sub-nanometer size, and assembled the redesigned porins in membranes to perform precise angstrom-scale separations.

Other applications

One of the most desirable uses for protein design is for biosensors, proteins that sense the presence of specific compounds. Attempts at biosensor design include sensors for unnatural molecules, including TNT. More recently, Kuhlman and coworkers designed a biosensor for PAK1.


Saturday, February 14, 2026

Mutually assured destruction

From Wikipedia, the free encyclopedia
 
Strategic bombers, ICBMs, SLBMs, and MIRVs all contribute to mutually assured destruction via a large number of deliverable strategic nuclear weapons.

Mutually assured destruction or mutual assured destruction (MAD) is a doctrine of military strategy and national security policy which posits that a full-scale use of nuclear weapons by an attacker on a nuclear-armed defender with second-strike capabilities would result in the complete annihilation of both the attacker and the defender. It is based on the theory of rational deterrence, which holds that the threat of using strong weapons against the enemy prevents the enemy's use of those same weapons. The strategy is a form of Nash equilibrium in which, once armed, neither side has any incentive to initiate a conflict or to disarm.

The result may be a nuclear peace, in which the presence of nuclear weapons decreases the risk of crisis escalation, since parties will seek to avoid situations that could lead to the use of nuclear weapons. Proponents of nuclear peace theory therefore believe that controlled nuclear proliferation may be beneficial for global stability. Critics argue that nuclear proliferation increases the chance of nuclear war through either deliberate or inadvertent use of nuclear weapons, as well as the likelihood of nuclear material falling into the hands of violent non-state actors.

The term "mutually assured destruction", commonly abbreviated "MAD", was coined by Donald Brennan, a strategist working in Herman Kahn's Hudson Institute in 1962. Brennan conceived the acronym cynically, spelling out the English word "mad" to argue that holding weapons capable of destroying society was irrational.

Theory

Under MAD, each side has enough nuclear weaponry to destroy the other side. Either side, if attacked for any reason by the other, would retaliate with equal or greater force. The expected result is an immediate, irreversible escalation of hostilities resulting in both combatants' mutual, total, and assured destruction. The doctrine requires that neither side construct shelters on a massive scale. If one side constructed such a system of shelters, it would violate the MAD doctrine and destabilize the situation, because it would have less to fear from a second strike. The same principle is invoked against missile defense.

The doctrine further assumes that neither side will dare to launch a first strike because the other side would launch on warning (also called fail-deadly) or with surviving forces (a second strike), resulting in unacceptable losses for both parties. The payoff of the MAD doctrine was and still is expected to be a tense but stable global peace. However, many have argued that mutually assured destruction is unable to deter conventional war that could later escalate. Emerging domains of cyber-espionage, proxy-state conflict, and high-speed missiles threaten to circumvent MAD as a deterrent strategy.

The primary application of this doctrine started during the Cold War (1940s to 1991), in which MAD was seen as helping to prevent any direct full-scale conflicts between the United States and the Soviet Union while they engaged in smaller proxy wars around the world. MAD was also responsible for the arms race, as both nations struggled to keep nuclear parity, or at least retain second-strike capability. Although the Cold War ended in the early 1990s, the MAD doctrine continues to be applied.

Proponents of MAD as part of the US and USSR strategic doctrine believed that nuclear war could best be prevented if neither side could expect to survive a full-scale nuclear exchange as a functioning state. Since the credibility of the threat is critical to such assurance, each side had to invest substantial capital in their nuclear arsenals even if they were not intended for use. In addition, neither side could be expected or allowed to adequately defend itself against the other's nuclear missiles. This led both to the hardening and diversification of nuclear delivery systems (such as nuclear missile silos, ballistic missile submarines, and nuclear bombers kept at fail-safe points) and to the Anti-Ballistic Missile Treaty.

This MAD scenario is often referred to as rational nuclear deterrence.

When the possibility of nuclear warfare between the United States and the Soviet Union started to become a reality, theorists began to think that mutually assured destruction would be sufficient to deter the other side from launching a nuclear weapon. Kenneth Waltz, an American political scientist, believed that nuclear forces were indeed useful, but most useful in deterring other nuclear powers from using theirs, on the basis of mutually assured destruction. The view went further still: nuclear weapons intended for winning a war were held to be impractical, and even too dangerous and risky. Even with the Cold War ending in 1991, deterrence through mutually assured destruction is still said to be the safest course to avoid nuclear warfare.

Effectiveness of the theory according to empirical studies

A study published in the Journal of Conflict Resolution in 2009 quantitatively evaluated the nuclear peace hypothesis and found support for the existence of the stability-instability paradox. The study determined that nuclear weapons promote strategic stability and prevent large-scale wars but simultaneously allow for more low-intensity conflicts. Where a nuclear monopoly exists, with one state possessing nuclear weapons and its opponent none, there is a greater chance of war. In contrast, when both states possess nuclear weapons, the odds of war drop precipitously.

History

Pre-1945

The concept of MAD had been discussed in the literature for nearly a century before the invention of nuclear weapons. One of the earliest references comes from the English author Wilkie Collins, writing at the time of the Franco-Prussian War in 1870: "I begin to believe in only one civilizing influence—the discovery one of these days of a destructive agent so terrible that War shall mean annihilation and men's fears will force them to keep the peace." The concept was also described in 1863 by Jules Verne in his novel Paris in the Twentieth Century, though it was not published until 1994. The book is set in 1960 and describes "the engines of war", which have become so efficient that war is inconceivable and all countries are at a perpetual stalemate.

MAD has been invoked by more than one weapons inventor. For example, Richard Jordan Gatling patented his namesake Gatling gun in 1862 with the partial intention of illustrating the futility of war. Likewise, after his 1867 invention of dynamite, Alfred Nobel stated that "the day when two army corps can annihilate each other in one second, all civilized nations, it is to be hoped, will recoil from war and discharge their troops." In 1937, Nikola Tesla published The Art of Projecting Concentrated Non-dispersive Energy through the Natural Media, a treatise concerning charged particle beam weapons. Tesla described his device as a "superweapon that would put an end to all war."

The March 1940 Frisch–Peierls memorandum, the earliest technical exposition of a practical nuclear weapon, anticipated deterrence as the principal means of combating an enemy with nuclear weapons.

Early Cold War

Aftermath of the atomic bomb explosion over Hiroshima (August 6, 1945), to date one of only two occasions on which a nuclear strike has been carried out as an act of war

In August 1945, the United States became the first nuclear power after the nuclear attacks on Hiroshima and Nagasaki. Four years later, on August 29, 1949, the Soviet Union detonated its own nuclear device. At the time, both sides lacked the means to effectively use nuclear devices against each other. However, with the development of aircraft like the American Convair B-36 and the Soviet Tupolev Tu-95, both sides were gaining a greater ability to deliver nuclear weapons into the interior of the opposing country. The official policy of the United States became one of "massive retaliation", as coined by Secretary of State John Foster Dulles, which called for a massive atomic attack against the Soviet Union if it were to invade Europe, regardless of whether it was a conventional or a nuclear attack.

By the time of the 1962 Cuban Missile Crisis, both the United States and the Soviet Union had developed the capability of launching a nuclear-tipped missile from a submerged submarine, which completed the "third leg" of the nuclear triad weapons strategy necessary to fully implement the MAD doctrine. Having a three-branched nuclear capability eliminated the possibility that an enemy could destroy all of a nation's nuclear forces in a first-strike attack; this, in turn, ensured the credible threat of a devastating retaliatory strike against the aggressor, increasing a nation's nuclear deterrence.

Campbell Craig and Sergey Radchenko argue that Nikita Khrushchev (Soviet leader 1953 to 1964) decided that policies that facilitated nuclear war were too dangerous to the Soviet Union. His approach did not greatly change his foreign policy or military doctrine but is apparent in his determination to choose options that minimized the risk of war.

Strategic Air Command

Image of Boeing B-47B at take-off
Boeing B-47B Stratojet Rocket-Assisted Take Off (RATO) on April 15, 1954
 
Image of B-52D during refueling
B-52D Stratofortress being refueled by a KC-135 Stratotanker, 1965

Beginning in 1955, the United States Strategic Air Command (SAC) kept one-third of its bombers on alert, with crews ready to take off within fifteen minutes and fly to designated targets inside the Soviet Union and destroy them with nuclear bombs in the event of a Soviet first-strike attack on the United States. In 1961, President John F. Kennedy increased funding for this program and raised the commitment to 50 percent of SAC aircraft.

During periods of increased tension in the early 1960s, SAC kept part of its B-52 fleet airborne at all times, to allow an extremely fast retaliatory strike against the Soviet Union in the event of a surprise attack on the United States. This program continued until 1969. Between 1954 and 1992, bomber wings had approximately one-third to one-half of their assigned aircraft on quick reaction ground alert and were able to take off within a few minutes. SAC also maintained the National Emergency Airborne Command Post (NEACP, pronounced "kneecap"), also known as "Looking Glass", which consisted of several EC-135s, one of which was airborne at all times from 1961 through 1990. During the Cuban Missile Crisis the bombers were dispersed to several different airfields, and sixty-five B-52s were airborne at all times.

During the height of the tensions between the US and the USSR in the 1960s, two popular films were made dealing with what could go terribly wrong with the policy of keeping nuclear-bomb-carrying airplanes at the ready: Dr. Strangelove (1964) and Fail Safe (1964).

Retaliation capability (second strike)

Robert McNamara

The strategy of MAD was fully declared in the early 1960s, primarily by United States Secretary of Defense Robert McNamara. In McNamara's formulation, there was a very real danger that a nation with nuclear weapons could attempt to eliminate another nation's retaliatory forces with a surprise, devastating first strike and theoretically "win" a nuclear war relatively unharmed. True second-strike capability could be achieved only when a nation had a guaranteed ability to fully retaliate after a first-strike attack.

The United States had achieved an early form of second-strike capability by fielding continual patrols of strategic nuclear bombers, with a large number of planes always in the air, on their way to or from fail-safe points close to the borders of the Soviet Union. This meant the United States could still retaliate, even after a devastating first-strike attack. The tactic was expensive and problematic because of the high cost of keeping enough planes in the air at all times and the possibility they would be shot down by Soviet anti-aircraft missiles before reaching their targets. In addition, as the idea of a missile gap existing between the US and the Soviet Union developed, there was increasing priority being given to ICBMs over bombers.

The USS George Washington (SSBN-598), the lead ship of the US Navy's first class of Fleet Ballistic Missile Submarines, Nuclear (SSBN)

It was only with the advent of nuclear-powered ballistic missile submarines, starting with the George Washington class in 1959, that a genuine survivable nuclear force became possible and a retaliatory second strike capability guaranteed.

The deployment of fleets of ballistic missile submarines established a guaranteed second-strike capability because of their stealth and the number fielded by each Cold War adversary—it was highly unlikely that all of them could be targeted and preemptively destroyed (in contrast to, for example, a missile silo with a fixed location that could be targeted during a first strike). Given their long range, high survivability, and ability to carry many medium- and long-range nuclear missiles, submarines were a credible and effective means for full-scale retaliation even after a massive first strike.

This deterrence strategy and the program have continued into the 21st century, with nuclear submarines carrying Trident II ballistic missiles as one leg of the US strategic nuclear deterrent and as the sole deterrent of the United Kingdom. The other elements of the US deterrent are intercontinental ballistic missiles (ICBMs) on alert in the continental United States, and nuclear-capable bombers. Ballistic missile submarines are also operated by the navies of China, France, India, and Russia.

The US Department of Defense anticipates a continued need for a sea-based strategic nuclear force. The first of the current Ohio-class SSBNs are expected to be retired by 2029, meaning that a replacement platform must already be seaworthy by that time. A replacement may cost over $4 billion per unit compared to the USS Ohio's $2 billion. The USN's follow-on class of SSBN will be the Columbia class, which began construction in 2021 and is expected to enter service in 2031.

ABMs threaten MAD

In the 1960s both the Soviet Union (A-35 anti-ballistic missile system) and the United States (LIM-49 Nike Zeus) developed anti-ballistic missile systems. Had such systems been able to effectively defend against a retaliatory second strike, MAD would have been undermined. However, multiple scientific studies showed technological and logistical problems in these systems, including the inability to distinguish between real and decoy weapons.

MIRVs

A time exposure of seven MIRVs from Peacekeeper missile passing through clouds

MIRVs as counter against ABM

The multiple independently targetable re-entry vehicle (MIRV) was another weapons system designed specifically to aid with the MAD nuclear deterrence doctrine. With a MIRV payload, one ICBM could hold many separate warheads. MIRVs were first created by the United States in order to counterbalance the Soviet A-35 anti-ballistic missile systems around Moscow. Since each defensive missile could be counted on to destroy only one offensive missile, making each offensive missile have, for example, three warheads (as with early MIRV systems) meant that three times as many defensive missiles were needed for each offensive missile. This made defending against missile attacks more costly and difficult. One of the largest US MIRVed missiles, the LGM-118A Peacekeeper, could hold up to 10 warheads, each with a yield of around 300 kilotons of TNT (1.3 PJ)—all together, an explosive payload equivalent to 230 Hiroshima-type bombs. The multiple warheads made defense untenable with the available technology, leaving the threat of retaliatory attack as the only viable defensive option. MIRVed land-based ICBMs tend to put a premium on striking first. The START II agreement was proposed to ban this type of weapon, but never entered into force.
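The payload arithmetic quoted above can be checked directly; the 13 kt figure used for a Hiroshima-type bomb is a commonly cited estimate:

```python
# Back-of-envelope check of the Peacekeeper figures above: 10 warheads at
# ~300 kt each, compared with a ~13 kt Hiroshima-type bomb.
warheads = 10
yield_per_warhead_kt = 300
hiroshima_kt = 13  # commonly cited estimate of the Hiroshima yield

total_kt = warheads * yield_per_warhead_kt        # 3,000 kt total payload
hiroshima_equivalents = total_kt / hiroshima_kt   # roughly 230
print(round(hiroshima_equivalents))  # 231
```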

In the event of a Soviet conventional attack on Western Europe, NATO planned to use tactical nuclear weapons. The Soviet Union countered this threat by issuing a statement that any use of nuclear weapons (tactical or otherwise) against Soviet forces would be grounds for a full-scale Soviet retaliatory strike (massive retaliation). Thus it was generally assumed that any combat in Europe would end with apocalyptic conclusions.

Land-based MIRVed ICBMs threaten MAD

MIRVed land-based ICBMs are generally considered suitable for a first strike (inherently counterforce) or a counterforce second strike, due to:

  1. Their high accuracy (low circular error probable), compared to submarine-launched ballistic missiles, which historically were less accurate and more prone to defects;
  2. Their fast response time, compared to bombers which are considered too slow;
  3. Their ability to carry multiple MIRV warheads at once, useful for destroying a whole missile field or several cities with one missile.

Unlike a decapitation strike or a countervalue strike, a counterforce strike might result in a potentially more constrained retaliation. Though the Minuteman III of the mid-1960s was MIRVed with three warheads, heavily MIRVed vehicles threatened to upset the balance; these included the SS-18 Satan, deployed in 1976, which was considered to threaten Minuteman III silos and led some neoconservatives to conclude that a Soviet first strike was being prepared. This led to the development of the Pershing II, the Trident I and Trident II, the MX missile, and the B-1 Lancer.

MIRVed land-based ICBMs are considered destabilizing because they tend to put a premium on striking first. When a missile is MIRVed, it is able to carry many warheads (up to eight in existing US missiles, limited by New START, though Trident II is capable of carrying up to 12) and deliver them to separate targets. If it is assumed that each side has 100 missiles, with five warheads each, and further that each side has a 95 percent chance of neutralizing the opponent's missiles in their silos by firing two warheads at each silo, then the attacking side can reduce the enemy ICBM force from 100 missiles to about five by firing 40 missiles with 200 warheads, keeping the remaining 60 missiles in reserve. As such, this type of weapon was intended to be banned under the START II agreement; however, the START II agreement was never brought into force, and neither Russia nor the United States ratified the agreement.
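The exchange arithmetic in the example above works out as follows:

```python
# Worked version of the counterforce exchange described above: 100 silos,
# missiles carrying 5 warheads each, and a 95% kill probability per silo
# when two warheads are assigned to it.
silos = 100
warheads_per_missile = 5
kill_probability = 0.95
warheads_per_silo = 2

warheads_needed = silos * warheads_per_silo               # 200 warheads
missiles_fired = warheads_needed // warheads_per_missile  # 40 missiles
surviving_silos = round(silos * (1 - kill_probability))   # about 5 survive
missiles_in_reserve = 100 - missiles_fired                # 60 held back

print(missiles_fired, surviving_silos, missiles_in_reserve)  # 40 5 60
```

This is why MIRVs reward a first strike: the attacker spends 40 missiles to destroy roughly 95 of the defender's 100, while keeping a 60-missile reserve larger than the defender's surviving force.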

Late Cold War

The original US MAD doctrine was modified on July 25, 1980, with US President Jimmy Carter's adoption of countervailing strategy with Presidential Directive 59. According to its architect, Secretary of Defense Harold Brown, "countervailing strategy" stressed that the planned response to a Soviet attack was no longer to bomb Soviet population centers and cities primarily, but first to kill the Soviet leadership, then attack military targets, in the hope of a Soviet surrender before total destruction of the Soviet Union (and the United States). This modified version of MAD was seen as making a nuclear war winnable, while still maintaining the possibility of assured destruction for at least one party. This policy was further developed by the Reagan administration with the announcement of the Strategic Defense Initiative (SDI, nicknamed "Star Wars"), the goal of which was to develop space-based technology to destroy Soviet missiles before they reached the United States.

SDI was criticized by both the Soviets and many of America's allies (including Prime Minister of the United Kingdom Margaret Thatcher) because, were it ever operational and effective, it would have undermined the "assured destruction" required for MAD. If the United States had a guarantee against Soviet nuclear attacks, its critics argued, it would have first-strike capability, which would have been a politically and militarily destabilizing position. Critics further argued that it could trigger a new arms race, this time to develop countermeasures for SDI. Despite its promise of nuclear safety, SDI was described by many of its critics (including Soviet nuclear physicist and later peace activist Andrei Sakharov) as being even more dangerous than MAD because of these political implications. Supporters also argued that SDI could trigger a new arms race, forcing the USSR to spend an increasing proportion of GDP on defense—something which has been claimed to have been an indirect cause of the eventual collapse of the Soviet Union. Gorbachev himself announced that "the continuation of the S.D.I. program will sweep the world into a new stage of the arms race and would destabilize the strategic situation."

Proponents of ballistic missile defense (BMD) argue that MAD is exceptionally dangerous in that it essentially offers a single course of action in the event of a nuclear attack: full retaliatory response. The fact that nuclear proliferation has led to an increase in the number of nations in the "nuclear club", including nations of questionable stability (e.g. North Korea), and that a nuclear nation might be hijacked by a despot or other person or persons who might use nuclear weapons without a sane regard for the consequences, presents a strong case for proponents of BMD, who seek a policy that both protects against attack and does not require escalation into what might become global nuclear war. Russia continues to have a strong public distaste for Western BMD initiatives, presumably because proprietary operative BMD systems could exceed their technical and financial resources and therefore degrade their larger military standing and sense of security in a post-MAD environment. Russian refusal to accept invitations to participate in NATO BMD may be indicative of the lack of an alternative to MAD in current Russian war-fighting strategy due to the dilapidation of conventional forces after the breakup of the Soviet Union.

Proud Prophet

Proud Prophet was a series of war games played out by various American military officials. The simulation revealed MAD made the use of nuclear weapons virtually impossible without total nuclear annihilation, regardless of how nuclear weapons were implemented in war plans. These results essentially ruled out the possibility of a limited nuclear strike, as every time this was attempted, it resulted in a complete expenditure of nuclear weapons by both the United States and USSR. Proud Prophet marked a shift in American strategy; following Proud Prophet, American rhetoric of strategies that involved the use of nuclear weapons dissipated and American war plans were changed to emphasize the use of conventional forces.

TTAPS Study

In 1983, a group of researchers including Carl Sagan released the TTAPS study (named for the initials of its authors), which predicted that large-scale use of nuclear weapons would cause a "nuclear winter". The study predicted that smoke and debris from the fires ignited by nuclear bombings would be lifted into the atmosphere and diminish sunlight worldwide, lowering land temperatures to between −15 °C and −25 °C. These findings led to the theory that MAD would still occur with many fewer weapons than were possessed by either the United States or the USSR at the height of the Cold War. As such, nuclear winter was used as an argument for significant reduction of nuclear weapons, since MAD would occur anyway.

Post-Cold War

A payload launch vehicle carrying a prototype exoatmospheric kill vehicle is launched from Meck Island at the Kwajalein Missile Range on December 3, 2001, for an intercept of a ballistic missile target over the central Pacific Ocean.

After the fall of the Soviet Union, the Russian Federation emerged as a sovereign entity encompassing most of the territory of the former USSR. Relations between the United States and Russia were, at least for a time, less tense than they had been with the Soviet Union.

While MAD has become less applicable for the US and Russia, it has been argued as a factor behind Israel's acquisition of nuclear weapons. Similarly, diplomats have warned that Japan may be pressured to nuclearize by the presence of North Korean nuclear weapons. The ability to launch a nuclear attack against an enemy city is a relevant deterrent strategy for these powers.

The administration of US President George W. Bush withdrew from the Anti-Ballistic Missile Treaty in June 2002, claiming that the limited national missile defense system which they proposed to build was designed only to prevent nuclear blackmail by a state with limited nuclear capability and was not planned to alter the nuclear posture between Russia and the United States.

While relations have improved and an intentional nuclear exchange is more unlikely, the decay in Russian nuclear capability in the post–Cold War era may have had an effect on the continued viability of the MAD doctrine. A 2006 article by Keir Lieber and Daryl Press stated that the United States could carry out a nuclear first strike on Russia and would "have a good chance of destroying every Russian bomber base, submarine, and ICBM." This was attributed to reductions in Russian nuclear stockpiles and the increasing inefficiency and age of that which remains. Lieber and Press argued that the MAD era is coming to an end and that the United States is on the cusp of global nuclear primacy.

However, in a follow-up article in the same publication, others criticized the analysis, including Peter Flory, the US Assistant Secretary of Defense for International Security Policy, who began by writing "The essay by Keir Lieber and Daryl Press contains so many errors, on a topic of such gravity, that a Department of Defense response is required to correct the record." Regarding reductions in Russian stockpiles, another response stated that "a similarly one-sided examination of [reductions in] U.S. forces would have painted a similarly dire portrait".

A situation in which the United States might actually be expected to carry out a "successful" attack is perceived as a disadvantage for both countries. The strategic balance between the United States and Russia is becoming less stable, and the objective, technical possibility of a first strike by the United States is increasing. At a time of crisis, this instability could lead to an accidental nuclear war. For example, if Russia feared a US nuclear attack, Moscow might make rash moves (such as putting its forces on alert) that would provoke a US preemptive strike.

An outline of current US nuclear strategy toward both Russia and other nations was published as the document "Essentials of Post–Cold War Deterrence" in 1995.

In November 2020, the US successfully destroyed a dummy ICBM outside the atmosphere with another missile. Bloomberg Opinion writes that this defense ability "ends the era of nuclear stability".

India and Pakistan

MAD does not entirely apply to all nuclear-armed rivals. India and Pakistan are an example: because of the superiority of India's conventional armed forces over their Pakistani counterparts, Pakistan may be forced to use its nuclear weapons on invading Indian forces out of desperation, regardless of an Indian retaliatory strike. As such, any large-scale attack on Pakistan by India could precipitate Pakistani use of nuclear weapons, rendering MAD inapplicable. However, MAD remains applicable in that it may deter Pakistan from making a "suicidal" nuclear attack, as opposed to a defensive nuclear strike.

North Korea

Since the emergence of North Korea as a nuclear state, military action has not been a viable option for handling the instability surrounding North Korea, because it could retaliate with nuclear weapons against any conventional attack; non-nuclear neighboring states such as South Korea and Japan are thus incapable of resolving the destabilizing effect of North Korea via military force. MAD may not apply to the situation in North Korea because the theory relies on rational consideration of the use and consequences of nuclear weapons, which may not be the case for potential North Korean deployment.

China

Since 2020, China has undertaken an ambitious expansion and modernization of its nuclear arsenal. As of March 2025, it is estimated to possess approximately 600 nuclear warheads. It has developed new variants of intercontinental ballistic missiles and is capable of delivering nuclear warheads via land-based ballistic missiles, sea-based ballistic missiles, and bombers. A 2023 Pentagon report estimated that China could possess 1,000 operational warheads by 2030. China also has the world’s second-largest economy and a highly capable military force. China’s intense development of its nuclear program complicates mutual assured destruction with other countries, including the United States. As its nuclear program expands, the prospect of a credible MAD relationship with the US is likely to increase. China seeks to develop second-strike capabilities to counter other nations, following years of adhering to a declared no-first-use policy. Several analysts have cited China’s nuclear developments as a means of leveraging power to bolster China’s demands due to an increased threat. However, others have claimed that China is simply seeking to boost its deterrence to fortify its own security in a rapidly developing world. China’s nuclear arsenal is currently smaller than the arsenals of Russia and the United States. Historically, the United States has possessed a strong nuclear advantage over China. Despite the various different analyses by defense experts and academics of China’s nuclear buildup, its exact intentions remain largely up to speculation. There does not exist an official consensus on whether or not the United States and China have full mutual assured destruction. However, China’s ambitious nuclear policy signals that the country is potentially seeking to establish a MAD relationship with the United States. China’s nuclear buildup also plays a role in regional nuclear dynamics. 
In a conflict involving Taiwan, for example, the presence of nuclear forces could lead to rapid escalation. When the stakes of a conflict become existential, urgency on all sides rises sharply and intensely. This also complicates security guarantees and other forms of alliance, potentially drawing in allies and strategic partners around the world.

Official policy

Whether MAD was the officially accepted doctrine of the United States military during the Cold War is largely a matter of interpretation. The United States Air Force, for example, has retrospectively contended that it never advocated MAD as a sole strategy, and that this form of deterrence was seen as one of numerous options in US nuclear policy. Former officers have emphasized that they never felt as limited by the logic of MAD (and were prepared to use nuclear weapons in smaller-scale situations than "assured destruction" allowed), and did not deliberately target civilian cities (though they acknowledge that the result of a "purely military" attack would certainly devastate the cities as well). However, according to a declassified 1959 Strategic Air Command study, US nuclear weapons plans specifically targeted the populations of Beijing, Moscow, Leningrad, East Berlin, and Warsaw for systematic destruction. MAD was implied in several US policies and used in the political rhetoric of leaders in both the United States and the USSR during many periods of the Cold War:

To continue to deter in an era of strategic nuclear equivalence, it is necessary to have nuclear (as well as conventional) forces such that in considering aggression against our interests any adversary would recognize that no plausible outcome would represent a victory or any plausible definition of victory. To this end and so as to preserve the possibility of bargaining effectively to terminate the war on acceptable terms that are as favorable as practical, if deterrence fails initially, we must be capable of fighting successfully so that the adversary would not achieve his war aims and would suffer costs that are unacceptable, or in any event greater than his gains, from having initiated an attack.

The doctrine of MAD was officially at odds with that of the USSR, which had, contrary to MAD, insisted survival was possible. The Soviets believed they could win not only a strategic nuclear war, which they planned to absorb with their extensive civil defense planning, but also the conventional war that they predicted would follow after their strategic nuclear arsenal had been depleted. Official Soviet policy, though, may have had internal critics towards the end of the Cold War, including some in the USSR's own leadership:

Nuclear use would be catastrophic.

— 1981, the Soviet General Staff

Other evidence of this comes from the Soviet minister of defense, Dmitriy Ustinov, who wrote that "A clear appreciation by the Soviet leadership of what a war under contemporary conditions would mean for mankind determines the active position of the USSR." The Soviet doctrine, although being seen as primarily offensive by Western analysts, fully rejected the possibility of a "limited" nuclear war by 1975.

Criticism

Nuclear weapon test Apache (yield 1.85 Mt or 7.7 PJ)

Deterrence theory has been criticized by numerous scholars for various reasons. A prominent strain of criticism argues that rational deterrence theory is contradicted by frequent deterrence failures, which may be attributed to misperceptions. Critics have also argued that leaders do not behave in ways consistent with the predictions of nuclear deterrence theory. For example, it has been argued that states continuing to build nuclear arsenals after reaching the second-strike threshold is inconsistent with the logic of rational deterrence theory. A starker example is Mao Zedong, who urged the socialist camp not to fear nuclear war with the United States since, even if "half of mankind died, the other half would remain while imperialism would be razed to the ground and the whole world would become socialist."

Additionally, many scholars have advanced philosophical objections to the principles of deterrence theory on purely ethical grounds. Among them is Robert L. Holmes, who uses a reductio ad absurdum argument to observe that a system of preventing war based exclusively on the threat of waging war is inherently irrational, and must be considered immoral according to fundamental deontological principles. He also questions whether such a system can be conclusively shown to have prevented warfare in the past, and suggests it may actually increase the probability of war in the future because it relies on the continuous development of new generations of technologically advanced nuclear weapons.

Challengeable assumptions

Second-strike capability

  • A first strike must not be capable of preventing a retaliatory second strike, or else mutual destruction is not assured. In that case, a state would have nothing to lose with a first strike, or might try to preempt the development of an opponent's second-strike capability with a first strike. To avoid this, countries may design their nuclear forces to make a decapitation strike almost impossible, dispersing launchers over wide areas and using a combination of sea-based, air-based, underground, and mobile land-based launchers.
  • Another method of ensuring second-strike capability is the use of a dead man's switch or "fail-deadly" system: in the absence of ongoing action from a functional command structure—such as would occur after a successful decapitation strike—an automatic system defaults to launching a nuclear strike on some target. A particular example is the Soviet (now Russian) Dead Hand system, which has been described as a semi-automatic "version of Dr. Strangelove's Doomsday Machine" that, once activated, can launch a second strike without human intervention. The purpose of the Dead Hand system is to ensure a second strike even if Russia were to suffer a decapitation attack, thus maintaining MAD.

Perfect detection

  • No false positives (errors) in the equipment and/or procedures that must identify a launch by the other side. The implication is that a false alarm could lead to a full nuclear exchange. During the Cold War there were several instances of false positives, as in the case of Stanislav Petrov.
  • Perfect attribution. If there is a launch from the Sino-Russian border, it could be difficult to distinguish which nation is responsible—both Russia and China have the capability—and, hence, against which nation retaliation should occur. A launch from a nuclear-armed submarine could also be difficult to attribute.

Perfect rationality

  • No rogue commanders will have the ability to corrupt the launch decision process. Such an incident very nearly occurred during the Cuban Missile Crisis when an argument broke out aboard a nuclear-armed submarine cut off from radio communication. The second-in-command, Vasili Arkhipov, refused to launch despite an order from Captain Savitsky to do so.
  • All leaders with launch capability care about the survival of their citizens. Winston Churchill is quoted as saying that any strategy will not "cover the case of lunatics or dictators in the mood of Hitler when he found himself in his final dugout."

Inability to defend

  • No fallout shelter networks of sufficient capacity to protect large segments of the population and/or industry.
  • No development of anti-missile technology or deployment of remedial protective gear.

Inherent instability

Another criticism is that deterrence has an inherent instability. As Kenneth Boulding said: "If deterrence were really stable... it would cease to deter." If decision-makers were perfectly rational, they would never order the large-scale use of nuclear weapons, and the credibility of the nuclear threat would be low.

However, this perfect-rationality criticism has been countered in a way consistent with current deterrence policy. In Essentials of Post-Cold War Deterrence, the authors explicitly advocate ambiguity regarding "what is permitted" for other nations and endorse "irrationality"—or, more precisely, the perception of it—as an important tool in deterrence and foreign policy. The document claims that the United States' capacity to deter would be hurt by portraying US leaders as fully rational and cool-headed:

The fact that some elements may appear to be potentially 'out of control' can be beneficial to creating and reinforcing fears and doubts in the minds of an adversary's decision makers. This essential sense of fear is the working force of deterrence. That the U.S. may become irrational and vindictive if its vital interests are attacked should be part of the national persona we project to all adversaries.

Terrorism

  • The threat of foreign and domestic nuclear terrorism has been a criticism of MAD as a defensive strategy, since deterrent strategies are ineffective against those who attack without regard for their lives. The doctrine has also been critiqued in regard to terrorism and asymmetrical warfare more broadly. Critics contend that a retaliatory strike would not be possible in this case because of the decentralization of terrorist organizations, which may operate in several countries dispersed among civilian populations. A misguided retaliatory strike by the targeted nation could even advance terrorist goals, as a contentious strike could drive support for the cause that instigated the nuclear exchange.

However, Robert Gallucci, the president of the John D. and Catherine T. MacArthur Foundation, argues that although traditional deterrence is not an effective approach toward terrorist groups bent on causing a nuclear catastrophe, "the United States should instead consider a policy of expanded deterrence, which focuses not solely on the would-be nuclear terrorists but on those states that may deliberately transfer or inadvertently leak nuclear weapons and materials to them. By threatening retaliation against those states, the United States may be able to deter that which it cannot physically prevent."

Graham Allison makes a similar case and argues that the key to expanded deterrence is coming up with ways of tracing nuclear material to the country that forged the fissile material: "After a nuclear bomb detonates, nuclear forensic cops would collect debris samples and send them to a laboratory for radiological analysis. By identifying unique attributes of the fissile material, including its impurities and contaminants, one could trace the path back to its origin." The process is analogous to identifying a criminal by fingerprints: "The goal would be twofold: first, to deter leaders of nuclear states from selling weapons to terrorists by holding them accountable for any use of their own weapons; second, to give leaders every incentive to tightly secure their nuclear weapons and materials."

Space weapons

  • Strategic analysts have criticized the doctrine of MAD for its inability to respond to the proliferation of space weaponry. First, countries depend on military space systems to unequal degrees, so less-dependent countries may find it beneficial to attack a more-dependent country's space assets, which complicates deterrence. This is especially true for countries such as North Korea, which have extensive ballistic missile arsenals capable of striking space-based systems. Second, even among countries with similar dependence, anti-satellite weapons (ASATs) can disable the command and control of nuclear weapons, encouraging crisis instability and pre-emptive nuclear-disabling strikes. Third, there is a risk of asymmetrical challengers: countries that fall behind in space weapon development may turn to chemical or biological weapons, heightening the risk of escalation and bypassing any deterrent effect of nuclear weapons.

Entanglements

  • Cold War bipolarity no longer describes the global balance of power. The complex modern alliance system ties allies and enemies to one another, so action by one country to deter another could threaten the safety of a third. Such "security trilemmas" can raise tensions even during mundane acts of cooperation, complicating MAD.

Emerging hypersonic weapons

  • Hypersonic ballistic and cruise missiles threaten the retaliatory backbone of mutual assured destruction. Their high precision and speed may enable decapitation strikes that remove another nation's ability to mount a nuclear response. In addition, the secrecy surrounding these weapons' development can make deterrence more asymmetrical.

Failure to retaliate

  • If it were known that a country's leader would not resort to nuclear retaliation, adversaries might be emboldened. Edward Teller, a member of the Manhattan Project, echoed these concerns as early as 1985 when he said that "The MAD policy as a deterrent is totally ineffective if it becomes known that in case of attack, we would not retaliate against the aggressor."
