
Sunday, February 15, 2026

Unfunded mandate

From Wikipedia, the free encyclopedia

An unfunded mandate is a statute or regulation that requires an entity to perform certain actions without providing any money to fulfill the requirements. Such mandates can be imposed on state or local governments, as well as on private individuals or organizations.

In the United States, federal mandates of this kind induce "responsibility, action, procedure or anything else that is imposed by constitutional, administrative, executive, or judicial action" for state and local governments and/or the private sector.

As of 1992, 172 federal mandates obliged state or local governments to fund programs to some extent. Beginning with the Civil Rights Act of 1957 and the Civil Rights Act of 1964, as well as the Voting Rights Act of 1965, the United States federal government has enacted laws that require state and local government spending to promote national goals. During the 1970s, the national government promoted education, mental health, and environmental programs by implementing grant projects at the state and local level; the grants were so common that federal assistance for these programs made up over a quarter of state and local budgets. The rise in federal mandates led to more mandate regulation. During the Reagan administration, Executive Order 12291 was issued and the State and Local Cost Estimate Act of 1981 was passed, implementing a careful examination of the true costs of federal unfunded mandates. More reform came in 1995 with the Unfunded Mandates Reform Act (UMRA), which promoted a Congressional focus on the costs imposed on intergovernmental entities and the private sector by federal mandates. Familiar examples of federal unfunded mandates in the United States include the Americans with Disabilities Act and Medicaid.

Background

An "intergovernmental mandate" generally refers to the responsibilities or activities that one level of government imposes on another by legislative, executive or judicial action. According to the Unfunded Mandates Reform Act of 1995 (UMRA), an intergovernmental mandate can take various forms:

  • An enforceable duty – this refers to any type of legislation, statute or regulation that either requires or proscribes an action of state or local governments, excluding actions imposed as conditions of receiving federal aid.
  • Certain changes in large entitlement programs – this refers to instances when new conditions or reductions in large entitlement programs, providing $5 billion or more annually to state or local governments, are imposed by the federal government.
  • A reduction in federal funding for an existing mandate – this refers to a reduction or elimination of federal funding authorized to cover the costs of an existing mandate.

A 1993 study conducted by Price Waterhouse, sponsored by the National Association of Counties, determined that in fiscal year 1993, counties in the US spent $4.8 billion on twelve unfunded federal mandates. Medicaid was one of these twelve mandates, and it comprised the second largest item in state budgets, accounting for almost 13 percent of state general revenues in 1993.

Mandates can be applied either vertically or horizontally. Vertically applied mandates are directed by a level of government at a single department or program. Conversely, horizontally applied, or "crosscutting", mandates affect various departments or programs. For example, a mandate requiring county health departments to provide outpatient mental health programs would be considered a vertically applied mandate, whereas a requirement that all offices in a given jurisdiction become handicap-accessible would be considered a horizontally applied mandate.

History

Federal unfunded mandates can be traced back to the post-World War II years, when the federal government initiated national programs in education, mental health services, and environmental protection and relied on state and local governments to implement them. In the 1970s, the federal government used grants to increase state and local participation, which resulted in federal assistance constituting over 25 percent of state and local budgets.

The first wave of major mandates occurred in the 1960s and 1970s, concerning civil rights, education, and the environment. The arrival of the Reagan administration ostensibly undermined various federal mandate efforts, as the executive branch promised to decrease federal regulatory efforts. For example, Executive Order 12291 required a cost-benefit analysis and Office of Management and Budget clearance for proposed agency regulations, and the State and Local Cost Estimate Act of 1981 required the Congressional Budget Office to determine the state and local cost effects of proposed federal legislation moving through the legislative branch. However, the U.S. Advisory Commission on Intergovernmental Relations (ACIR) reported that more major intergovernmental regulatory programs were enacted during the 1980s than during the 1970s.

According to a 1995 Brookings Institution report, in 1980 there were 36 laws that qualified as unfunded mandates. Despite opposition from the Reagan administration and George H. W. Bush administration, an additional 27 laws that could be categorized as unfunded mandates went into effect between 1982 and 1991.

The U.S. Supreme Court has been involved in deciding the federal government's role in the U.S. governmental system based on constitutionality. During the period between the New Deal era and the mid-1980s, the Court generally relied on an expansive interpretation of the interstate commerce clause and the 14th Amendment to validate the growth of the federal government's involvement in domestic policymaking. For example, the 1985 Supreme Court case Garcia v. San Antonio Metropolitan Transit Authority affirmed the federal government's ability to directly regulate state and local governmental affairs.

The increase of mandates in the 1980s and 1990s incited state and local protest. In October 1993, state and local interest groups sponsored a National Unfunded Mandates Day, which involved press conferences and appeals to congressional delegations about mandate relief. In early 1995, Congress passed unfunded mandate reform legislation.

In 1992, the Court determined in various cases that the US Constitution protects states and localities from certain unfunded mandate enactments. For example, in the 1992 case New York v. United States, the Court relied on the Tenth Amendment to the United States Constitution to strike down a provision of a federal law on the disposal of low-level radioactive waste that required states to take title to the waste.

Examples

Unfunded mandates are most commonly used in the regulation of civil rights, in anti-poverty programs, and in environmental protection programs.

Clean Air Act

The Clean Air Act was passed in 1963 and supported the development of research programs looking into air pollution problems and solutions; the United States Environmental Protection Agency (EPA), established on December 2, 1970, received authority to research air quality. The 1970 Amendments to the Clean Air Act established the National Ambient Air Quality Standards, authorized requirements for control of motor vehicle emissions, and increased federal enforcement authority, but required states to implement plans to adhere to the new standards. The 1990 Amendments expanded and modified both the National Ambient Air Quality Standards and the enforcement authority, increasing the mandates on states to comply with federal air quality standards. States have had to write State Implementation Plans, have them approved by the EPA, and fund the implementation themselves.

The Americans with Disabilities Act of 1990

The Americans with Disabilities Act of 1990 prohibits discrimination based on disability, requires existing public facilities to be made accessible, requires new facilities to comply with accessibility expectations, and requires employers to provide reasonable accommodations for employees with disabilities, such as a sign language interpreter. Tax incentives encourage employers to hire people with disabilities. State institutions and local employers are expected to pay for changes made to existing facilities and are responsible for ensuring that new facilities comply with the federal requirements under the ADA.

Medicaid

Medicaid is a health program for low-income families and people with certain medical needs in the United States. It is funded jointly by the federal and state governments but implemented by the states. Federal funding covers a variable share, at least half, of each state's Medicaid costs, and states are expected to cover the remainder. This means that any federally mandated increase in Medicaid spending forces states to spend more. However, because state participation in Medicaid is voluntary, it is not technically an unfunded mandate.

EMTALA

The Emergency Medical Treatment and Active Labor Act (EMTALA) was passed by the United States Congress in 1986 to halt certain practices of patient dumping. The act requires hospitals accepting payment from Medicare to provide emergency treatment to any patient coming to their emergency department, regardless of insurance coverage or ability to pay. Though hospitals could theoretically choose not to participate in Medicare, placing themselves outside EMTALA's scope, very few forgo Medicare payments, so EMTALA applies to nearly all US hospitals. Though EMTALA imposes an obligation to provide certain emergency care, the statute does not contain any provision for funding or financing that care. EMTALA can therefore be characterized as an unfunded mandate.

The No Child Left Behind Act of 2001

The 2001 No Child Left Behind Act was passed in response to widespread concern about the quality of public education in America. The act was meant to narrow the gap between students who were performing very well and students who were performing poorly. It required schools receiving federal funding to administer statewide standardized tests to students at the end of each year. If students did not show improvement from year to year on these tests, their schools were asked to improve the quality of the education by hiring highly qualified teachers and by tutoring struggling students. To continue receiving federal grants, states had to develop plans that demonstrated their steps to improve the quality of education in their schools. The No Child Left Behind Act mandated that states fund the improvements in their schools and provide the appropriate training for less qualified teachers. Federally mandated K-12 education is also a (mostly) unfunded mandate.

Criticism

Critics argue that unfunded mandates are inefficient and an unfair imposition of the national government on smaller governments. While many scholars do not object to the goals of the mandates, the way they are written and enforced is criticized for its ineffectiveness. State and local governments do not always disagree with the spirit of a mandate, but they sometimes object to the high costs they must bear to carry out its objectives.

The debate on unfunded federal mandates is visible in cases such as New York v. United States, mentioned above. In School District of Pontiac, Michigan v. Duncan, the plaintiffs alleged that the school district need not comply with the No Child Left Behind Act of 2001 because the federal government did not provide sufficient funding; the court concluded that insufficient federal funding was not a valid reason for noncompliance with a federal mandate.

Unfunded Mandates Reform Act

Purpose

The Unfunded Mandates Reform Act (UMRA) was approved by the 104th Congress on March 22, 1995, and became effective October 5, 1995, during the Clinton administration. It is Public Law 104-4. The official legislation summarizes the bill as being: "An Act: To curb the practice of imposing unfunded Federal mandates on States and local governments; [...] and to ensure that the Federal Government pays the costs incurred by those governments in complying with certain requirements under Federal statutes and regulations, and for other purposes."

UMRA was enacted to discourage the imposition of mandates that do not include federal funding to help state, local, and tribal governments (SLTGs) carry out the goals of the mandate. It also directed the Congressional Budget Office to estimate the costs of mandates to SLTGs and to the private sector, and federal agencies issuing mandates to estimate the costs of those mandates to the entities they regulate.

Application

Most of the act's provisions apply to proposed and final rules for which a notice of the proposed rule was published, and that include a federal mandate that could result in expenditures by SLTGs or the private sector of $100 million or more in any given year. If a mandate meets these conditions, a written statement must be provided that includes the legal authority for the rule, a cost-benefit assessment, a description of the macroeconomic effects that the mandate will likely have, and a summary of concerns raised by SLTGs and how they were addressed. An agency enforcing the mandate must also choose the least costly option that still achieves the goals of the mandate, and must consult with elected officials of SLTGs to allow for their input on the implementation of the mandate and its goals. Section 203 of UMRA is broader in that it applies to all regulatory requirements that significantly affect small governments, and requires federal agencies to provide notice of the requirements to the affected governments, enable their officials to provide input on the mandate, and inform and educate them on the requirements for implementing the mandate.

UMRA allows the United States Congress to decline unfunded federal mandates within legislation if such mandates are estimated to cost more than the threshold amounts estimated by the Congressional Budget Office. UMRA does not apply to "conditions of federal assistance; duties stemming from participation in voluntary federal programs; rules issued by independent regulatory agencies; rules issued without a general notice of proposed rulemaking; and rules and legislative provisions that cover individual constitutional rights, discrimination, emergency assistance, grant accounting and auditing procedures, national security, treaty obligations, and certain elements of Social Security".

Effectiveness

Ever since UMRA was proposed, it has remained unclear how effective the legislation actually is at limiting the burdens imposed by unfunded mandates on SLTGs, and whether unfunded mandates need to be limited so strictly. Proponents of the act argue that UMRA is needed to limit legislation that imposes obligations on SLTGs and creates higher costs and less efficiency, while opponents argue that federal unfunded mandates are sometimes necessary to achieve national goals that state and local governments do not fund voluntarily. Opponents also question the effectiveness of the bill because of the aforementioned exemptions.

2015 Unfunded Mandates and Information Transparency Act

The Act was written to amend UMRA by having the CBO compare the authorized level of funding in legislation to the costs of carrying out any changes, which it did by also amending the Congressional Budget Act of 1974. The bill was introduced by Republican North Carolina Representative Virginia Foxx and passed by the House on February 4, 2015.

Foxx had authored a previous version of this bill, which also passed the House, as H.R. 899 (113th Congress) in February 2014. The bill would allow private companies and trade associations to review proposed rules before they are announced to the public; the concern is that private companies could weaken upgrades to public protections.

Protein design

From Wikipedia, the free encyclopedia

Protein design is the rational design of new protein molecules to engineer novel activity, behavior, or purpose, and to advance basic understanding of protein function. Proteins can be designed from scratch (de novo design) or by making calculated variants of a known protein structure and its sequence (termed protein redesign). Rational protein design approaches predict protein sequences that will fold to specific structures. These predicted sequences can then be validated experimentally through methods such as peptide synthesis, site-directed mutagenesis, or artificial gene synthesis.

Rational protein design dates back to the mid-1970s. Recently, however, there have been numerous examples of successful rational design of water-soluble and even transmembrane peptides and proteins, owing in part to a better understanding of the factors contributing to protein structure stability and to the development of better computational methods.

Overview and history

The goal in rational protein design is to predict amino acid sequences that will fold to a specific protein structure. Although the number of possible protein sequences is vast, growing exponentially with the size of the protein chain, only a subset of them will fold reliably and quickly to one native state. Protein design involves identifying novel sequences within this subset. The native state of a protein is the conformational free energy minimum for the chain. Thus, protein design is the search for sequences that have the chosen structure as a free energy minimum. In a sense, it is the reverse of protein structure prediction. In design, a tertiary structure is specified, and a sequence that will fold to it is identified. Hence, it is also termed inverse folding. Protein design is then an optimization problem: using some scoring criteria, an optimized sequence that will fold to the desired structure is chosen.

When the first proteins were rationally designed during the 1970s and 1980s, their sequences were optimized manually based on analyses of other known proteins, sequence composition, amino acid charges, and the geometry of the desired structure. The first designed proteins are attributed to Bernd Gutte, who designed a reduced version of a known catalyst, bovine ribonuclease, as well as tertiary structures consisting of beta-sheets and alpha-helices, including a binder of DDT. Urry and colleagues later designed elastin-like fibrous peptides based on rules of sequence composition. Richardson and coworkers designed a 79-residue protein with no sequence homology to any known protein. In the 1990s, the advent of powerful computers, libraries of amino acid conformations, and force fields developed mainly for molecular dynamics simulations enabled the development of structure-based computational protein design tools. Following the development of these computational tools, great success has been achieved over the last 30 years in protein design. The first protein designed completely de novo was reported by Stephen Mayo and coworkers in 1997, and, shortly after, in 1999, Peter S. Kim and coworkers designed dimers, trimers, and tetramers of unnatural right-handed coiled coils. In 2003, David Baker's laboratory designed a full protein to a fold never seen before in nature. Later, in 2008, Baker's group computationally designed enzymes for two different reactions. In 2010, one of the most powerful broadly neutralizing antibodies was isolated from patient serum using a computationally designed protein probe. In 2024, Baker received one half of the Nobel Prize in Chemistry for his advancement of computational protein design, with the other half shared by Demis Hassabis and John Jumper of DeepMind for protein structure prediction. Due to these and other successes (e.g., see examples below), protein design has become one of the most important tools available for protein engineering. There is great hope that the design of new proteins, small and large, will have uses in biomedicine and bioengineering.

Underlying models of protein structure and function

Protein design programs use computer models of the molecular forces that act on proteins in vivo. To make the problem tractable, protein design models simplify these forces. Although protein design programs vary greatly, they all have to address four main modeling questions: what is the target structure of the design, what flexibility is allowed on the target structure, which sequences are included in the search, and which force field will be used to score sequences and structures.

Target structure

The Top7 protein was one of the first proteins designed for a fold that had never been seen before in nature.

Protein function is heavily dependent on protein structure, and rational protein design uses this relationship to design function by designing proteins that have a target structure or fold. Thus, by definition, in rational protein design the target structure or ensemble of structures must be known beforehand. This contrasts with other forms of protein engineering, such as directed evolution, where a variety of methods are used to find proteins that achieve a specific function, and with protein structure prediction where the sequence is known, but the structure is unknown.

Most often, the target structure is based on a known structure of another protein. However, designing novel folds not seen in nature has become increasingly possible. Peter S. Kim and coworkers designed trimers and tetramers of unnatural coiled coils, which had not been seen before in nature. The protein Top7, developed in David Baker's lab, was designed entirely with protein design algorithms, to a completely novel fold. More recently, Baker and coworkers developed a series of principles to design ideal globular-protein structures based on protein folding funnels that bridge between secondary structure prediction and tertiary structures. These principles, which build on both protein structure prediction and protein design, were used to design five different novel protein topologies.

Sequence space

FSD-1 (shown in blue, PDB id: 1FSV) was the first de novo computational design of a full protein. The target fold was that of the zinc finger in residues 33–60 of the structure of protein Zif268 (shown in red, PDB id: 1ZAA). The designed sequence had very little sequence identity with any known protein sequence.

In rational protein design, proteins can be redesigned from the sequence and structure of a known protein, or designed completely from scratch in de novo protein design. In protein redesign, most of the residues in the sequence are kept as their wild-type amino acids while a few are allowed to mutate. In de novo design, the entire sequence is designed anew, based on no prior sequence.

Both de novo designs and protein redesigns can establish rules on the sequence space: the specific amino acids that are allowed at each mutable residue position. For example, the composition of the surface of the RSC3 probe to select HIV-broadly neutralizing antibodies was restricted based on evolutionary data and charge balancing. Many of the earliest attempts at protein design were heavily based on empirical rules on the sequence space. Moreover, the design of fibrous proteins usually follows strict rules on the sequence space. Collagen-based designed proteins, for example, are often composed of Gly-Pro-X repeating patterns. The advent of computational techniques allows proteins to be designed with no human intervention in sequence selection.

Structural flexibility

Common protein design programs use rotamer libraries to simplify the conformational space of protein side chains. This animation loops through all the rotamers of the isoleucine amino acid based on the Penultimate Rotamer Library (total of 7 rotamers).

In protein design, the target structure (or structures) of the protein are known. However, a rational protein design approach must model some flexibility on the target structure in order to increase the number of sequences that can be designed for that structure and to minimize the chance of a sequence folding to a different structure. For example, in a protein redesign of one small amino acid (such as alanine) in the tightly packed core of a protein, very few mutants would be predicted by a rational design approach to fold to the target structure, if the surrounding side-chains are not allowed to be repacked.

Thus, an essential parameter of any design process is the amount of flexibility allowed for both the side-chains and the backbone. In the simplest models, the protein backbone is kept rigid while some of the protein side-chains are allowed to change conformations. However, side-chains can have many degrees of freedom in their bond lengths, bond angles, and χ dihedral angles. To simplify this space, protein design methods use rotamer libraries that assume ideal values for bond lengths and bond angles, while restricting χ dihedral angles to a few frequently observed low-energy conformations termed rotamers.

Rotamer libraries are derived from the statistical analysis of many protein structures. Backbone-independent rotamer libraries describe all rotamers irrespective of the backbone conformation around the side chain. Backbone-dependent rotamer libraries, in contrast, describe how likely each rotamer is to appear depending on the protein backbone arrangement around the side chain. Most protein design programs use one conformation (e.g., the modal value for rotamer dihedrals in space) or several points in the region described by the rotamer; the OSPREY protein design program, in contrast, models the entire continuous region.
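To make the idea concrete, here is a minimal Python sketch of how a backbone-dependent rotamer lookup might be organized: side-chain chi angles and probabilities binned by the backbone phi/psi angles. The data values and binning scheme are illustrative assumptions, not the actual format of the Penultimate or Dunbrack libraries.

# Minimal sketch of a backbone-dependent rotamer lookup. All chi angles
# (degrees) and probabilities below are made-up placeholder values.
ROTAMERS = {
    # (residue, phi_bin, psi_bin) -> list of (chi_angles, probability)
    ("ILE", -60, -40): [((-65.0, 170.0), 0.62), ((-57.0, -60.0), 0.21)],
    ("ILE", -120, 140): [((-65.0, 170.0), 0.48), ((62.0, 170.0), 0.30)],
}

def lookup_rotamers(residue, phi, psi, bin_width=20):
    """Return the rotamer list for the backbone bin containing (phi, psi)."""
    phi_bin = round(phi / bin_width) * bin_width
    psi_bin = round(psi / bin_width) * bin_width
    return ROTAMERS.get((residue, phi_bin, psi_bin), [])

# An isoleucine whose backbone falls in the (-60, -40) phi/psi bin:
print(lookup_rotamers("ILE", -63.2, -41.8))

A backbone-independent library would simply drop the phi/psi part of the key and store one rotamer list per residue type.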

Although rational protein design must preserve the general backbone fold of a protein, allowing some backbone flexibility can significantly increase the number of sequences that fold to the target structure while maintaining its general fold. Backbone flexibility is especially important in protein redesign because sequence mutations often result in small changes to the backbone structure. Moreover, backbone flexibility can be essential for more advanced applications of protein design, such as binding prediction and enzyme design. Some models of protein design backbone flexibility include small and continuous global backbone movements, discrete backbone samples around the target fold, backrub motions, and protein loop flexibility.

Energy function

Comparison of various potential energy functions. The most accurate energy functions are those that use quantum mechanical calculations, but these are too slow for protein design. At the other extreme, heuristic energy functions are based on statistical terms and are very fast. In the middle are molecular mechanics energy functions, which are physically based but not as computationally expensive as quantum mechanical simulations.

Rational protein design techniques must be able to discriminate sequences that will be stable under the target fold from those that would prefer other low-energy competing states. Thus, protein design requires accurate energy functions that can rank and score sequences by how well they fold to the target structure. At the same time, however, these energy functions must remain computationally tractable. One of the most challenging requirements for successful design is an energy function that is both accurate and fast to evaluate.

The most accurate energy functions are those based on quantum mechanical simulations. However, such simulations are too slow and typically impractical for protein design. Instead, many protein design algorithms use either physics-based energy functions adapted from molecular mechanics simulation programs, knowledge-based energy functions, or a hybrid mix of both. The trend has been toward using more physics-based potential energy functions.

Physics-based energy functions, such as AMBER and CHARMM, are typically derived from quantum mechanical simulations and from experimental data from thermodynamics, crystallography, and spectroscopy. These energy functions typically simplify the physical energy function and make it pairwise decomposable, meaning that the total energy of a protein conformation can be calculated by adding the pairwise energy between each atom pair, which makes them attractive for optimization algorithms. Physics-based energy functions typically model an attractive-repulsive Lennard-Jones term between atoms and a pairwise Coulombic electrostatics term between non-bonded atoms.
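As an illustration of pairwise decomposability, the following Python sketch sums a Lennard-Jones term and a Coulombic term over all non-bonded atom pairs. The parameters (epsilon, sigma, charges) are placeholders, not values from AMBER or CHARMM.

import itertools
import math

COULOMB_K = 332.0637  # kcal*Angstrom/(mol*e^2), the usual conversion constant

def pair_energy(r, eps, sigma, q1, q2):
    """Lennard-Jones plus Coulomb interaction at distance r (in Angstroms)."""
    lj = 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    coulomb = COULOMB_K * q1 * q2 / r
    return lj + coulomb

def total_nonbonded_energy(atoms):
    """Sum pairwise terms over all atom pairs; atoms = [(xyz, eps, sigma, q)]."""
    total = 0.0
    for (p1, e1, s1, q1), (p2, e2, s2, q2) in itertools.combinations(atoms, 2):
        r = math.dist(p1, p2)
        # Lorentz-Berthelot combining rules, a common force-field convention.
        total += pair_energy(r, math.sqrt(e1 * e2), (s1 + s2) / 2, q1, q2)
    return total

atoms = [((0.0, 0.0, 0.0), 0.1, 3.4, -0.3), ((3.8, 0.0, 0.0), 0.1, 3.4, 0.3)]
print(total_nonbonded_energy(atoms))

Because every term involves at most two atoms, the total decomposes into the per-rotamer and rotamer-pair terms used by the optimization algorithms described below.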

Water-mediated hydrogen bonds play a key role in protein–protein binding. One such interaction is shown between residues D457, S365 in the heavy chain of the HIV-broadly-neutralizing antibody VRC01 (green) and residues N58 and Y59 in the HIV envelope protein GP120 (purple).

Statistical potentials, in contrast to physics-based potentials, have the advantage of being fast to compute, of implicitly accounting for complex effects, and of being less sensitive to small changes in the protein structure. These energy functions derive energy values from frequencies of appearance in a structural database.

Protein design, however, has requirements that molecular mechanics force-fields do not always meet. Molecular mechanics force-fields, which have been used mostly in molecular dynamics simulations, are optimized for the simulation of single sequences, whereas protein design searches through many conformations of many sequences. Thus, molecular mechanics force-fields must be tailored for protein design. In practice, protein design energy functions often incorporate both statistical terms and physics-based terms. For example, the Rosetta energy function, one of the most-used energy functions, incorporates physics-based energy terms originating in the CHARMM energy function, and statistical energy terms, such as rotamer probability and knowledge-based electrostatics. Typically, energy functions are highly customized between laboratories, and specifically tailored for every design.

Challenges for effective design energy functions

Water makes up most of the molecules surrounding proteins and is the main driver of protein structure. Thus, modeling the interaction between water and protein is vital in protein design. However, the number of water molecules that interact with a protein at any given time is huge, and each one has a large number of degrees of freedom and interaction partners. Instead of modeling them explicitly, protein design programs model most of these water molecules as a continuum, capturing both the hydrophobic effect and solvation polarization.

Individual water molecules can sometimes have a crucial structural role in the core of proteins, and in protein–protein or protein–ligand interactions. Failing to model such waters can result in mispredictions of the optimal sequence of a protein–protein interface. As an alternative, water molecules can be added to the set of side-chain rotamers.


As an optimization problem

This animation illustrates the complexity of a protein design search, which typically compares all the rotamer conformations from all possible mutations at all residues. In this example, the residues Phe36 and His106 are allowed to mutate to, respectively, the amino acids Tyr and Asn. Phe and Tyr have 4 rotamers each in the rotamer library, while Asn and His have 7 and 8 rotamers, respectively (from Richardson's penultimate rotamer library). The animation loops through all (4 + 4) × (7 + 8) = 120 possibilities. The structure shown is that of myoglobin, PDB id: 1mbn.

The goal of protein design is to find a protein sequence that will fold to a target structure. A protein design algorithm must, thus, search all the conformations of each sequence, with respect to the target fold, and rank sequences according to the lowest-energy conformation of each one, as determined by the protein design energy function. Thus, a typical input to the protein design algorithm is the target fold, the sequence space, the structural flexibility, and the energy function, while the output is one or more sequences that are predicted to fold stably to the target structure.

The number of candidate protein sequences, however, grows exponentially with the number of protein residues; for example, there are 20^100 protein sequences of length 100. Furthermore, even if amino acid side-chain conformations are limited to a few rotamers (see Structural flexibility), this results in an exponential number of conformations for each sequence. Thus, for a 100-residue protein, and assuming that each amino acid has exactly 10 rotamers, a search algorithm that searches this space will have to search over 200^100 protein conformations.

The most common energy functions can be decomposed into pairwise terms between rotamers and amino acid types, which casts the problem as a combinatorial one, and powerful optimization algorithms can be used to solve it. In those cases, the total energy of each conformation belonging to each sequence can be formulated as a sum of individual and pairwise terms between residue positions. If a designer is interested only in the best sequence, the protein design algorithm only requires the lowest-energy conformation of the lowest-energy sequence. In these cases, the amino acid identity of each rotamer can be ignored and all rotamers belonging to different amino acids can be treated the same. Let r_i be a rotamer at residue position i in the protein chain, and E(r_i) the potential energy between the internal atoms of the rotamer. Let E(r_i, r_j) be the potential energy between r_i and rotamer r_j at residue position j. Then, we define the optimization problem as one of finding the conformation of minimum total energy E_T:

E_T = Σ_i E(r_i) + Σ_i Σ_{j>i} E(r_i, r_j)    (1)

Minimizing E_T is an NP-hard problem. Nevertheless, in practice many instances of protein design can be solved exactly or optimized satisfactorily through heuristic methods.
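A minimal Python sketch of Equation (1) on a toy problem with random placeholder energies; brute-force enumeration of all n_rot^n_res conformations is feasible only at this toy scale, which is exactly why the algorithms in the next section exist.

import itertools
import random

random.seed(0)
n_res, n_rot = 4, 3  # tiny toy problem; real designs are exponentially larger
e_self = [[random.uniform(-1, 1) for _ in range(n_rot)] for _ in range(n_res)]
e_pair = {(i, j): [[random.uniform(-1, 1) for _ in range(n_rot)] for _ in range(n_rot)]
          for i in range(n_res) for j in range(i + 1, n_res)}

def total_energy(conf):
    """E_T = sum_i E(r_i) + sum_{i<j} E(r_i, r_j) for a rotamer assignment."""
    e = sum(e_self[i][conf[i]] for i in range(n_res))
    return e + sum(e_pair[i, j][conf[i]][conf[j]] for (i, j) in e_pair)

# Exhaustively enumerate all n_rot ** n_res = 81 conformations.
gmec = min(itertools.product(range(n_rot), repeat=n_res), key=total_energy)
print(gmec, total_energy(gmec))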

Algorithms

Several algorithms have been developed specifically for the protein design problem. These algorithms can be divided into two broad classes: exact algorithms, such as dead-end elimination, that lack runtime guarantees but guarantee the quality of the solution; and heuristic algorithms, such as Monte Carlo, that are faster than exact algorithms but offer no guarantees on the optimality of the results. Exact algorithms guarantee that the optimization process produces the optimal solution according to the protein design model. Thus, if the predictions of exact algorithms fail when they are experimentally validated, the source of error can be attributed to the energy function, the allowed flexibility, the sequence space, or the target structure (e.g., if it cannot be designed for).

Some protein design algorithms are listed below. Although these algorithms address only the most basic formulation of the protein design problem, Equation (1), many extensions that improve modeling, such as greater structural flexibility (e.g., protein backbone flexibility) or sophisticated energy terms, are built atop these algorithms. For example, Rosetta Design incorporates sophisticated energy terms and backbone flexibility using Monte Carlo as the underlying optimizing algorithm. OSPREY's algorithms build on the dead-end elimination algorithm and A* to incorporate continuous backbone and side-chain movements. Thus, these algorithms provide a good perspective on the different kinds of algorithms available for protein design.

In 2020, scientists reported the development of an AI-based process that uses genome databases for evolution-based design of novel proteins; they used deep learning to identify design rules.[24][25] In 2022, a study reported deep learning software that can design proteins containing pre-specified functional sites.

With mathematical guarantees

Dead-end elimination

The dead-end elimination (DEE) algorithm iteratively reduces the search space of the problem by removing rotamers that can be provably shown not to be part of the global minimum energy conformation (GMEC). On each iteration, the dead-end elimination algorithm compares all possible pairs of rotamers at each residue position, and removes each rotamer r′_i that can be shown to always be of higher energy than another rotamer r_i, and is thus not part of the GMEC:

E(r′_i) + Σ_{j≠i} min_{r_j} E(r′_i, r_j) > E(r_i) + Σ_{j≠i} max_{r_j} E(r_i, r_j)

Other powerful extensions to the dead-end elimination algorithm include the pairs elimination criterion, and the generalized dead-end elimination criterion. This algorithm has also been extended to handle continuous rotamers with provable guarantees.

Although each iteration of the dead-end elimination algorithm runs in polynomial time, it cannot guarantee convergence. If, after a certain number of iterations, the algorithm does not prune any more rotamers, then either rotamers have to be merged or another search algorithm must be used to search the remaining search space. In such cases, dead-end elimination acts as a pre-filtering algorithm to reduce the search space, while other algorithms, such as A*, Monte Carlo, linear programming, or FASTER, are used to search the remaining space.
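The following Python sketch applies the singles criterion above to toy random energies; real implementations add the Goldstein, pairs, and generalized criteria mentioned earlier.

import random

random.seed(0)
n_res, n_rot = 4, 3
e_self = [[random.uniform(-1, 1) for _ in range(n_rot)] for _ in range(n_res)]
e_pair = {(i, j): [[random.uniform(-1, 1) for _ in range(n_rot)] for _ in range(n_rot)]
          for i in range(n_res) for j in range(i + 1, n_res)}

def pair_e(i, ri, j, rj):
    """Look up E(r_i, r_j) regardless of position order."""
    return e_pair[i, j][ri][rj] if i < j else e_pair[j, i][rj][ri]

def dee_prune():
    # alive[i] holds the rotamers at position i not yet proven to dead-end.
    alive = [set(range(n_rot)) for _ in range(n_res)]
    changed = True
    while changed:  # keep sweeping until no rotamer can be eliminated
        changed = False
        for i in range(n_res):
            for r_bad in list(alive[i]):
                for r_good in alive[i] - {r_bad}:
                    # Best case for r_bad vs. worst case for r_good.
                    best_bad = e_self[i][r_bad] + sum(
                        min(pair_e(i, r_bad, j, rj) for rj in alive[j])
                        for j in range(n_res) if j != i)
                    worst_good = e_self[i][r_good] + sum(
                        max(pair_e(i, r_good, j, rj) for rj in alive[j])
                        for j in range(n_res) if j != i)
                    if best_bad > worst_good:
                        alive[i].discard(r_bad)  # provably not in the GMEC
                        changed = True
                        break
    return alive

print(dee_prune())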

Branch and bound

The protein design conformational space can be represented as a tree, where the protein residues are ordered in an arbitrary way, and the tree branches at each of the rotamers in a residue. Branch and bound algorithms use this representation to efficiently explore the conformation tree: At each branching, branch and bound algorithms bound the conformation space and explore only the promising branches.

A popular search algorithm for protein design is the A* search algorithm. A* computes, for each partial tree path, a score that lower-bounds (with guarantees) the energy of every full conformation extending that path. Each partial conformation is added to a priority queue, and at each iteration the partial path with the lowest lower bound is popped from the queue and expanded. The algorithm stops once a full conformation has been enumerated, and guarantees that this conformation is optimal.

The A* score f in protein design consists of two parts, f = g + h. g is the exact energy of the rotamers that have already been assigned in the partial conformation, and h is a lower bound on the energy of the rotamers that have not yet been assigned. Each is defined as follows, where d is the index of the last assigned residue in the partial conformation and n is the total number of residues:

g = Σ_{i=1..d} E(r_i) + Σ_{i=1..d} Σ_{j=i+1..d} E(r_i, r_j)

h = Σ_{i=d+1..n} min_{r_i} [ E(r_i) + Σ_{j=1..d} E(r_i, r_j) + Σ_{j=i+1..n} min_{r_j} E(r_i, r_j) ]
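A toy Python sketch of this search on random placeholder energies. The bound h used here is one admissible choice (assumed for illustration): for each unassigned residue, take the best rotamer against the assigned residues plus the best possible pairwise terms among the remaining residues.

import heapq
import random

random.seed(0)
n_res, n_rot = 4, 3
e_self = [[random.uniform(-1, 1) for _ in range(n_rot)] for _ in range(n_res)]
e_pair = {(i, j): [[random.uniform(-1, 1) for _ in range(n_rot)] for _ in range(n_rot)]
          for i in range(n_res) for j in range(i + 1, n_res)}

def pair_e(i, ri, j, rj):
    return e_pair[i, j][ri][rj] if i < j else e_pair[j, i][rj][ri]

def g_score(partial):
    # Exact energy of the already-assigned rotamers (g in f = g + h).
    d = len(partial)
    return (sum(e_self[i][partial[i]] for i in range(d))
            + sum(pair_e(i, partial[i], j, partial[j])
                  for i in range(d) for j in range(i + 1, d)))

def h_score(partial):
    # Admissible lower bound on the energy of the unassigned residues.
    d = len(partial)
    bound = 0.0
    for i in range(d, n_res):
        bound += min(
            e_self[i][ri]
            + sum(pair_e(i, ri, j, partial[j]) for j in range(d))
            + sum(min(pair_e(i, ri, j, rj) for rj in range(n_rot))
                  for j in range(i + 1, n_res))
            for ri in range(n_rot))
    return bound

queue = [(h_score(()), ())]
while queue:
    f, partial = heapq.heappop(queue)
    if len(partial) == n_res:
        print(partial, f)  # first complete conformation popped is the GMEC
        break
    for r in range(n_rot):
        child = partial + (r,)
        heapq.heappush(queue, (g_score(child) + h_score(child), child))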

Integer linear programming

The problem of optimizing E_T (Equation (1)) can be formulated as an integer linear program (ILP). One of the most powerful formulations uses binary variables q_i(r_i) to represent the presence of each rotamer and q_{ij}(r_i, r_j) to represent the presence of each interacting rotamer pair in the final solution, and constrains the solution to have exactly one rotamer for each residue and one pairwise interaction for each pair of residues:

minimize   Σ_i Σ_{r_i} E(r_i) q_i(r_i) + Σ_{i<j} Σ_{r_i} Σ_{r_j} E(r_i, r_j) q_{ij}(r_i, r_j)

s.t.   Σ_{r_i} q_i(r_i) = 1   for every residue i
       Σ_{r_j} q_{ij}(r_i, r_j) = q_i(r_i)   for every pair i < j and rotamer r_i
       Σ_{r_i} q_{ij}(r_i, r_j) = q_j(r_j)   for every pair i < j and rotamer r_j
       q_i(r_i), q_{ij}(r_i, r_j) ∈ {0, 1}

ILP solvers, such as CPLEX, can compute the exact optimal solution for large instances of protein design problems. These solvers use a linear programming relaxation of the problem, where q_i and q_{ij} are allowed to take continuous values, in combination with a branch and cut algorithm to search only a small portion of the conformation space for the optimal solution. ILP solvers have been shown to solve many instances of the side-chain placement problem.
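A sketch of this formulation in Python using the open-source PuLP modeler and its bundled CBC solver (CPLEX, mentioned above, is a commercial alternative; the formulation itself is solver-independent). Energies are random placeholders.

import random
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

random.seed(0)
n_res, n_rot = 3, 2
e_self = [[random.uniform(-1, 1) for _ in range(n_rot)] for _ in range(n_res)]
e_pair = {(i, j): [[random.uniform(-1, 1) for _ in range(n_rot)] for _ in range(n_rot)]
          for i in range(n_res) for j in range(i + 1, n_res)}

prob = LpProblem("protein_design", LpMinimize)
# q[i, r] = 1 if rotamer r is chosen at residue i.
q = {(i, r): LpVariable(f"q_{i}_{r}", cat="Binary")
     for i in range(n_res) for r in range(n_rot)}
# q2[i, ri, j, rj] = 1 if the pair (ri at i, rj at j) is chosen.
q2 = {(i, ri, j, rj): LpVariable(f"q2_{i}_{ri}_{j}_{rj}", cat="Binary")
      for (i, j) in e_pair for ri in range(n_rot) for rj in range(n_rot)}

prob += (lpSum(e_self[i][r] * q[i, r] for (i, r) in q)
         + lpSum(e_pair[i, j][ri][rj] * q2[i, ri, j, rj] for (i, ri, j, rj) in q2))

for i in range(n_res):  # exactly one rotamer per residue
    prob += lpSum(q[i, r] for r in range(n_rot)) == 1
for (i, j) in e_pair:   # pair variables must agree with the rotamer choices
    for ri in range(n_rot):
        prob += lpSum(q2[i, ri, j, rj] for rj in range(n_rot)) == q[i, ri]
    for rj in range(n_rot):
        prob += lpSum(q2[i, ri, j, rj] for ri in range(n_rot)) == q[j, rj]

prob.solve()
print([(i, r) for (i, r) in q if q[i, r].value() == 1])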

Message-passing based approximations to the linear programming dual

ILP solvers depend on linear programming (LP) algorithms, such as Simplex or barrier-based methods, to perform the LP relaxation at each branch. These LP algorithms were developed as general-purpose optimization methods and are not optimized for the protein design problem (Equation (1)). In consequence, the LP relaxation becomes the bottleneck of ILP solvers when the problem size is large. Recently, several alternatives based on message-passing algorithms have been designed specifically for the optimization of the LP relaxation of the protein design problem. These algorithms can approximate either the dual or the primal instances of the integer program, but in order to maintain guarantees on optimality, they are most useful when used to approximate the dual of the protein design problem, because approximating the dual guarantees that no solutions are missed. Message-passing-based approximations include the tree-reweighted max-product message-passing algorithm and the message-passing linear programming algorithm.

Optimization algorithms without guarantees

Monte Carlo and simulated annealing

Monte Carlo is one of the most widely used algorithms for protein design. In its simplest form, a Monte Carlo algorithm selects a residue at random, and in that residue a randomly chosen rotamer (of any amino acid) is evaluated. The new energy of the protein, E_new, is compared against the old energy E_old, and the new rotamer is accepted with probability

P = min( 1, exp( −(E_new − E_old) / (k_B T) ) )

where k_B is the Boltzmann constant and the temperature T can be chosen such that it is high in the initial rounds and is slowly annealed to overcome local minima.
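A Python sketch of Monte Carlo with simulated annealing on the same kind of toy random energies; the move set and cooling schedule are illustrative choices, not any specific program's defaults.

import math
import random

random.seed(0)
n_res, n_rot = 4, 3
e_self = [[random.uniform(-1, 1) for _ in range(n_rot)] for _ in range(n_res)]
e_pair = {(i, j): [[random.uniform(-1, 1) for _ in range(n_rot)] for _ in range(n_rot)]
          for i in range(n_res) for j in range(i + 1, n_res)}

def total_energy(conf):
    e = sum(e_self[i][conf[i]] for i in range(n_res))
    return e + sum(e_pair[i, j][conf[i]][conf[j]] for (i, j) in e_pair)

conf = [random.randrange(n_rot) for _ in range(n_res)]
energy = total_energy(conf)
temperature = 5.0  # in units where k_B = 1
for step in range(2000):
    i = random.randrange(n_res)
    old_rot = conf[i]
    conf[i] = random.randrange(n_rot)  # propose a random rotamer change
    new_energy = total_energy(conf)
    # Metropolis criterion: always accept downhill moves, accept uphill
    # moves with probability exp(-(E_new - E_old) / T).
    if new_energy <= energy or random.random() < math.exp((energy - new_energy) / temperature):
        energy = new_energy
    else:
        conf[i] = old_rot  # reject the move and restore the old rotamer
    temperature *= 0.998  # geometric annealing schedule
print(conf, energy)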

FASTER

The FASTER algorithm uses a combination of deterministic and stochastic criteria to optimize amino acid sequences. FASTER first uses DEE to eliminate rotamers that are not part of the optimal solution. Then, a series of iterative steps optimize the rotamer assignment.

Belief propagation

In belief propagation for protein design, the algorithm exchanges messages that describe the belief that each residue has about the probability of each rotamer in neighboring residues. The algorithm updates messages on every iteration and iterates until convergence or until a fixed number of iterations. Convergence is not guaranteed in protein design. The message m_{i→j}(r_j) that a residue i sends to each rotamer r_j at neighboring residue j can be written, in energy (min-sum) form, as:

m_{i→j}(r_j) = min_{r_i} [ E(r_i) + E(r_i, r_j) + Σ_{k ∈ N(i)\{j}} m_{k→i}(r_i) ]

where N(i) denotes the neighbors of residue i.

Both max-product and sum-product belief propagation have been used to optimize protein design.
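A Python sketch of the min-sum message updates above on toy energies, run for a fixed number of rounds since convergence is not guaranteed; messages are updated asynchronously, one directed edge at a time, and every residue pair is treated as neighboring.

import random

random.seed(0)
n_res, n_rot = 4, 3
e_self = [[random.uniform(-1, 1) for _ in range(n_rot)] for _ in range(n_res)]
e_pair = {(i, j): [[random.uniform(-1, 1) for _ in range(n_rot)] for _ in range(n_rot)]
          for i in range(n_res) for j in range(i + 1, n_res)}

def pair_e(i, ri, j, rj):
    return e_pair[i, j][ri][rj] if i < j else e_pair[j, i][rj][ri]

# messages[i, j][rj]: message from residue i about rotamer rj of residue j.
messages = {(i, j): [0.0] * n_rot
            for i in range(n_res) for j in range(n_res) if i != j}

for _ in range(20):  # fixed number of rounds
    for (i, j) in messages:
        for rj in range(n_rot):
            messages[i, j][rj] = min(
                e_self[i][ri] + pair_e(i, ri, j, rj)
                + sum(messages[k, i][ri] for k in range(n_res) if k not in (i, j))
                for ri in range(n_rot))

# Belief: each residue combines its self energy with all incoming messages.
for i in range(n_res):
    best = min(range(n_rot), key=lambda r: e_self[i][r]
               + sum(messages[k, i][r] for k in range(n_res) if k != i))
    print(i, best)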

Applications and examples of designed proteins

Enzyme design

The design of new enzymes is an application of protein design with huge bioengineering and biomedical potential. In general, designing a protein structure differs from designing an enzyme, because the design of enzymes must consider the many states involved in the catalytic mechanism. However, protein design is a prerequisite of de novo enzyme design because, at the very least, the design of catalysts requires a scaffold into which the catalytic mechanism can be inserted.

Great progress in de novo enzyme design, and redesign, was made in the first decade of the 21st century. In three major studies, David Baker and coworkers de novo designed enzymes for the retro-aldol reaction, a Kemp-elimination reaction, and for the Diels-Alder reaction. Furthermore, Stephen Mayo and coworkers developed an iterative method to design the most efficient known enzyme for the Kemp-elimination reaction. Also, in the laboratory of Bruce Donald, computational protein design was used to switch the specificity of one of the protein domains of the nonribosomal peptide synthetase that produces Gramicidin S, from its natural substrate phenylalanine to other noncognate substrates including charged amino acids; the redesigned enzymes had activities close to those of the wild-type.

Semi-rational design

Semi-rational design is a purposeful modification method based on a certain understanding of the sequence, structure, and catalytic mechanism of enzymes. This method sits between irrational design and rational design. It uses known information and methods to perform evolutionary modification on the specific functions of the target enzyme. The characteristic of semi-rational design is that it does not rely solely on random mutation and screening, but combines the concept of directed evolution. It creates a library of random mutants with diverse sequences through mutagenesis, error-prone PCR, DNA recombination, and site-saturation mutagenesis. At the same time, it uses the understanding of enzymes and design principles to purposefully screen for mutants with desired characteristics.

The methodology of semi-rational design emphasizes the in-depth understanding of enzymes and the control of the evolutionary process. It allows researchers to use known information to guide the evolutionary process, thereby improving efficiency and success rate. This method plays an important role in protein function modification because it can combine the advantages of irrational design and rational design, and can explore unknown space and use known knowledge for targeted modification.

Semi-rational design has a wide range of applications, including but not limited to enzyme optimization, modification of drug targets, and evolution of biocatalysts. Through this method, researchers can more effectively improve the functional properties of proteins to meet specific biotechnology or medical needs. Although this method has high requirements for information and technology and is relatively difficult to implement, with the development of computing technology and bioinformatics, the application prospects of semi-rational design in protein engineering are becoming increasingly broad.

Design for affinity

Protein–protein interactions are involved in most biotic processes. Many of the hardest-to-treat diseases, such as Alzheimer's disease, many forms of cancer (e.g., TP53), and human immunodeficiency virus (HIV) infection involve protein–protein interactions. Thus, to treat such diseases, it is desirable to design protein or protein-like therapeutics that bind one of the partners of the interaction and, thus, disrupt the disease-causing interaction. This requires designing protein-therapeutics for affinity toward its partner.

Protein–protein interactions can be designed using protein design algorithms because the principles that rule protein stability also rule protein–protein binding. Protein–protein interaction design, however, presents challenges not commonly present in protein design. One of the most important challenges is that, in general, the interfaces between proteins are more polar than protein cores, and binding involves a tradeoff between desolvation and hydrogen bond formation. To overcome this challenge, Bruce Tidor and coworkers developed a method to improve the affinity of antibodies by focusing on electrostatic contributions. They found that, for the antibodies designed in the study, reducing the desolvation costs of the residues in the interface increased the affinity of the binding pair.

Scoring binding predictions

Protein design energy functions must be adapted to score binding predictions because binding involves a trade-off between the lowest-energy conformations of the free proteins (E_P and E_L) and the lowest-energy conformation of the bound complex (E_PL):

ΔE_bind = E_PL − (E_P + E_L).

The K* algorithm approximates the binding constant by including conformational entropy in the free energy calculation. The K* algorithm considers only the lowest-energy conformations of the free and bound complexes (denoted by the sets P, L, and PL) to approximate the partition function of each complex:

K* = q_PL / (q_P · q_L),   where   q_X = Σ_{x ∈ X} exp(−E(x) / RT)
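A Python sketch of the K* score on made-up conformational ensembles; the energies, ensemble sizes, and RT value are placeholders chosen for illustration.

import math
import random

RT = 0.593  # kcal/mol near room temperature

def partition_function(energies):
    """q = sum over ensemble conformations of exp(-E / RT)."""
    return sum(math.exp(-e / RT) for e in energies)

random.seed(0)
E_PL = [random.uniform(-12, -8) for _ in range(50)]  # bound-complex ensemble
E_P = [random.uniform(-6, -4) for _ in range(50)]    # free protein ensemble
E_L = [random.uniform(-5, -3) for _ in range(50)]    # free ligand ensemble

k_star = partition_function(E_PL) / (partition_function(E_P) * partition_function(E_L))
print(f"K* score: {k_star:.3g}")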

Design for specificity

The design of protein–protein interactions must be highly specific because proteins can interact with a large number of proteins; successful design requires selective binders. Thus, protein design algorithms must be able to distinguish between on-target (or positive design) and off-target binding (or negative design). One of the most prominent examples of design for specificity is the design of specific bZIP-binding peptides by Amy Keating and coworkers for 19 out of the 20 bZIP families; 8 of these peptides were specific for their intended partner over competing peptides. Further, positive and negative design was also used by Anderson and coworkers to predict mutations in the active site of a drug target that conferred resistance to a new drug; positive design was used to maintain wild-type activity, while negative design was used to disrupt binding of the drug. Recent computational redesign by Costas Maranas and coworkers was also capable of experimentally switching the cofactor specificity of Candida boidinii xylose reductase from NADPH to NADH.

Protein resurfacing

Protein resurfacing consists of designing a protein's surface while preserving the overall fold, core, and boundary regions of the protein. Protein resurfacing is especially useful to alter the binding of a protein to other proteins. One of the most important applications of protein resurfacing was the design of the RSC3 probe to select broadly neutralizing HIV antibodies at the NIH Vaccine Research Center. First, residues outside of the binding interface between the gp120 HIV envelope protein and the previously discovered b12 antibody were selected for design. Then, the sequence space was selected based on evolutionary information, solubility, similarity with the wild-type, and other considerations. Then the RosettaDesign software was used to find optimal sequences in the selected sequence space. RSC3 was later used to discover the broadly neutralizing antibody VRC01 in the serum of a long-term HIV-infected non-progressor individual.

Design of globular proteins

Globular proteins are proteins that contain a hydrophobic core and a hydrophilic surface. Globular proteins often assume a stable structure, unlike fibrous proteins, which have multiple conformations. The three-dimensional structure of globular proteins is typically easier to determine through X-ray crystallography and nuclear magnetic resonance than that of fibrous proteins and membrane proteins, which makes globular proteins more attractive for protein design than the other types of proteins. Most successful protein designs have involved globular proteins. Both FSD-1 and Top7 were de novo designs of globular proteins. Five more protein structures were designed, synthesized, and verified in 2012 by the Baker group. These new proteins serve no biological function, but the structures are intended to act as building blocks that can be expanded to incorporate functional active sites. The structures were found computationally by using new heuristics based on analyzing the connecting loops between parts of the sequence that specify secondary structures.

Design of membrane proteins

Several transmembrane proteins have been successfully designed, along with many other membrane-associated peptides and proteins. Recently, Costas Maranas and his coworkers developed an automated tool to redesign the pore size of Outer Membrane Porin Type-F (OmpF) from E. coli to any desired sub-nm size and assembled the redesigned porins in membranes to perform precise angstrom-scale separation.

Other applications

One of the most desirable uses for protein design is for biosensors, proteins that sense the presence of specific compounds. Some attempts at designing biosensors include sensors for unnatural molecules, including TNT. More recently, Kuhlman and coworkers designed a biosensor of the kinase PAK1.


Peace movement

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Peace_movement