
Wednesday, August 30, 2023

Protein engineering

From Wikipedia, the free encyclopedia

Protein engineering is the process of developing useful or valuable proteins through the design and production of unnatural polypeptides, often by altering amino acid sequences found in nature. It is a young discipline, with much ongoing research into protein folding and protein recognition to establish design principles. Protein engineering has been used to improve the function of many enzymes for industrial catalysis. It is also a products-and-services market, with an estimated value of $168 billion by 2017.

There are two general strategies for protein engineering: rational protein design and directed evolution. These methods are not mutually exclusive; researchers often apply both. In the future, more detailed knowledge of protein structure and function, and advances in high-throughput screening, may greatly expand the abilities of protein engineering. Eventually, even unnatural amino acids may be incorporated via newer methods, such as expanded genetic codes, that allow novel amino acids to be encoded genetically.

Approaches

Rational design

In rational protein design, a scientist uses detailed knowledge of the structure and function of a protein to make desired changes. In general, this has the advantage of being inexpensive and technically easy, since site-directed mutagenesis methods are well developed. However, its major drawback is that detailed structural knowledge of a protein is often unavailable, and, even when it is available, it can be very difficult to predict the effects of various mutations, since structural information most often provides only a static picture of a protein's structure. Programs such as Folding@home and Foldit have utilized crowdsourcing techniques to gain insight into the folding motifs of proteins.

Computational protein design algorithms seek to identify novel amino acid sequences that are low in energy when folded to the pre-specified target structure. While the sequence-conformation space that needs to be searched is large, the most challenging requirement for computational protein design is a fast, yet accurate, energy function that can distinguish optimal sequences from similar suboptimal ones.
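
To make the search concrete, here is a minimal sketch of Metropolis Monte Carlo sequence optimization against a toy energy function. The energy model and the buried-position pattern are invented stand-ins for a real force field, not part of any published design method; real software couples such a search to rotamer-level energies on a fixed target backbone.

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def toy_energy(seq, buried_positions):
    """Toy energy: reward hydrophobic residues at buried positions and
    polar residues elsewhere. Stands in for a real force field."""
    hydrophobic = set("AILMFVWY")
    e = 0.0
    for i, aa in enumerate(seq):
        if (i in buried_positions) == (aa in hydrophobic):
            e -= 1.0  # favorable placement
        else:
            e += 1.0  # unfavorable placement
    return e

def design(length=30, steps=5000, temperature=1.0):
    """Metropolis Monte Carlo search for a low-energy sequence."""
    buried = set(range(0, length, 3))  # hypothetical buried positions
    seq = [random.choice(AMINO_ACIDS) for _ in range(length)]
    energy = toy_energy(seq, buried)
    for _ in range(steps):
        pos = random.randrange(length)
        old = seq[pos]
        seq[pos] = random.choice(AMINO_ACIDS)
        new_energy = toy_energy(seq, buried)
        # Accept downhill moves always, uphill moves with Boltzmann probability
        if new_energy <= energy or random.random() < math.exp((energy - new_energy) / temperature):
            energy = new_energy
        else:
            seq[pos] = old  # reject the move; restore the residue
    return "".join(seq), energy

print(design())
```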

Multiple sequence alignment

Without structural information about a protein, sequence analysis is often useful in elucidating information about it. These techniques involve alignment of target protein sequences with other related protein sequences. This alignment can show which amino acids are conserved across species and are therefore likely important for the function of the protein. These analyses can help to identify hot spot amino acids that can serve as target sites for mutations. Multiple sequence alignment utilizes databases such as PREFAB, SABMARK, OXBENCH, IRMBASE, and BAliBASE to cross-reference target protein sequences with known sequences. Multiple sequence alignment techniques are listed below.
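
Before turning to those techniques, a small illustration of how conservation can be read off an alignment: the sketch below scores each column by Shannon entropy, with low-entropy (highly conserved) columns suggesting hot spot candidates. The four-sequence alignment is hypothetical.

```python
import math
from collections import Counter

def column_conservation(alignment):
    """Score each column of a multiple sequence alignment by Shannon
    entropy; low entropy = highly conserved = candidate hot spot."""
    scores = []
    for col in range(len(alignment[0])):
        counts = Counter(seq[col] for seq in alignment if seq[col] != "-")
        total = sum(counts.values())
        if total == 0:          # all-gap column
            scores.append(0.0)
            continue
        entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
        scores.append(entropy)
    return scores

msa = ["MKTAYIAK", "MKTGYIAR", "MKSAYIAK", "MKTAYLAK"]  # hypothetical MSA
for i, s in enumerate(column_conservation(msa)):
    print(f"position {i + 1}: entropy {s:.2f}")
```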

Clustal W

This method begins by performing pairwise alignment of sequences using the k-tuple or Needleman–Wunsch methods. These methods calculate a matrix that depicts the pairwise similarity among the sequence pairs. Similarity scores are then transformed into distance scores, which are used to produce a guide tree by the neighbor-joining method. This guide tree is then employed to yield a multiple sequence alignment.
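
A compact sketch of the first stage of this pipeline follows: Needleman–Wunsch global scoring for each pair, then conversion of similarities into distances suitable for guide tree construction. The neighbor-joining step itself is omitted for brevity, and the scoring parameters are illustrative.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global pairwise alignment score (Needleman-Wunsch)."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        score[i][0] = i * gap
    for j in range(cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[-1][-1]

def distance_matrix(seqs):
    """Convert pairwise similarity scores into distances for tree building."""
    n = len(seqs)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            s = needleman_wunsch(seqs[i], seqs[j])
            max_s = min(len(seqs[i]), len(seqs[j]))  # upper bound on the score
            d[i][j] = d[j][i] = max_s - s  # higher similarity -> smaller distance
    return d

print(distance_matrix(["MKTAYI", "MKTGYI", "MRSAYL"]))  # hypothetical sequences
```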

Clustal Omega

This method is capable of aligning up to 190,000 sequences by utilizing the k-tuple method. Next, sequences are clustered using the mBed and k-means methods. A guide tree is then constructed using the UPGMA method, which is used by the HHalign package. This guide tree is used to generate the multiple sequence alignment.

MAFFT

This method utilizes a fast Fourier transform (FFT) that converts amino acid sequences into sequences composed of volume and polarity values for each residue. These new sequences are used to find homologous regions.

Kalign

This method utilizes the Wu-Manber approximate string matching algorithm to generate multiple sequence alignments.

Multiple sequence comparison by log expectation (MUSCLE)

This method utilizes k-mer and Kimura distances to generate multiple sequence alignments.
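
For illustration, the Kimura correction applied to a pair of aligned sequences is d = -ln(1 - p - p²/5), where p is the observed fraction of differing positions; a minimal version follows (the example pair is hypothetical).

```python
import math

def kimura_distance(seq_a, seq_b):
    """Kimura-corrected distance from the fractional difference of two
    aligned sequences: d = -ln(1 - p - p^2/5). The formula breaks down
    for very divergent pairs (large p)."""
    pairs = [(x, y) for x, y in zip(seq_a, seq_b) if x != "-" and y != "-"]
    p = sum(1 for x, y in pairs if x != y) / len(pairs)
    return -math.log(1.0 - p - (p * p) / 5.0)

print(kimura_distance("MKTAYIAK", "MKTGYIAR"))  # two differences in eight
```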

T-Coffee

This method utilizes a tree-based consistency objective function for alignment evaluation. T-Coffee has been shown to be 5–10% more accurate than Clustal W.

Coevolutionary analysis

Coevolutionary analysis is also known as correlated mutation, covariation, or co-substitution analysis. This type of rational design involves reciprocal evolutionary changes at evolutionarily interacting loci. Generally, this method begins with the generation of a curated multiple sequence alignment for the target sequence. The alignment is then subjected to manual refinement, which involves removal of highly gapped sequences as well as sequences with low sequence identity; this step increases the quality of the alignment. Next, the manually processed alignment is used for further coevolutionary measurements with distinct correlated mutation algorithms, which yield a coevolution scoring matrix. This matrix is filtered by applying various significance tests to extract significant coevolution values and remove background noise. Coevolutionary measurements are further evaluated to assess their performance and stringency. Finally, the results of the coevolutionary analysis are validated experimentally.
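
One of the simplest correlated-mutation scores is the mutual information between two alignment columns. The sketch below computes it for a hypothetical alignment; production pipelines add refinements such as sequence weighting and average-product correction before building the scoring matrix.

```python
import math
from collections import Counter

def mutual_information(alignment, i, j):
    """Mutual information (in bits) between columns i and j of an MSA;
    a basic correlated-mutation score."""
    col_i = [seq[i] for seq in alignment]
    col_j = [seq[j] for seq in alignment]
    n = len(alignment)
    p_i, p_j = Counter(col_i), Counter(col_j)
    p_ij = Counter(zip(col_i, col_j))
    mi = 0.0
    for (a, b), c in p_ij.items():
        pab = c / n
        mi += pab * math.log2(pab / ((p_i[a] / n) * (p_j[b] / n)))
    return mi

msa = ["AKLV", "AKLV", "GRLV", "GRIV"]       # hypothetical alignment
print(mutual_information(msa, 0, 1))          # columns 0 and 1 co-vary -> 1.0 bit
```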

Structural prediction

De novo protein synthesis benefits from knowledge of existing protein structures, which assists with the prediction of new protein structures. Methods for protein structure prediction fall into one of four classes: ab initio, fragment-based methods, homology modeling, and protein threading.

Ab initio

These methods involve free modeling without using structural information from a template. Ab initio methods aim to predict the native structure of a protein as the global minimum of its free energy. Some examples of ab initio methods are AMBER, GROMOS, GROMACS, CHARMM, OPLS, and ENCEPP12. General steps for ab initio methods begin with a geometric representation of the protein of interest. Next, a potential energy function model for the protein is developed; this model can be created using either molecular mechanics potentials or potential functions derived from protein structures. Following the development of a potential model, energy search techniques, including molecular dynamics simulations, Monte Carlo simulations, and genetic algorithms, are applied to the protein.

Fragment based

These methods use database information regarding known structures to match homologous fragments to the target protein sequences. These homologous structures are assembled into compact structures using scoring and optimization procedures, with the goal of achieving the lowest potential energy score. Web servers for fragment-based prediction include I-TASSER, ROSETTA, Rosetta@home, FRAGFOLD, CABS-fold, PROFESY, CREF, QUARK, UNDERTAKER, HMM, and ANGLOR.

Homology modeling

These methods are based upon the homology of proteins and are also known as comparative modeling. The first step in homology modeling is generally the identification of template sequences of known structure that are homologous to the query sequence. Next, the query sequence is aligned to the template sequence. Following the alignment, the structurally conserved regions are modeled using the template structure. This is followed by the modeling of side chains and loops that are distinct from the template. Finally, the modeled structure undergoes refinement and quality assessment. Servers available for homology modeling include SWISS-MODEL, MODELLER, ReformAlign, PyMod, TIP-STRUCTFAST, COMPASS, 3D-PSSM, SAM-T02, SAM-T99, HHpred, FUGUE, 3D-JIGSAW, META-PP, ROSETTA, and I-TASSER.

Protein threading

Protein threading can be used when a reliable homologue for the query sequence cannot be found. This method begins by obtaining a query sequence and a library of template structures. Next, the query sequence is threaded over the known template structures, and the candidate models are scored using scoring functions based upon the potential energy models of both query and template. The match with the lowest potential energy is then selected. Methods and servers for retrieving threading data and performing calculations include GenTHREADER, pGenTHREADER, pDomTHREADER, ORFEUS, PROSPECT, BioShell-Threading, FFAS03, RaptorX, HHpred, LOOPP server, Sparks-X, SEGMER, THREADER2, ESyPred3D, LIBRA, TOPITS, RAPTOR, COTH, and MUSTER.

For more information on rational design see site-directed mutagenesis.

Multivalent binding

Multivalent binding can be used to increase binding specificity and affinity through avidity effects. Having multiple binding domains in a single biomolecule or complex increases the likelihood that further binding events will occur once a first domain has bound. The avidity, or effective affinity, can be much higher than the sum of the individual affinities, providing a cost- and time-effective tool for targeted binding.

Multivalent proteins

Multivalent proteins are relatively easy to produce by post-translational modification or by multiplying the protein-coding DNA sequence. The main advantage of multivalent and multispecific proteins is that they can increase the effective affinity of a known protein for its target. In the case of an inhomogeneous target, using a combination of proteins to achieve multispecific binding can increase specificity, which has high applicability in protein therapeutics.

The most common examples of multivalent binding are antibodies, and there is extensive research into bispecific antibodies. Applications of bispecific antibodies cover a broad spectrum that includes diagnosis, imaging, prophylaxis, and therapy.

Directed evolution

In directed evolution, random mutagenesis, e.g. by error-prone PCR or sequence saturation mutagenesis, is applied to a protein, and a selection regime is used to select variants having desired traits. Further rounds of mutation and selection are then applied. This method mimics natural evolution and, in general, produces superior results to rational design. An additional process, termed DNA shuffling, mixes and matches pieces of successful variants to produce better results. Such processes mimic the recombination that occurs naturally during sexual reproduction. Advantages of directed evolution are that it requires no prior structural knowledge of a protein, nor any ability to predict what effect a given mutation will have. Indeed, the results of directed evolution experiments are often surprising, in that desired changes are frequently caused by mutations that were not expected to have any effect. The drawback is that directed evolution requires high-throughput screening, which is not feasible for all proteins. Large amounts of recombinant DNA must be mutated and the products screened for desired traits. The large number of variants often requires expensive robotic equipment to automate the process. Further, not all desired activities can be screened for easily.

Natural Darwinian evolution can be effectively imitated in the lab toward tailoring protein properties for diverse applications, including catalysis. Many experimental technologies exist to produce large and diverse protein libraries and for screening or selecting folded, functional variants. Folded proteins arise surprisingly frequently in random sequence space, an occurrence exploitable in evolving selective binders and catalysts. While more conservative than direct selection from deep sequence space, redesign of existing proteins by random mutagenesis and selection/screening is a particularly robust method for optimizing or altering extant properties. It also represents an excellent starting point for achieving more ambitious engineering goals. Allying experimental evolution with modern computational methods is likely the broadest, most fruitful strategy for generating functional macromolecules unknown to nature.

Significant progress has recently been made on the main challenges of designing high-quality mutant libraries, in the form of better descriptions of the effects of mutational loads on protein traits. Computational approaches have also made large advances in reducing the innumerably large sequence space to more manageable, screenable sizes, thus creating smart libraries of mutants. Library size has been further reduced by the identification of key beneficial residues using algorithms for systematic recombination. Finally, a significant step toward efficient reengineering of enzymes has been made with the development of more accurate statistical models and algorithms quantifying and predicting coupled mutational effects on protein functions.

Generally, directed evolution may be summarized as an iterative two-step process involving the generation of protein mutant libraries and high-throughput screening to select for variants with improved traits. This technique does not require prior knowledge of the protein's structure-function relationship. Directed evolution utilizes random or focused mutagenesis to generate libraries of mutant proteins. Random mutations can be introduced using either error-prone PCR or site saturation mutagenesis. Mutants may also be generated by recombination of multiple homologous genes. Nature has evolved only a limited number of beneficial sequences; directed evolution makes it possible to identify undiscovered protein sequences with novel functions. This ability is contingent on the protein's ability to tolerate amino acid substitutions without compromising folding or stability.

Directed evolution methods can be broadly categorized into two strategies, asexual and sexual methods.

Asexual methods

Asexual methods do not generate any crossovers between parental genes. Single genes are used to create mutant libraries using various mutagenic techniques. These asexual methods can produce either random or focused mutagenesis.

Random mutagenesis

Random mutagenic methods produce mutations at random throughout the gene of interest. Random mutagenesis can introduce the following types of mutations: transitions, transversions, insertions, deletions, inversions, missense mutations, and nonsense mutations. Examples of methods for producing random mutagenesis are below.

Error prone PCR

Error-prone PCR exploits the fact that Taq DNA polymerase lacks 3' to 5' exonuclease activity, which results in an error rate of 0.001–0.002% per nucleotide per replication. This method begins with choosing the gene, or the region within a gene, one wishes to mutate. Next, the extent of error required is calculated based upon the type and extent of activity one wishes to generate, and this determines the error-prone PCR strategy to be employed. Following PCR, the genes are cloned into a plasmid and introduced into competent cell systems. These cells are then screened for desired traits. Plasmids are then isolated from colonies showing improved traits and used as templates for the next round of mutagenesis. Error-prone PCR shows biases for certain mutations relative to others, such as a bias for transitions over transversions.

Rates of error in PCR can be increased in the following ways:

  1. Increase the concentration of magnesium chloride, which stabilizes non-complementary base pairing.
  2. Add manganese chloride to reduce base pair specificity.
  3. Increased and unbalanced addition of dNTPs.
  4. Addition of base analogs like dITP, 8-oxo-dGTP, and dPTP.
  5. Increase concentration of Taq polymerase.
  6. Increase extension time.
  7. Increase cycle time.
  8. Use less accurate Taq polymerase.

See also polymerase chain reaction for more information.
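
As a back-of-the-envelope illustration of how these parameters interact, the expected number of mutations per gene is roughly the per-nucleotide error rate times the gene length times the effective number of template duplications. The numbers below are illustrative, not protocol values.

```python
def expected_mutations(error_rate, gene_length, doublings):
    """Rough expected mutations per gene after error-prone PCR.
    error_rate: per nucleotide per duplication (~1e-5 to 2e-5 for Taq);
    doublings: effective template duplications, not thermal cycles."""
    return error_rate * gene_length * doublings

# Example: a 900 bp gene, Taq at ~2e-5 errors/nt/duplication, 20 doublings
print(expected_mutations(2e-5, 900, 20))  # ~0.36 mutations per gene copy
```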

Rolling circle error-prone PCR

This PCR method is based upon rolling circle amplification, which is modeled on the method that bacteria use to amplify circular DNA. This method results in linear DNA duplexes. These fragments contain tandem repeats of circular DNA, called concatemers, which can be transformed into bacterial strains. Mutations are introduced by first cloning the target sequence into an appropriate plasmid. Next, the amplification process begins using random hexamer primers and Φ29 DNA polymerase under error-prone rolling circle amplification conditions: 1.5 pM of template DNA, 1.5 mM MnCl2, and a 24-hour reaction time. MnCl2 is added to the reaction mixture to promote random point mutations in the DNA strands. Mutation rates can be increased by increasing the concentration of MnCl2 or by decreasing the concentration of template DNA. Error-prone rolling circle amplification is advantageous relative to error-prone PCR because it uses universal random hexamer primers rather than specific primers, the reaction products do not need to be treated with ligases or endonucleases, and the reaction is isothermal.

Chemical mutagenesis

Chemical mutagenesis involves the use of chemical agents to introduce mutations into genetic sequences. Examples of chemical mutagens follow.

Sodium bisulfite is effective at mutating G/C-rich genomic sequences, because it catalyses the deamination of unmethylated cytosine to uracil.

Ethyl methanesulfonate alkylates guanine residues. This alteration causes errors during DNA replication.

Nitrous acid causes transitions by deaminating adenine and cytosine.

The dual approach to random chemical mutagenesis is an iterative two-step process. First, it involves the in vivo chemical mutagenesis of the gene of interest via EMS. Next, the treated gene is isolated and cloned into an untreated expression vector in order to prevent mutations in the plasmid backbone. This technique preserves the plasmid's genetic properties.

Targeting glycosylases to embedded arrays for mutagenesis (TaGTEAM)

This method has been used to create targeted in vivo mutagenesis in yeast. It involves the fusion of a 3-methyladenine DNA glycosylase to a tetR DNA-binding domain, and has been shown to increase mutation rates more than 800-fold in regions of the genome containing tetO sites.

Mutagenesis by random insertion and deletion

This method involves alteration of sequence length via simultaneous deletion and insertion of chunks of bases of arbitrary length. It has been shown to produce proteins with new functionalities via the introduction of new restriction sites, specific codons, and four-base codons for non-natural amino acids.

Transposon based random mutagenesis

Recently, many methods for transposon-based random mutagenesis have been reported. These methods include, but are not limited to, the following: PERMUTE (random circular permutation), random protein truncation, random nucleotide triplet substitution, random domain/tag/multiple amino acid insertion, codon scanning mutagenesis, and multicodon scanning mutagenesis. These techniques all require the design of mini-Mu transposons. Thermo Scientific manufactures kits for the design of these transposons.

Random mutagenesis methods altering the target DNA length

These methods involve altering gene length via insertion and deletion mutations. An example is the tandem repeat insertion (TRINS) method, which generates tandem repeats of random fragments of the target gene via rolling circle amplification and concurrently incorporates these repeats into the target gene.

Mutator strains

Mutator strains are bacterial cell lines that are deficient in one or more DNA repair mechanisms. An example of a mutator strain is E. coli XL1-Red, which is deficient in the MutS, MutD, and MutT DNA repair pathways. Mutator strains are useful for introducing many types of mutation; however, they show progressive sickness in culture because of the accumulation of mutations in the strain's own genome.

Focused mutagenesis

Focused mutagenic methods produce mutations at predetermined amino acid residues. These techniques require an understanding of the sequence-function relationship of the protein of interest. Understanding this relationship allows the identification of residues that are important for stability, stereoselectivity, and catalytic efficiency. Examples of methods that produce focused mutagenesis are below.

Site saturation mutagenesis

Site saturation mutagenesis is a PCR-based method used to target amino acids with significant roles in protein function. The two most common techniques for performing this are whole-plasmid single PCR and overlap extension PCR.

Whole-plasmid single PCR is also referred to as site-directed mutagenesis (SDM). SDM products are subjected to DpnI endonuclease digestion, which cleaves only the parental strand, because the parental strand contains a GmATC site methylated at N6 of adenine. SDM does not work well for plasmids larger than ten kilobases, and it is only capable of replacing two nucleotides at a time.

Overlap extension PCR requires the use of two pairs of primers, one primer in each pair containing the mutation. A first round of PCR using these primer pairs is performed, and two double-stranded DNA duplexes are formed. A second round of PCR is then performed, in which these duplexes are denatured and annealed with the primer pairs again to produce heteroduplexes in which each strand carries the mutation. Any gaps in these newly formed heteroduplexes are filled with DNA polymerases, and the products are further amplified.
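
A practical piece of arithmetic for any saturation experiment is oversampling: clones are drawn at random, so a library must be screened well beyond its theoretical diversity. The sketch below assumes uniform sampling and NNK degeneracy (32 codons encoding all 20 amino acids); the 95% figure and the formula are the standard approximation, not values from this article.

```python
import math

def oversampling(num_variants, confidence=0.95):
    """Clones to screen so that any given variant appears at least once
    with probability `confidence`, assuming uniform sampling:
    (1 - 1/V)**L <= 1 - confidence  =>  L ~= -V * ln(1 - confidence)."""
    return math.ceil(-num_variants * math.log(1.0 - confidence))

# NNK saturation (N = A/C/G/T, K = G/T) encodes all 20 amino acids in
# 32 codons; saturating n positions gives 32**n codon variants.
for sites in (1, 2, 3):
    v = 32 ** sites
    print(f"{sites} NNK site(s): {v} codon variants, "
          f"screen ~{oversampling(v)} clones for 95% confidence")
```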

Sequence saturation mutagenesis (SeSaM)

Sequence saturation mutagenesis results in the randomization of the target sequence at every nucleotide position. This method begins with the generation of variable-length DNA fragments tailed with universal bases by terminal transferases at the 3' termini. Next, these fragments are extended to full length using a single-stranded template. The universal bases are then replaced with random standard bases, causing mutations. There are several modified versions of this method, such as SeSaM-Tv-II, SeSaM-Tv+, and SeSaM-III.

Single primer reactions in parallel (SPRINP)

This site saturation mutagenesis method involves two separate PCR reactions: the first uses only forward primers, while the second uses only reverse primers. This avoids primer dimer formation.

Mega primed and ligase free focused mutagenesis

This site saturation mutagenic technique begins with one mutagenic oligonucleotide and one universal flanking primer. These two reactants are used for an initial PCR cycle. Products from this first PCR cycle are used as megaprimers for the next PCR.

Ω-PCR

This site saturation mutagenic method is based on overlap extension PCR. It is used to introduce mutations at any site in a circular plasmid.

PFunkel-OmniChange-OSCARR

This method utilizes user-defined site-directed mutagenesis at single or multiple sites simultaneously. OSCARR is an acronym for One-pot Simple methodology for Cassette Randomization and Recombination; this randomization and recombination results in randomization of desired fragments of a protein. OmniChange is a sequence-independent, multisite saturation mutagenesis method that can saturate up to five independent codons on a gene.

Trimer-dimer mutagenesis

This method removes redundant codons and stop codons.

Cassette mutagenesis

This is a PCR-based method. Cassette mutagenesis begins with the synthesis of a DNA cassette containing the gene of interest, flanked on either side by restriction sites. The endonuclease that cleaves these restriction sites also cleaves sites in the target plasmid. The DNA cassette and the target plasmid are both treated with endonucleases to cleave these restriction sites and create sticky ends. The products of this cleavage are then ligated together, resulting in the insertion of the gene into the target plasmid. An alternative form, combinatorial cassette mutagenesis, is used to identify the functions of individual amino acid residues in the protein of interest. Recursive ensemble mutagenesis then utilizes information from previous rounds of combinatorial cassette mutagenesis. Codon cassette mutagenesis allows the insertion or replacement of a single codon at a particular site in double-stranded DNA.

Sexual methods

Sexual methods of directed evolution involve in vitro recombination that mimics natural in vivo recombination. Generally, these techniques require high sequence homology between parental sequences. They are often used to recombine two different parental genes, and they create crossovers between these genes.

In vitro homologous recombination

Homologous recombination can be categorized as either in vivo or in vitro. In vitro homologous recombination mimics natural in vivo recombination. These in vitro recombination methods require high sequence homology between parental sequences, and they exploit the natural diversity in parental genes by recombining them to yield chimeric genes. The resulting chimeras show a blend of parental characteristics.

DNA shuffling

This in vitro technique was among the first recombination methods developed. It begins with the digestion of homologous parental genes into small fragments by DNase I. These small fragments are then purified from undigested parental genes. The purified fragments are reassembled using primerless PCR, in which homologous fragments from different parental genes prime each other, producing chimeric DNA. Chimeric DNA of parental length is then amplified using end-terminal primers in regular PCR.
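
The outcome of shuffling can be caricatured in a few lines: walk along two homologous parents and switch template at fragment boundaries. This toy ignores the local sequence-identity requirement that real reassembly imposes at each crossover, and the parent sequences are invented.

```python
import random

def shuffle_parents(parent_a, parent_b, fragment_len=6):
    """Toy model of DNA shuffling's outcome: build a chimera by choosing
    a template anew at each fragment boundary. Real shuffling requires
    local identity at crossovers; this sketch skips that check."""
    assert len(parent_a) == len(parent_b)
    chimera = []
    template = random.choice([parent_a, parent_b])
    for start in range(0, len(parent_a), fragment_len):
        chimera.append(template[start:start + fragment_len])
        template = random.choice([parent_a, parent_b])  # possible crossover
    return "".join(chimera)

pa = "ATGGCTAGCAAAGGAGAAGAACTTTTC"  # hypothetical parent A
pb = "ATGGCAAGCAAAGGTGAAGAACTGTTC"  # hypothetical homologous parent B
print(shuffle_parents(pa, pb))
```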

Random priming in vitro recombination (RPR)

This in vitro homologous recombination method begins with the synthesis of many short gene fragments carrying point mutations, using random-sequence primers. These fragments are reassembled to full-length parental genes using primerless PCR. The reassembled sequences are then amplified by PCR and subjected to further selection. This method is advantageous relative to DNA shuffling because no DNase I is used, so there is no bias for recombination next to pyrimidine nucleotides. It also benefits from synthetic random primers that are uniform in length and free of bias. Finally, the method is independent of the length of the DNA template sequence and requires only a small amount of parental DNA.

Truncated metagenomic gene-specific PCR

This method generates chimeric genes directly from metagenomic samples. It begins with isolation of the desired gene by functional screening of a metagenomic DNA sample. Next, specific primers are designed and used to amplify the homologous genes from different environmental samples. Finally, chimeric libraries are generated by shuffling these amplified homologous genes to retrieve the desired functional clones.

Staggered extension process (StEP)

This in vitro method is based on template switching to generate chimeric genes. This PCR-based method begins with an initial denaturation of the template, followed by annealing of primers and a short extension time. All subsequent cycles generate annealing between the short fragments produced in previous cycles and different parts of the template. These short fragments and the templates anneal based on sequence complementarity; this process of fragments annealing to template DNA is known as template switching. The annealed fragments then serve as primers for further extension. This method is carried out until a chimeric gene sequence of parental length is obtained. Its execution requires only flanking primers to begin, and no DNase I enzyme is needed.

Random chimeragenesis on transient templates (RACHITT)

This method has been shown to generate chimeric gene libraries with an average of 14 crossovers per chimeric gene. It begins by aligning fragments from a parental top strand onto the bottom strand of a uracil-containing template from a homologous gene. 5' and 3' overhanging flaps are cleaved and gaps are filled by the exonuclease and endonuclease activities of Pfu and Taq DNA polymerases. The uracil-containing template is then removed from the heteroduplex by treatment with a uracil-DNA glycosylase, followed by further amplification using PCR. This method is advantageous because it generates chimeras with relatively high crossover frequency, but it is somewhat limited by its complexity and by the need to generate single-stranded DNA and uracil-containing single-stranded template DNA.

Synthetic shuffling

Shuffling of synthetic degenerate oligonucleotides adds flexibility to shuffling methods, since oligonucleotides containing optimal codons and beneficial mutations can be included.

In vivo homologous recombination

Cloning performed in yeast involves PCR-dependent reassembly of fragmented expression vectors. These reassembled vectors are then introduced into, and cloned in, yeast. Using yeast to clone the vector avoids the toxicity and counter-selection that would be introduced by ligation and propagation in E. coli.

Mutagenic organized recombination process by homologous in vivo grouping (MORPHING)

This method introduces mutations into specific regions of genes while leaving other parts intact by utilizing the high frequency of homologous recombination in yeast.

Phage-assisted continuous evolution (PACE)

This method utilizes a bacteriophage with a modified life cycle to transfer evolving genes from host to host. The phage's life cycle is designed in such a way that the transfer is correlated with the activity of interest from the enzyme. This method is advantageous because it requires minimal human intervention for the continuous evolution of the gene.

In vitro non-homologous recombination methods

These methods are based upon the fact that proteins can exhibit similar structural identity while lacking sequence homology.

Exon shuffling

Exon shuffling is the combination of exons from different proteins by recombination events occurring at introns. Orthologous exon shuffling involves combining exons from orthologous genes of different species. Orthologous domain shuffling involves shuffling of entire protein domains from orthologous genes of different species. Paralogous exon shuffling involves shuffling of exons from different genes of the same species. Paralogous domain shuffling involves shuffling of entire protein domains from paralogous proteins of the same species. Functional homolog shuffling involves shuffling of non-homologous domains that are functionally related. All of these processes begin with amplification of the desired exons from different genes using chimeric synthetic oligonucleotides. The amplification products are then reassembled into full-length genes using primerless PCR. During these PCR cycles the fragments act as templates and primers, resulting in chimeric full-length genes, which are then subjected to screening.

Incremental truncation for the creation of hybrid enzymes (ITCHY)

Fragments of parental genes are created using controlled digestion by exonuclease III. These fragments are blunt-ended using an endonuclease and ligated to produce hybrid genes. THIO-ITCHY is a modified ITCHY technique that utilizes nucleotide triphosphate analogs such as α-phosphothioate dNTPs; incorporation of these nucleotides blocks digestion by exonuclease III, an inhibition known as spiking. Spiking can be accomplished by first truncating genes with exonuclease to create fragments with short single-stranded overhangs. These fragments then serve as templates for amplification by DNA polymerase in the presence of small amounts of phosphothioate dNTPs, and the resulting fragments are ligated together to form full-length genes. Alternatively, the intact parental genes can be amplified by PCR in the presence of both normal dNTPs and phosphothioate dNTPs. These full-length amplification products are then digested with an exonuclease; digestion continues until the exonuclease encounters an α-phosphothioate dNTP, resulting in fragments of different lengths. These fragments are then ligated together to generate chimeric genes.

SCRATCHY

This method generates libraries of hybrid genes exhibiting multiple crossovers by combining DNA shuffling and ITCHY. It begins with the construction of two independent ITCHY libraries: the first with gene A on the N-terminus, and the other with gene B on the N-terminus. These hybrid gene fragments are separated, using either restriction enzyme digestion or PCR with terminal primers, via agarose gel electrophoresis. The isolated fragments are then mixed together and further digested using DNase I. Digested fragments are then reassembled by primerless PCR with template switching.

Recombined extension on truncated templates (RETT)

This method generates libraries of hybrid genes by template switching of uni-directionally growing polynucleotides in the presence of single stranded DNA fragments as templates for chimeras. This method begins with the preparation of single stranded DNA fragments by reverse transcription from target mRNA. Gene specific primers are then annealed to the single stranded DNA. These genes are then extended during a PCR cycle. This cycle is followed by template switching and annealing of the short fragments obtained from the earlier primer extension to other single stranded DNA fragments. This process is repeated until full length single stranded DNA is obtained.

Sequence homology-independent protein recombination (SHIPREC)

This method generates recombination between genes with little to no sequence homology. The chimeras are fused via a linker sequence containing several restriction sites. This construct is then digested using DNase I, and the fragments are made blunt-ended using S1 nuclease. These blunt-ended fragments are joined into a circular sequence by ligation. The circular construct is then linearized using restriction enzymes whose restriction sites are present in the linker region. This results in a library of chimeric genes in which the contribution of each gene to the 5' and 3' ends is reversed compared with the starting construct.

Sequence independent site directed chimeragenesis (SISDC)

This method results in a library of genes with multiple crossovers from several parental genes. It does not require sequence identity among the parental genes, but it does require one or two conserved amino acids at every crossover position. It begins with alignment of parental sequences and identification of consensus regions that serve as crossover sites. This is followed by the incorporation of specific tags containing restriction sites, and the subsequent removal of the tags by digestion with Bac1, resulting in genes with cohesive ends. These gene fragments are mixed and ligated in an appropriate order to form chimeric libraries.

Degenerate homo-duplex recombination (DHR)

This method begins with alignment of homologous genes, followed by identification of regions of polymorphism. Next, the top strand of the gene is divided into small degenerate oligonucleotides, and the bottom strand is digested into oligonucleotides to serve as scaffolds. These fragments are combined in solution, and the top strand oligonucleotides are assembled onto the bottom strand oligonucleotides. Gaps between these fragments are filled with polymerase and ligated.

Random multi-recombinant PCR (RM-PCR)

This method involves the shuffling of multiple DNA fragments without homology in a single PCR. This results in the reconstruction of complete proteins by assembly of modules encoding different structural units.

User friendly DNA recombination (USERec)

This method begins with the amplification of the gene fragments that need to be recombined, using uracil-containing dNTPs. The amplification mixture also contains primers and PfuTurbo Cx Hotstart DNA polymerase. The amplified products are next incubated with the USER enzyme, which catalyzes the removal of uracil residues from DNA, creating single-base gaps. The USER-treated fragments are mixed and ligated using T4 DNA ligase and subjected to DpnI digestion to remove the template DNA. The resulting single-stranded fragments are amplified by PCR and transformed into E. coli.

Golden Gate shuffling (GGS) recombination

This method allows the recombination of at least nine different fragments in an acceptor vector by using a type IIS restriction enzyme, which cuts outside of its recognition sites. It begins with subcloning of the fragments into separate vectors to create BsaI flanking sequences on both sides. These vectors are then cleaved using the type IIS restriction enzyme BsaI, which generates four-nucleotide single-strand overhangs. Fragments with complementary overhangs are hybridized and ligated using T4 DNA ligase. Finally, these constructs are transformed into E. coli cells, which are screened for expression levels.

Phosphorothioate-based DNA recombination method (PRTec)

This method can be used to recombine structural elements or entire protein domains. It is based on phosphorothioate chemistry, which allows the specific cleavage of phosphorothioate diester bonds. The first step is amplification of the fragments that need to be recombined, along with the vector backbone, using primers bearing phosphorothioated nucleotides at their 5' ends. The amplified PCR products are cleaved in an ethanol-iodine solution at high temperature. These fragments are then hybridized at room temperature and transformed into E. coli, which repairs any nicks.

Integron

This system is based upon a natural site-specific recombination system in E. coli, the integron system, which produces natural gene shuffling. The method was used to construct and optimize a functional tryptophan biosynthetic operon in trp-deficient E. coli by delivering individual recombination cassettes of the trpA-E genes, along with regulatory elements, via the integron system.

Y-Ligation based shuffling (YLBS)

This method generates single-stranded DNA strands that encompass a block sequence at either the 5' or 3' end, complementary sequences in a stem-loop region, and a D branch region serving as a primer binding site for PCR. Equivalent amounts of 5' and 3' half-strands are mixed and form hybrids due to the complementarity in the stem region. Hybrids with a free phosphorylated 5' end on the 3' half-strand are then ligated to free 3' ends of 5' half-strands using T4 DNA ligase in the presence of 0.1 mM ATP. The ligated products are then amplified by two types of PCR to generate pre-5'-half and pre-3'-half PCR products. These PCR products are converted to single strands via avidin-biotin binding to the 5' ends of primers containing biotin-labeled stem sequences. Next, biotinylated 5' half-strands and non-biotinylated 3' half-strands are used as the 5' and 3' half-strands for the next Y-ligation cycle.

Semi-rational design

Semi-rational design uses information about a protein's sequence, structure, and function, in tandem with predictive algorithms, to identify the target amino acid residues most likely to influence protein function. Mutating these key residues creates libraries of mutant proteins that are more likely to have enhanced properties.

Advances in semi-rational enzyme engineering and de novo enzyme design provide researchers with powerful and effective new strategies for manipulating biocatalysts. Integration of sequence- and structure-based approaches in library design has proven to be a valuable guide for enzyme redesign. Generally, current computational de novo and redesign methods do not compare to evolved variants in catalytic performance. Although experimental optimization may be achieved using directed evolution, further improvements in the accuracy of structure predictions and greater catalytic ability will come from improvements in design algorithms. Further functional enhancements may be included in future simulations by integrating protein dynamics.

Biochemical and biophysical studies, along with fine-tuning of predictive frameworks will be useful to experimentally evaluate the functional significance of individual design features. Better understanding of these functional contributions will then give feedback for the improvement of future designs.

Directed evolution will likely not be replaced as the method of choice for protein engineering, although computational protein design has fundamentally changed the way protein engineering can manipulate biomacromolecules. Smaller, more focused, and functionally rich libraries may be generated using methods that incorporate predictive frameworks for hypothesis-driven protein engineering. New design strategies and technical advances have begun a departure from traditional protocols such as directed evolution, which remains the most effective strategy for identifying top-performing candidates in focused libraries. Whole-gene library synthesis is replacing shuffling and mutagenesis protocols for library preparation, and highly specific low-throughput screening assays are increasingly applied in place of monumental screening and selection efforts involving millions of candidates. Together, these developments are poised to take protein engineering beyond directed evolution and toward practical, more efficient strategies for tailoring biocatalysts.

Screening and selection techniques

Once a protein has undergone directed evolution, rational design, or semi-rational design, the libraries of mutant proteins must be screened to determine which mutants show enhanced properties. Phage display methods are one option for screening proteins. This method involves the fusion of genes encoding the variant polypeptides with phage coat protein genes. Protein variants expressed on phage surfaces are selected by binding with immobilized targets in vitro. Phages carrying selected protein variants are then amplified in bacteria, followed by the identification of positive clones by enzyme-linked immunosorbent assay (ELISA). These selected phages are then subjected to DNA sequencing.

Cell surface display systems can also be utilized to screen mutant polypeptide libraries. The library mutant genes are incorporated into expression vectors which are then transformed into appropriate host cells. These host cells are subjected to further high throughput screening methods to identify the cells with desired phenotypes.

Cell-free display systems have been developed to exploit in vitro protein translation, or cell-free translation. These methods include mRNA display, ribosome display, covalent and non-covalent DNA display, and in vitro compartmentalization.

Enzyme engineering

Enzyme engineering is the application of modifying an enzyme's structure (and, thus, its function) or modifying the catalytic activity of isolated enzymes to produce new metabolites, to allow new (catalyzed) pathways for reactions to occur, or to convert certain compounds into others (biotransformation). These products are useful as chemicals, pharmaceuticals, fuels, foods, or agricultural additives.

An enzyme reactor consists of a vessel containing a reaction medium used to perform a desired conversion by enzymatic means. The enzymes used in this process are free in the solution. Microorganisms are also an important source of native enzymes.

Examples of engineered proteins

Computing methods have been used to design a protein with a novel fold, named Top7, and sensors for unnatural molecules. The engineering of fusion proteins has yielded rilonacept, a pharmaceutical that has secured Food and Drug Administration (FDA) approval for treating cryopyrin-associated periodic syndrome.

Another computing method, IPRO, successfully engineered the switching of cofactor specificity of Candida boidinii xylose reductase. Iterative Protein Redesign and Optimization (IPRO) redesigns proteins to increase or give specificity to native or novel substrates and cofactors. This is done by repeatedly randomly perturbing the structure of the proteins around specified design positions, identifying the lowest energy combination of rotamers, and determining whether the new design has a lower binding energy than prior ones.
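
The accept-if-better loop at the heart of such redesign can be sketched generically. This is an illustration of the iterative scheme, not the actual IPRO implementation, and the toy objective and target sequence are hypothetical.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mutate(seq):
    """Randomly perturb one design position."""
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(AMINO_ACIDS) + seq[i + 1:]

def iterative_redesign(score, initial, iterations=1000):
    """Generic accept-if-better redesign loop: perturb the current design,
    rescore it, and keep the change only if the score improves (a lower
    score stands in for a lower binding energy)."""
    best, best_score = initial, score(initial)
    for _ in range(iterations):
        candidate = mutate(best)
        s = score(candidate)
        if s < best_score:
            best, best_score = candidate, s
    return best, best_score

# Toy objective: count mismatches against a hypothetical target sequence
target = "ACDEFGHIKL"
print(iterative_redesign(lambda s: sum(a != b for a, b in zip(s, target)), "L" * 10))
```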

Computation-aided design has also been used to engineer complex properties of a highly ordered nano-protein assembly. A protein cage, E. coli bacterioferritin (EcBfr), which naturally shows structural instability and incomplete self-assembly behavior by populating two oligomerization states, is the model protein in this study. Through computational analysis and comparison to its homologs, this protein has been found to have a smaller-than-average dimeric interface on its two-fold symmetry axis, due mainly to the existence of an interfacial water pocket centered on two water-bridged asparagine residues. To investigate the possibility of engineering EcBfr for modified structural stability, a semi-empirical computational method was used to virtually explore the energy differences of the 480 possible mutants at the dimeric interface relative to the wild-type EcBfr. This computational study also converged on the water-bridged asparagines. Replacing these two asparagines with hydrophobic amino acids results in proteins that fold into alpha-helical monomers and assemble into cages, as evidenced by circular dichroism and transmission electron microscopy. Both thermal and chemical denaturation experiments confirm that, in agreement with the calculations, all redesigned proteins possess increased stability. One of the three mutations shifts the population in favor of the higher-order oligomerization state in solution, as shown by both size exclusion chromatography and native gel electrophoresis.

An in silico method, PoreDesigner, was successfully developed to redesign the bacterial channel protein OmpF to reduce its 1 nm pore size to any desired sub-nanometer dimension. Transport experiments on the narrowest designed pores revealed complete salt rejection when the proteins were assembled in biomimetic block-polymer matrices.

Modernization theory

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Modernization_theory

Modernization theory is used to explain the process of modernization within societies. The "classical" theories of modernization of the 1950s and 1960s drew on sociological analyses of Karl Marx, Emile Durkheim, and a partial reading of Max Weber, and were strongly influenced by the writings of Harvard sociologist Talcott Parsons. Modernization theory was a dominant paradigm in the social sciences in the 1950s and 1960s, then went into a deep eclipse. It made a comeback after 1991, when Francis Fukuyama wrote about the end of the Cold War as confirmation of modernization theory and, more generally, of universal history. But the theory remains a controversial model.

Modernization refers to a model of a progressive transition from a "pre-modern" or "traditional" to a "modern" society. Modernization theory suggests that traditional societies will develop as they adopt more modern practices. Proponents of modernization theory claim that modern states are wealthier and more powerful and that their citizens are freer to enjoy a higher standard of living. Developments such as new data technology and the need to update traditional methods in transport, communication and production make modernization necessary or at least preferable to the status quo. That view makes critique difficult since it implies that such developments control the limits of human interaction, not vice versa. And yet, seemingly paradoxically, it also implies that human agency controls the speed and severity of modernization. Supposedly, instead of being dominated by tradition, societies undergoing the process of modernization typically arrive at forms of governance dictated by abstract principles. Traditional religious beliefs and cultural traits, according to the theory, usually become less important as modernization takes hold.

The theory looks at the internal factors of a country while assuming that with assistance, "traditional" countries can be brought to development in the same manner more developed countries have been. Modernization theory both attempts to identify the social variables that contribute to social progress and development of societies and seeks to explain the process of social evolution. Authors such as Daniel Lerner explicitly equated modernization with Westernization.

Today, the concept of modernization is understood in three different meanings: 1) as the internal development of Western Europe and North America relating to the European New Era; 2) as a process by which countries that do not belong to the first group of countries, aim to catch up with them; 3) as processes of evolutionary development of the most modernized societies (Western Europe and North America), i.e. modernization as a permanent process, carried out through reform and innovation, which today means a transition to a postindustrial society. Historians link modernization to the processes of urbanization and industrialization and the spread of education. As Kendall (2007) notes, "Urbanization accompanied modernization and the rapid process of industrialization." In sociological critical theory, modernization is linked to an overarching process of rationalisation. When modernization increases within a society, the individual becomes increasingly important, eventually replacing the family or community as the fundamental unit of society. It is also a subject taught in traditional Advanced Placement World History classes.

Modernization theory is subject to criticism originating among socialist and free-market ideologies, world-systems theorists, globalization theorists and dependency theorists among others. Modernization theory stresses not only the process of change but also the responses to that change. It also looks at internal dynamics while referring to social and cultural structures and the adaptation of new technologies.

The rise and fall of modernization theory

The modernization theory of the 1950s and 1960s drew on classical evolutionary theory and a Parsonian reading of Weber's ideas about a transition from traditional to modern society. Parsons had translated Weber's works into English in the 1930s and provided his own interpretation.

After 1945 the Parsonian version became widely used in sociology and other social sciences. Some of the thinkers associated with modernization theory are Marion J. Levy Jr., Gabriel Almond, Seymour Martin Lipset, Walt Rostow, Daniel Lerner, Lucian Pye, David Apter, Alex Inkeles, Cyril Edwin Black, Bert F. Hoselitz, Myron Weiner, and Karl Deutsch.

By the late 1960s, opposition to modernization theory had developed because the theory was too general and did not fit all societies in quite the same way. Yet, with the end of the Cold War, a few attempts to revive modernization theory were carried out. Francis Fukuyama argued for the use of modernization theory as universal history. A more academic effort to revise modernization theory was that of Ronald Inglehart and Christian Welzel in Modernization, Cultural Change, and Democracy (2005). Inglehart and Welzel amended the 1960s version of modernization theory in significant ways. Counter to Lipset, who associated industrial growth with democratization, Inglehart and Welzel did not see an association between industrialization and democratization. Rather, they held that only at a later stage in the process of economic modernization, which various authors have characterized as post-industrial, did values conducive to democratization, which Inglehart and Welzel call "self-expression values", emerge.

Nonetheless, these efforts to revive modernization theory were criticized by many (see the section on "Criticisms and alternatives" below), and the theory remained a controversial one.

Modernization and democracy

The relationship between modernization and democracy or democratization is one of the most researched topics in comparative politics. Many studies show that modernization has contributed to democracy in some countries. For example, Seymour Martin Lipset argued that modernization can turn into democracy. There is academic debate over the drivers of democracy, because there are theories that support economic growth as both a cause and an effect of the institution of democracy. "Lipset's observation that democracy is related to economic development, first advanced in 1959, has generated the largest body of research on any topic in comparative politics."

Anderson uses the idea of an elongated diamond to describe the concentration of power in the hands of a few at the top under authoritarian leadership. He develops this by describing the shift in power from the elite class to the middle class that occurs when modernization takes hold. Socioeconomic modernization allows a democracy to develop further and influences its success. It follows that as socioeconomic conditions level out, levels of democracy should further increase.

Larry Diamond and Juan Linz, who worked with Lipset in the book, Democracy in Developing Countries: Latin America, argue that economic performance affects the development of democracy in at least three ways. First, they argue that economic growth is more important for democracy than given levels of socioeconomic development. Second, socioeconomic development generates social changes that can potentially facilitate democratization. Third, socioeconomic development promotes other changes, like organization of the middle class, which is conducive to democracy.

As Seymour Martin Lipset put it, "All the various aspects of economic development—industrialization, urbanization, wealth and education—are so closely interrelated as to form one major factor which has the political correlate of democracy". The argument also appears in Walt W. Rostow, Politics and the Stages of Growth (1971); A. F. K. Organski, The Stages of Political Development (1965); and David Apter, The Politics of Modernization (1965). In the 1960s, some critics argued that the link between modernization and democracy was based too much on the example of European history and neglected the Third World.

One historical problem with that argument has always been Germany, whose economic modernization in the 19th century came long before the democratization after 1918. Berman, however, concludes that a process of democratization was underway in Imperial Germany, for "during these years Germans developed many of the habits and mores that are now thought by political scientists to augur healthy political development".

One contemporary problem for modernization theory is the question of whether modernization implies more human rights for citizens. China, one of the most rapidly growing economies in the world, serves as an example. Modernization theory implies that economic growth should correlate with democratic growth in some respects, especially through the liberalization of the middle and lower classes. However, active human rights abuses and constant oppression of Chinese citizens by the government seem to contradict the theory strongly. The irony is that the increasing restrictions on Chinese citizens are themselves a response to the very dynamics that modernization theory describes.

In the 1990s, the Chinese government sought to reform the legal system and emphasized governing the country by law. This led to a legal awakening among citizens, who became more educated about the law and more aware of their inequality in relation to the government. In the 2000s, Chinese citizens saw even more opportunities to liberalize, taking part in urbanization and gaining access to higher levels of education. This in turn shifted the attitudes of the lower and middle classes toward more liberal ideas that ran against the CCP. Over time, this has led to their active participation in civil society activities and adjacent political groups in order to make their voices heard. Consequently, the Chinese government has repressed Chinese citizens at a more aggressive rate, in response to these modernization-driven pressures.

Ronald Inglehart and Christian Welzel contend that the realization of democracy is not based solely on an expressed desire for that form of government, but democracies are born as a result of the admixture of certain social and cultural factors. They argue the ideal social and cultural conditions for the foundation of a democracy are born of significant modernization and economic development that result in mass political participation.

Randall Peerenboom explores the relationships among democracy, the rule of law and their relationship to wealth by pointing to examples of Asian countries, such as Taiwan and South Korea, which have successfully democratized only after economic growth reached relatively high levels and to examples of countries such as the Philippines, Bangladesh, Cambodia, Thailand, Indonesia and India, which sought to democratize at lower levels of wealth but have not done as well.

Adam Przeworski and others have challenged Lipset's argument. They say political regimes do not transition to democracy as per capita incomes rise; rather, democratic transitions occur randomly, but once a transition has occurred, countries with higher levels of gross domestic product per capita remain democratic. Epstein et al. (2006) retest the modernization hypothesis using new data, new techniques, and a three-way, rather than dichotomous, classification of regimes. Contrary to Przeworski, this study finds that the modernization hypothesis stands up well. Partial democracies emerge as among the most important and least understood regime types.

Daron Acemoglu and James A. Robinson, in their article "Income and Democracy" (2008), further weaken the case for Lipset's argument by showing that, although there is a strong cross-country correlation between income and democracy, once one controls for country fixed effects the association between income per capita and various measures of democracy disappears, leaving "no causal effect of income on democracy." In "Non-Modernization" (2022), they further argue that modernization theory cannot account for various paths of political development "because it posits a link between economics and politics that is not conditional on institutions and culture and that presumes a definite endpoint—for example, an 'end of history'."
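To see concretely what controlling for country fixed effects does, consider the following illustrative sketch in Python. The data are synthetic and the code is not Acemoglu and Robinson's; it only shows how a within-country transformation strips out time-invariant national characteristics, so that a cross-country income-democracy correlation driven by such characteristics disappears.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_countries, n_periods = 50, 20
country = np.repeat(np.arange(n_countries), n_periods)
# Time-invariant country traits drive BOTH income and democracy in this toy world.
traits = np.repeat(rng.normal(0, 1, n_countries), n_periods)
income = traits + rng.normal(0, 1, n_countries * n_periods)
democracy = 2.0 * traits + rng.normal(0, 1, n_countries * n_periods)
df = pd.DataFrame({"country": country, "income": income, "democracy": democracy})

# Pooled OLS slope: picks up the spurious cross-country correlation.
pooled_slope = np.polyfit(df["income"], df["democracy"], 1)[0]

# Within (fixed-effects) estimator: demean each variable by country,
# wiping out anything constant within a country.
demeaned = df.groupby("country")[["income", "democracy"]].transform(lambda s: s - s.mean())
within_slope = np.polyfit(demeaned["income"], demeaned["democracy"], 1)[0]

print(f"pooled slope: {pooled_slope:.2f}; within-country slope: {within_slope:.2f}")
# The pooled slope is large, but the within slope collapses toward zero,
# mirroring the "no causal effect of income on democracy" result.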

Sirianne Dahlum and Carl Henrik Knutsen offer a test of Ronald Inglehart and Christian Welzel's revised version of modernization theory, which focuses on cultural traits, triggered by economic development, that are presumed to be conducive to democratization. They find "no empirical support" for the Inglehart and Welzel thesis and conclude that "self-expression values do not enhance democracy levels or democratization chances, and neither do they stabilize existing democracies."

A meta-analysis by Gerardo L. Munck of research on Lipset's argument shows that a majority of studies do not support the thesis that higher levels of economic development lead to more democracy.

Modernization and economic development

Development, like modernization, has become an orienting principle of modern times. Countries seen as modern are also seen as developed, and are accordingly treated with more respect by major institutions such as the United Nations and regarded as more desirable trade partners by other countries. The extent to which a country has modernized or developed dictates its power and importance on the international level.

Modernization of the health sector of developing nations recognizes that transitioning from "traditional" to "modern" is not merely the advancement of technology and the introduction of Western practices; implementing modern healthcare requires the reorganization of the political agenda and, in turn, an increase in funding of feeders and resources toward public health. A strong advocate of de-emphasizing medical institutions was Halfdan T. Mahler, the WHO Director-General from 1973 to 1988. Related ideas were proposed at international conferences such as Alma-Ata and the "Health and Population in Development" conference, sponsored by the Rockefeller Foundation in Italy in 1979, where selective primary healthcare and GOBI were discussed (although both have been strongly criticized by supporters of comprehensive healthcare). None of this is to say that the nations of the Global South can function independently of Western states: significant funding comes from well-intentioned programs, foundations, and charities that target epidemics such as HIV/AIDS, malaria, and tuberculosis, diseases that have impeded development, and that funding has substantially improved the lives of millions of people.

Modernization theorists often saw traditions as obstacles to economic development. According to Seymour Martin Lipset, economic conditions are heavily determined by the cultural and social values present in a given society. Furthermore, while modernization might deliver violent, radical change to traditional societies, it was thought to be worth the price. Critics counter that traditional societies were often destroyed without ever gaining the promised advantages, not least because the economic gap between advanced societies and such societies actually increased. The net effect of modernization for some societies was therefore the replacement of traditional poverty by a more modern form of misery, according to these critics. Others point to improvements in living standards, physical infrastructure, education, and economic opportunity to refute such criticisms.

Modernization theorists such as Samuel P. Huntington held in the 1960s and 1970s that authoritarian regimes yielded greater economic growth than democracies. However, this view has since been challenged. In Democracy and Development: Political Institutions and Well-Being in the World, 1950–1990 (2000), Adam Przeworski argued that "democracies perform as well economically as do authoritarian regimes." A study by Daron Acemoglu, Suresh Naidu, Pascual Restrepo, and James A. Robinson shows that "democracy has a positive effect on GDP per capita."

Modernization and globalization

Globalization can be defined as the integration of economic, political and social cultures. It is argued that globalization is related to the spreading of modernization across borders.

Global trade has grown continuously since the European discovery of new continents in the Early modern period; it increased particularly as a result of the Industrial Revolution and the mid-20th century adoption of the shipping container.

Annual trans-border tourist arrivals rose to 456 million by 1990 and have almost tripled since, reaching over 1.2 billion in 2016. Communication is another major area that has grown due to modernization. Communication industries have enabled capitalism to spread throughout the world. Telephony, television broadcasts, news services and online service providers have played a crucial part in globalization. Former U.S. president Lyndon B. Johnson was a supporter of modernization theory and believed that television had the potential to provide educational tools in development.

Alongside globalization's many apparent positive attributes, there are also negative consequences. The dominant, neoliberal model of globalization often increases disparities between a society's rich and its poor. In major cities of developing countries there exist pockets where the technologies of the modernised world, computers, cell phones, and satellite television, sit alongside stark poverty. Globalists, the modernization theorists of globalization, argue that globalization is positive for everyone, as its benefits must eventually extend to all members of society, including vulnerable groups such as women and children.

Technology

New technology is a major source of social change. (Social change refers to any significant alteration over time in behaviour patterns and cultural values and norms.) Since modernization entails the social transformation from agrarian societies to industrial ones, it is important to consider the technological viewpoint; however, new technologies do not change societies by themselves. Rather, it is the response to technology that causes change. Frequently, a technology is recognized but not put to use for a very long time, such as the ability to extract metal from rock; although it initially went unused, it later had profound implications for the developmental course of societies. Technology makes possible a more innovative society and broad social change. That dramatic social, industrial, and economic change through the centuries can be summed up by the term modernization. Cell phones, for example, have changed the lives of millions throughout the world. That is especially true in Africa and parts of the Middle East, where there is low-cost communication infrastructure. With cell phone technology, widely dispersed populations are connected, which facilitates business-to-business communication and provides internet access to more remote areas, with a consequent rise in literacy.

Applications

United States foreign aid in the 1960s

President John F. Kennedy (1961–63) relied on economist W. W. Rostow, a member of his staff, and the outsider John Kenneth Galbraith for ideas on how to promote rapid economic development in the "Third World", as it was called at the time. They promoted modernization models in order to reorient American aid to Asia, Africa and Latin America. In Rostow's version, set out in The Stages of Economic Growth (1960), progress must pass through five stages; for the underdeveloped world, the critical ones were the second, the transition, and the third, the takeoff into self-sustaining growth. Rostow argued that American intervention could propel a country from the second stage to the third, and he expected that once a country reached maturity, it would have a large, energized middle class that would establish democracy and civil liberties and institutionalize human rights. The result was a comprehensive theory that could be used to challenge Marxist ideologies and thereby repel communist advances. The model provided the foundation for the Alliance for Progress in Latin America, the Peace Corps, Food for Peace, and the Agency for International Development (AID). Kennedy proclaimed the 1960s the "Development Decade" and substantially increased the budget for foreign assistance. Modernization theory supplied the design, rationale, and justification for these programs. The goals proved much too ambitious, however, and within a few years the economists abandoned the European-based modernization model as inappropriate to the cultures they were trying to influence.

Kennedy and his top advisers were working from implicit ideological assumptions about modernization. They firmly believed modernity was not only good for the target populations but essential to avoid communism on the one hand and extreme control of traditional rural society by the very rich landowners on the other. They believed America had a duty, as the most modern country in the world, to promulgate this ideal to the poor nations of the Third World. They wanted programs that were altruistic and benevolent, and also tough, energetic, and determined. It was benevolence with a foreign policy purpose. Michael Latham has identified how this ideology worked out in three major programs: the Alliance for Progress, the Peace Corps, and the strategic hamlet program in South Vietnam. However, Latham argues that the ideology was a non-coercive version of the modernization goals pursued by imperialist Britain, France, and other European countries in the 19th century.

Criticisms and alternatives

From the 1970s, modernization theory has been criticized by numerous scholars, including Andre Gunder Frank (1929–2005) and Immanuel Wallerstein (1930–2019). In the theory as these critics read it, the modernization of a society required the destruction of the indigenous culture and its replacement by a more Westernized one. By one definition, modern simply refers to the present, and any society still in existence is therefore modern. Proponents of modernization typically view only Western society as being truly modern and argue that others are primitive or unevolved by comparison. That view sees unmodernized societies as inferior even if they have the same standard of living as Western societies. Opponents argue that modernity is independent of culture and can be adapted to any society. Japan is cited as an example by both sides: some see it as proof that a thoroughly modern way of life can exist in a non-Western society, while others argue that Japan has become distinctly more Western as a result of its modernization.

As Tipps has argued, by conflating modernization with other processes that theorists use interchangeably with it (democratization, liberalization, development), the term becomes imprecise and therefore difficult to disprove.

The theory has also been criticised empirically, as modernization theorists ignore external sources of change in societies. The binary between traditional and modern is unhelpful, as the two are linked and often interdependent, and "modernization" does not come as a whole.

Modernization theory has also been accused of being Eurocentric, as modernization began in Europe, with the Industrial Revolution, the French Revolution and the Revolutions of 1848, and has long been regarded as reaching its most advanced stage in Europe. Anthropologists typically take the criticism one step further, saying that the view is ethnocentric and specific to Western culture.

Dependency theory

One alternative model is dependency theory. It emerged in the 1950s and argues that the underdevelopment of poor nations in the Third World derived from systematic imperial and neo-colonial exploitation of raw materials. Its proponents argue that resources typically flow from a "periphery" of poor and underdeveloped states to a "core" of wealthy states, enriching the latter at the expense of the former. It is a central contention of dependency theorists such as Andre Gunder Frank that poor states are impoverished and rich ones enriched by the way poor states are integrated into the "world system".

Dependency models arose from a growing association of southern hemisphere nationalists (from Latin America and Africa) and Marxists. It was their reaction against modernization theory, which held that all societies progress through similar stages of development, that today's underdeveloped areas are thus in a similar situation to that of today's developed areas at some time in the past, and that, therefore, the task of helping the underdeveloped areas out of poverty is to accelerate them along this supposed common path of development, by various means such as investment, technology transfers, and closer integration into the world market. Dependency theory rejected this view, arguing that underdeveloped countries are not merely primitive versions of developed countries, but have unique features and structures of their own; and, importantly, are in the situation of being the weaker members in a world market economy.

Barrington Moore and comparative historical analysis

Another line of critique of modernization theory came from sociologist Barrington Moore Jr., in his Social Origins of Dictatorship and Democracy (1966). In this classic book, Moore argues there were at least "three routes to the modern world": the liberal democratic, the fascist, and the communist, each deriving from the timing of industrialization and the social structure at the time of transition. Counter to modernization theory, Moore held that there was not one path to the modern world and that economic development did not always bring about democracy.

Guillermo O'Donnell and bureaucratic authoritarianism

Political scientist Guillermo O'Donnell, in his Modernization and Bureaucratic Authoritarianism (1973) challenged the thesis, advanced most notably by Seymour Martin Lipset, that industrialization produced democracy. In South America, O'Donnell argued, industrialization generated not democracy, but bureaucratic authoritarianism.

Acemoglu and Robinson and institutional economics

Economists Daron Acemoglu and James A. Robinson, in "Non-Modernization" (2022), argue that modernization theory cannot account for various paths of political development "because it posits a link between economics and politics that is not conditional on institutions and culture and that presumes a definite endpoint—for example, an 'end of history'."

Social rejection

From Wikipedia, the free encyclopedia
This scene of the Admonitions Scroll shows an emperor turning away from his consort, his hand raised in a gesture of rejection and with a look of disdain on his face.

Social rejection occurs when an individual is deliberately excluded from a social relationship or social interaction. The topic includes interpersonal rejection (or peer rejection), romantic rejection and familial estrangement. A person can be rejected or shunned by individuals or an entire group of people. Furthermore, rejection can be either active, by bullying, teasing, or ridiculing, or passive, by ignoring a person, or giving the "silent treatment". The experience of being rejected is subjective for the recipient, and it can be perceived when it is not actually present. The word "ostracism" is also commonly used to denote a process of social exclusion (in Ancient Greece, ostracism was a form of temporary banishment following a people's vote).

Although humans are social beings, some level of rejection is an inevitable part of life. Nevertheless, rejection can become a problem when it is prolonged or consistent, when the relationship is important, or when the individual is highly sensitive to rejection. Rejection by an entire group of people can have especially negative effects, particularly when it results in social isolation.

The experience of rejection can lead to a number of adverse psychological consequences such as loneliness, low self-esteem, aggression, and depression. It can also lead to feelings of insecurity and a heightened sensitivity to future rejection.

Need for acceptance

Social rejection may be emotionally painful, due to the social nature of human beings, as well as the essential need for social interaction between other humans. Abraham Maslow and other theorists have suggested that the need for love and belongingness is a fundamental human motivation. According to Maslow, all humans, even introverts, need to be able to give and receive affection to be psychologically healthy.

Psychologists believe that simple contact or social interaction with others is not enough to fulfill this need. Instead, people have a strong motivational drive to form and maintain caring interpersonal relationships. People need both stable relationships and satisfying interactions with the people in those relationships. If either of these two ingredients is missing, people will begin to feel lonely and unhappy. Thus, rejection is a significant threat. In fact, the majority of human anxieties appear to reflect concerns over social exclusion.

Being a member of a group is also important for social identity, which is a key component of the self-concept. Mark Leary of Duke University has suggested that the main purpose of self-esteem is to monitor social relations and detect social rejection. In this view, self-esteem is a sociometer which activates negative emotions when signs of exclusion appear.

Social psychological research confirms the motivational basis of the need for acceptance. Specifically, fear of rejection leads to conformity to peer pressure (sometimes called normative influence), and compliance to the demands of others. The need for affiliation and social interaction appears to be particularly strong under stress.

In childhood

Peer rejection has been measured using sociometry and other rating methods. Studies typically show that some children are popular, receiving generally high ratings, many children are in the middle, with moderate ratings, and a minority of children are rejected, showing generally low ratings. One measure of rejection asks children to list peers they like and dislike. Rejected children receive few "like" nominations and many "dislike" nominations. Children classified as neglected receive few nominations of either type.
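As a rough illustration of how nomination counts map onto these categories, here is a toy sketch in Python; the cutoffs, names, and the extra "controversial" category (used in some sociometric schemes for children with many nominations of both types) are illustrative assumptions, not the procedure of any particular study.

from collections import Counter

def classify(child, like_nominations, dislike_nominations, cutoff=2):
    """Toy sociometric classification from raw nomination counts.
    Real studies standardize counts within classrooms; the cutoff here
    is arbitrary and purely illustrative."""
    likes = like_nominations[child]
    dislikes = dislike_nominations[child]
    if likes >= cutoff and dislikes < cutoff:
        return "popular"
    if likes < cutoff and dislikes >= cutoff:
        return "rejected"       # few "like", many "dislike" nominations
    if likes < cutoff and dislikes < cutoff:
        return "neglected"      # few nominations of either type
    return "controversial"      # many of both

# Hypothetical peer nominations.
likes = Counter({"Ana": 5, "Ben": 0, "Caro": 0, "Dev": 4})
dislikes = Counter({"Ana": 0, "Ben": 6, "Caro": 1, "Dev": 3})

for child in ["Ana", "Ben", "Caro", "Dev"]:
    print(child, "->", classify(child, likes, dislikes))
# Ana -> popular, Ben -> rejected, Caro -> neglected, Dev -> controversial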

According to Karen Bierman of Pennsylvania State University, most children who are rejected by their peers display one or more of the following behavior patterns:

  1. Low rates of prosocial behavior, e.g. taking turns, sharing.
  2. High rates of aggressive or disruptive behavior.
  3. High rates of inattentive, immature, or impulsive behavior.
  4. High rates of social anxiety.

Bierman states that well-liked children show social savvy and know when and how to join play groups. Children who are at risk for rejection are more likely to barge in disruptively, or hang back without joining at all. Aggressive children who are athletic or have good social skills are likely to be accepted by peers, and they may become ringleaders in the harassment of less skilled children. Minority children, children with disabilities, or children who have unusual characteristics or behavior may face greater risks of rejection. Depending on the norms of the peer group, sometimes even minor differences among children lead to rejection or neglect. Children who are less outgoing or simply prefer solitary play are less likely to be rejected than children who are socially inhibited and show signs of insecurity or anxiety.

Rejected children are more likely to be bullied at school and on playgrounds.

Peer rejection, once established, tends to be stable over time, and thus difficult for a child to overcome. Researchers have found that active rejection is more stable, more harmful, and more likely to persist after a child transfers to another school, than simple neglect. One reason for this is that peer groups establish reputational biases that act as stereotypes and influence subsequent social interaction. Thus, even when rejected and popular children show similar behavior and accomplishments, popular children are treated much more favorably.

Rejected children are likely to have lower self-esteem, and to be at greater risk for internalizing problems like depression. Some rejected children display externalizing behavior and show aggression rather than depression. The research is largely correlational, but there is evidence of reciprocal effects. This means that children with problems are more likely to be rejected, and this rejection then leads to even greater problems for them. Chronic peer rejection may lead to a negative developmental cycle that worsens with time.

Rejected children are more likely to be bullied and to have fewer friends than popular children, but these conditions are not always present. For example, some popular children do not have close friends, whereas some rejected children do. Peer rejection is believed to be less damaging for children with at least one close friend.

An analysis of 15 school shootings between 1995 and 2001 found that peer rejection was present in all but two of the cases (87%). The documented rejection experiences included both acute and chronic rejection and frequently took the form of ostracism, bullying, and romantic rejection. The authors stated that although it is likely that the rejection experiences contributed to the school shootings, other factors were also present, such as depression, poor impulse control, and other psychopathology.

There are programs available for helping children who suffer from social rejection. One large-scale review of 79 controlled studies found that social skills training is very effective (effect size r = 0.40), with a 70% success rate, compared to 30% in control groups. Effectiveness declined over time, however, with follow-up studies showing a somewhat smaller effect size (r = 0.35).
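The paired 70%/30% figures look like Rosenthal's binomial effect size display, which converts a correlation r into equivalent success rates of 0.5 + r/2 for the treated group and 0.5 - r/2 for controls; a quick check of that reading:

def besd(r):
    """Binomial effect size display: convert a correlation coefficient r
    into equivalent treatment vs. control success rates."""
    return 0.5 + r / 2, 0.5 - r / 2

treated, control = besd(0.40)
print(f"treatment success: {treated:.0%}, control success: {control:.0%}")
# -> treatment success: 70%, control success: 30%, matching the figures quoted above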

In the laboratory

Laboratory research has found that even short-term rejection from strangers can have powerful (if temporary) effects on an individual. In several social psychology experiments, people chosen at random to receive messages of social exclusion become more aggressive, more willing to cheat, less willing to help others, and more likely to pursue short-term over long-term goals. Rejection appears to lead very rapidly to self-defeating and antisocial behavior.

Researchers have also investigated how the brain responds to social rejection. One study found that the dorsal anterior cingulate cortex is active when people are experiencing both physical pain and "social pain," in response to social rejection. A subsequent experiment, also using fMRI neuroimaging, found that three regions become active when people are exposed to images depicting rejection themes. These areas are the posterior cingulate, the parahippocampal gyrus, and the dorsal anterior cingulate cortex. Furthermore, individuals who are high in rejection sensitivity (see below) show less activity in the left prefrontal cortex and the right dorsal superior frontal gyrus, which may indicate less ability to regulate emotional responses to rejection.

An experiment performed in 2007 at the University of California at Berkeley found that individuals with a combination of low self-esteem and low attentional control are more likely to exhibit eye-blink startle responses while viewing rejection-themed images. These findings indicate that people who feel bad about themselves are especially vulnerable to rejection, but that people can also control and regulate their emotional reactions.

A study at Miami University indicated that individuals who recently experienced social rejection were better than both accepted and control participants in their ability to discriminate between real and fake smiles. Though both accepted and control participants were better than chance (they did not differ from each other), rejected participants were much better at this task, nearing 80% accuracy. This study is noteworthy in that it is one of the few cases of a positive or adaptive consequence of social rejection.

Ball toss / cyberball experiments

A common experimental technique is the "ball toss" paradigm, which was developed by Kip Williams and his colleagues at Purdue University. This procedure involves a group of three people tossing a ball back and forth. Unbeknownst to the actual participant, two members of the group are working for the experimenter and following a pre-arranged script. In a typical experiment, half of the subjects will be excluded from the activity after a few tosses and never get the ball again. Only a few minutes of this treatment are sufficient to produce negative emotions in the target, including anger and sadness. This effect occurs regardless of self-esteem and other personality differences.

Gender differences have been found in these experiments. In one study, women showed greater nonverbal engagement whereas men disengaged faster and showed face-saving techniques, such as pretending to be uninterested. The researchers concluded that women seek to regain a sense of belonging whereas men are more interested in regaining self-esteem.

A computerized version of the task known as "cyberball" has also been developed and leads to similar results. Cyberball is a virtual ball-toss game in which the participant is led to believe they are playing with two other participants sitting at computers elsewhere, either of whom can toss the ball to either player. The participant is included in the game for the first few minutes but is then excluded by the other players for the remaining three minutes. A significant advantage of the Cyberball software is its openness: Williams made the software available to all researchers. Within the software, the researcher can adjust the order of the throws, the user's avatar, the background, the availability of chat, the introductory message, and other settings. Researchers can obtain the program's latest version from the official CYBERBALL 5.0 website.
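As a toy illustration of the paradigm's logic, the sketch below simulates an inclusion phase followed by an exclusion phase and counts how often the participant receives the ball. It is not the actual Cyberball software or its configuration format; all names and parameters are invented.

import random

def simulate_cyberball(total_throws=30, inclusion_throws=10, exclude=True, seed=1):
    """Toy ball-toss schedule: player 0 is the participant, players 1 and 2
    are confederates. After the inclusion phase, confederates stop throwing
    to the participant if exclude is True."""
    rng = random.Random(seed)
    holder = 1       # a confederate starts with the ball
    received = 0     # number of times the participant gets the ball
    for t in range(total_throws):
        if holder == 0:
            received += 1
            target = rng.choice([1, 2])          # participant throws to a confederate
        else:
            targets = [p for p in (0, 1, 2) if p != holder]
            if exclude and t >= inclusion_throws:
                targets = [p for p in targets if p != 0]   # freeze out the participant
            target = rng.choice(targets)
        holder = target
    return received

print("inclusion condition:", simulate_cyberball(exclude=False), "catches")
print("exclusion condition:", simulate_cyberball(exclude=True), "catches")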

This simple and short period of ostracism has been found to produce significant increases in self-reported levels of anger and sadness, as well as lowered levels of the four fundamental needs (discussed below). These effects occur even when the participant is ostracised by out-group members, when the out-group member is identified as a despised person such as a member of the Ku Klux Klan, when participants know the source of the ostracism is just a computer, and even when being ostracised means they will be financially rewarded while being included would incur a financial cost. People feel rejected even when they know they are playing only against the computer. A recent set of experiments using cyberball demonstrated that rejection impairs willpower or self-regulation: people who are rejected are more likely to eat cookies and less likely to drink an unpleasant-tasting beverage that they are told is good for them. These experiments also showed that the negative effects of rejection last longer in individuals who are high in social anxiety.

Life-Alone Paradigm

Another mainstream research method is the Life-Alone Paradigm, first developed by Twenge and colleagues, which evokes feelings of rejection by giving subjects false test results. In contrast to the ball toss and cyberball paradigms, it focuses on future rejection, i.e. rejection that participants may potentially experience later in life. Specifically, at the beginning of the experiment, participants complete a personality scale (in the original method, the Eysenck Personality Questionnaire). They are then given results determined by their experimental group rather than their real results. Participants in the rejected group are told that their test results indicate they will end up alone in the future, regardless of their current life circumstances. Participants in the accepted group are told they will have fulfilling relationships. Participants in the control group are told they will be prone to accidents. In this way a sense of rejection is evoked so that subsequent measurements can be taken. After the experiment, the researcher explains the true purpose to the participants and apologises.
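A minimal sketch of the condition-to-feedback mapping described above; the wording is paraphrased from this description, not the verbatim scripts used by Twenge and colleagues, and the names are invented.

import random

# Illustrative bogus-feedback scripts, keyed by experimental condition.
FEEDBACK = {
    "future_alone": "Your results indicate you are likely to end up alone later in life.",
    "future_belonging": "Your results indicate you will have fulfilling, lasting relationships.",
    "control": "Your results indicate you may be prone to accidents in the future.",
}

def assign_condition(rng=random):
    """Randomly assign a condition; the feedback ignores the real questionnaire score."""
    condition = rng.choice(sorted(FEEDBACK))
    return condition, FEEDBACK[condition]

condition, message = assign_condition()
print(condition, "->", message)
# Ethically, every session must end with a debriefing that explains the deception.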

Scholars point out that this method may cause more harm to subjects; for example, participants are likely to experience a more severe impairment of executive functioning during the test. The method therefore raises more significant issues of research ethics and harm than other rejection paradigms. Consequently, researchers should use it with caution and attend to subjects' reactions afterwards.

Psychology of ostracism

Most of the research on the psychology of ostracism has been conducted by the social psychologist Kip Williams. He and his colleagues have devised a model of ostracism which provides a framework for showing the complexity in the varieties of ostracism and the processes of its effects. He theorises that ostracism can be so harmful that humans have evolved an efficient warning system to immediately detect and respond to it.

In the animal kingdom as well as in primitive human societies, ostracism can lead to death due to the lack of protection benefits and access to sufficient food resources from the group. Living apart from the whole of society also means not having a mate, so being able to detect ostracism would be a highly adaptive response to ensure survival and continuation of the genetic line.

Temporal Need-Threat Model

The predominant theoretical model of social rejection is the temporal need-threat model proposed by Williams and his colleagues, in which the process of social exclusion is divided into three stages: reflexive, reflective, and resignation. The reflexive stage occurs when social rejection first happens and constitutes its immediate effect on the individual. The reflective stage follows as the individual starts to appraise and cope with the rejection. Finally, if the rejection lasts over the long term and the individual cannot successfully cope with it, social rejection enters the resignation stage, in which the individual is likely to suffer from severe depression and helplessness, which may push them toward suicide or other extreme behaviour.
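The three-stage progression can be restated as a simple state machine; this is only a schematic summary of the prose above, not a quantitative model, and the function and parameter names are invented.

def need_threat_stage(just_rejected, coping_succeeded, rejection_is_chronic):
    """Schematic restatement of the temporal need-threat model's stages."""
    if just_rejected:
        return "reflexive"     # immediate pain; four fundamental needs threatened
    if rejection_is_chronic and not coping_succeeded:
        return "resignation"   # acceptance, depression, helplessness
    return "reflective"        # appraisal and need fortification

for case in [(True, False, False), (False, True, False), (False, False, True)]:
    print(case, "->", need_threat_stage(*case))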

Reflexive Stage

The reflexive stage is the first stage of social rejection and refers to the period immediately after social exclusion has occurred. During this stage, Williams proposed that ostracism uniquely threatens four fundamental human needs: the need to belong, the need for control in social situations, the need to maintain high levels of self-esteem, and the need to have a sense of a meaningful existence. When social rejection concerns the individual's social relationships, the needs for belonging and self-esteem are threatened; when it does not, it primarily threatens the sense of control and of meaningful existence.

Another challenge that individuals face at this stage is pain. Neurobiological studies have found that social exclusion, whether intentional or unintentional, evokes pain: specifically, social exclusion increases activation of the dorsal anterior cingulate cortex (dACC), a brain region also associated with physiological pain. Notably, the right ventral prefrontal cortex (RVPFC) is further activated when individuals find that the social rejection is intentional; this region is associated with the regulation of pain perception, implying that perceived pain decreases when individuals understand the source of the rejection. Further research suggests that neither personal traits nor environmental factors affect this pain.

Thus, people are motivated to remove this pain with behaviours aimed at reducing the likelihood of others ostracising them any further and increasing their inclusionary status.

Reflective Stage

In the reflective stage, individuals begin to think about and try to cope with social rejection. In the need-threat model, their response is referred to as need fortification: the shoring up of whichever needs were threatened. Specifically, when individuals' self-esteem and sense of belonging are threatened, they try to integrate more into the group; such rejected individuals show more pro-social behaviours, such as helping others and giving gifts. In contrast, when their sense of control and meaning is threatened, they show more antisocial behaviour, such as verbal abuse and fighting, to prove that they matter.

Resignation Stage

When individuals have experienced social rejection for a long time and cannot improve their situation through effective coping, they move to the third stage, resignation, in which they no longer try to change their predicament but accept it. In Zadro's interview study of 28 respondents in a state of chronic rejection, she found that respondents were depressed, self-deprecating, and helpless. Such social rejection can significantly impact the physical and psychological health of the individual.

Controversy

The controversy over the temporal need-threat model has focused on whether rejection enhances or reduces people's perception of pain. DeWall and Baumeister's research suggests that individuals experience a reduction in pain after rejection, a phenomenon they call emotional numbness, which contradicts Williams et al.'s theory that social rejection enhances pain perception. Williams suggests that this discrepancy is likely due to differences in the paradigms used: with a long-term paradigm such as Life-Alone, individuals see no possibility of rejoining the group, which produces emotional numbness. This is further supported by Bernstein and Claypool, who found, in separate cyberball and life-alone experiments, that stronger rejection stimuli, such as life-alone, protect people through emotional numbness, whereas with minor rejection, such as that in cyberball, the individual's system detects the rejection cue and draws attention to it through a sense of pain.

Popularity resurgence

There has been recent research into the function of popularity in development, specifically how a transition from ostracization to popularity can potentially reverse the deleterious effects of being socially ostracized. While various theories have been put forth regarding which skills or attributes confer an advantage in obtaining popularity, it appears that individuals who were once popular and subsequently experienced transient ostracization are often able to employ the same skills that led to their initial popularity to bring about a popularity resurgence.

Romantic

In contrast to the study of childhood rejection, which primarily examines rejection by a group of peers, some researchers focus on the phenomenon of a single individual rejecting another in the context of a romantic relationship. In both teenagers and adults, romantic rejection occurs when a person refuses the romantic advances of another, ignores/avoids or is repulsed by someone who is romantically interested in them, or unilaterally ends an existing relationship. The state of unrequited love is a common experience in youth, but mutual love becomes more typical as people get older.

Romantic rejection is a painful, emotional experience that appears to trigger a response in the caudate nucleus of the brain, and associated dopamine and cortisol activity. Subjectively, rejected individuals experience a range of negative emotions, including frustration, intense anger, jealousy, hate, and eventually, resignation, despair, and possible long-term depression. However, there have been cases where individuals go back and forth between depression and anger.

Rejection sensitivity

Karen Horney was the first theorist to discuss the phenomenon of rejection sensitivity. She suggested that it is a component of the neurotic personality, and that it is a tendency to feel deep anxiety and humiliation at the slightest rebuff. Simply being made to wait, for example, could be viewed as a rejection and met with extreme anger and hostility.

Albert Mehrabian developed an early questionnaire measure of rejection sensitivity. Mehrabian suggested that sensitive individuals are reluctant to express opinions, tend to avoid arguments or controversial discussions, are reluctant to make requests or impose on others, are easily hurt by negative feedback from others, and tend to rely too much on familiar others and situations so as to avoid rejection.

A more recent (1996) definition of rejection sensitivity is the tendency to "anxiously expect, readily perceive, and overreact" to social rejection. People differ in their readiness to perceive and react to rejection. The causes of individual differences in rejection sensitivity are not well understood. Because of the association between rejection sensitivity and neuroticism, a genetic predisposition is likely. Rejection-sensitive dysphoria is also a common symptom of ADHD. Others posit that rejection sensitivity stems from early attachment relationships and parental rejection; peer rejection is also thought to play a role. Bullying, an extreme form of peer rejection, is likely connected to later rejection sensitivity. However, there is no conclusive evidence for any of these theories.

Health

Social rejection has a large effect on a person's health. Baumeister and Leary originally suggested that an unsatisfied need to belong would inevitably lead to problems in behavior as well as in mental and physical health. Corroboration of these assumptions about behavioral deficits was seen by John Bowlby in his research. Numerous studies have found that being socially rejected leads to increased levels of anxiety. Additionally, the level of depression a person feels, as well as how much they care about their social relationships, is directly proportional to the level of rejection they perceive. Rejection also affects a person's emotional health and well-being. Overall, experiments show that those who have been rejected suffer more negative emotions and have fewer positive emotions than those who have been accepted or those in neutral or control conditions.

In addition to the emotional response to rejection, there is a large effect on physical health as well. Having poor relationships and being more frequently rejected is predictive of mortality. Also, for as long as a decade after a marriage ends, divorced women have higher rates of illness than their unmarried or currently married counterparts. In the case of a family estrangement, a core part of a mother's identity may be betrayed by the rejection of an adult child. The chance of reconciliation, however slight, results in an inability to attain closure. The resulting emotional state and the societal stigma of estrangement may harm the parent's psychological and physical health through the end of life.

The immune system tends to be harmed when a person experiences social rejection, which can cause severe problems for those with diseases such as HIV. One study by Cole, Kemeny, and Taylor investigated differences in disease progression between HIV-positive gay men who were sensitive to rejection and those who were not. The study, which took place over nine years, indicated a significantly faster decline in T helper cells among the rejection-sensitive men, leading to earlier AIDS diagnoses. They also found that patients who were more sensitive to rejection died from the disease an average of two years earlier than their non-rejection-sensitive counterparts.

Other aspects of health are also affected by rejection. Both systolic and diastolic blood pressure increase upon imagining a rejection scenario. Those who are socially rejected have an increased likelihood of suffering from tuberculosis, as well as of suicide. Rejection and isolation have been found to affect levels of pain following an operation, as well as other physical forms of pain. Social rejection may also cause a reduction in performance on intelligence tests. MacDonald and Leary theorize that rejection and exclusion cause physical pain because that pain is a warning sign supporting human survival: as humans developed into social creatures, social interactions and relationships became necessary for survival, and the physical pain systems already existed within the human body.

In fiction, film and art

The Painting "Pope Makes Love To Lady Mary Wortley Montagu" by William Powell Frith depicts Lady Mary Wortley Montagu laughingly rejecting Alexander Pope's courtship.

Artistic depictions of rejection occur in a variety of art forms. One genre of film that most frequently depicts rejection is the romantic comedy. In the film He's Just Not That Into You, the main characters deal with the challenges of reading and misreading human behavior. The film presents the fear of rejection in romantic relationships, as reflected in this quote from the character Mary: "And now you have to go around checking all these different portals just to get rejected by seven different technologies. It's exhausting."

Social rejection is also depicted in theatrical plays and musicals. For example, the film Hairspray, set in the 1960s, tells the story of Tracy Turnblad, an overweight 15-year-old dancer. Tracy and her mother must overcome society's expectations regarding weight and physical appearance.

Splitting (psychology)

From Wikipedia, the free encyclopedia

Splitting (also called black-and-white thinking, thinking in extremes or all-or-nothing thinking) is the failure in a person's thinking to bring together the dichotomy of both perceived positive and negative qualities of something into a cohesive, realistic whole. It is a common defense mechanism wherein the individual tends to think in extremes (e.g., an individual's actions and motivations are all good or all bad with no middle ground). This kind of dichotomous interpretation is contrasted with an acknowledgement of certain nuances known as "shades of gray".

Splitting was first described by Ronald Fairbairn in his formulation of object relations theory; it begins as the inability of the infant to combine the fulfilling aspects of the parents (the good object) and their unresponsive aspects (the unsatisfying object) into the same individuals, instead seeing the good and bad as separate. In psychoanalytic theory this functions as a defense mechanism.

Relationships

Splitting creates instability in relationships because one person can be viewed as either personified virtue or personified vice at different times, depending on whether they gratify the subject's needs or frustrate them. This, along with similar oscillations in the experience and appraisal of the self, leads to chaotic and unstable relationship patterns, identity diffusion, and mood swings. The therapeutic process can be greatly impeded by these oscillations because the therapist too can come to be seen as all good or all bad. To attempt to overcome the negative effects on treatment outcomes, constant interpretations by the therapist are needed.

Splitting contributes to unstable relationships and intense emotional experiences. Splitting is common during adolescence, but is regarded as transient. Splitting has been noted especially with persons diagnosed with borderline personality disorder. Treatment strategies have been developed for individuals and groups based on dialectical behavior therapy, and for couples. There are also self-help books on related topics such as mindfulness and emotional regulation that claim to be helpful for individuals who struggle with the consequences of splitting.

Borderline personality disorder

Splitting is a relatively common defense mechanism for people with borderline personality disorder. One of the DSM-IV-TR criteria for this disorder is a description of splitting: "a pattern of unstable and intense interpersonal relationships characterized by alternating between extremes of idealization and devaluation". In psychoanalytic theory, people with borderline personality disorder are not able to integrate the good and bad images of both self and others, resulting in a bad representation which dominates the good representation.

Narcissistic personality disorder

People matching the diagnostic criteria for narcissistic personality disorder also use splitting as a central defense mechanism. Most often narcissists do this as an attempt to stabilize their sense of self-positivity in order to preserve their self-esteem, by perceiving themselves as purely upright or admirable and others who do not conform to their will or values as purely wicked or contemptible.

The cognitive habit of splitting also implies the use of other related defense mechanisms, namely idealization and devaluation, which are preventive attitudes or reactions to narcissistic rage and narcissistic injury.

Depression

In depression, exaggerated all-or-nothing thinking can form a self-reinforcing cycle: these thoughts might be called emotional amplifiers because, as they go around and around, they become more intense. Typical all-or-nothing thoughts include seeing one's efforts as either a complete success or an abject failure, and seeing oneself or other people as either all good or all bad.

Janet, Bleuler and Freud

Splitting of consciousness ("normal self" vs. "secondary self") was first described by Pierre Janet in De l'automatisme psychologique (1889). His ideas were extended by Eugen Bleuler (who in 1908 coined the word schizophrenia from the Ancient Greek skhízō [σχῐ́ζω, "to split"] and phrḗn [φρήν, "mind"]) and Sigmund Freud to explain the splitting (German: Spaltung) of consciousness—not (with Janet) as the product of innate weakness, but as the result of inner conflict. With the development of the idea of repression, splitting moved to the background of Freud's thought for some years, being largely reserved for cases of double personality. However, his late work saw a renewed interest in how it was "possible for the ego to avoid a rupture... by effecting a cleavage or division of itself", a theme which was extended in his Outline of Psycho-Analysis (1940a [1938]) beyond fetishism to the neurotic in general.

His daughter Anna Freud explored how, in healthy childhood development, a splitting of loving and aggressive instincts could be avoided.

Klein

There was, however, from early on, another use of the term "splitting" in Freud that referred rather to resolving ambivalence "by splitting the contradictory feelings so that one person is only loved, another one only hated ... the good mother and the wicked stepmother in fairy tales". Or, with opposing feelings of love and hate, perhaps "the two opposites should have been split apart and one of them, usually the hatred, has been repressed". Such splitting was closely linked to the defence of "isolation ... The division of objects into congenial and uncongenial ones ... making 'disconnections'".

It was the latter sense of the term that was predominantly adopted and exploited by Melanie Klein. After Freud, "the most important contribution has come from Melanie Klein, whose work enlightens the idea of 'splitting of the object' (Objektspaltung) (in terms of 'good/bad' objects)". In her object relations theory, Klein argues that "the earliest experiences of the infant are split between wholly good ones with 'good' objects and wholly bad experiences with 'bad' objects", as children struggle to integrate the two primary drives, love and hate, into constructive social interaction. An important step in childhood development is the gradual depolarization of these two drives.

At what Klein called the paranoid-schizoid position, there is a stark separation of the things the child loves (good, gratifying objects) and the things the child hates (bad, frustrating objects), "because everything is polarised into extremes of love and hate, just like what the baby seems to experience and young children are still very close to". Klein refers to the good breast and the bad breast as split mental entities, resulting from the way "these primitive states tend to deconstruct objects into 'good' and 'bad' bits (called 'part-objects')". The child sees the breasts as opposite in nature at different times, although they actually are the same, belonging to the same mother. As the child learns that people and objects can be good and bad at the same time, he or she progresses to the next phase, the depressive position, which "entails a steady, though painful, approximation towards the reality of oneself and others": integrating the splits and "being able to balance [them] out ... are tasks that continue into early childhood and indeed are never completely finished".

However, Kleinians also use Freud's first conception of splitting to explain the way "in a related process of splitting, the person divides his own self. This is called 'splitting of the ego'". Indeed, Klein herself maintained that "the ego is incapable of splitting the object—internal or external—without a corresponding splitting taking place within the ego". Arguably at least, by this point "the idea of splitting does not carry the same meaning for Freud and for Klein": for the former, "the ego finds itself 'passively' split, as it were. For Klein and the post-Kleinians, on the other hand, splitting is an 'active' defence mechanism". As a result, by the close of the century "four kinds of splitting can be clearly identified, among many other possibilities" for post-Kleinians: "a coherent split in the object, a coherent split in the ego, a fragmentation of the object, and a fragmentation of the ego".

Kernberg

In the developmental model of Otto Kernberg, the overcoming of splitting is also an important developmental task. The child has to learn to integrate feelings of love and hate. Kernberg distinguishes three different stages in the development of a child with respect to splitting:

  1. The child does not experience the self and the object, nor the good and the bad as different entities.
  2. Good and bad are viewed as different. Because the boundaries between the self and the other are not yet stable, the other, as a person, is viewed as either all good or all bad, depending on their actions. Thinking of another person as bad would imply that the self is bad as well, so it is better to think of the caregiver as good; the self is then viewed as good too: "Bringing together extremely opposite loving and hateful images of the self and of significant others would trigger unbearable anxiety and guilt".
  3. Splitting – "the division of external objects into 'all good' or 'all bad'" – begins to be resolved when the self and the other can be seen as possessing both good and bad qualities. Having hateful thoughts about the other does not mean that the self is all hateful and does not mean that the other person is all hateful either.

If a person fails to accomplish this developmental task satisfactorily, borderline pathology can emerge. In the borderline personality organization, Kernberg found 'dissociated ego states that result from the use of "splitting" defences'. His therapeutic work then aimed at "the analysis of the repeated and oscillating projections of unwanted self and object representations onto the therapist" so as to produce "something more durable, complex and encompassing than the initial, split-off and polarized state of affairs".

Horizontal and vertical

Heinz Kohut has emphasized in his self psychology the distinction between horizontal and vertical forms of splitting. Traditional psychoanalysis saw repression as forming a horizontal barrier between different levels of the mind, so that, for example, an unpleasant truth might be accepted superficially but denied in a deeper part of the psyche. Kohut contrasted with this the vertical fracture of the mind into two parts with incompatible attitudes, separated by mutual disavowal.

Transference

It has been suggested that interpretation of the transference "becomes effective through a sort of splitting of the ego into a reasonable, judging portion and an experiencing portion, the former recognizing the latter as not appropriate in the present and as coming from the past". Clearly, "in this sense, splitting, so far from being a pathological phenomenon, is a manifestation of self-awareness". Nevertheless, "it remains to be investigated how this desirable 'splitting of the ego' and 'self-observation' are to be differentiated from the pathological cleavage ... directed at preserving isolations".

Entropy (information theory)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Entropy_(information_theory)

In info...