
Monday, September 19, 2022

Popper's experiment

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Popper%27s_experiment

Popper's experiment is an experiment proposed by the philosopher Karl Popper to put to the test different interpretations of quantum mechanics (QM). In fact, as early as 1934, Popper began criticising the increasingly accepted Copenhagen interpretation, a popular subjectivist interpretation of quantum mechanics. Accordingly, in his most famous book, Logik der Forschung, he proposed a first experiment alleged to empirically discriminate between the Copenhagen interpretation and a realist interpretation, which he advocated. Einstein, however, wrote a letter to Popper about the experiment in which he raised some crucial objections, and Popper himself declared that this first attempt was "a gross mistake for which I have been deeply sorry and ashamed of ever since".

Popper, however, returned to the foundations of quantum mechanics from 1948, when he developed his criticism of determinism in both quantum and classical physics. Indeed, Popper greatly intensified his research activities on the foundations of quantum mechanics throughout the 1950s and 1960s, developing his interpretation of quantum mechanics in terms of real existing probabilities (propensities), thanks also to the support of a number of distinguished physicists (such as David Bohm).

Overview

In 1980, Popper proposed perhaps his most important, yet overlooked, contribution to QM: a "new simplified version of the EPR experiment".

The experiment was, however, published only two years later, in the third volume of the Postscript to the Logic of Scientific Discovery.

The most widely known interpretation of quantum mechanics is the Copenhagen interpretation put forward by Niels Bohr and his school. It maintains that observations lead to a wavefunction collapse, thereby suggesting the counter-intuitive result that two well separated, non-interacting systems require action-at-a-distance. Popper argued that such non-locality conflicts with common sense, and would lead to a subjectivist interpretation of phenomena, depending on the role of the 'observer'.

While the EPR argument was always meant to be a thought experiment, put forward to shed light on the intrinsic paradoxes of QM, Popper proposed an experiment which could actually be implemented, and he took part in a physics conference organised in Bari in 1983 to present his experiment and propose to experimentalists that they carry it out.

The actual realisation of Popper's experiment required new techniques, making use of the phenomenon of spontaneous parametric down-conversion, that had not yet been developed at that time; his experiment was therefore performed only in 1999, five years after Popper had died.

Description

Contrary to the first (mistaken) proposal of 1934, Popper's experiment of 1980 exploits pairs of entangled particles in order to put Heisenberg's uncertainty principle to the test.

Indeed, Popper maintains:

"I wish to suggest a crucial experiment to test whether knowledge alone is sufficient to create 'uncertainty' and, with it, scatter (as is contended under the Copenhagen interpretation), or whether it is the physical situation that is responsible for the scatter."

Popper's proposed experiment consists of a low-intensity source of particles that can generate pairs of particles traveling to the left and to the right along the x-axis. The beam's low intensity is "so that the probability is high that two particles recorded at the same time on the left and on the right are those which have actually interacted before emission."

There are two slits, one each in the paths of the two particles. Behind the slits are semicircular arrays of counters which can detect the particles after they pass through the slits (see Fig. 1). "These counters are coincident counters [so] that they only detect particles that have passed at the same time through A and B."

Fig.1 Experiment with both slits equally wide. Both the particles should show equal scatter in their momenta.

Popper argued that because the slits localize the particles to a narrow region along the y-axis, from the uncertainty principle they experience large uncertainties in the y-components of their momenta. This larger spread in the momentum will show up as particles being detected even at positions that lie outside the regions where particles would normally reach based on their initial momentum spread.

Popper suggests that we count the particles in coincidence, i.e., we count only those particles behind slit B whose partner has gone through slit A. Particles which are not able to pass through slit A are ignored.

The Heisenberg scatter for both the beams of particles going to the right and to the left, is tested "by making the two slits A and B wider or narrower. If the slits are narrower, then counters should come into play which are higher up and lower down, seen from the slits. The coming into play of these counters is indicative of the wider scattering angles which go with a narrower slit, according to the Heisenberg relations."
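The quoted passage can be made quantitative with the minimum-uncertainty estimate Δp_y ≈ ℏ/(2Δy): narrowing the slit enlarges the minimum transverse momentum spread, and hence the scattering angle. A small sketch with illustrative numbers (the de Broglie wavelength and slit widths are assumptions for illustration, not values from Popper's proposal):

```python
# Illustrative numbers only: a particle with an assumed de Broglie wavelength
# of 1 nm passing through slits of decreasing width.
import math

hbar = 1.054571817e-34               # J*s
wavelength = 1e-9                    # m, assumed for illustration
p = 2 * math.pi * hbar / wavelength  # de Broglie momentum

for slit_width in (1e-5, 1e-6, 1e-7):
    dy = slit_width / 2              # position uncertainty set by the slit
    dp_y = hbar / (2 * dy)           # minimum transverse momentum spread
    theta = dp_y / p                 # small-angle estimate of the scatter (rad)
    print(f"slit {slit_width:.0e} m -> scatter angle ~ {theta:.2e} rad")
```

Narrowing the slit by a factor of ten increases the minimum scatter angle by the same factor, which is exactly the "coming into play" of the higher and lower counters that Popper describes.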

Fig.2 Experiment with slit A narrowed, and slit B wide open. Should the two particles show equal scatter in their momenta? If they do not, Popper says, the Copenhagen interpretation is wrong. If they do, it indicates action at a distance, says Popper.

Now the slit at A is made very small and the slit at B very wide. Popper wrote that, according to the EPR argument, we have measured position "y" for both particles (the one passing through A and the one passing through B) with the precision Δy, and not just for the particle passing through slit A. This is because from the initial entangled EPR state we can calculate the position of particle 2, once the position of particle 1 is known, with approximately the same precision. We can do this, argues Popper, even though slit B is wide open.

Therefore, Popper states that "fairly precise 'knowledge'" about the y position of particle 2 is obtained; its y position is measured indirectly. And since it is, according to the Copenhagen interpretation, our knowledge which is described by the theory – and especially by the Heisenberg relations – it should be expected that the momentum of particle 2 scatters as much as that of particle 1, even though the slit A is much narrower than the widely opened slit at B.

Now the scatter can, in principle, be tested with the help of the counters. If the Copenhagen interpretation is correct, then such counters on the far side of B that are indicative of a wide scatter (and of a narrow slit) should now count coincidences: counters that did not count any particles before the slit A was narrowed.

To sum up: if the Copenhagen interpretation is correct, then any increase in the precision in the measurement of our mere knowledge of the particles going through slit B should increase their scatter.

Popper was inclined to believe that the test would decide against the Copenhagen interpretation, as it is applied to Heisenberg's uncertainty principle. If the test decided in favor of the Copenhagen interpretation, Popper argued, it could be interpreted as indicative of action at a distance.

The debate

Many viewed Popper's experiment as a crucial test of quantum mechanics, and there was a debate on what result an actual realization of the experiment would yield.

In 1985, Sudbery pointed out that the EPR state, which could be written as ψ(y₁, y₂) = ∫ e^(iky₁) e^(−iky₂) dk, already contained an infinite spread in momenta (tacit in the integral over k), so no further spread could be seen by localizing one particle. Although this pointed to a crucial flaw in Popper's argument, its full implication was not understood. Kripps theoretically analyzed Popper's experiment and predicted that narrowing slit A would lead to an increase in the momentum spread at slit B. Kripps also argued that his result was based purely on the formalism of quantum mechanics, without any interpretational problem. Thus, if Popper was challenging anything, he was challenging the central formalism of quantum mechanics.

In 1987, a major objection to Popper's proposal came from Collet and Loudon. They pointed out that because the particle pairs originating from the source had a zero total momentum, the source could not have a sharply defined position. They showed that once the uncertainty in the position of the source is taken into account, the blurring introduced washes out the Popper effect.

Furthermore, Redhead analyzed Popper's experiment with a broad source and concluded that it could not yield the effect that Popper was seeking.

Realizations

Fig.3 Schematic diagram of Kim and Shih's experiment based on a BBO crystal which generates entangled photons. The lens LS helps create a sharp image of slit A on the location of slit B.
 
Fig.4 Results of the photon experiment by Kim and Shih, aimed at realizing Popper's proposal. The diffraction pattern in the absence of slit B (red symbols) is much narrower than that in the presence of a real slit (blue symbols).

Kim–Shih's experiment

Popper's experiment was realized in 1999 by Yoon-Ho Kim & Yanhua Shih using a spontaneous parametric down-conversion photon source. They did not observe an extra spread in the momentum of particle 2 due to particle 1 passing through a narrow slit. They write:

"Indeed, it is astonishing to see that the experimental results agree with Popper’s prediction. Through quantum entanglement one may learn the precise knowledge of a photon’s position and would therefore expect a greater uncertainty in its momentum under the usual Copenhagen interpretation of the uncertainty relations. However, the measurement shows that the momentum does not experience a corresponding increase in uncertainty. Is this a violation of the uncertainty principle?"

Rather, the momentum spread of particle 2 (observed in coincidence with particle 1 passing through slit A) was narrower than its momentum spread in the initial state.

They concluded that:

"Popper and EPR were correct in the prediction of the physical outcomes of their experiments. However, Popper and EPR made the same error by applying the results of two-particle physics to the explanation of the behavior of an individual particle. The two-particle entangled state is not the state of two individual particles. Our experimental result is emphatically NOT a violation of the uncertainty principle which governs the behavior of an individual quantum."

This led to a renewed heated debate, with some even going to the extent of claiming that Kim and Shih's experiment had demonstrated that there is no non-locality in quantum mechanics.

Unnikrishnan (2001), discussing Kim and Shih's result, wrote that the result:

"is a solid proof that there is no state-reduction-at-a-distance. ... Popper's experiment and its analysis forces us to radically change the current held view on quantum non-locality."

Short criticized Kim and Shih's experiment, arguing that because of the finite size of the source, the localization of particle 2 is imperfect, which leads to a smaller momentum spread than expected. However, Short's argument implies that if the source were improved, we should see a spread in the momentum of particle 2.

Sancho carried out a theoretical analysis of Popper's experiment, using the path-integral approach, and found a similar kind of narrowing in the momentum spread of particle 2 to that observed by Kim and Shih. Although this calculation did not give any deep insight, it indicated that the experimental result of Kim and Shih agreed with quantum mechanics. It did not say anything about what bearing the result has on the Copenhagen interpretation, if any.

Ghost diffraction

Popper's conjecture has also been tested experimentally in the so-called two-particle ghost interference experiment. This experiment was not carried out with the purpose of testing Popper's ideas, but ended up giving a conclusive result about Popper's test. In this experiment two entangled photons travel in different directions. Photon 1 goes through a slit, but there is no slit in the path of photon 2. However, photon 2, if detected in coincidence with a fixed detector behind the slit detecting photon 1, shows a diffraction pattern. The width of the diffraction pattern for photon 2 increases when the slit in the path of photon 1 is narrowed. Thus, an increase in the precision of knowledge about photon 2, obtained by detecting photon 1 behind the slit, leads to an increase in the scatter of photon 2.

Predictions according to quantum mechanics

Tabish Qureshi has published the following analysis of Popper's argument.

The ideal EPR state is written as |ψ⟩ = ∫ |y, y⟩ dy = ∫ |p, −p⟩ dp, where the two labels in the "ket" state represent the positions or momenta of the two particles. This implies perfect correlation: detecting particle 1 at position x₀ will also lead to particle 2 being detected at x₀, and if particle 1 is measured to have a momentum p₀, particle 2 will be detected to have a momentum −p₀. The particles in this state have infinite momentum spread, and are infinitely delocalized. However, in the real world, correlations are always imperfect. Consider the following entangled state

ψ(y₁, y₂) = A ∫ e^(−p²/4σ²) e^(ip(y₁−y₂)/ℏ) dp · e^(−(y₁+y₂)²/16Ω²)

where σ represents a finite momentum spread, and Ω is a measure of the position spread of the particles. The uncertainties in position and momentum for the two particles can be written as

Δp₁ = Δp₂ = √(σ² + ℏ²/16Ω²),  Δy₁ = Δy₂ = √(Ω² + ℏ²/16σ²)

The action of a narrow slit on particle 1 can be thought of as reducing it to a narrow Gaussian state:

φ(y₁) = (2πε²)^(−1/4) e^(−y₁²/4ε²),

where ε characterises the width of slit A. This will reduce the state of particle 2 to the conditional state

φ(y₂) = C ∫ φ*(y₁) ψ(y₁, y₂) dy₁,

which is again a Gaussian in y₂ (C being a normalisation constant).

The momentum uncertainty of particle 2 can now be calculated, and is given by

Δp₂ = √[ (σ²(Ω² + ε²)/4 + ℏ²/64) / (Ω²/4 + ε²/16 + σ²Ω²ε²/ℏ²) ]
If we go to the extreme limit of slit A being infinitesimally narrow (ε → 0), the momentum uncertainty of particle 2 is √(σ² + ℏ²/16Ω²), which is exactly what the momentum spread was to begin with. In fact, one can show that the momentum spread of particle 2, conditioned on particle 1 going through slit A, is always less than or equal to √(σ² + ℏ²/16Ω²) (the initial spread), for any value of ε, σ, and Ω. Thus, particle 2 does not acquire any extra momentum spread than it already had. This is the prediction of standard quantum mechanics. So, the momentum spread of particle 2 will always be smaller than what was contained in the original beam. This is what was actually seen in the experiment of Kim and Shih. Popper's proposed experiment, if carried out in this way, is incapable of testing the Copenhagen interpretation of quantum mechanics.
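The inequality can be checked numerically. The sketch below assumes a Gaussian entangled state of the kind described in this section, with momentum-correlation width σ, position spread Ω, and Gaussian slit width ε; the closed-form expressions are a reconstruction from Gaussian integrals over such a state, not a quotation of Qureshi's published formulas.

```python
# Numerical check: the conditional momentum spread of particle 2 never
# exceeds the initial spread, and the narrow-slit limit recovers it exactly.
# Assumes a Gaussian entangled state (illustrative reconstruction).
import math
import random

hbar = 1.0  # natural units

def initial_spread(sigma, omega):
    # Momentum spread of either particle before any slit is applied.
    return math.sqrt(sigma**2 + hbar**2 / (16 * omega**2))

def conditional_spread(sigma, omega, eps):
    # Momentum spread of particle 2 after particle 1 passes a Gaussian
    # slit of width eps (result of Gaussian integrals over the state).
    a = sigma**2 / hbar**2
    b = 1 / (16 * omega**2)
    c = 1 / (4 * eps**2)
    return hbar * math.sqrt((4*a*b + a*c + b*c) / (a + b + c))

random.seed(0)
for _ in range(1000):
    sigma = random.uniform(0.1, 10)
    omega = random.uniform(0.1, 10)
    eps = random.uniform(1e-4, 10)
    assert conditional_spread(sigma, omega, eps) <= initial_spread(sigma, omega) + 1e-9

# In the eps -> 0 limit the conditional spread returns to the initial spread:
print(conditional_spread(1.0, 1.0, 1e-8), initial_spread(1.0, 1.0))
```

The inequality holds because it reduces algebraically to (a − b)² ≥ 0, so no choice of slit and source widths can produce the extra scatter Popper expected.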

On the other hand, if slit A is gradually narrowed, the momentum spread of particle 2 (conditioned on the detection of particle 1 behind slit A) will show a gradual increase (never beyond the initial spread, of course). This is what quantum mechanics predicts. Popper had said

"...if the Copenhagen interpretation is correct, then any increase in the precision in the measurement of our mere knowledge of the particles going through slit B should increase their scatter."

This particular aspect can be experimentally tested.

Faster-than-light signalling

The additional momentum scatter which Popper wrongly attributed to the Copenhagen interpretation would allow faster-than-light communication, which is excluded by the no-communication theorem in quantum mechanics. Note, however, that both Collet and Loudon and Qureshi compute that the scatter decreases as slit A is narrowed, contrary to the increase predicted by Popper. There was some controversy about whether this decrease would itself allow superluminal communication. But the reduction concerns the standard deviation of the conditional distribution of the position of particle 2, given that particle 1 did go through slit A, since only coincident detections are counted. The reduction of the conditional distribution allows the unconditional distribution to remain unchanged, which is all that matters for excluding superluminal communication. A conditional distribution would also differ from the unconditional one in classical physics. Moreover, measuring the conditional distribution after slit B requires the information on the result at slit A, which has to be communicated classically: the conditional distribution cannot be known as soon as the measurement is made at slit A, but only after the time required to transmit that information.
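The distinction between conditional and unconditional distributions is purely statistical and can be illustrated with a classical toy model (not a quantum simulation): pairs of correlated positions are sampled, and gating on "particle 1 passed the slit" narrows the coincidence-selected spread at B while leaving the full distribution at B untouched.

```python
# Classical toy Monte Carlo: coincidence gating changes the conditional
# spread at B but not the unconditional one, so nothing is signalled.
# All widths below are illustrative assumptions.
import random
import statistics

random.seed(1)
pairs = []
for _ in range(200_000):
    y = random.gauss(0, 1.0)        # shared "source" coordinate of the pair
    y1 = y + random.gauss(0, 0.2)   # particle 1 position at the slit-A plane
    y2 = y + random.gauss(0, 0.2)   # particle 2 position at the B plane
    pairs.append((y1, y2))

slit_half_width = 0.1
all_y2 = [y2 for _, y2 in pairs]                                 # what B alone sees
coincident_y2 = [y2 for y1, y2 in pairs if abs(y1) < slit_half_width]

print("unconditional stdev at B:", statistics.pstdev(all_y2))
print("coincidence-gated stdev :", statistics.pstdev(coincident_y2))
```

An observer at B who does not know the slit-A outcomes always records the wide, unchanged distribution; the narrow gated subset can only be extracted after the A results arrive by a classical channel.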

Protein structure

From Wikipedia, the free encyclopedia
 

Interactive diagram of protein structure, using PCNA as an example. (PDB: 1AXC​)

Protein structure is the three-dimensional arrangement of atoms in an amino-acid chain molecule. Proteins are polymers – specifically polypeptides – formed from sequences of amino acids, the monomers of the polymer. A single amino acid monomer may also be called a residue, indicating a repeating unit of a polymer. Proteins form by amino acids undergoing condensation reactions, in which the amino acids lose one water molecule per reaction in order to attach to one another with a peptide bond. By convention, a chain under 30 amino acids is often identified as a peptide, rather than a protein. To be able to perform their biological function, proteins fold into one or more specific spatial conformations driven by a number of non-covalent interactions such as hydrogen bonding, ionic interactions, van der Waals forces, and hydrophobic packing. To understand the functions of proteins at a molecular level, it is often necessary to determine their three-dimensional structure. This is the topic of the scientific field of structural biology, which employs techniques such as X-ray crystallography, NMR spectroscopy, cryo-electron microscopy (cryo-EM) and dual polarisation interferometry to determine the structure of proteins.

Protein structures range in size from tens to several thousand amino acids. By physical size, proteins are classified as nanoparticles, between 1–100 nm. Very large protein complexes can be formed from protein subunits. For example, many thousands of actin molecules assemble into a microfilament.

A protein usually undergoes reversible structural changes in performing its biological function. The alternative structures of the same protein are referred to as different conformations, and transitions between them are called conformational changes.

Levels of protein structure

There are four distinct levels of protein structure.

Four levels of protein structure

Primary structure

The primary structure of a protein refers to the sequence of amino acids in the polypeptide chain. The primary structure is held together by peptide bonds that are made during the process of protein biosynthesis. The two ends of the polypeptide chain are referred to as the carboxyl terminus (C-terminus) and the amino terminus (N-terminus) based on the nature of the free group on each extremity. Counting of residues always starts at the N-terminal end (NH2-group), which is the end where the amino group is not involved in a peptide bond. The primary structure of a protein is determined by the gene corresponding to the protein. A specific sequence of nucleotides in DNA is transcribed into mRNA, which is read by the ribosome in a process called translation. The sequence of amino acids in insulin was discovered by Frederick Sanger, establishing that proteins have defining amino acid sequences. The sequence of a protein is unique to that protein, and defines the structure and function of the protein. The sequence of a protein can be determined by methods such as Edman degradation or tandem mass spectrometry. Often, however, it is read directly from the sequence of the gene using the genetic code. The term "amino acid residues" is recommended when discussing proteins because, when a peptide bond is formed, a water molecule is lost; proteins are therefore made up of amino acid residues. Post-translational modifications such as phosphorylations and glycosylations are usually also considered a part of the primary structure, and cannot be read from the gene. For example, insulin is composed of 51 amino acids in 2 chains: one chain has 30 amino acids, and the other has 21.
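The gene-to-primary-structure path described above can be sketched in a few lines: transcribe a DNA coding strand to mRNA, then translate codons to residues. The hypothetical sequence and the five-entry codon table below are illustrative only; the real genetic code has 64 codons.

```python
# Minimal sketch of reading a primary structure from a gene.
# The tiny codon table covers only this example, not the full genetic code.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "GUU": "Val", "UAA": "STOP",
}

def transcribe(dna_coding_strand: str) -> str:
    # Coding-strand convention: mRNA matches the strand with T replaced by U.
    return dna_coding_strand.replace("T", "U")

def translate(mrna: str) -> list:
    residues = []
    for i in range(0, len(mrna) - 2, 3):       # read codon by codon
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "STOP":                        # stop codon ends translation
            break
        residues.append(aa)
    return residues

mrna = transcribe("ATGTTTGGCGTTTAA")            # hypothetical gene fragment
print(translate(mrna))                          # ['Met', 'Phe', 'Gly', 'Val']
```

Note that post-translational modifications, as the text points out, cannot be recovered this way: they are part of the primary structure but are not encoded in the gene.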

Secondary structure

An α-helix with hydrogen bonds (yellow dots)
 

Secondary structure refers to highly regular local sub-structures on the actual polypeptide backbone chain. Two main types of secondary structure, the α-helix and the β-strand or β-sheets, were suggested in 1951 by Linus Pauling et al. These secondary structures are defined by patterns of hydrogen bonds between the main-chain peptide groups. They have a regular geometry, being constrained to specific values of the dihedral angles ψ and φ on the Ramachandran plot. Both the α-helix and the β-sheet represent a way of saturating all the hydrogen bond donors and acceptors in the peptide backbone. Some parts of the protein are ordered but do not form any regular structures. They should not be confused with random coil, an unfolded polypeptide chain lacking any fixed three-dimensional structure. Several sequential secondary structures may form a "supersecondary unit".
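Because the α-helix and β-sheet are constrained to characteristic (φ, ψ) regions of the Ramachandran plot, a residue's secondary-structure tendency can be bucketed from its dihedral angles. The rectangular cutoffs below are rough illustrative approximations, not the boundaries used by real structure validators.

```python
# Rough sketch: bucket backbone dihedral angles (phi, psi, in degrees)
# into the classic Ramachandran regions. Cutoffs are illustrative only.
def ramachandran_region(phi: float, psi: float) -> str:
    if -160 <= phi <= -45 and -90 <= psi <= -10:
        return "alpha-helix"
    if -180 <= phi <= -45 and 90 <= psi <= 180:
        return "beta-sheet"
    return "other"

print(ramachandran_region(-57, -47))   # typical right-handed alpha-helix angles
print(ramachandran_region(-120, 120))  # typical beta-strand angles
```

A real assignment method (e.g. hydrogen-bond-pattern-based ones such as DSSP) uses far more than two angles, but the sketch shows why the Ramachandran plot separates the two main secondary structures so cleanly.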

Tertiary structure

Tertiary structure refers to the three-dimensional structure created by a single protein molecule (a single polypeptide chain). It may include one or several domains. The α-helices and β-pleated sheets are folded into a compact globular structure. The folding is driven by the non-specific hydrophobic interactions, the burial of hydrophobic residues from water, but the structure is stable only when the parts of a protein domain are locked into place by specific tertiary interactions, such as salt bridges, hydrogen bonds, the tight packing of side chains, and disulfide bonds. The disulfide bonds are extremely rare in cytosolic proteins, since the cytosol (intracellular fluid) is generally a reducing environment.

Quaternary structure

Quaternary structure is the three-dimensional structure consisting of the aggregation of two or more individual polypeptide chains (subunits) that operate as a single functional unit (multimer). The resulting multimer is stabilized by the same non-covalent interactions and disulfide bonds as in tertiary structure. There are many possible quaternary structure organisations. Complexes of two or more polypeptides (i.e. multiple subunits) are called multimers. Specifically it would be called a dimer if it contains two subunits, a trimer if it contains three subunits, a tetramer if it contains four subunits, and a pentamer if it contains five subunits. The subunits are frequently related to one another by symmetry operations, such as a 2-fold axis in a dimer. Multimers made up of identical subunits are referred to with a prefix of "homo-" and those made up of different subunits are referred to with a prefix of "hetero-", for example, a heterotetramer, such as the two alpha and two beta chains of hemoglobin.

Domains, motifs, and folds in protein structure

Protein domains. The two shown protein structures share a common domain (maroon), the PH domain, which is involved in phosphatidylinositol (3,4,5)-trisphosphate binding

Proteins are frequently described as consisting of several structural units. These units include domains, motifs, and folds. Despite the fact that there are about 100,000 different proteins expressed in eukaryotic systems, there are many fewer different domains, structural motifs and folds.

Structural domain

A structural domain is an element of the protein's overall structure that is self-stabilizing and often folds independently of the rest of the protein chain. Many domains are not unique to the protein products of one gene or one gene family but instead appear in a variety of proteins. Domains often are named and singled out because they figure prominently in the biological function of the protein they belong to; for example, the "calcium-binding domain of calmodulin". Because they are independently stable, domains can be "swapped" by genetic engineering between one protein and another to make chimeric proteins. A conservative combination of several domains that occur in different proteins, such as the protein tyrosine phosphatase domain and C2 domain pair, has been called a "superdomain" that may evolve as a single unit.

Structural and sequence motifs

Structural and sequence motifs refer to short segments of protein three-dimensional structure or amino acid sequence that are found in a large number of different proteins.

Supersecondary structure

Tertiary protein structures can have multiple secondary elements on the same polypeptide chain. The supersecondary structure refers to a specific combination of secondary structure elements, such as β-α-β units or a helix-turn-helix motif. Some of them may be also referred to as structural motifs.

Protein fold

A protein fold refers to the general protein architecture, like a helix bundle, β-barrel, Rossmann fold or different "folds" provided in the Structural Classification of Proteins database. A related concept is protein topology.

Protein dynamics and conformational ensembles

Proteins are not static objects, but rather populate ensembles of conformational states. Transitions between these states typically occur on timescales ranging from nanoseconds to seconds, and have been linked to functionally relevant phenomena such as allosteric signaling and enzyme catalysis. Protein dynamics and conformational changes allow proteins to function as nanoscale biological machines within cells, often in the form of multi-protein complexes. Examples include motor proteins, such as myosin, which is responsible for muscle contraction, kinesin, which moves cargo inside cells away from the nucleus along microtubules, and dynein, which moves cargo inside cells towards the nucleus and produces the axonemal beating of motile cilia and flagella. "[I]n effect, the [motile cilium] is a nanomachine composed of perhaps over 600 proteins in molecular complexes, many of which also function independently as nanomachines...Flexible linkers allow the mobile protein domains connected by them to recruit their binding partners and induce long-range allostery via protein domain dynamics."

Schematic view of the two main ensemble modeling approaches.

Proteins are often thought of as relatively stable tertiary structures that experience conformational changes after being affected by interactions with other proteins or as a part of enzymatic activity. However, proteins may have varying degrees of stability, and some of the less stable variants are intrinsically disordered proteins. These proteins exist and function in a relatively 'disordered' state lacking a stable tertiary structure. As a result, they are difficult to describe by a single fixed tertiary structure. Conformational ensembles have been devised as a way to provide a more accurate and 'dynamic' representation of the conformational state of intrinsically disordered proteins.

Protein ensemble files are a representation of a protein that can be considered to have a flexible structure. Creating these files requires determining which of the various theoretically possible protein conformations actually exist. One approach is to apply computational algorithms to the protein data in order to determine the most likely set of conformations for an ensemble file. There are multiple methods for preparing data for the Protein Ensemble Database that fall into two general methodologies – pool and molecular dynamics (MD) approaches (diagrammed in the figure). The pool-based approach uses the protein's amino acid sequence to create a massive pool of random conformations. This pool is then subjected to further computational processing that creates a set of theoretical parameters for each conformation based on the structure. Conformational subsets from this pool whose average theoretical parameters closely match known experimental data for the protein are selected. The alternative molecular dynamics approach takes multiple random conformations at a time and tests all of them against experimental data. Here the experimental data serve as limitations placed on the conformations (e.g. known distances between atoms). Only conformations that manage to remain within the limits set by the experimental data are accepted. This approach applies large amounts of experimental data to the conformations, which makes it a very computationally demanding task.
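The pool-based approach can be sketched in miniature: build a pool of random-coil conformations, compute one "theoretical parameter" per conformation (here the radius of gyration), and keep those consistent with a hypothetical measured value. The chain model, the target value, and the tolerance are all assumptions for illustration.

```python
# Sketch of pool-based ensemble selection: generate random-coil chains,
# score each by radius of gyration, keep those near an assumed measurement.
import math
import random

random.seed(42)

def random_conformation(n_residues, step=3.8):
    # Freely jointed chain, ~3.8 A between consecutive C-alpha positions.
    pos, coords = (0.0, 0.0, 0.0), []
    for _ in range(n_residues):
        theta = random.uniform(0, math.pi)
        phi = random.uniform(0, 2 * math.pi)
        pos = (pos[0] + step * math.sin(theta) * math.cos(phi),
               pos[1] + step * math.sin(theta) * math.sin(phi),
               pos[2] + step * math.cos(theta))
        coords.append(pos)
    return coords

def radius_of_gyration(coords):
    n = len(coords)
    cx = sum(p[0] for p in coords) / n
    cy = sum(p[1] for p in coords) / n
    cz = sum(p[2] for p in coords) / n
    return math.sqrt(sum((p[0] - cx)**2 + (p[1] - cy)**2 + (p[2] - cz)**2
                         for p in coords) / n)

pool = [random_conformation(50) for _ in range(2000)]
measured_rg, tolerance = 11.0, 2.0   # hypothetical experimental Rg (angstroms)
ensemble = [c for c in pool
            if abs(radius_of_gyration(c) - measured_rg) < tolerance]
print(f"kept {len(ensemble)} of {len(pool)} conformations")
```

Real pipelines score each conformation against many observables (NMR chemical shifts, SAXS curves, distance restraints) rather than a single scalar, but the filtering logic is the same.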

The conformational ensembles were generated for a number of highly dynamic and partially unfolded proteins, such as Sic1/Cdc4, p15 PAF, MKK7, Beta-synuclein and P27.

Protein folding

As they are translated, polypeptides exit the ribosome mostly as random coils and fold into their native states. The final structure of the protein chain is generally assumed to be determined by its amino acid sequence (Anfinsen's dogma).

Protein stability

Thermodynamic stability of proteins represents the free energy difference between the folded and unfolded protein states. This free energy difference is very sensitive to temperature, hence a change in temperature may result in unfolding or denaturation. Protein denaturation may result in loss of function, and loss of the native state. The free energy of stabilization of soluble globular proteins typically does not exceed 50 kJ/mol. Taking into consideration the large number of hydrogen bonds involved in the stabilization of secondary structures, and the stabilization of the inner core through hydrophobic interactions, the free energy of stabilization emerges as a small difference between large numbers.
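The "small difference between large numbers" can be made concrete with ΔG = ΔH − TΔS. The enthalpy and entropy values below are assumed, order-of-magnitude illustrations for a small globular protein, not measured data:

```python
# Illustrative (hypothetical) folding energetics: a modest net free energy
# emerges as the difference between two large opposing terms.
delta_H = -250_000.0  # J/mol, net favorable folding interactions (assumed)
delta_S = -730.0      # J/(mol*K), conformational entropy lost (assumed)
T = 300.0             # K

delta_G = delta_H - T * delta_S
print(f"Delta_G = {delta_G / 1000:.1f} kJ/mol")  # -31.0 kJ/mol
```

Both terms are on the order of hundreds of kJ/mol, yet the stability that remains is only ~31 kJ/mol, consistent with the ≤50 kJ/mol range quoted above; this is why a modest temperature change can tip the balance toward unfolding.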

Protein structure determination

Examples of protein structures from the PDB
 
Rate of Protein Structure Determination by Method and Year

Around 90% of the protein structures available in the Protein Data Bank have been determined by X-ray crystallography. This method allows one to measure the three-dimensional (3-D) density distribution of electrons in the protein in the crystallized state, and thereby to infer the 3-D coordinates of all the atoms to a certain resolution. Roughly 9% of the known protein structures have been obtained by nuclear magnetic resonance (NMR) techniques. For larger protein complexes, cryo-electron microscopy can determine protein structures. The resolution is typically lower than that of X-ray crystallography or NMR, but the maximum resolution is steadily increasing. This technique is still particularly valuable for very large protein complexes such as virus coat proteins and amyloid fibers.

General secondary structure composition can be determined via circular dichroism. Vibrational spectroscopy can also be used to characterize the conformation of peptides, polypeptides, and proteins. Two-dimensional infrared spectroscopy has become a valuable method to investigate the structures of flexible peptides and proteins that cannot be studied with other methods. A more qualitative picture of protein structure is often obtained by proteolysis, which is also useful to screen for more crystallizable protein samples. Novel implementations of this approach, including fast parallel proteolysis (FASTpp), can probe the structured fraction and its stability without the need for purification. Once a protein's structure has been experimentally determined, further detailed studies can be done computationally, using molecular dynamic simulations of that structure.

Protein structure databases

A protein structure database is a database that is modeled around the various experimentally determined protein structures. The aim of most protein structure databases is to organize and annotate the protein structures, providing the biological community with access to the experimental data in a useful way. Data included in protein structure databases often include 3D coordinates as well as experimental information, such as unit cell dimensions and angles for X-ray crystallography determined structures. Though most entries – in this case either proteins or specific structure determinations of a protein – also contain sequence information, and some databases even provide means for performing sequence-based queries, the primary attribute of a structure database is structural information, whereas sequence databases focus on sequence information and contain no structural information for the majority of entries. Protein structure databases are critical for many efforts in computational biology such as structure-based drug design, both in developing the computational methods used and in providing a large experimental dataset used by some methods to provide insights about the function of a protein.

Structural classifications of proteins

Protein structures can be grouped based on their structural similarity, topological class or a common evolutionary origin. The Structural Classification of Proteins database and CATH database provide two different structural classifications of proteins. When the structural similarity is large, the two proteins have possibly diverged from a common ancestor, and shared structure between proteins is considered evidence of homology. Structure similarity can then be used to group proteins together into protein superfamilies. If shared structure is significant but the fraction shared is small, the shared fragment may be the consequence of a more dramatic evolutionary event such as horizontal gene transfer, and joining proteins sharing these fragments into protein superfamilies is no longer justified. The topology of a protein can be used to classify proteins as well. Knot theory and circuit topology are two topology frameworks developed for the classification of protein folds based on chain crossings and intrachain contacts, respectively.

Computational prediction of protein structure

The generation of a protein sequence is much easier than the determination of a protein structure. However, the structure of a protein gives much more insight into the function of the protein than its sequence does. Therefore, a number of methods for the computational prediction of protein structure from its sequence have been developed. Ab initio prediction methods use just the sequence of the protein. Threading and homology modeling methods can build a 3-D model for a protein of unknown structure from experimental structures of evolutionarily related proteins, called a protein family.
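Homology modeling, as described above, starts by finding an evolutionarily related protein of known structure to serve as a template. A simple proxy for that relatedness is percent sequence identity over an alignment. The sketch below is illustrative only (not any specific tool's scoring function) and assumes the two sequences are already aligned to equal length, with `-` marking gaps:

```python
# Illustrative sketch: percent identity between a query sequence and a
# candidate template sequence, the simplest score a homology-modeling
# pipeline might use to rank templates. Assumes a pre-computed, gapped,
# equal-length alignment; gap characters are excluded from the count.

def percent_identity(query, template):
    """Percent of aligned, non-gap positions with identical residues."""
    if len(query) != len(template):
        raise ValueError("sequences must be pre-aligned to equal length")
    aligned = [(q, t) for q, t in zip(query, template) if q != "-" and t != "-"]
    if not aligned:
        return 0.0
    matches = sum(1 for q, t in aligned if q == t)
    return 100.0 * matches / len(aligned)

# Hypothetical aligned fragments: 8 identical residues out of 9 non-gap columns.
print(round(percent_identity("MKTAYIAK-R", "MKSAYIAKQR"), 1))  # 88.9
```

In practice, template search uses substitution matrices and statistical significance rather than raw identity, but the ranking idea is the same: the closer the evolutionary relationship, the more reliable the modeled structure.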

Christian countercult movement

From Wikipedia, the free encyclopedia

The Christian countercult movement or the Christian anti-cult movement is a social movement among certain Protestant evangelical and fundamentalist and other Christian ministries ("discernment ministries") and individual activists who oppose religious sects that they consider cults.

Overview

Christian countercult activism mainly stems from evangelicalism or fundamentalism. The countercult movement asserts that particular Christian sects are erroneous because their beliefs are not in accordance with the teachings of the Bible. It also states that a religious sect can be considered a cult if its beliefs involve a denial of any of the essential Christian teachings (such as salvation, the Trinity, Jesus himself as a person, the ministry and miracles of Jesus, his crucifixion, his resurrection, the Second Coming and the Rapture).

Countercult ministries often concern themselves with religious sects that consider themselves Christian but hold beliefs that are thought to contradict the teachings of the Bible. Such sects may include: The Church of Jesus Christ of Latter-day Saints, the Unification Church, Christian Science, and the Jehovah's Witnesses. Some Protestants classify the Catholic Church as a cult. Some also denounce non-Christian religions such as Islam, Wicca, Paganism, New Age groups, Buddhism and Hinduism, as well as UFO religions.

Countercult literature usually expresses specific doctrinal or theological concerns and it also has a missionary or apologetic purpose. It presents a rebuttal by emphasizing the teachings of the Bible against the beliefs of non-fundamental Christian sects. Christian countercult activist writers also emphasize the need for Christians to evangelize to followers of cults. Some Christians also share concerns similar to those of the secular anti-cult movement.

The movement publishes its views through a variety of media, including books, magazines, and newsletters, radio broadcasting, audio and video cassette production, direct-mail appeals, proactive evangelistic encounters, professional and avocational websites, as well as lecture series, training workshops and counter-cult conferences.

History

Precursors and pioneers

Christians have applied theological criteria to assess the teachings of non-orthodox movements throughout church history. The Apostles themselves were involved in challenging the doctrines and claims of various teachers. The Apostle Paul wrote an entire epistle, Galatians, antagonistic to the teachings of a Jewish sect that claimed adherence to the teachings of both Jesus and Moses (cf. Acts 15 and Gal. 1:6–10). The First Epistle of John is devoted to countering early proto-Gnostic cults that had arisen in the first century CE, all claiming to be Christian (1 John 2:19).

The early Church in the post-apostolic period was much more involved in "defending its frontiers against alternative soteriologies—either by defining its own position with greater and greater exactness, or by attacking other religions, and particularly the Hellenistic mysteries." In fact, a good deal of the early Christian literature is devoted to the exposure and refutation of unorthodox theology, mystery religions and Gnostic groups. Irenaeus, Tertullian and Hippolytus of Rome were some of the early Christian apologists who engaged in critical analyses of unorthodox theology, Greco-Roman pagan religions, and Gnostic groups.

In the Protestant tradition, some of the earliest writings opposing unorthodox groups (such as the Swedenborgians) can be traced back to John Wesley, Alexander Campbell and Princeton Theological Seminary theologians like Charles Hodge and B. B. Warfield. The first known usage of the term cult by a Protestant apologist to denote that a group is heretical or unorthodox is in Anti-Christian Cults by A. H. Barrington, published in 1898.

Quite a few of the pioneering apologists were Baptist pastors, like I. M. Haldeman, or participants in the Plymouth Brethren, like William C. Irvine and Sydney Watson. Watson wrote a series of didactic novels like Escaped from the Snare: Christian Science, Bewitched by Spiritualism, and The Gilded Lie (Millennial Dawnism), as warnings of the dangers posed by cultic groups. Watson's use of fiction to counter the cults has been repeated by later novelists like Frank E. Peretti.

The early twentieth-century apologists generally applied the words heresy and sects to groups like the Christadelphians, Mormons, Jehovah's Witnesses, Spiritualists, and Theosophists. This was reflected in several chapters contributed to the multi-volume work The Fundamentals, released in 1915, in which apologists criticized the teachings of Charles Taze Russell, Mary Baker Eddy, the Mormons and Spiritualists.

Mid-twentieth-century apologists

Since the 1940s, the approach of traditional Christians has been to apply the term cult to religious groups that use scriptures besides the Bible, or whose teachings and practices deviate from traditional Christian teachings and practices. Several published sources document this approach.

One of the first prominent countercult apologists was Jan Karel van Baalen (1890–1968), an ordained minister in the Christian Reformed Church in North America. His book The Chaos of Cults, which was first published in 1938, became a classic in the field as it was repeatedly revised and updated until 1962.

Walter Ralston Martin

Historically, one of the most important protagonists of the movement was Walter Martin (1928–1989), whose numerous books include the 1955 The Rise of the Cults: An Introductory Guide to the Non-Christian Cults and the 1965 The Kingdom of the Cults: An Analysis of Major Cult Systems in the Present Christian Era, which continues to be influential. He became well-known in conservative Christian circles through a radio program, "The Bible Answer Man", currently hosted by Hank Hanegraaff.

In The Rise of the Cults Martin gave the following definition of a cult:

By cultism we mean the adherence to doctrines which are pointedly contradictory to orthodox Christianity and which yet claim the distinction of either tracing their origin to orthodox sources or of being in essential harmony with those sources. Cultism, in short, is any major deviation from orthodox Christianity relative to the cardinal doctrines of the Christian faith.

As Martin's definition suggests, the countercult ministries concentrate on non-traditional groups that claim to be Christian, so chief targets have been Jehovah's Witnesses, Armstrongism, Christian Science and the Unification Church, along with smaller groups like the Swedenborgian Church.

Various other conservative Christian leaders—among them John Ankerberg and Norman Geisler—have emphasized themes similar to Martin's. Perhaps more importantly, numerous other well-known conservative Christian leaders as well as many conservative pastors have accepted Martin's definition of a cult as well as his understanding of the groups to which he gave that label. Dave Breese summed up this kind of definition in these words:

A cult is a religious perversion. It is a belief and practice in the world of religion which calls for devotion to a religious view or leader centered in false doctrine. It is an organized heresy. A cult may take many forms but it is basically a religious movement which distorts or warps orthodox faith to the point where truth becomes perverted into a lie. A cult is impossible to define except against the absolute standard of the teaching of Holy Scripture.

Discernment blogging

Kenne "Ken" Silva is said by other discernment bloggers to have pioneered online discernment ministry. Ken was a Baptist pastor who ran the discernment blog "Apprising". Silva wrote many blog articles about the Emerging Church, the Word of Faith Movement, the Jehovah's Witnesses, the Gay Christian Movement, and many other groups. He started his blog in 2005 and wrote there until his death in 2014.

Silva's work paved the way for other internet discernment ministries such as Pirate Christian Radio, a group of blogs and podcasts founded by Lutheran pastor Chris Rosebrough in 2008, and Pulpit & Pen, a discernment blog founded by Baptist pastor and polemicist J.D. Hall.

Other technical terminology

Since the 1980s the term new religions or new religious movements has slowly entered into evangelical usage alongside the word cult. Some book titles use both terms.

The acceptance of these alternatives to the word cult in evangelicalism reflects, in part, the wider usage of such language in the sociology of religion.

Apologetics

The term countercult apologetics first appeared in Protestant evangelical literature as a self-designation in the late 1970s and early 1980s in articles by Ronald Enroth and David Fetcho, and by Walter Martin in Martin Speaks Out on the Cults. A mid-1980s debate about apologetic methodology between Ronald Enroth and J. Gordon Melton led the latter to place more emphasis in his publications on differentiating the Christian countercult from the secular anti-cult. Eric Pement urged Melton to adopt the label "Christian countercult", and since the early 1990s the term has entered into popular usage and is recognized by sociologists such as Douglas Cowan.

The only existing umbrella organization within the countercult movement in the United States is the EMNR (Evangelical Ministries to New Religions), founded in 1982 by Martin, Enroth, Gordon Lewis, and James Bjornstad.

Worldwide organizations

While the greatest number of countercult ministries are found in the United States, ministries also exist in Australia, Brazil, Canada, Denmark, England, Ethiopia, Germany, Hungary, Italy, Mexico, New Zealand, the Philippines, Romania, Russia, Sweden, and Ukraine. A comparison between the methods employed in the United States and other nations discloses some similarities, but also differences, in emphasis. Globally, these ministries share a common concern about the evangelization of people in cults and new religions, and there is often a common thread of comparing orthodox doctrines and biblical passages with the teachings of the groups under examination. In some of the European and southern-hemisphere contexts, however, confrontational methods of engagement are not always relied on, and dialogical approaches are sometimes advocated.

A group of organizations that originated within the context of established religion works in the more general field of "cult awareness," especially in Europe. Their leaders are theologians, and they are often social ministries affiliated with large churches.

Protestant

  • Berlin-based Pfarramt für Sekten- und Weltanschauungsfragen (Parish Office for Sects and World Views) headed by Lutheran pastor Thomas Gandow
  • Swiss Evangelische Informationsstelle Kirchen-Sekten-Religionen (Protestant Reformed Zwinglian Information Service on Churches, Sects and Religions) headed by Zwinglian parson Georg Schmid

Catholic

  • Sekten und Weltanschauungen in Sachsen (Sects and ideologies in Saxony)
  • Weltanschauungen und religiöse Gruppierungen (Worldviews and religious groups) of the Roman Catholic Diocese of Linz, Austria
  • GRIS (Gruppo di ricerca e informazione socio-religiosa), Italy

Orthodox

Contextual missiology

The phenomenon of cults has also entered into the discourses of Christian missions and theology of religions. An initial step in this direction occurred in 1980 when the Lausanne Committee for World Evangelization convened a mini-consultation in Thailand. From that consultation a position paper was produced. The issue was revisited at the Lausanne Forum in 2004 with another paper. The latter paper adopts a different methodology from that advocated in 1980.

In the 1990s, discussions in academic missions and theological journals indicate that another trajectory is emerging that reflects the influence of contextual missions theory. Advocates of this approach maintain that apologetics as a tool needs to be retained, but do not favor a confrontational style of engagement.

Variations and models

Countercult apologetics has several variations and methods employed in analyzing and responding to cults. The different nuances in countercult apologetics have been discussed by John A. Saliba and Philip Johnson.

The dominant method is the emphasis on detecting unorthodox or heretical doctrines and contrasting those with orthodox interpretations of the Bible and early creedal documents. Some apologists, such as Francis J. Beckwith, have emphasized a philosophical approach, pointing out logical, epistemological and metaphysical problems within the teachings of a particular group. Another approach involves former members of cultic groups recounting their spiritual autobiographies, which highlight experiences of disenchantment with the group, unanswered questions and doubts about commitment to the group, culminating in the person's conversion to evangelical Christianity.

Apologists like Dave Hunt in Peace, Prosperity and the Coming Holocaust and Hal Lindsey in The Terminal Generation have tended to interpret the phenomenon of cults as part of the burgeoning evidence of signs that Christ's Second Advent is close at hand. Both Hunt and Constance Cumbey have applied a conspiracy model to interpreting the emergence of New Age spirituality, linking it to speculations about fulfilled prophecies heralding Christ's reappearance.

Prominent advocates

People

Organizations

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...