
Tuesday, January 25, 2022

Baryon asymmetry

From Wikipedia, the free encyclopedia

In physical cosmology, the baryon asymmetry problem, also known as the matter asymmetry problem or the matter–antimatter asymmetry problem, is the observed imbalance between baryonic matter (the type of matter experienced in everyday life) and antibaryonic matter in the observable universe. Neither the Standard Model of particle physics nor the theory of general relativity provides a known explanation for why this should be so, and it is a natural assumption that the universe is neutral with respect to all conserved charges. The Big Bang should have produced equal amounts of matter and antimatter. Since this does not seem to have been the case, some physical laws must have acted differently on matter and antimatter, or not at all. Several competing hypotheses exist to explain the imbalance of matter and antimatter that resulted in baryogenesis. However, there is as yet no consensus theory to explain the phenomenon, which has been described as "one of the great mysteries in physics".

Sakharov conditions

In 1967, Andrei Sakharov proposed a set of three necessary conditions that a baryon-generating interaction must satisfy to produce matter and antimatter at different rates. These conditions were inspired by the then-recent discoveries of the cosmic background radiation and of CP violation in the neutral kaon system. The three necessary "Sakharov conditions" are:

Baryon number violation

Baryon number violation is a necessary condition to produce an excess of baryons over anti-baryons. But C-symmetry violation is also needed so that the interactions which produce more baryons than anti-baryons will not be counterbalanced by interactions which produce more anti-baryons than baryons. CP-symmetry violation is similarly required because otherwise equal numbers of left-handed baryons and right-handed anti-baryons would be produced, as well as equal numbers of left-handed anti-baryons and right-handed baryons. Finally, the interactions must be out of thermal equilibrium, since otherwise CPT symmetry would assure compensation between processes increasing and decreasing the baryon number.

Currently, there is no experimental evidence of particle interactions where the conservation of baryon number is broken perturbatively: this would appear to suggest that all observed particle reactions have equal baryon number before and after. Mathematically, the commutator of the baryon number quantum operator with the (perturbative) Standard Model Hamiltonian is zero: [B, H] = 0. However, the Standard Model is known to violate the conservation of baryon number non-perturbatively, through a global U(1) anomaly. To account for baryon violation in baryogenesis, such events (including proton decay) can occur in Grand Unification Theories (GUTs) and supersymmetric (SUSY) models via hypothetical massive bosons such as the X boson.

CP-symmetry violation

The second condition for generating baryon asymmetry—violation of charge-parity symmetry—is that a process is able to happen at a different rate to its antimatter counterpart. In the Standard Model, CP violation appears as a complex phase in the quark mixing matrix of the weak interaction. There may also be a non-zero CP-violating phase in the neutrino mixing matrix, but this is currently unmeasured. The first in a series of basic physics principles to be violated was parity through Chien-Shiung Wu's experiment. This led to CP violation being verified in the 1964 Fitch–Cronin experiment with neutral kaons, which resulted in the 1980 Nobel Prize in physics (direct CP violation, that is violation of CP symmetry in a decay process, was discovered later, in 1999). Due to CPT symmetry, violation of CP symmetry demands violation of time inversion symmetry, or T-symmetry. Despite the allowance for CP violation in the Standard Model, it is insufficient to account for the observed baryon asymmetry of the universe given the limits on baryon number violation, meaning that beyond-Standard Model sources are needed.

A possible new source of CP violation was found at the Large Hadron Collider (LHC) by the LHCb collaboration during the first three years of LHC operations (beginning March 2010). The experiment analyzed the decays of two particles, the bottom lambda baryon (Λb0) and its antiparticle, and compared the distributions of decay products. The data showed an asymmetry of up to 20% in CP-violation-sensitive quantities, implying a breaking of CP symmetry. This analysis will need to be confirmed by more data from subsequent runs of the LHC.

Interactions out of thermal equilibrium

In the out-of-equilibrium decay scenario, the last condition states that the rate of a reaction which generates baryon-asymmetry must be less than the rate of expansion of the universe. In this situation the particles and their corresponding antiparticles do not achieve thermal equilibrium due to rapid expansion decreasing the occurrence of pair-annihilation.

Other explanations

Regions of the universe where antimatter dominates

Another possible explanation of the apparent baryon asymmetry is that matter and antimatter are essentially separated into different, widely distant regions of the universe. The formation of antimatter galaxies was originally thought to explain the baryon asymmetry, as from a distance, antimatter atoms are indistinguishable from matter atoms; both produce light (photons) in the same way. Along the boundary between matter and antimatter regions, however, annihilation (and the subsequent production of gamma radiation) would be detectable, depending on its distance and the density of matter and antimatter. Such boundaries, if they exist, would likely lie in deep intergalactic space. The density of matter in intergalactic space is reasonably well established at about one atom per cubic meter. Assuming this is a typical density near a boundary, the gamma ray luminosity of the boundary interaction zone can be calculated. No such zones have been detected, but 30 years of research have placed bounds on how far they might be. On the basis of such analyses, it is now deemed unlikely that any region within the observable universe is dominated by antimatter.

One attempt to explain the lack of observable interfaces between matter and antimatter dominated regions is that they are separated by a Leidenfrost layer of very hot matter created by the energy released from annihilation. This is similar to the manner in which water may be separated from a hot plate by a layer of evaporated vapor, delaying the evaporation of more water.

Electric dipole moment

The presence of an electric dipole moment (EDM) in any fundamental particle would violate both parity (P) and time (T) symmetries. As such, an EDM would allow matter and antimatter to decay at different rates leading to a possible matter–antimatter asymmetry as observed today. Many experiments are currently being conducted to measure the EDM of various physical particles. All measurements are currently consistent with no dipole moment. However, the results do place rigorous constraints on the amount of symmetry violation that a physical model can permit. The most recent EDM limit, published in 2014, was that of the ACME Collaboration, which measured the EDM of the electron using a pulsed beam of thorium monoxide (ThO) molecules.

Mirror anti-universe

The Big Bang generated a universe–antiuniverse pair: our universe flows forward in time, while its mirror counterpart flows backward.

The state of the universe, as it is, does not violate CPT symmetry, because the Big Bang could be considered a two-sided event, both classically and quantum mechanically, consisting of a universe-antiuniverse pair. This means that this universe is the charge (C), parity (P) and time (T) image of the anti-universe. The pair emerged from the Big Bang epoch directly into a hot, radiation-dominated era. The antiuniverse would flow back in time from the Big Bang, becoming bigger as it does so, and would also be dominated by antimatter. Its spatial properties are inverted compared to those in our universe, a situation analogous to creating electron-positron pairs in a vacuum. This model, devised by physicists from the Perimeter Institute for Theoretical Physics in Canada, proposes that temperature fluctuations in the cosmic microwave background (CMB) are due to the quantum-mechanical nature of space-time near the Big Bang singularity. This means that a point in the future of our universe and a point in the distant past of the antiuniverse would provide fixed classical points, while all possible quantum-based permutations would exist in between. Quantum uncertainty causes the universe and antiuniverse not to be exact mirror images of each other.

It has not been shown whether this model can reproduce certain observations regarding the inflation scenario, such as explaining the uniformity of the cosmos on large scales. However, it provides a natural and straightforward explanation for dark matter. Such a universe-antiuniverse pair would produce large numbers of superheavy neutrinos, also known as sterile neutrinos. These neutrinos might also be the source of recently observed bursts of high-energy cosmic rays.

Baryon asymmetry parameter

The challenges to the physics theories are then to explain how to produce the predominance of matter over antimatter, and also the magnitude of this asymmetry. An important quantifier is the asymmetry parameter

    η = (nB − n̄B) / nγ.

This quantity relates the overall number density difference between baryons and antibaryons (nB and n̄B, respectively) to the number density of cosmic background radiation photons nγ.
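As a rough sanity check, the asymmetry parameter can be estimated from present-day observations. The inputs below are assumed round numbers, not taken from this article: the Planck 2018 baryon density Ωb h² ≈ 0.0224 and a photon density of about 411 per cubic centimeter.

```python
# Rough estimate of eta = (nB - n̄B) / nγ today, where n̄B ≈ 0.
# Assumed inputs (not from this article):
#   baryon density   Omega_b h^2 ≈ 0.0224      (Planck 2018)
#   critical density rho_crit = 1.878e-29 h^2  g/cm^3
#   proton mass      m_p = 1.673e-24 g
#   photon density   n_gamma ≈ 411 per cm^3 at T = 2.725 K
omega_b_h2 = 0.0224
rho_crit_h2 = 1.878e-29      # g/cm^3 per unit h^2
m_p = 1.673e-24              # g
n_gamma = 411.0              # photons per cm^3

n_b = omega_b_h2 * rho_crit_h2 / m_p   # baryons per cm^3 (h^2 cancels)
eta = n_b / n_gamma
print(f"n_B ≈ {n_b:.2e} cm^-3, eta ≈ {eta:.1e}")   # → eta ≈ 6.1e-10
```

The result, a few parts in 10¹⁰, is the small number that any baryogenesis mechanism must explain.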

According to the Big Bang model, matter decoupled from the cosmic background radiation (CBR) at a temperature of roughly 3000 kelvin, corresponding to an average kinetic energy of 3000 K / (10.08×10³ K/eV) = 0.3 eV. After the decoupling, the total number of CBR photons remains constant. Therefore, due to space-time expansion, the photon density decreases. The photon density nγ at equilibrium temperature T, per cubic centimeter, is given by

    nγ = (2 ζ(3) / π²) · (kB T / ħc)³,

with kB as the Boltzmann constant, ħ as the Planck constant divided by 2π and c as the speed of light in vacuum, and ζ(3) as Apéry's constant. At the current CBR photon temperature of 2.725 K, this corresponds to a photon density nγ of around 411 CBR photons per cubic centimeter.
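The quoted figure of around 411 photons per cubic centimeter can be checked numerically; a minimal sketch evaluating the blackbody photon density with CODATA constants in CGS units:

```python
import math

# Photon number density nγ = (2 ζ(3)/π²) (k_B T / ħc)³ at temperature T,
# evaluated with CGS constants.
k_B = 1.380649e-16      # Boltzmann constant, erg/K
hbar = 1.054572e-27     # reduced Planck constant, erg*s
c = 2.99792458e10       # speed of light, cm/s
zeta3 = 1.2020569       # Apéry's constant ζ(3)

def photon_density(T):
    """Photons per cm^3 of a blackbody at temperature T (kelvin)."""
    return (2 * zeta3 / math.pi**2) * (k_B * T / (hbar * c))**3

print(f"{photon_density(2.725):.1f}")   # → about 410.5, i.e. ~411 per cm^3
```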

Therefore, the asymmetry parameter η, as defined above, is not the "good" parameter. Instead, the preferred asymmetry parameter uses the entropy density s,

    ηs = (nB − n̄B) / s,

because the entropy density of the universe remained reasonably constant throughout most of its evolution. The entropy density is

    s = S/V = (p + ρ) / T = (2π² / 45) g⋆(T) T³   (in natural units),

with p and ρ as the pressure and density from the energy density tensor Tμν, and g⋆ as the effective number of degrees of freedom for "massless" particles (inasmuch as mc² ≪ kBT holds) at temperature T,

    g⋆(T) = Σ(bosons) gi (Ti/T)³ + (7/8) Σ(fermions) gj (Tj/T)³,

for bosons and fermions with gi and gj degrees of freedom at temperatures Ti and Tj respectively. Presently, s = 7.04nγ.
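The quoted ratio s = 7.04 nγ follows from today's effective degrees of freedom: photons (g = 2) plus three neutrino species at temperature Tν = (4/11)^(1/3) T. A quick numerical check, noting that both s and nγ scale as T³ so their ratio is temperature independent:

```python
import math

# s = (2π²/45) g* T³ and nγ = (2ζ(3)/π²) T³ in natural units, so
# s / nγ = π⁴ g* / (45 ζ(3)).
zeta3 = 1.2020569   # Apéry's constant ζ(3)

# Entropy degrees of freedom today: photons (g = 2) plus three neutrino
# species (fermions, g = 2 each, weight 7/8) at Tν³/T³ = 4/11.
g_star = 2 + (7 / 8) * 6 * (4 / 11)

ratio = math.pi**4 * g_star / (45 * zeta3)
print(f"s / n_gamma = {ratio:.2f}")   # → 7.04
```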

Laterality

From Wikipedia, the free encyclopedia

The term laterality refers to the preference most humans show for one side of their body over the other. Examples include left-handedness/right-handedness and left/right-footedness; it may also refer to the primary use of the left or right hemisphere in the brain. It may also apply to animals or plants. The majority of tests have been conducted on humans, specifically to determine the effects on language.

Human

The majority of humans are right-handed. Many are also right-sided in general (that is, they prefer to use their right eye, right foot and right ear if forced to make a choice between the two). The reasons for this are not fully understood, but it is thought that because the left cerebral hemisphere of the brain controls the right side of the body, the right side is generally stronger; it is suggested that the left cerebral hemisphere is dominant over the right in most humans because in 90-92% of all humans, the left hemisphere is the language hemisphere.

Human cultures are predominantly right-handed, and so the right-sided trend may be socially as well as biologically enforced. This is quite apparent from a quick survey of languages. The English word "left" comes from the Anglo-Saxon word lyft which means "weak" or "useless". Similarly, the French word for left, gauche, is also used to mean "awkward" or "tactless", and sinistra, the Latin word from which the English word "sinister" was derived, means "left". Similarly, in many cultures the word for "right" also means "correct". The English word "right" comes from the Anglo-Saxon word riht which also means "straight" or "correct."

This linguistic and social bias is not restricted to European cultures: for example, Chinese characters are designed for right-handers to write, and no significant left-handed culture has ever been found in the world.

When a person is forced to use the hand opposite of the hand that they would naturally use, this is known as forced laterality, or more specifically forced dextrality. A study done by the Department of Neurology at Keele University, North Staffordshire Royal Infirmary suggests that forced dextrality may be part of the reason that the percentage of left-handed people decreases with the higher age groups, both because the effects of pressures toward right-handedness are cumulative over time (hence increasing with age for any given person subjected to them) and because the prevalence of such pressure is decreasing, such that fewer members of younger generations face any such pressure to begin with.

Ambidexterity is when a person has approximately equal skill with both hands and/or both sides of the body. True ambidexterity is very rare. Although a small number of people can write competently with both hands and use both sides of their body well, even these people usually show preference for one side of their body over the other. However, this preference is not necessarily consistent for all activities. Some people may, for instance, use their right hand for writing, and their left hand for playing racket sports and eating.

Also, it is not uncommon for people who prefer the right hand to prefer the left leg, e.g. when using a shovel, kicking a ball, or operating control pedals. In many cases, this may be because they are disposed toward left-handedness but have been trained for right-handedness; this mixed preference is usually called "cross-dominance" and is sometimes associated with learning and behavioural disorders. In the sport of cricket, some players may find that they are more comfortable bowling with their left or right hand, but batting with the other hand.


Laterality of motor and sensory control has been the subject of recent intense study and review. It turns out that the hemisphere of speech is the hemisphere of action in general and that the command hemisphere is located either in the right or the left hemisphere (never in both). Around 80% of people are left-hemispheric for speech and the remainder are right-hemispheric: ninety percent of right-handers are left-hemispheric for speech, but only 50% of left-handers are right-hemispheric for speech (the remainder are left-hemispheric). The reaction time of the neurally dominant side of the body (the side opposite to the major hemisphere, or command center, as just defined) is shorter than that of the opposite side by an interval equal to the interhemispheric transfer time. Thus, one in five persons has a handedness opposite to that for which they are wired (per laterality of the command center, or "brainedness", as determined by the reaction-time studies mentioned above).

Different expressions

  • Board footedness: The stance in a boardsport is not necessarily the same as the normal-footedness of the person. In skateboarding and other board sports, a “goofy footed” stance is one with the right foot leading. A stance with the left foot forward is called “regular” or “normal” stance.
  • Jump and spin: Direction of rotation in figure skating jumps and spins is not necessarily the same as the footedness or the handedness of each person. A skater can jump and spin counter-clockwise (the most common direction), yet be left-footed and left-handed.
  • Ocular dominance: The eye preferred when binocular vision is not possible, as through a keyhole or monocular microscope.

Speech

Cerebral dominance or specialization has been studied in relation to a variety of human functions. With speech in particular, many studies have been used as evidence that it is generally localized in the left hemisphere. Research comparing the effects of lesions in the two hemispheres, split-brain patients, and perceptual asymmetries have aided in the knowledge of speech lateralization. In one particular study, the left hemisphere's sensitivity to differences in rapidly changing sound cues was noted (Annett, 1991). This has real world implication, since very fine acoustic discriminations are needed to comprehend and produce speech signals. In an electrical stimulation demonstration performed by Ojemann and Mateer (1979), the exposed cortex was mapped revealing the same cortical sites were activated in phoneme discrimination and mouth movement sequences (Annett, 1991).

As suggested by Kimura (1975, 1982), left hemisphere speech lateralization might be based upon a preference for movement sequences as demonstrated by American Sign Language (ASL) studies. Since ASL requires intricate hand movements for language communication, it was proposed that skilled hand motions and speech require sequences of action over time. In deaf patients suffering from a left hemispheric stroke and damage, noticeable losses in their abilities to sign were noted. These cases were compared to studies of normal speakers with dysphasias located at lesioned areas similar to the deaf patients. In the same study, deaf patients with right hemispheric lesions did not display any significant loss of signing nor any decreased capacity for motor sequencing (Annett, 1991).

One theory, known as the acoustic laterality theory, holds that the physical properties of certain speech sounds are what determine laterality to the left hemisphere. Stop consonants, for example t, p, or k, leave a defined silent period at the end of words that can easily be distinguished. This theory postulates that rapidly changing sounds such as these are preferentially processed by the left hemisphere. Because the right ear is responsible for transmission of sounds to the left hemisphere, it is capable of perceiving these sounds with rapid changes. This right-ear advantage in hearing and speech laterality was evidenced in dichotic listening studies. Magnetic imaging results from one such study showed greater left-hemisphere activation when actual words were presented as opposed to pseudowords (Shtyrov, Pihko, and Pulvermuller, 2005). Two important aspects of speech recognition are phonetic cues, such as formant patterning, and prosodic cues, such as intonation, accent, and emotional state of the speaker (Imaizumi, Koichi, Kiritani, Hosoi & Tonoike, 1998).

In a study done with both monolinguals and bilinguals, which took into account language experience, second language proficiency, and onset of bilingualism among other variables, researchers were able to demonstrate left hemispheric dominance. In addition, bilinguals that began speaking a second language early in life demonstrated bilateral hemispheric involvement. The findings of this study were able to predict differing patterns of cerebral language lateralization in adulthood (Hull & Vaid, 2006).

In other animals

It has been shown that cerebral lateralization is a widespread phenomenon in the animal kingdom. Functional and structural differences between left and right brain hemispheres can be found in many other vertebrates and also in invertebrates.

It has been proposed that negative, withdrawal-associated emotions are processed predominantly by the right hemisphere, whereas the left hemisphere is largely responsible for processing positive, approach-related emotions. This has been called the "laterality-valence hypothesis".

One sub-set of laterality in animals is limb dominance. Preferential limb use for specific tasks has been shown in species including chimpanzees, mice, bats, wallabies, parrots, chickens and toads.

Another form of laterality is hemispheric dominance for processing conspecific vocalizations, reported for chimpanzees, sea lions, dogs, zebra finches and Bengalese finches.

In mice

In mice (Mus musculus), laterality in paw usage has been shown to be a learned behavior (rather than inherited), due to which, in any population, half of the mice become left-handed while the other half become right-handed. The learning occurs by a gradual reinforcement of randomly occurring weak asymmetries in paw choice early in training, even when training in an unbiased world. Reinforcement relies on short-term and long-term memory skills that are strain-dependent, causing strains to differ in the degree of laterality of their individuals. Long-term memory of previously gained laterality in handedness due to training is heavily diminished in mice with an absent corpus callosum and reduced hippocampal commissure. Regardless of the amount of past training and consequent biasing of paw choice, there is a degree of randomness in paw choice that is not removed by training, which may provide adaptability to changing environments.

In other mammals

Domestic horses (Equus caballus) exhibit laterality in at least two areas of neural organization, i.e. sensory and motor. In thoroughbreds, the strength of motor laterality increases with age. Horses under 4 years old have a preference to initially use the right nostril during olfaction. Along with olfaction, French horses have an eye laterality when looking at novel objects. There is a correlation between their score on an emotional index and eye preference; horses with higher emotionality are more likely to look with their left eye. The less emotive French saddlebreds glance at novel objects using the right eye, however, this tendency is absent in the trotters, although the emotive index is the same for both breeds. Racehorses exhibit laterality in stride patterns as well. They use their preferred stride pattern at all times whether racing or not, unless they are forced to change it while turning, injured, or fatigued.

In domestic dogs (Canis familiaris), there is a correlation between motor laterality and noise sensitivity: a lack of paw preference is associated with noise-related fearfulness (Branson and Rogers, 2006). Fearfulness is an undesirable trait in guide dogs; therefore, testing for laterality can be a useful predictor of a successful guide dog. Knowing a guide dog's laterality can also be useful for training, because the dog may be better at walking to the left or the right of their blind owner.

Domestic cats (Felis catus) show an individual handedness when reaching for static food. In one study, 46% preferred to use the right paw, 44% the left, and 10% were ambi-lateral; 60% used one paw 100% of the time. There was no difference between male and female cats in the proportions of left and right paw preferences. In moving-target reaching tests, cats have a left-sided behavioural asymmetry. One study indicates that laterality in this species is strongly related to temperament. Furthermore, individuals with stronger paw preferences are rated as more confident, affectionate, active, and friendly.
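Paw-preference studies of this kind commonly summarize reach counts with a laterality index, LI = (R − L)/(R + L), ranging from −1 (fully left-pawed) to +1 (fully right-pawed). The counts below are hypothetical, for illustration only:

```python
def laterality_index(right, left):
    """Standard laterality index: +1 fully right-pawed, -1 fully left-pawed."""
    return (right - left) / (right + left)

# Hypothetical paw-reach counts (right, left) for three cats.
cats = {"cat_a": (48, 2), "cat_b": (5, 45), "cat_c": (26, 24)}

for name, (r, l) in cats.items():
    li = laterality_index(r, l)
    # A common convention: |LI| > 0.5 counts as a clear side preference.
    side = "right" if li > 0.5 else "left" if li < -0.5 else "ambilateral"
    print(f"{name}: LI = {li:+.2f} ({side})")
```

With such an index, the "46% right, 44% left, 10% ambi-lateral" figures above correspond to how many individuals fall in each band of the LI distribution.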

Chimpanzees show right-handedness in certain conditions. This is expressed at the population level for females, but not males. The complexity of the task has a dominant effect on handedness in chimps.

Cattle use visual/brain lateralisation in their visual scanning of novel and familiar stimuli. Domestic cattle prefer to view novel stimuli with the left eye, (similar to horses, Australian magpies, chicks, toads and fish) but use the right eye for viewing familiar stimuli.

Schreibers' long-fingered bat is lateralized at the population level and shows a left-hand bias for climbing or grasping.

Fossil remains of some types of mastodon indicate laterality through differing tusk lengths.

In marsupials

Marsupials are fundamentally different from other mammals in that they lack a corpus callosum. However, wild kangaroos and other macropod marsupials have a left-hand preference for everyday tasks. Left-handedness is particularly apparent in the red kangaroo (Macropus rufus) and the eastern gray kangaroo (Macropus giganteus). The red-necked wallaby (Macropus rufogriseus) preferentially uses the left hand for behaviours that involve fine manipulation, but the right for behaviours that require more physical strength. There is less evidence for handedness in arboreal species.

In birds

Parrots tend to favor one foot when grasping objects (for example fruit when feeding). Some studies indicate that most parrots are left footed.

The Australian magpie (Gymnorhina tibicen) uses both left-eye and right-eye laterality when performing anti-predator responses, which include mobbing. Prior to withdrawing from a potential predator, Australian magpies view the animal with the left eye (85%), but prior to approaching, the right eye is used (72%). The left eye is used prior to jumping (73%) and prior to circling (65%) the predator, as well as during circling (58%) and for high alert inspection of the predator (72%). The researchers commented that "mobbing and perhaps circling are agonistic responses controlled by the LE[left eye]/right hemisphere, as also seen in other species. Alert inspection involves detailed examination of the predator and likely high levels of fear, known to be right hemisphere function."

Yellow-legged gull (Larus michahellis) chicks show laterality when reverting from a supine to prone posture, and also in pecking at a dummy parental bill to beg for food. Lateralization occurs at both the population and individual level in the reverting response and at the individual level in begging. Females have a leftward preference in the righting response, indicating this is sex dependent. Laterality in the begging response in chicks varies according to laying order and matches variation in egg androgens concentration. 

In fish

Laterality determines the organisation of rainbowfish (Melanotaenia spp.) schools. These fish demonstrate an individual eye preference when examining their reflection in a mirror. Fish which show a right-eye preference in the mirror test prefer to be on the left side of the school. Conversely, fish that show a left-eye preference in the mirror test, or that are non-lateralised, prefer to be slightly to the right side of the school. The behaviour depends on the species and sex of the school.

In amphibians

Three species of toads, the common toad (Bufo bufo), green toad (Bufo viridis) and the cane toad (Bufo marinus) show stronger escape and defensive responses when a model predator was placed on the toad's left side compared to their right side. Emei music frogs (Babina daunchina) have a right-ear preference for positive or neutral signals such as a conspecific's advertisement call and white noise, but a left-ear preference for negative signals such as predatory attack.

In invertebrates

The Mediterranean fruit fly (Ceratitis capitata) exhibits left-biased population-level lateralisation of aggressive displays (boxing with forelegs and wing strikes) with no sex-differences. In ants, Temnothorax albipennis (rock ant) scouts show behavioural lateralization when exploring unknown nest sites, showing a population-level bias to prefer left turns. One possible reason for this is that its environment is partly maze-like and consistently turning in one direction is a good way to search and exit mazes without getting lost. This turning bias is correlated with slight asymmetries in the ants' compound eyes (differential ommatidia count).

Sequence homology

From Wikipedia, the free encyclopedia
 
Gene phylogeny as red and blue branches within grey species phylogeny. Top: An ancestral gene duplication produces two paralogs (histone H1.1 and 1.2). A speciation event produces orthologs in the two daughter species (human and chimpanzee). Bottom: in a separate species (E. coli), a gene has a similar function (histone-like nucleoid-structuring protein) but has a separate evolutionary origin and so is an analog.

Sequence homology is the biological homology between DNA, RNA, or protein sequences, defined in terms of shared ancestry in the evolutionary history of life. Two segments of DNA can have shared ancestry because of three phenomena: either a speciation event (orthologs), or a duplication event (paralogs), or else a horizontal (or lateral) gene transfer event (xenologs).

Homology among DNA, RNA, or proteins is typically inferred from their nucleotide or amino acid sequence similarity. Significant similarity is strong evidence that two sequences are related by evolutionary changes from a common ancestral sequence. Alignments of multiple sequences are used to indicate which regions of each sequence are homologous.

Identity, similarity, and conservation

A sequence alignment of mammalian histone proteins. Sequences are the middle 120-180 amino acid residues of the proteins. Residues that are conserved across all sequences are highlighted in grey. The key below denotes conserved sequence (*), conservative mutations (:), semi-conservative mutations (.), and non-conservative mutations ( ).

The term "percent homology" is often used to mean "sequence similarity", that is, the percentage of identical residues (percent identity) or the percentage of residues conserved with similar physicochemical properties (percent similarity), e.g. leucine and isoleucine. Based on the definition of homology given above, this terminology is incorrect: sequence similarity is the observation, while homology is the conclusion. Sequences are either homologous or not, which implies that the term "percent homology" is a misnomer.
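The distinction between percent identity and percent similarity can be made concrete with a small sketch. The similarity groups used here are an illustrative simplification; real tools score similarity with substitution matrices such as BLOSUM62:

```python
# Simplified groups of physicochemically similar amino acids (illustrative
# only; real tools use substitution matrices such as BLOSUM62).
SIMILAR = [set("ILVM"), set("FYW"), set("KRH"), set("DE"), set("ST"), set("NQ")]

def identity_and_similarity(a, b):
    """Percent identity and percent similarity of two pre-aligned sequences."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    ident = sum(x == y for x, y in zip(a, b))
    simil = sum(
        x == y or any(x in g and y in g for g in SIMILAR)
        for x, y in zip(a, b)
    )
    n = len(a)
    return 100 * ident / n, 100 * simil / n

# Leucine (L) vs isoleucine (I) counts toward similarity but not identity.
pid, psim = identity_and_similarity("MKVLT", "MKVIT")
print(f"identity {pid:.0f}%, similarity {psim:.0f}%")  # identity 80%, similarity 100%
```

Note that neither number says anything about homology by itself; that remains an evolutionary inference.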

As with morphological and anatomical structures, sequence similarity might occur because of convergent evolution, or, as with shorter sequences, by chance, meaning that they are not homologous. Homologous sequence regions are also called conserved. This is not to be confused with conservation in amino acid sequences, where the amino acid at a specific position has been substituted with a different one that has functionally equivalent physicochemical properties.

Partial homology can occur where a segment of the compared sequences has a shared origin, while the rest does not. Such partial homology may result from a gene fusion event.

Orthology

Top: An ancestral gene duplicates to produce two paralogs (Genes A and B). A speciation event produces orthologs in the two daughter species. Bottom: in a separate species, an unrelated gene has a similar function (Gene C) but has a separate evolutionary origin and so is an analog.

Homologous sequences are orthologous if they are inferred to be descended from the same ancestral sequence separated by a speciation event: when a species diverges into two separate species, the copies of a single gene in the two resulting species are said to be orthologous. Orthologs, or orthologous genes, are genes in different species that originated by vertical descent from a single gene of the last common ancestor. The term "ortholog" was coined in 1970 by the molecular evolutionist Walter Fitch.

For instance, the plant Flu regulatory protein is present both in Arabidopsis (a multicellular higher plant) and Chlamydomonas (a single-celled green alga). The Chlamydomonas version is more complex: it crosses the membrane twice rather than once, contains additional domains, and undergoes alternative splicing. However, it can fully substitute for the much simpler Arabidopsis protein if transferred from the algal to the plant genome by means of genetic engineering. Significant sequence similarity and shared functional domains indicate that these two genes are orthologous, inherited from the shared ancestor.

Orthology is strictly defined in terms of ancestry. Given that the exact ancestry of genes in different organisms is difficult to ascertain due to gene duplication and genome rearrangement events, the strongest evidence that two similar genes are orthologous is usually found by carrying out phylogenetic analysis of the gene lineage. Orthologs often, but not always, have the same function.

Orthologous sequences provide useful information in taxonomic classification and phylogenetic studies of organisms. The pattern of genetic divergence can be used to trace the relatedness of organisms. Two organisms that are very closely related are likely to display very similar DNA sequences between two orthologs. Conversely, an organism that is further removed evolutionarily from another organism is likely to display a greater divergence in the sequence of the orthologs being studied.
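The divergence pattern described above can be made concrete with a toy percent-identity calculation (a hypothetical sketch; the sequences are illustrative, not real genes, and real analyses would first align the sequences with a pairwise aligner):

```python
def percent_identity(seq_a, seq_b):
    """Percent of identical positions in an ungapped alignment of equal length."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Hypothetical orthologous stretches from a close and a distant relative:
human = "MKTAYIAKQR"
chimp = "MKTAYIAKQR"  # close relative: identical
fish  = "MKSAYLAQQR"  # distant relative: several substitutions

print(percent_identity(human, chimp))  # 100.0
print(percent_identity(human, fish))   # 70.0
```

Closely related organisms show higher identity between orthologs; more distant ones show greater divergence, as the paragraph above describes.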

Databases of orthologous genes

Given their tremendous importance for biology and bioinformatics, orthologous genes have been organized in several specialized databases that provide tools to identify and analyze orthologous gene sequences. These resources employ approaches that can be generally classified into those that use heuristic analysis of all pairwise sequence comparisons, and those that use phylogenetic methods. Sequence comparison methods were first pioneered in the COGs database in 1997. These methods have been extended and automated in the following databases:

  • AYbRAH: Analyzing Yeasts by Reconstructing Ancestry of Homologs
  • eggNOG
  • GreenPhylDB for plants
  • InParanoid focuses on pairwise ortholog relationships
  • OHNOLOGS is a repository of the genes retained from whole genome duplications in the vertebrate genomes including human and mouse.
  • OMA
  • OrthoDB appreciates that the orthology concept is relative to different speciation points by providing a hierarchy of orthologs along the species tree.
  • OrthoInspector is a repository of orthologous genes for 4753 organisms covering the three domains of life
  • OrthologID
  • OrthoMaM for mammals
  • OrthoMCL
  • Roundup

Tree-based phylogenetic approaches aim to distinguish speciation from gene duplication events by comparing gene trees with species trees, and are implemented in a number of databases and software tools.

A third category of hybrid approaches uses both heuristic and phylogenetic methods to construct clusters and determine trees, for example:

  • EnsemblCompara GeneTrees
  • HomoloGene
  • Ortholuge

Paralogy

Paralogous genes are genes that are related via duplication events in the last common ancestor (LCA) of the species being compared. They result from the mutation of duplicated genes during separate speciation events. When descendants of the LCA share mutated homologs of the original duplicated genes, those genes are considered paralogs.

As an example, in the LCA one gene (gene A) may be duplicated to make a separate, similar gene (gene B); both genes will continue to be passed to subsequent generations. During speciation, one environment will favor a mutation in gene A (gene A1), producing a new species with genes A1 and B. Then, in a separate speciation event, another environment will favor a mutation in gene B (gene B1), giving rise to a new species with genes A and B1. The descendants' genes A1 and B1 are paralogous to each other because they are homologs related via a duplication event in the last common ancestor of the two species.

Additional classifications of paralogs include alloparalogs (out-paralogs) and symparalogs (in-paralogs). Alloparalogs are paralogs that evolved from gene duplications that preceded the given speciation event. In other words, alloparalogs are paralogs that evolved from duplication events that happened in the LCA of the organisms being compared. The example above is an example alloparalogy. Symparalogs are paralogs that evolved from gene duplication of paralogous genes in subsequent speciation events. From the example above, if the descendant with genes A1 and B underwent another speciation event where gene A1 duplicated, the new species would have genes B, A1a, and A1b. In this example, genes A1a and A1b are symparalogs.
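The distinctions above can be summarized as a small decision rule (a hypothetical sketch; the event labels are assumptions made for illustration, not a standard API):

```python
def classify_homologs(last_shared_event, duplication_before_speciation=None):
    """Classify a pair of homologous genes by the event at their most
    recent common ancestor, following the definitions in the text."""
    if last_shared_event == "speciation":
        return "ortholog"
    if last_shared_event == "duplication":
        if duplication_before_speciation:
            return "alloparalog (out-paralog)"  # duplication preceded speciation
        return "symparalog (in-paralog)"        # duplication followed speciation
    raise ValueError("event must be 'speciation' or 'duplication'")

# From the example above: A1 and B1 diverged at a duplication in the LCA,
# before the speciation events, so they are alloparalogs.
print(classify_homologs("duplication", duplication_before_speciation=True))
# A1a and A1b arose from a duplication after speciation: symparalogs.
print(classify_homologs("duplication", duplication_before_speciation=False))
```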

Vertebrate Hox genes are organized in sets of paralogs. Each Hox cluster (HoxA, HoxB, etc.) is on a different chromosome. For instance, the human HoxA cluster is on chromosome 7. The mouse HoxA cluster has 11 paralogous genes (2 are missing).

Paralogous genes can shape the structure of whole genomes and thus explain genome evolution to a large extent. Examples include the Homeobox (Hox) genes in animals. These genes not only underwent gene duplications within chromosomes but also whole genome duplications. As a result, Hox genes in most vertebrates are clustered across multiple chromosomes with the HoxA-D clusters being the best studied.

Another example are the globin genes which encode myoglobin and hemoglobin and are considered to be ancient paralogs. Similarly, the four known classes of hemoglobins (hemoglobin A, hemoglobin A2, hemoglobin B, and hemoglobin F) are paralogs of each other. While each of these proteins serves the same basic function of oxygen transport, they have already diverged slightly in function: fetal hemoglobin (hemoglobin F) has a higher affinity for oxygen than adult hemoglobin. Function is not always conserved, however. Human angiogenin diverged from ribonuclease, for example, and while the two paralogs remain similar in tertiary structure, their functions within the cell are now quite different.

It is often asserted that orthologs are more functionally similar than paralogs of similar divergence, but several papers have challenged this notion.

Regulation

Paralogs are often regulated differently, e.g. by having different tissue-specific expression patterns (see Hox genes). However, they can also be regulated differently on the protein level. For instance, Bacillus subtilis encodes two paralogues of glutamate dehydrogenase: GudB is constitutively transcribed whereas RocG is tightly regulated. In their active, oligomeric states, both enzymes show similar enzymatic rates. However, swaps of enzymes and promoters cause severe fitness losses, thus indicating promoter–enzyme coevolution. Characterization of the proteins shows that, compared to RocG, GudB's enzymatic activity is highly dependent on glutamate and pH.

Paralogous chromosomal regions

Sometimes, large regions of chromosomes share gene content similar to other chromosomal regions within the same genome. They are well characterised in the human genome, where they have been used as evidence to support the 2R hypothesis. Sets of duplicated, triplicated and quadruplicated genes, with the related genes on different chromosomes, are deduced to be remnants from genome or chromosomal duplications. A set of paralogy regions is together called a paralogon. Well-studied sets of paralogy regions include regions of human chromosome 2, 7, 12 and 17 containing Hox gene clusters, collagen genes, keratin genes and other duplicated genes, regions of human chromosomes 4, 5, 8 and 10 containing neuropeptide receptor genes, NK class homeobox genes and many more gene families, and parts of human chromosomes 13, 4, 5 and X containing the ParaHox genes and their neighbors. The Major histocompatibility complex (MHC) on human chromosome 6 has paralogy regions on chromosomes 1, 9 and 19. Much of the human genome seems to be assignable to paralogy regions.

Ohnology

  • A whole-genome duplication event produces a genome with two ohnolog copies of each gene.
  • A speciation event produces orthologs of a gene in the two daughter species; a horizontal gene transfer event from one species to another adds a xenolog of the gene to its genome.
  • A speciation event produces orthologs of a gene in the two daughter species; subsequent hybridisation of those species generates a hybrid genome with a homoeolog copy of each gene from both species.

Ohnologous genes are paralogous genes that have originated by a process of 2R whole-genome duplication. The name was first given in honour of Susumu Ohno by Ken Wolfe. Ohnologues are useful for evolutionary analysis because all ohnologues in a genome have been diverging for the same length of time (since their common origin in the whole genome duplication). Ohnologues are also known to show greater association with cancers, dominant genetic disorders, and pathogenic copy number variations.

Xenology

Homologs resulting from horizontal gene transfer between two organisms are termed xenologs. Xenologs can have different functions if the new environment is vastly different for the horizontally moving gene. In general, though, xenologs typically have similar function in both organisms. The term was coined by Walter Fitch.

Homoeology

Homoeologous (also spelled homeologous) chromosomes or parts of chromosomes are those brought together following inter-species hybridization and allopolyploidization to form a hybrid genome, and whose relationship was completely homologous in an ancestral species. In allopolyploids, the homologous chromosomes within each parental sub-genome should pair faithfully during meiosis, leading to disomic inheritance; however in some allopolyploids, the homoeologous chromosomes of the parental genomes may be nearly as similar to one another as the homologous chromosomes, leading to tetrasomic inheritance (four chromosomes pairing at meiosis), intergenomic recombination, and reduced fertility.

Gametology

Gametology denotes the relationship between homologous genes on non-recombining, opposite sex chromosomes. The term was coined by García-Moreno and Mindell (2000). Gametologs result from the origination of genetic sex determination and barriers to recombination between sex chromosomes. Examples of gametologs include CHDW and CHDZ in birds.

Molecular mimicry

From Wikipedia, the free encyclopedia

Molecular mimicry is defined as the theoretical possibility that sequence similarities between foreign and self-peptides are sufficient to result in the cross-activation of autoreactive T or B cells by pathogen-derived peptides. Despite the prevalence of several peptide sequences which can be both foreign and self in nature, a single antibody or TCR (T cell receptor) can be activated by just a few crucial residues, which underscores the importance of structural homology in the theory of molecular mimicry. Upon the activation of B or T cells, it is believed that these "peptide mimic"-specific T or B cells can cross-react with self-epitopes, thus leading to tissue pathology (autoimmunity). Molecular mimicry has only recently been recognized as one of several ways in which autoimmunity can be evoked. Despite its low statistical probability of occurring, a molecular mimicry event is more than an epiphenomenon, and such events have serious implications in the onset of many human autoimmune disorders.

In the past decade the study of autoimmunity, the failure to recognize self antigens as "self", has grown immensely. Autoimmunity is thought by many researchers to be a result of a loss of immunological tolerance, the ability for an individual to discriminate between self and non-self, though others are beginning to think that many autoimmune diseases are due to mutations governing programmed cell death, or to environmental products that injure target tissues, thus causing a release of immunostimulatory alarm signals. Growth in the field of autoimmunity has resulted in more and more frequent diagnosis of autoimmune diseases. Consequently, recent data show that autoimmune diseases affect approximately 1 in 31 people within the general population. Growth has also led to a greater characterization of what autoimmunity is and how it can be studied and treated. With an increased amount of research, there has been tremendous growth in the study of the several different ways in which autoimmunity can occur, one of which is molecular mimicry. The mechanism by which pathogens have evolved, or obtained by chance, similar amino acid sequences or the homologous three-dimensional crystal structure of immunodominant epitopes remains a mystery.

Immunological tolerance

Tolerance is a fundamental property of the immune system. Tolerance involves non-self discrimination, the ability of the normal immune system to recognize and respond to foreign antigens but not self antigens. Autoimmunity is evoked when this tolerance to self antigen is broken. Tolerance within an individual is normally established during fetal development. This is known as maternal-fetal tolerance, in which B cells expressing receptors specific for a particular antigen enter the circulation of the developing fetus via the placenta.

After pre-T cells leave the bone marrow where they are synthesized, they move to the thymus, where the maturation of T cells occurs. It is here that the first wave of T cell tolerance arises. Within the thymus, pre-T cells will encounter various self and foreign antigens that enter the thymus from peripheral sites via the circulatory system. Within the thymus, pre-T cells undergo a selection process in which they must be positively selected and should avoid negative selection. T cells that bind with low avidity to self-MHC receptors are positively selected for maturation; those that do not die by apoptosis. Cells that survive positive selection but bind strongly to self-antigens are negatively selected, also by active induction of apoptosis. This negative selection is known as clonal deletion, one of the mechanisms for T cell tolerance. Approximately 99 percent of pre-T cells within the thymus are negatively selected; only approximately 1 percent are positively selected for maturity.

However, there is only a limited repertoire of antigens that T cells can encounter within the thymus. T cell tolerance must therefore also occur within the periphery, after the induction of T cell tolerance within the thymus, as a more diverse group of antigens can be encountered in peripheral tissues. This same positive and negative selection mechanism, but in peripheral tissues, is known as clonal anergy. The mechanism of clonal anergy is important to maintain tolerance to many autologous antigens. Active suppression is the other known mechanism of T cell tolerance. Active suppression involves the injection of large amounts of foreign antigen in the absence of an adjuvant, which leads to a state of unresponsiveness. This unresponsive state is then transferred from the injected donor to a naïve recipient to induce a state of tolerance within the recipient.

Tolerance is also produced in B cells, through various processes. Just as in T cells, clonal deletion and clonal anergy can physically eliminate autoreactive B cell clones. Receptor editing is another mechanism for B cell tolerance. This involves the reactivation or maintenance of V(D)J recombination in the cell, which leads to the expression of novel receptor specificities through V region gene rearrangements that create variation in the heavy and light immunoglobulin (Ig) chains.

Autoimmunity

Autoimmunity can thus be defined simply as exceptions to the tolerance "rules." By doing this, an immune response is generated against self-tissue and cells. These mechanisms are known by many to be intrinsic. However, there are pathogenic mechanisms for the generation of autoimmune disease. Pathogens can induce autoimmunity by polyclonal activation of B or T cells, or increased expression of major histocompatibility complex (MHC) class I or II molecules. There are several ways in which a pathogen can cause an autoimmune response. A pathogen may contain a protein that acts as a mitogen to encourage cell division, thus causing more B or T cell clones to be produced. Similarly, a pathogenic protein may act as a superantigen which causes rapid polyclonal activation of B or T cells. Pathogens can also cause the release of cytokines resulting in the activation of B or T cells, or they can alter macrophage function. Finally, pathogens may also expose B or T cells to cryptic determinants, which are self antigen determinants that have not been processed and presented sufficiently to tolerize the developing T cells in the thymus and are presented at the periphery where the infection occurs.

Molecular mimicry was characterized as recently as the 1970s as another mechanism by which a pathogen can generate autoimmunity. Molecular mimicry is defined as similar structures shared by molecules from dissimilar genes or by their protein products. Either the linear amino acid sequence or the conformational fit of the immunodominant epitope may be shared between the pathogen and host. This is also known as "cross-reactivity" between the self antigen of the host and the immunodominant epitopes of the pathogen. An autoimmune response is then generated against the epitope. Due to sequence homology in the epitope between the pathogen and the host, cells and tissues of the host associated with the protein are destroyed as a result of the autoimmune response.

Probability of mimicry events

The prerequisite for molecular mimicry to occur is thus the sharing of the immunodominant epitope between the pathogen and the immunodominant self sequence that is generated by a cell or tissue. However, due to the amino acid variation between different proteins, molecular mimicry should not happen from a probability standpoint. Assuming five to six amino acid residues are used to induce a monoclonal antibody response, the probability that two proteins share six identical residues, with 20 possible amino acids at each position, is 1 in 20⁶, or 1 in 64,000,000. Nevertheless, many molecular mimicry events have been documented.
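The arithmetic behind that figure is simply exponentiation over the 20-letter amino acid alphabet:

```python
# Probability that two proteins match at six positions, assuming each of
# the 20 amino acids is equally likely at every position (as in the text).
n_amino_acids = 20
motif_length = 6
combinations = n_amino_acids ** motif_length
print(combinations)       # 64000000, i.e. odds of 1 in 64 million
print(1 / combinations)   # 1.5625e-08
```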

To determine which epitopes are shared between pathogen and self, large protein databases are used. The largest protein database in the world, the UniProt database (formerly SwissProt), has shown reports of molecular mimicry becoming more common as the database expands. The database currently contains 1.5 × 10⁷ residues. The probability of finding a perfect match for a motif 5 amino acids long is 3.7 × 10⁻⁷ (0.05⁵). Therefore, within the SwissProt database one would expect to find 1.5 × 10⁷ × 3.7 × 10⁻⁷ ≈ 5 matches. However, there are sequence motifs within the database that are overrepresented and found more than 5 times. For example, the QKRAA sequence is an amino acid motif in the third hypervariable region of HLA-DRB1*0401. This motif is also expressed on numerous other proteins, such as gp110 of the Epstein-Barr virus and in E. coli, and it occurs 37 times in the database. This suggests that the linear amino acid sequence may not be the underlying cause of molecular mimicry, since it can be found numerous times within the database. The possibility exists, then, for variability within the amino acid sequence, while similarity in three-dimensional structure between two peptides can still be recognized by T cell clones. This uncovers a flaw of such large databases: they may hint at relationships between epitopes, but the important three-dimensional structure cannot yet be searched for in such a database.
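The expected-match estimate above can be reproduced directly (a sketch; the text quotes 3.7 × 10⁻⁷, while a uniform (1/20)⁵ model gives about 3.1 × 10⁻⁷, and either value yields roughly 5 expected matches):

```python
# Expected number of exact 5-residue motif matches in the database,
# using the database size from the text and a uniform amino acid model.
database_residues = 1.5e7   # 1.5 x 10^7 residues (figure quoted in the text)
p_match = 0.05 ** 5         # (1/20)^5 per site, about 3.1e-07
expected_matches = database_residues * p_match
print(round(expected_matches))  # 5
```

Motifs found far more often than this expectation, such as QKRAA (37 occurrences), are the overrepresented cases discussed above.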

Structural mimicry

Despite no obvious amino acid sequence similarity between pathogen and host factors, structural studies have revealed that mimicry can still occur at the host level. In some cases, pathogenic mimics can possess a structural architecture that differs markedly from that of the functional homologues. Therefore, proteins of dissimilar sequence may have a common structure that elicits an autoimmune response. It has been hypothesized that these virulent proteins display their mimicry through molecular surfaces that mimic host protein surfaces (protein fold or three-dimensional conformation), which have been obtained by convergent evolution. It has also been theorized that these similar protein folds have been obtained by horizontal gene transfer, most likely from a eukaryotic host. This further supports the theory that microbial organisms have evolved a mechanism of concealment similar to that of higher organisms such as the African praying mantis or the chameleon, which camouflage themselves to mimic their background so as not to be recognized by others.

Despite dissimilar sequence homology between self and foreign peptide, weak electrostatic interactions between foreign peptide and the MHC can also mimic self peptide to elicit an autoimmune response within the host. For example, charged residues can explain the enhanced on-rate and reduced off-rate of a particular antigen or can contribute to a higher affinity and activity for a particular antigen that can perhaps mimic that of the host. Similarly, prominent ridges on the floor of peptide-binding grooves can do such things as create C-terminal bulges in particular peptides that can greatly increase the interaction between foreign and self peptide on the MHC. Similarly, there has been evidence that even gross features such as acidic/basic and hydrophobic/hydrophilic interactions have allowed foreign peptides to interact with an antibody or MHC and TCR. It is now apparent that sequence similarity considerations are not sufficient when evaluating potential mimic epitopes and the underlying mechanisms of molecular mimicry. Molecular mimicry, from these examples, has therefore been shown to occur in the absence of any true sequence homology.

There has been increasing evidence for mimicking events caused not only by amino acid similarities but also in similarities in binding motifs to the MHC. Molecular mimicry is thus occurring between two recognized peptides that have similar antigenic surfaces in the absence of primary sequence homology. For example, specific single amino acid residues such as cysteine (creates di-sulfide bonds), arginine or lysine (form multiple hydrogen bonds), could be essential for T cell cross-reactivity. These single residues may be the only residues conserved between self and foreign antigen that allow the structurally similar but sequence non-specific peptides to bind to the MHC.

Epitope spreading

Epitope spreading, also known as determinant spreading, is another common way in which autoimmunity can occur which uses the molecular mimicry mechanism. Autoreactive T cells are activated de novo by self epitopes released secondary to pathogen-specific T cell-mediated bystander damage. T cell responses to progressively less dominant epitopes are activated as a consequence of the release of other antigens secondary to the destruction of the pathogen with a homologous immunodominant sequence. Thus, inflammatory responses induced by specific pathogens that trigger pro-inflammatory Th1 responses have the ability to persist in genetically susceptible hosts. This may lead to organ-specific autoimmune disease. Conversely, epitope spreading could be due to target antigens being physically linked intracellularly as members of a complex to self antigen. The result of this is an autoimmune response that is triggered by exogenous antigen that progresses to a truly autoimmune response against mimicked self antigen and other antigens. From these examples, it is clear that the search for candidate mimic epitopes must extend beyond the immunodominant epitopes of a given autoimmune response.

Implications in human disease

Diseases of the central nervous system

The HIV-1 virus has been shown to cause diseases of the central nervous system (CNS) in humans through a molecular mimicry mechanism. HIV-1 gp41 binds chemokines on the host cell surface so that the virion may gain entry into the host. Astrocytes are CNS cells that regulate the concentrations of K+ and neurotransmitters entering the cerebrospinal fluid (CSF) and contribute to the blood-brain barrier. A twelve amino acid sequence (Leu-Gly-Ile-Trp-Gly-Cys-Ser-Gly-Lys-Leu-Ile-Cys) on gp41 of the HIV-1 virus (immunodominant region) shows sequence homology with a twelve amino acid protein on the surface of human astrocytes. Antibodies produced against the HIV-1 gp41 protein can cross-react with astrocytes within human CNS tissue and act as autoantibodies. This contributes to many of the CNS complications found in AIDS patients.

Theiler's murine encephalomyelitis virus (TMEV) leads to the development in mice of a progressive CD4+ T cell-mediated response after these cells have infiltrated the CNS. This virus has been shown to cause CNS disease in mice that resembles multiple sclerosis, an autoimmune disease in humans that results in the gradual destruction of the myelin sheath coating axons of the CNS. The TMEV mouse virus shares a thirteen amino acid sequence (His-Cys-Leu-Gly-Lys-Trp-Leu-Gly-His-Pro-Asp-Lys-Phe) (PLP (proteolipid protein) 139-151 epitope) with that of a human myelin-specific epitope. Bystander myelin damage is caused by virus specific Th1 cells that cross react with this self epitope. To test the efficacy in which TMEV uses molecular mimicry to its advantage, a sequence of the human myelin-specific epitope was inserted into a non-pathogenic TMEV variant. As a result, there was a CD4+ T cell response and autoimmune demyelination was initiated by infection with a TMEV peptide ligand. In humans, it has recently been shown that there are other possible targets for molecular mimicry in patients with multiple sclerosis. These involve the hepatitis B virus mimicking the human proteolipid protein (myelin protein) and the Epstein-Barr virus mimicking anti-myelin oligodendrocyte glycoprotein (contributes to a ring of myelin around blood vessels).

Muscle disorders

Myasthenia gravis is another common autoimmune disease. This disease causes fluctuating muscle weakness and fatigue. The disease occurs due to detectable antibodies produced against the human acetylcholine receptor. The receptor contains a seven amino acid sequence (Trp-Thr-Tyr-Asp-Gly-Thr-Lys) in the α-subunit that demonstrates immunological cross-reactivity with a shared immunodominant domain of gpD of the herpes simplex virus (HSV). Similar to HIV-1, gpD also aids in binding to chemokines on the cell surface of the host to gain entry into the host. Cross-reactivity of the self epitope (α-subunit of the receptor) with antibodies produced against HSV suggests that the virus is associated with the initiation of myasthenia gravis. Not only does HSV cause immunologic cross-reactivity, but the gpD peptide also competitively inhibits the binding of antibody made against the α-subunit to its corresponding peptide on the α-subunit. Despite this, an autoimmune response still occurs. This further shows an immunologically significant sequence homology to the biologically active site of the human acetylcholine receptor.

Control

There are ways in which autoimmunity caused by molecular mimicry can be avoided. Control of the initiating factor (pathogen) via vaccination seems to be the most common method to avoid autoimmunity. Inducing tolerance to the host autoantigen in this way may also be the most stable factor. The development of a downregulating immune response to the shared epitope between pathogen and host may be the best way of treating an autoimmune disease caused by molecular mimicry. Alternatively, treatment with immunosuppressive drugs such as ciclosporin and azathioprine has also been used as a possible solution. However, in many cases this has been shown to be ineffective because cells and tissues have already been destroyed at the onset of the infection.

Conclusion

The concept of molecular mimicry is a useful tool in understanding the etiology, pathogenesis, treatment, and prevention of autoimmune disorders. Molecular mimicry is, however, only one mechanism by which an autoimmune disease can occur in association with a pathogen. Understanding the mechanisms of molecular mimicry may allow future research to be directed toward uncovering the initiating infectious agent as well as recognizing the self determinant. This way, future research may be able to design strategies for treatment and prevention of autoimmune disorders. The use of transgenic models such as those used for discovery of the mimicry events leading to diseases of the CNS and muscle disorders has helped evaluate the sequence of events leading to molecular mimicry.

Related terms

  • Viral apoptotic mimicry: the exposure of phosphatidylserine, a marker of apoptosis normally displayed on the dead cell surface, on the pathogen surface, which is used to gain viral access to the interior of immune cells.

Cryogenics

From Wikipedia, the free encyclopedia