
Sunday, May 30, 2021

Proteomics

From Wikipedia, the free encyclopedia
[Image: Robotic preparation of MALDI mass spectrometry samples on a sample carrier]

Proteomics is the large-scale study of proteins. Proteins are vital parts of living organisms, with many functions. The proteome is the entire set of proteins produced or modified by an organism or system, and it varies with time and with the distinct requirements, or stresses, that a cell or organism undergoes. Proteomics has enabled the identification of ever-increasing numbers of proteins. It is an interdisciplinary domain that has benefitted greatly from the genetic information of various genome projects, including the Human Genome Project, and it covers the exploration of proteomes at the overall level of protein composition, structure, and activity. It is an important component of functional genomics.

Proteomics generally refers to the large-scale experimental analysis of proteins and proteomes, but often is used specifically to refer to protein purification and mass spectrometry.

History and etymology

The first studies of proteins that could be regarded as proteomics began in 1975, after the introduction of the two-dimensional gel and mapping of the proteins from the bacterium Escherichia coli.

The word proteome is a blend of the words "protein" and "genome"; it was coined by Marc Wilkins in 1994 while he was a Ph.D. student at Macquarie University. Macquarie University also founded the first dedicated proteomics laboratory, in 1995.

Complexity of the problem

After genomics and transcriptomics, proteomics is the next step in the study of biological systems. It is more complicated than genomics because an organism's genome is more or less constant, whereas proteomes differ from cell to cell and from time to time. Distinct genes are expressed in different cell types, which means that even the basic set of proteins that are produced in a cell needs to be identified.

In the past this phenomenon was assessed by RNA analysis, but it was found to lack correlation with protein content. Now it is known that mRNA is not always translated into protein, and the amount of protein produced for a given amount of mRNA depends on the gene it is transcribed from and on the current physiological state of the cell. Proteomics confirms the presence of the protein and provides a direct measure of the quantity present.

Post-translational modifications

Not only does translation from mRNA cause differences, but many proteins are also subjected to a wide variety of chemical modifications after translation. The most common and widely studied post-translational modifications include phosphorylation and glycosylation. Many of these post-translational modifications are critical to a protein's function.

Phosphorylation

One such modification is phosphorylation, which happens to many enzymes and structural proteins in the process of cell signaling. The addition of a phosphate to particular amino acids—most commonly serine and threonine mediated by serine-threonine kinases, or more rarely tyrosine mediated by tyrosine kinases—causes a protein to become a target for binding or interacting with a distinct set of other proteins that recognize the phosphorylated domain.

Because protein phosphorylation is one of the most-studied protein modifications, many "proteomic" efforts are geared to determining the set of phosphorylated proteins in a particular cell or tissue-type under particular circumstances. This alerts the scientist to the signaling pathways that may be active in that instance.

Ubiquitination

Ubiquitin is a small protein that may be affixed to certain protein substrates by enzymes called E3 ubiquitin ligases. Determining which proteins are poly-ubiquitinated helps understand how protein pathways are regulated. This is, therefore, an additional legitimate "proteomic" study. Similarly, once a researcher determines which substrates are ubiquitinated by each ligase, determining the set of ligases expressed in a particular cell type is helpful.

Additional modifications

In addition to phosphorylation and ubiquitination, proteins may be subjected to (among others) methylation, acetylation, glycosylation, oxidation, and nitrosylation. Some proteins undergo all these modifications, often in time-dependent combinations. This illustrates the potential complexity of studying protein structure and function.

Distinct proteins are made under distinct settings

A cell may make different sets of proteins at different times or under different conditions, for example during development, cellular differentiation, cell cycle, or carcinogenesis. Further increasing proteome complexity, as mentioned, most proteins are able to undergo a wide range of post-translational modifications.

Therefore, a "proteomics" study may become complex very quickly, even if the topic of study is restricted. In more ambitious settings, such as when a biomarker for a specific cancer subtype is sought, the proteomics scientist might elect to study multiple blood serum samples from multiple cancer patients to minimise confounding factors and account for experimental noise. Thus, complicated experimental designs are sometimes necessary to account for the dynamic complexity of the proteome.

Limitations of genomics and proteomics studies

Proteomics gives a different level of understanding than genomics for many reasons:

  • the level of transcription of a gene gives only a rough estimate of its level of translation into a protein. An mRNA produced in abundance may be degraded rapidly or translated inefficiently, resulting in a small amount of protein.
  • as mentioned above, many proteins experience post-translational modifications that profoundly affect their activities; for example, some proteins are not active until they become phosphorylated. Methods such as phosphoproteomics and glycoproteomics are used to study post-translational modifications.
  • many transcripts give rise to more than one protein, through alternative splicing or alternative post-translational modifications.
  • many proteins form complexes with other proteins or RNA molecules, and only function in the presence of these other molecules.
  • protein degradation rate plays an important role in protein content.

Reproducibility

One major factor affecting reproducibility in proteomics experiments is the simultaneous elution of many more peptides than mass spectrometers can measure. This causes stochastic differences between experiments due to data-dependent acquisition of tryptic peptides. Although early large-scale shotgun proteomics analyses showed considerable variability between laboratories, presumably due in part to technical and experimental differences between laboratories, reproducibility has been improved in more recent mass spectrometry analysis, particularly at the protein level and using Orbitrap mass spectrometers. Notably, targeted proteomics shows increased reproducibility and repeatability compared with shotgun methods, although at the expense of data density and effectiveness.

Methods of studying proteins

In proteomics, there are multiple methods to study proteins. Generally, proteins may be detected by using either antibodies (immunoassays) or mass spectrometry. If a complex biological sample is analyzed, either a very specific antibody needs to be used in quantitative dot blot analysis (QDB), or biochemical separation then needs to be used before the detection step, as there are too many analytes in the sample to perform accurate detection and quantification.

Protein detection with antibodies (immunoassays)

Antibodies to particular proteins, or to their modified forms, have been used in biochemistry and cell biology studies. These are among the most common tools used by molecular biologists today. There are several specific techniques and protocols that use antibodies for protein detection. The enzyme-linked immunosorbent assay (ELISA) has been used for decades to detect and quantitatively measure proteins in samples. The western blot may be used for detection and quantification of individual proteins, where in an initial step, a complex protein mixture is separated using SDS-PAGE and then the protein of interest is identified using an antibody.

Modified proteins may be studied by developing an antibody specific to that modification. For example, there are antibodies that only recognize certain proteins when they are tyrosine-phosphorylated; these are known as phospho-specific antibodies. There are also antibodies specific to other modifications. These may be used to determine the set of proteins that have undergone the modification of interest.

Disease detection at the molecular level is driving the emerging revolution of early diagnosis and treatment. A challenge facing the field is that protein biomarkers for early diagnosis may be present in very low abundance. The lower limit of detection with conventional immunoassay technology is the upper femtomolar range (10⁻¹³ M). Digital immunoassay technology has improved detection sensitivity three logs, to the attomolar range (10⁻¹⁶ M). This capability has the potential to open new advances in diagnostics and therapeutics, but such technologies have been relegated to manual procedures that are not well suited for efficient routine use.
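As a back-of-envelope check on the sensitivity figures above, the jump from the femtomolar to the attomolar range can be restated in logs and in absolute molecule counts. The sketch below simply re-expresses the detection limits quoted in the text:

```python
import math

# Detection limits quoted above (molar concentrations).
conventional_lod = 1e-13   # conventional immunoassay, upper femtomolar range
digital_lod = 1e-16        # digital immunoassay, attomolar range

# "Three logs" of improvement:
improvement_logs = math.log10(conventional_lod / digital_lod)
print(improvement_logs)  # 3.0

# At the digital-assay limit, how many molecules sit in one microliter?
AVOGADRO = 6.022e23
molecules_per_uL = digital_lod * AVOGADRO * 1e-6  # mol/L x molecules/mol x L/uL
print(round(molecules_per_uL))  # ~60 molecules per microliter
```

At counts this low, counting individual molecules (the basis of digital immunoassays) becomes the natural measurement strategy.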

Antibody-free protein detection

While protein detection with antibodies is still very common in molecular biology, other methods that do not rely on an antibody have also been developed. These methods offer various advantages: they often can determine the sequence of a protein or peptide, they may have higher throughput than antibody-based methods, and they sometimes can identify and quantify proteins for which no antibody exists.

Detection methods

One of the earliest methods for protein analysis was Edman degradation (introduced in 1967), in which a single peptide is subjected to multiple steps of chemical degradation to resolve its sequence. These early methods have mostly been supplanted by technologies that offer higher throughput.

More recently implemented methods use mass spectrometry-based techniques, a development that was made possible by the discovery of "soft ionization" methods developed in the 1980s, such as matrix-assisted laser desorption/ionization (MALDI) and electrospray ionization (ESI). These methods gave rise to the top-down and the bottom-up proteomics workflows where often additional separation is performed before analysis (see below).

Separation methods

For the analysis of complex biological samples, a reduction of sample complexity is required. This may be performed off-line by one-dimensional or two-dimensional separation. More recently, on-line methods have been developed where individual peptides (in bottom-up proteomics approaches) are separated using reversed-phase chromatography and then, directly ionized using ESI; the direct coupling of separation and analysis explains the term "on-line" analysis.
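In bottom-up workflows, the peptides being separated are typically generated by tryptic digestion of the proteins. As a minimal sketch, the standard cleavage rule (cut after lysine K or arginine R, but not when the next residue is proline P) can be applied in silico; the sequence below is invented for illustration:

```python
import re

def tryptic_digest(sequence: str) -> list[str]:
    """In-silico trypsin digestion: cleave C-terminal to K or R,
    except when the next residue is P (the standard simplified rule)."""
    # Zero-width split points after K/R not followed by P (Python 3.7+).
    peptides = re.split(r"(?<=[KR])(?!P)", sequence)
    return [p for p in peptides if p]  # drop the empty trailing fragment

# Hypothetical protein sequence, for illustration only:
print(tryptic_digest("MKWVTFISLLRGVFRRPEAK"))
# ['MK', 'WVTFISLLR', 'GVFR', 'RPEAK']  -- note the 'RP' site is not cleaved
```

Real digestion also produces missed cleavages and non-specific cuts, which search software accounts for; this sketch shows only the idealized rule.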

Hybrid technologies

There are several hybrid technologies that use antibody-based purification of individual analytes and then perform mass spectrometric analysis for identification and quantification. Examples of these methods are the MSIA (mass spectrometric immunoassay), developed by Randall Nelson in 1995,[20] and the SISCAPA (Stable Isotope Standard Capture with Anti-Peptide Antibodies) method, introduced by Leigh Anderson in 2004.

Current research methodologies

Fluorescence two-dimensional differential gel electrophoresis (2-D DIGE) may be used to quantify variation in the 2-D DIGE process and establish statistically valid thresholds for assigning quantitative changes between samples.

Comparative proteomic analysis may reveal the role of proteins in complex biological systems, including reproduction. For example, treatment with the insecticide triazophos causes an increase in the content of brown planthopper (Nilaparvata lugens (Stål)) male accessory gland proteins (Acps) that may be transferred to females via mating, causing an increase in fecundity (i.e. birth rate) of females. To identify changes in the types of accessory gland proteins (Acps) and reproductive proteins that mated female planthoppers received from male planthoppers, researchers conducted a comparative proteomic analysis of mated N. lugens females. The results indicated that these proteins participate in the reproductive process of N. lugens adult females and males.

Proteome analysis of Arabidopsis peroxisomes has been established as the major unbiased approach for identifying new peroxisomal proteins on a large scale.

There are many approaches to characterizing the human proteome, which is estimated to contain between 20,000 and 25,000 non-redundant proteins. The number of unique protein species likely will increase by between 50,000 and 500,000 due to RNA splicing and proteolysis events, and when post-translational modifications are also considered, the total number of unique human proteins is estimated to range in the low millions.

In addition, the first promising attempts to decipher the proteome of animal tumors have recently been reported. This method was used as a functional method in Macrobrachium rosenbergii protein profiling.

High-throughput proteomic technologies

Proteomics has steadily gained momentum over the past decade with the evolution of several approaches. Some of these are new, while others build on traditional methods. Mass spectrometry-based methods and microarrays are the most common technologies for the large-scale study of proteins.

Mass spectrometry and protein profiling

There are two mass spectrometry-based methods currently used for protein profiling. The more established and widespread method uses high resolution, two-dimensional electrophoresis to separate proteins from different samples in parallel, followed by selection and staining of differentially expressed proteins to be identified by mass spectrometry. Despite the advances in 2-DE and its maturity, it has its limits as well. The central concern is the inability to resolve all the proteins within a sample, given their dramatic range in expression level and differing properties.

The second quantitative approach uses stable isotope tags to differentially label proteins from two different complex mixtures. Here, the proteins within a complex mixture are labeled isotopically first and then digested to yield labeled peptides. The labeled mixtures are then combined, and the peptides are separated by multidimensional liquid chromatography and analyzed by tandem mass spectrometry. Isotope-coded affinity tag (ICAT) reagents are the widely used isotope tags. In this method, the cysteine residues of proteins are covalently attached to the ICAT reagent, thereby reducing the complexity of the mixture by omitting peptides that lack cysteine.
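The readout of such a labeling experiment is, at its simplest, a heavy-to-light intensity ratio per peptide pair: equal intensities mean equal abundance in the two mixtures. A minimal sketch with invented intensities (not real data):

```python
# Paired (light, heavy) peak intensities per peptide -- invented numbers.
pairs = {
    "peptide_A": (1.2e6, 2.4e6),
    "peptide_B": (8.0e5, 8.0e5),
    "peptide_C": (3.0e6, 1.0e6),
}

# Relative abundance between the two samples = heavy / light.
ratios = {pep: heavy / light for pep, (light, heavy) in pairs.items()}
for pep in sorted(ratios):
    print(f"{pep}: heavy/light = {ratios[pep]:.2f}")
# peptide_A: heavy/light = 2.00  (2-fold up in the heavy-labeled sample)
# peptide_B: heavy/light = 1.00  (unchanged)
# peptide_C: heavy/light = 0.33  (down in the heavy-labeled sample)
```

In practice the peptide ratios are aggregated per protein and corrected for labeling efficiency; the arithmetic above is the core idea only.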

Quantitative proteomics using stable isotopic tagging is an increasingly useful tool in modern proteomics. Firstly, chemical reactions have been used to introduce tags into specific sites on proteins for the purpose of probing specific protein functionalities. The isolation of phosphorylated peptides has been achieved using isotopic labeling and selective chemistries to capture that fraction of proteins from the complex mixture. Secondly, ICAT technology was used to differentiate between partially purified or purified macromolecular complexes, such as the large RNA polymerase II pre-initiation complex and proteins complexed with the yeast transcription factor. Thirdly, ICAT labeling was recently combined with chromatin isolation to identify and quantify chromatin-associated proteins. Finally, ICAT reagents are useful for proteomic profiling of cellular organelles and specific cellular fractions.

Another quantitative approach is the accurate mass and time (AMT) tag approach developed by Richard D. Smith and coworkers at Pacific Northwest National Laboratory. In this approach, increased throughput and sensitivity is achieved by avoiding the need for tandem mass spectrometry, and making use of precisely determined separation time information and highly accurate mass determinations for peptide and protein identifications.

Protein chips

Balancing the use of mass spectrometers in proteomics and in medicine is the use of protein microarrays. The aim behind protein microarrays is to print thousands of protein-detecting features for the interrogation of biological samples. Antibody arrays are an example, in which a host of different antibodies are arrayed to detect their respective antigens from a sample of human blood. Another approach is the arraying of multiple protein types for the study of properties like protein-DNA, protein-protein and protein-ligand interactions. Ideally, functional proteomic arrays would contain the entire complement of proteins of a given organism. The first version of such arrays consisted of 5,000 purified proteins from yeast deposited onto glass microscopic slides. Despite the success of the first chip, implementing protein arrays remained a greater challenge. Proteins are inherently much more difficult to work with than DNA: they have a broad dynamic range, are less stable than DNA, and their structure is difficult to preserve on glass slides, though preserving it is essential for most assays. The global ICAT technology has striking advantages over protein chip technologies.

Reverse-phased protein microarrays

This is a promising and newer microarray application for the diagnosis, study and treatment of complex diseases such as cancer. The technology merges laser capture microdissection (LCM) with microarray technology to produce reverse-phase protein microarrays. In this type of microarray, the whole collection of proteins themselves is immobilized, with the intent of capturing various stages of disease within an individual patient. When used with LCM, reverse-phase arrays can monitor the fluctuating state of the proteome among different cell populations within a small area of human tissue. This is useful for profiling the status of cellular signaling molecules among a cross section of tissue that includes both normal and cancerous cells, for example when monitoring the status of key factors in normal prostate epithelium and invasive prostate cancer tissues. LCM dissects the tissue, and the protein lysates are arrayed onto nitrocellulose slides, which are probed with specific antibodies. This method can track many kinds of molecular events and can compare diseased and healthy tissues within the same patient, enabling the development of treatment strategies and diagnoses. The ability to acquire proteomic snapshots of neighboring cell populations using reverse-phase microarrays in conjunction with LCM has a number of applications beyond the study of tumors. The approach can provide insights into the normal physiology and pathology of all tissues and is invaluable for characterizing developmental processes and anomalies.

Practical applications

New Drug Discovery

One major development to come from the study of human genes and proteins has been the identification of potential new drugs for the treatment of disease. This relies on genome and proteome information to identify proteins associated with a disease, which computer software can then use as targets for new drugs. For example, if a certain protein is implicated in a disease, its 3D structure provides the information to design drugs to interfere with the action of the protein. A molecule that fits the active site of an enzyme, but cannot be released by the enzyme, inactivates the enzyme. This is the basis of new drug-discovery tools, which aim to find new drugs to inactivate proteins involved in disease. As genetic differences among individuals are found, researchers expect to use these techniques to develop personalized drugs that are more effective for the individual.

Proteomics is also used to reveal complex plant-insect interactions that help identify candidate genes involved in the defensive response of plants to herbivory.

Interaction proteomics and protein networks

Interaction proteomics is the analysis of protein interactions from scales of binary interactions to proteome- or network-wide. Most proteins function via protein–protein interactions, and one goal of interaction proteomics is to identify binary protein interactions, protein complexes, and interactomes.

Several methods are available to probe protein–protein interactions. While the most traditional method is yeast two-hybrid analysis, a powerful emerging method is affinity purification followed by protein mass spectrometry using tagged protein baits. Other methods include surface plasmon resonance (SPR), protein microarrays, dual polarisation interferometry, microscale thermophoresis and experimental methods such as phage display and in silico computational methods.

Knowledge of protein-protein interactions is especially useful in regard to biological networks and systems biology, for example in cell signaling cascades and gene regulatory networks (GRNs, where knowledge of protein-DNA interactions is also informative). Proteome-wide analysis of protein interactions, and integration of these interaction patterns into larger biological networks, is crucial towards understanding systems-level biology.

Expression proteomics

Expression proteomics includes the analysis of protein expression at a larger scale. It helps identify the main proteins in a particular sample and proteins differentially expressed in related samples, such as diseased vs. healthy tissue. If a protein is found only in a diseased sample, it can be a useful drug target or diagnostic marker. Proteins with the same or similar expression profiles may also be functionally related. Technologies such as 2D-PAGE and mass spectrometry are used in expression proteomics.

Biomarkers

The National Institutes of Health has defined a biomarker as "a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention."

Understanding the proteome, the structure and function of each protein and the complexities of protein–protein interactions is critical for developing the most effective diagnostic techniques and disease treatments in the future. For example, proteomics is highly useful in identification of candidate biomarkers (proteins in body fluids that are of value for diagnosis), identification of the bacterial antigens that are targeted by the immune response, and identification of possible immunohistochemistry markers of infectious or neoplastic diseases.

An interesting use of proteomics is using specific protein biomarkers to diagnose disease. A number of techniques allow testing for proteins produced during a particular disease, which helps to diagnose the disease quickly. Techniques include the western blot, immunohistochemical staining, enzyme-linked immunosorbent assay (ELISA) and mass spectrometry. Secretomics, a subfield of proteomics that studies secreted proteins and secretion pathways using proteomic approaches, has recently emerged as an important tool for the discovery of biomarkers of disease.

Proteogenomics

In proteogenomics, proteomic technologies such as mass spectrometry are used for improving gene annotations. Parallel analysis of the genome and the proteome facilitates discovery of post-translational modifications and proteolytic events, especially when comparing multiple species (comparative proteogenomics).

Structural proteomics

Structural proteomics includes the analysis of protein structures at a large scale. It compares protein structures and helps identify the functions of newly discovered genes. Structural analysis also helps show where drugs bind to proteins and where proteins interact with each other. This understanding is achieved using technologies such as X-ray crystallography and NMR spectroscopy.

Bioinformatics for proteomics (proteome informatics)

Much proteomics data is collected with the help of high-throughput technologies such as mass spectrometry and microarray. It would often take weeks or months to analyze the data and perform comparisons by hand. For this reason, biologists and chemists are collaborating with computer scientists and mathematicians to create programs and pipelines to computationally analyze protein data. Using bioinformatics techniques, researchers are capable of faster analysis and data storage. A good place to find lists of current programs and databases is the ExPASy bioinformatics resource portal. The applications of bioinformatics-based proteomics include medicine, disease diagnosis, biomarker identification, and many more.

Protein identification

Mass spectrometry and microarray produce peptide fragmentation information but do not by themselves identify the specific proteins present in the original sample. Because of this, early researchers were forced to decipher the peptide fragments themselves. However, programs are now available for protein identification. These programs take the peptide sequences output from mass spectrometry and microarray analyses and return information about matching or similar proteins. This is done through algorithms that perform alignments against proteins from known databases, such as UniProt and PROSITE, to predict which proteins are in the sample with a degree of certainty.
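The core of such identification programs can be caricatured as matching observed peptide sequences against a protein database. Real search engines score fragmentation spectra against databases such as UniProt; the toy version below uses exact substring search over invented sequences, purely to show the shape of the problem:

```python
# Toy protein database -- names and sequences invented for illustration.
database = {
    "PROT1": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "PROT2": "MSDNGPQNQRNAPRITFGGPSDSTGSNQNGERS",
}

# Peptide sequences "observed" by the instrument (also invented).
observed_peptides = ["QISFVK", "ITFGGP", "XXXXXX"]

# Assign each peptide to every protein whose sequence contains it.
matches = {
    pep: [name for name, seq in database.items() if pep in seq]
    for pep in observed_peptides
}
print(matches)
# {'QISFVK': ['PROT1'], 'ITFGGP': ['PROT2'], 'XXXXXX': []}
```

The "degree of certainty" mentioned above corresponds, in real tools, to match scores and false-discovery-rate estimates rather than a simple hit/no-hit answer.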

Protein structure

A protein's biomolecular structure forms its 3D configuration, and understanding that structure aids in identifying the protein's interactions and function. It used to be that the 3D structures of proteins could only be determined using X-ray crystallography and NMR spectroscopy. As of 2017, cryo-electron microscopy is a leading technique, solving difficulties with crystallization (in X-ray crystallography) and conformational ambiguity (in NMR); resolution reached 2.2 Å as of 2015. Now, through bioinformatics, there are computer programs that can in some cases predict and model the structure of proteins. These programs use the chemical properties of amino acids and the structural properties of known proteins to predict the 3D model of sample proteins. This also allows scientists to model protein interactions on a larger scale. In addition, biomedical engineers are developing methods to factor in the flexibility of protein structures to make comparisons and predictions.

Post-translational modifications

Most programs available for protein analysis are not written for proteins that have undergone post-translational modifications. Some programs will accept post-translational modifications to aid in protein identification but then ignore the modification during further protein analysis. It is important to account for these modifications since they can affect the protein's structure. In turn, computational analysis of post-translational modifications has gained the attention of the scientific community. The current post-translational modification programs are only predictive. Chemists, biologists and computer scientists are working together to create and introduce new pipelines that allow for analysis of post-translational modifications that have been experimentally identified for their effect on the protein's structure and function.

Computational methods in studying protein biomarkers

One example of the use of bioinformatics and computational methods is the study of protein biomarkers. Computational predictive models have shown that extensive and diverse feto-maternal protein trafficking occurs during pregnancy and can be readily detected non-invasively in maternal whole blood. This computational approach circumvented a major limitation to fetal proteomic analysis of maternal blood: the abundance of maternal proteins interfering with the detection of fetal proteins. Computational models can use fetal gene transcripts previously identified in maternal whole blood to create a comprehensive proteomic network of the term neonate. Such work shows that the fetal proteins detected in a pregnant woman's blood originate from a diverse group of tissues and organs of the developing fetus. The proteomic networks contain many biomarkers that are proxies for development and illustrate the potential clinical application of this technology as a way to monitor normal and abnormal fetal development.

An information theoretic framework has also been introduced for biomarker discovery, integrating biofluid and tissue information. This new approach takes advantage of functional synergy between certain biofluids and tissues with the potential for clinically significant findings not possible if tissues and biofluids were considered individually. By conceptualizing tissue-biofluid as information channels, significant biofluid proxies can be identified and then used for guided development of clinical diagnostics. Candidate biomarkers are then predicted based on information transfer criteria across the tissue-biofluid channels. Significant biofluid-tissue relationships can be used to prioritize clinical validation of biomarkers.
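Treating a tissue-biofluid pair as an information channel amounts to asking how much a marker's state in the biofluid tells you about its state in the tissue, i.e. their mutual information. The sketch below estimates it from made-up presence/absence calls; the actual framework uses far richer information-transfer criteria than this toy:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits, estimated from paired categorical observations."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))          # joint counts
    px, py = Counter(xs), Counter(ys)   # marginal counts
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# Hypothetical presence(1)/absence(0) calls for one candidate marker
# across eight matched tissue/plasma samples -- invented data.
tissue = [1, 1, 1, 0, 0, 0, 1, 0]
plasma = [1, 1, 0, 0, 0, 0, 1, 0]
print(f"{mutual_information(tissue, plasma):.3f} bits")  # 0.549 bits
```

A marker whose plasma signal carries more bits about the tissue state is a better biofluid proxy, which is the ranking criterion this framework formalizes.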

Emerging trends

A number of emerging concepts have the potential to improve current features of proteomics. Obtaining absolute quantification of proteins and monitoring post-translational modifications are two tasks that impact the understanding of protein function in healthy and diseased cells. For many cellular events, protein concentrations do not change; rather, function is modulated by post-translational modifications (PTMs). Methods of monitoring PTMs are an underdeveloped area in proteomics. Selecting a particular subset of proteins for analysis substantially reduces protein complexity, making it advantageous for diagnostic purposes where blood is the starting material. Another important aspect of proteomics, not yet addressed, is that proteomics methods should focus on studying proteins in the context of their environment. The increasing use of chemical cross-linkers, introduced into living cells to fix protein-protein, protein-DNA and other interactions, may partially ameliorate this problem. The challenge is to identify suitable methods of preserving relevant interactions. Another goal for studying proteins is to develop more sophisticated methods to image proteins and other molecules in living cells and in real time.

Systems biology

Advances in quantitative proteomics would clearly enable more in-depth analysis of cellular systems. Biological systems are subject to a variety of perturbations (cell cycle, cellular differentiation, carcinogenesis, environment (biophysical), etc.). Transcriptional and translational responses to these perturbations result in functional changes to the proteome in response to the stimulus. Therefore, describing and quantifying proteome-wide changes in protein abundance is crucial for understanding biological phenomena more holistically, at the level of the entire system. In this way, proteomics can be seen as complementary to genomics, transcriptomics, epigenomics, metabolomics, and other -omics approaches in integrative analyses attempting to define biological phenotypes more comprehensively. As an example, The Cancer Proteome Atlas provides quantitative protein expression data for ~200 proteins in over 4,000 tumor samples with matched transcriptomic and genomic data from The Cancer Genome Atlas. Similar datasets in other cell types, tissue types, and species, particularly using deep shotgun mass spectrometry, will be an immensely important resource for research in fields like cancer biology, developmental and stem cell biology, medicine, and evolutionary biology.

Human plasma proteome

Characterizing the human plasma proteome has become a major goal in the proteomics arena, but it is also the most challenging proteome of all human tissues. Plasma contains immunoglobulins, cytokines, protein hormones, and secreted proteins indicative of infection, on top of resident hemostatic proteins. It also contains tissue-leakage proteins released as blood circulates through different tissues in the body. The blood thus carries information on the physiological state of all tissues, which, combined with its accessibility, makes the blood proteome invaluable for medical purposes. Nevertheless, characterizing the proteome of blood plasma is a daunting challenge.

The depth of the plasma proteome, which spans a dynamic range of more than 10^10 between the most abundant protein (albumin) and the least abundant (some cytokines), is thought to be one of the main challenges for proteomics. Temporal and spatial dynamics further complicate the study of the human plasma proteome: some proteins turn over much faster than others, and the protein content of an artery may differ substantially from that of a vein. All these differences make even the simplest proteomic task of cataloging the proteome seem out of reach. To tackle this problem, priorities need to be established. One such priority is capturing the most meaningful subset of proteins among the entire proteome to generate a diagnostic tool. Second, since cancer is associated with enhanced glycosylation of proteins, methods that focus on this part of the proteome will also be useful. Again, multiparameter analysis best reveals a pathological state. As these technologies improve, disease profiles should be continually related to the corresponding gene expression changes. Because of the problems mentioned above, plasma proteomics long remained challenging. However, technological advances and continuous development have led to a revival of plasma proteomics, as shown recently by a technology called plasma proteome profiling. With such technologies, researchers have been able to investigate inflammation processes in mice and the heritability of plasma proteomes, as well as to show the effect of a common lifestyle change such as weight loss on the plasma proteome.
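As a rough illustration of that dynamic range, the span between albumin and a low-abundance cytokine works out to about ten to eleven orders of magnitude. The concentrations below are assumed typical values for illustration (albumin at about 40 g/L, a cytokine at roughly 1 pg/mL), not figures from this article:

```python
import math

# Assumed illustrative plasma concentrations:
# albumin is typically ~40 g/L; a low-abundance cytokine such as IL-6
# circulates at roughly 1 pg/mL in healthy individuals.
albumin_pg_per_ml = 0.04 * 1e12   # 40 g/L = 0.04 g/mL, converted to pg/mL
cytokine_pg_per_ml = 1.0

# Dynamic range as a ratio and in orders of magnitude (powers of ten).
dynamic_range = albumin_pg_per_ml / cytokine_pg_per_ml
orders = math.log10(dynamic_range)

print(f"ratio: {dynamic_range:.0e}, orders of magnitude: {orders:.1f}")
# ratio: 4e+10, orders of magnitude: 10.6 -- consistent with the >10^10 figure
```

Any mass spectrometer must therefore detect signals spanning this entire range in a single sample, which is why abundant-protein depletion and fractionation are common first steps.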

Brain asymmetry

From Wikipedia, the free encyclopedia

In human neuroanatomy, brain asymmetry can refer to at least two quite distinct findings: neuroanatomical differences between the left and right sides of the brain, and lateralized functional differences between the hemispheres (lateralization of brain function).

A stereotypical image of brain lateralisation - demonstrated to be false in neuroscientific research.

Neuroanatomical differences themselves exist on different scales, from neuronal densities, to the size of regions such as the planum temporale, to—at the largest scale—the torsion or "wind" in the human brain, reflected in the shape of the skull as a backward (posterior) protrusion of the left occipital bone and a forward (anterior) protrusion of the right frontal bone. In addition to gross size differences, both neurochemical and structural differences have been found between the hemispheres. Asymmetries appear in the spacing of cortical columns, as well as in dendritic structure and complexity. Larger cell sizes are also found in layer III of Broca's area.

The human brain has an overall leftward posterior and rightward anterior asymmetry (or brain torque). There are particularly large asymmetries in the frontal, temporal and occipital lobes, which increase in asymmetry in the antero-posterior direction beginning at the central region. Leftward asymmetry can be seen in Heschl's gyrus, the parietal operculum, the Sylvian fissure, the left cingulate gyrus, the temporo-parietal region and the planum temporale. Rightward asymmetry can be seen in the right central sulcus (potentially suggesting increased connectivity between motor and somatosensory cortices in the left side of the brain), the lateral ventricle, the entorhinal cortex, the amygdala and the temporo-parieto-occipital area. Sex-dependent brain asymmetries are also common. For example, human male brains are more asymmetrically lateralized than those of females. However, gene expression studies by Hawrylycz and colleagues and Pletikos and colleagues were not able to detect asymmetry between the hemispheres at the population level.

People with autism have much more symmetrical brains than people without it.

History

In the mid-19th century scientists first began to make discoveries regarding lateralization of the brain, or differences in anatomy and corresponding function between the brain's two hemispheres. Franz Gall, a German anatomist, was the first to describe what is now known as the Doctrine of Cerebral Localization. Gall believed that, rather than the brain operating as a single, whole entity, different mental functions could be attributed to different parts of the brain. He was also the first to suggest language processing happened in the frontal lobes. However, Gall's theories were controversial among many scientists at the time. Others were convinced by experiments such as those conducted by Marie-Jean-Pierre Flourens, in which he demonstrated lesions to bird brains caused irreparable damage to vital functions. Flourens's methods, however, were not precise; the crude methodology employed in his experiments actually caused damage to several areas of the tiny brains of the avian models.

Paul Broca was among the first to offer compelling evidence for localization of function when he identified an area of the brain related to speech.

In 1861 surgeon Paul Broca provided evidence that supported Gall's theories. Broca discovered that two of his patients who had suffered from speech loss had similar lesions in the same area of the left frontal lobe. While this was compelling evidence for localization of function, the connection to “sidedness” was not made immediately. As Broca continued to study similar patients, he made the connection that all of the cases involved damage to the left hemisphere, and in 1864 noted the significance of these findings—that this must be a specialized region. He also—incorrectly—proposed theories about the relationship of speech areas to “handedness”.

Accordingly, some of the most famous early studies on brain asymmetry involved speech processing. Asymmetry in the Sylvian fissure (also known as the lateral sulcus), which separates the frontal and parietal lobes from the temporal lobe, was one of the first incongruencies to be discovered. Its anatomical variances are related to the size and location of two areas of the human brain that are important for language processing, Broca's area and Wernicke's area, both in the left hemisphere.

Around the same time that Broca and Wernicke made their discoveries, neurologist Hughlings Jackson suggested the idea of a “leading hemisphere”—or, one side of the brain that played a more significant role in overall function—which would eventually pave the way for understanding hemispheric “dominance” for various processes. Several years later, in the mid-20th century, critical understanding of hemispheric lateralization for visuospatial, attention and perception, auditory, linguistic and emotional processing came from patients who underwent split-brain procedures to treat disorders such as epilepsy. In split-brain patients, the corpus callosum is cut, severing the main structure for communication between the two hemispheres. The first modern split-brain patient was a war veteran known as Patient W.J., whose case contributed to further understanding of asymmetry.

Brain asymmetry is not unique to humans. In addition to studies on human patients with various diseases of the brain, much of what is understood today about asymmetries and lateralization of function has been learned through both invertebrate and vertebrate animal models, including zebrafish, pigeons, rats, and many others. For example, more recent studies revealing sexual dimorphism in brain asymmetries in the cerebral cortex and hypothalamus of rats show that sex differences emerging from hormonal signaling can be an important influence on brain structure and function. Work with zebrafish has been especially informative because this species provides the best model for directly linking asymmetric gene expression with asymmetric morphology, and for behavioral analyses.

In humans

Lateralized functional differences and significant regions in each side of the brain and their function

The left and right hemispheres control the contralateral sides of the body. Each hemisphere contains sections of all four lobes: the frontal lobe, parietal lobe, temporal lobe, and occipital lobe. The two hemispheres are separated along the medial longitudinal fissure and are connected by the corpus callosum, which allows for communication and coordination of stimuli and information. The corpus callosum is the largest collective pathway of white matter tissue in the brain, made up of more than 200 million nerve fibers. The left and right hemispheres are associated with different functions and specialize in interpreting the same data in different ways, referred to as lateralization of the brain. The left hemisphere is associated with language and calculation, while the right hemisphere is more closely associated with visual-spatial recognition and facial recognition. This lateralization of brain function means that some specialized regions are present only in one hemisphere or are dominant in one hemisphere versus the other. Some of the significant regions in each hemisphere are listed below.

Left Hemisphere

Broca's Area
Broca's area is located in the left hemisphere prefrontal cortex, above the cingulate gyrus, in the third frontal convolution. Broca's area was described by Paul Broca in 1865. This area handles speech production. Damage to this area results in Broca's aphasia, which leaves the patient unable to formulate coherent, appropriate sentences.
Wernicke's Area
Wernicke's area was discovered in 1874 by Carl Wernicke and was found to be the site of language comprehension. Wernicke's area is also found in the left hemisphere, in the temporal lobe. Damage to this area of the brain results in the individual losing the ability to understand language. However, they are still able to produce sounds, words, and sentences, although these are not used in the appropriate context.

Right Hemisphere

Fusiform Face Area
The fusiform face area (FFA) is a region shown to be highly active when faces are attended to in the visual field. An FFA is present in both hemispheres; however, studies have found that the FFA is predominantly lateralized to the right hemisphere, where more in-depth cognitive processing of faces is conducted. The left-hemisphere FFA is associated with rapid processing of faces and their features.

Other regions and associated diseases

Some significant regions that present asymmetrically may end up in either hemisphere due to factors such as genetics. An example is handedness, which can result from asymmetry in the motor cortex of one hemisphere. Since the brain controls the contralateral side of the body, right-handed individuals may have a more developed motor cortex in the left hemisphere.

Several diseases have been found to exacerbate brain asymmetries that are already present in the brain. Researchers are starting to look into the effect and relationship of brain asymmetries to diseases such as schizophrenia and dyslexia.

Schizophrenia
Schizophrenia is a complex long-term mental disorder that causes hallucinations, delusions, and impairments in concentration, thinking, and motivation. Studies have found that individuals with schizophrenia show reduced brain asymmetry, which lowers the functional efficiency of affected regions such as the frontal lobe. Reported changes include altered leftward functional hemispheric lateralization, loss of laterality for language comprehension, a reduction in gyrification, and changes in brain torsion.
Dyslexia
As discussed earlier, language is usually dominant in the left hemisphere. Developmental language disorders such as dyslexia have been researched using brain imaging techniques to understand the neuronal or structural changes associated with the disorder. Past research has shown that hemispheric asymmetries usually found in healthy adults, such as asymmetry in the size of the temporal lobe, are not present in adult patients with dyslexia. In addition, past research has shown that patients with dyslexia lack the lateralization of language seen in healthy individuals; instead, they exhibit bilateral hemispheric dominance for language.

Current research

Lateralization of function and asymmetry in the human brain continues to propel a popular branch of neuroscientific and psychological inquiry. Technological advancements for brain mapping have enabled researchers to see more parts of the brain more clearly, which has illuminated previously undetected lateralization differences that occur during different life stages. As more information emerges, researchers are finding insights into how and why early human brains may have evolved the way that they did to adapt to social, environmental and pathological changes. This information provides clues regarding plasticity, or how different parts of the brain can sometimes be recruited for different functions.

Continued study of brain asymmetry also contributes to the understanding and treatment of complex diseases. Neuroimaging in patients with Alzheimer's disease, for example, shows significant deterioration in the left hemisphere, along with a rightward hemispheric dominance—which could relate to recruitment of resources to that side of the brain in the face of damage to the left. These hemispheric changes have been connected to performance on memory tasks.

As has been the case in the past, studies on language processing and the implications of left- and right-handedness also dominate current research on brain asymmetry.

Evolutionary taxonomy

From Wikipedia, the free encyclopedia

Evolutionary taxonomy, evolutionary systematics or Darwinian classification is a branch of biological classification that seeks to classify organisms using a combination of phylogenetic relationship (shared descent), progenitor-descendant relationship (serial descent), and degree of evolutionary change. This type of taxonomy may consider whole taxa rather than single species, so that groups of species can be inferred as giving rise to new groups. The concept found its most well-known form in the modern evolutionary synthesis of the early 1940s.

Evolutionary taxonomy differs from strict pre-Darwinian Linnaean taxonomy (producing orderly lists only), in that it builds evolutionary trees. While in phylogenetic nomenclature each taxon must consist of a single ancestral node and all its descendants, evolutionary taxonomy allows for groups to be excluded from their parent taxa (e.g. dinosaurs are not considered to include birds, but to have given rise to them), thus permitting paraphyletic taxa.

Origin of evolutionary taxonomy

Jean-Baptiste Lamarck's 1815 diagram showing branching in the course of invertebrate evolution

Evolutionary taxonomy arose as a result of the influence of the theory of evolution on Linnaean taxonomy. The idea of translating Linnaean taxonomy into a sort of dendrogram of the Animal and Plant Kingdoms was formulated toward the end of the 18th century, well before Charles Darwin's book On the Origin of Species was published. The first to suggest that organisms had common descent was Pierre-Louis Moreau de Maupertuis in his 1751 Essai de Cosmologie. Transmutation of species entered wider scientific circles with Erasmus Darwin's 1796 Zoönomia and Jean-Baptiste Lamarck's 1809 Philosophie Zoologique. The idea was popularised in the English-speaking world by the speculative but widely read Vestiges of the Natural History of Creation, published anonymously by Robert Chambers in 1844.

Following the appearance of On the Origin of Species, Tree of Life representations became popular in scientific works. In On the Origin of Species, the ancestor remained largely a hypothetical species; Darwin was primarily occupied with showing the principle, carefully refraining from speculating on relationships between living or fossil organisms and using theoretical examples only. In contrast, Chambers had proposed specific hypotheses, the evolution of placental mammals from marsupials, for example.

Following Darwin's publication, Thomas Henry Huxley used the fossils of Archaeopteryx and Hesperornis to argue that the birds are descendants of the dinosaurs. Thus, a group of extant animals could be tied to a fossil group. The resulting description, that of dinosaurs "giving rise to" or being "the ancestors of" birds, exhibits the essential hallmark of evolutionary taxonomic thinking.

The past three decades have seen a dramatic increase in the use of DNA sequences for reconstructing phylogeny and a parallel shift in emphasis from evolutionary taxonomy towards Hennig's 'phylogenetic systematics'.

Today, with the advent of modern genomics, scientists in every branch of biology make use of molecular phylogeny to guide their research. One common method is multiple sequence alignment.
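Progressive multiple sequence alignment is typically built up from pairwise global alignments, which can be computed with Needleman–Wunsch dynamic programming. As a minimal sketch of that core step, the toy scoring scheme below (match +1, mismatch −1, gap −1) is an illustrative assumption, not a production substitution matrix:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score of sequences a and b via dynamic programming."""
    n, m = len(a), len(b)
    # score[i][j] = best score aligning the prefix a[:i] with the prefix b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):        # aligning a[:i] against an empty prefix
        score[i][0] = i * gap
    for j in range(1, m + 1):        # aligning b[:j] against an empty prefix
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # best of: align a[i-1] with b[j-1], gap in b, or gap in a
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

print(needleman_wunsch("GATTACA", "GCATGCU"))  # 0 with this scoring scheme
```

Real phylogenetic pipelines use substitution matrices and affine gap penalties, then align progressively along a guide tree, but the dynamic-programming recurrence is the same.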

Cavalier-Smith, G. G. Simpson and Ernst Mayr are some representative evolutionary taxonomists.

New methods in modern evolutionary systematics

Efforts in combining modern methods of cladistics, phylogenetics, and DNA analysis with classical views of taxonomy have recently appeared. Certain authors have found that phylogenetic analysis is acceptable scientifically as long as paraphyly at least for certain groups is allowable. Such a stance is promoted in papers by Tod F. Stuessy and others. A particularly strict form of evolutionary systematics has been presented by Richard H. Zander in a number of papers, but summarized in his "Framework for Post-Phylogenetic Systematics".

Briefly, Zander's pluralistic systematics is based on the incompleteness of each of the theories: A method that cannot falsify a hypothesis is as unscientific as a hypothesis that cannot be falsified. Cladistics generates only trees of shared ancestry, not serial ancestry. Taxa evolving seriatim cannot be dealt with by analyzing shared ancestry with cladistic methods. Hypotheses such as adaptive radiation from a single ancestral taxon cannot be falsified with cladistics. Cladistics offers a way to cluster by trait transformations but no evolutionary tree can be entirely dichotomous. Phylogenetics posits shared ancestral taxa as causal agents for dichotomies yet there is no evidence for the existence of such taxa. Molecular systematics uses DNA sequence data for tracking evolutionary changes, thus paraphyly and sometimes phylogenetic polyphyly signal ancestor-descendant transformations at the taxon level, but otherwise molecular phylogenetics makes no provision for extinct paraphyly. Additional transformational analysis is needed to infer serial descent.

Cladogram of the moss genus Didymodon showing taxon transformations. Colors denote dissilient groups.

The Besseyan cactus or commagram is the best evolutionary tree for showing both shared and serial ancestry. First, a cladogram or natural key is generated. Generalized ancestral taxa are identified and specialized descendant taxa are noted as coming off the lineage with a line of one color representing the progenitor through time. A Besseyan cactus or commagram is then devised that represents both shared and serial ancestry. Progenitor taxa may have one or more descendant taxa. Support measures in terms of Bayes factors may be given, following Zander's method of transformational analysis using decibans.

Cladistic analysis groups taxa by shared traits but incorporates a dichotomous branching model borrowed from phenetics. It is essentially a simplified dichotomous natural key, although reversals are tolerated. The problem, of course, is that evolution is not necessarily dichotomous. An ancestral taxon generating two or more descendants requires a longer, less parsimonious tree. A cladogram node summarizes all traits distal to it, not those of any one taxon, and continuity in a cladogram runs from node to node, not from taxon to taxon. This is not a model of evolution, but a variant of hierarchical cluster analysis (trait changes along non-ultrametric branches). This is why a tree based solely on shared traits is called not an evolutionary tree but merely a cladistic tree. Such a tree reflects evolutionary relationships through trait transformations to a large extent, but ignores relationships made by species-level transformation of extant taxa.

A Besseyan cactus evolutionary tree of the moss genus Didymodon with generalized taxa in color and specialized descendants in white. Support measures are given in terms of Bayes factors, using deciban analysis of taxon transformation. Only two progenitors are considered unknown shared ancestors.

Phylogenetics attempts to inject a serial element by postulating ad hoc, undemonstrable shared ancestors at each node of a cladistic tree. For a fully dichotomous cladogram, there is one fewer invisible shared ancestor than the number of terminal taxa. The result is, in effect, a dichotomous natural key with an invisible shared ancestor generating each couplet. This cannot imply a process-based explanation without justification of the dichotomy and of the supposition that the shared ancestors are causes. According to Zander, the cladistic form of analysis of evolutionary relationships cannot falsify any genuine evolutionary scenario incorporating serial transformation.
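That count is easy to verify: a rooted, fully dichotomous tree with n terminal taxa always has exactly n − 1 internal nodes, i.e. n − 1 postulated shared ancestors. A minimal sketch, with the cladogram encoded as nested 2-tuples (a representation assumed here purely for illustration):

```python
def count_nodes(tree):
    """Return (terminal taxa, internal nodes) for a tree of nested 2-tuples;
    any non-tuple value is treated as a terminal taxon (leaf)."""
    if not isinstance(tree, tuple):
        return 1, 0
    left_leaves, left_internal = count_nodes(tree[0])
    right_leaves, right_internal = count_nodes(tree[1])
    # this node itself stands for one hypothetical shared ancestor
    return left_leaves + right_leaves, left_internal + right_internal + 1

# A fully dichotomous cladogram with four terminal taxa, ((A,B),(C,D)):
leaves, ancestors = count_nodes((("A", "B"), ("C", "D")))
print(leaves, ancestors)  # 4 terminal taxa, 3 invisible shared ancestors
```

The same n − 1 relation holds for any dichotomous shape, e.g. the ladder ((A,(B,C)),D) also yields 4 leaves and 3 internal nodes.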

Zander has detailed methods for generating support measures for molecular serial descent and for morphological serial descent using Bayes factors and sequential Bayes analysis through Turing deciban or Shannon informational bit addition.
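Turing's deciban scale, which these support measures draw on, is ten times the base-10 logarithm of the Bayes factor; one ban corresponds to a tenfold shift in the odds, and independent pieces of evidence add on this scale. The function below is a minimal sketch of the conversion, not Zander's specific implementation:

```python
import math

def decibans(bayes_factor):
    """Convert a Bayes factor (odds ratio) to Turing's deciban scale.
    One ban = a factor of ten in the odds; a deciban is a tenth of a ban."""
    return 10.0 * math.log10(bayes_factor)

# A Bayes factor of 100:1 in favour of a hypothesis is 20 decibans.
print(decibans(100))                 # 20.0
# Sequential (independent) evidence multiplies Bayes factors,
# which simply adds decibans: 10:1 followed by 10:1 gives 20 db.
print(decibans(10) + decibans(10))   # 20.0
```

This additivity is what makes sequential Bayes analysis convenient: support accumulates by simple addition rather than repeated multiplication.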

The Tree of Life

Evolution of the vertebrates at class level, width of spindles indicating number of families. Spindle diagrams are often used in evolutionary taxonomy.

As more and more fossil groups were found and recognized in the late 19th and early 20th century, palaeontologists worked to understand the history of animals through the ages by linking together known groups. The Tree of life was slowly being mapped out, with fossil groups taking up their position in the tree as understanding increased.

These groups still retained their formal Linnaean taxonomic ranks. Some of them are paraphyletic in that, although every organism in the group is linked to a common ancestor by an unbroken chain of intermediate ancestors within the group, some other descendants of that ancestor lie outside the group. The evolution and distribution of the various taxa through time is commonly shown as a spindle diagram (often called a Romerogram, after the American palaeontologist Alfred Romer) in which various spindles branch off from each other, each spindle representing a taxon. The width of each spindle is meant to indicate abundance (often the number of families) plotted against time.

By the close of the 19th century, vertebrate palaeontology had mapped out the evolutionary sequence of vertebrates fairly well as currently understood, and a reasonable understanding of the evolutionary sequence of the plant kingdom followed by the early 20th century. The tying together of the various trees into a grand Tree of Life only really became possible with advances in microbiology and biochemistry in the period between the World Wars.

Terminological difference

The two approaches, evolutionary taxonomy and the phylogenetic systematics derived from Willi Hennig, differ in the use of the word "monophyletic". For evolutionary systematicists, "monophyletic" means only that a group is derived from a single common ancestor. In phylogenetic nomenclature, there is an added caveat that the ancestral species and all descendants should be included in the group. The term "holophyletic" has been proposed for the latter meaning. As an example, amphibians are monophyletic under evolutionary taxonomy, since they have arisen from fishes only once. Under phylogenetic taxonomy, amphibians do not constitute a monophyletic group in that the amniotes (reptiles, birds and mammals) have evolved from an amphibian ancestor and yet are not considered amphibians. Such paraphyletic groups are rejected in phylogenetic nomenclature, but are considered a signal of serial descent by evolutionary taxonomists.

Split-brain

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Split-brain

Split-brain or callosal syndrome is a type of disconnection syndrome in which the corpus callosum connecting the two hemispheres of the brain is severed to some degree. It is an association of symptoms produced by disruption of, or interference with, the connection between the hemispheres of the brain. The surgical operation to produce this condition (corpus callosotomy) involves transection of the corpus callosum and is usually a last resort to treat refractory epilepsy. Initially a partial callosotomy is performed; if this operation does not succeed, a complete callosotomy is performed to mitigate the risk of accidental physical injury by reducing the severity and violence of epileptic seizures. Before a callosotomy is considered, epilepsy is treated through pharmaceutical means. After surgery, neuropsychological assessments are often performed.

After the right and left brain are separated, each hemisphere will have its own separate perception, concepts, and impulses to act. Having two "brains" in one body can create some interesting dilemmas. When one split-brain patient dressed himself, he sometimes pulled his pants up with one hand (that side of his brain wanted to get dressed) and down with the other (this side did not). He also reported having grabbed his wife with his left hand and shaken her violently, at which point his right hand came to her aid and restrained the aggressive left hand. However, such conflicts are very rare. If a conflict arises, one hemisphere usually overrides the other.

When split-brain patients are shown an image only in the left half of each eye's visual field, they cannot vocally name what they have seen. This is because the image seen in the left visual field is sent only to the right side of the brain, and most people's speech-control center is on the left side of the brain. Communication between the two sides is inhibited, so the patient cannot say out loud the name of that which the right side of the brain is seeing. A similar effect occurs if a split-brain patient touches an object with only the left hand while receiving no visual cues in the right visual field; the patient will be unable to name the object, as each cerebral hemisphere of the primary somatosensory cortex only contains a tactile representation of the opposite side of the body. If the speech-control center is on the right side of the brain, the same effect can be achieved by presenting the image or object to only the right visual field or hand.

The same effect occurs for visual pairs and reasoning. For example, a patient with split brain is shown a picture of a chicken foot and a snowy field in separate visual fields and asked to choose from a list of words the best association with the pictures. The patient would choose a chicken to associate with the chicken foot and a shovel to associate with the snow; however, when asked to reason why the patient chose the shovel, the response would relate to the chicken (e.g. "the shovel is for cleaning out the chicken coop").

History

In the 1950s, research on people with certain brain injuries led to the suspicion that the "language center" of the brain was commonly located in the left hemisphere. It had been observed, for example, that people with lesions in two specific areas of the left hemisphere lost their ability to talk. Roger Sperry and his colleagues pioneered split-brain research. In his early work on animal subjects, Sperry made many noteworthy discoveries, and the results of these studies over the next thirty years led to his being awarded the Nobel Prize in Physiology or Medicine in 1981. Sperry received the prize for his discoveries concerning the functional specialization of the cerebral hemispheres. With the help of so-called split-brain patients, he carried out experiments that, for the first time, revealed knowledge about the left and right hemispheres. In the 1960s, Sperry was joined by Michael Gazzaniga, a psychobiology Ph.D. student at Caltech. Even though Sperry is considered the founder of split-brain research, Gazzaniga's clear summaries of their collaborative work are consistently cited in psychology texts. In Sperry and Gazzaniga's "The Split Brain in Man" experiment, published in Scientific American in 1967, they attempted to explore the extent to which the two halves of the human brain were able to function independently, and whether they had separate and unique abilities. They wanted to examine how perceptual and intellectual skills were affected in someone with a split brain. At Caltech, Gazzaniga worked with Sperry on the effects of split-brain surgery on perception, vision, and other brain functions. The surgery, a treatment for severe epilepsy, involved severing the corpus callosum, which carries signals between the left hemisphere, the seat of speech and analytical capacity, and the right hemisphere, which helps recognize visual patterns.
At the time this article was written, only ten patients had undergone the surgery to sever their corpus callosum (corpus callosotomy). Four of these patients had consented to participate in Sperry and Gazzaniga's research. After the corpus callosum severing, all four participants' personality, intelligence, and emotions appeared to be unaffected. However, the testing done by Sperry and Gazzaniga showed the subjects demonstrated unusual mental abilities. The researchers created three types of tests to analyze the range of cognitive capabilities of the split-brain subjects. The first was to test their visual stimulation abilities, the second test was a tactile stimulation situation and the third tested auditory abilities.

Visual test

The first test started with a board that had a horizontal row of lights. The subject was told to sit in front of the board and stare at a point in the middle of the lights; the bulbs then flashed across both the right and left visual fields. When the patients were asked afterward to describe what they saw, they said that only the lights on the right side of the board had lit up. Next, when Sperry and Gazzaniga flashed lights only in the left half of the visual field, the subjects claimed not to have seen any lights at all. When the experimenters conducted the test again, they asked the subjects to point to the lights that lit up. Although subjects had reported seeing only the lights on the right flash, they actually pointed to all the lights in both visual fields. This showed that both brain hemispheres had seen the lights and were equally competent in visual perception. The subjects did not say they saw the lights flashed in the left visual field, even though they did see them, because the center for speech is located in the brain's left hemisphere. This test supports the idea that, in order to say one has seen something, the region of the brain associated with speech must be able to communicate with the areas of the brain that process visual information.

Tactile test

In a second experiment, Sperry and Gazzaniga placed a small object in the subject's right or left hand, without the subject being able to see (or hear) it. When the object was placed in the right hand, the isolated left hemisphere perceived it and could easily describe and name it. When it was placed in the left hand, however, the isolated right hemisphere could not name or describe the object. Probing this result, the researchers found that the subjects could later match the object from among several similar objects: tactile sensations limited to the right hemisphere were accurately perceived but could not be verbalized. This further demonstrated the apparent location (or lateralization) of language functions in the left hemisphere.

Combination of both tests

In the last test the experimenters combined the tactile and visual tests. They presented a picture of an object to only the right hemisphere, and subjects were unable to name or describe it; there were no verbal responses to the picture at all. If, however, the subject was able to reach under a screen with the left hand to touch various objects, they were able to pick the one that had been shown in the picture. The subjects were also reported to be able to pick out objects that were related to the picture presented, if the pictured object itself was not under the screen.

Sperry and Gazzaniga went on to conduct other tests to shed light on the right hemisphere's language-processing abilities, as well as on auditory and emotional reactions. The findings of these tests were highly significant for psychology: they showed that the two halves of the brain have numerous distinct functions and specialized skills, and the researchers concluded that each hemisphere really has its own functions. The left hemisphere is thought to be better at writing, speaking, mathematical calculation, and reading, and is the primary area for language. The right hemisphere is seen to possess capabilities for problem solving, recognizing faces, symbolic reasoning, art, and spatial relationships.

Roger Sperry continued this line of research until his death in 1994, and Michael Gazzaniga continues to research the split brain. Their findings have rarely been critiqued or disputed; however, a popular belief that some people are more "right-brained" or "left-brained" has developed. In the mid-1980s Jerre Levy, a psychobiologist at the University of Chicago, was at the forefront of scientists who wanted to dispel the notion that we have two functioning brains. She argued that because each hemisphere has separate functions, the hemispheres must integrate their abilities rather than operate separately, and that no human activity uses only one side of the brain. In 1998 a French study by Hommet and Billiard questioned Sperry and Gazzaniga's conclusion that severing the corpus callosum actually divides the hemispheres of the brain: children born without a corpus callosum showed evidence of information being transmitted between hemispheres, and the authors concluded that subcortical connections must be present in children with this rare brain malformation. Whether such connections are present in split-brain patients remains unclear. Another study, by Parsons, Gabrieli, Phelps, and Gazzaniga in 1998, demonstrated that split-brain patients may commonly perceive the world differently from the rest of us; it suggested that communication between the hemispheres is necessary for imagining or simulating the movements of others. Morin's research on inner speech in 2001 proposed an alternative interpretation of commissurotomy, according to which split-brain patients exhibit two uneven streams of self-awareness: a "complete" one in the left hemisphere and a "primitive" one in the right hemisphere.

Hemispheric specialization

The two hemispheres of the cerebral cortex are linked by the corpus callosum, through which they communicate and coordinate actions and decisions. This communication and coordination is essential because each hemisphere has some separate functions. The right hemisphere excels at nonverbal and spatial tasks, whereas the left hemisphere is more dominant in verbal tasks such as speaking and writing. The right hemisphere controls the primary sensory functions of the left side of the body; in a cognitive sense it is responsible for recognizing objects and timing, and in an emotional sense for empathy, humour and depression. The left hemisphere, on the other hand, controls the primary sensory functions of the right side of the body and is responsible for scientific and mathematical skills and logic. The extent to which brain functions are specialised by area remains under investigation. It is claimed that the difference between the two hemispheres is that the left is "analytic" or "logical" while the right is "holistic" or "intuitive". Many simple tasks, especially comprehension of inputs, require functions specific to both hemispheres, which together produce a unified output through the communication and coordination that occurs between them.

Role of the corpus callosum

The corpus callosum is a structure along the longitudinal fissure of the brain that facilitates much of the communication between the two hemispheres. It is composed of white matter: millions of axons whose cell bodies lie in one hemisphere and whose terminal boutons project into the other. There is evidence, however, that the corpus callosum may also have some inhibitory functions. Post-mortem research on human and monkey brains shows that the corpus callosum is functionally organised; such research has also indicated that the right hemisphere is superior at detecting faces. This organisation results in modality-specific regions of the corpus callosum that are responsible for the transfer of different types of information. Research has revealed that the anterior midbody transfers motor information, the posterior midbody transfers somatosensory information, the isthmus transfers auditory information, and the splenium transfers visual information. Although much of the interhemispheric transfer occurs through the corpus callosum, trace amounts of transfer occur via subcortical pathways.

Studies of the visual pathway in split-brain patients have revealed a redundancy gain (the ability of target detection to benefit from multiple copies of the target) in simple reaction time: in simple responses to visual stimuli, split-brain patients show faster reaction times to bilateral stimuli than standard models predict. A model proposed by Iacoboni et al. suggests that split-brain patients experience asynchronous activity that produces a stronger signal and thus a decreased reaction time. Iacoboni also suggests that dual attention exists in split-brain patients, implying that each cerebral hemisphere has its own attentional system. An alternative approach, taken by Reuter-Lorenz et al., suggests that enhanced redundancy gain in the split brain is primarily due to a slowing of responses to unilateral stimuli rather than a speeding of responses to bilateral ones. Even with this enhanced redundancy gain, however, simple reaction times in split-brain patients remain slower than those of normal adults.

Functional plasticity

Following a stroke or other injury to the brain, functional deficiencies are common. The deficits correspond to the damaged part of the brain: if a stroke has occurred in the motor cortex, deficits may include paralysis, abnormal posture, or abnormal movement synergies. Significant recovery occurs during the first several weeks after the injury, but recovery is generally thought not to continue past six months. If a specific region of the brain is injured or destroyed, its functions can sometimes be transferred to and taken over by a neighbouring region. Little functional plasticity is observed after partial or complete callosotomy; much more plasticity is seen in infant patients receiving a hemispherectomy, which suggests that the opposite hemisphere can adapt some functions typically performed by its counterpart. A study by Anderson demonstrated a correlation between the severity of the injury, the age of the individual, and cognitive performance: there was more neuroplasticity in older children, even when their injuries were extremely severe, than in infants who suffered moderate brain injury. Moderate to severe brain injury mostly causes developmental impairments, and the most severe injuries can have a profound impact on development, leading to long-term cognitive effects. In the aging brain neuroplasticity is extremely uncommon; the "olfactory bulb and hippocampus are two regions of the mammalian brain in which mutations preventing adult neurogenesis were never beneficial, or simply never occurred" (Anderson, 2005).

Corpus callosotomy

Corpus callosotomy is a surgical procedure that sections the corpus callosum, partially or completely disconnecting the two hemispheres. It is typically used as a last-resort treatment for intractable epilepsy. The modern procedure typically involves lesioning only the anterior third of the corpus callosum; if the epileptic seizures continue, the middle third is lesioned, followed by the remaining third if the seizures still persist. The result is a complete callosotomy, in which most of the information transfer between the hemispheres is lost.

Because of the functional mapping of the corpus callosum, a partial callosotomy has less detrimental effects, as it leaves parts of the corpus callosum intact. Little functional plasticity is observed after partial or complete callosotomy in adults; the most neuroplasticity is seen in young children, but not in infants.

It is known that when the corpus callosum is severed, an experimenter can ask each side of the brain the same question and receive two different answers. When the experimenter asks the right visual field (left hemisphere) what it sees, the participant responds verbally; when the experimenter asks the left visual field (right hemisphere) what it sees, the participant cannot respond verbally but will pick up the appropriate object with the left hand.

Memory

It is known that the right and left hemispheres have different functions in memory. The right hemisphere is better at recognizing objects and faces, recalling knowledge the individual has already learned, and recalling images already seen. The left hemisphere is better at mental manipulation, language production, and semantic priming, but is more susceptible to memory confusion than the right hemisphere. The main issue for individuals who have undergone a callosotomy is that, because memory is split into two major systems, the individual is more likely to confuse knowledge they actually possess with information they have only inferred.

In tests, memory in either hemisphere of split-brain patients is generally lower than normal, though better than in patients with amnesia, suggesting that the forebrain commissures are important for the formation of some kinds of memory. In particular, posterior callosal sections that include the hippocampal commissures appear to cause a mild memory deficit (in standardised free-field testing) involving recognition.

Control

In general, split-brain patients behave in a coordinated, purposeful and consistent manner, despite the independent, parallel, usually different and occasionally conflicting processing of the same environmental information by the two disconnected hemispheres. When the two hemispheres receive competing stimuli at the same time, the response mode tends to determine which hemisphere controls behaviour.

Often, split-brain patients are indistinguishable from normal adults. This is due to compensatory phenomena: split-brain patients progressively acquire a variety of strategies to get around their interhemispheric transfer deficits. One issue that can arise in body control is the intermanual effect, in which one side of the body acts in opposition to the other.

Attention

Experiments on covert orienting of spatial attention using the Posner paradigm confirm the existence of two different attentional systems in the two hemispheres. The right hemisphere was found superior to the left on modified versions of spatial-relations tests and in location testing, whereas the left hemisphere was more object-based. The components of mental imagery are differentially specialised: the right hemisphere was found superior for mental rotation, the left hemisphere for image generation. It was also found that the right hemisphere paid more attention to landmarks and scenes, whereas the left hemisphere paid more attention to exemplars of categories.

Case studies of split-brain patients

Patient W.J.

Patient W.J. was the first patient to undergo a full corpus callosotomy, in 1962, after fifteen years of convulsions resulting from grand mal seizures. He was a World War II paratrooper who was injured at age 30 during a jump over the Netherlands in a bombing raid, and again in a prison camp following his first injury. After returning home, he began to suffer from blackouts in which he would not remember what he was doing, where he was, or how and when he had got there. At age 37 he suffered his first generalised convulsion. One of his worst episodes occurred in 1953, when he suffered a series of convulsions lasting many days. During each convulsion his left side would go numb, and although he would recover quickly, after the series of convulsions he never regained complete feeling on his left side.

Before his surgery, both hemispheres functioned and interacted normally, his sensory and motor functions were normal aside from slight hypoesthesia, and he could correctly identify and understand visual stimuli presented to both sides of his visual field. During his surgery in 1962, his surgeons determined that no massa intermedia had developed, and he had undergone atrophy in the part of the right frontal lobe exposed during the procedure. His operation was a success, in that it led to decreases in the frequency and intensity of his seizures.

Patient JW

Funnell et al. (2007) tested patient JW some time before June 2006. They described JW as

a right-handed male who was 47 years old at the time of testing. He successfully completed high school and has no reported learning disabilities. He had his first seizure at the age of 16 and, at the age of 25, he underwent a two-stage resection of the corpus callosum for relief of intractable epilepsy. Complete sectioning of the corpus callosum has been confirmed by MRI. Post-surgical MRI also revealed no evidence of other neurological damage.

Funnell et al.'s (2007) experiments were designed to determine the ability of each of JW's hemispheres to perform simple addition, subtraction, multiplication and division. For example, in one experiment, on each trial they presented an arithmetic problem in the center of the screen for 1 second, followed by a central crosshair that JW was to fixate. After 1 more second, they presented a number to one or the other hemisphere/visual field for 150 ms, too fast for JW to move his eyes. In a random half of the trials the number was the correct answer; in the other half it was incorrect. Using the hand on the same side as the number, JW pressed one key if the number was correct and another key if it was incorrect.

Funnell et al. found that performance of the left hemisphere was highly accurate (around 95%), much better than that of the right hemisphere, which was at chance for subtraction, multiplication, and division. Nevertheless, the right hemisphere performed better than chance on addition (around 58%).

Turk et al. (2002) tested hemispheric differences in JW's recognition of himself and of familiar faces. They used faces that were composites of JW's face and Dr. Michael Gazzaniga's face, ranging from 100% JW, through 50% JW and 50% Gazzaniga, to 100% Gazzaniga. JW pressed keys to indicate whether a presented face looked like him or like Gazzaniga. Turk et al. concluded that cortical networks in the left hemisphere play an important role in self-recognition.

Patient VP

Patient VP is a woman who underwent a two-stage callosotomy in 1979 at the age of 27. Although the callosotomy was reported to be complete, follow-up MRI in 1984 revealed spared fibers in the rostrum and splenium. The spared rostral fibers constituted approximately 1.8% of the total cross-sectional area of the corpus callosum and the spared splenial fibers constituted approximately 1% of the area. VP's postsurgery intelligence and memory quotients were within normal limits.

One of the experiments involving VP attempted to investigate systematically the types of visual information that could be transferred via VP's spared splenial fibers. The first experiment was designed to assess VP's ability to make between-field perceptual judgements about simultaneously presented pairs of stimuli. The stimuli were presented in varying positions with respect to the horizontal and vertical midlines while VP's vision was fixated on a central crosshair. The judgements were based on differences in colour, shape or size, and the testing procedure was the same for all three types of stimuli: after presentation of each pair, VP responded "yes" if the two items in the pair were identical and "no" if they were not. The results showed no perceptual transfer for colour, size or shape, with binomial tests showing that VP's accuracy was no greater than chance.

A second experiment involving VP attempted to investigate which aspects of words transferred between the two hemispheres. The setup was similar to that of the previous experiment, with VP's vision fixated on a central crosshair. A word pair was presented with one word on each side of the crosshair for 150 ms. The words presented were in one of four categories: words that looked and sounded like rhymes (e.g. tire and fire), words that looked as if they should rhyme but did not (e.g. cough and dough), words that did not look as if they should rhyme but did (e.g. bake and ache), and words that neither looked nor sounded like rhymes (e.g. keys and fort). After presentation of each word pair, VP responded "yes" if the two words rhymed and "no" if they did not. VP's performance was above chance and she was able to distinguish among the different conditions. When the word pairs did not sound like rhymes, VP was able to say accurately that the words did not rhyme, regardless of whether or not they looked as if they should rhyme. When the words did rhyme, VP was more likely to say they rhymed, particularly if the words also looked as if they should rhyme.

Although VP showed no evidence of transfer of colour, shape or size, there was evidence of transfer of word information. This is consistent with the speculation that the transfer of word information involves fibres in the ventroposterior region of the splenium, the same region in which VP had callosal sparing. VP is able to integrate words presented to both visual fields, creating a concept that is not suggested by either word alone; for example, she combines "head" and "stone" to form the integrated concept of a tombstone.

Kim Peek

Kim Peek was arguably the most well-known savant. He was born on November 11, 1951 with an enlarged head, sac-like protrusions of the brain and its covering membranes through openings in the skull, a malformed cerebellum, and without a corpus callosum, an anterior commissure, or a posterior commissure. He was able to memorize over 9,000 books and information from approximately 15 subject areas, including world and American history, sports, movies, geography, actors and actresses, the Bible, church history, literature, classical music, area codes and zip codes of the United States, the television stations serving those areas, and step-by-step directions within any major U.S. city. Despite these abilities, he had an IQ of 87, was diagnosed as autistic, was unable to button his shirt, and had difficulties performing everyday tasks. The missing structures of his brain have yet to be linked to his increased abilities, but they can be linked to his ability to read the pages of a book in 8–10 seconds: he was able to view the left page of a book with his left visual field and the right page with his right visual field, so he could read both pages simultaneously. He had also developed language areas in both hemispheres, something very uncommon in split-brain patients. Language is processed in areas of the left temporal lobe and involves a contralateral transfer of information before the brain can process what is being read. In Peek's case there was no transfer ability, which is what led to his development of language centers in each hemisphere, and many believe this is the reason behind his extremely fast reading.

Although Peek did not undergo corpus callosotomy, he is considered a natural split-brain patient and is critical to understanding the importance of the corpus callosum. Kim Peek died in 2009.
