Saturday, February 16, 2019

Medical image computing

From Wikipedia, the free encyclopedia

Medical image computing (MIC) is an interdisciplinary field at the intersection of computer science, information engineering, electrical engineering, physics, mathematics and medicine. This field develops computational and mathematical methods for solving problems pertaining to medical images and their use for biomedical research and clinical care. 
 
The main goal of MIC is to extract clinically relevant information or knowledge from medical images. While closely related to the field of medical imaging, MIC focuses on the computational analysis of the images, not their acquisition. The methods can be grouped into several broad categories: image segmentation, image registration, image-based physiological modeling, and others.

Data forms

Medical image computing typically operates on uniformly sampled data with regular x-y-z spatial spacing (images in 2D and volumes in 3D, generically referred to as images). At each sample point, data is commonly represented in an integer form such as signed or unsigned short (16-bit), although forms from unsigned char (8-bit) to 32-bit float are not uncommon. The particular meaning of the data at the sample point depends on modality: for example, a CT acquisition collects radiodensity values, while an MRI acquisition may collect T1- or T2-weighted images. Longitudinal, time-varying acquisitions may or may not acquire images with regular time steps. Fan-like images due to modalities such as curved-array ultrasound are also common and require different representational and algorithmic techniques to process. Other data forms include sheared images due to gantry tilt during acquisition, and unstructured meshes, such as hexahedral and tetrahedral forms, which are used in advanced biomechanical analysis (e.g., tissue deformation, vascular transport, bone implants).
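The sample-spacing convention above can be made concrete with a short sketch. The following is a minimal, hypothetical representation of a uniformly sampled volume using NumPy; the array shape, spacing values, and helper function are illustrative and not from any particular toolkit:

```python
import numpy as np

# A minimal sketch of a uniformly sampled 3D volume with anisotropic
# voxel spacing, as is typical for CT/MRI data. The shape and spacing
# values here are illustrative, not from any real acquisition.
volume = np.zeros((64, 64, 32), dtype=np.int16)   # signed short samples
spacing = np.array([0.5, 0.5, 2.0])               # mm per voxel along x, y, z

def index_to_physical(index, spacing, origin=np.zeros(3)):
    """Map an integer voxel index to physical (world) coordinates in mm."""
    return origin + np.asarray(index) * spacing

# The voxel at index (10, 20, 5) lies 10*0.5 mm, 20*0.5 mm, 5*2.0 mm
# from the origin along each axis.
print(index_to_physical((10, 20, 5), spacing))    # [ 5. 10. 10.]
```

Keeping spacing (and origin) separate from the raw sample array is what lets the same algorithms handle both isotropic and anisotropic acquisitions.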

Segmentation

A T1-weighted MR image of the brain of a patient with a meningioma after injection of an MRI contrast agent (top left), and the same image with the result of an interactive segmentation overlaid in green (3D model of the segmentation at the top right, axial and coronal views at the bottom).
 
Segmentation is the process of partitioning an image into different meaningful segments. In medical imaging, these segments often correspond to different tissue classes, organs, pathologies, or other biologically relevant structures. Medical image segmentation is made difficult by low contrast, noise, and other imaging ambiguities. Although there are many computer vision techniques for image segmentation, some have been adapted specifically for medical image computing. Below is a sampling of techniques within this field; the implementation relies on the expertise that clinicians can provide.
  • Atlas-Based Segmentation: For many applications, a clinical expert can manually label several images; segmenting unseen images is a matter of extrapolating from these manually labeled training images. Methods of this style are typically referred to as atlas-based segmentation methods. Parametric atlas methods typically combine these training images into a single atlas image, while nonparametric atlas methods typically use all of the training images separately. Atlas-based methods usually require the use of image registration in order to align the atlas image or images to a new, unseen image.
  • Shape-Based Segmentation: Many methods parametrize a template shape for a given structure, often relying on control points along the boundary. The entire shape is then deformed to match a new image. Two of the most common shape-based techniques are Active Shape Models and Active Appearance Models. These methods have been very influential, and have given rise to similar models.
  • Image-Based segmentation: Some methods initialize a template and refine its shape according to the image data while minimizing integral error measures, such as the Active contour model and its variations.
  • Interactive Segmentation: Interactive methods are useful when clinicians can provide some information, such as a seed region or rough outline of the region to segment. An algorithm can then iteratively refine such a segmentation, with or without guidance from the clinician. Manual segmentation, using tools such as a paint brush to explicitly define the tissue class of each pixel, remains the gold standard for many imaging applications. Recently, principles from feedback control theory have been incorporated into segmentation, which give the user much greater flexibility and allow for the automatic correction of errors.
  • Subjective surface Segmentation: This method is based on the idea of evolving a segmentation function, governed by an advection-diffusion model. To segment an object, a segmentation seed is needed (that is, a starting point that determines the approximate position of the object in the image), from which an initial segmentation function is constructed. The idea behind the subjective surface method is that the position of the seed is the main factor determining the form of this segmentation function.
Other classifications of image segmentation methods are similar to the categories above. In addition, a further group of "hybrid" methods can be defined, based on a combination of several of these techniques.
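As an illustration of the interactive, seed-based style of segmentation described above, the following is a minimal sketch of region growing from a clinician-provided seed. The intensity-tolerance rule and toy image are assumptions for the example, not a clinical algorithm:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Grow a segment from a seed pixel, adding 4-connected neighbors
    whose intensity is within `tol` of the seed intensity."""
    seg = np.zeros(image.shape, dtype=bool)
    seed_val = int(image[seed])
    queue = deque([seed])
    seg[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                    and not seg[ny, nx]
                    and abs(int(image[ny, nx]) - seed_val) <= tol):
                seg[ny, nx] = True
                queue.append((ny, nx))
    return seg

# Toy image: a bright 3x3 "lesion" on a dark background.
img = np.zeros((7, 7), dtype=np.uint8)
img[2:5, 2:5] = 200
mask = region_grow(img, seed=(3, 3), tol=10)
print(mask.sum())  # 9 pixels segmented
```

An interactive tool would let the clinician adjust the seed or tolerance and re-run the growth, which is the iterative refinement loop described above.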

Registration

CT image (left), PET image (center) and overlay of both (right) after correct registration.
 
Image registration is a process that searches for the correct alignment of images. In the simplest case, two images are aligned. Typically, one image is treated as the target image and the other is treated as a source image; the source image is transformed to match the target image. The optimization procedure updates the transformation of the source image based on a similarity value that evaluates the current quality of the alignment. This iterative procedure is repeated until a (local) optimum is found. An example is the registration of CT and PET images to combine structural and metabolic information (see figure). 

Image registration is used in a variety of medical applications:
  • Studying temporal changes. Longitudinal studies acquire images over several months or years to study long-term processes, such as disease progression. Time series correspond to images acquired within the same session (seconds or minutes). They can be used to study cognitive processes, heart deformations and respiration.
  • Combining complementary information from different imaging modalities. An example is the fusion of anatomical and functional information. Since the size and shape of structures vary across modalities, it is more challenging to evaluate the alignment quality. This has led to the use of similarity measures such as mutual information.
  • Characterizing a population of subjects. In contrast to intra-subject registration, a one-to-one mapping may not exist between subjects, depending on the structural variability of the organ of interest. Inter-subject registration is required for atlas construction in computational anatomy. Here, the objective is to statistically model the anatomy of organs across subjects.
  • Computer-assisted surgery. In computer-assisted surgery pre-operative images such as CT or MRI are registered to intra-operative images or tracking systems to facilitate image guidance or navigation.
There are several important considerations when performing image registration:
  • The transformation model. Common choices are rigid, affine, and deformable transformation models. B-spline and thin plate spline models are commonly used for parameterized transformation fields. Non-parametric or dense deformation fields carry a displacement vector at every grid location; this necessitates additional regularization constraints. A specific class of deformation fields are diffeomorphisms, which are invertible transformations with a smooth inverse.
  • The similarity metric. A distance or similarity function is used to quantify the registration quality. This similarity can be calculated either on the original images or on features extracted from the images. Common similarity measures are sum of squared distances (SSD), correlation coefficient, and mutual information. The choice of similarity measure depends on whether the images are from the same modality; the acquisition noise can also play a role in this decision. For example, SSD is the optimal similarity measure for images of the same modality with Gaussian noise. However, the image statistics in ultrasound are significantly different from Gaussian noise, leading to the introduction of ultrasound specific similarity measures. Multi-modal registration requires a more sophisticated similarity measure; alternatively, a different image representation can be used, such as structural representations or registering adjacent anatomy.
  • The optimization procedure. Either continuous or discrete optimization is performed. For continuous optimization, gradient-based optimization techniques are applied to improve the convergence speed.
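The three considerations above (transformation model, similarity metric, optimizer) can be sketched together in a few lines. The example below uses the simplest possible choices: integer translations as the transformation model, SSD as the similarity measure, and exhaustive search as the optimizer; real registration software uses gradient-based optimization and richer transformation models:

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences similarity (lower is better)."""
    return float(np.sum((a.astype(float) - b.astype(float)) ** 2))

def register_translation(target, source, max_shift=3):
    """Exhaustively search integer translations of `source` and return
    the (dy, dx) shift minimizing SSD against `target`."""
    best, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            moved = np.roll(np.roll(source, dy, axis=0), dx, axis=1)
            score = ssd(target, moved)
            if score < best:
                best, best_shift = score, (dy, dx)
    return best_shift

target = np.zeros((16, 16)); target[5:9, 5:9] = 1.0
source = np.roll(np.roll(target, 2, axis=0), -1, axis=1)  # known misalignment
print(register_translation(target, source))  # (-2, 1) undoes the shift
```

Because SSD assumes comparable intensities, this sketch only applies to mono-modal alignment; multi-modal registration would swap in a measure such as mutual information.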

Visualization

Volume rendering (left), axial cross-section (right top), and sagittal cross-section (right bottom) of a CT image of a subject with multiple nodular lesions (white line) in the lung.
 
Visualization plays several key roles in Medical Image Computing. Methods from scientific visualization are used to understand and communicate about medical images, which are inherently spatial-temporal. Data visualization and data analysis are used on unstructured data forms, for example when evaluating statistical measures derived during algorithmic processing. Direct interaction with data, a key feature of the visualization process, is used to perform visual queries about data, annotate images, guide segmentation and registration processes, and control the visual representation of data (by controlling lighting, rendering properties, and viewing parameters). Visualization is used both for initial exploration and for conveying intermediate and final results of analyses.

The figure "Visualization of Medical Imaging" illustrates several types of visualization: 1. the display of cross-sections as gray scale images; 2. reformatted views of gray scale images (the sagittal view in this example has a different orientation than the original direction of the image acquisition); and 3. a 3D volume rendering of the same data. The nodular lesion is clearly visible in the different presentations and has been annotated with a white line.

Atlases

Medical images can vary significantly across individuals due to people having organs of different shapes and sizes. Therefore, representing medical images to account for this variability is crucial. A popular approach to represent medical images is through the use of one or more atlases. Here, an atlas refers to a specific model for a population of images with parameters that are learned from a training dataset.

The simplest example of an atlas is a mean intensity image, commonly referred to as a template. However, an atlas can also include richer information, such as local image statistics and the probability that a particular spatial location has a certain label. A new medical image, not used during training, can be mapped to an atlas that has been tailored to the specific application, such as segmentation or group analysis. Mapping an image to an atlas usually involves registering the image and the atlas. This deformation can be used to address variability in medical images.
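The simplest atlas described above, a mean intensity template, can be sketched directly. The images here are synthetic stand-ins assumed to be already registered to a common coordinate frame:

```python
import numpy as np

# A minimal sketch of the simplest atlas: a mean intensity template built
# by averaging images already registered to a common coordinate frame.
aligned_images = [np.full((8, 8), v, dtype=float) for v in (90.0, 100.0, 110.0)]
template = np.mean(aligned_images, axis=0)

# A richer atlas can also carry voxel-wise statistics, e.g. the per-voxel
# standard deviation across the training images, capturing local variability.
variability = np.std(aligned_images, axis=0)

print(template[0, 0])  # 100.0
```

A probabilistic label atlas would be built the same way, averaging binary label maps instead of intensities so each voxel stores the probability of carrying a given label.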

Single template

The simplest approach is to model medical images as deformed versions of a single template image. For example, anatomical MRI brain scans are often mapped to the MNI template so as to represent all the brain scans in common coordinates. The main drawback of a single-template approach is that if there are significant differences between the template and a given test image, then there may not be a good way to map one onto the other. For example, an anatomical MRI brain scan of a patient with severe brain abnormalities (e.g., a tumor or a prior surgical procedure) may not easily map to the MNI template.

Multiple templates

Rather than relying on a single template, multiple templates can be used. The idea is to represent an image as a deformed version of one of the templates. For example, there could be one template for a healthy population and one template for a diseased population. However, in many applications, it is not clear how many templates are needed. A simple albeit computationally expensive way to deal with this is to have every image in a training dataset be a template image and thus every new image encountered is compared against every image in the training dataset. A more recent approach automatically finds the number of templates needed.

Statistical analysis

Statistical methods combine the medical imaging field with modern Computer Vision, Machine Learning and Pattern Recognition. Over the last decade, several large datasets have been made publicly available (see for example ADNI, 1000 functional Connectomes Project), in part due to collaboration between various institutes and research centers. This increase in data size calls for new algorithms that can mine and detect subtle changes in the images to address clinical questions. Such clinical questions are very diverse and include group analysis, imaging biomarkers, disease phenotyping and longitudinal studies.

Group analysis

In group analysis, the objective is to detect and quantify abnormalities induced by a disease by comparing the images of two or more cohorts. Usually one of these cohorts consists of normal (control) subjects, and the other consists of abnormal patients. Variation caused by the disease can manifest itself as abnormal deformation of anatomy. For example, shrinkage of sub-cortical tissues such as the hippocampus in the brain may be linked to Alzheimer's disease. Additionally, changes in biochemical (functional) activity can be observed using imaging modalities such as positron emission tomography.

The comparison between groups is usually conducted at the voxel level. Hence, the most popular pre-processing pipeline, particularly in neuroimaging, transforms all of the images in a dataset to a common coordinate frame via medical image registration in order to maintain correspondence between voxels. Given this voxel-wise correspondence, the most common frequentist method is to extract a statistic for each voxel (for example, the mean voxel intensity for each group) and perform statistical hypothesis testing to evaluate whether a null hypothesis is or is not supported. The null hypothesis typically assumes that the two cohorts are drawn from the same distribution and hence should have the same statistical properties (for example, the mean values of the two groups are equal for a particular voxel). Since medical images contain large numbers of voxels, the issue of multiple comparisons needs to be addressed. There are also Bayesian approaches to tackle the group analysis problem.
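The voxel-wise hypothesis-testing pipeline described above can be sketched as follows, using a two-sample t-test per voxel and a Bonferroni correction for the multiple-comparison problem. The cohort sizes, effect location, and effect size are synthetic assumptions for the example:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic "voxel" data: 20 controls and 20 patients, 1000 voxels each,
# with a true group difference injected at voxel 0 only.
n_voxels = 1000
controls = rng.normal(0.0, 1.0, size=(20, n_voxels))
patients = rng.normal(0.0, 1.0, size=(20, n_voxels))
patients[:, 0] += 2.0  # simulated disease effect at one voxel

# Voxel-wise two-sample t-test of the null "both cohorts share a mean".
t, p = stats.ttest_ind(controls, patients, axis=0)

# Bonferroni correction for the multiple-comparison problem.
significant = p < (0.05 / n_voxels)
print(np.flatnonzero(significant))  # voxel 0 is expected to survive
```

Bonferroni is the most conservative correction; neuroimaging pipelines commonly use less strict alternatives such as false discovery rate or cluster-level inference.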

Classification

Although group analysis can quantify the general effects of a pathology on anatomy and function, it does not provide subject-level measures, and hence cannot be used as a biomarker for diagnosis (see Imaging Biomarkers). Clinicians, on the other hand, are often interested in early diagnosis of the pathology (i.e., classification) and in learning the progression of a disease (i.e., regression). From a methodological point of view, current techniques vary from applying standard machine learning algorithms to medical imaging datasets (e.g., the Support Vector Machine) to developing new approaches adapted to the needs of the field. The main difficulties are as follows:
  • Small sample size: a large medical imaging dataset contains hundreds to thousands of images, whereas the number of voxels in a typical volumetric image can easily go beyond millions. A remedy to this problem is to reduce the number of features in an informative sense. Several unsupervised, semi-supervised, and supervised approaches have been proposed to address this issue.
  • Interpretability: Good generalization accuracy is not always the primary objective, as clinicians would like to understand which parts of the anatomy are affected by the disease. Therefore, interpretability of the results is very important; methods that ignore the image structure are not favored. Alternative methods based on feature selection have been proposed.
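The small-sample-size remedy mentioned above, reducing the number of features before classification, can be sketched as follows. The example projects high-dimensional "voxel" features onto principal components and then fits a nearest-class-mean rule, chosen for brevity; the data, dimensions, and classifier are synthetic assumptions, not a validated pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Small-sample / high-dimensional setting: 30 subjects, 500 "voxel"
# features, with an informative signal in the first few features.
X = rng.normal(size=(30, 500))
y = np.array([0] * 15 + [1] * 15)
X[y == 1, :5] += 1.5

# PCA via SVD of the centered data: keep the top 10 components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:10].T  # 10-dimensional representation per subject

# Nearest-class-mean classifier in the reduced space.
means = np.stack([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - means) ** 2).sum(axis=2), axis=1)
print((pred == y).mean())  # training accuracy on the toy data
```

In practice the dimensionality reduction and classifier would be fit on training data only and evaluated with cross-validation; fitting and scoring on the same subjects, as here, overstates accuracy.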

Clustering

Image-based pattern classification methods typically assume that the neurological effects of a disease are distinct and well defined. This may not always be the case. For a number of medical conditions, the patient populations are highly heterogeneous, and further categorization into sub-conditions has not been established. Additionally, some diseases (e.g., Autism Spectrum Disorder (ASD), Schizophrenia, Mild cognitive impairment (MCI)) can be characterized by a continuous or nearly continuous spectrum from mild cognitive impairment to very pronounced pathological changes. To facilitate image-based analysis of heterogeneous disorders, methodological alternatives to pattern classification have been developed. These techniques borrow ideas from high-dimensional clustering and high-dimensional pattern regression to cluster a given population into homogeneous sub-populations. The goal is to provide a better quantitative understanding of the disease within each sub-population.

Shape analysis

Shape analysis is the field of Medical Image Computing that studies geometrical properties of structures obtained from different imaging modalities. Shape analysis has recently become of increasing interest to the medical community due to its potential to precisely locate morphological changes between different populations of structures, e.g., healthy vs. pathological, female vs. male, young vs. elderly. Shape analysis includes two main steps: shape correspondence and statistical analysis.
  • Shape correspondence is the methodology that computes corresponding locations between geometric shapes represented by triangle meshes, contours, point sets, or volumetric images. The definition of correspondence directly influences the analysis. Among the options for correspondence frameworks are: anatomical correspondence, manual landmarks, functional correspondence (e.g., in brain morphometry, loci responsible for the same neuronal functionality), geometric correspondence, intensity similarity (for image volumes), etc. Some approaches, e.g. spectral shape analysis, do not require correspondence but compare shape descriptors directly.
  • Statistical analysis will provide measurements of structural change at correspondent locations.

Longitudinal studies

In longitudinal studies the same person is imaged repeatedly. This information can be incorporated both into the image analysis, as well as into the statistical modeling.
  • In longitudinal image processing, segmentation and analysis methods for individual time points are informed and regularized with common information, usually from a within-subject template. This regularization is designed to reduce measurement noise and thus helps increase sensitivity and statistical power. At the same time, over-regularization needs to be avoided so that effect sizes remain stable. Intense regularization, for example, can lead to excellent test-retest reliability but limits the ability to detect true changes and differences across groups. Often a trade-off must be sought that optimizes noise reduction while limiting the loss of effect size. Another common challenge in longitudinal image processing is the, often unintentional, introduction of processing bias. When, for example, follow-up images are registered and resampled to the baseline image, interpolation artifacts are introduced into only the follow-up images and not the baseline. These artifacts can cause spurious effects (usually a bias towards overestimating longitudinal change and thus underestimating the required sample size). It is therefore essential that all time points are treated exactly the same to avoid any processing bias.
  • Post-processing and statistical analysis of longitudinal data usually requires dedicated statistical tools such as repeated measure ANOVA or the more powerful linear mixed effects models. Additionally, it is advantageous to consider the spatial distribution of the signal. For example, cortical thickness measurements will show a correlation within-subject across time and also within a neighborhood on the cortical surface - a fact that can be used to increase statistical power. Furthermore, time-to-event (aka survival) analysis is frequently employed to analyze longitudinal data and determine significant predictors.

Image-based physiological modeling

Traditionally, medical image computing has addressed the quantification and fusion of structural or functional information available at the point and time of image acquisition. In this regard, it can be seen as quantitative sensing of the underlying anatomical, physical or physiological processes. However, over the last few years, there has been a growing interest in the predictive assessment of disease or therapy course. Image-based modelling, be it of a biomechanical or physiological nature, can therefore extend the possibilities of image computing from a descriptive to a predictive angle.

According to the STEP research roadmap, the Virtual Physiological Human (VPH) is a methodological and technological framework that, once established, will enable the investigation of the human body as a single complex system. Underlying the VPH concept, the International Union for Physiological Sciences (IUPS) has been sponsoring the IUPS Physiome Project for more than a decade. This is a worldwide public domain effort to provide a computational framework for understanding human physiology. It aims at developing integrative models at all levels of biological organization, from genes to whole organisms via gene regulatory networks, protein pathways, integrative cell functions, and tissue and whole organ structure/function relations. Such an approach aims at transforming current practice in medicine and underpins a new era of computational medicine.

In this context, medical imaging and image computing play an increasingly important role as they provide systems and methods to image, quantify and fuse both structural and functional information about the human being in vivo. These two broad research areas include the transformation of generic computational models to represent specific subjects, thus paving the way for personalized computational models. Individualization of generic computational models through imaging can be realized in three complementary directions:
  • Definition of the subject-specific computational domain (anatomy) and related subdomains (tissue types);
  • Definition of boundary and initial conditions from (dynamic and/or functional) imaging; and
  • Characterization of structural and functional tissue properties.
In addition, imaging also plays a pivotal role in the evaluation and validation of such models both in humans and in animal models, and in the translation of models to the clinical setting with both diagnostic and therapeutic applications. In this specific context, molecular, biological, and pre-clinical imaging render additional data and understanding of basic structure and function in molecules, cells, tissues and animal models that may be transferred to human physiology where appropriate. 

The applications of image-based VPH/Physiome models in basic and clinical domains are vast. Broadly speaking, they promise to become new virtual imaging techniques: effectively, more parameters, many of them not directly observable, will be imaged in silico based on the integration of observable but sometimes sparse and inconsistent multimodal images and physiological measurements. Computational models will serve to engender interpretation of the measurements in a way compliant with the underlying biophysical, biochemical or biological laws of the physiological or pathophysiological processes under investigation. Ultimately, such investigative tools and systems will help our understanding of disease processes, the natural history of disease evolution, and the influence of pharmacological and/or interventional therapeutic procedures on the course of a disease.
Cross-fertilization between imaging and modelling goes beyond interpretation of measurements in a way consistent with physiology. Image-based patient-specific modelling, combined with models of medical devices and pharmacological therapies, opens the way to predictive imaging whereby one will be able to understand, plan and optimize such interventions in silico.

Mathematical methods in medical imaging

A number of sophisticated mathematical methods have entered medical imaging and have already been implemented in various software packages. These include approaches based on partial differential equations (PDEs) and curvature-driven flows for enhancement, segmentation, and registration. Since they employ PDEs, the methods are amenable to parallelization and implementation on GPGPUs. A number of these techniques have been inspired by ideas from optimal control; accordingly, ideas from control theory have recently made their way into interactive methods, especially segmentation. Moreover, because of noise and the need for statistical estimation techniques for more dynamically changing imagery, the Kalman filter and particle filter have come into use. A survey of these methods with an extensive list of references may be found in the literature.
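As a minimal example of the PDE-based methods mentioned above, the following applies explicit finite-difference steps of the linear heat equation, the simplest diffusion-type image smoother; curvature-driven flows replace the Laplacian with curvature-dependent terms but follow the same evolution pattern. Periodic boundaries and the step size are choices made for this sketch:

```python
import numpy as np

def diffuse(image, n_steps=10, dt=0.2):
    """Explicit finite-difference steps of the linear heat equation
    u_t = u_xx + u_yy, the simplest PDE-based image smoother.
    dt <= 0.25 keeps the explicit scheme stable in 2D; boundaries
    are periodic because np.roll wraps around."""
    u = image.astype(float).copy()
    for _ in range(n_steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u += dt * lap
    return u

noisy = np.zeros((32, 32)); noisy[16, 16] = 1.0   # a single noise spike
smooth = diffuse(noisy)
print(smooth.max() < 1.0, abs(smooth.sum() - 1.0) < 1e-9)  # spike spreads; mass conserved
```

Each time step is an independent local stencil update, which is why such methods map naturally onto GPGPU parallelization.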

Modality specific computing

Some imaging modalities provide very specialized information. The resulting images cannot be treated as regular scalar images and give rise to new sub-areas of Medical Image Computing. Examples include diffusion MRI, functional MRI and others.

Diffusion MRI

A mid-axial slice of the ICBM diffusion tensor image template. Each voxel's value is a tensor represented here by an ellipsoid. Color denotes principal orientation: red = left-right, blue = inferior-superior, green = posterior-anterior.
 
Diffusion MRI is a structural magnetic resonance imaging modality that allows measurement of the diffusion process of molecules. Diffusion is measured by applying a gradient pulse to a magnetic field along a particular direction. In a typical acquisition, a set of uniformly distributed gradient directions is used to create a set of diffusion weighted volumes. In addition, an unweighted volume is acquired under the same magnetic field without application of a gradient pulse. As each acquisition is associated with multiple volumes, diffusion MRI has created a variety of unique challenges in medical image computing. 

In medicine, there are two major computational goals in diffusion MRI:
  • Estimation of local tissue properties, such as diffusivity;
  • Estimation of local directions and global pathways of diffusion.
The diffusion tensor, a 3 × 3 symmetric positive-definite matrix, offers a straightforward solution to both of these goals. It is proportional to the covariance matrix of a Normally distributed local diffusion profile and, thus, the dominant eigenvector of this matrix is the principal direction of local diffusion. Due to the simplicity of this model, a maximum likelihood estimate of the diffusion tensor can be found by simply solving a system of linear equations at each location independently. However, as the volume is assumed to contain contiguous tissue fibers, it may be preferable to estimate the volume of diffusion tensors in its entirety by imposing regularity conditions on the underlying field of tensors. Scalar values can be extracted from the diffusion tensor, such as the fractional anisotropy, mean, axial and radial diffusivities, which indirectly measure tissue properties such as the dysmyelination of axonal fibers  or the presence of edema. Standard scalar image computing methods, such as registration and segmentation, can be applied directly to volumes of such scalar values. However, to fully exploit the information in the diffusion tensor, these methods have been adapted to account for tensor valued volumes when performing registration and segmentation.
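The linear estimation described above can be sketched at a single voxel. Signals are simulated from a known tensor via the Stejskal-Tanner relation S_i = S0 · exp(−b gᵀ D g), the six unique tensor entries are recovered by least squares, and fractional anisotropy is computed from the eigenvalues. The b-value, gradient set, and tensor values are illustrative choices for the example:

```python
import numpy as np

b = 1000.0  # b-value in s/mm^2 (illustrative)
gradients = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                      [1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)
gradients /= np.linalg.norm(gradients, axis=1, keepdims=True)

D_true = np.diag([1.5e-3, 0.4e-3, 0.4e-3])   # a prolate ("fiber-like") tensor
S0 = 1000.0                                  # unweighted signal
# Simulate diffusion-weighted signals: S_i = S0 * exp(-b g_i^T D g_i).
S = S0 * np.exp(-b * np.einsum('ij,jk,ik->i', gradients, D_true, gradients))

# Linear system: ln(S0/S_i)/b = g^T D g, unknowns [Dxx,Dyy,Dzz,Dxy,Dxz,Dyz].
gx, gy, gz = gradients.T
A = np.column_stack([gx**2, gy**2, gz**2, 2*gx*gy, 2*gx*gz, 2*gy*gz])
d = np.linalg.lstsq(A, np.log(S0 / S) / b, rcond=None)[0]
D = np.array([[d[0], d[3], d[4]],
              [d[3], d[1], d[5]],
              [d[4], d[5], d[2]]])

# Fractional anisotropy from the eigenvalues of the recovered tensor.
ev = np.linalg.eigvalsh(D)
fa = np.sqrt(1.5 * np.sum((ev - ev.mean())**2) / np.sum(ev**2))
print(np.allclose(D, D_true), round(fa, 3))
```

With noise-free signals and six independent directions the fit is exact; with real data, more directions and regularized or field-wise estimation (as noted above) improve robustness.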

Given the principal direction of diffusion at each location in the volume, it is possible to estimate the global pathways of diffusion through a process known as tractography. However, due to the relatively low resolution of diffusion MRI, many of these pathways may cross, kiss or fan at a single location. In this situation, the single principal direction of the diffusion tensor is not an appropriate model for the local diffusion distribution. The most common solution to this problem is to estimate multiple directions of local diffusion using more complex models. These include mixtures of diffusion tensors, Q-ball imaging, diffusion spectrum imaging  and fiber orientation distribution functions, which typically require HARDI acquisition with a large number of gradient directions. As with the diffusion tensor, volumes valued with these complex models require special treatment when applying image computing methods, such as registration and segmentation.
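A minimal sketch of deterministic streamline tractography as described above: starting from a seed, repeatedly step along the local principal diffusion direction. A synthetic 2D direction field stands in for the eigenvector field; the step size, stopping rule, and nearest-voxel lookup are simplifying assumptions:

```python
import numpy as np

def track(principal_dirs, seed, step=0.5, n_steps=20):
    """Deterministic streamline tractography sketch: repeatedly step
    along the principal diffusion direction at the nearest voxel.
    `principal_dirs` is an (H, W, 2) field of unit vectors."""
    path = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        y, x = np.round(path[-1]).astype(int)
        if not (0 <= y < principal_dirs.shape[0]
                and 0 <= x < principal_dirs.shape[1]):
            break  # left the volume: stop tracking
        d = principal_dirs[y, x]
        # Keep a consistent orientation (eigenvectors are sign-ambiguous).
        if len(path) > 1 and np.dot(d, path[-1] - path[-2]) < 0:
            d = -d
        path.append(path[-1] + step * d)
    return np.array(path)

# Synthetic field: a horizontal "fiber bundle" everywhere.
dirs = np.zeros((10, 10, 2)); dirs[..., 1] = 1.0   # all vectors point along +x
streamline = track(dirs, seed=(5.0, 1.0))
print(streamline[-1])  # ends further along x on the same row
```

Where fibers cross, kiss, or fan, a single principal direction is ambiguous, which is exactly where the multi-direction models mentioned above (mixtures of tensors, Q-ball, fiber orientation distributions) are needed.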

Functional MRI

Functional magnetic resonance imaging (fMRI) is a medical imaging modality that indirectly measures neural activity by observing the local hemodynamics, or blood oxygen level dependent signal (BOLD). fMRI data offers a range of insights, and can be roughly divided into two categories:
  • Task related fMRI is acquired as the subject is performing a sequence of timed experimental conditions. In block-design experiments, the conditions are present for short periods of time (e.g., 10 seconds) and are alternated with periods of rest. Event-related experiments rely on a random sequence of stimuli and use a single time point to denote each condition. The standard approach to analyze task related fMRI is the general linear model (GLM).
  • Resting state fMRI is acquired in the absence of any experimental task. Typically, the objective is to study the intrinsic network structure of the brain. Observations made during rest have also been linked to specific cognitive processes such as encoding or reflection. Most studies of resting state fMRI focus on low frequency fluctuations of the fMRI signal (LF-BOLD). Seminal discoveries include the default network, a comprehensive cortical parcellation, and the linking of network characteristics to behavioral parameters.
There is a rich set of methodology used to analyze functional neuroimaging data, and there is often no consensus regarding the best method. Instead, researchers approach each problem independently and select a suitable model/algorithm. In this context there is a relatively active exchange among neuroscience, computational biology, statistics, and machine learning communities. Prominent approaches include
  • Massive univariate approaches that probe individual voxels in the imaging data for a relationship to the experimental condition. The prime approach is the general linear model (GLM).
  • Multivariate and classifier-based approaches, often referred to as multi-voxel pattern analysis or multivariate pattern analysis, probe the data for global and potentially distributed responses to an experimental condition. Early approaches used support vector machines (SVM) to study responses to visual stimuli. Recently, alternative pattern recognition algorithms have been explored, such as random forest based Gini contrast, sparse regression, and dictionary learning.
  • Functional connectivity analysis studies the intrinsic network structure of the brain, including the interactions between regions. The majority of such studies focus on resting state data to parcellate the brain or to find correlates to behavioral measures. Task-specific data can be used to study causal relationships among brain regions (e.g., dynamic causal modelling (DCM)).
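The massive-univariate GLM approach listed above can be sketched as an independent least-squares fit at every voxel, followed by a contrast t-statistic. The block design, noise model, and location of the "active" voxel are synthetic assumptions, not the pipeline of any particular package:

```python
import numpy as np

rng = np.random.default_rng(2)

# Massive-univariate GLM sketch: fit Y = X @ beta + noise independently
# at every voxel. The design matrix holds a task regressor (a boxcar
# alternating task/rest) and an intercept; all signals are synthetic.
n_scans, n_voxels = 80, 50
task = np.tile(np.repeat([1.0, 0.0], 10), 4)       # block design regressor
X = np.column_stack([task, np.ones(n_scans)])
Y = rng.normal(size=(n_scans, n_voxels))
Y[:, 0] += 1.5 * task                              # one "active" voxel

beta = np.linalg.lstsq(X, Y, rcond=None)[0]        # (2, n_voxels) estimates
resid = Y - X @ beta
sigma2 = (resid ** 2).sum(axis=0) / (n_scans - X.shape[1])
c = np.array([1.0, 0.0])                           # contrast: task effect
t = (c @ beta) / np.sqrt(sigma2 * (c @ np.linalg.inv(X.T @ X) @ c))
print(int(np.argmax(t)))  # the active voxel has the largest t-statistic
```

A full analysis would convolve the task regressor with a hemodynamic response function and correct the resulting statistical map for multiple comparisons, as in the group-analysis section above.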
When working with large cohorts of subjects, the normalization (registration) of individual subjects into a common reference frame is crucial. A body of work and tools exist to perform normalization based on anatomy (FSL, FreeSurfer, SPM). Alignment that takes spatial variability across subjects into account is a more recent line of work. Examples are the alignment of the cortex based on fMRI signal correlation, alignment based on the global functional connectivity structure in both task and resting-state data, and alignment based on stimulus-specific activation profiles of individual voxels.
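Normalization into a common reference frame ultimately comes down to applying an estimated spatial transform to coordinates or images. A minimal sketch, assuming a purely affine (scale-and-shift) transform; the matrix values are invented, and tools such as FSL or SPM estimate far richer nonlinear warps:

```python
import numpy as np

# Hypothetical 4x4 affine mapping subject-space coordinates (homogeneous)
# into a template space: per-axis scaling plus a translation, of the kind
# an affine registration tool would estimate.
affine = np.array([
    [1.1, 0.0, 0.0, -4.0],   # x: scale by 1.1, shift by -4 mm
    [0.0, 0.9, 0.0,  2.0],   # y: scale by 0.9, shift by +2 mm
    [0.0, 0.0, 1.0,  0.0],   # z: identity
    [0.0, 0.0, 0.0,  1.0],
])

subject_point = np.array([10.0, 20.0, 30.0, 1.0])  # homogeneous coordinates
template_point = affine @ subject_point            # map into template space
print(template_point[:3])
```

Resampling a whole volume applies the inverse of this transform to every template voxel and interpolates the subject image at the resulting locations.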

Software

Software for medical image computing is a complex combination of systems providing IO, visualization and interaction, user interface, data management and computation. Typically system architectures are layered to serve algorithm developers, application developers, and users. The bottom layers are often libraries and/or toolkits which provide base computational capabilities; while the top layers are specialized applications which address specific medical problems, diseases, or body systems.

Additional notes

Medical image computing is also related to the field of computer vision. An international society, the MICCAI Society, represents the field and organizes an annual conference and associated workshops. Proceedings for this conference are published by Springer in the Lecture Notes in Computer Science series. In 2000, N. Ayache and J. Duncan reviewed the state of the field.

Neuroanatomy

From Wikipedia, the free encyclopedia

Neuroanatomy is the study of the anatomy and organisation of the nervous system. Pictured here is a cross-section showing the gross anatomy of the human brain

Neuroanatomy is the study of the structure and organization of the nervous system. In contrast to animals with radial symmetry, whose nervous system consists of a distributed network of cells, animals with bilateral symmetry have segregated, defined nervous systems. Their neuroanatomy is therefore better understood. In vertebrates, the nervous system is segregated into the internal structure of the brain and spinal cord (together called the central nervous system, or CNS) and the routes of the nerves that connect to the rest of the body (known as the peripheral nervous system, or PNS). The delineation of distinct structures and regions of the nervous system has been critical in investigating how it works. For example, much of what neuroscientists have learned comes from observing how damage or "lesions" to specific brain areas affects behavior or other neural functions. 

History

J. M. Bourgery's anatomy of the brain, brainstem, and upper spinal column
 
The first known written record of a study of the anatomy of the human brain is the ancient Egyptian document the Edwin Smith Papyrus. The next major development in neuroanatomy came from the Greek Alcmaeon, who determined that the brain and not the heart ruled the body and that the senses were dependent on the brain.

After Alcmaeon’s findings, many scientists, philosophers, and physicians from around the world continued to contribute to the understanding of neuroanatomy, notably: Galen, Herophilus, Rhazes and Erasistratus. Herophilus and Erasistratus of Alexandria were perhaps the most influential Greek neuroscientists, with their studies involving dissection of the brain. For several hundred years afterward, with the cultural taboo of dissection, no major progress occurred in neuroscience. However, Pope Sixtus IV effectively revitalized the study of neuroanatomy by altering papal policy and allowing human dissection. This resulted in a boom of research in neuroanatomy by artists and scientists of the Renaissance.

In 1664, Thomas Willis, a physician and professor at Oxford University, coined the term neurology when he published his text Cerebri anatome, which is considered the foundation of neuroanatomy. The subsequent three hundred and fifty-odd years have produced a great deal of documentation and study of the neural systems.

Composition

At the tissue level, the nervous system is composed of neurons, glial cells, and extracellular matrix. Both neurons and glial cells come in many types. Neurons are the information-processing cells of the nervous system: they sense our environment, communicate with each other via electrical signals and chemicals called neurotransmitters across synapses, and produce our memories, thoughts and movements. Glial cells maintain homeostasis, produce myelin, and provide support and protection for the brain's neurons. Some glial cells (astrocytes) can even propagate intercellular calcium waves over long distances in response to stimulation, and release gliotransmitters in response to changes in calcium concentration. The extracellular matrix also provides support on the molecular level for the brain's cells. 

At the organ level, the nervous system is composed of brain regions, such as the hippocampus in mammals or the mushroom bodies of the fruit fly. These regions are often modular and serve a particular role within the general pathways of the nervous system. For example, the hippocampus is critical for forming memories. The nervous system also contains nerves, which are bundles of fibers that originate from the brain and spinal cord, and branch repeatedly to innervate every part of the body. Nerves are made primarily of the axons of neurons, along with a variety of membranes that wrap around and segregate them into nerve fascicles.

The vertebrate nervous system is divided into the central and peripheral nervous systems. The central nervous system (CNS) consists of the brain, retina, and spinal cord, while the peripheral nervous system (PNS) is made up of all the nerves outside of the CNS that connect it to the rest of the body. The PNS is further subdivided into the somatic and autonomic nervous systems. The somatic nervous system is made up of "afferent" neurons, which bring sensory information from the sense organs to the CNS, and "efferent" neurons, which carry motor instructions out to the muscles. The autonomic nervous system also has two subdivisions, the sympathetic and the parasympathetic, which are important for regulating the body's basic internal organ functions such as heartbeat, breathing, digestion, and salivation. Autonomic nerves, like somatic nerves, contain afferent and efferent fibers.

Orientation in neuroanatomy

Para-sagittal MRI of the head in a patient with benign familial macrocephaly.
 
In anatomy in general and neuroanatomy in particular, several sets of topographic terms are used to denote orientation and location, which are generally referred to the body or brain axis (see Anatomical terms of location). The pairs of terms used most commonly in neuroanatomy are:
  • Dorsal and ventral: dorsal loosely refers to the top or upper side, and ventral to the bottom or lower side. These descriptors originally referred to dorsum and ventrum – back and belly – of the body; the belly of most animals is oriented towards the ground; the erect posture of humans places our ventral aspect anteriorly, and the dorsal aspect becomes posterior. The case of the head and the brain is peculiar, since the belly does not properly extend into the head, unless we assume that the mouth represents an extended belly element. Therefore, in common use, those brain parts that lie close to the base of the cranium, and through it to the mouth cavity, are called ventral – i.e., at its bottom or lower side, as defined above – whereas dorsal parts are closer to the enclosing cranial vault.
  • Rostral and caudal: rostral refers to the front of the body (towards the nose, or rostrum in Latin), and caudal to the tail end of the body (towards the tail; cauda in Latin). In Man, the directional terms "superior" and "inferior" essentially refer to this rostrocaudal dimension, because our body axis is roughly oriented vertically in the erect position. However, all vertebrates develop a kink in the neural tube that is still detectable in the adult central nervous system, known as the cephalic flexure. The latter bends the rostral part of the CNS at a 90 degree angle relative to the caudal part, at the transition between the forebrain and the brainstem and spinal cord. This change in axial dimension is problematic when trying to describe relative position and sectioning planes in the brain.
  • Medial and lateral: medial refers to being close, or relatively closer, to the midline (the descriptor median means a position precisely at the midline), while lateral is the opposite (a position farther from the midline).
Note that such descriptors (dorsal/ventral, rostral/caudal; medial/lateral) are relative rather than absolute (e.g., a lateral structure may be said to lie medial to something else that lies even more laterally). 

Commonly used terms for planes of orientation or planes of section in neuroanatomy are "sagittal", "transverse" or "coronal", and "axial" or "horizontal". Again in this case, the situation is different for swimming, creeping or quadrupedal (prone) animals than for Man, or other erect species, due to the changed position of the axis.
  • A mid-sagittal plane divides the body and brain into left and right halves; sagittal sections in general are parallel to this median plane, moving along the medial-lateral dimension (see the image above). The term sagittal refers etymologically to the median suture between the right and left parietal bones of the cranium, known classically as the sagittal suture, because it looks roughly like an arrow by its confluence with other sutures (sagitta; arrow in Latin).
  • A section plane across any elongated form is in principle held to be transverse if it is orthogonal to the axis (e.g., a transverse section of a finger; if there is no length axis, there is no way to define such sections, or there are infinite possibilities). Therefore, transverse body sections in vertebrates are parallel to the ribs, which are orthogonal to the vertebral column, which represents the body axis in both animals and man. The brain also has an intrinsic longitudinal axis – that of the primordial elongated neural tube – which becomes largely vertical with the erect posture of Man, similarly to the body axis, except at its rostral end, as commented above. This explains why transverse spinal cord sections are roughly parallel to our ribs, or to the ground. However, this is only true for the spinal cord and the brainstem, since the forebrain end of the neural axis bends crook-like during early morphogenesis into the hypothalamus, where it ends; the orientation of true transverse sections accordingly changes, and is no longer parallel to the ribs and ground, but perpendicular to them; lack of awareness of this morphologic brain peculiarity (present in all vertebrate brains without exception) has caused and still causes erroneous thinking about forebrain parts. Acknowledging the singularity of rostral transverse sections, tradition has introduced a different descriptor for them, namely coronal sections. Coronal sections divide the forebrain from rostral (front) to caudal (back), forming a series orthogonal (transverse) to the local bent axis. The concept cannot be applied meaningfully to the brainstem and spinal cord, since there the coronal sections become horizontal to the axial dimension, being parallel to the axis.
  • A coronal plane across the head and brain is now conceived to be parallel to the face (the etymology refers to corona or crown; the plane in which a king's crown sits on his head is not exactly parallel to the face, and exportation of the concept to less frontally endowed animals than us is obviously even more problematic, but there is an implicit reference to the coronal suture of the cranium, which forms between the frontal and temporal/parietal bones, giving a sort of diadem configuration that is roughly parallel to the face). Coronal section planes thus essentially refer only to the head and brain, where a diadem makes sense, and not to the neck and body below.
  • Horizontal sections by definition are aligned with the horizon. In swimming, creeping and quadrupedal animals the body axis itself is horizontal, and, thus, horizontal sections run along the length of the spinal cord, separating ventral from dorsal parts. Horizontal sections are orthogonal to both transverse and sagittal sections. Due to the axial bend in the brain (forebrain), true horizontal sections in that region are orthogonal to coronal (transverse) sections (as is the horizon relative to the face).
According to these considerations, the three directions of space are represented precisely by the sagittal, transverse and horizontal planes, whereas coronal sections can be transverse, oblique or horizontal, depending on how they relate to the brain axis and its incurvations.

Tools

Modern developments in neuroanatomy are directly correlated to the technologies used to perform research. Therefore, it is necessary to discuss the various tools that are available. Many of the histological techniques used to study other tissues can be applied to the nervous system as well. However, there are some techniques that have been developed especially for the study of neuroanatomy.

Cell staining

In biological systems, staining is a technique used to enhance the contrast of particular features in microscopic images. 

Nissl staining uses aniline basic dyes to intensely stain the acidic polyribosomes in the rough endoplasmic reticulum, which is abundant in neurons. This allows researchers to distinguish between different cell types (such as neurons and glia), and between neuronal shapes and sizes, in various regions of the nervous system (its cytoarchitecture).

The classic Golgi stain uses potassium dichromate and silver nitrate to selectively fill a few neural cells with a silver chromate precipitate (neurons or glia, though in principle any cell can react similarly). This so-called silver chromate impregnation procedure stains the cell bodies and neurites of some neurons (dendrites and axons) entirely or partially in brown and black, allowing researchers to trace their paths up to their thinnest terminal branches in a slice of nervous tissue, thanks to the transparency resulting from the lack of staining in the majority of surrounding cells. More recently, Golgi-impregnated material has been adapted for electron-microscopic visualization of the unstained elements surrounding the stained processes and cell bodies, thus adding further resolving power.

Histochemistry

Histochemistry uses knowledge about the biochemical reaction properties of the chemical constituents of the brain (notably including enzymes) to apply selective methods of reaction to visualize where they occur in the brain, along with any functional or pathological changes. This applies importantly to molecules related to neurotransmitter production and metabolism, but it applies likewise in many other directions; this approach is known as chemoarchitecture, or chemical neuroanatomy.

Immunocytochemistry is a special case of histochemistry that uses selective antibodies against a variety of chemical epitopes of the nervous system to selectively stain particular cell types, axonal fascicles, neuropiles, glial processes or blood vessels, or specific intracytoplasmic or intranuclear proteins and other immunogenetic molecules, e.g., neurotransmitters. Immunoreacted transcription factor proteins reveal genomic readout in terms of translated protein. This immensely increases the capacity of researchers to distinguish between different cell types (such as neurons and glia) in various regions of the nervous system. 

In situ hybridization uses synthetic RNA probes that attach (hybridize) selectively to complementary mRNA transcripts of DNA exons in the cytoplasm, to visualize genomic readout, that is, distinguish active gene expression, in terms of mRNA rather than protein. This allows identification histologically (in situ) of the cells involved in the production of genetically-coded molecules, which often represent differentiation or functional traits, as well as the molecular boundaries separating distinct brain domains or cell populations.

Genetically encoded markers

By expressing variable amounts of red, green, and blue fluorescent proteins in the brain, the so-called "brainbow" mutant mouse allows the combinatorial visualization of many different colors in neurons. This tags neurons with enough unique colors that they can often be distinguished from their neighbors with fluorescence microscopy, enabling researchers to map the local connections or mutual arrangement (tiling) between neurons. 

Optogenetics uses transgenic constitutive and site-specific expression (normally in mice) of light-sensitive proteins (such as channelrhodopsins) that can be activated selectively by illumination with a light beam. This allows researchers to study axonal connectivity in the nervous system in a very discriminative way.

Non-invasive brain imaging

Magnetic resonance imaging has been used extensively to investigate brain structure and function non-invasively in healthy human subjects. An important example is diffusion tensor imaging, which relies on the restricted diffusion of water in tissue in order to produce axon images. In particular, water moves more quickly along the direction aligned with the axons, permitting the inference of their structure.
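The inference mentioned above can be made concrete: diagonalizing a voxel's diffusion tensor yields the principal diffusion direction (the leading eigenvector) and scalar measures such as fractional anisotropy (FA). A minimal sketch; the tensor values are invented for illustration, while the FA formula is the standard one:

```python
import numpy as np

# Hypothetical diffusion tensor for a voxel in which water diffuses
# mainly along x, as it would inside a coherent fiber bundle.
D = np.array([
    [1.7, 0.0, 0.0],
    [0.0, 0.3, 0.0],
    [0.0, 0.0, 0.3],
])

evals, evecs = np.linalg.eigh(D)   # eigenvalues in ascending order
l1, l2, l3 = evals[::-1]           # reorder as descending
md = evals.mean()                  # mean diffusivity

# Fractional anisotropy: 0 = isotropic, approaching 1 = strongly directional.
fa = np.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
             / (l1**2 + l2**2 + l3**2))

# Eigenvector of the largest eigenvalue = principal diffusion direction,
# i.e., the locally inferred axon orientation used in tractography.
principal = evecs[:, -1]
print(fa, np.abs(principal))
```

Tractography then chains these per-voxel principal directions into streamlines that approximate white-matter pathways.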

Viral-based methods

Certain viruses can replicate in brain cells and cross synapses, so viruses modified to express markers (such as fluorescent proteins) can be used to trace connectivity between brain regions across multiple synapses. Two tracer viruses that replicate and spread transneuronally/transsynaptically are herpes simplex virus type 1 (HSV-1) and the rhabdoviruses. Herpes simplex virus was used to trace the connections between the brain and the stomach, in order to examine the brain areas involved in viscero-sensory processing. Another study injected herpes simplex virus into the eye, allowing visualization of the optical pathway from the retina into the visual system. An example of a tracer virus that replicates from the synapse to the soma is the pseudorabies virus. By using pseudorabies viruses with different fluorescent reporters, dual-infection models can parse complex synaptic architecture.

Dye-based methods

Axonal transport methods use a variety of dyes (horseradish peroxidase variants, fluorescent or radioactive markers, lectins, dextrans) that are more or less avidly absorbed by neurons or their processes. These molecules are selectively transported anterogradely (from soma to axon terminals) or retrogradely (from axon terminals to soma), thus providing evidence of primary and collateral connections in the brain. These 'physiologic' methods (because properties of living, unlesioned cells are used) can be combined with other procedures, and have essentially superseded the earlier procedures studying degeneration of lesioned neurons or axons. Detailed synaptic connections can be determined by correlative electron microscopy.

Connectomics

Serial section electron microscopy has been extensively developed for use in studying nervous systems. For example, the first application of serial block-face scanning electron microscopy was on rodent cortical tissue. Circuit reconstruction from the data produced by this high-throughput method is challenging, and the citizen science game EyeWire has been developed to aid research in that area.

Computational neuroanatomy

Computational neuroanatomy is a field that uses various imaging modalities and computational techniques to model and quantify the spatiotemporal dynamics of neuroanatomical structures in both normal and clinical populations.

Model systems

Aside from the human brain, there are many other animals whose brains and nervous systems have received extensive study as model systems, including mice, zebrafish, fruit flies, and a species of roundworm called C. elegans. Each of these has its own advantages and disadvantages as a model system. For example, the C. elegans nervous system is extremely stereotyped from one individual worm to the next, which has allowed researchers using electron microscopy to map the paths and connections of all of the approximately 300 neurons in this species. The fruit fly is widely studied in part because its genetics is very well understood and easily manipulated. The mouse is used because, as a mammal, its brain is more similar in structure to our own (e.g., it has a six-layered cortex), yet its genes can be easily modified and its reproductive cycle is relatively fast.

Caenorhabditis elegans

A rod-shaped body contains a digestive system running from the mouth at one end to the anus at the other. Alongside the digestive system is a nerve cord with a brain at the end, near to the mouth.
Nervous system of a generic bilaterian animal, in the form of a nerve cord with segmental enlargements, and a "brain" at the front
 
The brain is small and simple in some species, such as the nematode worm, where the body plan is quite simple: a tube with a hollow gut cavity running from the mouth to the anus, and a nerve cord with an enlargement (a ganglion) for each body segment, with an especially large ganglion at the front, called the brain. The nematode Caenorhabditis elegans has been studied because of its importance in genetics. In the early 1970s, Sydney Brenner chose it as a model system for studying the way that genes control development, including neuronal development. One advantage of working with this worm is that the nervous system of the hermaphrodite contains exactly 302 neurons, always in the same places, making identical synaptic connections in every worm. Brenner's team sliced worms into thousands of ultrathin sections and photographed every section under an electron microscope, then visually matched fibers from section to section, to map out every neuron and synapse in the entire body, to give a complete connectome of the nematode. Nothing approaching this level of detail is available for any other organism, and the information has been used to enable a multitude of studies that would not have been possible without it.
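A connectome of the kind Brenner's team assembled is naturally represented as a directed, weighted graph: neurons as nodes, synapses as weighted edges. A toy sketch in Python; the neuron names below are real C. elegans identifiers, but the connections and synapse counts are invented for illustration:

```python
# Each key is a (pre-synaptic, post-synaptic) neuron pair; each value is
# an invented synapse count between that pair.
connectome = {
    ("AVAL", "VA08"): 5,
    ("AVAL", "DA02"): 3,
    ("AVBR", "VB05"): 4,
    ("ASHL", "AVAL"): 2,
}

# Summary statistics of the kind connectomics studies compute at full scale:
# total synapse count and the number of distinct targets per neuron.
total = sum(connectome.values())
out_degree = {}
for (pre, _post), n in connectome.items():
    out_degree[pre] = out_degree.get(pre, 0) + 1

print(total, out_degree["AVAL"])  # 14 2
```

The real hermaphrodite connectome has 302 neurons and thousands of synapses, but the same graph representation scales directly.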

Drosophila melanogaster

Drosophila melanogaster is a popular experimental animal because it is easily cultured en masse from the wild, has a short generation time, and mutant animals are readily obtainable.

Arthropods have a central brain with three divisions and large optic lobes behind each eye for visual processing. The brain of a fruit fly contains several million synapses, compared to more than 100 trillion in the human brain. Approximately two-thirds of the Drosophila brain is dedicated to visual processing.

Thomas Hunt Morgan started to work with Drosophila in 1906, and this work earned him the 1933 Nobel Prize in Medicine for identifying chromosomes as the vector of inheritance for genes. Because of the large array of tools available for studying Drosophila genetics, flies have been a natural subject for studying the role of genes in the nervous system. The genome was sequenced and published in 2000. About 75% of known human disease genes have a recognizable match in the genome of fruit flies. Drosophila is used as a genetic model for several human neurological diseases, including the neurodegenerative disorders Parkinson's, Huntington's, spinocerebellar ataxia and Alzheimer's disease. In spite of the large evolutionary distance between insects and mammals, many basic aspects of Drosophila neurogenetics have turned out to be relevant to humans. For instance, the first biological clock genes were identified by examining Drosophila mutants that showed disrupted daily activity cycles.

Immunohistochemistry

From Wikipedia, the free encyclopedia

Mouse-brain slice stained by Immunohistochemistry.

Immunohistochemistry (IHC) is the most common application of immunostaining. It involves the process of selectively identifying antigens (proteins) in cells of a tissue section by exploiting the principle of antibodies binding specifically to antigens in biological tissues. IHC takes its name from the roots "immuno", in reference to antibodies used in the procedure, and "histo", meaning tissue (compare to immunocytochemistry). Albert Coons conceptualized and first implemented the procedure in 1941.

Immunohistochemical staining is widely used in the diagnosis of abnormal cells such as those found in cancerous tumors. Specific molecular markers are characteristic of particular cellular events such as proliferation or cell death (apoptosis). Immunohistochemistry is also widely used in basic research to understand the distribution and localization of biomarkers and differentially expressed proteins in different parts of a biological tissue. 

Visualising an antibody-antigen interaction can be accomplished in a number of ways. In the most common instance, an antibody is conjugated to an enzyme, such as peroxidase, that can catalyse a colour-producing reaction (see immunoperoxidase staining). Alternatively, the antibody can also be tagged to a fluorophore, such as fluorescein or rhodamine (see immunofluorescence).

Sample preparation

Preparation of the sample is critical to maintain cell morphology, tissue architecture and the antigenicity of target epitopes. This requires proper tissue collection, fixation and sectioning. A solution of paraformaldehyde is often used to fix tissue, but other methods may be used.

Preparing tissue slices

The tissue may then be sliced or used whole, depending upon the purpose of the experiment or the tissue itself. Before sectioning, the tissue sample may be embedded in a medium, like paraffin wax or cryomedia. Sections can be sliced on a variety of instruments, most commonly a microtome, cryostat, or vibratome. Specimens are typically sliced at a thickness of 3–5 μm. The slices are then mounted on slides, dehydrated using alcohol washes of increasing concentrations (e.g., 50%, 75%, 90%, 95%, 100%), and cleared using a clearing agent such as xylene before being imaged under a microscope.

Depending on the method of fixation and tissue preservation, the sample may require additional steps to make the epitopes available for antibody binding, including deparaffinization and antigen retrieval. For formalin-fixed, paraffin-embedded tissues, antigen retrieval is often necessary and involves pre-treating the sections with heat or protease. These steps may make the difference between the target antigens staining or not staining.

Reducing non-specific immuno-staining

Depending on the tissue type and the method of antigen detection, endogenous biotin or enzymes may need to be blocked or quenched, respectively, prior to antibody staining. Although antibodies show preferential avidity for specific epitopes, they may partially or weakly bind to sites on nonspecific proteins (also called reactive sites) that are similar to the cognate binding sites on the target antigen. A great amount of non-specific binding causes high background staining which will mask the detection of the target antigen. To reduce background staining in IHC, ICC and other immunostaining methods, samples are incubated with a buffer that blocks the reactive sites to which the primary or secondary antibodies may otherwise bind. Common blocking buffers include normal serum, non-fat dry milk, BSA, or gelatin. Commercial blocking buffers with proprietary formulations are available for greater efficiency. Methods to eliminate background staining include dilution of the primary or secondary antibodies, changing the time or temperature of incubation, and using a different detection system or different primary antibody. Quality control should as a minimum include a tissue known to express the antigen as a positive control and negative controls of tissue known not to express the antigen, as well as the test tissue probed in the same way with omission of the primary antibody (or better, absorption of the primary antibody).

Sample labeling

Antibody types

The antibodies used for specific detection can be polyclonal or monoclonal. Polyclonal antibodies are made by injecting animals with the protein of interest, or a peptide fragment and, after a secondary immune response is stimulated, isolating antibodies from whole serum. Thus, polyclonal antibodies are a heterogeneous mix of antibodies that recognize several epitopes. Monoclonal antibodies are made by injecting the animal and then taking a specific sample of immune tissue, isolating a parent cell, and using the resulting immortalized line to create antibodies. This causes the antibodies to show specificity for a single epitope.

For immunohistochemical detection strategies, antibodies are classified as primary or secondary reagents. Primary antibodies are raised against an antigen of interest and are typically unconjugated (unlabeled), while secondary antibodies are raised against immunoglobulins of the primary antibody species. The secondary antibody is usually conjugated to a linker molecule, such as biotin, that then recruits reporter molecules, or the secondary antibody itself is directly bound to the reporter molecule.

IHC reporters

Reporter molecules vary based on the nature of the detection method, the most popular being chromogenic and fluorescence detection mediated by an enzyme or a fluorophore, respectively. With chromogenic reporters, an enzyme label reacts with a substrate to yield an intensely colored product that can be analyzed with an ordinary light microscope. While the list of enzyme substrates is extensive, alkaline phosphatase (AP) and horseradish peroxidase (HRP) are the two enzymes used most extensively as labels for protein detection. An array of chromogenic, fluorogenic and chemiluminescent substrates is available for use with either enzyme, including DAB or BCIP/NBT, which produce a brown or purple staining, respectively, wherever the enzymes are bound. Reaction with DAB can be enhanced using nickel, producing a deep purple/black staining. 

Fluorescent reporters are small, organic molecules used for IHC detection and traditionally include FITC, TRITC and AMCA, while commercial derivatives, including the Alexa Fluors and Dylight Fluors, show similar enhanced performance but vary in price. For chromogenic and fluorescent detection methods, densitometric analysis of the signal can provide semi- and fully quantitative data, respectively, to correlate the level of reporter signal to the level of protein expression or localization. 
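Densitometric analysis, in its simplest form, converts transmitted light intensity into optical density via OD = -log10(I / I0), where I0 is the intensity through unstained background. A minimal sketch with invented pixel values; real quantification would first isolate the chromogen signal (e.g., by color deconvolution):

```python
import numpy as np

# Invented intensities: a blank (unstained) reference and a small
# stained region read from a brightfield image (8-bit scale).
i0 = 255.0
region = np.array([[128.0, 64.0],
                   [64.0, 32.0]])

# Optical density per pixel; darker staining -> higher OD, which is
# the quantity correlated with the amount of bound reporter.
od = -np.log10(region / i0)
print(od.mean())
```

Averaging OD over a region of interest gives the semi-quantitative readout described above; calibrating it against known standards is what makes the measurement fully quantitative.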

The direct method of immunohistochemical staining uses one labelled antibody, which binds directly to the antigen being stained for.
 
The indirect method of immunohistochemical staining uses one antibody against the antigen being probed for, and a second, labelled, antibody against the first.

Target antigen detection methods

The direct method is a one-step staining method involving a labeled antibody (e.g., FITC-conjugated antiserum) reacting directly with the antigen in tissue sections. While this technique utilizes only one antibody and is therefore simple and rapid, its sensitivity is lower because there is little signal amplification, in contrast to indirect approaches, so it is used less frequently than its multi-step counterpart.

The indirect method involves an unlabeled primary antibody (first layer) that binds to the target antigen in the tissue and a labeled secondary antibody (second layer) that reacts with the primary antibody. As mentioned above, the secondary antibody must be raised against the IgG of the animal species in which the primary antibody has been raised. This method is more sensitive than direct detection strategies because of signal amplification due to the binding of several secondary antibodies to each primary antibody if the secondary antibody is conjugated to the fluorescent or enzyme reporter.

Further amplification can be achieved if the secondary antibody is conjugated to several biotin molecules, which can recruit complexes of avidin-, streptavidin- or NeutrAvidin protein-bound enzyme. The difference between these three biotin-binding proteins is their individual binding affinity to endogenous tissue targets leading to nonspecific binding and high background; the ranking of these proteins based on their nonspecific binding affinities, from highest to lowest, is: 1) avidin, 2) streptavidin and 3) NeutrAvidin protein. 

The indirect method, aside from its greater sensitivity, also has the advantage that only a relatively small number of standard conjugated (labeled) secondary antibodies needs to be generated. For example, a labeled secondary antibody raised against rabbit IgG, which can be purchased "off the shelf", is useful with any primary antibody raised in rabbit. With the direct method, it would be necessary to label each primary antibody for every antigen of interest.

Counterstains

After immunohistochemical staining of the target antigen, a second stain is often applied to provide contrast that helps the primary stain stand out. Many of these stains are specific to particular classes of biomolecules, while others stain the whole cell. Both chromogenic and fluorescent dyes are available for IHC, providing a vast array of reagents to fit every experimental design; commonly used counterstains include hematoxylin, Hoechst stain and DAPI.

Troubleshooting

In immunohistochemical techniques, several steps precede the final staining of the tissue antigen, and each can cause problems including strong background staining, weak target antigen staining, and autofluorescence. Endogenous biotin, endogenous reporter enzymes, and primary/secondary antibody cross-reactivity are common causes of strong background staining, while weak staining may be caused by poor enzyme activity or insufficient primary antibody potency. Autofluorescence may be due to the nature of the tissue or to the fixation method. These aspects of IHC tissue preparation and antibody staining must be systematically addressed to identify and overcome staining issues.

Diagnostic IHC markers

Immunohistochemical staining of normal kidney with CD10.
 
IHC is an excellent detection technique and has the tremendous advantage of being able to show exactly where a given protein is located within the tissue examined. This has made it a widely used technique in the neurosciences, enabling researchers to examine protein expression within specific brain structures. Its major disadvantage is that, unlike immunoblotting techniques, where staining is checked against a molecular weight ladder, it is impossible to show in IHC that the staining corresponds to the protein of interest. For this reason, primary antibodies must be well validated in a western blot or similar procedure. The technique is even more widely used in diagnostic surgical pathology for immunophenotyping tumors (e.g. immunostaining for E-cadherin to differentiate between DCIS (ductal carcinoma in situ, which stains positive) and LCIS (lobular carcinoma in situ, which does not)). More recently, immunohistochemical techniques have proven useful in the differential diagnosis of multiple forms of salivary gland, head and neck carcinomas.

The diversity of IHC markers used in diagnostic surgical pathology is substantial. Many clinical laboratories in tertiary hospitals maintain menus of over 200 antibodies used as diagnostic, prognostic and predictive biomarkers.

Directing therapy

A variety of molecular pathways are altered in cancer and some of the alterations can be targeted in cancer therapy. Immunohistochemistry can be used to assess which tumors are likely to respond to therapy, by detecting the presence or elevated levels of the molecular target.
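Such IHC readouts are commonly reduced to a semi-quantitative score before a treatment decision is made. One widely used scheme (not specific to this article) is the H-score, which weights the percentage of cells at each staining intensity level (0 to 3); the example percentages below are hypothetical:

```python
def h_score(percent_by_intensity):
    """H-score: sum of (intensity level x percent of cells at that level)
    over intensity levels 0-3. The result ranges from 0 to 300."""
    assert abs(sum(percent_by_intensity.values()) - 100) < 1e-6
    return sum(level * pct for level, pct in percent_by_intensity.items())

# Example tumor: 10% unstained, 20% weak, 30% moderate, 40% strong.
score = h_score({0: 10, 1: 20, 2: 30, 3: 40})
print(score)  # 0 + 20 + 60 + 120 = 200
```

A laboratory would then compare such a score against a validated cutoff to call the tumor positive or negative for the target.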

Chemical inhibitors

Tumor biology allows for a number of potential intracellular targets. Many tumors are hormone dependent, and the presence of hormone receptors can be used to determine whether a tumor is potentially responsive to antihormonal therapy. One of the first such therapies was the antiestrogen tamoxifen, used to treat breast cancer; hormone receptors can be detected by immunohistochemistry. Imatinib, an intracellular tyrosine kinase inhibitor, was developed to treat chronic myelogenous leukemia, a disease characterized by the formation of a specific abnormal tyrosine kinase. Imatinib has also proven effective in tumors that express other tyrosine kinases, most notably KIT. Most gastrointestinal stromal tumors express KIT, which can be detected by immunohistochemistry.

Monoclonal antibodies

Many proteins shown by immunohistochemistry to be highly upregulated in pathological states are potential targets for therapies utilizing monoclonal antibodies. Because of their size, monoclonal antibodies are directed against cell-surface targets. Among the overexpressed targets are members of the epidermal growth factor receptor (EGFR) family, transmembrane proteins with an extracellular receptor domain regulating an intracellular tyrosine kinase. Of these, HER2/neu (also known as Erb-B2) was the first to be developed as a therapeutic target. The molecule is highly expressed in a variety of cancer cell types, most notably breast cancer, and antibodies against HER2/neu have been FDA-approved for clinical treatment of cancer under the drug name Herceptin (trastuzumab). Commercially available immunohistochemical tests include the Dako HercepTest, Leica Biosystems Oracle and Ventana Pathway.

Similarly, EGFR (HER-1) is overexpressed in a variety of cancers, including those of the head and neck and of the colon. Immunohistochemistry is used to identify patients who may benefit from therapeutic antibodies such as Erbitux (cetuximab). Commercial systems to detect EGFR by immunohistochemistry include the Dako pharmDx.

Mapping protein expression

Immunohistochemistry can also be used for more general protein profiling, provided antibodies validated for immunohistochemistry are available. The Human Protein Atlas displays a map of protein expression in normal human tissues and organs. The combination of immunohistochemistry and tissue microarrays provides protein expression patterns across a large number of different tissue types. Immunohistochemistry is also used for protein profiling in the most common forms of human cancer.

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...