
Monday, April 26, 2021

Monoclonal antibody

From Wikipedia, the free encyclopedia
 
A general representation of the method used to produce monoclonal antibodies.

A monoclonal antibody (mAb or moAb) is an antibody made by cloning a unique white blood cell. All subsequent antibodies derived this way trace back to a unique parent cell.

Monoclonal antibodies can have monovalent affinity, binding only to the same epitope (the part of an antigen that is recognized by the antibody). In contrast, polyclonal antibodies bind to multiple epitopes and are usually made by several different antibody-secreting plasma cell lineages. Bispecific monoclonal antibodies can also be engineered by increasing the therapeutic targets of one monoclonal antibody to two epitopes.

It is possible to produce monoclonal antibodies that specifically bind to virtually any suitable substance; they can then serve to detect or purify it. This capability has become an important tool in biochemistry, molecular biology, and medicine.

History

In the 1900s, immunologist Paul Ehrlich proposed the idea of a Zauberkugel – "magic bullet", conceived of as a compound which selectively targeted a disease-causing organism, and could deliver a toxin for that organism. This underpinned the concept of monoclonal antibodies and monoclonal drug conjugates. Ehrlich and Élie Metchnikoff received the 1908 Nobel Prize for Physiology or Medicine for providing the theoretical basis for immunology.

By the 1970s, lymphocytes producing a single antibody were known, in the form of multiple myeloma – a cancer affecting B-cells. These abnormal antibodies or paraproteins were used to study the structure of antibodies, but it was not yet possible to produce identical antibodies specific to a given antigen. In 1973, Jerrold Schwaber described the production of monoclonal antibodies using human–mouse hybrid cells. This work remains widely cited among those using human-derived hybridomas. In 1975, Georges Köhler and César Milstein succeeded in making fusions of myeloma cell lines with B cells to create hybridomas that could produce antibodies specific to known antigens and that were immortalized. They and Niels Kaj Jerne shared the Nobel Prize in Physiology or Medicine in 1984 for the discovery.

In 1988, Greg Winter and his team pioneered the techniques to humanize monoclonal antibodies, eliminating the reactions that many monoclonal antibodies caused in some patients. By the 1990s research was making progress in using monoclonal antibodies therapeutically, and in 2018, James P. Allison and Tasuku Honjo received the Nobel Prize in Physiology or Medicine for their discovery of cancer therapy by inhibition of negative immune regulation, using monoclonal antibodies that prevent inhibitory linkages.

Production

Researchers looking at slides of cultures of cells that make monoclonal antibodies. These are grown in a lab and the researchers are analyzing the products to select the most promising of them.
 
Monoclonal antibodies can be grown in unlimited quantities in the bottles shown in this picture.
 
Technician hand-filling wells with a liquid for a research test. This test involves preparation of cultures in which hybrids are grown in large quantities to produce desired antibody. This is effected by fusing a myeloma cell and a mouse lymphocyte to form a hybrid cell (hybridoma).
 
Lab technician bathing prepared slides in a solution. This technician prepares slides of monoclonal antibodies for researchers. The cells shown are labeling human breast cancer.

Hybridoma development

Much of the work behind production of monoclonal antibodies is rooted in the production of hybridomas, which involves identifying antigen-specific plasma/plasmablast cells (ASPC) that produce antibodies specific to an antigen of interest and fusing these cells with myeloma cells. Rabbit B-cells can be used to form a rabbit hybridoma. Polyethylene glycol is used to fuse adjacent plasma membranes, but the success rate is low, so a selective medium in which only fused cells can grow is used. This is possible because myeloma cells have lost the ability to synthesize hypoxanthine-guanine-phosphoribosyl transferase (HGPRT), an enzyme necessary for the salvage synthesis of nucleic acids. The absence of HGPRT is not a problem for these cells unless the de novo purine synthesis pathway is also disrupted. Exposing cells to aminopterin (a folic acid analogue, which inhibits dihydrofolate reductase, DHFR), makes them unable to use the de novo pathway and become fully auxotrophic for nucleic acids, thus requiring supplementation to survive.

The selective culture medium is called HAT medium because it contains hypoxanthine, aminopterin and thymidine. This medium is selective for fused (hybridoma) cells. Unfused myeloma cells cannot grow because they lack HGPRT and thus cannot replicate their DNA. Unfused spleen cells cannot grow indefinitely because of their limited life span. Only fused hybrid cells, referred to as hybridomas, are able to grow indefinitely in the medium, because the spleen cell partner supplies HGPRT and the myeloma partner has traits that make it immortal (similar to a cancer cell).

This mixture of cells is then diluted and clones are grown from single parent cells in microtitre wells. The antibodies secreted by the different clones are then assayed for their ability to bind to the antigen (with a test such as ELISA, an antigen microarray assay or an immuno-dot blot). The most productive and stable clone is then selected for future use.

The hybridomas can be grown indefinitely in a suitable cell culture medium. They can also be injected into mice (in the peritoneal cavity, surrounding the gut). There, they produce tumors secreting an antibody-rich fluid called ascites fluid.

The medium must be enriched during in vitro selection to further favour hybridoma growth. This can be achieved by the use of a layer of feeder fibrocyte cells or supplement medium such as briclone. Culture media conditioned by macrophages can also be used. Production in cell culture is usually preferred, as the ascites technique is painful to the animal. Where alternate techniques exist, ascites is considered unethical.

Novel mAb development technology

Several monoclonal antibody technologies have been developed recently, such as phage display, single B cell culture, single cell amplification from various B cell populations and single plasma cell interrogation technologies. Unlike traditional hybridoma technology, the newer technologies use molecular biology techniques to amplify the heavy and light chains of the antibody genes by PCR and produce the antibodies in either bacterial or mammalian systems with recombinant technology. One advantage of the new technologies is that they are applicable to multiple animals, such as rabbit, llama, chicken and other common experimental animals in the laboratory.

Purification

After obtaining either a media sample of cultured hybridomas or a sample of ascites fluid, the desired antibodies must be extracted. Cell culture sample contaminants consist primarily of media components such as growth factors, hormones and transferrins. In contrast, the in vivo sample is likely to have host antibodies, proteases, nucleases, nucleic acids and viruses. In both cases, other secretions by the hybridomas such as cytokines may be present. There may also be bacterial contamination and, as a result, endotoxins that are secreted by the bacteria. Depending on the complexity of the media required in cell culture and thus the contaminants, one or the other method (in vivo or in vitro) may be preferable.

The sample is first conditioned, or prepared for purification. Cells, cell debris, lipids, and clotted material are first removed, typically by centrifugation followed by filtration with a 0.45 µm filter. These large particles can cause a phenomenon called membrane fouling in later purification steps. In addition, the concentration of product in the sample may not be sufficient, especially in cases where the desired antibody is produced by a low-secreting cell line. The sample is therefore concentrated by ultrafiltration or dialysis.

Most of the charged impurities are usually anions such as nucleic acids and endotoxins. These can be separated by ion exchange chromatography. Either cation exchange chromatography is used at a low enough pH that the desired antibody binds to the column while anions flow through, or anion exchange chromatography is used at a high enough pH that the desired antibody flows through the column while anions bind to it. Various proteins can also be separated along with the anions based on their isoelectric point (pI). In proteins, the isoelectric point (pI) is defined as the pH at which a protein has no net charge. When the pH > pI, a protein has a net negative charge, and when the pH < pI, a protein has a net positive charge. For example, albumin has a pI of 4.8, which is significantly lower than that of most monoclonal antibodies, which have a pI of 6.1. Thus, at a pH between 4.8 and 6.1, the average charge of albumin molecules is likely to be more negative, while mAb molecules are positively charged, and hence it is possible to separate them. Transferrin, on the other hand, has a pI of 5.9, so it cannot be easily separated by this method. A difference in pI of at least 1 is necessary for a good separation.
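For illustration, the sign-of-charge rule described above can be expressed in a few lines of code. This is a minimal Python sketch, not a chromatography model: it only compares an arbitrarily chosen operating pH against the pI values quoted in the text.

```python
# Illustrative sketch of the pH/pI rule described above: a protein carries a net
# negative charge when pH > pI and a net positive charge when pH < pI.
# The pI values are the ones quoted in the text; real charge depends on the full
# titration curve, so this is only a sign-level approximation.

def net_charge_sign(pH: float, pI: float) -> str:
    """Return the sign of a protein's net charge at a given pH."""
    if pH > pI:
        return "negative"
    if pH < pI:
        return "positive"
    return "neutral"

proteins = {"albumin": 4.8, "monoclonal antibody": 6.1, "transferrin": 5.9}
operating_pH = 5.5  # a pH chosen between the pI of albumin and the mAb

for name, pI in proteins.items():
    print(f"{name:>22}: pI {pI:.1f} -> net {net_charge_sign(operating_pH, pI)} at pH {operating_pH}")

# At pH 5.5 albumin is negative while the antibody is positive, so a cation
# exchange column would retain the antibody and let albumin flow through.
# Transferrin (pI 5.9) differs from the mAb pI by only ~0.2 units, which is why
# the text notes it cannot be separated well this way.
```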

Transferrin can instead be removed by size exclusion chromatography. This method is one of the more reliable chromatography techniques: protein charge and affinity vary with pH as molecules are protonated and deprotonated, while size stays relatively constant, so separation by size is comparatively robust. Nonetheless, it has drawbacks such as low resolution, low capacity and long elution times.

A much quicker, single-step method of separation is protein A/G affinity chromatography. The antibody selectively binds to protein A/G, so a high level of purity (generally >80%) is obtained. However, this method may be problematic for antibodies that are easily damaged, as harsh conditions are generally used. A low pH can break the bonds to remove the antibody from the column. In addition to possibly affecting the product, low pH can cause protein A/G itself to leak off the column and appear in the eluted sample. Gentle elution buffer systems that employ high salt concentrations are available to avoid exposing sensitive antibodies to low pH. Cost is also an important consideration with this method because immobilized protein A/G is a more expensive resin.

To achieve maximum purity in a single step, affinity purification can be performed, using the antigen to provide specificity for the antibody. In this method, the antigen used to generate the antibody is covalently attached to an agarose support. If the antigen is a peptide, it is commonly synthesized with a terminal cysteine, which allows selective attachment to a carrier protein, such as KLH during development and to support purification. The antibody-containing medium is then incubated with the immobilized antigen, either in batch or as the antibody is passed through a column, where it selectively binds and can be retained while impurities are washed away. An elution with a low pH buffer or a more gentle, high salt elution buffer is then used to recover purified antibody from the support.

Antibody heterogeneity

Product heterogeneity is common in monoclonal antibodies and other recombinant biological products and is typically introduced either upstream during expression or downstream during manufacturing.

These variants are typically aggregates, deamidation products, glycosylation variants, oxidized amino acid side chains, as well as amino and carboxyl terminal amino acid additions. These seemingly minute structural changes can affect preclinical stability and process optimization as well as therapeutic product potency, bioavailability and immunogenicity. The generally accepted purification method of process streams for monoclonal antibodies includes capture of the product target with protein A, elution, acidification to inactivate potential mammalian viruses, followed by ion chromatography, first with anion beads and then with cation beads.

Displacement chromatography has been used to identify and characterize these often unseen variants in quantities that are suitable for subsequent preclinical evaluation regimens such as animal pharmacokinetic studies. Knowledge gained during the preclinical development phase is critical for enhanced product quality understanding and provides a basis for risk management and increased regulatory flexibility. The Food and Drug Administration's recent Quality by Design initiative attempts to provide guidance on development and to facilitate the design of products and processes that maximize efficacy and safety while enhancing product manufacturability.

Recombinant

The production of recombinant monoclonal antibodies involves repertoire cloning, CRISPR/Cas9, or phage display/yeast display technologies. Recombinant antibody engineering involves antibody production by the use of viruses or yeast, rather than mice. These techniques rely on rapid cloning of immunoglobulin gene segments to create libraries of antibodies with slightly different amino acid sequences from which antibodies with desired specificities can be selected. The phage antibody libraries are a variant of phage antigen libraries. These techniques can be used to enhance the specificity with which antibodies recognize antigens, their stability in various environmental conditions, their therapeutic efficacy and their detectability in diagnostic applications. Fermentation chambers have been used for large scale antibody production.

Chimeric antibodies

While mouse and human antibodies are structurally similar, the differences between them were sufficient to invoke an immune response when murine monoclonal antibodies were injected into humans, resulting in their rapid removal from the blood, as well as systemic inflammatory effects and the production of human anti-mouse antibodies (HAMA).

Recombinant DNA has been explored since the late 1980s to increase residence times. In one approach, mouse DNA encoding the binding portion of a monoclonal antibody was merged with human antibody-producing DNA in living cells. The expression of this "chimeric" or "humanised" DNA through cell culture yielded part-mouse, part-human antibodies.

Human antibodies

Approaches have been developed to isolate human monoclonal antibodies.

Ever since the discovery that monoclonal antibodies could be generated, scientists have targeted the creation of fully human products to reduce the side effects of humanised or chimeric antibodies. Several successful approaches have been identified: transgenic mice, phage display and single B cell cloning.

As of November 2016, thirteen of the nineteen fully human monoclonal antibody therapeutics on the market were derived from transgenic mice technology.

Organizations that market transgenic technology include:

  • Medarex, which marketed the UltiMab platform. Medarex was acquired by Bristol Myers Squibb in July 2009.
  • Abgenix, which marketed the Xenomouse technology. Abgenix was acquired by Amgen in April 2006.
  • Regeneron Pharmaceuticals, which markets its VelocImmune technology.
  • Kymab, which markets its Kymouse technology.
  • Open Monoclonal Technology, which markets its OmniRat™ and OmniMouse™ platforms.
  • TRIANNI, Inc., which markets its TRIANNI Mouse platform.
  • Ablexis, LLC, which markets its AlivaMab Mouse platform.

Phage display can be used to express variable antibody domains on filamentous phage coat proteins (Phage major coat protein). These phage display antibodies can be used for various research applications. ProAb was announced in December 1997 and involved high throughput screening of antibody libraries against diseased and non-diseased tissue, whilst Proximol used a free radical enzymatic reaction to label molecules in proximity to a given protein.

Monoclonal antibodies have been approved to treat cancer, cardiovascular disease, inflammatory diseases, macular degeneration, transplant rejection, multiple sclerosis and viral infection.

In August 2006, the Pharmaceutical Research and Manufacturers of America reported that U.S. companies had 160 different monoclonal antibodies in clinical trials or awaiting approval by the Food and Drug Administration.

Cost

Monoclonal antibodies are more expensive to manufacture than small molecules due to the complex processes involved and the general size of the molecules; this is in addition to the enormous research and development costs involved in bringing a new chemical entity to patients. They are priced to enable manufacturers to recoup the typically large investment costs, and where there are no price controls, such as in the United States, prices can be higher if they provide great value. Seven University of Pittsburgh researchers concluded, "The annual price of mAb therapies is about $100,000 higher in oncology and hematology than in other disease states," comparing them, on a per-patient basis, to those for cardiovascular or metabolic disorders, immunology, infectious diseases, allergy, and ophthalmology.

Applications

Diagnostic tests

Once monoclonal antibodies for a given substance have been produced, they can be used to detect the presence of this substance. Proteins can be detected using the Western blot and immuno dot blot tests. In immunohistochemistry, monoclonal antibodies can be used to detect antigens in fixed tissue sections, and similarly, immunofluorescence can be used to detect a substance in either frozen tissue section or live cells.

Analytic and chemical uses

Antibodies can also be used to purify their target compounds from mixtures, using the method of immunoprecipitation.

Therapeutic uses

Therapeutic monoclonal antibodies act through multiple mechanisms, such as blocking of targeted molecule functions, inducing apoptosis in cells which express the target, or by modulating signalling pathways.

Cancer treatment

One possible treatment for cancer involves monoclonal antibodies that bind only to cancer-cell-specific antigens and induce an immune response against the target cancer cell. Such mAbs can be modified for delivery of a toxin, radioisotope, cytokine or other active conjugate or to design bispecific antibodies that can bind with their Fab regions both to target antigen and to a conjugate or effector cell. Every intact antibody can bind to cell receptors or other proteins with its Fc region.

Monoclonal antibodies for cancer. ADEPT: antibody-directed enzyme prodrug therapy; ADCC: antibody-dependent cell-mediated cytotoxicity; CDC: complement-dependent cytotoxicity; MAb: monoclonal antibody; scFv: single-chain Fv fragment.

MAbs approved by the FDA for cancer include rituximab, trastuzumab, alemtuzumab, cetuximab and bevacizumab; see the examples of therapeutic monoclonal antibodies listed below.

Autoimmune diseases

Monoclonal antibodies used for autoimmune diseases include infliximab and adalimumab, which are effective in rheumatoid arthritis, Crohn's disease, ulcerative colitis and ankylosing spondylitis by their ability to bind to and inhibit TNF-α. Basiliximab and daclizumab inhibit IL-2 on activated T cells and thereby help prevent acute rejection of kidney transplants. Omalizumab inhibits human immunoglobulin E (IgE) and is useful in treating moderate-to-severe allergic asthma.

Examples of therapeutic monoclonal antibodies

Monoclonal antibodies for research applications can be found directly from antibody suppliers, or through use of a specialist search engine like CiteAb. Below are examples of clinically important monoclonal antibodies.

Main category | Type | Application | Mechanism/Target | Mode
Anti-inflammatory | infliximab | | inhibits TNF-α | chimeric
Anti-inflammatory | adalimumab | | inhibits TNF-α | human
Anti-inflammatory | basiliximab | | inhibits IL-2 on activated T cells | chimeric
Anti-inflammatory | daclizumab | | inhibits IL-2 on activated T cells | humanized
Anti-inflammatory | omalizumab | moderate-to-severe allergic asthma | inhibits human immunoglobulin E (IgE) | humanized
Anti-cancer | gemtuzumab | | targets myeloid cell surface antigen CD33 on leukemia cells | humanized
Anti-cancer | alemtuzumab | | targets an antigen CD52 on T- and B-lymphocytes | humanized
Anti-cancer | rituximab[52] | | targets phosphoprotein CD20 on B lymphocytes | chimeric
Anti-cancer | trastuzumab | breast cancer with HER2/neu overexpression | targets the HER2/neu (erbB2) receptor | humanized
Anti-cancer | nimotuzumab | | EGFR inhibitor | humanized
Anti-cancer | cetuximab | | EGFR inhibitor | chimeric
Anti-cancer | bevacizumab & ranibizumab | | inhibits VEGF | humanized
Anti-cancer and anti-viral | bavituximab | | immunotherapy, targets phosphatidylserine | chimeric
Anti-viral | casirivimab/imdevimab | | immunotherapy, targets spike protein of SARS-CoV-2 | chimeric
Anti-viral | bamlanivimab/etesevimab | | immunotherapy, targets spike protein of SARS-CoV-2 | chimeric
Other | palivizumab | RSV infections in children | inhibits an RSV fusion (F) protein | humanized
Other | abciximab | | inhibits the receptor GpIIb/IIIa on platelets | chimeric

Side effects

Several monoclonal antibodies, such as bevacizumab and cetuximab, can cause different kinds of side effects. These side effects can be categorized into common and serious side effects.

Some common side effects include:

  • Dizziness
  • Headaches
  • Allergies
  • Diarrhea
  • Cough
  • Fever
  • Itching
  • Back pain
  • General weakness
  • Loss of appetite
  • Insomnia
  • Constipation

Among the possible serious side effects are:

Image editing

From Wikipedia, the free encyclopedia
 
Original black and white photo: Migrant Mother, showing Florence Owens Thompson, taken by Dorothea Lange in 1936.
 
The same photo edited to produce a bokeh effect, using a Gaussian blur.

Image editing encompasses the processes of altering images, whether they are digital photographs, traditional photo-chemical photographs, or illustrations. Traditional analog image editing is known as photo retouching, using tools such as an airbrush to modify photographs or editing illustrations with any traditional art medium. Graphic software programs, which can be broadly grouped into vector graphics editors, raster graphics editors, and 3D modelers, are the primary tools with which a user may manipulate, enhance, and transform images. Many image editing programs are also used to render or create computer art from scratch.

Basics of image editing

Raster images are stored in a computer in the form of a grid of picture elements, or pixels. These pixels contain the image's color and brightness information. Image editors can change the pixels to enhance the image in many ways. The pixels can be changed as a group, or individually, by the sophisticated algorithms within the image editors. This article mostly refers to bitmap graphics editors, which are often used to alter photographs and other raster graphics. However, vector graphics software, such as Adobe Illustrator, CorelDRAW, Xara Designer Pro or Inkscape, is used to create and modify vector images, which are stored as descriptions of lines, Bézier curves, and text instead of pixels. It is easier to rasterize a vector image than to vectorize a raster image; how to go about vectorizing a raster image is the focus of much research in the field of computer vision. Vector images can be modified more easily because they contain descriptions of the shapes for easy rearrangement. They are also scalable, being rasterizable at any resolution.

Automatic image enhancement

Camera or computer image editing programs often offer basic automatic image enhancement features that correct color hue and brightness imbalances as well as other image editing features, such as red eye removal, sharpness adjustments, zoom features and automatic cropping. These are called automatic because generally they happen without user interaction or are offered with a single click of a button or by selecting an option from a menu. Additionally, some automatic editing features offer a combination of editing actions with little or no user interaction.
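As a rough illustration of what a one-click enhancement might do under the hood, the following Python sketch applies automatic contrast stretching plus mild colour and sharpness adjustments with the Pillow library; the input file name and the adjustment factors are hypothetical examples, not a particular program's algorithm.

```python
# Minimal sketch of a "one-click" automatic enhancement using Pillow.
# "photo.jpg" is a hypothetical input file; the adjustment factors are arbitrary.
from PIL import Image, ImageOps, ImageEnhance

img = Image.open("photo.jpg").convert("RGB")

auto = ImageOps.autocontrast(img, cutoff=1)       # stretch brightness/contrast range
auto = ImageEnhance.Color(auto).enhance(1.1)      # mild saturation boost
auto = ImageEnhance.Sharpness(auto).enhance(1.2)  # mild sharpening

auto.save("photo_auto.jpg")
```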

Digital data compression

Many image file formats use data compression to reduce file size and save storage space. Digital compression of images may take place in the camera, or can be done in the computer with the image editor. When images are stored in JPEG format, compression has already taken place. Both cameras and computer programs allow the user to set the level of compression.

Some compression algorithms, such as those used in the PNG file format, are lossless, which means no information is lost when the file is saved. By contrast, the more popular JPEG file format uses a lossy compression algorithm (based on discrete cosine transform coding) by which the greater the compression, the more information is lost, ultimately reducing image quality or detail that cannot be restored. JPEG uses knowledge of the way the human brain and eyes perceive color to make this loss of detail less noticeable.
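The trade-off can be seen directly by saving the same image both ways. The sketch below uses the Pillow library; the file names are hypothetical and the quality settings are just example values.

```python
# Sketch contrasting lossless PNG with lossy JPEG using Pillow.
# "photo.png" is a hypothetical input; lower JPEG quality means more compression,
# smaller files, and more irreversible loss of detail.
import os
from PIL import Image

img = Image.open("photo.png").convert("RGB")

img.save("copy_lossless.png")                # PNG: no information is lost
for quality in (95, 75, 30):
    name = f"copy_q{quality}.jpg"
    img.save(name, "JPEG", quality=quality)  # JPEG: DCT-based lossy compression
    print(name, os.path.getsize(name), "bytes")
```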

Image editor features

Listed below are some of the most used capabilities of the better graphics manipulation programs. The list is by no means all-inclusive. There are a myriad of choices associated with the application of most of these features.

Selection

One of the prerequisites for many of the applications mentioned below is a method of selecting part(s) of an image, thus applying a change selectively without affecting the entire picture. Most graphics programs have several means of accomplishing this, such as:

  • a marquee tool for selecting rectangular or other regular polygon-shaped regions,
  • a lasso tool for freehand selection of a region,
  • a magic wand tool that selects objects or regions in the image defined by proximity of color or luminance,
  • vector-based pen tools,

as well as more advanced facilities such as edge detection, masking, alpha compositing, and color and channel-based extraction. The border of a selected area in an image is often animated with the marching ants effect to help the user to distinguish the selection border from the image background.

Layers

Leonardo da Vinci's Vitruvian Man overlaid with Goethe's Color Wheel using a screen layer in Adobe Photoshop. Screen layers can be helpful in graphic design and in creating multiple exposures in photography.
 
Leonardo da Vinci's Vitruvian Man overlaid with a soft light layer of Moses Harris's Color Wheel and a soft light layer of Ignaz Schiffermüller's Color Wheel. Soft light layers have a darker, more translucent look than screen layers.

Another feature common to many graphics applications is that of Layers, which are analogous to sheets of transparent acetate (each containing separate elements that make up a combined picture), stacked on top of each other, each capable of being individually positioned, altered and blended with the layers below, without affecting any of the elements on the other layers. This is a fundamental workflow which has become the norm for the majority of programs on the market today, and enables maximum flexibility for the user while maintaining non-destructive editing principles and ease of use.
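As an illustration of how a blend mode combines a layer with the layer below it, the following sketch implements the widely published formula for the screen mode mentioned in the captions above, using NumPy arrays scaled to the range [0, 1]; soft light is omitted because several variants of its formula are in use.

```python
# Sketch of the standard "screen" blend mode applied per pixel with NumPy.
# Images are assumed to be float arrays scaled to [0, 1].
import numpy as np

def screen_blend(base: np.ndarray, layer: np.ndarray) -> np.ndarray:
    """Screen blending: result = 1 - (1 - base) * (1 - layer)."""
    return 1.0 - (1.0 - base) * (1.0 - layer)

base = np.random.rand(4, 4, 3)   # stand-in for the lower layer
layer = np.random.rand(4, 4, 3)  # stand-in for the upper (screen) layer
out = screen_blend(base, layer)

# Screen never darkens: every output pixel is >= both inputs.
assert np.all(out >= base) and np.all(out >= layer)
```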

Image size alteration

Image editors can resize images in a process often called image scaling, making them larger or smaller. High image resolution cameras can produce large images which are often reduced in size for Internet use. Image editor programs use a mathematical process called resampling to calculate new pixel values whose spacing is larger or smaller than the original pixel values. Images for Internet use are kept small, say 640 × 480 pixels, which equals about 0.3 megapixels.
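As a rough sketch of this arithmetic, the following Python example (using the Pillow library and a hypothetical input file) downsamples an image to 640 × 480 and reports the megapixel counts before and after.

```python
# Sketch of downscaling an image for web use with Pillow's resampling filters.
# "large_photo.jpg" is a hypothetical high-resolution input.
from PIL import Image

img = Image.open("large_photo.jpg")
print("original:", img.size, f"{img.width * img.height / 1e6:.1f} MP")

small = img.resize((640, 480), resample=Image.LANCZOS)   # resampling step
print("resized :", small.size, f"{small.width * small.height / 1e6:.2f} MP")  # ~0.31 MP

small.save("web_photo.jpg", quality=85)
```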

Cropping an image

Digital editors are used to crop images. Cropping creates a new image by selecting a desired rectangular portion from the image being cropped. The unwanted part of the image is discarded. Image cropping does not reduce the resolution of the area cropped. Best results are obtained when the original image has a high resolution. A primary reason for cropping is to improve the image composition in the new image.
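A minimal Pillow sketch of cropping is shown below; the input file and crop rectangle are hypothetical, and the pixels inside the rectangle keep their original resolution, as described above.

```python
# Sketch of cropping with Pillow: the box is (left, upper, right, lower) in pixels.
from PIL import Image

img = Image.open("large_photo.jpg")   # hypothetical input
box = (400, 250, 1200, 850)           # arbitrary crop rectangle
lily = img.crop(box)
print("cropped size:", lily.size)     # (800, 600)
lily.save("lily_cropped.jpg")
```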

Uncropped image from camera
Lily cropped from larger image

 

Digital image

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Digital_image

A digital image is an image composed of picture elements, also known as pixels, each holding a finite, discrete numeric value that represents its intensity or gray level, produced as the output of a two-dimensional function whose inputs are the spatial coordinates x and y. Depending on whether the image resolution is fixed, it may be of vector or raster type. By itself, the term "digital image" usually refers to raster images or bitmapped images (as opposed to vector images).

Raster

Raster images have a finite set of digital values, called picture elements or pixels. The digital image contains a fixed number of rows and columns of pixels. Pixels are the smallest individual elements in an image, holding quantized values that represent the brightness of a given color at any specific point.

Typically, the pixels are stored in computer memory as a raster image or raster map, a two-dimensional array of small integers. These values are often transmitted or stored in a compressed form.
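A small NumPy sketch of this representation is shown below: a grayscale raster held as a two-dimensional array of 8-bit integers.

```python
# Sketch of a raster image as a two-dimensional array of small integers:
# an 8-bit grayscale gradient, where each pixel holds a value from 0 to 255.
import numpy as np

height, width = 4, 8
x = np.arange(width, dtype=np.uint8)
image = np.tile(x * 32, (height, 1))   # rows x columns grid of pixel values

print(image.shape)   # (4, 8) -> 4 rows, 8 columns
print(image[0, :])   # intensities of the first row
print(image.dtype)   # uint8: the "small integers" described above
```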

Raster images can be created by a variety of input devices and techniques, such as digital cameras, scanners, coordinate-measuring machines, seismographic profiling, airborne radar, and more. They can also be synthesized from arbitrary non-image data, such as mathematical functions or three-dimensional geometric models; the latter being a major sub-area of computer graphics. The field of digital image processing is the study of algorithms for their transformation.

Raster file formats

Most users come into contact with raster images through digital cameras, which use any of several image file formats.

Some digital cameras give access to almost all the data captured by the camera, using a raw image format. The Universal Photographic Digital Imaging Guidelines (UPDIG) suggests these formats be used when possible since raw files produce the best quality images. These file formats allow the photographer and the processing agent the greatest level of control and accuracy for output. Their use is inhibited by the prevalence of proprietary information (trade secrets) for some camera makers, but there have been initiatives such as OpenRAW to influence manufacturers to release these records publicly. An alternative may be Digital Negative (DNG), a proprietary Adobe product described as "the public, archival format for digital camera raw data". Although this format is not yet universally accepted, support for the product is growing, and increasingly professional archivists and conservationists, working for respectable organizations, variously suggest or recommend DNG for archival purposes.

Vector

Vector images result from mathematical geometry (vectors). In mathematical terms, a vector consists of both a magnitude, or length, and a direction.

Often, both raster and vector elements will be combined in one image; for example, in the case of a billboard with text (vector) and photographs (raster).

Examples of vector file types are EPS, PDF, and AI.

Image viewing

Image viewer software displays images. Web browsers can display standard internet image formats including JPEG, GIF and PNG. Some can show SVG format, which is a standard W3C format. In the past, when the Internet was still slow, it was common to provide a "preview" image that would load and appear on the website before being replaced by the main image (to give a preliminary impression). Now the Internet is fast enough that such preview images are seldom used.

Some scientific images can be very large (for instance, the 46 gigapixel image of the Milky Way, about 194 GB in size). Such images are difficult to download and are usually browsed online through more complex web interfaces.

Some viewers offer a slideshow utility to display a sequence of images.

History

The first scan done by the SEAC in 1957
 
The SEAC scanner

Early digital fax machines such as the Bartlane cable picture transmission system preceded digital cameras and computers by decades. The first picture to be scanned, stored, and recreated in digital pixels was displayed on the Standards Eastern Automatic Computer (SEAC) at NIST. The advancement of digital imagery continued in the early 1960s, alongside development of the space program and in medical research. Projects at the Jet Propulsion Laboratory, MIT, Bell Labs and the University of Maryland, among others, used digital images to advance satellite imagery, wirephoto standards conversion, medical imaging, videophone technology, character recognition, and photo enhancement.

Rapid advances in digital imaging began with the introduction of MOS integrated circuits in the 1960s and microprocessors in the early 1970s, alongside progress in related computer memory storage, display technologies, and data compression algorithms.

The invention of computerized axial tomography (CAT scanning), using x-rays to produce a digital image of a "slice" through a three-dimensional object, was of great importance to medical diagnostics. As well as origination of digital images, digitization of analog images allowed the enhancement and restoration of archaeological artifacts and began to be used in fields as diverse as nuclear medicine, astronomy, law enforcement, defence and industry.

Advances in microprocessor technology paved the way for the development and marketing of charge-coupled devices (CCDs) for use in a wide range of image capture devices and gradually displaced the use of analog film and tape in photography and videography towards the end of the 20th century. The computing power necessary to process digital image capture also allowed computer-generated digital images to achieve a level of refinement close to photorealism.

Digital image sensors

The basis for digital image sensors is metal-oxide-semiconductor (MOS) technology, which originates from the invention of the MOSFET (MOS field-effect transistor) by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959. This led to the development of digital semiconductor image sensors, including the charge-coupled device (CCD) and later the CMOS sensor.

The first semiconductor image sensor was the CCD, developed by Willard S. Boyle and George E. Smith at Bell Labs in 1969. While researching MOS technology, they realized that an electric charge was analogous to the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. The CCD is a semiconductor circuit that was later used in the first digital video cameras for television broadcasting.

Early CCD sensors suffered from shutter lag. This was largely resolved with the invention of the pinned photodiode (PPD). It was invented by Nobukazu Teranishi, Hiromitsu Shiraki and Yasuo Ishihara at NEC in 1980. It was a photodetector structure with low lag, low noise, high quantum efficiency and low dark current. In 1987, the PPD began to be incorporated into most CCD devices, becoming a fixture in consumer electronic video cameras and then digital still cameras. Since then, the PPD has been used in nearly all CCD sensors and then CMOS sensors.

The NMOS active-pixel sensor (APS) was invented by Olympus in Japan during the mid-1980s. This was enabled by advances in MOS semiconductor device fabrication, with MOSFET scaling reaching smaller micron and then sub-micron levels. The NMOS APS was fabricated by Tsutomu Nakamura's team at Olympus in 1985. The CMOS active-pixel sensor (CMOS sensor) was later developed by Eric Fossum's team at the NASA Jet Propulsion Laboratory in 1993. By 2007, sales of CMOS sensors had surpassed CCD sensors.

Digital image compression

An important development in digital image compression technology was the discrete cosine transform (DCT), a lossy compression technique first proposed by Nasir Ahmed in 1972. DCT compression became the basis for JPEG, which was introduced by the Joint Photographic Experts Group in 1992. JPEG compresses images down to much smaller file sizes, and has become the most widely used image file format on the Internet. Its highly efficient DCT compression algorithm was largely responsible for the wide proliferation of digital images and digital photos, with several billion JPEG images produced every day as of 2015.
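To give a feel for why DCT coding compresses well, the sketch below (using NumPy and SciPy, with made-up sample values) transforms a smooth 8-sample block, discards the small coefficients, and reconstructs a close approximation. JPEG applies the same idea to 8×8 blocks in two dimensions, followed by quantization and entropy coding.

```python
# Sketch of the energy compaction behind DCT-based (JPEG-style) compression:
# a smooth 8-sample block is transformed, small coefficients are dropped,
# and the inverse transform still reproduces the block closely.
import numpy as np
from scipy.fft import dct, idct

block = np.array([52, 55, 61, 66, 70, 61, 64, 73], dtype=float)

coeffs = dct(block, norm="ortho")    # most energy lands in the first coefficients
coeffs[np.abs(coeffs) < 10] = 0      # crude "quantization": discard small terms
approx = idct(coeffs, norm="ortho")

print(np.round(coeffs, 1))
print(np.round(approx, 1), "max error:", np.round(np.max(np.abs(approx - block)), 2))
```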

Mosaic

In digital imaging, a mosaic is a combination of non-overlapping images, arranged in some tessellation. Gigapixel images are an example of such digital image mosaics. Satellite imagery is often mosaicked to cover Earth regions.

Interactive viewing is provided by virtual-reality photography.

Digital microscope

From Wikipedia, the free encyclopedia
 
An insect observed with a digital microscope.
 
Entomologist using a digital microscope to magnify a miniature insect.

A digital microscope is a variation of a traditional optical microscope that uses optics and a digital camera to output an image to a monitor, sometimes by means of software running on a computer. A digital microscope often has its own in-built LED light source, and differs from an optical microscope in that there is no provision to observe the sample directly through an eyepiece. Since the image is focused on the digital circuit, the entire system is designed for the monitor image. The optics for the human eye are omitted.

Digital microscopes can range from cheap USB digital microscopes to advanced industrial digital microscopes costing tens of thousands of dollars. The low-priced commercial microscopes normally omit the optics for illumination (for example Köhler illumination and phase contrast illumination) and are more akin to webcams with a macro lens. For information about stereo microscopes with a digital camera in research and development, see optical microscope.

History

An early digital microscope was made by a company in Tokyo, Japan in 1986, which is now known as Hirox Co. LTD. It included a control box and a lens connected to a computer. The original connection to the computer was analog, through an S-video connection. Over time that connection was changed to FireWire 800 to handle the large amount of digital information coming from the digital camera. Around 2005 they introduced advanced all-in-one units that did not require a computer, but had the monitor and computer built in. Then in late 2015 they released a system that once again had the computer separate, but connected to it by USB 3.0, taking advantage of the speed and longevity of the USB connection. This system was also much more compact than previous models, with a reduction in the number of cables and in the physical size of the unit itself.

A digital microscope allows several students in Laos to examine insect parts. This model cost about USD 150.

The invention of the USB port resulted in a multitude of USB microscopes ranging in quality and magnification. They continue to fall in price, especially compared with traditional optical microscopes. They offer high-resolution images, which are normally recorded directly to a computer, and they draw power from the computer for their built-in LED light source. The resolution is directly related to the number of megapixels available on a specific model, ranging from 1.3 MP and 2 MP to 5 MP and upwards.

Stereo and digital microscopes

A primary difference between a stereo microscope and a digital microscope is the magnification. With a stereo microscope, the magnification is determined by multiplying the eyepiece magnification times the objective magnification. Since the digital microscope does not have an eyepiece, the magnification cannot be found using this method. Instead the magnification for a digital microscope was originally determined by how many times larger the sample was reproduced on a 15” monitor. While monitor sizes have changed, the physical size of the camera chip used has not. As a result magnification numbers and field of view are still the same as that original definition, regardless of the size of the monitor used. The average difference in magnification between an optical microscope and a digital microscope is about 40%. Thus the magnification number of a stereomicroscope is usually 40% less than the magnification number of a digital microscope.

Since the digital microscope has the image projected directly on to the CCD camera, it is possible to have higher quality recorded images than with a stereo microscope. With the stereo microscope, the lenses are made for the optics of the eye. Attaching a CCD camera to a stereo microscope will result in an image that has compromises made for the eyepiece. Although the monitor image and recorded image may be of higher quality with the digital microscope, the application of the microscope may dictate which microscope is preferred.

Digital eyepiece for microscopes

Digital eyepieces for microscopes are supplied with software offering a wide range of optional functions, such as phase contrast observation, bright- and dark-field observation, microphotography, image processing, particle size determination in µm, pathology reporting and patient management, video recording, and drawing and labeling.

Resolution

With a typical 2-megapixel CCD, a 1600×1200-pixel image is generated. The resolution of the image depends on the field of view of the lens used with the camera. The approximate pixel resolution can be determined by dividing the horizontal field of view (FOV) by 1600.
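The rule of thumb above translates directly into code; the following sketch assumes a hypothetical lens with a 3.2 mm horizontal field of view and the 1600-pixel-wide sensor mentioned in the text.

```python
# Sketch of the rule of thumb above: approximate size of one pixel on the sample,
# given the lens's horizontal field of view and a 1600-pixel-wide CCD.
def pixel_size_um(horizontal_fov_um: float, horizontal_pixels: int = 1600) -> float:
    return horizontal_fov_um / horizontal_pixels

# Example: a hypothetical lens with a 3.2 mm (3200 um) horizontal field of view.
print(pixel_size_um(3200.0))   # 2.0 micrometres per pixel
```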

Increased resolution can be accomplished by creating a sub-pixel image. The Pixel Shift Method uses an actuator to physically move the CCD in order to take multiple overlapping images. By combining the images within the microscope, sub-pixel resolution can be generated. This method provides sub-pixel information; averaging a series of standard images is also a proven way to obtain sub-pixel information.
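The interleaving idea behind the pixel-shift approach can be sketched as follows; the frames here are random stand-ins for images captured at half-pixel offsets, and real systems additionally correct for optics and noise.

```python
# Minimal sketch of the pixel-shift idea: four frames captured with half-pixel
# offsets are interleaved into a grid with twice the sampling density.
import numpy as np

h, w = 3, 4
frames = {  # (row offset, column offset) in half-pixel steps -> captured frame
    (0, 0): np.random.rand(h, w),
    (0, 1): np.random.rand(h, w),
    (1, 0): np.random.rand(h, w),
    (1, 1): np.random.rand(h, w),
}

hires = np.zeros((2 * h, 2 * w))
for (dr, dc), frame in frames.items():
    hires[dr::2, dc::2] = frame   # place each frame on its sub-pixel lattice

print(hires.shape)   # (6, 8): double the sampling in each direction
```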

2D measurement

Most of the high-end digital microscope systems have the ability to measure samples in 2D. The measurements are done onscreen by measuring the distance from pixel to pixel. This allows for length, width, diagonal, and circle measurements, among others. Some systems are even capable of counting particles.
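A minimal sketch of such an on-screen measurement is shown below; the calibration factor and the two points are made up, since in practice they come from the system's lens calibration and the user's clicks.

```python
# Sketch of an on-screen 2D measurement: distance between two selected pixels,
# converted to micrometres with a calibration factor (hypothetical value here).
import math

um_per_pixel = 2.0                # hypothetical calibration from the lens setup
p1, p2 = (120, 340), (560, 610)   # two points picked on screen (x, y)

pixels = math.dist(p1, p2)        # Euclidean pixel-to-pixel distance
print(f"{pixels:.1f} px = {pixels * um_per_pixel:.1f} um")
```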

3D measurement

3D measurement is achieved with a digital microscope by image stacking. Using a step motor, the system takes images from the lowest focal plane in the field of view to the highest focal plane. Then it reconstructs these images into a 3D model based on contrast to give a 3D color image of the sample. From this 3D model, measurements can be made, but their accuracy depends on the step motor and the depth of field of the lens.
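A simplified sketch of the contrast-based reconstruction is shown below; it uses a random synthetic stack and a plain gradient-magnitude focus measure, whereas commercial systems use calibrated focus steps and more robust contrast metrics.

```python
# Minimal sketch of contrast-based focus stacking: for each pixel, pick the slice
# in the z-stack where local contrast (here, gradient magnitude) is highest;
# the winning slice index acts as a proxy for height.
import numpy as np

z, h, w = 5, 32, 32
stack = np.random.rand(z, h, w)                  # synthetic stand-in focal slices

gy, gx = np.gradient(stack, axis=(1, 2))         # per-slice image gradients
focus_measure = gx**2 + gy**2                    # simple contrast score

depth_index = np.argmax(focus_measure, axis=0)   # best-focused slice per pixel
all_in_focus = np.take_along_axis(stack, depth_index[None, ...], axis=0)[0]

print(depth_index.shape, all_in_focus.shape)     # (32, 32) (32, 32)
```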

2D and 3D tiling

2D and 3D tiling, also known as stitching or creating a panorama, can now be done with the more advanced digital microscope systems. In 2D tiling the image is automatically tiled together seamlessly in real time by moving the XY stage. 3D tiling combines the XY stage movement of 2D tiling with the Z-axis movement of 3D measurement to create a 3D panorama.

USB microscopes

Salt crystals seen with a USB microscope
Sea salt crystals
 
Table salt crystals
with cubic habit
 
Miniature USB microscope

Digital microscopes range from inexpensive units costing from perhaps US$20, which connect to a computer via USB connector, to units costing tens of thousands of dollars. These advanced digital microscope systems usually are self-contained and do not require a computer.

Some of the cheaper microscopes which connect via USB have no stand, or a simple stand with clampable joints. They are essentially very simple webcams with small lenses and sensors (and can also be used to view subjects which are not very close to the lens), mechanically arranged to allow focus at very close distances. Magnification is typically claimed to be user-adjustable from 10× to 200-400×.

Devices which connect to a computer require software to operate. The basic operation includes viewing the microscope image and recording "snapshots". More advanced functionality, possible even with simpler devices, includes recording moving images, time-lapse photography, measurement, image enhancement, annotation, etc. Many of the simpler units which connect to a computer use standard operating system facilities and do not require device-specific drivers. A consequence of this is that many different microscope software packages can be used interchangeably with different microscopes, although such software may not support features unique to the more advanced devices. Basic operation may be possible with software included as part of computer operating systems; in Windows XP, images from microscopes which do not require special drivers can be viewed and recorded from "Scanners and Cameras" in Control Panel.

The more advanced digital microscope units have stands that hold the microscope and allow it to be racked up and down, similarly to standard optical microscopes. Calibrated movement in all three dimensions is available through the use of a step motor and automated stage. The resolution, image quality, and dynamic range vary with price. Systems with a lower number of pixels have a higher frame rate (30 fps to 100 fps) and faster processing. The faster processing can be seen when using functions like HDR (high dynamic range).

In addition to general-purpose microscopes, instruments specialized for specific applications are produced. These units can have a magnification range of 0-10,000×, and are either all-in-one systems (with the computer built in) or connect to a desktop computer. They also differ from the cheaper USB microscopes not only in the quality of the image, but also in capability and in the quality of the system's construction, giving these types of systems a longer lifetime.

Introduction to M-theory

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduction_to_M-theory In non-tec...