
Sunday, June 19, 2022

Biosensor

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Biosensor

A biosensor is an analytical device, used for the detection of a chemical substance, that combines a biological component with a physicochemical detector. The sensitive biological element (e.g. tissue, microorganisms, organelles, cell receptors, enzymes, antibodies, nucleic acids) is a biologically derived material or biomimetic component that interacts with, binds to, or recognizes the analyte under study. The biologically sensitive elements can also be created by biological engineering. The transducer or detector element, which transforms one signal into another, works in a physicochemical way (optical, piezoelectric, electrochemical, electrochemiluminescent, etc.), converting the interaction of the analyte with the biological element into a signal that can easily be measured and quantified. The biosensor reader device connects to the associated electronics or signal processors, which are primarily responsible for displaying the results in a user-friendly way. The reader sometimes accounts for the most expensive part of the sensor device; however, it is possible to generate a user-friendly display that integrates the transducer and sensitive element (as in a holographic sensor). The readers are usually custom-designed and manufactured to suit the different working principles of biosensors.

Biosensor system

A biosensor typically consists of a bio-receptor (enzyme, antibody, cell, nucleic acid or aptamer), a transducer component (semiconducting material or nanomaterial), and an electronic system that includes a signal amplifier, a processor and a display. Transducers and electronics can be combined, e.g., in CMOS-based microsensor systems. The recognition component, often called a bioreceptor, uses biomolecules from organisms or receptors modeled after biological systems to interact with the analyte of interest. This interaction is measured by the biotransducer, which outputs a measurable signal proportional to the presence of the target analyte in the sample. The general aim of the design of a biosensor is to enable quick, convenient testing at the point of concern or care where the sample was procured.

Bioreceptors

[Figure: Biosensors used for screening combinatorial DNA libraries]

In a biosensor, the bioreceptor is designed to interact with the specific analyte of interest to produce an effect measurable by the transducer. High selectivity for the analyte among a matrix of other chemical or biological components is a key requirement of the bioreceptor. While the type of biomolecule used can vary widely, biosensors can be classified according to common types of bioreceptor interactions involving: antibody/antigen, enzymes/ligands, nucleic acids/DNA, cellular structures/cells, or biomimetic materials.

Antibody/antigen interactions

An immunosensor utilizes the very specific binding affinity of antibodies for a specific compound or antigen. The specific nature of the antibody-antigen interaction is analogous to a lock-and-key fit, in that the antigen will only bind to the antibody if it has the correct conformation. Binding events result in a physicochemical change that, in combination with a tracer such as a fluorescent molecule, enzyme, or radioisotope, can generate a signal. There are limitations with using antibodies in sensors: 1. the antibody binding capacity is strongly dependent on assay conditions (e.g. pH and temperature), and 2. the antibody-antigen interaction is generally robust, but binding can be disrupted by chaotropic reagents, organic solvents, or even ultrasonic radiation.

Antibody-antigen interactions can also be used for serological testing, i.e. the detection of circulating antibodies produced in response to a specific disease. Such serology tests became an important part of the global response to the COVID-19 pandemic.

Artificial binding proteins

The use of antibodies as the bio-recognition component of biosensors has several drawbacks. Antibodies have high molecular weights and limited stability, contain essential disulfide bonds, and are expensive to produce. In one approach to overcome these limitations, recombinant binding fragments (Fab, Fv or scFv) or domains (VH, VHH) of antibodies have been engineered. In another approach, small protein scaffolds with favorable biophysical properties have been engineered to generate artificial families of Antigen Binding Proteins (AgBP), capable of specific binding to different target proteins while retaining the favorable properties of the parent molecule. The members of a family that specifically bind a given target antigen are often selected in vitro by display techniques: phage display, ribosome display, yeast display or mRNA display. The artificial binding proteins are much smaller than antibodies (usually less than 100 amino-acid residues), have high stability, lack disulfide bonds, and can be expressed in high yield in reducing cellular environments such as the bacterial cytoplasm, unlike antibodies and their derivatives. They are thus especially suitable for creating biosensors.

Enzymatic interactions

The specific binding capabilities and catalytic activity of enzymes make them popular bioreceptors. Analyte recognition is enabled through several possible mechanisms: 1) the enzyme converting the analyte into a product that is sensor-detectable, 2) detecting enzyme inhibition or activation by the analyte, or 3) monitoring modification of enzyme properties resulting from interaction with the analyte. The main reasons for the common use of enzymes in biosensors are: 1) their ability to catalyze a large number of reactions; 2) their potential to detect a group of analytes (substrates, products, inhibitors, and modulators of the catalytic activity); and 3) their compatibility with several different transduction methods for detecting the analyte. Notably, since enzymes are not consumed in reactions, the biosensor can easily be used continuously. The catalytic activity of enzymes also allows lower limits of detection compared to common binding techniques. However, the sensor's lifetime is limited by the stability of the enzyme.
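The relationship between substrate concentration and the steady-state signal of an enzyme-based sensor is commonly modeled with Michaelis-Menten kinetics. Below is a minimal Python sketch of that saturation behavior; the Vmax and Km values are illustrative assumptions, not data from any particular sensor.

    # Michaelis-Menten saturation: the steady-state rate an enzymatic
    # biosensor transduces, v = Vmax*[S] / (Km + [S]).
    def enzyme_rate(s, vmax=1.0, km=0.5):
        """Rate for substrate concentration s (arbitrary units)."""
        return vmax * s / (km + s)

    for s in [0.1, 0.5, 1.0, 5.0, 50.0]:
        print(f"[S] = {s:5.1f}  ->  v = {enzyme_rate(s):.3f}")

Note that the response is nearly linear well below Km and saturates far above it, which bounds the useful dynamic range of such a sensor.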

Affinity binding receptors

Antibodies have a high binding constant, in excess of 10^8 L/mol, which corresponds to a nearly irreversible association once the antigen-antibody couple has formed. For certain analyte molecules, like glucose, affinity binding proteins exist that bind their ligand with high specificity, like an antibody, but with a much smaller binding constant, on the order of 10^2 to 10^4 L/mol. The association between analyte and receptor is then reversible: alongside the analyte-receptor couple, the free molecules of both also occur at measurable concentrations. In the case of glucose, for instance, concanavalin A may function as an affinity receptor, exhibiting a binding constant of 4x10^2 L/mol. The use of affinity binding receptors for biosensing was proposed by Schultz and Sims in 1979 and was subsequently configured into a fluorescent assay for measuring glucose in the physiologically relevant range between 4.4 and 6.1 mmol/L. The sensor principle has the advantage that it does not consume the analyte in a chemical reaction, as occurs in enzymatic assays.
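For a reversible receptor, the occupied fraction follows a simple binding isotherm, f = K[A]/(1 + K[A]). The Python sketch below plugs in the binding constant of 4x10^2 L/mol quoted above for concanavalin A and the stated physiological glucose range; everything else is generic.

    # Fraction of affinity receptor occupied at equilibrium.
    K = 4e2  # binding constant for concanavalin A / glucose, L/mol (from text)

    def fraction_bound(conc_mol_per_L, k=K):
        return k * conc_mol_per_L / (1 + k * conc_mol_per_L)

    for mmol in (4.4, 5.0, 6.1):
        f = fraction_bound(mmol * 1e-3)
        print(f"{mmol} mmol/L glucose -> {100 * f:.1f}% occupied")

Because occupancy changes measurably across 4.4-6.1 mmol/L (roughly 64% to 71%), the reversible receptor can track glucose continuously rather than consuming it.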

Nucleic acid interactions

Biosensors employing nucleic acid-based receptors are based either on complementary base pairing interactions (genosensors) or on specific nucleic acid-based antibody mimics, aptamers (aptasensors). In the former, the recognition process relies on the principle of complementary base pairing: adenine:thymine and cytosine:guanine in DNA. If the target nucleic acid sequence is known, complementary sequences can be synthesized, labeled, and then immobilized on the sensor. The hybridization event can be detected optically and the presence of target DNA/RNA ascertained. In the latter, aptamers generated against the target recognize it via an interplay of specific non-covalent interactions and induced fit. These aptamers can easily be labelled with a fluorophore or metal nanoparticles for optical detection, or may be employed in label-free electrochemical or cantilever-based detection platforms, for a wide range of target molecules or complex targets such as cells and viruses. Additionally, aptamers can be combined with nucleic acid enzymes, such as RNA-cleaving DNAzymes, providing both target recognition and signal generation in a single molecule, which shows potential applications in the development of multiplex biosensors.
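In a genosensor, the recognition step is simply Watson-Crick complementarity. The following Python sketch (with made-up example sequences, and ignoring strand orientation for simplicity) shows the comparison a hybridization-based readout ultimately relies on.

    # Watson-Crick pairing: A:T and C:G.
    COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

    def is_complementary(probe, target):
        """True if the immobilized probe pairs base-for-base with the target."""
        return len(probe) == len(target) and all(
            COMPLEMENT[p] == t for p, t in zip(probe, target))

    print(is_complementary("ATGC", "TACG"))  # True  -> duplex forms, signal
    print(is_complementary("ATGC", "TACT"))  # False -> no stable duplex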

Epigenetics

It has been proposed that properly optimized integrated optical resonators can be exploited for detecting epigenetic modifications (e.g. DNA methylation and histone post-translational modifications) in body fluids from patients affected by cancer or other diseases. Ultra-sensitive photonic biosensors are currently being developed at the research level to detect cancerous cells in a patient's urine. Different research projects aim to develop new portable devices that use cheap, environmentally friendly, disposable cartridges requiring only simple handling, with no need for further processing, washing, or manipulation by expert technicians.

Organelles

Organelles form separate compartments inside cells and usually perform their functions independently. Different kinds of organelles have various metabolic pathways and contain the enzymes needed to fulfill their functions. Commonly used organelles include lysosomes, chloroplasts and mitochondria. The spatio-temporal distribution pattern of calcium is closely related to ubiquitous signaling pathways. Mitochondria actively participate in the metabolism of calcium ions to control their function and also modulate calcium-related signaling pathways. Experiments have shown that mitochondria can respond to high calcium concentrations generated in their proximity by opening their calcium channels. In this way, mitochondria can be used to detect the calcium concentration in a medium, and the detection is very sensitive due to the high spatial resolution. Mitochondria have also been used to detect water pollution. Toxic detergent compounds damage the cell and subcellular structures, including mitochondria. The detergents cause a swelling effect that can be measured as a change in absorbance. Experimental data show that the rate of change is proportional to the detergent concentration, allowing accurate detection.

Cells

Cells are often used as bioreceptors because they are sensitive to the surrounding environment and can respond to all kinds of stimuli. Cells tend to attach to surfaces, so they can be easily immobilized. Compared to organelles, they remain active for longer periods, and their reproducibility makes them reusable. They are commonly used to detect global parameters such as stress conditions, toxicity and organic derivatives. They can also be used to monitor the effect of drug treatments. One application is the use of cells to determine herbicides, which are a main aquatic contaminant. Microalgae are entrapped on a quartz microfiber, and the chlorophyll fluorescence modified by herbicides is collected at the tip of an optical fiber bundle and transmitted to a fluorimeter. The algae are continuously cultured to obtain optimized measurements. Results show that the detection limit for certain herbicides can reach sub-ppb concentration levels. Some cells can also be used to monitor microbial corrosion. Pseudomonas sp. has been isolated from a corroded material surface and immobilized on an acetylcellulose membrane. Its respiratory activity is determined by measuring oxygen consumption. There is a linear relationship between the current generated and the concentration of sulfuric acid. The response time is related to the loading of the cells and the surrounding environment, and can be kept to no more than five minutes.

Tissue

Tissues are used in biosensors because of the abundance of enzymes they contain. Advantages of tissues as biosensors include the following:

  • easier immobilization compared to cells and organelles
  • higher activity and stability, since the enzymes are kept in their natural environment
  • availability and low price
  • avoidance of the tedious work of extracting, centrifuging, and purifying enzymes
  • the presence of the cofactors an enzyme needs to function
  • diversity, providing a wide range of choices for different objectives.

There are also some disadvantages of tissues, such as a lack of specificity due to interference from other enzymes, and a longer response time due to the transport barrier.

Microbial biosensors

Microbial biosensors exploit the response of bacteria to a given substance. For example, arsenic can be detected using the ars operon found in several bacterial taxa.

Surface attachment of the biological elements

[Figure: Sensing negatively charged exosomes bound to a graphene surface]

An important part of a biosensor is the attachment of the biological elements (small molecules, proteins, cells) to the surface of the sensor (be it metal, polymer, or glass). The simplest way is to functionalize the surface in order to coat it with the biological elements. This can be done with polylysine, aminosilane, epoxysilane, or nitrocellulose in the case of silicon chips or silica glass. Subsequently, the bound biological agent may also be fixed, for example by layer-by-layer deposition of alternately charged polymer coatings.

Alternatively, three-dimensional lattices (hydrogels or xerogels) can be used to chemically or physically entrap these elements (where "chemically entrapped" means that the biological element is kept in place by a strong bond, while "physically entrapped" means it is kept in place by being unable to pass through the pores of the gel matrix). The most commonly used hydrogel is sol-gel, a glassy silica generated by polymerization of silicate monomers (added as tetraalkyl orthosilicates, such as TMOS or TEOS) in the presence of the biological elements (along with other stabilizing polymers, such as PEG), in the case of physical entrapment.

Another group of hydrogels, which set under conditions suitable for cells or proteins, are acrylate hydrogels, which polymerize upon radical initiation. One type of radical initiator is the peroxide radical, typically generated by combining a persulfate with TEMED (polyacrylamide gels are also commonly used for protein electrophoresis); alternatively, light can be used in combination with a photoinitiator such as DMPA (2,2-dimethoxy-2-phenylacetophenone). Smart materials that mimic the biological components of a sensor can also be classified as biosensors, using only the active or catalytic site or analogous configurations of a biomolecule.

Biotransducer

Classification of biosensors based on type of biotransducer

Biosensors can be classified by their biotransducer type. The most common types of biotransducers used in biosensors are:

  • electrochemical biosensors
  • optical biosensors
  • electronic biosensors
  • piezoelectric biosensors
  • gravimetric biosensors
  • pyroelectric biosensors
  • magnetic biosensors

Electrochemical

Electrochemical biosensors are normally based on enzymatic catalysis of a reaction that produces or consumes electrons (such enzymes are rightly called redox enzymes). The sensor substrate usually contains three electrodes: a reference electrode, a working electrode and a counter electrode. The target analyte is involved in the reaction that takes place on the active electrode surface, and the reaction may cause either electron transfer across the double layer (producing a current) or a contribution to the double-layer potential (producing a voltage). One can either measure the current at a fixed potential (where the rate of electron flow is proportional to the analyte concentration) or measure the potential at zero current (which gives a logarithmic response). Note that the potential of the working or active electrode is space-charge sensitive, and this is often exploited. Furthermore, label-free, direct electrical detection of small peptides and proteins is possible via their intrinsic charges, using biofunctionalized ion-sensitive field-effect transistors.
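In amperometric mode, the current at a fixed potential scales linearly with analyte concentration, so in practice the sensor is calibrated against known standards. A minimal Python sketch, using invented calibration numbers:

    # Least-squares line through hypothetical calibration standards,
    # then read an unknown off the fitted line.
    known_conc = [1.0, 2.0, 4.0, 8.0]      # mmol/L (hypothetical standards)
    current    = [0.21, 0.39, 0.82, 1.60]  # uA (hypothetical readings)

    n = len(known_conc)
    mean_x = sum(known_conc) / n
    mean_y = sum(current) / n
    slope = (sum((x - mean_x) * (y - mean_y)
                 for x, y in zip(known_conc, current))
             / sum((x - mean_x) ** 2 for x in known_conc))
    intercept = mean_y - slope * mean_x

    i_sample = 1.10  # uA measured on an unknown sample
    print(f"estimated concentration = {(i_sample - intercept) / slope:.2f} mmol/L")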

Another example, the potentiometric biosensor (potential produced at zero current), gives a logarithmic response with a high dynamic range. Such biosensors are often made by screen-printing the electrode patterns on a plastic substrate coated with a conducting polymer, to which a protein (enzyme or antibody) is then attached. They have only two electrodes and are extremely sensitive and robust. They enable the detection of analytes at levels previously achievable only by HPLC and LC/MS, and without rigorous sample preparation. All biosensors usually involve minimal sample preparation, as the biological sensing component is highly selective for the analyte concerned. The signal is produced by electrochemical and physical changes in the conducting polymer layer due to changes occurring at the surface of the sensor. Such changes can be attributed to ionic strength, pH, hydration and redox reactions, the latter due to an enzyme label turning over a substrate. Field-effect transistors in which the gate region has been modified with an enzyme or antibody can also detect very low concentrations of various analytes, as the binding of the analyte to the gate region of the FET causes a change in the drain-source current.
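The logarithmic response of a potentiometric sensor follows from the Nernst equation, E = E0 + (RT/nF) ln(a). A minimal Python sketch, where the standard potential E0 and the charge number n are illustrative assumptions:

    import math

    R, T, F = 8.314, 298.15, 96485.0  # J/(mol*K), K, C/mol

    def nernst_potential(activity, e0=0.20, n=1):
        # E = E0 + (RT/nF) * ln(a); e0 and n are assumed values
        return e0 + (R * T / (n * F)) * math.log(activity)

    for a in (1e-6, 1e-4, 1e-2):
        print(f"a = {a:.0e}  ->  E = {1000 * nernst_potential(a):7.1f} mV")

Each decade of activity shifts the potential by about 59 mV at room temperature for n = 1, which is why a single electrode can span many orders of magnitude in concentration.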

Impedance spectroscopy-based biosensor development has been gaining traction, and many such devices are found in academia and industry. One such device, based on a 4-electrode electrochemical cell using a nanoporous alumina membrane, has been shown to detect low concentrations of human alpha-thrombin in the presence of a high background of serum albumin. Interdigitated electrodes have also been used for impedance biosensors.

Ion channel switch

[Figure: ICS – channel open]

[Figure: ICS – channel closed]

The use of ion channels has been shown to offer highly sensitive detection of target biological molecules. By embedding the ion channels in supported or tethered bilayer membranes (t-BLM) attached to a gold electrode, an electrical circuit is created. Capture molecules such as antibodies can be bound to the ion channel so that the binding of the target molecule controls the ion flow through the channel. This results in a measurable change in the electrical conduction which is proportional to the concentration of the target.

An ion channel switch (ICS) biosensor can be created using gramicidin, a dimeric peptide channel, in a tethered bilayer membrane. One peptide of gramicidin, with attached antibody, is mobile and one is fixed. Breaking the dimer stops the ionic current through the membrane. The magnitude of the change in electrical signal is greatly increased by separating the membrane from the metal surface using a hydrophilic spacer.

Quantitative detection of an extensive class of target species, including proteins, bacteria, drugs and toxins, has been demonstrated using different membrane and capture configurations. The European research project Greensense is developing a biosensor to perform quantitative screening of drugs of abuse, such as THC, morphine, and cocaine, in saliva and urine.

Reagentless fluorescent biosensor

A reagentless biosensor can monitor a target analyte in a complex biological mixture without any additional reagent. It can therefore function continuously if immobilized on a solid support. A fluorescent biosensor responds to the interaction with its target analyte by a change in its fluorescence properties. A reagentless fluorescent (RF) biosensor can be obtained by integrating, in a single macromolecule, a biological receptor directed against the target analyte and a solvatochromic fluorophore whose emission properties are sensitive to the nature of its local environment. The fluorophore transduces the recognition event into a measurable optical signal. The use of extrinsic fluorophores, whose emission properties differ widely from those of the intrinsic fluorophores of proteins (tryptophan and tyrosine), makes it possible to immediately detect and quantify the analyte in complex biological mixtures. The fluorophore must be integrated at a site where it is sensitive to the binding of the analyte without perturbing the affinity of the receptor.

Antibodies and artificial families of Antigen Binding Proteins (AgBP) are well suited to provide the recognition module of RF biosensors, since they can be directed against any antigen (see the paragraph on bioreceptors). A general approach has been described for integrating a solvatochromic fluorophore into an AgBP when the atomic structure of its complex with the antigen is known, and thus transforming it into an RF biosensor. A residue of the AgBP is identified in the neighborhood of the antigen in their complex. This residue is changed into a cysteine by site-directed mutagenesis, and the fluorophore is chemically coupled to the mutant cysteine. When the design is successful, the coupled fluorophore does not prevent the binding of the antigen; this binding shields the fluorophore from the solvent, so that it can be detected by a change in fluorescence. This strategy is also valid for antibody fragments.

However, in the absence of specific structural data, other strategies must be applied. Antibodies and artificial families of AgBPs are constituted by a set of hypervariable (or randomized) residue positions, located in a unique sub-region of the protein and supported by a constant polypeptide scaffold. The residues that form the binding site for a given antigen are selected among the hypervariable residues. It is possible to transform any AgBP of these families into an RF biosensor specific for the target antigen, simply by coupling a solvatochromic fluorophore to one of the hypervariable residues that have little or no importance for the interaction with the antigen, after changing this residue into a cysteine by mutagenesis. More specifically, the strategy consists of individually changing the residues of the hypervariable positions into cysteine at the genetic level, chemically coupling a solvatochromic fluorophore to each mutant cysteine, and then keeping the resulting conjugates that have the highest sensitivity (a parameter that involves both the affinity and the variation of the fluorescence signal). This approach is also valid for families of antibody fragments.
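The screening step above amounts to ranking conjugates by a sensitivity score that rewards both retained affinity and a large fluorescence change. A Python sketch with invented residue positions and numbers, purely to illustrate the selection logic:

    # Hypothetical conjugates: position -> (Kd in nM, |dF/F0| on binding).
    conjugates = {
        "S52C": (12.0, 0.85),   # tight binder, large signal change
        "N55C": (450.0, 0.90),  # large change but affinity badly perturbed
        "T60C": (15.0, 0.10),   # affinity kept but fluorophore insensitive
    }

    def score(kd_nM, dff0):
        # Higher score = tighter binding AND larger relative signal change.
        return dff0 / kd_nM

    best = max(conjugates, key=lambda pos: score(*conjugates[pos]))
    print("keep:", best)  # -> S52C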

A posteriori studies have shown that the best reagentless fluorescent biosensors are obtained when the fluorophore does not make non-covalent interactions with the surface of the bioreceptor (which would increase the background signal) and when it interacts with a binding pocket at the surface of the target antigen. The RF biosensors obtained by the above methods can function and detect target analytes inside living cells.

Magnetic biosensors

Magnetic biosensors utilize paramagnetic or superparamagnetic particles, or crystals, to detect biological interactions, measured for example via coil inductance, resistance, or other magnetic properties. It is common to use magnetic nano- or microparticles. On the surface of such particles are the bioreceptors, which can be DNA (complementary to a sequence, or aptamers), antibodies, or others. The binding of the bioreceptor will affect some of the magnetic particle's properties, which can be measured by AC susceptometry, a Hall effect sensor, a giant magnetoresistance device, or others.

Others

Piezoelectric sensors utilise crystals that undergo an elastic deformation when an electrical potential is applied to them. An alternating potential (AC) produces a standing wave in the crystal at a characteristic frequency. This frequency is highly dependent on the elastic properties of the crystal, such that if the crystal is coated with a biological recognition element, the binding of a (large) target analyte to a receptor will produce a change in the resonance frequency, which gives a binding signal. In a mode that uses surface acoustic waves (SAW), the sensitivity is greatly increased. This is a specialised application of the quartz crystal microbalance as a biosensor.
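For a quartz crystal microbalance, the frequency change is commonly related to bound mass by the Sauerbrey equation, Δf = -2 f0² Δm / (A √(ρq μq)). A minimal Python sketch using standard quartz constants and an assumed 5 MHz, 1 cm² crystal:

    import math

    f0 = 5e6         # resonance frequency, Hz (assumed crystal)
    A = 1e-4         # active area, m^2 (1 cm^2)
    rho_q = 2648.0   # density of quartz, kg/m^3
    mu_q = 2.947e10  # shear modulus of quartz, Pa

    def sauerbrey_shift(delta_m_kg):
        # Frequency shift for an added rigid mass delta_m_kg.
        return -2 * f0**2 * delta_m_kg / (A * math.sqrt(rho_q * mu_q))

    print(f"1 ng bound -> {sauerbrey_shift(1e-12):.3f} Hz")  # about -0.06 Hz

This nanogram sensitivity is the same effect exploited by the QCM-based cancer-cell sensors described later in this article.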

Electrochemiluminescence (ECL) is nowadays a leading technique in biosensors. Since the excited species are produced with an electrochemical stimulus rather than with a light excitation source, ECL displays an improved signal-to-noise ratio compared to photoluminescence, with minimized effects from light scattering and luminescence background. In particular, coreactant ECL operating in buffered aqueous solution in the region of positive potentials (oxidative-reduction mechanism) has definitively boosted ECL for immunoassays, as confirmed by many research applications and, even more, by the presence of important companies that have developed commercial hardware for high-throughput immunoassay analysis in a market worth billions of dollars each year.

Thermometric biosensors are rare.

Biosensor MOSFET (BioFET)

The MOSFET (metal-oxide-semiconductor field-effect transistor, or MOS transistor) was invented by Mohamed M. Atalla and Dawon Kahng in 1959 and demonstrated in 1960. Two years later, in 1962, Leland C. Clark and Champ Lyons invented the first biosensor. Biosensor MOSFETs (BioFETs) were later developed, and they have since been widely used to measure physical, chemical, biological and environmental parameters.

The first BioFET was the ion-sensitive field-effect transistor (ISFET), invented by Piet Bergveld for electrochemical and biological applications in 1970. The adsorption FET (ADFET) was patented by P.F. Cox in 1974, and a hydrogen-sensitive MOSFET was demonstrated by I. Lundstrom, M.S. Shivaraman, C.S. Svenson and L. Lundkvist in 1975. The ISFET is a special type of MOSFET with a gate at a certain distance, in which the metal gate is replaced by an ion-sensitive membrane, an electrolyte solution and a reference electrode. The ISFET is widely used in biomedical applications, such as the detection of DNA hybridization, biomarker detection from blood, antibody detection, glucose measurement, pH sensing, and genetic technology.

By the mid-1980s, other BioFETs had been developed, including the gas sensor FET (GASFET), pressure sensor FET (PRESSFET), chemical field-effect transistor (ChemFET), reference ISFET (REFET), enzyme-modified FET (ENFET) and immunologically modified FET (IMFET). By the early 2000s, BioFETs such as the DNA field-effect transistor (DNAFET), gene-modified FET (GenFET) and cell-potential BioFET (CPFET) had been developed.

Placement of biosensors

The appropriate placement of biosensors depends on their field of application, which may roughly be divided into biotechnology, agriculture, food technology and biomedicine.

In biotechnology, analysis of the chemical composition of a cultivation broth can be conducted in-line, on-line, at-line and off-line. As outlined by the US Food and Drug Administration (FDA), the sample is not removed from the process stream for in-line sensors, while it is diverted from the manufacturing process for on-line measurements. For at-line sensors, the sample may be removed and analyzed in close proximity to the process stream; an example is the monitoring of lactose in a dairy processing plant. Off-line biosensors, by contrast, are comparable to bioanalytical techniques that operate not in the field but in the laboratory. These techniques are mainly used in agriculture, food technology and biomedicine.

In medical applications, biosensors are generally categorized as in vitro or in vivo systems. An in vitro biosensor measurement takes place in a test tube, a culture dish, a microtiter plate or elsewhere outside a living organism. The sensor uses a bioreceptor and transducer as outlined above. An example of an in vitro biosensor is an enzyme-conductimetric biosensor for blood glucose monitoring. A standing challenge is to create biosensors that operate by the principle of point-of-care testing (POCT), i.e. at the location where the test is needed; the development of wearable biosensors is among such studies. The elimination of lab testing can save time and money. One application of a POCT biosensor is HIV testing in areas where it is difficult for patients to be tested: a biosensor can be sent directly to the location, and a quick and easy test can be performed.

[Figure: Biosensor implant for glucose monitoring in subcutaneous tissue (59x45x8 mm). Electronic components are hermetically enclosed in a Ti casing, while the antenna and sensor probe are moulded into the epoxy header.]

An in vivo biosensor is an implantable device that operates inside the body. Biosensor implants have to fulfill strict regulations on sterilization in order to avoid an initial inflammatory response after implantation. A second concern is long-term biocompatibility, i.e. unharmful interaction with the body environment during the intended period of use. Another issue is failure: if the device fails, it must be removed and replaced, requiring additional surgery. An example of an application of an in vivo biosensor would be insulin monitoring within the body, which is not yet available.

The most advanced biosensor implants have been developed for the continuous monitoring of glucose. The figure displays a device that uses a Ti casing and a battery, as established for cardiovascular implants such as pacemakers and defibrillators. Its size is determined by the battery required for a lifetime of one year. Measured glucose data are transmitted wirelessly out of the body within the MICS 402-405 MHz band approved for medical implants.

Biosensors can also be integrated into mobile phone systems, making them user-friendly and accessible to a large number of users.

Applications

[Figure: Biosensing of influenza virus using an antibody-modified boron-doped diamond]

There are many potential applications of biosensors of various types. The main requirements for a biosensor approach to be valuable in terms of research and commercial applications are the identification of a target molecule, the availability of a suitable biological recognition element, and the potential for disposable portable detection systems to be preferred to sensitive laboratory-based techniques in some situations. Some examples are:

  • glucose monitoring in diabetes patients, and other medical health-related targets
  • environmental applications, e.g. the detection of pesticides and river water contaminants such as heavy metal ions
  • remote sensing of airborne bacteria, e.g. in counter-bioterrorist activities
  • remote sensing of water quality in coastal waters, by describing online different aspects of clam ethology (biological rhythms, growth rates, spawning or death records) in groups of abandoned bivalves around the world
  • detection of pathogens
  • determining levels of toxic substances before and after bioremediation
  • detection and determination of organophosphates
  • routine analytical measurement of folic acid, biotin, vitamin B12 and pantothenic acid as an alternative to microbiological assay
  • determination of drug residues in food, such as antibiotics and growth promoters, particularly in meat and honey
  • drug discovery and evaluation of the biological activity of new compounds
  • protein engineering in biosensors
  • detection of toxic metabolites such as mycotoxins.

A common example of a commercial biosensor is the blood glucose biosensor, which uses the enzyme glucose oxidase to break blood glucose down. In doing so, it first oxidizes glucose and uses two electrons to reduce FAD (a component of the enzyme) to FADH2. This in turn is oxidized by the electrode in a number of steps. The resulting current is a measure of the concentration of glucose. In this case, the electrode is the transducer and the enzyme is the biologically active component.
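Since two electrons flow per glucose molecule oxidized, the charge passed at the electrode can be converted into an amount of glucose via Faraday's law. A Python sketch with an invented current value:

    F = 96485.0      # Faraday constant, C/mol
    N_ELECTRONS = 2  # electrons transferred per glucose (FAD -> FADH2)

    def glucose_moles(charge_coulombs):
        # Faraday's law: moles = Q / (n * F)
        return charge_coulombs / (N_ELECTRONS * F)

    i = 2e-6  # A, hypothetical steady sensor current
    t = 30.0  # s of measurement
    print(f"{glucose_moles(i * t):.2e} mol glucose oxidized in {t:.0f} s")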

A canary in a cage, as used by miners to warn of gas, could be considered a biosensor. Many of today's biosensor applications are similar, in that they use organisms which respond to toxic substances at much lower concentrations than humans can detect, to warn of their presence. Such devices can be used in environmental monitoring, trace gas detection and water treatment facilities.

Glucose monitoring

Commercially available glucose monitors rely on amperometric sensing of glucose by means of glucose oxidase, which oxidises glucose, producing hydrogen peroxide that is detected by the electrode. To overcome the limitations of amperometric sensors, there is a flurry of research into novel sensing methods, such as fluorescent glucose biosensors.

Interferometric reflectance imaging sensor

The interferometric reflectance imaging sensor (IRIS) is based on the principles of optical interference and consists of a silicon-silicon oxide substrate, standard optics, and low-powered coherent LEDs. When light is shone through a low-magnification objective onto the layered silicon-silicon oxide substrate, an interferometric signature is produced. As biomass, which has a similar refractive index to silicon oxide, accumulates on the substrate surface, a change in the interferometric signature occurs, and the change can be correlated to a quantifiable mass. Daaboul et al. used IRIS to achieve a label-free sensitivity of approximately 19 ng/mL. Ahn et al. improved the sensitivity of IRIS through a mass-tagging technique.

Since its initial publication, IRIS has been adapted to perform various functions. First, IRIS integrated a fluorescence imaging capability into the interferometric imaging instrument as a potential way to address fluorescence protein microarray variability. Briefly, the variation in fluorescence microarrays mainly derives from inconsistent protein immobilization on surfaces and can cause misdiagnoses in allergy microarrays. To correct for any variation in protein immobilization, data acquired in the fluorescence modality are normalized by the data acquired in the label-free modality. IRIS has also been adapted to perform single-nanoparticle counting, simply by switching the low-magnification objective used for label-free biomass quantification to a higher-magnification objective. This modality enables size discrimination in complex human biological samples. Monroe et al. used IRIS to quantify protein levels spiked into human whole blood and serum, and determined allergen sensitization in characterized human blood samples using zero sample processing. Other practical uses of this device include virus and pathogen detection.

Food analysis

There are several applications of biosensors in food analysis. In the food industry, optics coated with antibodies are commonly used to detect pathogens and food toxins. Commonly, the optical system in these biosensors is based on fluorescence, since this type of optical measurement can greatly amplify the signal.

A range of immunoassays and ligand-binding assays for the detection and measurement of small molecules, such as water-soluble vitamins and chemical contaminants (drug residues) such as sulfonamides and beta-agonists, have been developed for use on SPR-based sensor systems, often adapted from existing ELISA or other immunological assays. These are in widespread use across the food industry.

Detection/monitoring of pollutants

Biosensors could be used to monitor air, water, and soil pollutants such as pesticides; potentially carcinogenic, mutagenic, and/or toxic substances; and endocrine-disrupting chemicals.

For example, bionanotechnologists developed a viable biosensor, ROSALIND 2.0, that can detect levels of diverse water pollutants.

Ozone measurement

Because ozone filters out harmful ultraviolet radiation, the discovery of holes in the ozone layer of the earth's atmosphere has raised concern about how much ultraviolet light reaches the earth's surface. Of particular concern are the questions of how deeply into sea water ultraviolet radiation penetrates and how it affects marine organisms, especially plankton (floating microorganisms) and viruses that attack plankton. Plankton form the base of the marine food chains and are believed to affect our planet's temperature and weather by uptake of CO2 for photosynthesis.

Deneb Karentz, a researcher at the Laboratory of Radiobiology and Environmental Health (University of California, San Francisco), has devised a simple method for measuring ultraviolet penetration and intensity. Working in the Antarctic Ocean, she submerged to various depths thin plastic bags containing special strains of E. coli that are almost totally unable to repair ultraviolet radiation damage to their DNA. Bacterial death rates in these bags were compared with rates in unexposed control bags of the same organism. The bacterial "biosensors" revealed constant significant ultraviolet damage at depths of 10 m, and frequently at 20 and 30 m. Karentz plans additional studies of how ultraviolet may affect seasonal plankton blooms (growth spurts) in the oceans.

Metastatic cancer cell detection

Metastasis is the spread of cancer from one part of the body to another via either the circulatory system or the lymphatic system. Unlike radiology imaging tests (e.g. mammograms), which send forms of energy (x-rays, magnetic fields, etc.) through the body to take interior pictures, biosensors have the potential to directly test the malignant power of a tumor. The combination of a biological and a detector element allows for a small sample requirement, a compact design, rapid signals, rapid detection, and high selectivity and sensitivity for the analyte being studied. Compared to the usual radiology imaging tests, biosensors have the advantage of not only finding out how far a cancer has spread and checking whether treatment is effective, but also of being a cheaper and more efficient (in time, cost and productivity) way to assess metastatic potential in the early stages of cancer.

Biological engineering researchers have created oncological biosensors for breast cancer, the most common cancer among women worldwide. An example is a transferrin-quartz crystal microbalance (QCM). As a biosensor, a quartz crystal microbalance produces oscillations in the frequency of the crystal's standing wave under an alternating potential in order to detect nanogram-scale mass changes. These biosensors are specifically designed to interact with, and have high selectivity for, receptors on (cancerous and normal) cell surfaces. Ideally, this provides a quantitative detection of cells bearing this receptor per surface area, instead of the qualitative picture given by mammograms.

Seda Atay, a biotechnology researcher at Hacettepe University, experimentally observed this specificity and selectivity between a QCM and MDA-MB 231 breast cancer cells, MCF 7 cells, and starved MDA-MB 231 cells in vitro. With other researchers, she devised a method of washing these cells, of differing metastatic levels, over the sensors to measure mass shifts due to their different quantities of transferrin receptors. In particular, the metastatic power of breast cancer cells can be determined by quartz crystal microbalances with nanoparticles and transferrin, which potentially attach to transferrin receptors on cancer cell surfaces. The selectivity for transferrin receptors is very high because they are over-expressed in cancer cells. Cells with high expression of transferrin receptors, which indicates high metastatic power, have higher affinity and bind more strongly to the QCM, which measures the increase in mass. Depending on the magnitude of the nanogram-scale mass change, the metastatic power can be determined.

Additionally, in recent years, significant attention has been focused on detecting biomarkers of lung cancer without biopsy. In this regard, biosensors are very attractive and applicable tools for providing rapid, sensitive, specific, stable, cost-effective and non-invasive detection for early lung cancer diagnosis. Cancer biosensors consist of specific biorecognition molecules, such as antibodies, complementary nucleic acid probes or other immobilized biomolecules, on a transducer surface. The biorecognition molecules interact specifically with the biomarkers (targets), and the generated biological responses are converted by the transducer into a measurable analytical signal. Depending on the type of biological response, various transducers are utilized in the fabrication of cancer biosensors, such as electrochemical, optical and mass-based transducers.

Pathogen detection

Biosensors could be used for the detection of pathogenic organisms.

Wearable embedded biosensors for pathogenic signatures, such as those of SARS-CoV-2, have been developed; examples include face masks with built-in tests.

Types

Optical biosensors

Many optical biosensors are based on the phenomenon of surface plasmon resonance (SPR). This utilises a property of gold and other materials: specifically, that a thin layer of gold on a high-refractive-index glass surface can absorb laser light, producing electron waves (surface plasmons) on the gold surface. This occurs only at a specific angle and wavelength of incident light and is highly dependent on the surface of the gold, such that the binding of a target analyte to a receptor on the gold surface produces a measurable signal.

Surface plasmon resonance sensors operate using a sensor chip consisting of a plastic cassette supporting a glass plate, one side of which is coated with a microscopic layer of gold. This side contacts the optical detection apparatus of the instrument. The opposite side is contacted with a microfluidic flow system, which creates channels across which reagents can be passed in solution. This side of the glass sensor chip can be modified in a number of ways to allow easy attachment of molecules of interest. Normally it is coated in carboxymethyl dextran or a similar compound.

The refractive index at the flow side of the chip surface has a direct influence on the behavior of the light reflected off the gold side. Binding to the flow side of the chip affects the refractive index, and in this way biological interactions can be measured with a high degree of sensitivity: the refractive index of the medium near the surface changes when biomolecules attach to the surface, and the SPR angle varies as a function of this change.

Light of a fixed wavelength is reflected off the gold side of the chip at the angle of total internal reflection and detected inside the instrument. The angle of incident light is varied in order to match the propagation rate of the evanescent wave with the propagation rate of the surface plasmon polaritons. This induces the evanescent wave to penetrate through the glass plate and some distance into the liquid flowing over the surface.
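The resonance condition in this geometry is that the in-plane wavevector of the light matches that of the surface plasmon: n_glass sin θ = Re[√(ε_m ε_d / (ε_m + ε_d))]. A Python sketch with typical textbook permittivities near 633 nm, used here purely as assumptions:

    import cmath, math

    n_glass = 1.515          # high-index glass plate (assumed)
    eps_gold = -12.0 + 1.3j  # gold permittivity near 633 nm (approximate)

    def spr_angle_deg(n_medium):
        # Angle at which the evanescent wave matches the plasmon wavevector.
        eps_d = n_medium ** 2
        k_spp = cmath.sqrt(eps_gold * eps_d / (eps_gold + eps_d)).real
        return math.degrees(math.asin(k_spp / n_glass))

    print(f"buffer (n=1.333): {spr_angle_deg(1.333):.2f} deg")
    print(f"+bound (n=1.338): {spr_angle_deg(1.338):.2f} deg")

Even the small refractive-index change caused by a bound biolayer (here 0.005) shifts the resonance angle by close to a degree, which is the shift the instrument tracks.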

Other optical biosensors are mainly based on changes in the absorbance or fluorescence of an appropriate indicator compound and do not need a total internal reflection geometry. For example, a fully operational prototype device detecting casein in milk has been fabricated, based on detecting changes in the absorption of a gold layer. A widely used research tool, the microarray, can also be considered a biosensor.

Biological biosensors

Biological biosensors, also known as optogenetic sensors, often incorporate a genetically modified form of a native protein or enzyme. The protein is configured to detect a specific analyte, and the ensuing signal is read by a detection instrument such as a fluorometer or luminometer. An example of a recently developed biosensor is one for detecting the cytosolic concentration of the analyte cAMP (cyclic adenosine monophosphate), a second messenger involved in cellular signaling triggered by ligands interacting with receptors on the cell membrane. Similar systems have been created to study cellular responses to native ligands or xenobiotics (toxins or small-molecule inhibitors). Such "assays" are commonly used in drug discovery and development by pharmaceutical and biotechnology companies. Most cAMP assays in current use require lysis of the cells prior to measurement of cAMP. A live-cell biosensor for cAMP can be used in non-lysed cells, with the additional advantage of multiple reads to study the kinetics of receptor response.

Nanobiosensors use an immobilized bioreceptor probe that is selective for target analyte molecules. Nanomaterials are exquisitely sensitive chemical and biological sensors: nanoscale materials demonstrate unique properties, and their large surface-area-to-volume ratio enables rapid, low-cost reactions using a variety of designs.

Other evanescent-wave biosensors have been commercialised using waveguides in which the propagation constant through the waveguide is changed by the adsorption of molecules onto the waveguide surface. One such example, dual-polarisation interferometry, uses a buried waveguide as a reference against which the change in propagation constant is measured. Other configurations, such as the Mach-Zehnder, have reference arms lithographically defined on a substrate. Higher levels of integration can be achieved using resonator geometries, where the resonant frequency of a ring resonator changes when molecules are adsorbed.
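For a ring resonator, resonance occurs when a whole number of wavelengths fits around the ring: m λ = n_eff L. Adsorbed molecules raise the effective index n_eff and shift the resonance. A Python sketch with illustrative, assumed values:

    L = 200e-6    # ring circumference, m (assumed)
    n_eff = 1.70  # effective index of the bare waveguide (assumed)
    m = round(n_eff * L / 1.55e-6)  # longitudinal mode nearest 1550 nm

    def resonance_nm(n):
        # Resonant wavelength for effective index n, in nanometres.
        return n * L / m * 1e9

    shift = resonance_nm(n_eff + 2e-4) - resonance_nm(n_eff)
    print(f"mode m = {m}; dn_eff = 2e-4 -> shift = {shift:.3f} nm")

An adsorption-induced index change of only 2e-4 shifts the resonance by about 0.2 nm, which is readily resolved by a tunable laser or spectrometer.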

Electronic nose devices

Recently, arrays of many different detector molecules have been applied in so-called electronic nose devices, where the pattern of response from the detectors is used to fingerprint a substance. In the Wasp Hound odor detector, the mechanical element is a video camera and the biological element is five parasitic wasps that have been conditioned to swarm in response to the presence of a specific chemical. Current commercial electronic noses, however, do not use biological elements.

DNA biosensors

DNA can be the analyte of a biosensor, being detected through specific means, but it can also be used as part of a biosensor or, theoretically, even as a whole biosensor.

Many techniques exist to detect DNA, usually as a means of detecting organisms that carry that particular DNA. DNA sequences can also be used as described above. More forward-looking approaches exist as well, in which DNA can be synthesized to hold enzymes in a stable, biological gel. Other applications include the design of aptamers, DNA sequences that have a specific shape to bind a desired molecule. The most innovative processes use DNA origami for this, creating sequences that fold into a predictable structure that is useful for detection.

Scientists have built prototype sensors to detect the DNA of animals from air that is sucked in, so-called "airborne eDNA".

"Nanoantennas" made out of DNA – a novel type of nano-scale optical antenna – can be attached to proteins and produce a signal via fluorescence when these perform their biological functions, in particular for distinct conformational changes.

Graphene-based biosensor

Graphene is a two-dimensional carbon-based material with superior optical, electrical, mechanical, and thermal properties. Its ability to absorb and immobilize a variety of proteins, particularly those with carbon ring structures, has proven graphene to be an excellent candidate as a biosensor transducer. As a result, various graphene-based biosensors have been explored and developed in recent times.

Genetic pollution

From Wikipedia, the free encyclopedia

Genetic pollution is a controversial term for uncontrolled gene flow into wild populations. It has been defined as "the dispersal of contaminated altered genes from genetically engineered organisms to natural organisms, esp. by cross-pollination", but has come to be used in some broader ways. It is related to the population-genetics concept of gene flow, and to genetic rescue, the intentional introduction of genetic material to increase the fitness of a population. Gene flow is called genetic pollution when it negatively impacts the fitness of a population, for example through outbreeding depression or the introduction of unwanted phenotypes, which can lead to extinction.

Conservation biologists and conservationists have used the term to describe gene flow from domestic, feral, and non-native species into wild indigenous species, which they consider undesirable. They promote awareness of the effects of introduced invasive species that may "hybridize with native species, causing genetic pollution". In the fields of agriculture, agroforestry and animal husbandry, genetic pollution is used to describe gene flow between genetically engineered species and wild relatives. The use of the word "pollution" is meant to convey the idea that mixing genetic information is bad for the environment; however, because the mixing of genetic information can lead to a variety of outcomes, "pollution" may not always be the most accurate descriptor.

Gene flow to wild population

Some conservation biologists and conservationists have used genetic pollution for a number of years as a term to describe gene flow from a non-native, invasive subspecies, domestic, or genetically-engineered population to a wild indigenous population.

Importance

The introduction of genetic material into the gene pool of a population by human intervention can have both positive and negative effects on populations. When genetic material is intentionally introduced to increase the fitness of a population, this is called genetic rescue. When genetic material is unintentionally introduced to a population, this is called genetic pollution and can negatively affect the fitness of a population (primarily through outbreeding depression), introduce other unwanted phenotypes, or theoretically lead to extinction.

Introduced species

An introduced species is one that is not native to a given population and is either intentionally or accidentally brought into a given ecosystem. The effects of introduction are highly variable, but if an introduced species has a major negative impact on its new environment, it can be considered an invasive species. One such example is the introduction of the Asian longhorned beetle in North America, which was first detected in 1996 in Brooklyn, New York. It is believed that these beetles were introduced through cargo at trade ports. The beetles are highly damaging to the environment and are estimated to put 35% of urban trees at risk, excluding natural forests. They cause severe damage to the wood of trees by larval tunneling. Their presence in the ecosystem destabilizes the community structure, having a negative influence on many species in the system.

Introduced species are not always disruptive to an environment, however. Tomás Carlo and Jason Gleditch of Penn State University found that the number of "invasive" honeysuckle plants in the Happy Valley Region of Pennsylvania correlated with the number and diversity of the birds there, suggesting that the introduced honeysuckle plants and the birds formed a mutually beneficial relationship. The presence of introduced honeysuckle was associated with a higher diversity of the bird populations in that area, demonstrating that introduced species are not always detrimental to a given environment: the outcome is context-dependent.

Invasive species

Conservation biologists and conservationists have, for a number of years, used the term to describe gene flow from domestic, feral, and non-native species into wild indigenous species, which they consider undesirable. For example, TRAFFIC is the international wildlife trade monitoring network that works to limit trade in wild plants and animals so that it is not a threat to conservationist goals. They promote awareness of the effects of introduced invasive species that may "hybridize with native species, causing genetic pollution". Furthermore, the Joint Nature Conservation Committee, the statutory adviser to the UK government, has stated that invasive species "will alter the genetic pool (a process called genetic pollution), which is an irreversible change."

Invasive species can invade both large and small native populations and have a profound effect. Upon invasion, invasive species interbreed with native species to form sterile or more evolutionarily fit hybrids that can outcompete the native populations. Invasive species can cause the extinction of small populations on islands, which are particularly vulnerable due to their smaller amounts of genetic diversity. In these populations, local adaptations can be disrupted by the introduction of new genes that may not be as suitable for the small island environments. For example, Cercocarpus traskiae of Catalina Island off the coast of California has faced near-extinction, with only a single population remaining, due to the hybridization of its offspring with Cercocarpus betuloides.

Domestic populations

Increased contact between wild and domesticated populations of organisms can lead to reproductive interactions that are detrimental to the wild population's ability to survive. A wild population is one that lives in natural areas and is not regularly looked after by humans. This contrasts with domesticated populations that live in human controlled areas and are regularly, and historically, in contact with humans. Genes from domesticated populations are added to wild populations as a result of reproduction. In many crop populations this can be the result of pollen traveling from farmed crops to neighboring wild plants of the same species. For farmed animals, this reproduction may happen as the result of escaped or released animals.

Aquaculture

Aquaculture is the practice of farming aquatic animals or plants for consumption. This practice is becoming increasingly common for the production of salmon, where it is specifically termed salmonid aquaculture. One of the dangers of this practice is the possibility of domesticated salmon breaking free from their containment. Escape incidents are becoming increasingly common as aquaculture gains popularity: farming structures may be ineffective at holding the vast numbers of fast-growing animals they house, and natural disasters, high tides, and other environmental occurrences can also trigger aquatic animal escapes. These escapes are considered dangerous because of the impact they have on the wild populations the escapees reproduce with. In many instances, the wild population experiences a decreased likelihood of survival after reproducing with domesticated populations of salmon.

The Washington Department of Fish and Wildlife notes that "commonly expressed concerns surrounding escaped Atlantic salmon include competition with native salmon, predation, disease transfer, hybridization, and colonization." However, a report by that organization in 1999 did not find that escaped salmon posed a significant risk to wild populations.

Crops

Crops are groups of plants grown for consumption. Despite domestication over many years, these plants are not so far removed from their wild relatives that the two could not reproduce if brought together. Many crops are still grown in the areas where they originated, and gene flow between crops and wild relatives impacts the evolution of wild populations. Farmers can avoid reproduction between the different populations by timing their planting so that the crops are not flowering when their wild relatives are. Domesticated crops have been changed through artificial selection and genetic engineering. The genetic make-up of many crops differs from that of their wild relatives, but the closer the two grow to one another, the more likely they are to share genes through pollen. Gene flow persists between crops and their wild counterparts.

Genetically engineered organisms

Genetically engineered organisms are genetically modified in a laboratory and are therefore distinct from those that were bred through artificial selection. In the fields of agriculture, agroforestry and animal husbandry, genetic pollution is used to describe gene flow between GE species and wild relatives. An early use of the term "genetic pollution" in this latter sense appears in a wide-ranging review of the potential ecological effects of genetic engineering in The Ecologist magazine in July 1989. It was also popularized by environmentalist Jeremy Rifkin in his 1998 book The Biotech Century. While intentional crossbreeding between two genetically distinct varieties is described as hybridization, with the subsequent introgression of genes, Rifkin, who had played a leading role in the ethical debate for over a decade before, used "genetic pollution" to describe what he considered to be problems that might occur when genetically modified organisms (GMOs) unintentionally disperse their genes into the natural environment by breeding with wild plants or animals.

Concerns about negative consequences of gene flow between genetically engineered organisms and wild populations are valid. Most corn and soybean crops grown in the midwestern USA are genetically modified: there are corn and soybean varieties resistant to herbicides such as glyphosate, and corn that produces neonicotinoid pesticide within all of its tissues. These genetic modifications are meant to increase crop yields, but there is little evidence that yields actually increase. While scientists are concerned that genetically engineered organisms can have negative effects on surrounding plant and animal communities, the risk of gene flow between genetically engineered organisms and wild populations is yet another concern. Many farmed crops are herbicide resistant and may reproduce with wild relatives. More research is necessary to understand how much gene flow occurs between genetically engineered crops and wild populations, and what the impacts of genetic mixing are.

Mutated organisms

Mutations can be induced in organisms by exposing them to chemicals or radiation. This has been done in plants to create mutants with a desired trait, which can then be bred with other mutants, or with non-mutated individuals, in order to maintain the mutant trait. However, as with the risks of introducing individuals into a new environment, the variation created by mutated individuals could also have a negative impact on native populations.

Preventive measures

Since 2005, GeneWatch UK and Greenpeace International have maintained a GM Contamination Register, which records all incidents of intentional or accidental release of organisms genetically modified using modern techniques.

Genetic use restriction technologies (GURTs) were developed for the purpose of protecting intellectual property, but could be beneficial in preventing the dispersal of transgenes. GeneSafe technologies introduced a method that became known as "Terminator," based on seeds that produce sterile plants. This would prevent the movement of transgenes into wild populations, as hybridization would not be possible. However, the technology has never been deployed, because it would disproportionately harm farmers in developing countries, who save seed to replant each year (whereas farmers in developed countries generally buy seed from seed production companies).

Physical containment has also been used to prevent the escape of transgenes. Physical containment includes barriers such as filters in labs, screens in greenhouses, and isolation distances in the field. Isolation distances have not always been successful, as shown by the escape of a transgene from an isolated field trial of herbicide-resistant creeping bentgrass (Agrostis stolonifera) into the wild.

Another suggested method, which applies specifically to protection traits (e.g. pathogen resistance), is mitigation. Mitigation involves linking the positive trait (beneficial to fitness) to a trait that is negative (harmful to fitness) for wild but not domesticated individuals. In this case, if the protection trait were introduced to a weed, the negative trait would be introduced along with it, decreasing the weed's overall fitness and reducing the chance of the individual reproducing and thus propagating the transgene.

Risks

Not all genetically engineered organisms cause genetic pollution. Genetic engineering has a variety of uses and is specifically defined as the direct manipulation of the genome of an organism. Genetic pollution can occur when a species that is not native to a particular environment is introduced, and genetically engineered organisms are examples of individuals that could cause genetic pollution following introduction. Because of these risks, studies have been conducted to assess the genetic pollution risks associated with genetically engineered organisms:

  1. In a 10-year study of four different crops, none of the genetically engineered plants were found to be more invasive or more persistent than their conventional counterparts. An often-cited example of genetic pollution is the reputed discovery of transgenes from GE maize in landraces of maize in Oaxaca, Mexico. The report, from Quist and Chapela, has since been discredited on methodological grounds. The scientific journal that originally published the study concluded that "the evidence available is not sufficient to justify the publication of the original paper." More recent attempts to replicate the original study concluded that genetically modified corn was absent from southern Mexico in 2003 and 2004.
  2. A 2009 study verified the original findings of the controversial 2001 study by finding transgenes in about 1% of 2000 samples of wild maize in Oaxaca, Mexico, despite Nature retracting the 2001 study and a second study failing to back up its findings. The study found that the transgenes are common in some fields but non-existent in others, explaining why a previous study failed to find them. Furthermore, not every laboratory method managed to detect the transgenes.
  3. A 2004 study performed near an Oregon field trial for a genetically modified variety of creeping bentgrass (Agrostis stolonifera) revealed that the transgene and its associated trait (resistance to the glyphosate herbicide) could be transmitted by wind pollination to resident plants of different Agrostis species, up to 14 km from the test field. In 2007, the Scotts Company, producer of the genetically modified bentgrass, agreed to pay a civil penalty of $500,000 to the United States Department of Agriculture (USDA). The USDA alleged that Scotts "failed to conduct a 2003 Oregon field trial in a manner which ensured that neither glyphosate-tolerant creeping bentgrass nor its offspring would persist in the environment".

Risks emerge not only from genetic engineering but also from species hybridization itself. In Czechoslovakia, ibex were introduced from Turkey and Sinai to help promote the local ibex population; the resulting hybrids produced offspring too early in the season, and the overall population eventually disappeared completely. The genes of the Turkish and Sinai ibex populations were locally adapted to their own environments, so the animals did not flourish when placed in a new environmental context. Additionally, the environmental toll of introducing a new species may be so disruptive that the ecosystem is no longer able to sustain certain populations.

Controversy

Environmentalist perspectives

The use of the word "pollution" in the term genetic pollution has a deliberate negative connotation and is meant to convey the idea that mixing genetic information is bad for the environment. However, because the mixing of genetic information can lead to a variety of outcomes, "pollution" may not be the most accurate descriptor. Gene flow is undesirable according to some environmentalists and conservationists, including groups such as Greenpeace, TRAFFIC, and GeneWatch UK.

"Invasive species have been a major cause of extinction throughout the world in the past few hundred years. Some of them prey on native wildlife, compete with it for resources, or spread disease, while others may hybridize with native species, causing "genetic pollution". In these ways, invasive species are as big a threat to the balance of nature as the direct overexploitation by humans of some species."

Gene flow can also be considered undesirable if it leads to a loss of fitness in wild populations. The term is associated with gene flow from a mutation-bred, synthetic, or genetically engineered organism to a non-GE organism by those who consider such gene flow detrimental. These environmentalist groups stand in complete opposition to the development and production of genetically engineered organisms.

Governmental definition

From a governmental perspective, genetic pollution is defined as follows by the Food and Agriculture Organization of the United Nations:

"Uncontrolled spread of genetic information (frequently referring to transgenes) into the genomes of organisms in which such genes are not present in nature."

Scientific perspectives

Use of the term 'genetic pollution', and of similar phrases such as genetic deterioration, genetic swamping, genetic takeover, and genetic aggression, is debated by scientists, as many do not find it scientifically appropriate. Rhymer and Simberloff argue that these types of terms:

"...imply either that hybrids are less fit than the parentals, which need not be the case, or that there is an inherent value in "pure" gene pools."

They recommend that gene flow from invasive species be termed genetic mixing since:

" "Mixing" need not be value-laden, and we use it here to denote mixing of gene pools whether or not associated with a decline in fitness."

Patrick Moore has questioned whether the term "genetic pollution" is more political than scientific, since it is seen as arousing emotional responses to the subject matter. In an interview he comments:

"If you take a term used quite frequently these days, the term "genetic pollution," otherwise referred to as genetic contamination, it is a propaganda term, not a technical or scientific term. Pollution and contamination are both value judgments. By using the word "genetic" it gives the public the impression that they are talking about something scientific or technical--as if there were such a thing as genes that amount to pollution."

Thus, using the term "genetic pollution" is inherently political. A more scientific way to discuss gene flow between introduced and native species is to use terms such as genetic mixing or gene flow. Such mixing can certainly have negative consequences for the fitness of native populations, so the risk should not be minimized. However, because genetic mixing can also lead to fitness recovery in cases that could be described as "genetic rescue", it is important to recognize that mixing genes from introduced populations into native ones can lead to variable outcomes for the fitness of native populations.

Wireless telegraphy

From Wikipedia, the free encyclopedia

A US Army Signal Corps radio operator in 1943 in New Guinea transmitting by radiotelegraphy

Wireless telegraphy or radiotelegraphy is transmission of telegraph signals by radio waves. Before about 1910, the term wireless telegraphy was also used for other experimental technologies for transmitting telegraph signals without wires. In radiotelegraphy, information is transmitted by pulses of radio waves of two different lengths called "dots" and "dashes", which spell out text messages, usually in Morse code. In a manual system, the sending operator taps on a switch called a telegraph key which turns the transmitter on and off, producing the pulses of radio waves. At the receiver the pulses are audible in the receiver's speaker as beeps, which are translated back to text by an operator who knows Morse code.

Radiotelegraphy was the first means of radio communication. The first practical radio transmitters and receivers invented in 1894–1895 by Guglielmo Marconi used radiotelegraphy. It continued to be the only type of radio transmission during the first few decades of radio, called the "wireless telegraphy era" up until World War I, when the development of amplitude modulation (AM) radiotelephony allowed sound (audio) to be transmitted by radio. Beginning about 1908, powerful transoceanic radiotelegraphy stations transmitted commercial telegram traffic between countries at rates up to 200 words per minute.

Radiotelegraphy was used for long-distance person-to-person commercial, diplomatic, and military text communication throughout the first half of the 20th century. It became a strategically important capability during the two world wars since a nation without long-distance radiotelegraph stations could be isolated from the rest of the world by an enemy cutting its submarine telegraph cables. Radiotelegraphy remains popular in amateur radio. It is also taught by the military for use in emergency communications. However, commercial radiotelegraphy is obsolete.

Overview

Amateur radio operator transmitting Morse code

Wireless telegraphy or radiotelegraphy, commonly called CW (continuous wave), ICW (interrupted continuous wave) transmission, or on-off keying, and designated by the International Telecommunication Union as emission type A1A or A2A, is a radio communication method that has been transmitted by several different modulation methods over its history. The primitive spark-gap transmitters used until 1920 transmitted damped waves, which had very wide bandwidth and tended to interfere with other transmissions. This type of emission was banned by 1934, except for some legacy use on ships. The vacuum tube (valve) transmitters which came into use after 1920 transmitted code as pulses of unmodulated sinusoidal carrier wave, called continuous wave (CW), which is still used today. To receive CW transmissions, the receiver requires a circuit called a beat frequency oscillator (BFO). A third type of modulation, frequency-shift keying (FSK), was used mainly by radioteletype (RTTY) networks. Morse code radiotelegraphy was gradually replaced by radioteletype in most high-volume applications by World War II.

In manual radiotelegraphy the sending operator manipulates a switch called a telegraph key, which turns the radio transmitter on and off, producing pulses of unmodulated carrier wave of different lengths called "dots" and "dashes", which encode characters of text in Morse code. At the receiving location, Morse code is audible in the receiver's earphone or speaker as a sequence of buzzes or beeps, which is translated back to text by an operator who knows Morse code. In automatic radiotelegraphy, teleprinters at both ends use a code such as the International Telegraph Alphabet No. 2 and produce typed text.
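
As a rough illustration of this encoding, the following Python sketch (not part of the original article; the names and timing constants are illustrative, following the common convention that a dash lasts three dot-units, with one-, three-, and seven-unit gaps within characters, between characters, and between words) converts text to dot/dash groups and to an on/off keying schedule:

    # Illustrative sketch: International Morse code, letters only.
    MORSE = {
        "A": ".-",   "B": "-...", "C": "-.-.", "D": "-..",  "E": ".",
        "F": "..-.", "G": "--.",  "H": "....", "I": "..",   "J": ".---",
        "K": "-.-",  "L": ".-..", "M": "--",   "N": "-.",   "O": "---",
        "P": ".--.", "Q": "--.-", "R": ".-.",  "S": "...",  "T": "-",
        "U": "..-",  "V": "...-", "W": ".--",  "X": "-..-", "Y": "-.--",
        "Z": "--..",
    }

    def encode(text):
        """Return dot/dash groups: characters separated by spaces, words by ' / '."""
        words = text.upper().split()
        return " / ".join(" ".join(MORSE[c] for c in w if c in MORSE) for w in words)

    def key_schedule(code, unit=1):
        """Yield (key_down, duration_in_units) pairs for an encoded message."""
        for word in code.split(" / "):
            for char in word.split(" "):
                for element in char:
                    yield (True, unit if element == "." else 3 * unit)
                    yield (False, unit)      # one-unit gap between elements
                yield (False, 2 * unit)      # pad to a three-unit gap between characters
            yield (False, 4 * unit)          # pad to a seven-unit gap between words

    print(encode("CQ DX"))                   # -.-. --.- / -.. -..-

An operator's key implements such a schedule by hand; the speed of a station is set by the length of the dot unit.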

Radiotelegraphy is obsolete in commercial radio communication, and its last civilian use, requiring maritime shipping radio operators to use Morse code for emergency communications, ended in 1999 when the International Maritime Organization switched to the satellite-based GMDSS system. However it is still used by amateur radio operators, and military services require signalmen to be trained in Morse code for emergency communication. A CW coastal station, KSM, still exists in California, run primarily as a museum by volunteers, and occasional contacts with ships are made. In a minor legacy use, VHF omnidirectional range (VOR) and NDB radio beacons in the aviation radio navigation service still transmit their one to three letter identifiers in Morse code.

Radiotelegraphy is popular amongst radio amateurs worldwide, who commonly refer to it as continuous wave, or just CW. A 2021 analysis of over 700 million communications logged by the Club Log blog, and a similar review of data logged by the American Radio Relay League, both show that wireless telegraphy is the second most popular mode of amateur radio communication, accounting for nearly 20% of contacts. This makes it more popular than voice communication, but not as popular as the FT8 digital mode, which accounted for 60% of amateur radio contacts made in 2021. Since 2003, knowledge of Morse code and wireless telegraphy has no longer been required to obtain an amateur radio license in many countries; it is, however, still required in some countries to obtain a licence of a different class. As of 2021, licence Class A in Belarus and Estonia, the General class in Monaco, and Class 1 in Ukraine require Morse proficiency to access the full amateur radio spectrum, including the high frequency (HF) bands. Further, the CEPT Class 1 licence in Ireland and Class 1 in Russia, both of which require proficiency in wireless telegraphy, offer additional privileges: a shorter and more desirable call sign in both countries, and the right to use a higher transmit power in Russia.

Non-radio methods

Efforts to find a way to transmit telegraph signals without wires grew out of the success of electric telegraph networks, the first instant telecommunication systems. Developed beginning in the 1830s, a telegraph line was a person-to-person text message system consisting of multiple telegraph offices linked by an overhead wire supported on telegraph poles. To send a message, an operator at one office would tap on a switch called a telegraph key, creating pulses of electric current which spelled out a message in Morse code. When the key was pressed, it would connect a battery to the telegraph line, sending current down the wire. At the receiving office, the current pulses would operate a telegraph sounder, a device that would make a "click" sound when it received each pulse of current. The operator at the receiving station who knew Morse code would translate the clicking sounds to text and write down the message. The ground was used as the return path for current in the telegraph circuit, to avoid having to use a second overhead wire.

By the 1860s, the telegraph was the standard way to send most urgent commercial, diplomatic and military messages, and industrial nations had built continent-wide telegraph networks, with submarine telegraph cables allowing telegraph messages to bridge oceans. However, installing and maintaining a telegraph line linking distant stations was very expensive, and wires could not reach some locations, such as ships at sea. Inventors realized that if a way could be found to send electrical impulses of Morse code between separate points without a connecting wire, it could revolutionize communications.

The successful solution to this problem was the discovery of radio waves in 1887, and the development of practical radiotelegraphy transmitters and receivers by about 1899, described in the next section. However, this was preceded by a 50-year history of ingenious but ultimately unsuccessful experiments by inventors to achieve wireless telegraphy by other means.

Ground, water, and air conduction

Several wireless electrical signaling schemes based on the (sometimes erroneous) idea that electric currents could be conducted long-range through water, ground, and air were investigated for telegraphy before practical radio systems became available.

The original telegraph lines used two wires between the two stations to form a complete electrical circuit or "loop". In 1837, however, Carl August von Steinheil of Munich, Germany, found that by connecting one leg of the apparatus at each station to metal plates buried in the ground, he could eliminate one wire and use a single wire for telegraphic communication. This led to speculation that it might be possible to eliminate both wires and transmit telegraph signals through the ground without any wires connecting the stations. Other attempts were made to send the electric current through bodies of water, for example to span rivers. Prominent experimenters along these lines included Samuel F. B. Morse in the United States and James Bowman Lindsay in Great Britain, who, in August 1854, demonstrated transmission across a mill dam at a distance of 500 yards (457 metres).

Tesla's explanation in the 1919 issue of "Electrical Experimenter" on how he thought his wireless system would work

US inventors William Henry Ward (1871) and Mahlon Loomis (1872) developed electrical conduction systems based on the erroneous belief that there was an electrified atmospheric stratum accessible at low altitude. They thought atmospheric current, connected with a return path using "Earth currents", would allow for wireless telegraphy as well as supply power for the telegraph, doing away with artificial batteries. A more practical demonstration of wireless transmission via conduction came in Amos Dolbear's 1879 magneto-electric telephone, which used ground conduction to transmit over a distance of a quarter of a mile.

In the 1890s the inventor Nikola Tesla worked on an air and ground conduction wireless electric power transmission system, similar to Loomis's, which he planned would include wireless telegraphy. Tesla's experiments had led him to incorrectly conclude that he could use the entire globe of the Earth to conduct electrical energy, and his large-scale application of these ideas in 1901, a high-voltage wireless power station now called Wardenclyffe Tower, lost funding and was abandoned after a few years.

Telegraphic communication using earth conductivity was eventually found to be limited to impractically short distances, as was communication conducted through water, or between trenches during World War I.

Electrostatic and electromagnetic induction

Thomas Edison's 1891 patent for a ship-to-shore wireless telegraph that used electrostatic induction

Both electrostatic and electromagnetic induction were used to develop wireless telegraph systems that saw limited commercial application. In the United States, Thomas Edison, in the mid-1880s, patented an electromagnetic induction system he called "grasshopper telegraphy", which allowed telegraphic signals to jump the short distance between a running train and telegraph wires running parallel to the tracks. This system was successful technically but not economically, as there turned out to be little interest among train travelers in an on-board telegraph service. During the Great Blizzard of 1888, this system was used to send and receive wireless messages from trains buried in snowdrifts. The disabled trains were able to maintain communications via their Edison induction wireless telegraph systems, perhaps the first successful use of wireless telegraphy to send distress calls. Edison would also help to patent a ship-to-shore communication system based on electrostatic induction.

The most successful creator of an electromagnetic induction telegraph system was William Preece, chief engineer of Post Office Telegraphs of the General Post Office (GPO) in the United Kingdom. Preece first noticed the effect in 1884 when overhead telegraph wires in Grays Inn Road were accidentally carrying messages sent on buried cables. Tests in Newcastle succeeded in sending a quarter of a mile using parallel rectangles of wire. In tests across the Bristol Channel in 1892, Preece was able to telegraph across gaps of about 5 kilometres (3.1 miles). However, his induction system required extensive lengths of antenna wires, many kilometers long, at both the sending and receiving ends. The length of those sending and receiving wires needed to be about the same length as the width of the water or land to be spanned. For example, for Preece's station to span the English Channel from Dover, England, to the coast of France would require sending and receiving wires of about 30 miles (48 kilometres) along the two coasts. These facts made the system impractical on ships, boats, and ordinary islands, which are much smaller than Great Britain or Greenland. Also, the relatively short distances that a practical Preece system could span meant that it had few advantages over underwater telegraph cables.

Radiotelegraphy

British Post Office engineers inspect Marconi's transmitter (center) and receiver (bottom) on Flat Holm, May 1897
 
Typical commercial radiotelegraphy receiver from the first decade of the 20th century. The "dots" and "dashes" of Morse code were recorded in ink on paper tape by a siphon recorder (left).
 
Example of transatlantic radiotelegraph message recorded on paper tape at RCA's New York receiving center in 1920. The translation of the Morse code is given below the tape.

Over several years starting in 1894, the Italian inventor Guglielmo Marconi worked on adapting the newly discovered phenomenon of radio waves to communication, turning what had been essentially a laboratory experiment into a useful communication system and building the first radiotelegraphy system using them. Preece and the GPO in Britain at first supported and gave financial backing to Marconi's experiments conducted on Salisbury Plain from 1896; Preece had become convinced of the idea through his experiments with wireless induction. However, the backing was withdrawn when Marconi formed the Wireless Telegraph & Signal Company. GPO lawyers determined that the system was a telegraph under the meaning of the Telegraph Act and thus fell under the Post Office monopoly. This did not seem to hold Marconi back. After Marconi sent wireless telegraphic signals across the Atlantic Ocean in 1901, the system began being used for regular communication, including ship-to-shore and ship-to-ship communication.

With this development, wireless telegraphy came to mean radiotelegraphy, Morse code transmitted by radio waves. The first radio transmitters, primitive spark gap transmitters used until World War I, could not transmit voice (audio signals). Instead, the operator would send the text message on a telegraph key, which turned the transmitter on and off, producing short ("dot") and long ("dash") pulses of radio waves, groups of which comprised the letters and other symbols of the Morse code. At the receiver, the signals could be heard as musical "beeps" in the earphones by the receiving operator, who would translate the code back into text. By 1910, communication by what had been called "Hertzian waves" was being universally referred to as "radio", and the term wireless telegraphy has been largely replaced by the more modern term "radiotelegraphy".

Continuous wave (CW)

The primitive spark-gap transmitters used until 1920 transmitted by a modulation method called damped wave. As long as the telegraph key was pressed, the transmitter would produce a string of transient pulses of radio waves which repeated at an audio rate, usually between 50 and several thousand hertz. In a receiver's earphone, this sounded like a musical tone, rasp or buzz. Thus the Morse code "dots" and "dashes" sounded like beeps. Damped wave had a large frequency bandwidth, meaning that the radio signal was not a single frequency but occupied a wide band of frequencies. Damped wave transmitters had a limited range and interfered with the transmissions of other transmitters on adjacent frequencies.
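
As a concrete illustration, the following Python sketch (assuming numpy; the sample rate, spark rate, and decay constant are arbitrary choices for audibility, not historical values) synthesizes one damped-wave "dot" as a train of exponentially decaying sine bursts, whose repetition rate is what the operator heard as a tone:

    import numpy as np

    FS = 8000            # sample rate, Hz (assumed)
    F_OSC = 1000.0       # oscillation frequency of each spark transient, Hz
    SPARK_RATE = 120     # transients per second -- heard as the audio tone
    DECAY = 300.0        # exponential decay rate of each transient, 1/s

    def damped_dot(duration=0.12):
        """One keyed 'dot' of damped waves: repeated decaying sine bursts."""
        n_burst = FS // SPARK_RATE                   # samples per spark period
        t = np.arange(n_burst) / FS
        burst = np.exp(-DECAY * t) * np.sin(2 * np.pi * F_OSC * t)
        return np.tile(burst, int(duration * SPARK_RATE))

    signal = damped_dot()   # ~0.12 s raspy beep; its spectrum is smeared widely

The rapid decay of each burst is what spreads the signal's energy over a wide band of frequencies, the defect described above.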

After 1905 new types of radiotelegraph transmitters were invented which transmitted code using a new modulation method: continuous wave (CW) (designated by the International Telecommunication Union as emission type A1A). As long as the telegraph key was pressed, the transmitter produced a continuous sinusoidal wave of constant amplitude. Since all the radio wave's energy was concentrated at a single frequency, CW transmitters could transmit further with a given power, and also caused virtually no interference to transmissions on adjacent frequencies. The first transmitters able to produce continuous wave were the arc converter (Poulsen arc) transmitter, invented by Danish engineer Valdemar Poulsen in 1903, and the Alexanderson alternator, invented 1906-1912 by Reginald Fessenden and Ernst Alexanderson. These slowly replaced the spark transmitters in high power radiotelegraphy stations.
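
By contrast, a CW element is simply a constant-amplitude sine wave switched on and off. A minimal Python sketch (again assuming numpy; an audible tone stands in for the radio-frequency carrier, and the parameters are illustrative):

    import numpy as np

    FS = 8000          # sample rate, Hz (assumed)
    F_TONE = 600.0     # tone standing in for the carrier, Hz
    UNIT = 0.08        # one dot-unit, seconds (roughly 15 words per minute)

    def keyed_cw(morse):
        """On-off key a constant-amplitude sine: '.' = 1 unit on, '-' = 3 units
        on, with a one-unit gap of silence after each element."""
        out = []
        for element in morse:
            n_on = int((UNIT if element == "." else 3 * UNIT) * FS)
            t = np.arange(n_on) / FS
            out.append(np.sin(2 * np.pi * F_TONE * t))   # key down: carrier on
            out.append(np.zeros(int(UNIT * FS)))         # key up: carrier off
        return np.concatenate(out)

    signal = keyed_cw("...")   # the letter "S": all energy at one frequency

Because all of the energy sits at a single frequency, the keyed signal occupies almost no bandwidth, which is why CW transmitters reached further and interfered less.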

However, the radio receivers used for damped wave could not receive continuous wave. Because the CW signal produced while the key was pressed was just an unmodulated carrier wave, it made no sound in a receiver's earphones. To receive a CW signal, some way had to be found to make the Morse code carrier wave pulses audible in a receiver.

This problem was solved by Reginald Fessenden in 1901. In his "heterodyne" receiver, the incoming radiotelegraph signal is mixed in the receiver's detector crystal or vacuum tube with a constant sine wave generated by an electronic oscillator in the receiver called a beat frequency oscillator (BFO). The frequency of the oscillator, f_BFO, is offset from the radio transmitter's frequency, f_C. In the detector the two frequencies subtract, and a beat frequency (heterodyne) at the difference between the two frequencies is produced: f_beat = |f_C − f_BFO|. If the BFO frequency is near enough to the radio station's frequency, the beat frequency is in the audio frequency range and can be heard in the receiver's earphones. During the "dots" and "dashes" of the signal, the beat tone is produced, while between them there is no carrier, so no tone is produced. Thus the Morse code is audible as musical "beeps" in the earphones.
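
The arithmetic can be checked numerically. In the Python sketch below (assuming numpy; the frequencies are scaled far below real radio frequencies so one audio sample rate covers everything), multiplying the incoming carrier by the BFO output produces sum and difference components, and a crude low-pass filter keeps only the audible difference tone:

    import numpy as np

    FS = 8000                       # sample rate, Hz (assumed)
    t = np.arange(FS) / FS          # one second of samples

    f_c = 1000.0                    # incoming CW carrier frequency (scaled down)
    f_bfo = 1440.0                  # receiver's beat frequency oscillator

    carrier = np.sin(2 * np.pi * f_c * t)   # unmodulated carrier: silent by itself
    bfo = np.sin(2 * np.pi * f_bfo * t)

    # Mixing (multiplying) yields components at f_bfo - f_c = 440 Hz
    # and f_bfo + f_c = 2440 Hz.
    mixed = carrier * bfo

    # Crude moving-average low-pass filter: passes the 440 Hz beat tone,
    # strongly attenuates the 2440 Hz sum component.
    audio = np.convolve(mixed, np.ones(10) / 10, mode="same")

The surviving 440 Hz beat is f_beat = |f_C − f_BFO|; when the key is up there is no carrier, the product is zero, and the tone disappears, exactly as described above.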

The BFO was rare until the invention in 1913 of the first practical electronic oscillator, the vacuum tube feedback oscillator, by Edwin Armstrong. After this time BFOs were a standard part of radiotelegraphy receivers. Each time the radio was tuned to a different station frequency, the BFO frequency had to be changed as well, so the BFO had to be tunable. In later superheterodyne receivers, from the 1930s on, the BFO signal was instead mixed with the constant intermediate frequency (IF) produced by the superheterodyne's frequency converter, so the BFO could operate at a fixed frequency; for example, with a 455 kHz IF and a BFO set near 454 kHz, the beat tone is an audible 1 kHz.

Continuous-wave vacuum tube transmitters replaced the other types of transmitter with the availability of power tubes after World War I because they were inexpensive. CW became the standard method of transmitting radiotelegraphy by the 1920s; damped wave spark transmitters were banned by 1930, and CW continues to be used today. Even today most communications receivers produced for use in shortwave communication stations have BFOs.

The radiotelegraphy industry

In World War I balloons were used as a quick way to raise wire antennas for military field radiotelegraph stations. Balloons at Tempelhofer Field, Germany, 1908.

The International Radiotelegraph Union was unofficially established at the first International Radiotelegraph Convention in 1906 and was merged into the International Telecommunication Union in 1932. When the United States entered World War I, private radiotelegraphy stations were prohibited, which put an end to several pioneers' work in this field. By the 1920s, there was a worldwide network of commercial and government radiotelegraphic stations, plus extensive use of radiotelegraphy by ships for both commercial purposes and passenger messages. The transmission of sound (radiotelephony) began to displace radiotelegraphy for many applications in the 1920s, making radio broadcasting possible. Wireless telegraphy continued to be used for private person-to-person business, governmental, and military communication, such as telegrams and diplomatic communications, and evolved into radioteletype networks. The ultimate implementation of wireless telegraphy was telex over radio signals, developed in the 1930s, which was for many years the only reliable form of communication between many distant countries. The most advanced standard, CCITT R.44, automated both the routing and encoding of messages by shortwave transmissions.

Today, due to more modern text transmission methods, Morse code radiotelegraphy for commercial use has become obsolete. On shipboard, the computer and satellite-linked GMDSS system have largely replaced Morse as a means of communication.

Regulation of radiotelegraphy

Continuous wave (CW) radiotelegraphy is regulated by the International Telecommunication Union (ITU) as emission type A1A.

The US Federal Communications Commission issues a lifetime commercial Radiotelegraph Operator License. This requires passing a simple written test on regulations, a more complex written exam on technology, and demonstrating Morse reception at 20 words per minute plain language and 16 wpm code groups. (Credit is given for amateur extra class licenses earned under the old 20 wpm requirement.)


E-patient

From Wikipedia, the free encyclopedia https://en.wikipedi...