
Wednesday, February 19, 2025

Transgene


From Wikipedia, the free encyclopedia

A transgene is a gene that has been transferred naturally, or by any of a number of genetic engineering techniques, from one organism to another. The introduction of a transgene, in a process known as transgenesis, has the potential to change the phenotype of an organism. Transgene describes a segment of DNA containing a gene sequence that has been isolated from one organism and is introduced into a different organism. This non-native segment of DNA may either retain the ability to produce RNA or protein in the transgenic organism or alter the normal function of the transgenic organism's genetic code. In general, the DNA is incorporated into the organism's germ line. For example, in higher vertebrates this can be accomplished by injecting the foreign DNA into the nucleus of a fertilized ovum. This technique is routinely used to introduce human disease genes or other genes of interest into strains of laboratory mice to study the function or pathology involved with that particular gene.

The construction of a transgene requires the assembly of a few main parts: a promoter, the regulatory sequence that determines where and when the transgene is active; an exon, a protein-coding sequence (usually derived from the cDNA for the protein of interest); and a stop sequence. These are typically combined in a bacterial plasmid, and the coding sequences are typically chosen from transgenes with previously known functions.
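To make the assembly concrete, the sketch below models a transgene construct as a simple data structure. It is illustrative only: the class and its placeholder sequences are hypothetical and not drawn from any real cloning toolkit.

```python
# Minimal sketch (hypothetical names, placeholder sequences) of the parts
# of a transgene construct described above.
from dataclasses import dataclass

@dataclass
class TransgeneConstruct:
    promoter: str         # regulatory sequence: where and when the gene is active
    coding_sequence: str  # usually derived from the cDNA for the protein of interest
    stop_sequence: str    # transcription stop / polyadenylation signal

    def assemble(self) -> str:
        """Concatenate the parts in 5'-to-3' order, as they would sit in a plasmid."""
        return self.promoter + self.coding_sequence + self.stop_sequence

# Toy example with placeholder sequence fragments
construct = TransgeneConstruct(
    promoter="TATAAT",
    coding_sequence="ATGGCC...TAA",
    stop_sequence="AATAAA",
)
print(construct.assemble())
```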

Transgenic or genetically modified organisms, be they bacteria, viruses or fungi, serve many research purposes. Transgenic plants, insects, fish and mammals (including humans) have been bred. Transgenic plants such as corn and soybean have replaced wild strains in agriculture in some countries (e.g. the United States). Transgene escape has been documented for GMO crops since 2001, with persistence and invasiveness. Transgenic organisms pose ethical questions and may cause biosafety problems.

History

The idea of shaping an organism to fit a specific need is not new. However, until the late 1900s farmers and scientists could breed new strains of a plant or organism only from closely related species, because the DNA had to be compatible for the offspring to be able to reproduce.

In the 1970s and 1980s, scientists passed this hurdle by inventing procedures for combining the DNA of two vastly different species with genetic engineering. The organisms produced by these procedures were termed transgenic. Transgenesis is similar to gene therapy in the sense that both transform cells for a specific purpose. However, they are completely different in their purposes: gene therapy aims to cure a defect in cells, whereas transgenesis seeks to produce a genetically modified organism by incorporating the specific transgene into every cell and changing the genome. Transgenesis therefore changes the germ cells, not only the somatic cells, ensuring that the transgenes are passed down to the offspring when the organisms reproduce. Transgenes alter the genome by blocking the function of a host gene; they can either replace the host gene with one that codes for a different protein, or introduce an additional gene.

The first transgenic organism was created in 1974 when Annie Chang and Stanley Cohen expressed Staphylococcus aureus genes in Escherichia coli. In 1978, yeast cells were the first eukaryotic organisms to undergo gene transfer. Mouse cells were first transformed in 1979, followed by mouse embryos in 1980. Most of the very first transformations were performed by microinjection of DNA directly into cells. Scientists later developed other methods of transformation, such as incorporating transgenes into retroviruses and then infecting cells; electroporation, which takes advantage of an electric current to pass foreign DNA through the cell membrane; biolistics, the procedure of shooting DNA bullets into cells; and delivering DNA into the newly fertilized egg.

The first transgenic animals were only intended for genetic research to study the specific function of a gene, and by 2003, thousands of genes had been studied.

Use in plants

A variety of transgenic plants have been designed for agriculture to produce genetically modified crops, such as corn, soybean, rapeseed (canola), cotton, rice and more. As of 2012, these GMO crops were planted on 170 million hectares globally.

Golden rice

One example of a transgenic plant species is golden rice. In 1997, five million children developed xerophthalmia, a medical condition caused by vitamin A deficiency, in Southeast Asia alone. Of those children, a quarter million went blind. To combat this, scientists used biolistics to insert the daffodil phytoene synthase gene into rice cultivars indigenous to Asia. The daffodil insertion increased the production of β-carotene. The product was a transgenic rice variety rich in provitamin A (β-carotene), called golden rice. Little is known about the impact of golden rice on xerophthalmia because anti-GMO campaigns have prevented the full commercial release of golden rice into agricultural systems in need.

Transgene escape

The escape of genetically engineered plant genes via hybridization with wild relatives was first discussed and examined in Mexico and Europe in the mid-1990s. There is agreement that escape of transgenes is inevitable, and even "some proof that it is happening". Up until 2008 there were few documented cases.

Corn

Corn sampled in 2000 from the Sierra Juarez, Oaxaca, Mexico contained a transgenic 35S promoter, while a large sample taken by a different method from the same region in 2003 and 2004 did not. A sample from another region from 2002 also did not, but directed samples taken in 2004 did, suggesting transgene persistence or re-introduction. A 2009 study found recombinant proteins in 3.1% and 1.8% of samples, most commonly in southeast Mexico. Seed and grain import from the United States could explain the frequency and distribution of transgenes in west-central Mexico, but not in the southeast. Also, 5.0% of corn seed lots in Mexican corn stocks expressed recombinant proteins despite the moratorium on GM crops.

Cotton

In 2011, transgenic cotton was found in Mexico among wild cotton, after 15 years of GMO cotton cultivation.

Rapeseed (canola)

Transgenic rapeseed Brassica napus – hybridized with a native Japanese species, Brassica rapa – was found in Japan in 2011, after having been identified in 2006 in Québec, Canada. The hybrids persisted over a six-year study period, without herbicide selection pressure and despite hybridization with the wild form. This was the first report of introgression—the stable incorporation of genes from one gene pool into another—of an herbicide-resistance transgene from Brassica napus into the wild-form gene pool.

Creeping bentgrass

Transgenic creeping bentgrass, engineered to be glyphosate-tolerant as "one of the first wind-pollinated, perennial, and highly outcrossing transgenic crops", was planted in 2003 as part of a large (about 160 ha) field trial in central Oregon near Madras, Oregon. In 2004, its pollen was found to have reached wild-growing bentgrass populations up to 14 kilometres away. Cross-pollinating Agrostis gigantea was even found at a distance of 21 kilometres. The grower, Scotts Company, could not remove all genetically engineered plants, and in 2007, the U.S. Department of Agriculture fined Scotts $500,000 for noncompliance with regulations.

Risk assessment

Long-term monitoring and control of a particular transgene has been shown not to be feasible. The European Food Safety Authority published guidance for risk assessment in 2010.

Use in mice

Genetically modified mice are the most common animal model for transgenic research. Transgenic mice are currently being used to study a variety of diseases including cancer, obesity, heart disease, arthritis, anxiety, and Parkinson's disease. The two most common types of genetically modified mice are knockout mice and oncomice. Knockout mice are a type of mouse model that uses transgenic insertion to disrupt an existing gene's expression. In order to create knockout mice, a transgene with the desired sequence is inserted into an isolated mouse blastocyst using electroporation. Then, homologous recombination occurs naturally within some cells, replacing the gene of interest with the designed transgene. Through this process, researchers were able to demonstrate that a transgene can be integrated into the genome of an animal, serve a specific function within the cell, and be passed down to future generations.

Oncomice are another genetically modified mouse species created by inserting transgenes that increase the animal's vulnerability to cancer. Cancer researchers utilize oncomice to study the profiles of different cancers in order to apply this knowledge to human studies.

Use in Drosophila

Multiple studies have been conducted concerning transgenesis in Drosophila melanogaster, the fruit fly. This organism has been a helpful genetic model for over 100 years, due to its well-understood developmental pattern. The transfer of transgenes into the Drosophila genome has been performed using various techniques, including P element, Cre-loxP, and ΦC31 insertion. The most widely used method for inserting transgenes into the Drosophila genome utilizes P elements. The transposable P elements, also known as transposons, are segments of DNA that are translocated into the genome without the presence of a complementary sequence in the host's genome. P elements are used in pairs, which flank the DNA insertion region of interest. Additionally, P-element systems often consist of two plasmid components, one known as the P element transposase and the other the P transposon backbone. The transposase plasmid drives the transposition of the P transposon backbone, containing the transgene of interest and often a marker, between the two terminal sites of the transposon. Successful insertion results in the irreversible addition of the transgene of interest into the genome. While this method has proven effective, the insertion sites of the P elements are often uncontrollable, resulting in unfavorable, random insertion of the transgene into the Drosophila genome.

To improve the location and precision of the transgenic process, an enzyme known as Cre has been introduced. Cre has proven to be a key element in a process known as recombinase-mediated cassette exchange (RMCE). While it has been shown to have a lower efficiency of transgenic transformation than the P element transposases, Cre greatly reduces the labor-intensive work of balancing random P-element insertions. Cre aids in the targeted transgenesis of the DNA segment of interest, as it supports the mapping of the transgene insertion sites, known as loxP sites. These sites, unlike P elements, can be specifically inserted to flank a chromosomal segment of interest, aiding in targeted transgenesis. The Cre recombinase catalyzes cleavage of the base pairs at the carefully positioned loxP sites, permitting more specific insertion of the transgenic donor plasmid of interest.

To overcome the limitations and low yields that transposon-mediated and Cre-loxP transformation methods produce, the bacteriophage ΦC31 has recently been utilized. Recent breakthrough studies involve the microinjection of the bacteriophage ΦC31 integrase, which shows improved transgene insertion of large DNA fragments that are unable to be transposed by P elements alone. This method involves the recombination between an attachment (attP) site in the phage and an attachment site in the bacterial host genome (attB). Compared to usual P element transgene insertion methods, ΦC31 integrates the entire transgene vector, including bacterial sequences and antibiotic resistance genes. Unfortunately, the presence of these additional insertions has been found to affect the level and reproducibility of transgene expression.

Use in livestock and aquaculture

One agricultural application is to selectively breed animals for particular traits: transgenic cattle with an increased muscle phenotype have been produced by overexpressing a short hairpin RNA with homology to the myostatin mRNA using RNA interference. Transgenes are being used to produce milk with high levels of proteins, or to produce silk proteins in the milk of goats. Another agricultural application is to selectively breed animals that are resistant to diseases, or animals suited for biopharmaceutical production.

Future potential

The application of transgenes is a rapidly growing area of molecular biology. As of 2005 it was predicted that 300,000 lines of transgenic mice would be generated over the following two decades. Researchers have identified many applications for transgenes, particularly in the medical field. Scientists are focusing on the use of transgenes to study the function of the human genome in order to better understand disease, to adapt animal organs for transplantation into humans, and to produce pharmaceutical products such as insulin, growth hormone, and blood anti-clotting factors from the milk of transgenic cows.

As of 2004 there were five thousand known genetic diseases, and the potential to treat these diseases using transgenic animals is perhaps one of the most promising applications of transgenes. There is potential to use human gene therapy to replace a mutated gene with an unmutated copy of a transgene in order to treat the genetic disorder; this can be done through the use of Cre-lox or knockout techniques. Moreover, genetic disorders are being studied through the use of transgenic mice, pigs, rabbits, and rats. Transgenic rabbits have been created to study inherited cardiac arrhythmias, as the rabbit heart resembles the human heart far more closely than does the mouse heart. More recently, scientists have also begun using transgenic goats to study genetic disorders related to fertility.

Transgenes may be used for xenotransplantation from pig organs. Through the study of xeno-organ rejection, it was found that an acute rejection of the transplanted organ occurs upon the organ's contact with blood from the recipient, due to the recognition of foreign antigens on the endothelial cells of the transplanted organ. Scientists have identified the antigen in pigs that causes this reaction, and are therefore able to transplant the organ without immediate rejection by removing the antigen. However, the antigen begins to be expressed later on, and rejection occurs; further research is therefore being conducted. Transgenic microorganisms capable of producing catalytic proteins or enzymes that increase the rate of industrial reactions have also been developed.

Ethical controversy

Transgene use in humans is currently fraught with issues. Transformation of genes into human cells has not been perfected yet. The most famous example of this involved certain patients developing T-cell leukemia after being treated for X-linked severe combined immunodeficiency (X-SCID). This was attributed to the close proximity of the inserted gene to the LMO2 promoter, which controls the transcription of the LMO2 proto-oncogene.

Computer cluster

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Computer_cluster
Technicians working on a large Linux cluster at the Chemnitz University of Technology, Germany
Sun Microsystems Solaris Cluster, with In-Row cooling
Taiwania series uses cluster architecture.

A computer cluster is a set of computers that work together so that they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The newest manifestation of cluster computing is cloud computing.

The components of a cluster are usually connected to each other through fast local area networks, with each node (computer used as a server) running its own instance of an operating system. In most circumstances, all of the nodes use the same hardware and the same operating system, although in some setups (e.g. using Open Source Cluster Application Resources (OSCAR)), different operating systems can be used on each computer, or different hardware.

Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.

Computer clusters emerged as a result of the convergence of a number of computing trends including the availability of low-cost microprocessors, high-speed networks, and software for high-performance distributed computing. They have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastest supercomputers in the world such as IBM's Sequoia. Prior to the advent of clusters, single-unit fault tolerant mainframes with modular redundancy were employed; but the lower upfront cost of clusters, and increased speed of network fabric has favoured the adoption of clusters. In contrast to high-reliability mainframes, clusters are cheaper to scale out, but also have increased complexity in error handling, as in clusters error modes are not opaque to running programs.

Basic concepts

A simple, home-built Beowulf cluster

The desire to get more computing power and better reliability by orchestrating a number of low-cost commercial off-the-shelf computers has given rise to a variety of architectures and configurations.

The computer clustering approach usually (but not always) connects a number of readily available computing nodes (e.g. personal computers used as servers) via a fast local area network. The activities of the computing nodes are orchestrated by "clustering middleware", a software layer that sits atop the nodes and allows the users to treat the cluster as by and large one cohesive computing unit, e.g. via a single system image concept.

Computer clustering relies on a centralized management approach which makes the nodes available as orchestrated shared servers. It is distinct from other approaches such as peer-to-peer or grid computing which also use many nodes, but with a far more distributed nature.

A computer cluster may be a simple two-node system which just connects two personal computers, or may be a very fast supercomputer. A basic approach to building a cluster is that of a Beowulf cluster which may be built with a few personal computers to produce a cost-effective alternative to traditional high-performance computing. An early project that showed the viability of the concept was the 133-node Stone Soupercomputer. The developers used Linux, the Parallel Virtual Machine toolkit and the Message Passing Interface library to achieve high performance at a relatively low cost.

Although a cluster may consist of just a few personal computers connected by a simple network, the cluster architecture may also be used to achieve very high levels of performance. The TOP500 organization's semiannual list of the 500 fastest supercomputers often includes many clusters, e.g. the world's fastest machine in 2011 was the K computer, which has a distributed-memory cluster architecture.

History

A VAX 11/780, c. 1977, as used in early VAXcluster development

Greg Pfister has stated that clusters were not invented by any specific vendor but by customers who could not fit all their work on one computer, or needed a backup. Pfister estimates the date as some time in the 1960s. The formal engineering basis of cluster computing as a means of doing parallel work of any sort was arguably invented by Gene Amdahl of IBM, who in 1967 published what has come to be regarded as the seminal paper on parallel processing: Amdahl's Law.

The history of early computer clusters is more or less directly tied to the history of early networks, as one of the primary motivations for the development of a network was to link computing resources, creating a de facto computer cluster.

The first production system designed as a cluster was the Burroughs B5700 in the mid-1960s. This allowed up to four computers, each with either one or two processors, to be tightly coupled to a common disk storage subsystem in order to distribute the workload. Unlike standard multiprocessor systems, each computer could be restarted without disrupting overall operation.

Tandem NonStop II circa 1980

The first commercial loosely coupled clustering product was Datapoint Corporation's "Attached Resource Computer" (ARC) system, developed in 1977, and using ARCnet as the cluster interface. Clustering per se did not really take off until Digital Equipment Corporation released their VAXcluster product in 1984 for the VMS operating system. The ARC and VAXcluster products not only supported parallel computing, but also shared file systems and peripheral devices. The idea was to provide the advantages of parallel processing, while maintaining data reliability and uniqueness. Two other noteworthy early commercial clusters were the Tandem NonStop (a 1976 high-availability commercial product) and the IBM S/390 Parallel Sysplex (circa 1994, primarily for business use).

Within the same time frame, while computer clusters used parallelism outside the computer on a commodity network, supercomputers began to use parallelism within a single computer. Following the success of the CDC 6600 in 1964, the Cray 1 was delivered in 1976, and introduced internal parallelism via vector processing. While early supercomputers excluded clusters and relied on shared memory, in time some of the fastest supercomputers (e.g. the K computer) relied on cluster architectures.

Attributes of clusters

A load balancing cluster with two servers and N user stations

Computer clusters may be configured for different purposes ranging from general purpose business needs such as web-service support, to computation-intensive scientific calculations. In either case, the cluster may use a high-availability approach. Note that the attributes described below are not exclusive and a "computer cluster" may also use a high-availability approach, etc.

"Load-balancing" clusters are configurations in which cluster-nodes share computational workload to provide better overall performance. For example, a web server cluster may assign different queries to different nodes, so the overall response time will be optimized. However, approaches to load-balancing may significantly differ among applications, e.g. a high-performance cluster used for scientific computations would balance load with different algorithms from a web-server cluster which may just use a simple round-robin method by assigning each new request to a different node.

Computer clusters are used for computation-intensive purposes, rather than handling IO-oriented operations such as web service or databases. For instance, a computer cluster might support computational simulations of vehicle crashes or weather. Very tightly coupled computer clusters are designed for work that may approach "supercomputing".

"High-availability clusters" (also known as failover clusters, or HA clusters) improve the availability of the cluster approach. They operate by having redundant nodes, which are then used to provide service when system components fail. HA cluster implementations attempt to use redundancy of cluster components to eliminate single points of failure. There are commercial implementations of High-Availability clusters for many operating systems. The Linux-HA project is one commonly used free software HA package for the Linux operating system.

Benefits

Clusters are primarily designed with performance in mind, but installations are based on many other factors. Fault tolerance (the ability of a system to continue operating despite a malfunctioning node) enables scalability and, in high-performance situations, allows for a low frequency of maintenance routines, resource consolidation (e.g., RAID), and centralized management. Advantages include enabling data recovery in the event of a disaster and providing parallel data processing and high processing capacity.

Clusters provide scalability through the ability to add nodes horizontally: more computers may be added to the cluster to improve its performance, redundancy and fault tolerance. This can be an inexpensive solution compared to scaling up a single node in the cluster. This property of computer clusters allows larger computational loads to be executed by a larger number of lower-performing computers.

When adding a new node to a cluster, reliability increases because the entire cluster does not need to be taken down. A single node can be taken down for maintenance, while the rest of the cluster takes on the load of that individual node.

A large number of computers clustered together lends itself to the use of distributed file systems and RAID, both of which can increase the reliability and speed of a cluster.

Design and configuration

A typical Beowulf configuration

One of the issues in designing a cluster is how tightly coupled the individual nodes may be. For instance, a single computer job may require frequent communication among nodes: this implies that the cluster shares a dedicated network, is densely located, and probably has homogeneous nodes. The other extreme is where a computer job uses one or few nodes, and needs little or no inter-node communication, approaching grid computing.

In a Beowulf cluster, the application programs never see the computational nodes (also called slave computers) but only interact with the "Master" which is a specific computer handling the scheduling and management of the slaves. In a typical implementation the Master has two network interfaces, one that communicates with the private Beowulf network for the slaves, the other for the general purpose network of the organization. The slave computers typically have their own version of the same operating system, and local memory and disk space. However, the private slave network may also have a large and shared file server that stores global persistent data, accessed by the slaves as needed.

A special purpose 144-node DEGIMA cluster is tuned to running astrophysical N-body simulations using the Multiple-Walk parallel tree code, rather than general purpose scientific computations.

Due to the increasing computing power of each generation of game consoles, a novel use has emerged where they are repurposed into high-performance computing (HPC) clusters. Some examples of game console clusters are Sony PlayStation clusters and Microsoft Xbox clusters. Another example of a consumer game product is the Nvidia Tesla Personal Supercomputer workstation, which uses multiple graphics accelerator processor chips. Besides game consoles, high-end graphics cards can also be used. The use of graphics cards (or rather their GPUs) to do calculations for grid computing is vastly more economical than using CPUs, despite being less precise. However, when using double-precision values, they become as precise to work with as CPUs, while remaining much less costly in purchase price.

Computer clusters have historically run on separate physical computers with the same operating system. With the advent of virtualization, the cluster nodes may run on separate physical computers with different operating systems, abstracted beneath a virtual layer so that the nodes appear uniform. The cluster may also be virtualized in various configurations as maintenance takes place; an example implementation is Xen as the virtualization manager with Linux-HA.

Data sharing and communication

Data sharing

A NEC Nehalem cluster

As computer clusters were appearing during the 1980s, so were supercomputers. One of the elements that distinguished the classes at that time was that early supercomputers relied on shared memory. Clusters do not typically use physically shared memory, and many supercomputer architectures have also abandoned it.

However, the use of a clustered file system is essential in modern computer clusters. Examples include the IBM General Parallel File System, Microsoft's Cluster Shared Volumes or the Oracle Cluster File System.

Message passing and communication

Two widely used approaches for communication between cluster nodes are MPI (Message Passing Interface) and PVM (Parallel Virtual Machine).

PVM was developed at the Oak Ridge National Laboratory around 1989, before MPI was available. PVM must be directly installed on every cluster node and provides a set of software libraries that present the nodes as a single "parallel virtual machine". PVM provides a run-time environment for message passing, task and resource management, and fault notification. PVM can be used by user programs written in C, C++, or Fortran.

MPI emerged in the early 1990s out of discussions among 40 organizations. The initial effort was supported by ARPA and National Science Foundation. Rather than starting anew, the design of MPI drew on various features available in commercial systems of the time. The MPI specifications then gave rise to specific implementations. MPI implementations typically use TCP/IP and socket connections. MPI is now a widely available communications model that enables parallel programs to be written in languages such as C, Fortran, Python, etc. Thus, unlike PVM which provides a concrete implementation, MPI is a specification which has been implemented in systems such as MPICH and Open MPI.
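For illustration, here is a minimal message-passing sketch using mpi4py, one of the Python bindings of the MPI specification; any implementation such as MPICH or Open MPI can run it (the file name in the comment is an arbitrary choice):

```python
# Run with, e.g.: mpiexec -n 4 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # this process's id within the communicator
size = comm.Get_size()  # total number of processes

if rank == 0:
    # Rank 0 collects one message from every other rank.
    for source in range(1, size):
        msg = comm.recv(source=source)
        print(f"rank 0 received: {msg}")
else:
    comm.send(f"greetings from rank {rank} of {size}", dest=0)
```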

Cluster management

Low-cost and low energy tiny-cluster of Cubieboards, using Apache Hadoop on Lubuntu
A pre-release sample of the Ground Electronics/AB Open Circumference C25 cluster computer system, fitted with 8x Raspberry Pi 3 Model B+ and 1x UDOO x86 boards

One of the challenges in the use of a computer cluster is the cost of administering it, which can at times be as high as the cost of administering N independent machines if the cluster has N nodes. In some cases this provides an advantage to shared-memory architectures with lower administration costs. This has also made virtual machines popular, due to the ease of administration.

Task scheduling

When a large multi-user cluster needs to access very large amounts of data, task scheduling becomes a challenge. In a heterogeneous CPU-GPU cluster with a complex application environment, the performance of each job depends on the characteristics of the underlying cluster. Therefore, mapping tasks onto CPU cores and GPU devices presents significant challenges. This is an area of ongoing research; algorithms that combine and extend MapReduce and Hadoop have been proposed and studied.
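As a toy illustration of the mapping problem, the sketch below greedily hands each task to whichever device becomes free soonest, using per-device runtime estimates. The task costs and device names are invented for the example; production schedulers are far more sophisticated.

```python
# Greedy CPU/GPU assignment sketch: pick the device that is free earliest.
import heapq

# (estimated seconds on a CPU core, estimated seconds on a GPU) per task
tasks = [(10, 2), (3, 4), (8, 1), (5, 5)]
# Min-heap of (time the device becomes free, device name)
devices = [(0.0, "cpu-0"), (0.0, "cpu-1"), (0.0, "gpu-0")]
heapq.heapify(devices)

for i, (cpu_cost, gpu_cost) in enumerate(tasks):
    free_at, name = heapq.heappop(devices)  # earliest-available device
    cost = gpu_cost if name.startswith("gpu") else cpu_cost
    finish = free_at + cost
    print(f"task {i} -> {name}, finishes at t={finish}")
    heapq.heappush(devices, (finish, name))
```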

Node failure management

When a node in a cluster fails, strategies such as "fencing" may be employed to keep the rest of the system operational. Fencing is the process of isolating a node or protecting shared resources when a node appears to be malfunctioning. There are two classes of fencing methods; one disables a node itself, and the other disallows access to resources such as shared disks.

The STONITH method stands for "Shoot The Other Node In The Head", meaning that the suspected node is disabled or powered off. For instance, power fencing uses a power controller to turn off an inoperable node.

The resources fencing approach disallows access to resources without powering off the node. This may include persistent reservation fencing via SCSI-3, fibre channel fencing to disable the fibre channel port, or global network block device (GNBD) fencing to disable access to the GNBD server.
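The two fencing classes can be summarized in a highly simplified sketch. Everything here is hypothetical: the timeout, node names, and print statements standing in for a power-controller or SCSI reservation call; real clusters delegate this to dedicated fencing agents such as those shipped with Linux-HA.

```python
# Sketch of node fencing (STONITH) vs. resource fencing for a suspect node.
import time

HEARTBEAT_TIMEOUT = 10.0  # seconds without a heartbeat before a node is suspect

def stonith(node: str) -> None:
    """Node fencing: power the suspect node off via a power controller."""
    print(f"powering off {node} via power controller")  # placeholder action

def fence_resources(node: str) -> None:
    """Resource fencing: revoke the node's access to shared storage instead."""
    print(f"revoking {node}'s SCSI-3 persistent reservations")  # placeholder action

def check_node(node: str, last_heartbeat: float, use_stonith: bool = True) -> None:
    if time.time() - last_heartbeat > HEARTBEAT_TIMEOUT:
        if use_stonith:
            stonith(node)
        else:
            fence_resources(node)

check_node("node-3", last_heartbeat=time.time() - 30.0)
```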

Software development and administration

Parallel programming

Load balancing clusters such as web servers use cluster architectures to support a large number of users and typically each user request is routed to a specific node, achieving task parallelism without multi-node cooperation, given that the main goal of the system is providing rapid user access to shared data. However, "computer clusters" which perform complex computations for a small number of users need to take advantage of the parallel processing capabilities of the cluster and partition "the same computation" among several nodes.

Automatic parallelization of programs remains a technical challenge, but parallel programming models can be used to effectuate a higher degree of parallelism via the simultaneous execution of separate portions of a program on different processors.

Debugging and monitoring

Developing and debugging parallel programs on a cluster requires parallel language primitives and suitable tools such as those discussed by the High Performance Debugging Forum (HPDF) which resulted in the HPD specifications. Tools such as TotalView were then developed to debug parallel implementations on computer clusters which use Message Passing Interface (MPI) or Parallel Virtual Machine (PVM) for message passing.

The University of California, Berkeley Network of Workstations (NOW) system gathers cluster data and stores them in a database, while a system such as PARMON, developed in India, allows visually observing and managing large clusters.

Application checkpointing can be used to restore a given state of the system when a node fails during a long multi-node computation. This is essential in large clusters, given that as the number of nodes increases, so does the likelihood of node failure under heavy computational loads. Checkpointing can restore the system to a stable state so that processing can resume without needing to recompute results.
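A minimal application-level checkpointing sketch follows; the file name, interval, and the trivial loop standing in for real work are arbitrary choices for the example.

```python
# Periodically persist computation state so a restarted job resumes
# from the last checkpoint instead of recomputing from scratch.
import os
import pickle

CHECKPOINT = "state.pkl"
CHECKPOINT_EVERY = 1000  # iterations between checkpoints

# Resume from the last checkpoint if one exists, otherwise start fresh.
if os.path.exists(CHECKPOINT):
    with open(CHECKPOINT, "rb") as f:
        step, total = pickle.load(f)
else:
    step, total = 0, 0

while step < 10_000:
    total += step  # stand-in for real work
    step += 1
    if step % CHECKPOINT_EVERY == 0:
        with open(CHECKPOINT, "wb") as f:
            pickle.dump((step, total), f)

print(total)
os.remove(CHECKPOINT)  # job finished; discard the checkpoint
```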

Implementations

The Linux world supports various cluster software; for application clustering, there are distcc and MPICH. Linux Virtual Server and Linux-HA are director-based clusters that allow incoming requests for services to be distributed across multiple cluster nodes. MOSIX, LinuxPMI, Kerrighed and OpenSSI are full-blown clusters integrated into the kernel that provide automatic process migration among homogeneous nodes. OpenSSI, openMosix and Kerrighed are single-system image implementations.

Microsoft Windows Compute Cluster Server 2003, based on the Windows Server platform, provides pieces for high-performance computing like the job scheduler, MSMPI library and management tools.

gLite is a set of middleware technologies created by the Enabling Grids for E-sciencE (EGEE) project.

Slurm is also used to schedule and manage some of the largest supercomputer clusters (see the TOP500 list).

Other approaches

Although most computer clusters are permanent fixtures, attempts at flash mob computing have been made to build short-lived clusters for specific computations. However, larger-scale volunteer computing systems such as BOINC-based systems have had more followers.

Humanized mouse

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Humanized_mouse

A humanized mouse is a genetically modified mouse that has functioning human genes, cells, tissues and/or organs. Humanized mice are commonly used as small animal models in biological and medical research for human therapeutics.

A humanized mouse or a humanized mouse model is one that has been xenotransplanted with human cells and/or engineered to express human gene products, so as to be utilized for gaining relevant in vivo insights into human-specific physiology and pathologies. Several human biological processes have been explored using animal models like rodents and non-human primates. In particular, small animals such as mice are advantageous in such studies owing to their small size, brief reproductive cycle, easy handling, and their genomic and physiological similarities with humans; moreover, these animals can also be genetically modified easily. Nevertheless, there are several incongruities between these animal systems and those of humans, especially with regard to the components of the immune system. To overcome these limitations and to realize the full potential of animal models, enabling researchers to get a clear picture of the nature and pathogenesis of immune responses mounted against human-specific pathogens, humanized mouse models have been developed. Such mouse models have also become an integral part of preclinical biomedical research.

History

The discovery of the athymic mouse, commonly known as the nude mouse, and that of the SCID mouse were major events that paved the way for humanized mouse models. Nude mice were the earliest immunodeficient mouse model: a mutation in the Foxn1 gene on chromosome 11 results in impaired thymus development, leading to a deficiency in mature T lymphocytes. These mice primarily produced IgM and had minimal or no IgA; as a result, they did not exhibit a rejection response to allogeneic tissue. Commonly utilized strains included BALB/c-nu, Swiss-nu, NC-nu, and NIH-nu, which were extensively employed in research on immune diseases and tumors. However, because they retain B cells and NK cells, nude mice cannot fully support engraftment of human immune cells, making them unsuitable as an ideal humanized mouse model. The first SCID mouse model was derived by backcrossing C57BL/Ka and BALB/c mice, and featured a loss-of-function mutation in the PRKDC gene. The PRKDC gene product is necessary for resolving breaks in DNA strands during the development of T cells and B cells, so a dysfunctional PRKDC gene leads to impaired development of T and B lymphocytes, giving rise to severe combined immunodeficiency (SCID). In spite of these efforts, poor engraftment of human hematopoietic stem cells (HSCs) remained a major limitation that called for further advances in the development of humanized mouse models.

The next big step in the development of humanized mouse models came with transfer of the scid mutation to a non-obese diabetic mouse. This resulted in the creation of the NOD-scid mice, which lacked T cells, B cells, and NK cells. This mouse model permitted a slightly higher level of human cell reconstitution. Nevertheless, a major breakthrough in this field came with the introduction of a mutated interleukin-2 receptor gamma chain (IL2rg) gene in the NOD-scid model. This accounted for the creation of the NOD-scid-γcnull mouse (NCG, NSG or NOG) models, which were found to have defective signaling of the interleukins IL-2, IL-4, IL-7, IL-9, IL-15 and IL-21. Researchers evolved this NSG model by knocking out the RAG1 and RAG2 genes (recombination-activating genes), resulting in the RAGnull version of the NSG model, which was devoid of major cells of the immune system including natural killer cells, B lymphocytes, T lymphocytes, macrophages and dendritic cells, producing the greatest immunodeficiency in mouse models so far. The limitation of this model was that it lacked the human leukocyte antigen (HLA). Because of this limitation, human T cells engrafted into the mice failed to recognize human antigen-presenting cells, which resulted in defective immunoglobulin class switching and improper organization of the secondary lymphoid tissue.

To circumvent this limitation, the next development came with the introduction of transgenes encoding HLA class I and HLA class II into the NSG RAGnull model, which enabled the development of human T-lymphocyte repertoires and the respective immune responses. Mice with such human genes are technically human-animal hybrids.

Types

Engrafting an immunodeficient mouse with functional human cells can be achieved by intravenous injections of human cells and tissue into the mouse, and/or creating a genetically modified mouse from human genes. These models have been instrumental in studying human diseases, immune responses, and therapeutic interventions. This section highlights the various humanized mice models developed using the different methods.

Hu-PBL-scid model

The human peripheral blood lymphocyte-severe combined immunodeficiency mouse model has been employed in a diverse array of research, encompassing investigations into Epstein-Barr virus (EBV)-associated lymphoproliferative disease, toxoplasmosis, human immunodeficiency virus (HIV) infection, and autoimmune diseases. These studies have highlighted the effectiveness of the hu-PBL-SCID mouse model in examining various facets of human diseases, including pathogenesis, immune responses, and therapeutic interventions. Furthermore, the model has been utilized to explore genetic and molecular factors linked to neuropsychiatric disorders such as schizophrenia, offering valuable insights into the pathophysiology and potential therapeutic targets for these conditions. This model is developed by intravenously injecting human PBMCs into immunodeficient mice. The peripheral blood mononuclear cells to be engrafted into the model are obtained from consenting adult donors. The advantages associated with this method are that it is a comparatively easy technique, the model takes relatively little time to establish, and the model exhibits functional memory T cells. It is particularly effective for modelling graft-versus-host disease (GvHD). The model lacks engraftment of B lymphocytes and myeloid cells. Other limitations are that it is suitable only for short-term experiments (<3 months) and that the model itself might develop GvHD.

Hu-SRC-scid model

The humanized severe combined immunodeficiency (SCID) mouse model, also known as the hu-SRC-scid model, has been extensively utilized in various research areas, including immunology, infectious diseases, cancer, and drug development. This model has been instrumental in studying the human immune response to xenogeneic and allogeneic decellularized biomaterials, providing valuable insights into the biocompatibility and gene expression regulation of these materials. Hu-SRC-scid mice are developed by engrafting CD34+ human hematopoietic stem cells into immunodeficient mice. The cells are obtained from human fetal liver, bone marrow, or blood derived from the umbilical cord, and engrafted via intravenous injection. The advantages of this model are that it offers multilineage development of hematopoietic cells and generation of a naïve immune system, and, if engraftment is carried out by intrahepatic injection of newborn mice within 72 hours of birth, it can lead to enhanced human cell reconstitution. Nevertheless, limitations associated with the model are that it takes a minimum of 10 weeks for cell differentiation to occur, and that it harbors low levels of human RBCs, polymorphonuclear leukocytes, and megakaryocytes.

BLT (bone marrow/liver/thymus) model

The BLT model is constructed with human HSCs, bone marrow, liver, and thymus. The engraftment is carried out by implanting liver and thymus under the kidney capsule and transplanting HSCs obtained from fetal liver. The BLT model has a complete and fully functional human immune system with HLA-restricted T lymphocytes. The model also has a mucosal system similar to that of humans. Moreover, among all models the BLT model has the highest level of human cell reconstitution.

However, since it requires surgical implantation, this model is the most difficult and time-consuming to develop. Other drawbacks are that it exhibits weak immune responses to xenobiotics and sub-optimal class switching, and that it may develop GvHD.

Transplanted human organoids

Bio- and electrical engineers have shown that human cerebral organoids transplanted into mice functionally integrate with their visual cortex. Such models may raise similar ethical issues as organoid-based humanization of other animals.

Mouse-human hybrid

A mouse-human hybrid is a genetically modified mouse whose genome has both mouse and human genes, thus being a murine form of a human-animal hybrid. For example, genetically modified mice may be born with human leukocyte antigen genes in order to provide a more realistic environment when introducing human white blood cells into them in order to study immune system responses. One such application is the identification of hepatitis C virus (HCV) peptides that bind to HLA, and that can be recognized by the human immune system, thereby potentially being targets for future vaccines against HCV.

Established models for human diseases

Several mechanisms underlying human maladies are not fully understood. Humanized mouse models allow researchers to determine and unravel important factors that bring about the development of several human diseases and disorders falling under the categories of infectious disease, cancer, autoimmunity, and GvHD.

Infectious diseases

Among the human-specific infectious pathogens studied on humanized mice models, the human immunodeficiency virus has been successfully studied. Besides this, humanized models for studying Ebola virus, Hepatitis B, Hepatitis C, Kaposi's sarcoma-associated herpesvirus, Leishmania major, malaria, and tuberculosis have been reported by various studies.

NOD/scid mouse models for dengue virus and varicella-zoster virus, and a Rag2null γcnull model for studying influenza virus, have also been developed.

Cancers

On the basis of the type of human cells/tissues used for engraftment, humanized mouse models for cancer can be classified as patient-derived xenografts (PDX) or cell line-derived xenografts. PDX models are considered to retain the parental malignancy characteristics to a greater extent, and hence are regarded as the more powerful tool for evaluating the effects of anticancer drugs in pre-clinical studies. Humanized mouse models for studying cancers of various organs have been designed. A mouse model for the study of breast cancer has been generated by the intrahepatic engraftment of SK-BR-3 cells in NSG mice. Similarly, NSG mice intravenously engrafted with patient-derived AML cells, and those engrafted (via subcutaneous, intravenous or intra-pancreatic injection) with patient-derived pancreatic tumors, have been developed for the study of leukemia and pancreatic cancer, respectively. Several other humanized rodent models for the study of cancer and cancer immunotherapy have also been reported.

Autoimmune diseases

Problems posed by the differences between the human and rodent immune systems have been overcome using a few strategies, enabling researchers to study autoimmune disorders using humanized models. As a result, the use of humanized mouse models has extended to various areas of immunology and disease research; for instance, humanized mice have been utilized to study human-tropic pathogens and liver cancer models, and to compare mouse models to human diseases. NSG mice engrafted with PBMCs and administered myelin antigens in Freund's adjuvant, together with antigen-pulsed autologous dendritic cells, have been used to study multiple sclerosis. Similarly, NSG mice engrafted with hematopoietic stem cells and administered pristane have been used to study lupus erythematosus. Furthermore, NOG mice engrafted with PBMCs have been used to study mechanisms of allograft rejection in vivo. The development of humanized mouse models has significantly advanced the study of autoimmune disorders and various areas of immunology and disease research. These models provide a platform for investigating human diseases, immune responses, and therapeutic interventions, bridging the gap between the human and rodent immune systems and offering valuable insights into disease pathogenesis and potential therapeutic strategies.

Tuesday, February 18, 2025

Blood plasma

From Wikipedia, the free encyclopedia
A unit of donated fresh plasma

Blood plasma is a light amber-colored liquid component of blood in which blood cells are absent, but which contains proteins and other constituents of whole blood in suspension. It makes up about 55% of the body's total blood volume. It is the intravascular part of extracellular fluid (all body fluid outside cells). It is mostly water (up to 95% by volume), and contains important dissolved proteins (6–8%; e.g., serum albumins, globulins, and fibrinogen), glucose, clotting factors, electrolytes (Na⁺, Ca²⁺, Mg²⁺, HCO₃⁻, Cl⁻, etc.), hormones, carbon dioxide (plasma being the main medium for excretory product transportation), and oxygen. It plays a vital role in an intravascular osmotic effect that keeps electrolyte concentration balanced and protects the body from infection and other blood-related disorders.

Blood plasma can be separated from whole blood through blood fractionation, by adding an anticoagulant to a tube filled with blood, which is spun in a centrifuge until the blood cells fall to the bottom of the tube. The blood plasma is then poured or drawn off. For point-of-care testing applications, plasma can be extracted from whole blood via filtration or via agglutination to allow for rapid testing of specific biomarkers. Blood plasma has a density of approximately 1,025 kg/m3 (1.025 g/ml). Blood serum is blood plasma without clotting factors. Plasmapheresis is a medical therapy that involves blood plasma extraction, treatment, and reintegration.

Fresh frozen plasma is on the WHO Model List of Essential Medicines, the most important medications needed in a basic health system. It is of critical importance in the treatment of many types of trauma which result in blood loss, and is therefore kept stocked universally in all medical facilities capable of treating trauma (e.g., trauma centers, hospitals, and ambulances) or that pose a risk of patient blood loss such as surgical suite facilities. 

Volume

Reference ranges for blood tests, showing normal concentrations of blood plasma constituents, by mass and by molarity

Blood plasma volume may be expanded by or drained to extravascular fluid when there are changes in Starling forces across capillary walls. For example, when blood pressure drops in circulatory shock, Starling forces drive fluid into the interstitium, causing third spacing.
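For reference, the Starling forces mentioned here are conventionally summarized by the classical Starling equation (standard physiology, not specific to this article):

```latex
J_v = K_f \left[ (P_c - P_i) - \sigma (\pi_c - \pi_i) \right]
```

where J_v is the net filtration rate, K_f the filtration coefficient, P_c and P_i the capillary and interstitial hydrostatic pressures, σ the reflection coefficient, and π_c and π_i the capillary and interstitial oncotic pressures. Changes in any of these terms shift fluid between the plasma and the interstitium, as in the third-spacing example above.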

Standing still for a prolonged period will cause an increase in transcapillary hydrostatic pressure. As a result, approximately 12% of blood plasma volume will cross into the extravascular compartment. This plasma shift causes an increase in hematocrit, serum total protein, blood viscosity and, as a result of increased concentration of coagulation factors, it causes orthostatic hypercoagulability.
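A worked example with assumed round numbers (5 L total blood volume, baseline hematocrit 45%) shows how such a plasma shift raises the hematocrit:

```latex
V_{\mathrm{rbc}} = 0.45 \times 5\,\mathrm{L} = 2.25\,\mathrm{L}, \qquad
V_{\mathrm{plasma}} = 0.55 \times 5\,\mathrm{L} = 2.75\,\mathrm{L}
```

After 12% of the plasma crosses into the extravascular compartment:

```latex
V'_{\mathrm{plasma}} = 0.88 \times 2.75\,\mathrm{L} \approx 2.42\,\mathrm{L}, \qquad
\mathrm{Hct}' = \frac{2.25}{2.25 + 2.42} \approx 48\%
```

so the hematocrit rises from about 45% to roughly 48% with no change in red cell mass.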

Plasma proteins

Albumins

Serum albumins are the most common plasma proteins, and they are responsible for maintaining the osmotic pressure of the blood. Without albumins, the consistency of blood would be closer to that of water. The osmotic pressure they generate keeps fluid within the bloodstream, preventing it from leaking out of the capillaries into the surrounding tissue. Albumins are produced in the liver, assuming the absence of a hepatocellular deficiency.

Globulins

The second most common type of protein in the blood plasma is the globulins. Important globulins include the immunoglobulins, which are important for the immune system, and globulins that transport hormones and other compounds around the body. There are three main types of globulins. Alpha-1 and alpha-2 globulins are formed in the liver and play an important role in mineral transport and the inhibition of blood coagulation. Beta globulins found in blood plasma include the low-density lipoproteins (LDL), which are responsible for transporting fat to cells for steroid and membrane synthesis. Gamma globulins, better known as immunoglobulins, are produced by plasma B cells and provide the human body with a defense system against invading pathogens and other immune diseases.

Fibrinogen

Fibrinogen proteins make up most of the remaining proteins in the blood. Fibrinogens are responsible for clotting blood to help prevent blood loss.

Color

Bags of frozen plasma, from a person with hypercholesterolemia (left) and typical plasma (right)

Plasma is normally yellow due to bilirubin, carotenoids, hemoglobin, and transferrin. In abnormal cases, plasma can have varying shades of orange, green, or brown. The green color can be due to ceruloplasmin or sulfhemoglobin. The latter may form due to medicines that are able to form sulfonamides once ingested. A dark brown or reddish color can appear due to hemolysis, in which methemoglobin is released from broken blood cells. Plasma is normally relatively transparent, but sometimes it can be opaque. Opaqueness is typically due to elevated content of lipids like cholesterol and triglycerides.

Plasma vs. serum in medical diagnostics

Plasma and serum are both derived from whole blood, but serum is obtained by removing blood cells, fibrin clots, and other coagulation factors, while plasma is obtained by removing only the blood cells. Blood plasma and blood serum are often used in blood tests. Tests can be done on plasma, serum or both. In addition, some tests have to be done with whole blood, such as the determination of the number of blood cells in blood via flow cytometry.

Benefits of plasma over serum

Plasma preparation is quick, as the sample is not allowed to coagulate. Serum sample preparation requires about 30 minutes of waiting time before the sample can be centrifuged and then analyzed. However, coagulation can be shortened to a few minutes by adding thrombin or similar agents to the serum sample.

Compared to serum, a 15–20% larger volume of plasma can be obtained from a blood sample of a given size, because serum lacks the coagulation proteins, which remain in plasma and add to its volume.

Serum preparation can cause measurement errors by increasing or decreasing the concentration of the analyte that is meant to be measured. For example, during coagulation, blood cells consume blood glucose, and platelets secrete compounds like potassium, phosphates and aspartate transaminase, increasing their concentration in the sample; this matters when glucose or these other compounds are the analytes.

Benefits of serum over plasma

Plasma preparation requires the addition of anticoagulants, which can cause expected and unexpected measurement errors. For example, anticoagulant salts can add extra cations like NH4+, Li+, Na+ and K+ to the sample, or impurities like lead and aluminum. Chelator anticoagulants like EDTA and citrate salts work by binding calcium (see carboxyglutamic acid), but they may also bind other ions. Even if such ions are not the analytes, chelators can interfere with enzyme activity measurements. For example, EDTA binds zinc ions, which alkaline phosphatases need as cofactors. Thus, phosphatase activity cannot be measured if EDTA is used.

An unknown volume of anticoagulants can be added to a plasma sample by accident, which may ruin the sample as the analyte concentration is changed by an unknown amount.

No anticoagulants are added to serum samples, which decreases the preparation cost of the samples relative to plasma samples.

Plasma samples can form tiny clots if the added anticoagulant is not properly mixed with the sample. Non-uniform samples can cause measurement errors.

History

Private Roy W. Humphrey is being given blood plasma after he was wounded by shrapnel in Sicily in August 1943.
Dried plasma packages used by the British and US militaries during WWII

Plasma was already well known when described by William Harvey in de Motu Cordis in 1628, but knowledge of it probably dates as far back as Vesalius (1514–1564). The discovery of fibrinogen by William Hewson, c. 1770, made it easier to study plasma, as ordinarily, upon coming in contact with a foreign surface – something other than the vascular endothelium – clotting factors become activated and clotting proceeds rapidly, trapping RBCs etc. in the plasma and preventing separation of plasma from the blood. Adding citrate and other anticoagulants is a relatively recent advance. Upon the formation of a clot, the remaining clear fluid (if any) is blood serum, which is essentially plasma without the clotting factors.

The use of blood plasma as a substitute for whole blood and for transfusion purposes was proposed in March 1918, in the correspondence columns of the British Medical Journal, by Gordon R. Ward. "Dried plasmas" in powder or strips of material format were developed and first used in World War II. Prior to the United States' involvement in the war, liquid plasma and whole blood were used.

The origin of plasmapheresis

Dr. José Antonio Grifols Lucas, a scientist from Vilanova i la Geltrú, Spain, founded Laboratorios Grifols in 1940. Dr. Grifols pioneered a first-of-its-kind technique called plasmapheresis, where a donor's red blood cells would be returned to the donor's body almost immediately after the separation of the blood plasma. This technique is still in practice today, almost 80 years later. In 1945, Dr. Grifols opened the world's first plasma donation center.

Blood for Britain

The "Blood for Britain" program during the early 1940s was quite successful (and popular in the United States) based on Charles Drew's contribution. A large project began in August 1940 to collect blood in New York City hospitals for the export of plasma to Britain. Drew was appointed medical supervisor of the "Plasma for Britain" project. His notable contribution at this time was to transform the test tube methods of many blood researchers into the first successful mass production techniques.

Nevertheless, the decision was made to develop a dried plasma package for the armed forces, as it would reduce breakage and make transportation, packaging, and storage much simpler. The resulting dried plasma package came in two tin cans containing 400 cc bottles. One bottle contained enough distilled water to reconstitute the dried plasma contained within the other bottle. In about three minutes, the plasma would be ready to use and could stay fresh for around four hours. The Blood for Britain program operated successfully for five months, with almost 15,000 people donating blood and over 5,500 vials of blood plasma collected.

Following the Supplying Blood Plasma to England project, Drew was named director of the Red Cross blood bank and assistant director of the National Research Council, in charge of blood collection for the United States Army and Navy. Drew argued against the armed forces directive that blood/plasma was to be separated by the race of the donor. Drew insisted that there was no racial difference in human blood and that the policy would lead to needless deaths as soldiers and sailors were required to wait for "same race" blood.

By the end of the war the American Red Cross had provided enough blood for over six million plasma packages. Most of the surplus plasma was returned to the United States for civilian use. Serum albumin replaced dried plasma for combat use during the Korean War.

Plasma donation

A machine being used for plasma donation

Plasma as a blood product prepared from blood donations is used in blood transfusions, typically as fresh frozen plasma (FFP) or Plasma Frozen within 24 hours after phlebotomy (PF24). When donating whole blood or packed red blood cell (PRBC) transfusions, O- is the most desirable and is considered a "universal donor," since it has neither A nor B antigens and can be safely transfused to most recipients. Type AB+ is the "universal recipient" type for PRBC donations. However, for plasma the situation is somewhat reversed. Blood donation centers will sometimes collect only plasma from AB donors through apheresis, as their plasma does not contain the antibodies that may cross react with recipient antigens. As such, AB is often considered the "universal donor" for plasma. Special programs exist just to cater to the male AB plasma donor, because of concerns about transfusion related acute lung injury (TRALI) and female donors who may have higher leukocyte antibodies. However, some studies show an increased risk of TRALI despite increased leukocyte antibodies in women who have been pregnant.
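The ABO logic in this paragraph can be made explicit in a small sketch (illustrative only; it ignores Rh and all real-world transfusion-service rules): donor plasma is compatible when it carries no antibody against the recipient's A or B antigens, the reverse of red-cell compatibility.

```python
# Which antibodies each ABO group carries in its plasma, and which antigens
# sit on its red cells. Plasma is compatible when no donor antibody targets
# a recipient antigen.
ANTIBODIES_IN_PLASMA = {
    "O":  {"anti-A", "anti-B"},
    "A":  {"anti-B"},
    "B":  {"anti-A"},
    "AB": set(),  # no anti-A or anti-B, hence the "universal plasma donor"
}
ANTIGENS_ON_CELLS = {"O": set(), "A": {"A"}, "B": {"B"}, "AB": {"A", "B"}}

def plasma_compatible(donor: str, recipient: str) -> bool:
    """True if donor plasma has no antibody targeting a recipient antigen."""
    targets = {"anti-" + antigen for antigen in ANTIGENS_ON_CELLS[recipient]}
    return not (ANTIBODIES_IN_PLASMA[donor] & targets)

print(plasma_compatible("AB", "O"))  # True: AB plasma fits any ABO recipient
print(plasma_compatible("O", "A"))   # False: O plasma carries anti-A
```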

United Kingdom

Following fears of variant Creutzfeldt-Jakob disease (vCJD) being spread through the blood supply, the British government began to phase out blood plasma from U.K. donors and by the end of 1999 had imported all blood products made with plasma from the United States. In 2002, the British government purchased Life Resources Incorporated, an American blood supply company, to import plasma. The company became Plasma Resources UK (PRUK) which owned Bio Products Laboratory. In 2013, the British government sold an 80% stake in PRUK to American hedge fund Bain Capital, in a deal estimated to be worth £200 million. The sale was met with criticism in the UK. In 2009, the U.K. stopped importing plasma from the United States, as it was no longer a viable option due to regulatory and jurisdictional challenges.

At present (2024), blood donated in the United Kingdom is used by UK Blood Services for the manufacture of plasma blood components (Fresh Frozen Plasma (FFP) and cryoprecipitate). However, plasma from UK donors is still not used for the commercial manufacture of fractionated plasma medicines.

Synthetic blood plasma

Simulated body fluid (SBF) is a solution having a similar ion concentration to that of human blood plasma. SBF is normally used for the surface modification of metallic implants, and more recently in gene delivery applications.

Knockout mouse

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Knockout_mouse
...