
Thursday, January 29, 2015

Gene therapy


From Wikipedia, the free encyclopedia
[Figure: Gene therapy using an adenovirus vector. A new gene is inserted into a cell using an adenovirus. If the treatment is successful, the new gene will make a functional protein to treat a disease.]

Gene therapy is the use of nucleic acid polymers as drugs to treat disease by therapeutic delivery into a patient's cells, where they are either expressed as proteins, interfere with the expression of proteins, or possibly even correct genetic mutations. The most common form of gene therapy involves using DNA that encodes a functional, therapeutic gene to replace a mutated gene. In gene therapy, the nucleic acid molecule is packaged within a "vector", which is used to get the molecule inside cells within the body.

Gene therapy was first conceptualized in 1972, with the authors urging caution before commencing gene therapy studies in humans. The first FDA-approved gene therapy experiment in the United States occurred in 1990, when Ashanti DeSilva was treated for ADA-SCID.[1] By January 2014, about 2,000 clinical trials had been conducted or had been approved using a number of techniques for gene therapy.[2]

Although early clinical failures led many to dismiss gene therapy as over-hyped, clinical successes since 2006 have bolstered new optimism in the promise of gene therapy. These include successful treatment of patients with the retinal disease Leber's congenital amaurosis,[3][4][5][6] X-linked SCID,[7] ADA-SCID,[8][9] adrenoleukodystrophy,[10] chronic lymphocytic leukemia (CLL),[11] acute lymphocytic leukemia (ALL),[12] multiple myeloma,[13] haemophilia[9] and Parkinson's disease.[14] These clinical successes have led to a renewed interest in gene therapy, with several articles in scientific and popular publications calling for continued investment in the field[15][16] and between 2013 and April 2014, US companies invested over $600 million in gene therapy.[17]

The first commercial gene therapy, Gendicine, was approved in China in 2003 for the treatment of certain cancers.[18] Glybera, a treatment for a rare inherited disorder, became the first gene therapy treatment approved for clinical use in either Europe or the United States in 2012, after its endorsement by the European Commission.[19][20]

Approach

Following early advances in the genetic engineering of bacteria, cells, and small animals, scientists began considering how the technique could be applied to medicine: could human chromosomes be modified to treat disease? Two main approaches were considered: adding a gene to replace one that was not working properly, or disrupting genes that were not working properly.[21] Scientists focused on diseases caused by single-gene defects, such as cystic fibrosis, haemophilia, muscular dystrophy, thalassemia, and sickle cell anemia. As of 2014, gene therapy was still generally an experimental technique, although in 2012 Glybera became the first gene therapy treatment approved for clinical use in either Europe or the United States, after its endorsement by the European Commission, as a treatment for a disease caused by a defect in a single gene, lipoprotein lipase.[19][20]

In gene therapy, DNA must be administered to the patient, get to the cells that need repair, enter the cell, and express a protein in a medically useful way.[22] Generally the DNA is incorporated into an engineered virus that serves as a vector, to get the DNA through the bloodstream, into cells, and incorporated into a chromosome.[23][24] However, so-called naked DNA approaches have also been explored, especially in the context of vaccine development.[25]

Generally, efforts have focused on administering a gene that causes a needed protein to be expressed. However, as understanding of the function of nucleases such as zinc finger nucleases has developed, efforts have begun to incorporate genes encoding nucleases into chromosomes; the expressed nucleases then "edit" the chromosome, disrupting disease-causing genes. As of 2014 these approaches had been limited to taking cells from patients, delivering the nuclease gene to the cells, and then administering the transformed cells back to patients.[26]

Other technologies in which nucleic acids are developed as drugs include antisense oligonucleotides and small interfering RNA. To the extent that these technologies do not seek to alter the chromosome, but instead are intended to interact directly with other biomolecules such as RNA, they are generally not considered "gene therapy" per se.[citation needed]

Types of gene therapy

Gene therapy may be classified into the following two types, only one of which has been used in humans:

Somatic gene therapy

As the name suggests, in somatic gene therapy the therapeutic genes are transferred into the somatic (non-reproductive) cells of a patient. Any modifications and effects are restricted to the individual patient and are not inherited by the patient's offspring or later generations. Somatic gene therapy represents the mainstream of current basic and clinical research, in which therapeutic DNA (either integrated in the genome or as an external episome or plasmid) is used to treat disease in an individual.

Several somatic cell gene transfer experiments are currently in clinical trials with varied success. Over 600 clinical trials utilizing somatic cell therapy are underway in the United States. Most of these trials focus on treating severe genetic disorders, including immunodeficiencies, haemophilia, thalassaemia, and cystic fibrosis. These disorders are good candidates for somatic cell therapy because they are caused by single gene defects. While somatic cell therapy is promising for treatment, a complete correction of a genetic disorder or the replacement of multiple genes in somatic cells is not yet possible. Only a few of the many clinical trials are in the advanced stages.[27]

Germline gene therapy

In germline gene therapy, germ cells (sperm or eggs) are modified by the introduction of functional genes, which are integrated into their genomes. Because germ cells combine to form a zygote that divides to produce all the other cells in an organism, modifying a germ cell means that all the cells of the resulting organism will contain the modified gene, making the therapy heritable by later generations. Although this should, in theory, be highly effective in counteracting genetic disorders and hereditary diseases, some jurisdictions, including Australia, Canada, Germany, Israel, Switzerland, and the Netherlands,[28] prohibit its application in human beings, at least for the present, for technical and ethical reasons, including insufficient knowledge about possible risks to future generations[28] and its higher risk compared with somatic gene therapy (e.g. somatic therapy using non-integrative vectors).[29] The USA has no federal legislation specifically addressing human germline or somatic genetic modification (beyond the FDA testing regulations for therapies in general).[28][30][31][32]

Vectors in gene therapy

Gene therapy utilizes the delivery of DNA into cells, which can be accomplished by a number of methods. The two major classes of methods are those that use recombinant viruses (sometimes called biological nanoparticles or viral vectors) and those that use naked DNA or DNA complexes (non-viral methods).

Viruses

All viruses bind to their hosts and introduce their genetic material into the host cell as part of their replication cycle. This has been recognized as a plausible strategy for gene therapy: the viral DNA is removed and the virus is used as a vehicle to deliver the therapeutic DNA.
A number of viruses have been used for human gene therapy, including retrovirus, adenovirus, lentivirus, herpes simplex virus, vaccinia, pox virus, and adeno-associated virus.[2]

Non-viral methods

Non-viral methods can present certain advantages over viral methods, such as large scale production and low host immunogenicity. Previously, low levels of transfection and expression of the gene held non-viral methods at a disadvantage; however, recent advances in vector technology have yielded molecules and techniques that approach the transfection efficiencies of viruses.

There are several methods for non-viral gene therapy, including the injection of naked DNA, electroporation, the gene gun, sonoporation, magnetofection, and the use of oligonucleotides, lipoplexes, dendrimers, and inorganic nanoparticles.

Technological hurdles

Some of the unsolved problems with the technology underlying gene therapy include:
  • Short-lived nature of gene therapy – Before gene therapy can become a permanent cure for any condition, the therapeutic DNA introduced into target cells must remain functional and the cells containing the therapeutic DNA must be long-lived and stable. Problems with integrating therapeutic DNA into the genome and the rapidly dividing nature of many cells prevent gene therapy from achieving any long-term benefits. Patients will have to undergo multiple rounds of gene therapy.
  • Immune response – Any time a foreign object is introduced into human tissues, the immune system is stimulated to attack the invader. The risk of stimulating the immune system in a way that reduces gene therapy effectiveness is always a possibility. Furthermore, the immune system's enhanced response to invaders that it has seen before makes it difficult for gene therapy to be repeated in patients.
  • Problems with viral vectors – Viruses, the carrier of choice in most gene therapy studies, present a variety of potential problems to the patient: toxicity, immune and inflammatory responses, and gene control and targeting issues. In addition, there is always the fear that the viral vector, once inside the patient, may recover its ability to cause disease.
  • Multigene disorders – Conditions or disorders that arise from mutations in a single gene are the best candidates for gene therapy. Unfortunately, some of the most commonly occurring disorders, such as heart disease, high blood pressure, Alzheimer's disease, arthritis, and diabetes, are caused by the combined effects of variations in many genes. Multigene or multifactorial disorders such as these would be especially difficult to treat effectively using gene therapy.
  • For countries in which germline gene therapy is illegal, indications[33] that the Weismann barrier (between soma and germline) can be breached are relevant: therapeutic DNA could spread to the testes and thereby affect the germline, contrary to the intentions of the therapy.
  • Chance of inducing a tumor (insertional mutagenesis) – If the DNA is integrated in the wrong place in the genome, for example in a tumor suppressor gene, it could induce a tumor. This has occurred in clinical trials for X-linked severe combined immunodeficiency (X-SCID) patients, in which hematopoietic stem cells were transduced with a corrective transgene using a retrovirus, and this led to the development of T cell leukemia in 3 of 20 patients.[34][35] One possible solution for this is to add a functional tumor suppressor gene onto the DNA to be integrated; however, this poses its own problems, since the longer the DNA is, the harder it is to integrate it efficiently into cell genomes. The development of CRISPR technology in 2012 allowed researchers to make much more precise changes at exact locations in the genome.[36]
  • Cost – Only a small number of patients can be treated with gene therapy because of its extremely high cost (alipogene tiparvovec, or Glybera, for example, at a cost of $1.6 million per patient, was reported in 2013 to be the most expensive drug in the world).[37][38]

Deaths

Three patients' deaths have been reported in gene therapy trials, putting the field under close scrutiny. The first was that of Jesse Gelsinger in 1999,[39] which represented a major setback in the field. One X-SCID patient died of leukemia following gene therapy treatment in 2003.[1] In 2007, a rheumatoid arthritis patient died from an infection in a gene therapy trial; a subsequent investigation concluded that the death was not related to her gene therapy treatment.[40]

Development of gene therapy technology

1970s and earlier

In 1972 Friedmann and Roblin authored a paper in Science titled "Gene therapy for human genetic disease?"[41] Rogers (1970) was cited for proposing that exogenous good DNA be used to replace the defective DNA in those who suffer from genetic defects.[42]

1980s

In 1984 a retrovirus vector system was designed which could efficiently insert foreign genes into mammalian chromosomes.[43]

1990s

The first approved gene therapy case in the United States took place on 14 September 1990, at the National Institutes of Health, under the direction of Professor William French Anderson.[44] It was performed on a four-year-old girl named Ashanti DeSilva, as a treatment for a genetic defect that left her with ADA-SCID, a severe immune system deficiency. The effects were successful, but only temporary.[45]

In 1992 Doctor Claudio Bordignon, working at the Vita-Salute San Raffaele University in Milan, Italy, performed the first gene therapy procedure using hematopoietic stem cells as vectors to deliver genes intended to correct hereditary diseases.[46] In 2002 this work led to the publication of the first successful gene therapy treatment for adenosine deaminase deficiency (ADA-SCID). The success of a multi-center trial for treating children with SCID (severe combined immune deficiency, or "bubble boy" disease) run between 2000 and 2002 was questioned when two of the ten children treated at the trial's Paris center developed a leukemia-like condition. Clinical trials were halted temporarily in 2002, but resumed after regulatory review of the protocol in the United States, the United Kingdom, France, Italy, and Germany.[47]

In 1993 Andrew Gobea was born with severe combined immunodeficiency (SCID), identified by genetic screening before birth. Blood containing stem cells was removed from Andrew's placenta and umbilical cord immediately after birth. The allele that codes for adenosine deaminase (ADA) was obtained and inserted into a retrovirus. The retroviruses and stem cells were mixed, after which the viruses inserted the gene into the stem cells' chromosomes. Stem cells containing the working ADA gene were injected into Andrew's bloodstream via a vein, and injections of the ADA enzyme were also given weekly. For four years T cells (white blood cells) produced by the stem cells made ADA enzymes using the ADA gene; after four years more treatment was needed.[citation needed]

The 1999 death of Jesse Gelsinger in a gene therapy clinical trial resulted in a significant setback to gene therapy research in the United States.[48][49] As a result, the U.S. FDA suspended several clinical trials pending the re-evaluation of ethical and procedural practices in the field.[50]

2000s

2002

Sickle-cell disease is successfully treated in mice.[51] The mice – which have essentially the same defect that causes sickle cell disease in humans – were induced, using a viral vector, to produce fetal hemoglobin (HbF), which normally ceases to be produced shortly after birth. In humans, the use of hydroxyurea to stimulate the production of HbF has long been shown to temporarily alleviate the symptoms of sickle cell disease. The researchers demonstrated that this method of gene therapy is a more permanent means of increasing production of the therapeutic HbF.[52]

A new gene therapy approach repairs errors in messenger RNA derived from defective genes. This technique has the potential to treat the blood disorder thalassaemia, as well as cystic fibrosis and some cancers.[53]

Researchers at Case Western Reserve University and Copernicus Therapeutics are able to create tiny liposomes 25 nanometers across that can carry therapeutic DNA through pores in the nuclear membrane.[54]

2003

In 2003 a University of California, Los Angeles research team inserted genes into the brain using liposomes coated in a polymer called polyethylene glycol. The transfer of genes into the brain is a significant achievement because viral vectors are too big to get across the blood–brain barrier. This method has potential for treating Parkinson's disease.[55]

RNA interference or gene silencing may be a new way to treat Huntington's disease. Short pieces of double-stranded RNA (short, interfering RNAs or siRNAs) are used by cells to degrade RNA of a particular sequence. If a siRNA is designed to match the RNA copied from a faulty gene, then the abnormal protein product of that gene will not be produced.[56]

Gendicine is a gene therapy to treat certain cancers; it delivers the tumor suppressor gene p53 using an engineered adenovirus. In 2003, it was approved in China for the treatment of head and neck squamous cell carcinoma.[18]

2006

In March 2006 an international group of scientists announced the successful use of gene therapy to treat two adult patients for X-linked chronic granulomatous disease, a disease affecting myeloid cells that results in a defective immune system. The study, published in Nature Medicine, is believed to be the first to show that gene therapy can cure diseases of the myeloid system.[57]

In May 2006 a team of scientists led by Dr. Luigi Naldini and Dr. Brian Brown from the San Raffaele Telethon Institute for Gene Therapy (HSR-TIGET) in Milan, Italy reported a breakthrough for gene therapy: a way to prevent the immune system from rejecting a newly delivered gene.[58] Similar to organ transplantation, gene therapy has been plagued by the problem of immune rejection. Delivery of the 'normal' gene has been difficult because the immune system recognizes the new gene as foreign and rejects the cells carrying it. To overcome this problem, the HSR-TIGET group utilized a newly uncovered network of genes regulated by molecules known as microRNAs. Dr. Naldini's group reasoned that they could use this natural function of microRNA to selectively turn off expression of their therapeutic gene in cells of the immune system and so prevent the gene from being found and destroyed. The researchers injected mice with the gene containing an immune-cell microRNA target sequence, and the mice did not reject the gene, as had previously occurred when vectors without the microRNA target sequence were used. This work has important implications for the treatment of hemophilia and other genetic diseases by gene therapy.

In August 2006, scientists at the National Institutes of Health (Bethesda, Maryland) successfully treated metastatic melanoma in two patients using killer T cells genetically retargeted to attack the cancer cells. This study constitutes one of the first demonstrations that gene therapy can be effective in treating cancer.[59]

In November 2006 Preston Nix from the University of Pennsylvania School of Medicine reported on VRX496, a gene-based immunotherapy for the treatment of human immunodeficiency virus (HIV) that uses a lentiviral vector for delivery of an antisense gene against the HIV envelope. In the Phase I trial enrolling five subjects with chronic HIV infection who had failed to respond to at least two antiretroviral regimens, a single intravenous infusion of autologous CD4 T cells genetically modified with VRX496 was safe and well tolerated. All patients had stable or decreased viral load; four of the five patients had stable or increased CD4 T cell counts. In addition, all five patients had stable or increased immune response to HIV antigens and other pathogens. This was the first evaluation of a lentiviral vector administered in U.S. Food and Drug Administration-approved human clinical trials for any disease.[60] Data from an ongoing Phase I/II clinical trial were presented at CROI 2009.[61]

2007

On 1 May 2007 Moorfields Eye Hospital and University College London's Institute of Ophthalmology announced the world's first gene therapy trial for inherited retinal disease. The first operation was carried out on a 23-year-old British male, Robert Johnson, in early 2007.[62] Leber's congenital amaurosis is an inherited blinding disease caused by mutations in the RPE65 gene. The results of a small clinical trial in children were published in the New England Journal of Medicine in April 2008.[63] The trial examined the safety of subretinal delivery of recombinant adeno-associated virus (AAV) carrying the RPE65 gene and found positive results, with patients showing a modest increase in vision and, perhaps more importantly, no apparent side-effects.

2008

In May 2008, two more groups, one at the University of Florida and another at the University of Pennsylvania, reported positive results in independent clinical trials using gene therapy to treat Leber's congenital amaurosis.
In all three clinical trials, patients recovered functional vision without apparent side-effects.[3][4][5][6] These studies, which used adeno-associated virus, have spawned a number of new studies investigating gene therapy for human retinal disease.

2009

In September 2009, the journal Nature reported that researchers at the University of Washington and University of Florida were able to give trichromatic vision to squirrel monkeys using gene therapy, a hopeful precursor to a treatment for color blindness in humans.[64] In November 2009, the journal Science reported that researchers succeeded at halting a fatal genetic disorder called adrenoleukodystrophy in two children using a lentivirus vector to deliver a functioning version of ABCD1, the gene that is mutated in the disorder.[65]

2010s

2010

A paper by Komáromy et al., published in April 2010, deals with gene therapy for a form of achromatopsia in dogs. Achromatopsia, or complete color blindness, is presented as an ideal model for developing gene therapy directed at cone photoreceptors. Cone function and day vision were restored for at least 33 months in two young dogs with achromatopsia. However, the therapy was less efficient in older dogs.[66]

In September 2010, it was announced that an 18-year-old male patient in France with beta-thalassemia major had been successfully treated with gene therapy.[67] Beta-thalassemia major is an inherited blood disease in which beta haemoglobin is missing and patients are dependent on regular lifelong blood transfusions.[68] A team directed by Dr. Philippe Leboulch (of the University of Paris, Bluebird Bio, and Harvard Medical School[69]) used a lentiviral vector to transduce the human β-globin gene into purified blood and marrow cells obtained from the patient in June 2007.[70] The patient's haemoglobin levels were stable at 9 to 10 g/dL, about a third of the haemoglobin contained the form introduced by the viral vector, and blood transfusions were no longer needed.[69][70] Further clinical trials were planned.[71] Bone marrow transplants are the only cure for thalassemia, but 75% of patients are unable to find a matching bone marrow donor.[69]

2011

In 2007 and 2008, a man being treated by Gero Hütter was cured of HIV by repeated hematopoietic stem cell transplantation (see also allogeneic stem cell transplantation, allogeneic bone marrow transplantation, allotransplantation) from a donor with a double CCR5-Δ32 mutation, which disables the CCR5 receptor; this cure was not completely accepted by the medical community until 2011.[72] The cure required complete ablation of the patient's existing bone marrow, which is very debilitating.

In August 2011, two of three subjects of a pilot study were confirmed to have been cured from chronic lymphocytic leukemia (CLL). The study carried out by the researchers at the University of Pennsylvania used genetically modified T cells to attack cells that expressed the CD19 protein to fight the disease.[11] In 2013, the researchers announced that 26 of 59 patients had achieved complete remission and the original patient had remained tumor-free.[73]

Human HGF plasmid DNA therapy of cardiomyocytes is being examined as a potential treatment for coronary artery disease as well as treatment for the damage that occurs to the heart after myocardial infarction.[74][75]

2012

The FDA approved Phase 1 clinical trials of the use of gene therapy on thalassemia major patients in the US. Researchers at Memorial Sloan Kettering Cancer Center in New York began to recruit 10 participants for the study in July 2012.[76] The study was expected to end in 2015.[77]

In July 2012, the European Medicines Agency recommended approval of a gene therapy treatment for the first time in either Europe or the United States. The treatment, called Alipogene tiparvovec (Glybera), compensates for lipoprotein lipase deficiency, which can cause severe pancreatitis.[78] The recommendation was endorsed by the European Commission in November 2012[19][20] and commercial rollout is expected in late 2014.[79][80]

In December 2012, it was reported that 10 of 13 patients with multiple myeloma were in remission "or very close to it" three months after being injected with a treatment involving genetically engineered T cells to target proteins NY-ESO-1 and LAGE-1 which exist only on cancerous myeloma cells. This procedure had been developed by a company called Adaptimmune.[13]

2013

In March 2013, researchers at the Memorial Sloan-Kettering Cancer Center in New York reported that three of five subjects with acute lymphocytic leukemia (ALL) had been in remission for five months to two years after treatment with genetically modified T cells that attacked cells bearing the CD19 protein on their surface, i.e. all B cells, cancerous or not. The researchers believed that the patients' immune systems would make normal T cells and B cells after a couple of months; however, they were given bone marrow as a precaution. One patient had relapsed and died, and one had died of a blood clot unrelated to the disease.[12]

Following encouraging Phase 1 trials, in April 2013, researchers in the UK and the US announced they were starting Phase 2 clinical trials (called CUPID2 and SERCA-LVAD) on 250 patients[81] at several hospitals in the US and Europe to use gene therapy to combat heart disease. These trials were designed to increase the levels of SERCA2a protein in the heart muscles and improve the function of these muscles.[82] The FDA granted this a Breakthrough Therapy Designation which would speed up the trial and approval process in the USA.[83]

In July 2013 the Italian San Raffaele Telethon Institute for Gene Therapy (HSR-TIGET) reported that six children with two severe hereditary diseases had been treated with a partially deactivated lentivirus to replace a faulty gene, and after 7–32 months the results were promising. Three of the children had metachromatic leukodystrophy, which causes children to lose cognitive and motor skills.[84] The other children had Wiskott-Aldrich syndrome, which leaves them open to infection, autoimmune diseases, and cancer due to a faulty immune system.[85]

In October 2013, the Great Ormond Street Hospital, London reported that two children born with adenosine deaminase severe combined immunodeficiency disease (ADA-SCID) had been treated with genetically engineered stem cells 18 months previously and their immune systems were showing signs of full recovery. Another three children treated since then were also making good progress. ADA-SCID children have no functioning immune system and are sometimes known as "bubble children."[9]

In October 2013, Amit Nathwani of the Royal Free London NHS Foundation Trust in London reported that they had treated six people with haemophilia in early 2011 using genetically engineered adeno-associated virus. Over two years later all six were still producing blood plasma clotting factor.[9][86]

2014

In January 2014, researchers at the University of Oxford reported that six people with choroideremia had been treated with a genetically engineered adeno-associated virus carrying a copy of the REP1 gene. Over periods of six months to two years, all showed improved sight. Choroideremia is an inherited genetic eye disease that previously had no treatment and that eventually leads to blindness.[87][88]

In March 2014 researchers at the University of Pennsylvania reported that 12 patients with HIV had been treated since 2009 in a trial with immune cells genetically engineered to carry a rare mutation known to protect against HIV (CCR5 deficiency). Results were promising.[89][90]

Speculative uses for gene therapy

Several uses for gene therapy have been speculated.

Gene doping

There is a risk that athletes might abuse gene therapy technologies to improve their athletic performance.[91] This idea is known as gene doping; it is not yet known to be in use, but a number of gene therapies have potential applications to athletic enhancement. In some cases, scholars have argued that genetic technology could make doping safer and thus more ethically acceptable. For example, Kayser et al. argue that, if anything, gene doping will level the playing field if all athletes receive equal access: it would ensure that all athletes compete solely on how well they perform relative to their maximum potential. In other cases, scientists and medics consider that any application of a therapeutic intervention for non-therapeutic or enhancing purposes compromises the ethical foundation of medicine and the spirit of sport.[92]

Human genetic engineering

It has been speculated that genetic engineering could be used to change physical appearance, metabolism, and even improve physical capabilities and mental faculties such as memory and intelligence, although for now these uses are limited to science fiction. These speculations have in turn led to ethical concerns and claims, including the belief that every fetus has an inherent right to remain genetically unmodified, the belief that parents hold the rights to modify their unborn offspring, and the belief that every child has the right to be born free from preventable diseases.[93][94][95] On the other hand, others claim that many people already try to improve themselves through diet, exercise, education, cosmetics, and plastic surgery, and that accomplishing these goals through genetics could be more efficient and worthwhile.[96][97] This view sees the prevention of genetic diseases as a duty to humankind, to prevent harm to future generations.

Genetic enhancement is considered morally contentious,[98] however, and access to enhancement procedures will probably be regulated. Possible regulatory schemes include a complete ban of genetic enhancement, provision of genetic enhancement procedures to everyone, or a system of professional self-regulation.

Perhaps the most practical regulatory approach is the self-regulation of health professionals. The American Medical Association’s Council on Ethical and Judicial Affairs has stated that “genetic interventions to enhance traits should be considered permissible only in severely restricted situations: (1) clear and meaningful benefits to the fetus or child; (2) no trade-off with other characteristics or traits; and (3) equal access to the genetic technology, irrespective of income or other socioeconomic characteristics.”[99]

Evidence regarding clinical use of gene therapy

Data from three trials of topical cystic fibrosis transmembrane conductance regulator (CFTR) gene therapy, delivered as a mist inhaled into the lungs of cystic fibrosis patients with lung infections, were reported in 2013 not to support its clinical use; the outcomes studied in these trials were not of clinical relevance.[100]

Clinical trials of gene therapy for sickle cell disease were started in 2014[101][102] although one review failed to find them.[103]

Regulations

Policies on genetic modification tend to fall in the realm of general guidelines about human-involved biomedical research. Universal restrictions and documents have been made by international organizations to set a general standard on the issue of involving humans directly in research.[citation needed]

One key regulation comes from the Declaration of Helsinki (Ethical Principles for Medical Research Involving Human Subjects), last amended by the World Medical Association's General Assembly in 2008.[104] This document focuses on the principles physicians and researchers must consider when involving humans as research subjects. Additionally, the Statement on Gene Therapy Research initiated by the Human Genome Organization (HUGO) in 2001 provides a legal baseline for all countries. HUGO's document reiterates the organization's common principles that researchers must follow when conducting human genetic research, including the recognition of human freedom and adherence to human rights, and declares recommendations for somatic gene therapy, including a call for researchers and governments to attend to public concerns about the pros, cons, and ethical issues of the research.[105]

United States

No federal legislation specifically lays out protocols or restrictions for either germline or somatic human genetic engineering. Instead, the subject is governed by overlapping regulations from local and federal agencies, including, within the Department of Health and Human Services, the Food and Drug Administration and the Recombinant DNA Advisory Committee of the National Institutes of Health. Additionally, researchers who wish to receive federal funds for research on an investigational new drug application, which is commonly the case for somatic human genetic engineering, are required to obey international and federal guidelines for the protection of human test subjects.[106]

The National Institutes of Health (NIH) mainly serves as the gene therapy regulator for federally funded research institutions and projects. Privately funded human genetic research can only be encouraged to follow these regulations voluntarily. The NIH provides funding for lab research that develops or enhances devices utilized in human genetic engineering, and evaluates the ethics and quality of the science in current research labs. The NIH maintains a mandatory registry of human genetic engineering research protocols from all federally funded projects. An advisory committee to the NIH published a set of guidelines on the manipulation of genes.[107] The NIH guidelines discuss safety considerations for the lab as well as for any human patient test subject, and cover a wide range of experimental types involving any kind of gene transfer or alteration. Several sections specifically pertain to human genetic engineering, including Section III-C-1, which states the review process researchers must undergo and the aspects considered when seeking approval to begin clinical research involving the transfer of genetic material into a patient. This document is an important tool that scientists must follow in order to further scientific progress in the field of somatic cell therapy.[108]

The United States Food and Drug Administration (FDA) regulates the quality and safety of gene therapy products and supervises how these products are applied clinically. Therapeutic alteration of the human genome falls under the same regulatory requirements as any other medical treatment. Research involving human subjects, such as clinical trials, must be reviewed and approved by the FDA and an Institutional Review Board.[109][110]


Wednesday, January 28, 2015

Antimatter


From Wikipedia, the free encyclopedia

In particle physics, antimatter is material composed of antiparticles, which have the same mass as particles of ordinary matter but opposite charge and other particle properties, such as lepton and baryon number and quantum spin. Encounters between particles and antiparticles lead to the annihilation of both, giving rise to varying proportions of high-energy photons (gamma rays), neutrinos, and lower-mass particle–antiparticle pairs. Setting aside the mass of any product neutrinos, which carry off energy that generally remains unavailable, the end result of annihilation is a release of energy available to do work, proportional to the total matter and antimatter mass, in accord with the mass–energy equivalence equation, E = mc².[1]

Antiparticles bind with each other to form antimatter just as ordinary particles bind to form normal matter. For example, a positron (the antiparticle of the electron) and an antiproton can form an antihydrogen atom. Physical principles indicate that complex antimatter atomic nuclei are possible, as well as anti-atoms corresponding to the known chemical elements. To date, however, anti-atoms more complex than antihelium have neither been artificially produced nor observed in nature. Studies of cosmic rays have identified both positrons and antiprotons, presumably produced by high-energy collisions between particles of ordinary matter.

There is considerable speculation as to why the observable universe is apparently composed almost entirely of ordinary matter, as opposed to a more symmetric combination of matter and antimatter. This asymmetry of matter and antimatter in the visible universe is one of the greatest unsolved problems in physics.[2] The process by which this asymmetry between particles and antiparticles developed is called baryogenesis.

Antimatter in the form of anti-atoms is one of the most difficult materials to produce. Antimatter in the form of individual anti-particles, however, is commonly produced by particle accelerators and in some types of radioactive decay.
[Figure: There are some 500 terrestrial gamma-ray flashes daily; the red dots show those the Fermi Gamma-ray Space Telescope spotted through 2010.]
[Video: How scientists used the Fermi Gamma-ray Space Telescope's gamma-ray detector to uncover bursts of antimatter from thunderstorms.]

History of the concept

The idea of negative matter appears in past theories of matter that have now been abandoned. Using the once popular vortex theory of gravity, the possibility of matter with negative gravity was discussed by William Hicks in the 1880s. Between the 1880s and the 1890s, Karl Pearson proposed the existence of "squirts"[3] and sinks of the flow of aether. The squirts represented normal matter and the sinks represented negative matter. Pearson's theory required a fourth dimension for the aether to flow from and into.[4]

The term antimatter was first used by Arthur Schuster in two rather whimsical letters to Nature in 1898.[5] He hypothesized antiatoms as well as whole antimatter solar systems, and discussed the possibility of matter and antimatter annihilating each other. Schuster's ideas were not a serious theoretical proposal, merely speculation, and like the previous ideas, differed from the modern concept of antimatter in that his antimatter possessed negative gravity.[6]

The modern theory of antimatter began in 1928, with a paper[7] by Paul Dirac. Dirac realised that his relativistic version of the Schrödinger wave equation for electrons predicted the possibility of antielectrons. These were discovered by Carl D. Anderson in 1932 and named positrons (a contraction of "positive electrons"). Although Dirac did not himself use the term antimatter, its use follows on naturally enough from antielectrons, antiprotons, etc.[8] A complete periodic table of antimatter was envisaged by Charles Janet in 1929.[9]

Notation

One way to denote an antiparticle is by adding a bar over the particle's symbol. For example, the proton and antiproton are denoted as p and p̅, respectively. The same rule applies if one were to address a particle by its constituent components: a proton is made up of uud quarks, so an antiproton must be formed from u̅u̅d̅ antiquarks. Another convention is to distinguish particles by their electric charge; thus, the electron and positron are denoted simply as e− and e+, respectively. However, to prevent confusion, the two conventions are never mixed.
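
To make the two conventions concrete, here is a minimal LaTeX sketch (added here as an illustration; it is not part of the original article) that typesets the symbols described above:

    \documentclass{article}
    \begin{document}
    % Bar notation: an overline marks the antiparticle
    Proton $p$ and antiproton $\bar{p}$; written by its constituents,
    an antiproton is $\bar{u}\bar{u}\bar{d}$.

    % Charge notation: a superscript sign distinguishes the pair
    Electron $e^{-}$ and positron $e^{+}$.
    \end{document}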

Origin and asymmetry

Almost everything observable from the Earth seems to be made of matter rather than antimatter. If antimatter-dominated regions of space existed, the gamma rays produced in annihilation reactions along the boundary between matter and antimatter regions would be detectable.[10]

Antiparticles are created everywhere in the universe where high-energy particle collisions take place. High-energy cosmic rays impacting Earth's atmosphere (or any other matter in the Solar System) produce minute quantities of antiparticles in the resulting particle jets, which are immediately annihilated by contact with nearby matter. They may similarly be produced in regions like the center of the Milky Way and other galaxies, where very energetic celestial events occur (principally the interaction of relativistic jets with the interstellar medium). The presence of the resulting antimatter is detectable by the two gamma rays produced every time positrons annihilate with nearby matter. The frequency and wavelength of the gamma rays indicate that each carries 511 keV of energy (i.e., the rest mass of an electron multiplied by c²).
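
As a quick check of the 511 keV figure, the following minimal Python sketch (an illustration added here; the constants are standard CODATA values, not taken from the article) computes the electron rest energy:

    # Electron rest energy, E = m c^2, expressed in keV.
    m_e = 9.10938356e-31   # electron rest mass, kg
    c   = 2.99792458e8     # speed of light, m/s
    eV  = 1.602176634e-19  # joules per electronvolt

    E_keV = m_e * c**2 / eV / 1e3
    print(f"{E_keV:.1f} keV")  # prints 511.0 keV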

Recent observations by the European Space Agency's INTEGRAL satellite may explain the origin of a giant cloud of antimatter surrounding the galactic center. The observations show that the cloud is asymmetrical and matches the pattern of X-ray binaries (binary star systems containing black holes or neutron stars), mostly on one side of the galactic center. While the mechanism is not fully understood, it is likely to involve the production of electron–positron pairs, as ordinary matter gains tremendous energy while falling into a stellar remnant.[11][12]

Antimatter may exist in relatively large amounts in far-away galaxies due to cosmic inflation in the primordial time of the universe. Antimatter galaxies, if they exist, are expected to have the same chemistry and absorption and emission spectra as normal-matter galaxies, and their astronomical objects would be observationally identical, making them difficult to distinguish.[13] NASA is trying to determine if such galaxies exist by looking for X-ray and gamma-ray signatures of annihilation events in colliding superclusters.[14]

Natural production

Positrons are produced naturally in β+ decays of naturally occurring radioactive isotopes (for example, potassium-40) and in interactions of gamma quanta (emitted by radioactive nuclei) with matter. Antineutrinos are another kind of antiparticle created by natural radioactivity (β− decay). Many different kinds of antiparticles are also produced by (and contained in) cosmic rays. In January 2011, research presented at an American Astronomical Society meeting reported antimatter (positrons) originating above thunderstorm clouds; positrons are produced in gamma-ray flashes created by electrons accelerated by strong electric fields in the clouds.[15] Antiprotons have also been found in the Van Allen belts around the Earth by the PAMELA module.[16][17]

Antiparticles are also produced in any environment with a sufficiently high temperature (mean particle energy greater than the pair production threshold). During the period of baryogenesis, when the universe was extremely hot and dense, matter and antimatter were continually produced and annihilated. The presence of remaining matter, and absence of detectable remaining antimatter,[18] also called baryon asymmetry, is attributed to CP-violation: a violation of the CP-symmetry relating matter to antimatter. The exact mechanism of this violation during baryogenesis remains a mystery.

Positrons can also be produced by radioactive β+ decay, a mechanism that occurs both naturally and artificially.

Observation in cosmic rays

Satellite experiments have found evidence of positrons and a few antiprotons in primary cosmic rays, amounting to less than 1% of the particles in primary cosmic rays. These do not appear to be the products of large amounts of antimatter from the Big Bang, or indeed complex antimatter in the universe. Rather, they appear to consist of only these two elementary particles, newly made in energetic processes.[citation needed]
Preliminary results from the presently operating Alpha Magnetic Spectrometer (AMS-02) on board the International Space Station show that positrons in the cosmic rays arrive with no directionality, and with energies that range from 10 GeV to 250 GeV. In September 2014, new results with almost twice as much data were presented in a talk at CERN and published in Physical Review Letters.[19][20] A new measurement of positron fraction up to 500 GeV was reported, showing that the positron fraction peaks at a maximum of about 16% of total electron+positron events, around an energy of 275 ± 32 GeV. At higher energies, up to 500 GeV, the ratio of positrons to electrons begins to fall again. The absolute flux of positrons also begins to fall before 500 GeV, but peaks at energies far higher than electron energies, which peak about 10 GeV.[21] One suggested interpretation is that these results are due to positron production in annihilation events of massive dark matter particles.[22]

Cosmic ray antiprotons also have a much higher energy than their normal-matter counterparts (protons). They arrive at Earth with a characteristic energy maximum of 2 GeV, indicating their production in a fundamentally different process from cosmic ray protons, which on average have only one-sixth of the energy.[23]

There is no evidence of complex antimatter atomic nuclei, such as antihelium nuclei (i.e., anti-alpha particles), in cosmic rays, but these are actively being searched for. A prototype of AMS-02, designated AMS-01, was flown into space aboard the Space Shuttle Discovery on STS-91 in June 1998. By not detecting any antihelium at all, AMS-01 established an upper limit of 1.1×10⁻⁶ for the antihelium to helium flux ratio.[24]

Artificial production

Positrons

Positrons were reported[25] in November 2008 to have been generated by Lawrence Livermore National Laboratory in larger numbers than by any previous synthetic process. A laser drove electrons through a millimeter-radius gold target's nuclei, which caused the incoming electrons to emit energy quanta that decayed into both matter and antimatter. Positrons were detected at a higher rate and in greater density than ever previously detected in a laboratory. Previous experiments made smaller quantities of positrons using lasers and paper-thin targets; however, new simulations showed that short, ultra-intense lasers and millimeter-thick gold are a far more effective source.[26]

Antiprotons, antineutrons, and antinuclei

The existence of the antiproton was experimentally confirmed in 1955 by University of California, Berkeley physicists Emilio Segrè and Owen Chamberlain, for which they were awarded the 1959 Nobel Prize in Physics.[27] An antiproton consists of two up antiquarks and one down antiquark (u̅u̅d̅). The properties of the antiproton that have been measured all match the corresponding properties of the proton, with the exception of the antiproton having opposite electric charge and magnetic moment from the proton. Shortly afterwards, in 1956, the antineutron was discovered in proton–proton collisions at the Bevatron (Lawrence Berkeley National Laboratory) by Bruce Cork and colleagues.[28]

In addition to antibaryons, anti-nuclei consisting of multiple bound antiprotons and antineutrons have been created. These are typically produced at energies far too high to form antimatter atoms (with bound positrons in place of electrons). In 1965, a group of researchers led by Antonino Zichichi reported production of nuclei of antideuterium at the Proton Synchrotron at CERN.[29] At roughly the same time, observations of antideuterium nuclei were reported by a group of American physicists at the Alternating Gradient Synchrotron at Brookhaven National Laboratory.[30]

Antihydrogen atoms

In 1995, CERN announced that it had successfully brought into existence nine antihydrogen atoms by implementing the SLAC/Fermilab concept during the PS210 experiment. The experiment was performed using the Low Energy Antiproton Ring (LEAR), and was led by Walter Oelert and Mario Macri.[citation needed] Fermilab soon confirmed the CERN findings by producing approximately 100 antihydrogen atoms at their facilities. The antihydrogen atoms created during PS210 and subsequent experiments (at both CERN and Fermilab) were extremely energetic ("hot") and were not well suited to study. To resolve this hurdle, and to gain a better understanding of antihydrogen, two collaborations were formed in the late 1990s, namely, ATHENA and ATRAP. In 2005, ATHENA disbanded and some of the former members (along with others) formed the ALPHA Collaboration, which is also based at CERN. The primary goal of these collaborations is the creation of less energetic ("cold") antihydrogen, better suited to study.[citation needed]

In 1999, CERN activated the Antiproton Decelerator, a device capable of decelerating antiprotons from 3.5 GeV to 5.3 MeV – still too "hot" to produce study-effective antihydrogen, but a huge leap forward. In late 2002 the ATHENA project announced that they had created the world's first "cold" antihydrogen.[31] The ATRAP project released similar results very shortly thereafter.[32] The antiprotons used in these experiments were cooled by decelerating them with the Antiproton Decelerator, passing them through a thin sheet of foil, and finally capturing them in a Penning–Malmberg trap.[33] The overall cooling process is workable but highly inefficient: approximately 25 million antiprotons leave the Antiproton Decelerator, and roughly 25,000 make it to the Penning–Malmberg trap – about 1/1,000, or 0.1%, of the original amount.

The antiprotons are still hot when initially trapped. To cool them further, they are mixed into an electron plasma. The electrons in this plasma cool via cyclotron radiation, and then sympathetically cool the antiprotons via Coulomb collisions. Eventually, the electrons are removed by the application of short-duration electric fields, leaving the antiprotons with energies less than 100 meV.[34] While the antiprotons are being cooled in the first trap, a small cloud of positrons is captured from radioactive sodium in a Surko-style positron accumulator.[35] This cloud is then recaptured in a second trap near the antiprotons. Manipulations of the trap electrodes then tip the antiprotons into the positron plasma, where some combine with positrons to form antihydrogen. This neutral antihydrogen is unaffected by the electric and magnetic fields used to trap the charged positrons and antiprotons, and within a few microseconds the antihydrogen hits the trap walls, where it annihilates. Some hundreds of millions of antihydrogen atoms have been made in this fashion.

Most of the sought-after high-precision tests of the properties of antihydrogen could only be performed if the antihydrogen were trapped, that is, held in place for a relatively long time. While antihydrogen atoms are electrically neutral, the spins of their component particles produce a magnetic moment. These magnetic moments can interact with an inhomogeneous magnetic field; some of the antihydrogen atoms can be attracted to a magnetic minimum. Such a minimum can be created by a combination of mirror and multipole fields.[36] Antihydrogen can be trapped in such a magnetic minimum (minimum-B) trap; in November 2010, the ALPHA collaboration announced that they had so trapped 38 antihydrogen atoms for about a sixth of a second.[37][38] This was the first time that neutral antimatter had been trapped.

On 26 April 2011, ALPHA announced that they had trapped 309 antihydrogen atoms, some for as long as 1,000 seconds (about 17 minutes). This was longer than neutral antimatter had ever been trapped before.[39][40] ALPHA has used these trapped atoms to initiate research into the spectral properties of the antihydrogen.[41]

The biggest limiting factor in the large-scale production of antimatter is the availability of antiprotons. Recent data released by CERN states that, when fully operational, their facilities are capable of producing ten million antiprotons per minute.[42] Assuming a 100% conversion of antiprotons to antihydrogen, it would take 100 billion years to produce 1 gram or 1 mole of antihydrogen (approximately 6.02×10²³ atoms of antihydrogen).
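
That estimate is simple arithmetic; a short Python sketch (illustrative only, under the article's stated assumptions of ten million antiprotons per minute and perfect conversion) reproduces it:

    # Time to accumulate one mole (~1 g) of antihydrogen at CERN's stated rate.
    avogadro   = 6.02214e23        # atoms in one mole
    rate       = 1.0e7             # antiprotons produced per minute
    min_per_yr = 60 * 24 * 365.25  # minutes in a year

    years = avogadro / rate / min_per_yr
    print(f"{years:.2e} years")    # ~1.1e11, on the order of 100 billion years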

Antihelium

Antihelium-3 nuclei (³He) were first observed in the 1970s in proton–nucleus collision experiments[43] and later created in nucleus–nucleus collision experiments.[44] Nucleus–nucleus collisions produce antinuclei through the coalescence of antiprotons and antineutrons created in these reactions. In 2011, the STAR detector reported the observation of artificially created antihelium-4 nuclei (anti-alpha particles) (⁴He) from such collisions.[45]

Preservation

Antimatter cannot be stored in a container made of ordinary matter because antimatter reacts with any matter it touches, annihilating itself and an equal amount of the container. Antimatter in the form of charged particles can be contained by a combination of electric and magnetic fields, in a device called a Penning trap. This device cannot, however, contain antimatter that consists of uncharged particles, for which atomic traps are used. In particular, such a trap may use the dipole moment (electric or magnetic) of the trapped particles. At high vacuum, the matter or antimatter particles can be trapped and cooled with slightly off-resonant laser radiation using a magneto-optical trap or magnetic trap. Small particles can also be suspended with optical tweezers, using a highly focused laser beam.[citation needed]

In 2011, CERN scientists were able to preserve antihydrogen for approximately 17 minutes.[46]

Cost

Scientists claim that antimatter is the costliest material to make.[47] In 2006, Gerald Smith estimated that $250 million could produce 10 milligrams of positrons[48] (equivalent to $25 billion per gram); in 1999, NASA gave a figure of $62.5 trillion per gram of antihydrogen.[47] This is because production is difficult (only very few antiprotons are produced in reactions in particle accelerators) and because there is higher demand for other uses of particle accelerators. According to CERN, it has cost a few hundred million Swiss francs to produce about 1 billionth of a gram (the amount used so far for particle/antiparticle collisions).[49] By way of comparison, the cost of the Manhattan Project to produce the first atomic weapon was estimated at $23 billion at 2007 prices.[50]
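
The per-gram positron figure follows directly from division; a small Python check using only the numbers quoted above:

    # Smith's 2006 estimate: $250 million for 10 milligrams of positrons.
    cost_usd = 250e6
    mass_g   = 10e-3  # 10 mg in grams

    print(f"${cost_usd / mass_g:.2e} per gram")  # $2.50e+10, i.e. $25 billion per gram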

Several studies funded by the NASA Institute for Advanced Concepts are exploring whether it might be possible to use magnetic scoops to collect the antimatter that occurs naturally in the Van Allen belt of the Earth, and ultimately, the belts of gas giants, like Jupiter, hopefully at a lower cost per gram.[51]

Uses

Medical

Matter-antimatter reactions have practical applications in medical imaging, such as positron emission tomography (PET). In positive beta decay, a nuclide loses surplus positive charge by emitting a positron (in the same event, a proton becomes a neutron, and a neutrino is also emitted). Nuclides with surplus positive charge are easily made in a cyclotron and are widely generated for medical use.
Antiprotons have also been shown in laboratory experiments to have the potential to treat certain cancers, in a method similar to that currently used for ion (proton) therapy.[52]

Fuel

Isolated and stored antimatter could be used as a fuel for interplanetary or interstellar travel[53] as part of antimatter-catalyzed nuclear pulse propulsion or other antimatter rocketry, such as the redshift rocket. Since the energy density of antimatter is higher than that of conventional fuels, an antimatter-fueled spacecraft would have a higher thrust-to-weight ratio than a conventional spacecraft.

If matter–antimatter collisions resulted only in photon emission, the entire rest mass of the particles would be converted to kinetic energy. The energy per unit mass (9×10¹⁶ J/kg) is about 10 orders of magnitude greater than chemical energies,[54] about 3 orders of magnitude greater than the nuclear potential energy that can be liberated today using nuclear fission (about 200 MeV per fission reaction,[55] or 8×10¹³ J/kg), and about 2 orders of magnitude greater than the best possible results expected from fusion (about 6.3×10¹⁴ J/kg for the proton–proton chain). The reaction of 1 kg of antimatter with 1 kg of matter would produce 1.8×10¹⁷ J (180 petajoules) of energy (by the mass–energy equivalence formula, E = mc²), or the rough equivalent of 43 megatons of TNT – slightly less than the yield of the 27,000 kg Tsar Bomba, the largest thermonuclear weapon ever detonated.
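
These figures can be verified with a short Python sketch (illustrative only; it uses the standard value of c and the conventional definition of a megaton of TNT as 4.184×10¹⁵ J):

    # Energy released when 1 kg of antimatter annihilates with 1 kg of matter.
    c       = 2.99792458e8  # speed of light, m/s
    mass    = 2.0           # total mass converted, kg (1 kg matter + 1 kg antimatter)
    megaton = 4.184e15      # joules per megaton of TNT

    energy = mass * c**2    # E = m c^2
    print(f"{energy:.2e} J")                 # ~1.80e17 J (180 petajoules)
    print(f"{energy / megaton:.0f} Mt TNT")  # ~43 megatons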

Not all of that energy can be utilized by any realistic propulsion technology because of the nature of the annihilation products. While electron-positron reactions result in gamma ray photons, these are difficult to direct and use for thrust. In reactions between protons and antiprotons, their energy is converted largely into relativistic neutral and charged pions. The neutral pions decay almost immediately (with a half-life of 84 attoseconds) into high-energy photons, but the charged pions decay more slowly (with a half-life of 26 nanoseconds) and can be deflected magnetically to produce thrust.

Note that charged pions ultimately decay into a combination of neutrinos (carrying about 22% of the energy of the charged pions) and unstable charged muons (carrying about 78% of the charged pion energy), with the muons then decaying into a combination of electrons, positrons and neutrinos (cf. muon decay; the neutrinos from this decay carry about 2/3 of the energy of the muons, meaning that from the original charged pions, the total fraction of their energy converted to neutrinos by one route or another would be about 0.22 + (2/3)×0.78 = 0.74).[56]
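
The 0.74 fraction quoted above is straightforward arithmetic; a minimal Python check:

    # Fraction of the original charged-pion energy ending up in neutrinos.
    direct   = 0.22          # lost to neutrinos in the pion decay itself
    via_muon = 0.78 * 2 / 3  # muons carry 78%, then lose ~2/3 of that to neutrinos

    print(round(direct + via_muon, 2))  # 0.74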

Weapons

Antimatter has been considered as a trigger mechanism for nuclear weapons.[57] A major obstacle is the difficulty of producing antimatter in large enough quantities, and there is no evidence that it will ever be feasible.[58] However, the U.S. Air Force funded studies of the physics of antimatter in the Cold War, and began considering its possible use in weapons, not just as a trigger, but as the explosive itself.[59]
