Monday, July 16, 2018

Gene delivery

From Wikipedia, the free encyclopedia
 
Gene delivery is the process of introducing foreign genetic material, such as DNA or RNA, into host cells. Genetic material must reach the nucleus of the host cell to induce gene expression. Successful gene delivery requires the foreign genetic material to remain stable within the host cell, either integrating into the genome or replicating independently of it. This requires the foreign DNA to be synthesized as part of a vector, which is designed to enter the desired host cell and deliver the transgene to that cell's genome. Vectors used for gene delivery fall into two categories: recombinant viruses (viral vectors) and synthetic (non-viral) vectors.

In complex multicellular eukaryotes (more specifically Weissmanists), if the transgene is incorporated into the host's germline cells, the resulting host cell can pass the transgene to its progeny. If the transgene is incorporated into somatic cells, the transgene will stay with the somatic cell line, and thus its host organism.[6]

Gene delivery is a necessary step in gene therapy for the introduction or silencing of a gene to promote a therapeutic outcome in patients, and it also has applications in the genetic modification of crops. There are many different methods of gene delivery for various types of cells and tissues.[7]

History

Viral-based vectors emerged in the 1980s as a tool for transgene expression. In 1983, Siegel described the use of viral vectors in plant transgene expression, although viral manipulation via cDNA cloning was not yet available.[8] The first virus to be used as a vaccine vector was the vaccinia virus in 1984, as a way to protect chimpanzees against hepatitis B.[9] Non-viral gene delivery was first reported in 1943 by Avery et al., who showed cellular phenotype change via exogenous DNA exposure.[10]

Methods

Electroporator with square wave and exponential decay waveforms for in vitro, in vivo, adherent cell and 96 well electroporation applications. Manufactured by BTX Harvard Apparatus, Holliston MA USA.

Non-viral Delivery

Non-viral gene delivery encompasses chemical and physical delivery methods.[11] Non-viral methods are less likely to induce an immune response than viral vectors, are more cost-efficient, and can deliver larger pieces of genetic material. A drawback of non-viral gene delivery is its low efficiency.[11]

Chemical

Non-viral chemical methods of gene delivery use natural or synthetic compounds to form particles that facilitate the transfer of genes into cells.[12] These synthetic vectors can electrostatically bind DNA or RNA and compact the genetic information to accommodate larger genetic transfers.[13] Non-viral chemical vectors enter cells by endocytosis and can protect genetic material from degradation.[11]

Two common non-viral vectors are liposomes and polymers. Liposome-based non-viral vectors use liposomes to facilitate gene delivery through the formation of lipoplexes, which form spontaneously when positively charged liposomes complex with negatively charged DNA.[12] Polymer-based non-viral vectors use polymers that interact with DNA to form polyplexes.[11]

The use of engineered organic nanoparticles is another non-viral approach for gene delivery.[14]

Physical

Artificial non-viral gene delivery can also be mediated by physical methods, which use force to introduce genetic material through the cell membrane.[12]

Physical methods of gene delivery include:[12]
  • Ballistic DNA injection - Gold-coated DNA particles are forced into cells
  • Electroporation - Electric pulses create pores in a cell membrane to allow entry of genetic material
  • Sonoporation - Sound waves create pores in a cell membrane to allow entry of genetic material
  • Photoporation - Laser pulses create pores in a cell membrane to allow entry of genetic material
  • Magnetofection - Magnetic particles complexed with DNA, together with an external magnetic field, concentrate nucleic acid particles into target cells
  • Hydroporation - A hydrodynamic capillary effect manipulates cell permeability

Viral Delivery

Foreign DNA being transduced into the host cell through an adenovirus vector.

Virus-mediated gene delivery utilizes the ability of a virus to inject its DNA inside a host cell, taking advantage of the virus's own capacity to replicate and express its genetic material. Transduction is the process through which viral DNA is injected into the host cell and inserted into its genome [citation needed]. Viruses are a particularly effective form of gene delivery because their structure prevents degradation, via lysosomes, of the DNA they deliver to the nucleus of the host cell.[15] In gene therapy, a gene that is intended for delivery is packaged into a replication-deficient viral particle to form a viral vector.[16] Viruses used for gene therapy to date include retrovirus, adenovirus, adeno-associated virus and herpes simplex virus. However, there are drawbacks to using viruses to deliver genes into cells: viruses can deliver only very small pieces of DNA, the process is labor-intensive, and there are risks of random insertion sites, cytopathic effects and mutagenesis.[17]

Viral vector-based gene delivery uses a viral vector to deliver genetic material to the host cell. This is done by using a virus that contains the desired gene and removing the part of the virus's genome that is infectious.[18] Viruses are efficient at delivering genetic material to the host cell's nucleus, which is vital for replication.[15]

RNA-based viral vectors

RNA-based viral vectors were developed because of the ability to transcribe directly from infectious RNA transcripts. RNA vectors are expressed quickly and in the targeted form, since no processing is required. Gene integration leads to long-term transgene expression, but RNA-based delivery is usually transient and not permanent.[2] Some retroviral vectors include oncoretroviral vectors, lentiviral vectors and human foamy virus.[2]

DNA-based viral vectors

DNA-based viral vectors are usually longer lasting, with the possibility of integrating into the genome. Some DNA-based viral vectors include adenoviruses, adeno-associated virus and herpes simplex virus.[2]

Applications

Gene Therapy

Several of the methods used to facilitate gene delivery have therapeutic applications. Gene therapy utilizes gene delivery to deliver genetic material with the goal of treating a disease or condition. Gene delivery in therapeutic settings uses non-immunogenic vectors capable of cell specificity that can deliver an adequate level of transgene expression to cause the desired effect.[19]

Advances in genomics have enabled a variety of new methods and gene targets to be identified for possible applications. DNA microarrays, used alongside next-generation sequencing, can profile thousands of genes simultaneously, with analytical software examining gene expression patterns and orthologous genes in model species to identify function.[20] This has allowed a variety of possible vectors to be identified for use in gene therapy. As a method for creating a new class of vaccine, gene delivery has been utilized to generate a hybrid biosynthetic vector for vaccine delivery. This vector overcomes traditional barriers to gene delivery by combining E. coli with a synthetic polymer, creating a vector that maintains plasmid DNA while having an increased ability to avoid degradation by target cell lysosomes.

Molecular engineering

From Wikipedia, the free encyclopedia
 
Molecular engineering is an emerging field of study concerned with the design and testing of molecular properties, behavior and interactions in order to assemble better materials, systems, and processes for specific functions. This approach, in which observable properties of a macroscopic system are influenced by direct alteration of a molecular structure, falls into the broader category of “bottom-up” design.
 
Molecular engineering deals with material development efforts in emerging technologies that require rigorous rational molecular design approaches towards systems of high complexity.

Molecular engineering is highly interdisciplinary by nature, encompassing aspects of chemical engineering, materials science, bioengineering, electrical engineering, physics, mechanical engineering, and chemistry. There is also considerable overlap with nanotechnology, in that both are concerned with the behavior of materials on the scale of nanometers or smaller. Given the highly fundamental nature of molecular interactions, there are a plethora of potential application areas, limited perhaps only by one’s imagination and the laws of physics. However, some of the early successes of molecular engineering have come in the fields of immunotherapy, synthetic biology, and printable electronics.

Molecular engineering is a dynamic and evolving field with complex target problems; breakthroughs require sophisticated and creative engineers who are conversant across disciplines. A rational engineering methodology that is based on molecular principles is in contrast to the widespread trial-and-error approaches common throughout engineering disciplines. Rather than relying on well-described but poorly-understood empirical correlations between the makeup of a system and its properties, a molecular design approach seeks to manipulate system properties directly using an understanding of their chemical and physical origins. This often gives rise to fundamentally new materials and systems, which are required to address outstanding needs in numerous fields, from energy to healthcare to electronics. Additionally, with the increased sophistication of technology, trial-and-error approaches are often costly and difficult, as it may be difficult to account for all relevant dependencies among variables in a complex system. Molecular engineering efforts may include computational tools, experimental methods, or a combination of both.

History

Molecular engineering was first mentioned in the research literature in 1956 by Arthur R. von Hippel, who defined it as "… a new mode of thinking about engineering problems. Instead of taking prefabricated materials and trying to devise engineering applications consistent with their macroscopic properties, one builds materials from their atoms and molecules for the purpose at hand."[1] This concept was echoed in Richard Feynman’s seminal 1959 lecture There's Plenty of Room at the Bottom, which is widely regarded as giving birth to some of the fundamental ideas of the field of nanotechnology. In spite of the early introduction of these concepts, it was not until the mid-1980s with the publication of Engines of Creation: The Coming Era of Nanotechnology by Drexler that the modern concepts of nano and molecular-scale science began to grow in the public consciousness.

The discovery of electrically-conductive properties in polyacetylene by Alan J. Heeger in 1977[2] effectively opened the field of organic electronics, which has proved foundational for many molecular engineering efforts. Design and optimization of these materials has led to a number of innovations including organic light-emitting diodes and flexible solar cells.

Applications

Molecular design has been an important element of many disciplines in academia, including bioengineering, chemical engineering, electrical engineering, materials science, mechanical engineering and chemistry. However, one of the ongoing challenges is in bringing together the critical mass of manpower amongst disciplines to span the realm from design theory to materials production, and from device design to product development. Thus, while the concept of rational engineering of technology from the bottom-up is not new, it is still far from being widely translated into R&D efforts.

Molecular engineering is used in many industries. Some applications of technologies where molecular engineering plays a critical role:

Consumer Products

  • Antibiotic surfaces (e.g. incorporation of silver nanoparticles or antibacterial peptides into coatings to prevent microbial infection)[3]
  • Cosmetics (e.g. rheological modification with small molecules and surfactants in shampoo)
  • Cleaning products (e.g. nanosilver in laundry detergent)
  • Consumer electronics (organic light-emitting diode displays (OLED))
  • Electrochromic windows (e.g. windows in the Boeing 787 Dreamliner)
  • Zero emission vehicles (e.g. advanced fuel cells/batteries)
  • Self-cleaning surfaces (e.g. super hydrophobic surface coatings)

Energy Harvesting and Storage

Environmental Engineering

  • Water desalination (e.g. new membranes for highly-efficient low-cost ion removal[12])
  • Soil remediation (e.g. catalytic nanoparticles that accelerate the degradation of long-lived soil contaminants such as chlorinated organic compounds[13])
  • Carbon sequestration (e.g. new materials for CO2 adsorption[14])

Immunotherapy

  • Peptide-based vaccines (e.g. amphiphilic peptide macromolecular assemblies induce a robust immune response)[15]

Synthetic Biology

  • CRISPR - Faster and more efficient gene editing technique
  • Gene delivery/gene therapy - Designing molecules to deliver modified or new genes into cells of live organisms to cure genetic disorders
  • Metabolic engineering - Modifying metabolism of organisms to optimize production of chemicals (e.g. synthetic genomics)
  • Protein engineering - Altering structure of existing proteins to enable specific new functions, or the creation of fully artificial proteins

Techniques and instruments used

Molecular engineers utilize sophisticated tools and instruments to make and analyze the interactions of molecules and the surfaces of materials at the molecular and nanoscale. The complexity of molecules being introduced at the surface is increasing, and the techniques used to analyze surface characteristics at the molecular level are ever-changing and improving. Meanwhile, advances in high-performance computing have greatly expanded the use of computer simulation in the study of molecular-scale systems.

Computational and Theoretical Approaches

An EMSL scientist using the environmental transmission electron microscope at Pacific Northwest National Laboratory. The ETEM provides in situ capabilities that enable atomic-resolution imaging and spectroscopic studies of materials under dynamic operating conditions. In contrast to traditional operation of TEM under high vacuum, EMSL’s ETEM uniquely allows imaging within high-temperature and gas environments.

Microscopy

Molecular Characterization

Spectroscopy

Surface Science

Synthetic Methods

Other Tools

Research / Education

At least three universities offer graduate degrees dedicated to molecular engineering: the University of Chicago,[16] the University of Washington,[17] and Kyoto University.[18] These programs are interdisciplinary institutes with faculty from several research areas.

The academic journal Molecular Systems Design & Engineering[19] publishes research from a wide variety of subject areas that demonstrates "a molecular design or optimisation strategy targeting specific systems functionality and performance."

Molecular modelling

From Wikipedia, the free encyclopedia
 
The backbone dihedral angles are included in the molecular model of a protein.
 
Modeling of ionic liquid

Molecular modelling encompasses all methods, theoretical and computational, used to model or mimic the behaviour of molecules. The methods are used in the fields of computational chemistry, drug design, computational biology and materials science to study molecular systems ranging from small chemical systems to large biological molecules and material assemblies. The simplest calculations can be performed by hand, but inevitably computers are required to perform molecular modelling of any reasonably sized system. The common feature of molecular modelling methods is the atomistic level description of the molecular systems. This may include treating atoms as the smallest individual unit (a molecular mechanics approach), or explicitly modelling electrons of each atom (a quantum chemistry approach).

Molecular mechanics

Molecular mechanics is one aspect of molecular modelling, as it involves the use of classical mechanics (Newtonian mechanics) to describe the physical basis behind the models. Molecular models typically describe atoms (nucleus and electrons collectively) as point charges with an associated mass. The interactions between neighbouring atoms are described by spring-like interactions (representing chemical bonds) and Van der Waals forces. The Lennard-Jones potential is commonly used to describe the latter. The electrostatic interactions are computed based on Coulomb's law. Atoms are assigned coordinates in Cartesian space or in internal coordinates, and can also be assigned velocities in dynamical simulations. The atomic velocities are related to the temperature of the system, a macroscopic quantity. The collective mathematical expression is termed a potential function and is related to the system internal energy (U), a thermodynamic quantity equal to the sum of potential and kinetic energies. Methods which minimize the potential energy are termed energy minimization methods (e.g., steepest descent and conjugate gradient), while methods that model the behaviour of the system with propagation of time are termed molecular dynamics.
E = E_\text{bonds} + E_\text{angle} + E_\text{dihedral} + E_\text{non-bonded}
E_\text{non-bonded} = E_\text{electrostatic} + E_\text{van der Waals}
This function, referred to as a potential function, computes the molecular potential energy as a sum of energy terms that describe the deviation of bond lengths, bond angles and torsion angles away from equilibrium values, plus terms for non-bonded pairs of atoms describing van der Waals and electrostatic interactions. The set of parameters consisting of equilibrium bond lengths, bond angles, partial charge values, force constants and van der Waals parameters is collectively termed a force field. Different implementations of molecular mechanics use different mathematical expressions and different parameters for the potential function. The common force fields in use today have been developed by using high-level quantum calculations and/or fitting to experimental data. The method termed energy minimization is used to find positions of zero gradient for all atoms, in other words, a local energy minimum. Lower-energy states are more stable and are commonly investigated because of their role in chemical and biological processes.

A molecular dynamics simulation, on the other hand, computes the behaviour of a system as a function of time. It involves solving Newton's laws of motion, principally the second law, F = ma. Integration of Newton's laws of motion, using different integration algorithms, leads to atomic trajectories in space and time. The force on an atom is defined as the negative gradient of the potential energy function. The energy minimization method is useful for obtaining a static picture for comparing between states of similar systems, while molecular dynamics provides information about dynamic processes with the intrinsic inclusion of temperature effects.
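
To make these ideas concrete, here is a minimal Python sketch (using only NumPy) of the two ingredients described above: a toy potential function with a single Lennard-Jones pair term, and a velocity Verlet integrator that propagates two atoms through time. All parameter values are illustrative assumptions, not taken from any published force field.

    import numpy as np

    EPS, SIG = 0.25, 1.0  # illustrative Lennard-Jones well depth and radius

    def lj_energy_force(r_vec):
        """Energy of one Lennard-Jones pair and the force on atom 0."""
        r = np.linalg.norm(r_vec)
        sr6 = (SIG / r) ** 6
        energy = 4 * EPS * (sr6 ** 2 - sr6)
        # Force is the negative gradient of the potential: F = -dU/dr
        f_mag = 24 * EPS * (2 * sr6 ** 2 - sr6) / r
        return energy, f_mag * r_vec / r

    def velocity_verlet(pos, vel, mass, dt, n_steps):
        """Propagate two atoms with the velocity Verlet algorithm."""
        _, f = lj_energy_force(pos[0] - pos[1])
        forces = np.array([f, -f])           # Newton's third law
        for _ in range(n_steps):
            vel += 0.5 * dt * forces / mass  # half-step velocity update
            pos += dt * vel                  # full-step position update
            _, f = lj_energy_force(pos[0] - pos[1])
            forces = np.array([f, -f])
            vel += 0.5 * dt * forces / mass  # second half-step
        return pos, vel

    # Two atoms released slightly outside the potential minimum
    pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
    vel = np.zeros((2, 3))
    pos, vel = velocity_verlet(pos, vel, mass=1.0, dt=0.005, n_steps=1000)

Replacing the time propagation with repeated small steps along the force direction (pos += step * forces) would turn the same energy function into a crude steepest-descent energy minimization.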

Variables

Molecules can be modelled either in vacuum, or in the presence of a solvent such as water. Simulations of systems in vacuum are referred to as gas-phase simulations, while those that include the presence of solvent molecules are referred to as explicit solvent simulations. In another type of simulation, the effect of solvent is estimated using an empirical mathematical expression; these are termed implicit solvation simulations.

Coordinate representations

Most force fields are distance-dependent, which makes Cartesian coordinates the most convenient representation for evaluating them. Yet the comparatively rigid nature of the bonds between specific atoms, which in essence defines what is meant by the designation molecule, makes an internal coordinate system the most logical representation. In some fields the IC representation (bond length, angle between bonds, and twist angle of the bond, as shown in the figure) is termed the Z-matrix or torsion angle representation. Unfortunately, continuous motions in Cartesian space often require discontinuous angular branches in internal coordinates, making it relatively hard to work with force fields in the internal coordinate representation; conversely, a simple displacement of an atom in Cartesian space may not be a straight-line trajectory due to the prohibitions of the interconnected bonds. Thus, it is very common for computational optimization programs to flip back and forth between representations during their iterations. This can dominate the calculation time of the potential itself and, in long-chain molecules, introduce cumulative numerical inaccuracy. While all conversion algorithms produce mathematically identical results, they differ in speed and numerical accuracy.[1] Currently, the fastest and most accurate torsion-to-Cartesian conversion is the Natural Extension Reference Frame (NERF) method.[1]
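
The core step of any torsion-to-Cartesian conversion is placing each new atom from three already-placed atoms plus its bond length, bond angle and torsion angle. The Python sketch below shows this single placement step in the NERF style; the function name and the example geometry are illustrative assumptions, not code from a published implementation.

    import numpy as np

    def place_atom(a, b, c, bond, angle, torsion):
        """Place atom d from atoms a, b, c given the c-d bond length,
        the b-c-d bond angle and the a-b-c-d torsion (radians)."""
        # Build an orthonormal local frame at atom c
        bc = (c - b) / np.linalg.norm(c - b)
        n = np.cross(b - a, bc)
        n /= np.linalg.norm(n)
        m = np.cross(n, bc)
        # Displacement of d expressed in the local frame
        d_local = bond * np.array([
            -np.cos(angle),
            np.sin(angle) * np.cos(torsion),
            np.sin(angle) * np.sin(torsion),
        ])
        # Rotate into the lab frame and translate to c
        return c + d_local[0] * bc + d_local[1] * m + d_local[2] * n

    # Illustrative use: place the fourth atom of a butane-like chain
    a = np.array([0.0, 0.0, 0.0])
    b = np.array([1.5, 0.0, 0.0])
    c = np.array([2.0, 1.4, 0.0])
    d = place_atom(a, b, c, bond=1.53,
                   angle=np.deg2rad(111), torsion=np.deg2rad(180))

Applying this step atom by atom down a chain converts a whole Z-matrix to Cartesian coordinates; the published NERF method additionally organizes these placements so they can be computed efficiently.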

Applications

Molecular modelling methods are now used routinely to investigate the structure, dynamics, surface properties, and thermodynamics of inorganic, biological, and polymeric systems. The types of biological activity that have been investigated using molecular modelling include protein folding, enzyme catalysis, protein stability, conformational changes associated with biomolecular function, and molecular recognition of proteins, DNA, and membrane complexes.

Big Data Analysis Identifies New Cancer Risk Genes


Summary: A newly developed statistical method has allowed researchers to identify 13 cancer predisposition risk genes, 10 of which, the scientists say, are new discoveries.

Source: Center for Genomic Regulation.

There are many genetic causes of cancer: while some mutations are inherited from your parents, others are acquired throughout your life due to external factors or mistakes in copying DNA. Large-scale genome sequencing has revolutionised the identification of cancers driven by the latter group of mutations – somatic mutations – but it has not been as effective in identifying the inherited genetic variants that predispose to cancer. The main source for identifying these inherited mutations is still family studies.

Now, three researchers at the Centre for Genomic Regulation (CRG) in Barcelona, led by the ICREA Research Professor Ben Lehner, have developed a new statistical method to identify cancer predisposition genes from tumour sequencing data. “Our computational method uses an old idea that cancer genes often require ‘two hits’ before they cause cancer. We developed a method that allows us to systematically identify these genes from existing cancer genome datasets” explains Solip Park, first author of the study and Juan de la Cierva postdoctoral researcher at the CRG.

The method allows researchers to find risk variants without a control sample, meaning that they do not need to compare cancer patients to groups of healthy people. “Now we have a powerful tool to detect new cancer predisposition genes and, consequently, to contribute to improving cancer diagnosis and prevention in the future,” adds Park.

The work, which is published in Nature Communications, presents their statistical method ALFRED and identifies 13 candidate cancer predisposition genes, of which 10 are new. “We applied our method to the genome sequences of more than 10,000 cancer patients with 30 different tumour types and identified known and new possible cancer predisposition genes that have the potential to contribute substantially to cancer risk,” says Ben Lehner, principal investigator of the study.

Three researchers at the Centre for Genomic Regulation (CRG) identified new cancer risk genes using only publicly available data. Data sharing is key for genomic research to become more open, responsible and efficient. NeuroscienceNews.com image is credited to Jonathan Bailey, NHGRI.

“Our results show that the new cancer predisposition genes may have an important role in many types of cancer. For example, they were associated with 14% of ovarian tumours, 7% of breast tumours and about 1 in 50 of all cancers. For instance, inherited variants in one of the newly-proposed risk genes – NSD1 – may be implicated in at least 3 out of 1,000 cancer patients,” explains Fran Supek, CRG alumnus and currently group leader of the Genome Data Science laboratory at the Institute for Research in Biomedicine (IRB Barcelona).

When sharing is key to advance knowledge

The researchers worked with genome data from several cancer studies from around the world, including The Cancer Genome Atlas (TCGA) project and also from several projects having nothing to do with cancer research. “We managed to develop and test a new method that hopefully will improve our understanding of cancer genomics and will contribute to cancer research, diagnostics and prevention just by using public data,” states Solip Park.

Ben Lehner adds, “Our work highlights how important it is to share genomic data. It is a success story for how being open is far more efficient and has a multiplier effect. We combined data from many different projects and by applying a new computational method were able to identify important cancer genes that were not identified by the original studies. Many patient groups lobby for better sharing of genomic data because it is only by comparing data across hospitals, countries and diseases that we can obtain a deep understanding of many rare and common diseases. Unfortunately, many researchers still do not share their data and this is something we need to actively change as a society”.
 
About this neuroscience research article

Funding: European Research Council, AXA Research Fund, Spanish Ministry of Economy and Competitiveness, Centro de Excelencia Severo Ochoa, and the Agència de Gestió d’Ajuts Universitaris i de Recerca funded this study.

Source: Laia Cendros – Center for Genomic Regulation
 
Publisher: Organized by NeuroscienceNews.com.
 
Image Source: NeuroscienceNews.com image is credited to Jonathan Bailey, NHGRI.
 
Original Research: Open access research for “Systematic discovery of germline cancer predisposition genes through the identification of somatic second hits” by Solip Park, Fran Supek & Ben Lehner in Nature Communications. Published July 4 2018.
 
doi:10.1038/s41467-018-04900-7

Sci-Fi Fans Enthusiastic For Digitizing the Brain


Summary: Researchers report science fiction fans are positive about the potential to upload consciousness, neurotech and digitizing the brain.

Source: University of Helsinki.

“Mind upload is a technology rife with unsolved philosophical questions,” says researcher Michael Laakasuo.

“For example, is the potential for conscious experiences transmitted when the brain is copied? Does the digital brain have the ability to feel pain, and is switching off the emulated brain comparable to homicide? And what might potentially everlasting life be like on a digital platform?”

A positive attitude from science fiction enthusiasts

Such questions may sound like science fiction, but the first breakthroughs in digitising the brain have already been made: for example, the nervous system of the roundworm (C. elegans) has been successfully modelled within a Lego robot capable of independently moving and avoiding obstacles. Recently, a functional digital copy of a piece of the somatosensory cortex of the rat brain was also successfully created.

Scientific discoveries in the field of brain digitisation and related questions are given consideration in both science fiction and scientific journals in philosophy. Moralities of Intelligent Machines, a research group working at the University of Helsinki, is also investigating the subject from the perspective of moral psychology, in other words, mapping out the tendency of ordinary people to either approve of or condemn the use of such technology.

“In the first sub-project, where data was collected in the United States, it was found that men are more approving of the technology than women. But standardising for interest in science fiction evened out such differences,” explains Laakasuo.

According to Laakasuo, a stronger exposure to science fiction correlated with a more positive outlook on the mind upload technology overall. The study also found that traditional religiousness is linked with negative reactions towards the technology.

Disapproval from those disgust sensitive to sexual matters

Another sub-study, where data was collected in Finland, indicated that people disapproved in general of uploading a human consciousness regardless of the target, be it a chimpanzee, a computer or an android.

In a third project, the researchers observed a positive outlook on and approval of the technology in those troubled by death and disapproving of suicide. In this sub-project, the researchers also found a strong connection between sexual disgust sensitivity and disapproval of the mind upload technology. People with this type of disgust sensitivity find, for example, the viewing of pornographic videos and the lovemaking noises of neighbours disgusting. The link between sexual disgust sensitivity and disapproval of the mind upload technology is surprising, given that, on the face of it, the technology has no relevant association with procreation and mate choice.

Image: a digital brain. NeuroscienceNews.com image is in the public domain.

“However, the inability to biologically procreate with a person who has digitised his or her brain may make the findings seem reasonable. In other words, technology is posing a fundamental challenge to our understanding of human nature,” reasons Laakasuo.

Digital copies of the human brain can reproduce much like an amoeba, by division, which makes sexuality, one of the founding pillars of humanity, obsolete. Against this background, the link between sexual disgust and the condemnation of using the technology in question seems rational.

Funding for research on machine intelligence and robotics

The research projects above were funded by the Jane and Aatos Erkko Foundation, in addition to which the Moralities of Intelligent Machines project has received €100,000 from the Weisell Foundation for a year of follow-up research. According to Mikko Voipio, the foundation chair, humanism has great significance for research focused on machine intelligence and robotics.

“The bold advances in artificial intelligence as well as its increasing prevalence in various aspects of life are raising concern about the ethical and humanistic side of technological applications. Are the ethics of the relevant field of application also taken into consideration when developing and training such systems? The Moralities of Intelligent Machines research group is concentrating on this often forgotten factor of applying technology. The board of the Weisell Foundation considers this type of research important right now when artificial intelligence seems to have become a household phrase among politicians. It’s good that the other side of the coin also receives attention.”

According to Michael Laakasuo, funding prospects for research on the moral psychology of robotics and artificial intelligence are currently somewhat hazy, but the Moralities of Intelligent Machines group is grateful to both its funders and Finnish society for their continuous interest and encouragement.
 
About this neuroscience research article

Source: Michael Laakasuo – University of Helsinki
 
Publisher: Organized by NeuroscienceNews.com.
 
Image Source: NeuroscienceNews.com image is in the public domain.
 
Original Research: Open access research for “What makes people approve or condemn mind upload technology? Untangling the effects of sexual disgust, purity and science fiction familiarity” by Michael Laakasuo, Marianna Drosinou, Mika Koverola, Anton Kunnari, Juho Halonen, Noora Lehtonen & Jussi Palomäki in Palgrave Communications. Published July 10 2018.
 
doi:10.1057/s41599-018-0124-6

Nucleophilic substitution

SN2 reaction mechanism
From Wikipedia, the free encyclopedia
In organic and inorganic chemistry, nucleophilic substitution is a fundamental class of reactions in which an electron rich nucleophile selectively bonds with or attacks the positive or partially positive charge of an atom or a group of atoms to replace a leaving group; the positive or partially positive atom is referred to as an electrophile. The whole molecular entity of which the electrophile and the leaving group are part is usually called the substrate.

The most general form of the reaction may be given as the following:
Nuc: + R-LG → R-Nuc + LG:
The electron pair (:) from the nucleophile (Nuc) attacks the substrate (R-LG), forming a new bond, while the leaving group (LG) departs with an electron pair. The principal product in this case is R-Nuc. The nucleophile may be electrically neutral or negatively charged, whereas the substrate is typically neutral or positively charged.

An example of nucleophilic substitution is the hydrolysis of an alkyl bromide, R-Br, under basic conditions, where the attacking nucleophile is OH− and the leaving group is Br−.
R-Br + OH− → R-OH + Br−
Nucleophilic substitution reactions are commonplace in organic chemistry, and they can be broadly categorised as taking place at a saturated aliphatic carbon or at (less often) an aromatic or other unsaturated carbon centre.[3]

Saturated carbon centres

SN1 and SN2 reactions

A graph showing the relative reactivities of the different alkyl halides towards SN1 and SN2 reactions (also see Table 1).

In 1935, Edward D. Hughes and Sir Christopher Ingold studied nucleophilic substitution reactions of alkyl halides and related compounds. They proposed that there were two main mechanisms at work, both of them competing with each other. The two main mechanisms are the SN1 reaction and the SN2 reaction. S stands for chemical substitution, N stands for nucleophilic, and the number represents the kinetic order of the reaction.[4]

In the SN2 reaction, the addition of the nucleophile and the elimination of the leaving group take place simultaneously (i.e. a concerted reaction). SN2 occurs where the central carbon atom is easily accessible to the nucleophile.[5]
 
Nucleophilic substitution at carbon: the SN2 mechanism, illustrated by the SN2 reaction of CH3Cl and Cl−.

In SN2 reactions, there are a few conditions that affect the rate of the reaction. First of all, the 2 in SN2 indicates that two concentrations affect the rate of reaction: substrate and nucleophile. The rate equation for this reaction is Rate = k[Sub][Nuc]. For an SN2 reaction, an aprotic solvent is best, such as acetone, DMF, or DMSO. Aprotic solvents do not release protons (H+) into solution; if protons were present in an SN2 reaction, they would react with the nucleophile and severely limit the reaction rate. Since this reaction occurs in one step, steric effects drive the reaction speed. In the transition state, the nucleophile is 180 degrees from the leaving group, and the stereochemistry is inverted as the nucleophile bonds to make the product. Also, because the transition state is partially bonded to both the nucleophile and the leaving group, there is no time for the substrate to rearrange itself: the nucleophile will bond to the same carbon that the leaving group was attached to. A final factor that affects reaction rate is nucleophilicity; the nucleophile must attack an atom other than a hydrogen.
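
As a small worked illustration of this second-order rate law, the Python snippet below evaluates Rate = k[Sub][Nuc] and confirms that doubling either concentration doubles the rate. The rate constant and concentrations are made-up values for the example, not measured data.

    # Second-order SN2 rate law: rate = k [substrate] [nucleophile]
    k = 0.05                 # illustrative rate constant, L mol^-1 s^-1
    sub, nuc = 0.10, 0.20    # illustrative concentrations, mol/L

    rate = k * sub * nuc
    print(rate)                         # 0.001 mol L^-1 s^-1
    print(k * (2 * sub) * nuc / rate)   # 2.0 - doubling [Sub] doubles the rate
    print(k * sub * (2 * nuc) / rate)   # 2.0 - doubling [Nuc] doubles the rate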

By contrast the SN1 reaction involves two steps. SN1 reactions tend to be important when the central carbon atom of the substrate is surrounded by bulky groups, both because such groups interfere sterically with the SN2 reaction (discussed above) and because a highly substituted carbon forms a stable carbocation.

Nucleophilic substitution at carbon: the SN1 reaction mechanism.

As with SN2 reactions, quite a few factors affect the reaction rate of SN1 reactions. Instead of two concentrations that affect the reaction rate, there is only one: the substrate. The rate equation for this is Rate = k[Sub]. Since the rate of a reaction is determined only by its slowest step, the rate at which the leaving group "leaves" determines the speed of the reaction. This means that the better the leaving group, the faster the reaction rate. A general rule for what makes a good leaving group is that the weaker the conjugate base, the better the leaving group. In this case, halogens are the best leaving groups, while compounds such as amines, hydrogen, and alkanes are quite poor leaving groups. As SN2 reactions are affected by sterics, SN1 reactions are determined by the bulky groups attached to the carbocation. Since the intermediate actually carries a positive charge, attached bulky alkyl groups help stabilize the charge on the carbocation through hyperconjugation and distribution of charge. In this case, a tertiary carbocation will react faster than a secondary one, which in turn will react much faster than a primary one. It is also because of this carbocation intermediate that the product does not have to show inversion: the nucleophile can attack from the top or the bottom, and therefore creates a racemic product. It is important to use a protic solvent, such as water or an alcohol, since an aprotic solvent could attack the intermediate and cause unwanted product. It does not matter if the hydrogens from the protic solvent react with the nucleophile, since the nucleophile is not involved in the rate-determining step.
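
Because the SN1 rate law is first-order, the substrate concentration decays exponentially with time, [Sub](t) = [Sub]0 e^(-kt), independent of how much nucleophile is present. A short sketch with made-up numbers:

    # First-order SN1 kinetics: rate = k [substrate], independent of [Nuc]
    import math

    k = 1.2e-4    # illustrative rate constant, s^-1
    sub0 = 0.10   # illustrative initial substrate concentration, mol/L

    print(math.log(2) / k)  # half-life ~ 5776 s, set by k alone

    # Integrated rate law: [Sub](t) = [Sub]0 * exp(-k t)
    for t in (0, 3600, 7200):
        print(t, sub0 * math.exp(-k * t))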


Table 1. Nucleophilic substitutions on RX (an alkyl halide or equivalent)

Factor            | SN1                                                  | SN2                                          | Comments
Kinetics          | Rate = k[RX]                                         | Rate = k[RX][Nuc]                            |
Primary alkyl     | Never, unless additional stabilising groups present  | Good, unless a hindered nucleophile is used  |
Secondary alkyl   | Moderate                                             | Moderate                                     |
Tertiary alkyl    | Excellent                                            | Never                                        | Elimination likely if heated or if strong base used
Leaving group     | Important                                            | Important                                    | For halogens, I > Br > Cl >> F
Nucleophilicity   | Unimportant                                          | Important                                    |
Preferred solvent | Polar protic                                         | Polar aprotic                                |
Stereochemistry   | Racemisation (+ partial inversion possible)          | Inversion                                    |
Rearrangements    | Common                                               | Rare                                         | Side reaction
Eliminations      | Common, especially with basic nucleophiles           | Only with heat and basic nucleophiles        | Side reaction, esp. if heated

Reactions

Many reactions in organic chemistry involve this type of mechanism. Common examples include:
R-X → R-H using LiAlH4   (SN2)
R-Br + OH− → R-OH + Br−   (SN2) or
R-Br + H2O → R-OH + HBr   (SN1)
R-Br + OR′− → R-OR′ + Br−   (SN2)

Borderline mechanism

An example of a substitution reaction taking place by a so-called borderline mechanism, as originally studied by Hughes and Ingold,[6] is the reaction of 1-phenylethyl chloride with sodium methoxide in methanol.
1-phenylethylchloride methanolysis
The reaction rate is found to be the sum of SN1 and SN2 components, with 61% (3.5 M, 70 °C) taking place by the latter.
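
A quick sketch of what "sum of components" means numerically: the observed rate is k1[RX] + k2[RX][NaOMe], and the SN2 share is the second term's fraction of the total. The rate constants below are hypothetical, chosen only so that the SN2 channel carries roughly 61% of the flux at 3.5 M methoxide.

    # Borderline mechanism: observed rate = SN1 term + SN2 term
    k1, k2 = 1.0e-5, 4.5e-6   # hypothetical constants, s^-1 and L mol^-1 s^-1
    rx, nuc = 0.05, 3.5       # substrate and methoxide concentrations, mol/L

    r_sn1 = k1 * rx
    r_sn2 = k2 * rx * nuc
    print(r_sn2 / (r_sn1 + r_sn2))   # ~0.61, the SN2 fraction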

Other mechanisms

Besides SN1 and SN2, other mechanisms are known, although they are less common. The SNi mechanism is observed in reactions of thionyl chloride with alcohols, and it is similar to SN1 except that the nucleophile is delivered from the same side as the leaving group.

Nucleophilic substitutions can be accompanied by an allylic rearrangement as seen in reactions such as the Ferrier rearrangement. This type of mechanism is called an SN1' or SN2' reaction (depending on the kinetics). With allylic halides or sulphonates, for example, the nucleophile may attack at the γ unsaturated carbon in place of the carbon bearing the leaving group. This may be seen in the reaction of 1-chloro-2-butene with sodium hydroxide to give a mixture of 2-buten-1-ol and 1-buten-3-ol:
CH3CH=CH-CH2-Cl → CH3CH=CH-CH2-OH + CH3CH(OH)-CH=CH2
The SN1CB mechanism appears in inorganic chemistry. Competing mechanisms exist.[7][8]

In organometallic chemistry the nucleophilic abstraction reaction occurs with a nucleophilic substitution mechanism.

Unsaturated carbon centres

Nucleophilic substitution via the SN1 or SN2 mechanism does not generally occur with vinyl or aryl halides or related compounds. Under certain conditions nucleophilic substitutions may occur, via other mechanisms such as those described in the nucleophilic aromatic substitution article.

When the substitution occurs at the carbonyl group, the acyl group may undergo nucleophilic acyl substitution. This is the normal mode of substitution with carboxylic acid derivatives such as acyl chlorides, esters and amides.

Friedel–Crafts reaction

From Wikipedia, the free encyclopedia

The Friedel–Crafts reactions are a set of reactions developed by Charles Friedel and James Crafts in 1877 to attach substituents to an aromatic ring. Friedel–Crafts reactions are of two main types: alkylation reactions and acylation reactions. Both proceed by electrophilic aromatic substitution.

Friedel–Crafts alkylation

Friedel–Crafts alkylation involves the alkylation of an aromatic ring with an alkyl halide using a strong Lewis acid catalyst.[6] With anhydrous ferric chloride as a catalyst, the alkyl group attaches at the former site of the chloride ion. The general mechanism is shown below.[7]
Mechanism for the Friedel Crafts alkylation
This reaction suffers from the disadvantage that the product is more nucleophilic than the reactant. Consequently, overalkylation occurs. Furthermore, the reaction is only very useful for tertiary and some secondary alkylating agents; otherwise the incipient carbocation (R+) will undergo a carbocation rearrangement reaction.[7]

Steric hindrance can be exploited to limit the number of alkylations, as in the t-butylation of 1,4-dimethoxybenzene.[citation needed]
t-butylation of 1,4-dimethoxybenzene
Alkylations are not limited to alkyl halides: Friedel–Crafts reactions are possible with any carbocationic intermediate such as those derived from alkenes and a protic acid, Lewis acid, enones, and epoxides. An example is the synthesis of neophyl chloride from benzene and methallyl chloride:[8]
H2C=C(CH3)CH2Cl + C6H6 → C6H5C(CH3)2CH2Cl
In one study the electrophile is a bromonium ion derived from an alkene and NBS:[9]
Friedel–Crafts alkylation by an alkene
In this reaction samarium(III) triflate is believed to activate the NBS halogen donor in halonium ion formation.

Friedel–Crafts dealkylation

Friedel–Crafts alkylation has been hypothesized to be reversible. In a reversed Friedel–Crafts reaction, or Friedel–Crafts dealkylation, alkyl groups are removed in the presence of protons or other Lewis acids.

For example, in a multiple addition of ethyl bromide to benzene, ortho and para substitution is expected after the first monosubstitution step because an alkyl group is an activating group. However, the actual reaction product is 1,3,5-triethylbenzene, with all alkyl groups as meta substituents.[10] Thermodynamic reaction control ensures that the thermodynamically favored meta substitution, which minimizes steric hindrance, prevails over the less favorable ortho and para substitution through chemical equilibration. The ultimate reaction product is thus the result of a series of alkylations and dealkylations.[citation needed]
synthesis of 2,4,6-triethylbenzene

Friedel–Crafts acylation

Friedel–Crafts acylation involves the acylation of aromatic rings. Typical acylating agents are acyl chlorides, and typical catalysts are Lewis acids such as aluminium trichloride. Friedel–Crafts acylation is also possible with acid anhydrides.[11] Reaction conditions are similar to those of the Friedel–Crafts alkylation. This reaction has several advantages over the alkylation reaction. Due to the electron-withdrawing effect of the carbonyl group, the ketone product is always less reactive than the original molecule, so multiple acylations do not occur. Also, there are no carbocation rearrangements, as the acylium ion is stabilized by a resonance structure in which the positive charge is on the oxygen.
Friedel–Crafts acylation overview
The viability of the Friedel–Crafts acylation depends on the stability of the acyl chloride reagent. Formyl chloride, for example, is too unstable to be isolated. Thus, synthesis of benzaldehyde via the Friedel–Crafts pathway requires that formyl chloride be synthesized in situ. This is accomplished via the Gattermann-Koch reaction, accomplished by treating benzene with carbon monoxide and hydrogen chloride under high pressure, catalyzed by a mixture of aluminium chloride and cuprous chloride.

Reaction mechanism

The reaction proceeds via generation of an acylium center:
FC acylation step 1
The reaction is completed by deprotonation of the arenium ion by AlCl4−, regenerating the AlCl3 catalyst:
FC acylation step III
If desired, the resulting ketone can be subsequently reduced to the corresponding alkane substituent by either Wolff–Kishner reduction or Clemmensen reduction. The net result is the same as the Friedel–Crafts alkylation except that rearrangement is not possible.[12]

Friedel–Crafts hydroxyalkylation

Arenes react with certain aldehydes and ketones to form the hydroxyalkylated products, for example in the reaction of the mesityl derivative of glyoxal with benzene:[13]
Friedel–Crafts hydroxyalkylation
As usual, the aldehyde group is a more reactive electrophile than the ketone.

Friedel–Crafts sulfonylation

Under Friedel–Crafts reaction conditions, arenes react with sulfonyl halides and sulfonic acid anhydrides affording sulfones. Commonly used catalysts include AlCl3, FeCl3, GaCl3, BF3, SbCl5, BiCl3 and Bi(OTf)3, among others.[14][15] Intramolecular Friedel–Crafts cyclization occurs with 2-phenyl-1-ethanesulfonyl chloride, 3-phenyl-1-propanesulfonyl chloride and 4-phenyl-1-butanesulfonyl chloride on heating in nitrobenzene with AlCl3.[16] Sulfenyl and sulfinyl chlorides also undergo Friedel–Crafts–type reactions, affording sulfides and sulfoxides, respectively.[17] Both aryl sulfinyl chlorides and diaryl sulfoxides can be prepared from arenes through reaction with thionyl chloride in the presence of catalysts such as BiCl3, Bi(OTf)3, LiClO4 or NaClO4.[18][19]

Scope and variations

This reaction is related to several classic named reactions:
Bogert–Cook synthesis
  • The Darzens–Nenitzescu synthesis of ketones (1910, 1936) involves the acylation of cyclohexene with acetyl chloride to give methyl cyclohexenyl ketone.
  • In the related Nenitzescu reductive acylation (1936), a saturated hydrocarbon is added, making it a reductive acylation that gives methyl cyclohexyl ketone.
  • The Nencki reaction (1881) is the ring acetylation of phenols with acids in the presence of zinc chloride.[33]
  • In a green chemistry variation, aluminium chloride is replaced by graphite in an alkylation of p-xylene with 2-bromobutane. This variation will not work with primary halides, from which less carbocation involvement is inferred.[34]

Dyes

Friedel–Crafts reactions have been used in the synthesis of several triarylmethane and xanthene dyes.[35] Examples are the synthesis of thymolphthalein (a pH indicator) from two equivalents of thymol and phthalic anhydride:
Thymolphthalein Synthesis
A reaction of phthalic anhydride with resorcinol in the presence of zinc chloride gives the fluorophore fluorescein. Replacing resorcinol with N,N-diethylaminophenol in this reaction gives rhodamine B:
Rhodamine B synthesis

Haworth reactions

The Haworth reaction is a classic method for the synthesis of 1-tetralone.[36] In it, benzene is reacted with succinic anhydride; the intermediate product is reduced, and a second Friedel–Crafts acylation takes place with the addition of acid.[37]
Haworth reaction
In a related reaction, phenanthrene is synthesized from naphthalene and succinic anhydride in a series of steps.
Haworth Phenanthrene synthesis

Friedel–Crafts test for aromatic hydrocarbons

Reaction of chloroform with aromatic compounds using an aluminium chloride catalyst gives triarylmethanes, which are often brightly colored, as is the case in triarylmethane dyes. This is a bench test for aromatic compounds.
