Tuesday, July 17, 2018

Materials science

From Wikipedia, the free encyclopedia
 

The interdisciplinary field of materials science, also commonly termed materials science and engineering, is the design and discovery of new materials, particularly solids. The intellectual origins of materials science stem from the Enlightenment, when researchers began to use analytical thinking from chemistry, physics, and engineering to understand ancient, phenomenological observations in metallurgy and mineralogy. Materials science still incorporates elements of physics, chemistry, and engineering. As such, the field was long considered by academic institutions to be a sub-field of these related fields. Beginning in the 1940s, materials science came to be more widely recognized as a specific and distinct field of science and engineering, and major technical universities around the world created dedicated schools for its study, within either their science or engineering faculties, hence the name.
Materials science is a syncretic discipline hybridizing metallurgy, ceramics, solid-state physics, and chemistry. It is the first example of a new academic discipline emerging by fusion rather than fission.[3]
Many of the most pressing scientific problems humans currently face are due to the limits of the materials that are available and how they are used. Thus, breakthroughs in materials science are likely to affect the future of technology significantly.[4][5]

Materials scientists emphasize understanding how the history of a material (its processing) influences its structure, and thus the material's properties and performance. The understanding of processing-structure-properties relationships is called the materials paradigm. This paradigm is used to advance understanding in a variety of research areas, including nanotechnology, biomaterials, and metallurgy. Materials science is also an important part of forensic engineering and failure analysis: investigating materials, products, structures or components which fail or do not function as intended, causing personal injury or damage to property. Such investigations are key to understanding, for example, the causes of various aviation accidents and incidents.

History

A late Bronze Age sword or dagger blade.

The material of choice of a given era is often a defining point. Phrases such as Stone Age, Bronze Age, Iron Age, and Steel Age are historic, if arbitrary, examples. Originally deriving from the manufacture of ceramics and its putative derivative metallurgy, materials science is one of the oldest forms of engineering and applied science. Modern materials science evolved directly from metallurgy, which itself evolved from mining and (likely) ceramics and earlier from the use of fire. A major breakthrough in the understanding of materials occurred in the late 19th century, when the American scientist Josiah Willard Gibbs demonstrated that the thermodynamic properties related to atomic structure in various phases are related to the physical properties of a material. Important elements of modern materials science are a product of the space race: the understanding and engineering of the metallic alloys, and silica and carbon materials, used in building space vehicles enabling the exploration of space. Materials science has driven, and been driven by, the development of revolutionary technologies such as rubbers, plastics, semiconductors, and biomaterials.

Before the 1960s (and in some cases decades after), many materials science departments were named metallurgy departments, reflecting the 19th and early 20th century emphasis on metals. The growth of materials science in the United States was catalyzed in part by the Advanced Research Projects Agency, which funded a series of university-hosted laboratories in the early 1960s "to expand the national program of basic research and training in the materials sciences."[6] The field has since broadened to include every class of materials, including ceramics, polymers, semiconductors, magnetic materials, medical implant materials, biological materials, and nanomaterials, with modern materials classed within three distinct groups: ceramics, metals, and polymers. The prominent change in materials science during the last two decades has been the active use of computer simulation methods to find new compounds, predict various properties, and, as a result, design new materials at a much greater rate than was previously possible.

Fundamentals


The materials paradigm represented in the form of a tetrahedron.

A material is defined as a substance (most often a solid, but other condensed phases can be included) that is intended to be used for certain applications.[7] There are a myriad of materials around us—they can be found in anything from buildings to spacecraft. Materials can generally be further divided into two classes: crystalline and non-crystalline. The traditional examples of materials are metals, semiconductors, ceramics and polymers.[8] New and advanced materials that are being developed include nanomaterials, biomaterials,[9] and energy materials to name a few.

The basis of materials science involves studying the structure of materials and relating it to their properties. Once a materials scientist knows about this structure-property correlation, they can then go on to study the relative performance of a material in a given application. The major determinants of the structure of a material and thus of its properties are its constituent chemical elements and the way in which it has been processed into its final form. These characteristics, taken together and related through the laws of thermodynamics and kinetics, govern a material's microstructure, and thus its properties.

Structure

As mentioned above, structure is one of the most important components of the field of materials science. Materials science examines the structure of materials from the atomic scale, all the way up to the macro scale. Characterization is the way materials scientists examine the structure of a material. This involves methods such as diffraction with X-rays, electrons, or neutrons, and various forms of spectroscopy and chemical analysis such as Raman spectroscopy, energy-dispersive spectroscopy (EDS), chromatography, thermal analysis, electron microscope analysis, etc. Structure is studied at various levels, as detailed below.

Atomic structure

This deals with the atoms of the materials, and how they are arranged to give molecules, crystals, etc. Many of the electrical, magnetic and chemical properties of materials arise from this level of structure. The length scales involved are in angstroms. The way in which the atoms and molecules are bonded and arranged is fundamental to studying the properties and behavior of any material.

Nanostructure

Buckminsterfullerene nanostructure.

Nanostructure deals with objects and structures that are in the 1–100 nm range.[10] In many materials, atoms or molecules agglomerate to form objects at the nanoscale. This gives rise to many interesting electrical, magnetic, optical, and mechanical properties.

In describing nanostructures it is necessary to differentiate between the number of dimensions on the nanoscale. Nanotextured surfaces have one dimension on the nanoscale, i.e., only the thickness of the surface of an object is between 0.1 and 100 nm. Nanotubes have two dimensions on the nanoscale, i.e., the diameter of the tube is between 0.1 and 100 nm; its length could be much greater. Finally, spherical nanoparticles have three dimensions on the nanoscale, i.e., the particle is between 0.1 and 100 nm in each spatial dimension. The terms nanoparticles and ultrafine particles (UFP) often are used synonymously although UFP can reach into the micrometre range. The term 'nanostructure' is often used when referring to magnetic technology. Nanoscale structure in biology is often called ultrastructure.
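The dimensional convention described above lends itself to a simple sketch. The function names and the 0.1–100 nm cutoffs below follow the text; the helper itself is purely illustrative:

```python
# Classify a structure by how many of its spatial dimensions fall in the
# nanoscale range (0.1-100 nm), per the convention described in the text.
NANO_MIN_NM = 0.1
NANO_MAX_NM = 100.0

def nanoscale_dimensions(dims_nm):
    """Count how many of the given dimensions (in nm) are nanoscale."""
    return sum(NANO_MIN_NM <= d <= NANO_MAX_NM for d in dims_nm)

def classify(dims_nm):
    """Map the count of nanoscale dimensions to the terms used in the text."""
    names = {0: "bulk", 1: "nanotextured surface", 2: "nanotube", 3: "nanoparticle"}
    return names[nanoscale_dimensions(dims_nm)]

# A film 50 nm thick but macroscopic in the other two dimensions:
print(classify([50, 1e6, 1e6]))   # nanotextured surface
# A tube 10 nm in diameter and 5 micrometres long:
print(classify([10, 10, 5000]))   # nanotube
# A spherical particle 20 nm across:
print(classify([20, 20, 20]))     # nanoparticle
```
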

Materials whose atoms and molecules form constituents at the nanoscale (i.e., they form nanostructures) are called nanomaterials. Nanomaterials are the subject of intense research in the materials science community due to the unique properties that they exhibit.

Microstructure

Microstructure of pearlite.

Microstructure is defined as the structure of a prepared surface or thin foil of material as revealed by a microscope above 25× magnification. It deals with objects from 100 nm to a few cm. The microstructure of a material (which can be broadly classified into metallic, polymeric, ceramic and composite) can strongly influence physical properties such as strength, toughness, ductility, hardness, corrosion resistance, high/low temperature behavior, wear resistance, and so on. Most of the traditional materials (such as metals and ceramics) are microstructured.

The manufacture of a perfect crystal of a material is physically impossible. For example, any crystalline material will contain defects such as precipitates, grain boundaries (Hall–Petch relationship), vacancies, interstitial atoms or substitutional atoms. The microstructure of materials reveals these larger defects so that they can be studied, and advances in simulation have greatly improved understanding of how defects can be used to enhance material properties.
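The Hall–Petch relationship mentioned above relates yield strength to grain size: finer grains mean more grain-boundary area to impede dislocation motion, so sigma_y = sigma_0 + k_y / sqrt(d). A minimal sketch; the constants below are assumed, order-of-magnitude values for illustration only, not tabulated data:

```python
import math

def hall_petch(sigma0_mpa, ky_mpa_sqrt_m, grain_size_m):
    """Yield strength via Hall-Petch: sigma_y = sigma_0 + k_y / sqrt(d).
    sigma_0 (friction stress) and k_y (strengthening coefficient) are
    material constants; the values used below are illustrative only."""
    return sigma0_mpa + ky_mpa_sqrt_m / math.sqrt(grain_size_m)

sigma0 = 70.0   # MPa (assumed)
ky = 0.74       # MPa*sqrt(m) (assumed)
for d in (100e-6, 10e-6, 1e-6):  # grain sizes: 100, 10, 1 micrometres
    print(f"d = {d*1e6:5.1f} um -> sigma_y = {hall_petch(sigma0, ky, d):6.1f} MPa")
```

Note the trend: each tenfold refinement of grain size raises the predicted strength by a factor of sqrt(10) in the grain-size term.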

Macro structure

Macro structure is the appearance of a material at the scale of millimeters to meters; it is the structure of the material as seen with the naked eye.

Crystallography

Crystal structure of a perovskite with a chemical formula ABX3.[11]

Crystallography is the science that examines the arrangement of atoms in crystalline solids. Crystallography is a useful tool for materials scientists. In single crystals, the effects of the crystalline arrangement of atoms are often easy to see macroscopically, because the natural shapes of crystals reflect the atomic structure. Further, physical properties are often controlled by crystalline defects. The understanding of crystal structures is an important prerequisite for understanding crystallographic defects. Mostly, materials do not occur as a single crystal, but in polycrystalline form, i.e., as an aggregate of small crystals with different orientations. Because of this, the powder diffraction method, which uses diffraction patterns of polycrystalline samples with a large number of crystals, plays an important role in structural determination. Most materials have a crystalline structure, but some important materials do not exhibit regular crystal structure. Polymers display varying degrees of crystallinity, and many are completely noncrystalline. Glass, some ceramics, and many natural materials are amorphous, not possessing any long-range order in their atomic arrangements. The study of polymers combines elements of chemical and statistical thermodynamics to give thermodynamic and mechanical descriptions of physical properties.

Bonding

To obtain a full understanding of the material structure and how it relates to its properties, the materials scientist must study how the different atoms, ions and molecules are arranged and bonded to each other. This involves the study and use of quantum chemistry or quantum physics. Solid-state physics, solid-state chemistry and physical chemistry are also involved in the study of bonding and structure.

Synthesis and processing

Synthesis and processing involves the creation of a material with the desired micro- and nanostructure. From an engineering standpoint, a material cannot be used in industry if no economical production method for it has been developed. Thus, the processing of materials is vital to the field of materials science.

Different materials require different processing or synthesis methods. For example, the processing of metals has historically been very important and is studied under the branch of materials science named physical metallurgy. Chemical and physical methods are also used to synthesize other materials, such as polymers, ceramics, thin films, etc. As of the early 21st century, new methods are being developed to synthesize nanomaterials such as graphene.

Thermodynamics

A phase diagram for a binary system displaying a eutectic point.

Thermodynamics is concerned with heat and temperature and their relation to energy and work. It defines macroscopic variables, such as internal energy, entropy, and pressure, that partly describe a body of matter or radiation. It states that the behavior of those variables is subject to general constraints that are common to all materials, not the peculiar properties of particular materials. These general constraints are expressed in the four laws of thermodynamics. Thermodynamics describes the bulk behavior of the body, not the microscopic behaviors of the very large numbers of its microscopic constituents, such as molecules. The behavior of these microscopic particles is described by, and the laws of thermodynamics are derived from, statistical mechanics.

The study of thermodynamics is fundamental to materials science. It forms the foundation to treat general phenomena in materials science and engineering, including chemical reactions, magnetism, polarizability, and elasticity. It also helps in the understanding of phase diagrams and phase equilibrium.
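The role of thermodynamics in phase equilibrium can be illustrated with the Gibbs free energy change, dG = dH - T*dS: a transformation is favored when dG is negative, and two phases coexist where dG = 0. The numbers below are assumed, illustrative values, not data for any particular material:

```python
def gibbs_free_energy_change(dH_j_mol, dS_j_mol_k, T_k):
    """dG = dH - T*dS. Negative dG means the transformation is
    thermodynamically favored at temperature T."""
    return dH_j_mol - T_k * dS_j_mol_k

# Assumed illustrative numbers for a solidification-like transformation:
dH = -10_000.0   # J/mol (enthalpy released on transformation)
dS = -10.0       # J/(mol*K) (entropy decreases on ordering)

T_eq = dH / dS   # temperature where dG = 0, i.e. the two phases coexist
print(f"Equilibrium temperature: {T_eq:.0f} K")

for T in (900.0, 1000.0, 1100.0):
    dG = gibbs_free_energy_change(dH, dS, T)
    print(f"T = {T:.0f} K -> dG = {dG:+.0f} J/mol "
          f"({'favored' if dG < 0 else 'not favored'})")
```

Below T_eq the transformation is driven forward; above it, the reverse transformation is favored, which is exactly the logic a phase diagram encodes along its boundaries.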

Kinetics

Chemical kinetics is the study of the rates at which systems that are out of equilibrium change under the influence of various forces. When applied to materials science, it deals with how a material changes with time (moves from a non-equilibrium to an equilibrium state) due to the application of a certain field. It details the rate at which various processes evolve in materials, including changes in shape, size, composition and structure. Diffusion is important in the study of kinetics as this is the most common mechanism by which materials undergo change.
Kinetics is essential in processing of materials because, among other things, it details how the microstructure changes with application of heat.
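Diffusion rates typically follow an Arrhenius dependence on temperature, D = D0 * exp(-Q / (R*T)), which is why heat so strongly accelerates microstructural change. A hedged sketch; the pre-exponential factor and activation energy below are assumed, order-of-magnitude values for illustration only:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def diffusion_coefficient(d0_m2_s, q_j_mol, T_k):
    """Arrhenius form D = D0 * exp(-Q / (R*T))."""
    return d0_m2_s * math.exp(-q_j_mol / (R * T_k))

# Assumed example values (illustrative only, not tabulated data):
D0 = 6.2e-7    # pre-exponential factor, m^2/s
Q = 80_000.0   # activation energy, J/mol

for T in (300.0, 600.0, 900.0):
    D = diffusion_coefficient(D0, Q, T)
    print(f"T = {T:.0f} K -> D = {D:.3e} m^2/s")
```

The exponential means a modest temperature increase can change the diffusion coefficient by many orders of magnitude, which is why annealing is performed hot.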

In research

Materials science has received much attention from researchers. In most universities, many departments ranging from physics to chemistry to chemical engineering, along with materials science departments, are involved in materials research. Research in materials science is vibrant and consists of many avenues. The following list is in no way exhaustive. It serves only to highlight certain important research areas.

Nanomaterials

A scanning electron microscopy image of carbon nanotube bundles

Nanomaterials describe, in principle, materials of which a single unit is sized (in at least one dimension) between 1 and 1000 nanometers (10⁻⁹ m) but is usually 1–100 nm.

Nanomaterials research takes a materials science-based approach to nanotechnology, leveraging advances in materials metrology and synthesis which have been developed in support of microfabrication research. Materials with structure at the nanoscale often have unique optical, electronic, or mechanical properties.

The field of nanomaterials is loosely organized, like the traditional field of chemistry, into organic (carbon-based) nanomaterials such as fullerenes, and inorganic nanomaterials based on other elements, such as silicon. Examples of nanomaterials include fullerenes, carbon nanotubes, nanocrystals, etc.

Biomaterials

The iridescent nacre inside a nautilus shell.

A biomaterial is any matter, surface, or construct that interacts with biological systems. The study of biomaterials is called biomaterials science. It has experienced steady and strong growth over its history, with many companies investing large amounts of money into developing new products. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering, and materials science.

Biomaterials can be derived either from nature or synthesized in a laboratory using a variety of chemical approaches involving metallic components, polymers, bioceramics, or composite materials. They are often used and/or adapted for a medical application, and thus comprise all or part of a living structure or biomedical device which performs, augments, or replaces a natural function. Such functions may be benign, like being used for a heart valve, or may be bioactive with a more interactive functionality such as hydroxylapatite-coated hip implants. Biomaterials are also used every day in dental applications, surgery, and drug delivery. For example, a construct with impregnated pharmaceutical products can be placed into the body, which permits the prolonged release of a drug. A biomaterial may also be an autograft, allograft or xenograft used as an organ transplant material.

Electronic, optical, and magnetic


Semiconductors, metals, and ceramics are used today to form highly complex systems, such as integrated electronic circuits, optoelectronic devices, and magnetic and optical mass storage media. These materials form the basis of our modern computing world, and hence research into these materials is of vital importance.

Semiconductors are a traditional example of these types of materials. They are materials that have properties intermediate between those of conductors and insulators. Their electrical conductivities are very sensitive to impurity concentrations, and this allows the use of doping to achieve desirable electronic properties. Hence, semiconductors form the basis of the traditional computer.
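Just how sensitive conductivity is to doping can be sketched with the standard relation sigma ≈ q * n * mu_n for an n-type sample where donor atoms dominate (n ≈ N_D). The mobility value below is an assumed, textbook-style figure for silicon, used only for illustration:

```python
Q_E = 1.602e-19   # elementary charge, C
MU_N = 0.135      # electron mobility in Si, m^2/(V*s) (assumed textbook-style value)

def n_type_conductivity(donor_density_per_m3):
    """sigma ~= q * n * mu_n, valid when donors dominate so n ~= N_D."""
    return Q_E * donor_density_per_m3 * MU_N

# Spanning four orders of magnitude in donor density spans four orders
# of magnitude in conductivity -- the sensitivity doping exploits:
for nd in (1e19, 1e21, 1e23):  # donors per m^3
    print(f"N_D = {nd:.0e} m^-3 -> sigma ~ {n_type_conductivity(nd):.3e} S/m")
```

This linear dependence on carrier density (ignoring mobility's own weak dependence on doping) is what lets tiny impurity concentrations set a device's electrical behavior.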

This field also includes new areas of research such as superconducting materials, spintronics, metamaterials, etc. The study of these materials involves knowledge of materials science and solid-state physics or condensed matter physics.

Computational science and theory

With the increase in computing power, simulating the behavior of materials has become possible. This enables materials scientists to discover properties of materials formerly unknown, as well as to design new materials. Until recently, new materials were found by time-consuming trial-and-error processes; it is now hoped that computational methods can drastically reduce that time and allow the tailoring of materials properties. This involves simulating materials at all length scales, using methods such as density functional theory, molecular dynamics, etc.
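At the heart of molecular dynamics is a time-stepping integrator for Newton's equations of motion. A toy sketch of the widely used velocity-Verlet scheme, applied to a single particle in a harmonic well (force F = -k*x) so the result can be checked against the exact solution; real MD codes apply the same update to many atoms with interatomic potentials:

```python
import math

def velocity_verlet(x, v, k=1.0, m=1.0, dt=0.01, steps=1000):
    """Integrate one particle with F = -k*x using velocity-Verlet:
    position update with current acceleration, then velocity update
    with the average of old and new accelerations."""
    a = -k * x / m
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt
        a_new = -k * x / m
        v += 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

# Start at x = 1, v = 0; after steps*dt = 10 time units (omega = 1 rad/s)
# the analytic solution is x = cos(10), v = -sin(10).
x, v = velocity_verlet(1.0, 0.0)
print(f"numeric:  x = {x:+.6f}, v = {v:+.6f}")
print(f"analytic: x = {math.cos(10.0):+.6f}, v = {-math.sin(10.0):+.6f}")
```

Velocity-Verlet is favored in MD because it is time-reversible and conserves energy well over long runs, which matters far more there than raw per-step accuracy.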

In industry

Radical materials advances can drive the creation of new products or even new industries, but stable industries also employ materials scientists to make incremental improvements and troubleshoot issues with currently used materials. Industrial applications of materials science include materials design, cost-benefit tradeoffs in industrial production of materials, processing methods (casting, rolling, welding, ion implantation, crystal growth, thin-film deposition, sintering, glassblowing, etc.), and analytic methods (characterization methods such as electron microscopy, X-ray diffraction, calorimetry, nuclear microscopy (HEFIB), Rutherford backscattering, neutron diffraction, small-angle X-ray scattering (SAXS), etc.).

Besides material characterization, the material scientist or engineer also deals with extracting materials and converting them into useful forms. Thus ingot casting, foundry methods, blast furnace extraction, and electrolytic extraction are all part of the required knowledge of a materials engineer. Often the presence, absence, or variation of minute quantities of secondary elements and compounds in a bulk material will greatly affect the final properties of the materials produced. For example, steels are classified based on tenths and hundredths of a weight percent of the carbon and other alloying elements they contain. Thus, the extracting and purifying methods used to extract iron in a blast furnace can affect the quality of steel that is produced.

Ceramics and glasses

Si3N4 ceramic bearing parts

Another application of materials science is the study of the structures of ceramics and glass, typically associated with the most brittle materials. Bonding in ceramics and glasses uses covalent and ionic-covalent types, with SiO2 (silica or sand) as a fundamental building block. Ceramics can be as soft as clay or as hard as stone and concrete. Usually, they are crystalline in form. Most glasses contain a metal oxide fused with silica. At the high temperatures used to prepare glass, the material is a viscous liquid. The structure of glass forms into an amorphous state upon cooling. Windowpanes and eyeglasses are important examples. Fibers of glass are also available. Scratch-resistant Corning Gorilla Glass is a well-known example of the application of materials science to drastically improve the properties of common components. Diamond and carbon in its graphite form are considered to be ceramics.

Engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. Alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. Hot pressing provides higher density material. Chemical vapor deposition can place a film of a ceramic on another material. Cermets are ceramic particles containing some metals. The wear resistance of tools is derived from cemented carbides with the metal phase of cobalt and nickel typically added to modify properties.

Composites

A 6 μm diameter carbon filament (running from bottom left to top right) sitting atop a much larger human hair.

Filaments are commonly used for reinforcement in composite materials.

Another application of materials science in industry is making composite materials. These are structured materials composed of two or more macroscopic phases. Applications range from structural elements such as steel-reinforced concrete, to the thermal insulating tiles which play a key and integral role in NASA's Space Shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re-entry into the Earth's atmosphere. One example is reinforced carbon–carbon (RCC), the light gray material which withstands re-entry temperatures up to 1,510 °C (2,750 °F) and protects the Space Shuttle's wing leading edges and nose cap. RCC is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. After curing at high temperature in an autoclave, the laminate is pyrolyzed to convert the resin to carbon, impregnated with furfural alcohol in a vacuum chamber, and cured-pyrolyzed to convert the furfural alcohol to carbon. To provide oxidation resistance for reusability, the outer layers of the RCC are converted to silicon carbide.

Other examples can be seen in the "plastic" casings of television sets, cell phones and so on. These plastic casings are usually a composite material made up of a thermoplastic matrix such as acrylonitrile butadiene styrene (ABS) to which calcium carbonate (chalk), talc, glass fibers or carbon fibers have been added for strength, bulk, or electrostatic dispersion. These additions may be termed reinforcing fibers, or dispersants, depending on their purpose.

Polymers

The repeating unit of the polymer polypropylene
 
Expanded polystyrene polymer packaging.

Polymers are chemical compounds made up of a large number of identical components linked together like chains. They are an important part of materials science. Polymers are the raw materials (the resins) used to make what are commonly called plastics and rubber. Plastics and rubber are really the final product, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. Long-established plastics in widespread use include polyethylene, polypropylene, polyvinyl chloride (PVC), polystyrene, nylons, polyesters, acrylics, polyurethanes, and polycarbonates; long-established rubbers include natural rubber, styrene-butadiene rubber, chloroprene, and butadiene rubber. Plastics are generally classified as commodity, specialty and engineering plastics.

Polyvinyl chloride (PVC) is widely used, inexpensive, and annual production quantities are large. It lends itself to a vast array of applications, from artificial leather to electrical insulation and cabling, packaging, and containers. Its fabrication and processing are simple and well-established. The versatility of PVC is due to the wide range of plasticisers and other additives that it accepts. The term "additives" in polymer science refers to the chemicals and compounds added to the polymer base to modify its material properties.

Polycarbonate would normally be considered an engineering plastic (other examples include PEEK and ABS). Such plastics are valued for their superior strengths and other special material properties. They are usually not used for disposable applications, unlike commodity plastics.

Specialty plastics are materials with unique characteristics, such as ultra-high strength, electrical conductivity, electro-fluorescence, high thermal stability, etc.

The dividing lines between the various types of plastics are not based on material but rather on their properties and applications. For example, polyethylene (PE) is a cheap, low-friction polymer commonly used to make disposable bags for shopping and trash, and is considered a commodity plastic, whereas medium-density polyethylene (MDPE) is used for underground gas and water pipes, and another variety called ultra-high-molecular-weight polyethylene (UHMWPE) is an engineering plastic which is used extensively as the glide rails for industrial equipment and the low-friction socket in implanted hip joints.

Metal alloys

Wire rope made from steel alloy.

The study of metal alloys is a significant part of materials science. Of all the metallic alloys in use today, the alloys of iron (steel, stainless steel, cast iron, tool steel, alloy steels) make up the largest proportion both by quantity and commercial value. Iron alloyed with various proportions of carbon gives low-, mid- and high-carbon steels. An iron-carbon alloy is only considered steel if the carbon level is between 0.01% and 2.00%. For steels, the hardness and tensile strength are related to the amount of carbon present, with increasing carbon levels also leading to lower ductility and toughness. Heat treatment processes such as quenching and tempering can significantly change these properties, however. Cast iron is defined as an iron–carbon alloy with more than 2.00% but less than 6.67% carbon. Stainless steel is defined as a regular steel alloy with greater than 10% by weight alloying content of chromium. Nickel and molybdenum are typically also found in stainless steels.
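The carbon cutoffs above translate directly into a small classifier. The 0.01%, 2.00% and 6.67% boundaries come from the text; the sub-ranges separating low-, medium- and high-carbon steel (0.30% and 0.60%) are assumed, common textbook-style values added for illustration:

```python
def classify_iron_carbon_alloy(carbon_wt_pct):
    """Classify an iron-carbon alloy by carbon weight percent, using the
    cutoffs quoted in the text (0.01-2.00% steel, 2.00-6.67% cast iron).
    The low/medium/high-carbon sub-ranges are assumed illustrative values."""
    if carbon_wt_pct < 0.01:
        return "commercially pure iron"
    if carbon_wt_pct <= 2.00:
        if carbon_wt_pct < 0.30:
            return "low-carbon steel"
        if carbon_wt_pct < 0.60:
            return "medium-carbon steel"
        return "high-carbon steel"
    if carbon_wt_pct < 6.67:
        return "cast iron"
    return "outside the usual iron-carbon alloy range"

for pct in (0.005, 0.1, 0.45, 1.0, 3.5):
    print(f"{pct:5.3f} wt% C -> {classify_iron_carbon_alloy(pct)}")
```
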

Other significant metallic alloys are those of aluminium, titanium, copper and magnesium. Copper alloys have been known for a long time (since the Bronze Age), while the alloys of the other three metals have been relatively recently developed. Due to the chemical reactivity of these metals, the electrolytic extraction processes required were only developed relatively recently. The alloys of aluminium, titanium and magnesium are also known and valued for their high strength-to-weight ratios and, in the case of magnesium, their ability to provide electromagnetic shielding. These materials are ideal for situations where high strength-to-weight ratios are more important than bulk cost, such as in the aerospace industry and certain automotive engineering applications.

Semiconductors

The study of semiconductors is a significant part of materials science. A semiconductor is a material that has a resistivity between that of a metal and that of an insulator. Its electronic properties can be greatly altered by intentionally introducing impurities, or doping. From these semiconductor materials, devices such as diodes, transistors, light-emitting diodes (LEDs), and analog and digital electric circuits can be built, making them materials of interest in industry. Semiconductor devices have replaced thermionic devices (vacuum tubes) in most applications. Semiconductor devices are manufactured both as single discrete devices and as integrated circuits (ICs), which consist of a number—from a few to millions—of devices manufactured and interconnected on a single semiconductor substrate.[14]

Of all the semiconductors in use today, silicon makes up the largest portion both by quantity and commercial value. Monocrystalline silicon is used to produce wafers used in the semiconductor and electronics industry. Gallium arsenide (GaAs) is the second most widely used semiconductor. Due to its higher electron mobility and saturation velocity compared to silicon, it is a material of choice for high-speed electronics applications. These superior properties are compelling reasons to use GaAs circuitry in mobile phones, satellite communications, microwave point-to-point links and higher-frequency radar systems. Other semiconductor materials include germanium, silicon carbide, and gallium nitride, which have various applications.

Relation to other fields

Materials science evolved—starting from the 1960s—because it was recognized that to create, discover and design new materials, one had to approach it in a unified manner. Thus, materials science and engineering emerged at the intersection of various fields such as metallurgy, solid state physics, chemistry, chemical engineering, mechanical engineering and electrical engineering.

The field is inherently interdisciplinary, and the materials scientists/engineers must be aware and make use of the methods of the physicist, chemist and engineer. The field thus maintains close relationships with these fields. Also, many physicists, chemists and engineers also find themselves working in materials science.

The overlap between physics and materials science has led to the offshoot field of materials physics, which is concerned with the physical properties of materials. The approach is generally more macroscopic and applied than in condensed matter physics. See important publications in materials physics for more details on this field of study.

The field of materials science and engineering is important both from a scientific perspective and from an engineering one. When discovering new materials, one encounters new phenomena that may not have been observed before. Hence, there is a lot of science to be discovered when working with materials. Materials science also provides a test for theories in condensed matter physics.

Materials are of the utmost importance for engineers, as the usage of the appropriate materials is crucial when designing systems. As a result, materials science is an increasingly important part of an engineer's education.

Emerging technologies in materials science

Emerging technology | Status | Potentially marginalized technologies | Potential applications | Related articles
Aerogel | Hypothetical, experiments, diffusion, early uses[15] | Traditional insulation, glass | Improved insulation; insulative glass if it can be made clear; sleeves for oil pipelines; aerospace; high-heat and extreme-cold applications |
Amorphous metal | Experiments | Kevlar | Armor |
Conductive polymers | Research, experiments, prototypes | Conductors | Lighter and cheaper wires; antistatic materials; organic solar cells |
Femtotechnology, picotechnology | Hypothetical | Present nuclear | New materials; nuclear weapons, power |
Fullerene | Experiments, diffusion | Synthetic diamond and carbon nanotubes (e.g., Buckypaper) | Programmable matter |
Graphene | Hypothetical, experiments, diffusion, early uses[16][17] | Silicon-based integrated circuit | Components with higher strength-to-weight ratios; transistors that operate at higher frequency; lower cost of display screens in mobile devices; storing hydrogen for fuel cell powered cars; filtration systems; longer-lasting and faster-charging batteries; sensors to diagnose diseases[18] | Potential applications of graphene
High-temperature superconductivity | Cryogenic receiver front-end (CRFE) RF and microwave filter systems for mobile phone base stations; prototypes in dry ice; hypothetical and experiments for higher temperatures[19] | Copper wire, semiconductor integrated circuits | No-loss conductors; frictionless bearings; magnetic levitation; lossless high-capacity accumulators; electric cars; heat-free integrated circuits and processors |
LiTraCon | Experiments; already used to make Europe Gate | Glass | Building skyscrapers, towers, and sculptures like Europe Gate |
Metamaterials | Hypothetical, experiments, diffusion[20] | Classical optics | Microscopes; cameras; metamaterial cloaking; cloaking devices |
Metal foam | Research, commercialization | Hulls | Space colonies; floating cities |
Multi-function structures[21] | Hypothetical, experiments, some prototypes, few commercial | Composite materials mostly | Wide range, e.g., self health monitoring, self-healing material, morphing |
Nanomaterials: carbon nanotubes | Hypothetical, experiments, diffusion, early uses[22][23] | Structural steel and aluminium | Stronger, lighter materials; space elevator | Potential applications of carbon nanotubes, carbon fiber
Programmable matter | Hypothetical, experiments[24][25] | Coatings, catalysts | Wide range, e.g., claytronics, synthetic biology |
Quantum dots | Research, experiments, prototypes[26] | LCD, LED | Quantum dot laser; future use as programmable matter in display technologies (TV, projection), optical data communications (high-speed data transmission), medicine (laser scalpel) |
Silicene | Hypothetical, research | | Field-effect transistors |
Superalloy | Research, diffusion | Aluminum, titanium, composite materials | Aircraft jet engines |
Synthetic diamond | Early uses (drill bits, jewelry) | Silicon transistors | Electronics |

Monday, July 16, 2018

Gene delivery

From Wikipedia, the free encyclopedia
 
Gene delivery is the process of introducing foreign genetic material, such as DNA or RNA, into host cells. Genetic material must reach the nucleus of the host cell to induce gene expression. Successful gene delivery requires the foreign genetic material to remain stable within the host cell, where it can either integrate into the genome or replicate independently of it. This requires the foreign DNA to be synthesized as part of a vector, which is designed to enter the desired host cell and deliver the transgene to that cell's genome. Vectors used for gene delivery fall into two categories: recombinant viral vectors and synthetic (non-viral) vectors.

In complex multicellular eukaryotes (more specifically Weissmanists), if the transgene is incorporated into the host's germline cells, the resulting host cell can pass the transgene to its progeny. If the transgene is incorporated into somatic cells, the transgene will stay with the somatic cell line, and thus its host organism.[6]

Gene delivery is a necessary step in gene therapy, where a gene is introduced or silenced to promote a therapeutic outcome in patients; it also has applications in the genetic modification of crops. There are many different methods of gene delivery for various types of cells and tissues.[7]

History

Viral vectors emerged in the 1980s as a tool for transgene expression. In 1983, Siegel described the use of viral vectors in plant transgene expression, although viral manipulation via cDNA cloning was not yet available.[8] The first virus used as a vaccine vector was the vaccinia virus, in 1984, as a way to protect chimpanzees against hepatitis B.[9] Non-viral gene delivery was first reported in 1943 by Avery et al., who showed cellular phenotype change via exposure to exogenous DNA.[10]

Methods

Electroporator with square wave and exponential decay waveforms for in vitro, in vivo, adherent cell and 96-well electroporation applications. Manufactured by BTX Harvard Apparatus, Holliston, MA, USA.

Non-viral Delivery

Non-viral gene delivery encompasses chemical and physical delivery methods.[11] Compared to viral vectors, non-viral methods are less likely to induce an immune response, are more cost-efficient, and can deliver larger pieces of genetic material. Their main drawback is low delivery efficiency.[11]

Chemical

Chemical methods of non-viral gene delivery use natural or synthetic compounds to form particles that facilitate the transfer of genes into cells.[12] These synthetic vectors can electrostatically bind DNA or RNA and compact the genetic material to accommodate larger genetic transfers.[13] Non-viral chemical vectors enter cells by endocytosis and can protect genetic material from degradation.[11]

Two common non-viral vectors are liposomes and polymers. Liposome-based vectors facilitate gene delivery by forming lipoplexes, which assemble spontaneously when positively charged liposomes complex with negatively charged DNA.[12] Polymer-based vectors use polymers that interact with DNA to form polyplexes.[11]

The use of engineered organic nanoparticles is another non-viral approach for gene delivery.[14]

Physical

Artificial non-viral gene delivery can be mediated by physical methods, which use force to introduce genetic material through the cell membrane.[12]

Physical methods of gene delivery include:[12]
  • Ballistic DNA injection - Gold coated DNA particles are forced into cells
  • Electroporation - Electric pulses create pores in a cell membrane to allow entry of genetic material
  • Sonoporation - Sound waves create pores in a cell membrane to allow entry of genetic material
  • Photoporation - Laser pulses create pores in a cell membrane to allow entry of genetic material
  • Magnetofection - Magnetic particles complexed with DNA and an external magnetic field concentrate nucleic acid particles into target cells
  • Hydroporation - Hydrodynamic capillary effect manipulates cell permeability

Viral Delivery

Foreign DNA being transduced into the host cell through an adenovirus vector.

Virus-mediated gene delivery exploits a virus's ability to inject its DNA into a host cell and takes advantage of the virus's own capacity to replicate and express its genetic material. Transduction is the process by which viral DNA is injected into the host cell and inserted into its genome [needs citation]. Viruses are a particularly effective form of gene delivery because their structure prevents the DNA they carry from being degraded by lysosomes before it reaches the nucleus of the host cell.[15] In gene therapy, the gene intended for delivery is packaged into a replication-deficient viral particle to form a viral vector.[16] Viruses used for gene therapy to date include retrovirus, adenovirus, adeno-associated virus and herpes simplex virus. However, there are drawbacks to using viruses to deliver genes: viruses can only deliver very small pieces of DNA, the process is labor-intensive, and there are risks of random insertion sites, cytopathic effects and mutagenesis.[17]

Viral vector based gene delivery uses a viral vector to deliver genetic material to the host cell. This is done by using a virus that contains the desired gene and removing the infectious part of the virus's genome.[18] Viruses are efficient at delivering genetic material to the host cell's nucleus, which is vital for replication.[15]

RNA-based viral vectors

RNA-based viral vectors were developed because of the ability to transcribe directly from infectious RNA transcripts. Transgenes delivered by RNA vectors are expressed quickly and in the targeted form, since no processing is required. Genomic integration leads to long-term transgene expression, but RNA-based delivery is usually transient rather than permanent.[2] Retroviral vectors include oncoretroviral vectors, lentiviral vectors, and human foamy virus.[2]

DNA-based viral vectors

DNA-based viral vectors are usually longer lasting with the possibility of integrating into the genome. Some DNA-based viral vectors include: Adenoviridae, Adeno-associated virus, Herpes simplex virus[2]

Applications

Gene Therapy

Several of the methods used to facilitate gene delivery have applications for therapeutic purposes.  Gene therapy utilizes gene delivery to deliver genetic material with the goal of treating a disease or condition in the cell. Gene delivery in therapeutic settings utilizes non-immunogenic vectors capable of cell specificity that can deliver an adequate amount of transgene expression to cause the desired effect.[19]

Advances in genomics have enabled a variety of new methods and gene targets to be identified for possible applications. DNA microarrays and next-generation sequencing can profile thousands of genes simultaneously, with analytical software examining gene expression patterns and orthologous genes in model species to identify function.[20] This has allowed a variety of possible vectors to be identified for use in gene therapy. Gene delivery has also been used to generate a new class of vaccine: a hybrid biosynthetic vector. This vector overcomes traditional barriers to gene delivery by combining E. coli with a synthetic polymer, maintaining plasmid DNA while having an increased ability to avoid degradation by target cell lysosomes.

Molecular engineering

From Wikipedia, the free encyclopedia
 
Molecular engineering is an emerging field of study concerned with the design and testing of molecular properties, behavior and interactions in order to assemble better materials, systems, and processes for specific functions. This approach, in which observable properties of a macroscopic system are influenced by direct alteration of a molecular structure, falls into the broader category of “bottom-up” design.
 
Molecular engineering deals with material development efforts in emerging technologies that require rigorous rational molecular design approaches towards systems of high complexity.

Molecular engineering is highly interdisciplinary by nature, encompassing aspects of chemical engineering, materials science, bioengineering, electrical engineering, physics, mechanical engineering, and chemistry. There is also considerable overlap with nanotechnology, in that both are concerned with the behavior of materials on the scale of nanometers or smaller. Given the highly fundamental nature of molecular interactions, there are a plethora of potential application areas, limited perhaps only by one’s imagination and the laws of physics. However, some of the early successes of molecular engineering have come in the fields of immunotherapy, synthetic biology, and printable electronics.

Molecular engineering is a dynamic and evolving field with complex target problems; breakthroughs require sophisticated and creative engineers who are conversant across disciplines. A rational engineering methodology that is based on molecular principles is in contrast to the widespread trial-and-error approaches common throughout engineering disciplines. Rather than relying on well-described but poorly-understood empirical correlations between the makeup of a system and its properties, a molecular design approach seeks to manipulate system properties directly using an understanding of their chemical and physical origins. This often gives rise to fundamentally new materials and systems, which are required to address outstanding needs in numerous fields, from energy to healthcare to electronics. Additionally, with the increased sophistication of technology, trial-and-error approaches are often costly and difficult, as it may be difficult to account for all relevant dependencies among variables in a complex system. Molecular engineering efforts may include computational tools, experimental methods, or a combination of both.

History

Molecular engineering was first mentioned in the research literature in 1956 by Arthur R. von Hippel, who defined it as "… a new mode of thinking about engineering problems. Instead of taking prefabricated materials and trying to devise engineering applications consistent with their macroscopic properties, one builds materials from their atoms and molecules for the purpose at hand."[1] This concept was echoed in Richard Feynman’s seminal 1959 lecture There's Plenty of Room at the Bottom, which is widely regarded as giving birth to some of the fundamental ideas of the field of nanotechnology. In spite of the early introduction of these concepts, it was not until the mid-1980s with the publication of Engines of Creation: The Coming Era of Nanotechnology by Drexler that the modern concepts of nano and molecular-scale science began to grow in the public consciousness.

The discovery of electrically-conductive properties in polyacetylene by Alan J. Heeger in 1977[2] effectively opened the field of organic electronics, which has proved foundational for many molecular engineering efforts. Design and optimization of these materials has led to a number of innovations including organic light-emitting diodes and flexible solar cells.

Applications

Molecular design has been an important element of many disciplines in academia, including bioengineering, chemical engineering, electrical engineering, materials science, mechanical engineering and chemistry. However, one of the ongoing challenges is in bringing together the critical mass of manpower amongst disciplines to span the realm from design theory to materials production, and from device design to product development. Thus, while the concept of rational engineering of technology from the bottom-up is not new, it is still far from being widely translated into R&D efforts.

Molecular engineering is used in many industries. Some applications of technologies where molecular engineering plays a critical role:

Consumer Products

  • Antibiotic surfaces (e.g. incorporation of silver nanoparticles or antibacterial peptides into coatings to prevent microbial infection)[3]
  • Cosmetics (e.g. rheological modification with small molecules and surfactants in shampoo)
  • Cleaning products (e.g. nanosilver in laundry detergent)
  • Consumer electronics (organic light-emitting diode displays (OLED))
  • Electrochromic windows (e.g. windows in Dreamliner 787)
  • Zero emission vehicles (e.g. advanced fuel cells/batteries)
  • Self-cleaning surfaces (e.g. super hydrophobic surface coatings)

Energy Harvesting and Storage

Environmental Engineering

  • Water desalination (e.g. new membranes for highly-efficient low-cost ion removal[12])
  • Soil remediation (e.g. catalytic nanoparticles that accelerate the degradation of long-lived soil contaminants such as chlorinated organic compounds[13])
  • Carbon sequestration (e.g. new materials for CO2 adsorption[14])

Immunotherapy

  • Peptide-based vaccines (e.g. amphiphilic peptide macromolecular assemblies induce a robust immune response)[15]

Synthetic Biology

  • CRISPR - Faster and more efficient gene editing technique
  • Gene delivery/gene therapy - Designing molecules to deliver modified or new genes into cells of live organisms to cure genetic disorders
  • Metabolic engineering - Modifying metabolism of organisms to optimize production of chemicals (e.g. synthetic genomics)
  • Protein engineering - Altering structure of existing proteins to enable specific new functions, or the creation of fully artificial proteins

Techniques and instruments used

Molecular engineers utilize sophisticated tools and instruments to create and analyze the interactions of molecules and the surfaces of materials at the molecular and nanoscale. The complexity of molecules being introduced at the surface is increasing, and the techniques used to analyze surface characteristics at the molecular level are ever-changing and improving. Meanwhile, advancements in high-performance computing have greatly expanded the use of computer simulation in the study of molecular-scale systems.

Computational and Theoretical Approaches

An EMSL scientist using the environmental transmission electron microscope at Pacific Northwest National Laboratory. The ETEM provides in situ capabilities that enable atomic-resolution imaging and spectroscopic studies of materials under dynamic operating conditions. In contrast to traditional operation of TEM under high vacuum, EMSL’s ETEM uniquely allows imaging within high-temperature and gas environments.

Microscopy

Molecular Characterization

Spectroscopy

Surface Science

Synthetic Methods

Other Tools

Research / Education

At least three universities offer graduate degrees dedicated to molecular engineering: the University of Chicago,[16] the University of Washington,[17] and Kyoto University.[18] These programs are interdisciplinary institutes with faculty from several research areas.

The academic journal Molecular Systems Design & Engineering[19] publishes research from a wide variety of subject areas that demonstrates "a molecular design or optimisation strategy targeting specific systems functionality and performance."

Molecular modelling

From Wikipedia, the free encyclopedia
 
The backbone dihedral angles are included in the molecular model of a protein.
 
Modeling of ionic liquid

Molecular modelling encompasses all methods, theoretical and computational, used to model or mimic the behaviour of molecules. The methods are used in the fields of computational chemistry, drug design, computational biology and materials science to study molecular systems ranging from small chemical systems to large biological molecules and material assemblies. The simplest calculations can be performed by hand, but inevitably computers are required to perform molecular modelling of any reasonably sized system. The common feature of molecular modelling methods is the atomistic level description of the molecular systems. This may include treating atoms as the smallest individual unit (a molecular mechanics approach), or explicitly modelling electrons of each atom (a quantum chemistry approach).

Molecular mechanics

Molecular mechanics is one aspect of molecular modelling, as it involves the use of classical mechanics (Newtonian mechanics) to describe the physical basis behind the models. Molecular models typically describe atoms (nucleus and electrons collectively) as point charges with an associated mass. The interactions between neighbouring atoms are described by spring-like interactions (representing chemical bonds) and Van der Waals forces. The Lennard-Jones potential is commonly used to describe the latter. The electrostatic interactions are computed based on Coulomb's law. Atoms are assigned coordinates in Cartesian space or in internal coordinates, and can also be assigned velocities in dynamical simulations. The atomic velocities are related to the temperature of the system, a macroscopic quantity. The collective mathematical expression is termed a potential function and is related to the system internal energy (U), a thermodynamic quantity equal to the sum of potential and kinetic energies. Methods which minimize the potential energy are termed energy minimization methods (e.g., steepest descent and conjugate gradient), while methods that model the behaviour of the system with propagation of time are termed molecular dynamics.
E = E_bonds + E_angle + E_dihedral + E_non-bonded
E_non-bonded = E_electrostatic + E_van der Waals
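As a minimal sketch, the energy decomposition above can be written in code. The functional forms below (harmonic bonds and angles, a periodic torsion, Lennard-Jones plus Coulomb non-bonded terms) are standard choices, but every parameter value here is illustrative rather than taken from any published force field:

```python
import math

def bond_energy(r, r0=1.53, k=300.0):
    """Harmonic bond-stretch term k*(r - r0)^2 (r0, k illustrative)."""
    return k * (r - r0) ** 2

def angle_energy(theta, theta0=1.911, k=60.0):
    """Harmonic angle-bend term about an equilibrium angle (radians)."""
    return k * (theta - theta0) ** 2

def dihedral_energy(phi, vn=2.0, n=3, gamma=0.0):
    """Periodic torsion term (vn/2)*(1 + cos(n*phi - gamma))."""
    return 0.5 * vn * (1.0 + math.cos(n * phi - gamma))

def nonbonded_energy(r, q1=0.2, q2=-0.2, epsilon=0.1, sigma=3.4,
                     coulomb_k=332.06):
    """Lennard-Jones (van der Waals) plus Coulomb electrostatics."""
    lj = 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
    electro = coulomb_k * q1 * q2 / r
    return lj + electro

def total_energy(bonds, angles, dihedrals, pairs):
    """E = E_bonds + E_angle + E_dihedral + E_non-bonded over all terms."""
    return (sum(bond_energy(r) for r in bonds)
            + sum(angle_energy(t) for t in angles)
            + sum(dihedral_energy(p) for p in dihedrals)
            + sum(nonbonded_energy(r) for r in pairs))
```

With all bonded terms at their equilibrium values, only the non-bonded terms contribute, mirroring how a real force field partitions the internal energy.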
This function, referred to as a potential function, computes the molecular potential energy as a sum of energy terms that describe the deviation of bond lengths, bond angles and torsion angles away from equilibrium values, plus terms for non-bonded pairs of atoms describing van der Waals and electrostatic interactions. The set of parameters consisting of equilibrium bond lengths, bond angles, partial charge values, force constants and van der Waals parameters is collectively termed a force field. Different implementations of molecular mechanics use different mathematical expressions and different parameters for the potential function. The common force fields in use today have been developed by using high-level quantum calculations and/or fitting to experimental data. The method termed energy minimization is used to find positions of zero gradient for all atoms, in other words, a local energy minimum. Lower energy states are more stable and are commonly investigated because of their role in chemical and biological processes.

A molecular dynamics simulation, on the other hand, computes the behaviour of a system as a function of time. It involves solving Newton's laws of motion, principally the second law, F = ma. Integration of Newton's laws of motion, using different integration algorithms, leads to atomic trajectories in space and time. The force on an atom is defined as the negative gradient of the potential energy function. The energy minimization method is useful for obtaining a static picture for comparing between states of similar systems, while molecular dynamics provides information about dynamic processes with the intrinsic inclusion of temperature effects.
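To make the contrast between the two approaches concrete, the following sketch minimizes the energy of a two-atom Lennard-Jones system by steepest descent and then integrates its motion with the velocity-Verlet algorithm. Reduced Lennard-Jones units are used, and the step sizes and iteration counts are arbitrary choices for illustration:

```python
def lj_energy(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential in reduced units."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def lj_force(r, epsilon=1.0, sigma=1.0):
    """F = -dU/dr for the Lennard-Jones potential."""
    sr6 = (sigma / r) ** 6
    return 24.0 * epsilon * (2.0 * sr6 * sr6 - sr6) / r

def minimize(r=1.5, step=1e-3, iters=20000):
    """Steepest descent: repeatedly move along the force
    (the negative gradient of the potential energy)."""
    for _ in range(iters):
        r += step * lj_force(r)
    return r

def verlet(r=1.2, v=0.0, dt=1e-3, steps=1000, m=1.0):
    """Velocity-Verlet integration of the pair separation over time."""
    f = lj_force(r)
    for _ in range(steps):
        r += v * dt + 0.5 * (f / m) * dt * dt
        f_new = lj_force(r)
        v += 0.5 * (f + f_new) / m * dt
        f = f_new
    return r, v
```

Steepest descent converges to the static Lennard-Jones minimum at r = 2^(1/6) sigma, while the Verlet trajectory oscillates about that minimum with total energy (potential plus kinetic) conserved, which is the "dynamic picture" the text describes.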

Variables

Molecules can be modelled either in vacuum, or in the presence of a solvent such as water. Simulations of systems in vacuum are referred to as gas-phase simulations, while those that include the presence of solvent molecules are referred to as explicit solvent simulations. In another type of simulation, the effect of solvent is estimated using an empirical mathematical expression; these are termed implicit solvation simulations.

Coordinate representations

Most force fields are distance-dependent, making Cartesian coordinates the most convenient representation for them. Yet the comparatively rigid nature of bonds between specific atoms, which in essence defines what is meant by the designation molecule, makes an internal coordinate system the most logical representation. In some fields the internal-coordinate representation (bond length, angle between bonds, and twist angle of the bond, as shown in the figure) is termed the Z-matrix or torsion angle representation. Unfortunately, continuous motions in Cartesian space often require discontinuous angular branches in internal coordinates, making it relatively hard to work with force fields in the internal coordinate representation; conversely, a simple displacement of an atom in Cartesian space may not be a straight-line trajectory due to the prohibitions of the interconnected bonds. Thus, it is very common for computational optimizing programs to flip back and forth between representations during their iterations. This can dominate the calculation time of the potential itself and, in long-chain molecules, introduce cumulative numerical inaccuracy. While all conversion algorithms produce mathematically identical results, they differ in speed and numerical accuracy.[1] Currently, the fastest and most accurate torsion-to-Cartesian conversion is the Natural Extension Reference Frame (NERF) method.[1]
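As an illustration of the torsion-to-Cartesian direction, the atom-placement step at the heart of a NERF-style conversion can be sketched as follows. This is a minimal, unoptimized version, and the coordinate values in the test below are made up; a real converter would chain this step along the whole molecule:

```python
import math

def sub(a, b): return [x - y for x, y in zip(a, b)]
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]
def norm(a):
    l = math.sqrt(sum(x * x for x in a))
    return [x / l for x in a]

def place_atom(a, b, c, bond, angle, torsion):
    """Place atom D from atoms A, B, C and the internal coordinates:
    bond length C-D, bond angle B-C-D, torsion A-B-C-D (radians)."""
    bc = norm(sub(c, b))
    n = norm(cross(sub(b, a), bc))   # normal to the A-B-C plane
    m = cross(n, bc)                 # completes the local orthonormal frame
    # Position of D in the local frame (standard NERF convention):
    d = [-bond * math.cos(angle),
         bond * math.sin(angle) * math.cos(torsion),
         bond * math.sin(angle) * math.sin(torsion)]
    # Rotate into the global frame and translate by C:
    return [c[i] + bc[i] * d[0] + m[i] * d[1] + n[i] * d[2]
            for i in range(3)]
```

Because the local frame is orthonormal, the placed atom reproduces the requested bond length, bond angle and torsion exactly, which is why the conversion is numerically well behaved.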

Applications

Molecular modelling methods are now used routinely to investigate the structure, dynamics, surface properties, and thermodynamics of inorganic, biological, and polymeric systems. The types of biological activity that have been investigated using molecular modelling include protein folding, enzyme catalysis, protein stability, conformational changes associated with biomolecular function, and molecular recognition of proteins, DNA, and membrane complexes.

Big Data Analysis Identifies New Cancer Risk Genes


Summary: A newly developed statistical method has allowed researchers to identify 13 cancer predisposition risk genes, 10 of which, the scientists say, are new discoveries.

Source: Center for Genomic Regulation.

There are many genetic causes of cancer: while some mutations are inherited from your parents, others are acquired throughout your life due to external factors or mistakes in copying DNA. Large-scale genome sequencing has revolutionised the identification of cancers driven by the latter group of mutations – somatic mutations – but it has not been as effective in identifying the inherited genetic variants that predispose to cancer. The main source for identifying these inherited mutations is still family studies.

Now, three researchers at the Centre for Genomic Regulation (CRG) in Barcelona, led by the ICREA Research Professor Ben Lehner, have developed a new statistical method to identify cancer predisposition genes from tumour sequencing data. “Our computational method uses an old idea that cancer genes often require ‘two hits’ before they cause cancer. We developed a method that allows us to systematically identify these genes from existing cancer genome datasets,” explains Solip Park, first author of the study and Juan de la Cierva postdoctoral researcher at the CRG.

The method allows researchers to find risk variants without a control sample, meaning that they do not need to compare cancer patients to groups of healthy people. “Now we have a powerful tool to detect new cancer predisposition genes and, consequently, to contribute to improving cancer diagnosis and prevention in the future,” adds Park.

The work, which is published in Nature Communications, presents their statistical method ALFRED and identifies 13 candidate cancer predisposition genes, of which 10 are new. “We applied our method to the genome sequences of more than 10,000 cancer patients with 30 different tumour types and identified known and new possible cancer predisposition genes that have the potential to contribute substantially to cancer risk,” says Ben Lehner, principal investigator of the study.
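ALFRED itself is described in the paper; as a toy illustration of the underlying "two-hit" idea, one can count patients who carry both a rare germline variant and a somatic mutation in the same gene, and ask whether that co-occurrence is more frequent than chance. The sketch below uses hypothetical patient records and a simple binomial model, not the published method:

```python
from math import comb

def binom_sf(k, n, p):
    """Upper tail P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

def two_hit_enrichment(patients, gene):
    """Count 'two-hit' patients (germline variant plus somatic hit in the
    same gene) and test enrichment against independence of the two hits.
    Each patient is a dict with 'germline' and 'somatic' gene sets."""
    n = len(patients)
    germ = sum(1 for p in patients if gene in p["germline"])
    som = sum(1 for p in patients if gene in p["somatic"])
    both = sum(1 for p in patients
               if gene in p["germline"] and gene in p["somatic"])
    # Expected co-occurrence rate if germline and somatic hits were
    # independent of each other:
    expected_rate = (germ / n) * (som / n)
    p_value = binom_sf(both, n, expected_rate)
    return both, p_value
```

A gene where germline carriers are strongly enriched for somatic second hits yields a small p-value under this model, which is the intuition behind scanning tumour cohorts for predisposition genes without a healthy control group.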


Three researchers at the Centre for Genomic Regulation (CRG) identified new cancer risk genes using only publicly available data. Data sharing is key for genomic research to become more open, responsible and efficient. NeuroscienceNews.com image is credited to Jonathan Bailey, NHGRI.

“Our results show that the new cancer predisposition genes may have an important role in many types of cancer. For example, they were associated with 14% of ovarian tumours, 7% of breast tumours and with about 1 in 50 of all cancers. For example, inherited variants in one of the newly-proposed risk genes – NSD1 – may be implicated in at least 3 out of 1,000 cancer patients,” explains Fran Supek, CRG alumnus and currently group leader of the Genome Data Science laboratory at the Institute for Research in Biomedicine (IRB Barcelona).

When sharing is key to advance knowledge

The researchers worked with genome data from several cancer studies from around the world, including The Cancer Genome Atlas (TCGA) project and also from several projects having nothing to do with cancer research. “We managed to develop and test a new method that hopefully will improve our understanding of cancer genomics and will contribute to cancer research, diagnostics and prevention just by using public data,” states Solip Park.

Ben Lehner adds, “Our work highlights how important it is to share genomic data. It is a success story for how being open is far more efficient and has a multiplier effect. We combined data from many different projects and by applying a new computational method were able to identify important cancer genes that were not identified by the original studies. Many patient groups lobby for better sharing of genomic data because it is only by comparing data across hospitals, countries and diseases that we can obtain a deep understanding of many rare and common diseases. Unfortunately, many researchers still do not share their data and this is something we need to actively change as a society”.
 
About this neuroscience research article

Funding: The European Research Council, AXA Research Fund, Spanish Ministry of Economy and Competitiveness, Centro de Excelencia Severo Ochoa, and Agència de Gestió d’Ajuts Universitaris i de Recerca funded this study.

Source: Laia Cendros – Center for Genomic Regulation
 
Publisher: Organized by NeuroscienceNews.com.
 
Image Source: NeuroscienceNews.com image is credited to Jonathan Bailey, NHGRI.
 
Original Research: Open access research for “Systematic discovery of germline cancer predisposition genes through the identification of somatic second hits” by Solip Park, Fran Supek & Ben Lehner in Nature Communications. Published July 4 2018.
 
doi:10.1038/s41467-018-04900-7

Sci-Fi Fans Enthusiastic For Digitizing the Brain


Summary: Researchers report science fiction fans are positive about the potential to upload consciousness, neurotech and digitizing the brain.

Source: University of Helsinki.

“Mind upload is a technology rife with unsolved philosophical questions,” says researcher Michael Laakasuo.

“For example, is the potential for conscious experiences transmitted when the brain is copied? Does the digital brain have the ability to feel pain, and is switching off the emulated brain comparable to homicide? And what might potentially everlasting life be like on a digital platform?”

A positive attitude from science fiction enthusiasts

Such questions may sound like science fiction, but the first breakthroughs in digitising the brain have already been made: for example, the nervous system of the roundworm (C. elegans) has been successfully modelled within a Lego robot capable of independently moving and avoiding obstacles. Recently, a functional digital copy of a piece of the somatosensory cortex of the rat brain was also successfully created.

Scientific discoveries in the field of brain digitisation, and the questions they raise, are given consideration both in science fiction and in scientific journals in philosophy. Moralities of Intelligent Machines, a research group working at the University of Helsinki, is also investigating the subject from the perspective of moral psychology; in other words, mapping out the tendency of ordinary people to either approve of or condemn the use of such technology.

“In the first sub-project, where data was collected in the United States, it was found that men are more approving of the technology than women. But standardising for interest in science fiction evened out such differences,” explains Laakasuo.

According to Laakasuo, a stronger exposure to science fiction correlated with a more positive outlook on the mind upload technology overall. The study also found that traditional religiousness is linked with negative reactions towards the technology.

Disapproval from those disgust sensitive to sexual matters

Another sub-study, in which data was collected in Finland, indicated that people generally disapproved of uploading a human consciousness, regardless of the target, be it a chimpanzee, a computer or an android.

In a third project, the researchers observed a positive outlook on and approval of the technology among those troubled by death and disapproving of suicide. In this sub-project, the researchers also found a strong connection between sexual disgust sensitivity and disapproval of the mind upload technology. People with this type of disgust sensitivity find, for example, the viewing of pornographic videos and the lovemaking noises of neighbours disgusting. The link between sexual disgust sensitivity and disapproval of the mind upload technology is surprising, given that, on the face of it, the technology has no relevant association with procreation and mate choice.

a digital brain

According to Laakasuo, a stronger exposure to science fiction correlated with a more positive outlook on the mind upload technology overall. The study also found that traditional religiousness is linked with negative reactions towards the technology. NeuroscienceNews.com image is in the public domain.

“However, the inability to biologically procreate with a person who has digitised his or her brain may make the findings seem reasonable. In other words, technology is posing a fundamental challenge to our understanding of human nature,” reasons Laakasuo.

Digital copies of the human brain can reproduce much like an amoeba, by division, which makes sexuality, one of the founding pillars of humanity, obsolete. Against this background, the link between sexual disgust and the condemnation of using the technology in question seems rational.

Funding for research on machine intelligence and robotics

The research projects above were funded by the Jane and Aatos Erkko Foundation, in addition to which the Moralities of Intelligent Machines project has received €100,000 from the Weisell Foundation (link in Finnish only) for a year of follow-up research. According to Mikko Voipio, the foundation chair, humanism has great significance for research focused on machine intelligence and robotics.

“The bold advances in artificial intelligence as well as its increasing prevalence in various aspects of life are raising concern about the ethical and humanistic side of technological applications. Are the ethics of the relevant field of application also taken into consideration when developing and training such systems? The Moralities of Intelligent Machines research group is concentrating on this often forgotten factor of applying technology. The board of the Weisell Foundation considers this type of research important right now when artificial intelligence seems to have become a household phrase among politicians. It’s good that the other side of the coin also receives attention.”

According to Michael Laakasuo, funding prospects for research on the moral psychology of robotics and artificial intelligence are currently somewhat hazy, but the Moralities of Intelligent Machines group is grateful to both its funders and Finnish society for their continuous interest and encouragement.
 
About this neuroscience research article

Source: Michael Laakasuo – University of Helsinki
 
Publisher: Organized by NeuroscienceNews.com.
 
Image Source: NeuroscienceNews.com image is in the public domain.
 
Original Research: Open access research for “What makes people approve or condemn mind upload technology? Untangling the effects of sexual disgust, purity and science fiction familiarity” by Michael Laakasuo, Marianna Drosinou, Mika Koverola, Anton Kunnari, Juho Halonen, Noora Lehtonen & Jussi Palomäki in Palgrave Communications. Published July 10 2018.
 
doi:10.1057/s41599-018-0124-6

Nucleophilic substitution

SN2 reaction mechanism
From Wikipedia, the free encyclopedia
In organic and inorganic chemistry, nucleophilic substitution is a fundamental class of reactions in which an electron-rich nucleophile selectively bonds with or attacks the positive or partially positive charge of an atom or group of atoms, replacing a leaving group; the positive or partially positive atom is referred to as an electrophile. The whole molecular entity of which the electrophile and the leaving group are part is usually called the substrate.

The most general form of the reaction may be given as the following:
Nuc: + R-LG → R-Nuc + LG:
The electron pair (:) from the nucleophile (Nuc) attacks the substrate (R-LG), forming a new bond, while the leaving group (LG) departs with an electron pair. The principal product in this case is R-Nuc. The nucleophile may be electrically neutral or negatively charged, whereas the substrate is typically neutral or positively charged.

An example of nucleophilic substitution is the hydrolysis of an alkyl bromide, R-Br, under basic conditions, where the attacking nucleophile is the hydroxide ion, OH-, and the leaving group is the bromide ion, Br-.
R-Br + OH- → R-OH + Br-
Nucleophilic substitution reactions are commonplace in organic chemistry, and they can be broadly categorised as taking place at a saturated aliphatic carbon or at (less often) an aromatic or other unsaturated carbon centre.[3]

Saturated carbon centres

SN1 and SN2 reactions

A graph showing the relative reactivities of the different alkyl halides towards SN1 and SN2 reactions (also see Table 1).

In 1935, Edward D. Hughes and Sir Christopher Ingold studied nucleophilic substitution reactions of alkyl halides and related compounds. They proposed that there were two main mechanisms at work, both of them competing with each other. The two main mechanisms are the SN1 reaction and the SN2 reaction. S stands for chemical substitution, N stands for nucleophilic, and the number represents the kinetic order of the reaction.[4]

In the SN2 reaction, the addition of the nucleophile and the elimination of the leaving group take place simultaneously (i.e. a concerted reaction). SN2 occurs where the central carbon atom is easily accessible to the nucleophile.[5]
 
Figure: the SN2 mechanism, illustrated by the reaction of CH3Cl with Cl-.
In SN2 reactions, several conditions affect the rate of the reaction. First of all, the 2 in SN2 implies that two concentrations affect the rate of reaction: that of the substrate and that of the nucleophile. The rate equation for this reaction is Rate = k[Sub][Nuc]. For an SN2 reaction, an aprotic solvent is best, such as acetone, DMF, or DMSO. Aprotic solvents do not release protons (H+) into solution; if protons were present in an SN2 reaction, they would react with the nucleophile and severely limit the reaction rate. Since this reaction occurs in one step, steric effects drive the reaction speed. In the transition state, the nucleophile is 180 degrees from the leaving group, and the stereochemistry is inverted as the nucleophile bonds to form the product. Also, because the transition state involves partial bonding to both the nucleophile and the leaving group, there is no time for the substrate to rearrange itself: the nucleophile bonds to the same carbon to which the leaving group was attached. A final factor that affects the reaction rate is nucleophilicity; the nucleophile must attack an atom other than a hydrogen.
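The second-order rate law above can be illustrated numerically. This is a generic sketch with an arbitrary rate constant, not data for any specific reaction:

```python
import math

# Illustrative sketch of the second-order SN2 rate law, Rate = k[Sub][Nuc].
# The rate constant k below is arbitrary, not a measured value.
def sn2_rate(k, sub, nuc):
    """Instantaneous SN2 rate from substrate and nucleophile concentrations (mol/L)."""
    return k * sub * nuc

k = 0.02  # L/(mol*s), arbitrary illustrative value
base = sn2_rate(k, sub=0.1, nuc=0.1)

# Doubling either concentration doubles the rate; doubling both quadruples it.
assert math.isclose(sn2_rate(k, 0.2, 0.1), 2 * base)
assert math.isclose(sn2_rate(k, 0.1, 0.2), 2 * base)
assert math.isclose(sn2_rate(k, 0.2, 0.2), 4 * base)
```

The assertions capture the defining kinetic signature of SN2: the rate is first order in each of the substrate and the nucleophile, and second order overall.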

By contrast, the SN1 reaction involves two steps. SN1 reactions tend to be important when the central carbon atom of the substrate is surrounded by bulky groups, both because such groups interfere sterically with the SN2 reaction (discussed above) and because a highly substituted carbon forms a stable carbocation.

Figure: the SN1 reaction mechanism.
As with SN2 reactions, quite a few factors affect the reaction rate of SN1 reactions. Instead of two concentrations affecting the rate, there is only one: that of the substrate. The rate equation is Rate = k[Sub]. Since the rate of a reaction is determined only by its slowest step, the rate at which the leaving group "leaves" determines the speed of the reaction. This means that the better the leaving group, the faster the reaction rate. A general rule for what makes a good leaving group is that the weaker the conjugate base, the better the leaving group. In this case, halides are among the best leaving groups, while groups such as amines, hydride, and alkyl anions are quite poor leaving groups.

Just as SN2 reactions are affected by sterics, SN1 reactions are governed by the bulky groups attached to the carbocation. Since the intermediate actually carries a positive charge, attached alkyl groups help stabilise it through induction and hyperconjugation, distributing the charge. Thus a tertiary carbocation reacts faster than a secondary, which in turn reacts much faster than a primary. It is also because of this carbocation intermediate that the product need not show inversion: the nucleophile can attack from either face, creating a racemic product. It is important to use a protic solvent, such as water or an alcohol, since protic solvents stabilise the carbocation intermediate. It does not matter if the hydrogens from the protic solvent interact with the nucleophile, since the nucleophile is not involved in the rate-determining step.
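The contrast with the SN2 rate law can be sketched in the same way; again the rate constant is arbitrary and purely illustrative:

```python
import math

# Illustrative sketch of the first-order SN1 rate law, Rate = k[Sub].
# The nucleophile concentration does not appear, because the nucleophile
# attacks only after the slow, rate-determining ionisation step.
def sn1_rate(k, sub):
    """Instantaneous SN1 rate from the substrate concentration (mol/L)."""
    return k * sub

k = 0.005  # 1/s, arbitrary illustrative value

# Doubling the substrate concentration doubles the rate;
# changing the nucleophile concentration has no effect on the rate.
assert math.isclose(sn1_rate(k, 0.2), 2 * sn1_rate(k, 0.1))
```

Measuring whether the observed rate responds to the nucleophile concentration is, in practice, one way kineticists distinguish the two mechanisms.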


Table 1. Nucleophilic substitutions on RX (an alkyl halide or equivalent)

Factor            | SN1                                        | SN2                                    | Comments
Kinetics          | Rate = k[RX]                               | Rate = k[RX][Nuc]                      |
Primary alkyl     | Never, unless additional stabilising groups are present | Good, unless a hindered nucleophile is used |
Secondary alkyl   | Moderate                                   | Moderate                               |
Tertiary alkyl    | Excellent                                  | Never                                  | Elimination likely if heated or if a strong base is used
Leaving group     | Important                                  | Important                              | For halogens, I > Br > Cl >> F
Nucleophilicity   | Unimportant                                | Important                              |
Preferred solvent | Polar protic                               | Polar aprotic                          |
Stereochemistry   | Racemisation (partial inversion possible)  | Inversion                              |
Rearrangements    | Common                                     | Rare                                   | Side reaction
Eliminations      | Common, especially with basic nucleophiles | Only with heat and basic nucleophiles  | Side reaction, especially if heated
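The substrate-class trends in the table above can be summarised as a small lookup. This is purely an illustration of the table, not a predictive tool: real mechanism choice also depends on the nucleophile, leaving group, solvent, and temperature.

```python
# Rough lookup encoding the substrate-class trends of Table 1.
# Hypothetical helper for illustration only.
def likely_mechanism(alkyl_class):
    trends = {
        "primary": "SN2 favoured; SN1 essentially never without extra stabilisation",
        "secondary": "either SN1 or SN2 is moderate; conditions decide",
        "tertiary": "SN1 favoured; SN2 blocked sterically; elimination competes",
    }
    return trends[alkyl_class]

print(likely_mechanism("tertiary"))
```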

Reactions

Many reactions in organic chemistry involve this type of mechanism. Common examples include:
R-X → R-H using LiAlH4   (SN2)
R-Br + OH- → R-OH + Br-   (SN2) or
R-Br + H2O → R-OH + HBr   (SN1)
R-Br + OR'- → R-OR' + Br-   (SN2)

Borderline mechanism

An example of a substitution reaction taking place by a so-called borderline mechanism, as originally studied by Hughes and Ingold,[6] is the reaction of 1-phenylethyl chloride with sodium methoxide in methanol.
1-phenylethylchloride methanolysis
The reaction rate is found to be the sum of SN1 and SN2 components, with 61% (3.5 M, 70 °C) taking place by the latter.

Other mechanisms

Besides SN1 and SN2, other mechanisms are known, although they are less common. The SNi mechanism is observed in reactions of thionyl chloride with alcohols, and it is similar to SN1 except that the nucleophile is delivered from the same side as the leaving group.

Nucleophilic substitutions can be accompanied by an allylic rearrangement, as seen in reactions such as the Ferrier rearrangement. This type of mechanism is called an SN1' or SN2' reaction (depending on the kinetics). With allylic halides or sulphonates, for example, the nucleophile may attack at the γ unsaturated carbon in place of the carbon bearing the leaving group. This may be seen in the reaction of 1-chloro-2-butene with sodium hydroxide to give a mixture of 2-buten-1-ol and 3-buten-2-ol:
CH3CH=CH-CH2-Cl → CH3CH=CH-CH2-OH + CH3CH(OH)-CH=CH2
The SN1CB mechanism appears in inorganic chemistry. Competing mechanisms exist.[7][8]

In organometallic chemistry the nucleophilic abstraction reaction occurs with a nucleophilic substitution mechanism.

Unsaturated carbon centres

Nucleophilic substitution via the SN1 or SN2 mechanism does not generally occur with vinyl or aryl halides or related compounds. Under certain conditions nucleophilic substitutions may occur, via other mechanisms such as those described in the nucleophilic aromatic substitution article.

When the substitution occurs at the carbonyl group, the acyl group may undergo nucleophilic acyl substitution. This is the normal mode of substitution with carboxylic acid derivatives such as acyl chlorides, esters and amides.

Operator (computer programming)

From Wikipedia, the free encyclopedia