Sunday, March 28, 2021

Metallurgy

From Wikipedia, the free encyclopedia
 
Smelting, a basic step in obtaining usable quantities of most metals.
 
Casting; pouring molten gold into an ingot.
Gold was processed in La Luz Gold Mine (pictured) near Siuna, Nicaragua, until 1968.

Metallurgy is a domain of materials science and engineering that studies the physical and chemical behavior of metallic elements, their inter-metallic compounds, and their mixtures, which are called alloys. Metallurgy encompasses both the science and the technology of metals; that is, the way in which science is applied to the production of metals, and the engineering of metal components used in products for both consumers and manufacturers. Metallurgy is distinct from the craft of metalworking. Metalworking relies on metallurgy in a similar manner to how medicine relies on medical science for technical advancement. A specialist practitioner of metallurgy is known as a metallurgist.

The science of metallurgy is subdivided into two broad categories: chemical metallurgy and physical metallurgy. Chemical metallurgy is chiefly concerned with the reduction and oxidation of metals, and the chemical performance of metals. Subjects of study in chemical metallurgy include mineral processing, the extraction of metals, thermodynamics, electrochemistry, and chemical degradation (corrosion). In contrast, physical metallurgy focuses on the mechanical properties of metals, the physical properties of metals, and the physical performance of metals. Topics studied in physical metallurgy include crystallography, material characterization, mechanical metallurgy, phase transformations, and failure mechanisms.

Historically, metallurgy has predominately focused on the production of metals. Metal production begins with the processing of ores to extract the metal, and includes the mixture of metals to make alloys. Metal alloys are often a blend of at least two different metallic elements. However, non-metallic elements are often added to alloys in order to achieve properties suitable for an application. The study of metal production is subdivided into ferrous metallurgy (also known as black metallurgy) and non-ferrous metallurgy (also known as colored metallurgy). Ferrous metallurgy involves processes and alloys based on iron, while non-ferrous metallurgy involves processes and alloys based on other metals. The production of ferrous metals accounts for 95% of world metal production.

Modern metallurgists work in both emerging and traditional areas as part of an interdisciplinary team alongside material scientists, and other engineers. Some traditional areas include mineral processing, metal production, heat treatment, failure analysis, and the joining of metals (including welding, brazing, and soldering). Emerging areas for metallurgists include nanotechnology, superconductors, composites, biomedical materials, electronic materials (semiconductors) and surface engineering.

Etymology and pronunciation

Metallurgy derives from the Ancient Greek μεταλλουργός, metallourgós, "worker in metal", from μέταλλον, métallon, "mine, metal", + ἔργον, érgon, "work". The word was originally an alchemist's term for the extraction of metals from minerals, the ending -urgy signifying a process, especially manufacturing: it was discussed in this sense in the 1797 Encyclopædia Britannica. In the late 19th century, it was extended to the more general scientific study of metals, alloys, and related processes. In English, the /mɛˈtælərdʒi/ pronunciation is the more common one in the UK and Commonwealth. The /ˈmɛtəlɜːrdʒi/ pronunciation is the more common one in the US and is the first-listed variant in various American dictionaries (e.g., Merriam-Webster Collegiate, American Heritage).

History

The earliest recorded metal employed by humans appears to be gold, which can be found free or "native". Small amounts of natural gold have been found in Spanish caves dating to the late Paleolithic period, 40,000 BC. Silver, copper, tin and meteoric iron can also be found in native form, allowing a limited amount of metalworking in early cultures. Egyptian weapons made from meteoric iron in about 3,000 BC were highly prized as "daggers from heaven". Certain metals, notably tin, lead, and at a higher temperature, copper, can be recovered from their ores by simply heating the rocks in a fire or blast furnace, a process known as smelting. The first evidence of this extractive metallurgy, dating from the 5th and 6th millennia BC, has been found at archaeological sites in Majdanpek, Jarmovac near Priboj and Pločnik, in present-day Serbia. To date, the earliest evidence of copper smelting is found at the Belovode site near Plocnik. This site produced a copper axe from 5,500 BC, belonging to the Vinča culture.

The earliest use of lead is documented from the late neolithic settlement of Yarim Tepe in Iraq:

"The earliest lead (Pb) finds in the ancient Near East are a 6th millennium BC bangle from Yarim Tepe in northern Iraq and a slightly later conical lead piece from Halaf period Arpachiyah, near Mosul. As native lead is extremely rare, such artifacts raise the possibility that lead smelting may have begun even before copper smelting."

Copper smelting is also documented at this site at about the same time period (soon after 6,000 BC), although the use of lead seems to precede copper smelting. Early metallurgy is also documented at the nearby site of Tell Maghzaliyah, which appears to be even older and completely lacks pottery. The Balkans were the site of major Neolithic cultures, including Butmir, Vinča, Varna, Karanovo, and Hamangia.

Artefacts from the Varna necropolis, Bulgaria
 
Gold artefacts from the Varna necropolis, Varna culture
 
Gold bulls, Varna culture
 
Elite burial at the Varna necropolis, original find photo (detail)

The Varna Necropolis, Bulgaria, is a burial site in the western industrial zone of Varna (approximately 4 km from the city centre), internationally considered one of the key archaeological sites in world prehistory. The oldest gold treasure in the world, dating from 4,600 BC to 4,200 BC, was discovered at the site. A gold piece dating from 4,500 BC, recently found in Durankulak near Varna, is another important example. Other signs of early metals are found from the third millennium BC in places like Palmela (Portugal), Los Millares (Spain), and Stonehenge (United Kingdom). However, the ultimate beginnings cannot be clearly ascertained, and new discoveries are ongoing.

Mining areas of the ancient Middle East. Box colors: arsenic in brown, copper in red, tin in grey, iron in reddish brown, gold in yellow, silver in white and lead in black. The yellow area stands for arsenic bronze, while the grey area stands for tin bronze.

In the Near East, about 3,500 BC, it was discovered that by combining copper and tin, a superior metal could be made, an alloy called bronze. This represented a major technological shift known as the Bronze Age.

The extraction of iron from its ore into a workable metal is much more difficult than for copper or tin. The process appears to have been invented by the Hittites in about 1200 BC, beginning the Iron Age. The secret of extracting and working iron was a key factor in the success of the Philistines.

Historical developments in ferrous metallurgy can be found in a wide variety of past cultures and civilizations. These include the ancient and medieval kingdoms and empires of the Middle East and Near East, ancient Iran, ancient Egypt, ancient Nubia, and Anatolia (Turkey), ancient Nok, Carthage, the Greeks and Romans of ancient Europe, medieval Europe, ancient and medieval China, ancient and medieval India, and ancient and medieval Japan, amongst others. Many applications, practices, and devices associated with or involved in metallurgy were established in ancient China, such as the innovation of the blast furnace, cast iron, hydraulic-powered trip hammers, and double-acting piston bellows.

A 16th-century book by Georg Agricola, De re metallica, describes the highly developed and complex processes of mining metal ores, metal extraction, and the metallurgy of the time. Agricola has been described as the "father of metallurgy".

Extraction

Furnace bellows operated by waterwheels, Yuan Dynasty, China.
 
Aluminium plant in Žiar nad Hronom (Central Slovakia)

Extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. In order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electrolytically. Extractive metallurgists are interested in three primary streams: feed, concentrate (metal oxide/sulphide) and tailings (waste).

After mining, large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough that each particle is either mostly valuable or mostly waste. Concentrating the particles of value in a form that supports separation enables the desired metal to be removed from waste products.

Mining may not be necessary if the ore body and physical environment are conducive to leaching. Leaching dissolves minerals in an ore body and results in an enriched solution. The solution is collected and processed to extract valuable metals. Ore bodies often contain more than one valuable metal.

Tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. Additionally, a concentrate may contain more than one valuable metal. That concentrate would then be processed to separate the valuable metals into individual constituents.
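As a rough illustration of how these three streams relate, the sketch below applies the standard two-product formula of mineral processing, which ties the assays of feed, concentrate and tailings to mass yield and metal recovery; the assay values used are hypothetical.

    # Two-product formula: relates feed, concentrate and tailings streams
    # by their assays (metal content). Illustrative sketch only; the
    # symbols f, c, t and the example numbers are assumptions.

    def mass_yield(f: float, c: float, t: float) -> float:
        """Fraction of the feed mass reporting to the concentrate."""
        return (f - t) / (c - t)

    def recovery(f: float, c: float, t: float) -> float:
        """Fraction of the valuable metal reporting to the concentrate."""
        return (c * (f - t)) / (f * (c - t))

    if __name__ == "__main__":
        f, c, t = 2.0, 25.0, 0.2   # hypothetical copper assays in %
        print(f"yield:    {mass_yield(f, c, t):.3f}")
        print(f"recovery: {recovery(f, c, t):.3f}")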

Metal and its alloys

Casting bronze

Common engineering metals include aluminium, chromium, copper, iron, magnesium, nickel, titanium, zinc, and silicon. These metals are most often used as alloys with the noted exception of silicon.

Much effort has been placed on understanding the iron–carbon alloy system, which includes steels and cast irons. Plain carbon steels (those that contain essentially only carbon as an alloying element) are used in low-cost, high-strength applications where neither weight nor corrosion is a major concern. Cast irons, including ductile iron, are also part of the iron–carbon system. Iron–manganese–chromium alloys (Hadfield-type steels) are also used in non-magnetic applications such as directional drilling.

Stainless steels, particularly austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used where resistance to corrosion is important.

Aluminium alloys and magnesium alloys are commonly used when a lightweight, strong part is required, such as in automotive and aerospace applications.

Copper-nickel alloys (such as Monel) are used in highly corrosive environments and for non-magnetic applications.

Nickel-based superalloys like Inconel are used in high-temperature applications such as gas turbines, turbochargers, pressure vessels, and heat exchangers.

For extremely high temperatures, single crystal alloys are used to minimize creep. In modern electronics, high purity single crystal silicon is essential for metal-oxide-silicon transistors (MOS) and integrated circuits.

Production

In production engineering, metallurgy is concerned with the production of metallic components for use in consumer or engineering products. This involves the production of alloys, and the shaping, heat treatment, and surface treatment of the product.

Determining the hardness of the metal using the Rockwell, Vickers, and Brinell hardness scales is a commonly used practice that helps better understand the metal's elasticity and plasticity for different applications and production processes.
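As an illustration of how such a hardness number is obtained, the sketch below computes a Vickers hardness value from an indentation load and the measured indent diagonal; the load and diagonal used are hypothetical examples.

    import math

    # Vickers hardness from indentation measurements, a minimal sketch:
    # HV = 2 * F * sin(136 deg / 2) / d^2  ~=  1.8544 * F / d^2
    # with F the applied load in kilograms-force and d the mean indent
    # diagonal in millimetres. The example numbers are hypothetical.

    def vickers_hardness(load_kgf: float, mean_diagonal_mm: float) -> float:
        return 2.0 * load_kgf * math.sin(math.radians(136.0 / 2.0)) / mean_diagonal_mm ** 2

    if __name__ == "__main__":
        hv = vickers_hardness(load_kgf=10.0, mean_diagonal_mm=0.30)
        print(f"HV ~ {hv:.0f}")   # roughly 206 for this hypothetical indent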

The task of the metallurgist is to achieve balance between material properties, such as cost, weight, strength, toughness, hardness, corrosion, fatigue resistance and performance in temperature extremes. To achieve this goal, the operating environment must be carefully considered.

In a saltwater environment, most ferrous metals and some non-ferrous alloys corrode quickly. Metals exposed to cold or cryogenic conditions may undergo a ductile to brittle transition and lose their toughness, becoming more brittle and prone to cracking. Metals under continual cyclic loading can suffer from metal fatigue. Metals under constant stress at elevated temperatures can creep.

Metalworking processes

Metals are shaped by processes such as:

  1. Casting – molten metal is poured into a shaped mold.
  2. Forging – a red-hot billet is hammered into shape.
  3. Rolling – a billet is passed through successively narrower rollers to create a sheet.
  4. Extrusion – a hot and malleable metal is forced under pressure through a die, which shapes it before it cools.
  6. Machining – lathes, milling machines and drills cut the cold metal to shape.
  6. Sintering – a powdered metal is heated in a non-oxidizing environment after being compressed into a die.
  7. Fabrication – sheets of metal are cut with guillotines or gas cutters and bent and welded into structural shape.
  8. Laser cladding – metallic powder is blown through a movable laser beam (e.g. mounted on a NC 5-axis machine). The resulting melted metal reaches a substrate to form a melt pool. By moving the laser head, it is possible to stack the tracks and build up a three-dimensional piece.
  9. 3D printing – sintering or melting amorphous powder metal in a 3D space to make an object of any shape.

Cold-working processes, in which the product's shape is altered by rolling, fabrication or other processes while the product is cold, can increase the strength of the product by a process called work hardening. Work hardening creates microscopic defects in the metal, which resist further changes of shape.

Various forms of casting exist in industry and academia. These include sand casting, investment casting (also called the lost wax process), die casting, and continuous casting. Each of these forms has advantages for certain metals and applications considering factors like magnetism and corrosion.

Heat treatment

Metals can be heat-treated to alter the properties of strength, ductility, toughness, hardness and resistance to corrosion. Common heat treatment processes include annealing, precipitation strengthening, quenching, and tempering.

Annealing softens the metal by heating it and then allowing it to cool very slowly. This relieves stresses in the metal and makes the grain structure large and soft-edged, so that when the metal is hit or stressed it dents or perhaps bends rather than breaking; it is also easier to sand, grind, or cut annealed metal.

Quenching is the process of cooling metal very quickly after heating, thus "freezing" the metal's molecules in the very hard martensite form, which makes the metal harder.

Tempering relieves stresses in the metal that were caused by the hardening process; tempering makes the metal less hard while making it better able to sustain impacts without breaking.

Often, mechanical and thermal treatments are combined in what are known as thermo-mechanical treatments for better properties and more efficient processing of materials. These processes are common to high-alloy special steels, superalloys and titanium alloys.

Plating

Electroplating is a chemical surface-treatment technique. It involves bonding a thin layer of another metal, such as gold, silver, chromium or zinc, to the surface of the product. This is done by selecting an electrolyte solution containing the coating material, which is the metal that is going to coat the workpiece (gold, silver, zinc). There need to be two electrodes of different materials: one of the same material as the coating material and one that receives the coating material. The two electrodes are electrically charged, and the coating material is bonded to the workpiece. Electroplating is used to reduce corrosion as well as to improve the product's aesthetic appearance. It is also used to make inexpensive metals look like more expensive ones (gold, silver).
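As an illustration of the underlying chemistry, the sketch below estimates the mass of metal deposited during plating from Faraday's law of electrolysis; the current, plating time and efficiency are assumed example values.

    # Faraday's law of electrolysis, as an illustration of how the amount of
    # plated metal can be estimated: m = (I * t * M) / (n * F).
    # All example numbers (current, time, efficiency) are assumptions.

    FARADAY = 96485.0  # coulombs per mole of electrons

    def plated_mass_grams(current_a: float, time_s: float,
                          molar_mass_g_mol: float, electrons_per_ion: int,
                          current_efficiency: float = 1.0) -> float:
        return (current_efficiency * current_a * time_s * molar_mass_g_mol
                / (electrons_per_ion * FARADAY))

    if __name__ == "__main__":
        # Zinc plating (Zn2+ -> Zn, M ~ 65.38 g/mol) at 2 A for 30 minutes.
        mass = plated_mass_grams(2.0, 30 * 60, 65.38, 2)
        print(f"deposited: {mass:.2f} g of zinc")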

Shot peening

Shot peening is a cold working process used to finish metal parts. In the process of shot peening, small round shot is blasted against the surface of the part to be finished. This process is used to prolong the product life of the part, prevent stress corrosion failures, and also prevent fatigue. The shot leaves small dimples on the surface like a peen hammer does, which cause compression stress under the dimple. As the shot media strikes the material over and over, it forms many overlapping dimples throughout the piece being treated. The compression stress in the surface of the material strengthens the part and makes it more resistant to fatigue failure, stress failures, corrosion failure, and cracking.

Thermal spraying

Thermal spraying techniques are another popular finishing option, and often have better high-temperature properties than electroplated coatings. Thermal spraying, also known as spray welding, is an industrial coating process that consists of a heat source (flame or other) and a coating material in powder or wire form, which is melted and then sprayed onto the surface of the material being treated at high velocity. The spray treating process is known by many different names, such as HVOF (High Velocity Oxygen Fuel), plasma spray, flame spray, arc spray and metalizing.

Metallography allows the metallurgist to study the microstructure of metals.

Characterization

Metallurgists study the microscopic and macroscopic structure of metals using metallography, a technique invented by Henry Clifton Sorby.

In metallography, an alloy of interest is ground flat and polished to a mirror finish. The sample can then be etched to reveal the microstructure and macrostructure of the metal. The sample is then examined in an optical or electron microscope, and the image contrast provides details on the composition, mechanical properties, and processing history.

Crystallography, often using diffraction of x-rays or electrons, is another valuable tool available to the modern metallurgist. Crystallography allows identification of unknown materials and reveals the crystal structure of the sample. Quantitative crystallography can be used to calculate the amount of phases present as well as the degree of strain to which a sample has been subjected.
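As a small worked example of diffraction-based analysis, the sketch below applies Bragg's law to convert a measured diffraction angle into a lattice plane spacing; the wavelength and angle shown are illustrative values.

    import math

    # Bragg's law, n * lambda = 2 * d * sin(theta), used in diffraction-based
    # crystallography to convert a measured diffraction angle into a lattice
    # plane spacing d. The wavelength and angle below are illustrative values
    # (Cu K-alpha radiation, ~1.5406 angstroms).

    def d_spacing(wavelength_angstrom: float, two_theta_deg: float, order: int = 1) -> float:
        theta = math.radians(two_theta_deg / 2.0)
        return order * wavelength_angstrom / (2.0 * math.sin(theta))

    if __name__ == "__main__":
        d = d_spacing(1.5406, two_theta_deg=44.7)  # e.g. a strong ferrite reflection
        print(f"d ~ {d:.3f} angstroms")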

Laboratory robotics

From Wikipedia, the free encyclopedia
 
Laboratory robots doing acid digestion chemical analysis.

Laboratory robotics is the use of robots in biology or chemistry labs. For example, pharmaceutical companies employ robots to move biological or chemical samples around to synthesize novel chemical entities or to test the pharmaceutical value of existing chemical matter. Advanced laboratory robotics can be used to completely automate the process of science, as in the Robot Scientist project.

Laboratory processes are suited to robotic automation because they are composed of repetitive movements (e.g. pick/place, liquid and solid additions, heating/cooling, mixing, shaking, testing). Many laboratory robots are commonly referred to as autosamplers, as their main task is to provide continuous samples for analytical devices.

History

The first compact computer controlled robotic arms appeared in the early 1980s, and have continuously been employed in laboratories since then. These robots can be programmed to perform many different tasks, including sample preparation and handling.

Also in the early 1980s, a group led by Dr. Masahide Sasaki of Kochi Medical School introduced the first fully automated laboratory, employing several robotic arms working together with conveyor belts and automated analyzers. The success of Dr. Sasaki's pioneering efforts led other groups around the world to adopt the approach of Total Laboratory Automation (TLA).

Despite the undeniable success of TLA, its multimillion-dollar cost prevented most laboratories from adopting it. The lack of communication between different devices also slowed the development of automation solutions for different applications, while contributing to keeping costs high. The industry therefore attempted several times to develop standards that different vendors would follow in order to enable communication between their devices. However, the success of this approach has been only partial, and many laboratories still do not employ robots for many tasks because of their high cost.

Recently, a different solution to the problem has become available, enabling the use of inexpensive devices, including open-source hardware, to perform many different tasks in the laboratory. This solution is the use of scripting languages that can control mouse clicks and keyboard inputs, such as AutoIt. In this way, it is possible to integrate any device from any manufacturer, as long as it is controlled by a computer, which is often the case.
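A minimal sketch of this scripting approach is shown below, written in Python with the pyautogui library rather than AutoIt itself; the screen coordinates, field layout and instrument software are entirely hypothetical.

    import time
    import pyautogui  # cross-platform mouse/keyboard scripting (assumed installed)

    # Minimal sketch of the AutoIt-style approach described above: driving an
    # instrument's existing desktop software by scripting clicks and keystrokes.
    # The screen coordinates and field positions below are entirely hypothetical.

    SAMPLE_ID_FIELD = (400, 300)   # position of the sample ID text box
    START_BUTTON = (850, 600)      # position of the "start run" button

    def run_sample(sample_id: str, run_time_s: float) -> None:
        pyautogui.click(*SAMPLE_ID_FIELD)       # focus the sample ID field
        pyautogui.write(sample_id, interval=0.05)
        pyautogui.click(*START_BUTTON)          # start the measurement
        time.sleep(run_time_s)                  # wait for the instrument to finish

    if __name__ == "__main__":
        for i in range(3):
            run_sample(f"WATER-{i:03d}", run_time_s=120)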

Another important development in robotics with potential implications for laboratories is the arrival of robots that do not demand special training for their programming, such as Baxter the robot.

Applications

Low-cost laboratory robotics

Low-cost robotic arm used as an autosampler.

The high cost of many laboratory robots has inhibited their adoption. However, there are now many robotic devices with very low cost, and these can be employed to do some jobs in a laboratory. For example, a low-cost robotic arm has been employed to perform several different kinds of water analysis, without loss of performance compared to much more expensive autosamplers. Alternatively, the autosampler of one device can be used with another device, thus avoiding the need to purchase a different autosampler or hire a technician to do the job. The key aspects to achieving low cost in laboratory robotics are 1) the use of low-cost robots, which are becoming more and more common, and 2) the use of scripting, which enables compatibility between robots and other analytical equipment.

Robotic, mobile laboratory operators

In July 2020, scientists reported the development of a mobile robot chemist and demonstrated that it can assist in experimental searches. According to the scientists, their strategy was to automate the researcher rather than the instruments – freeing up time for the human researchers to think creatively – and it could identify photocatalyst mixtures for hydrogen production from water that were six times more active than the initial formulations. The modular robot can operate laboratory instruments, work nearly around the clock, and autonomously make decisions on its next actions depending on experimental results.

Biological Laboratory Robotics

An example of pipettes and microplates manipulated by an anthropomorphic robot (Andrew Alliance)

Biological and chemical samples, in either liquid or solid state, are stored in vials, plates or tubes. Often, they need to be frozen and/or sealed to avoid contamination or to retain their biological and/or chemical properties. Specifically, the life science industry has standardized on a plate format, known as the microtiter plate, to store such samples.

The microtiter plate standard was formalized by the Society for Biomolecular Screening in 1996. It typically has 96, 384 or even 1536 sample wells arranged in a 2:3 rectangular matrix. The standard governs well dimensions (e.g. diameter, spacing and depth) as well as plate properties (e.g. dimensions and rigidity).
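As a small illustration of working with this plate format programmatically, the sketch below converts a linear well index into the conventional row-letter/column-number label for 96-, 384- and 1536-well layouts; the helper itself is an illustrative assumption, not part of the SBS standard.

    import string

    # Helper for SBS-format microplates: 96, 384 and 1536 wells arranged in a
    # 2:3 rectangle (8x12, 16x24, 32x48). Rows are lettered A, B, C, ... and
    # columns numbered from 1, e.g. "A1" or "P24". Sketch only.

    PLATE_SHAPES = {96: (8, 12), 384: (16, 24), 1536: (32, 48)}

    def row_label(row: int) -> str:
        # Rows beyond 'Z' (needed for 1536-well plates) continue as AA, AB, ...
        letters = string.ascii_uppercase
        return letters[row] if row < 26 else "A" + letters[row - 26]

    def well_name(wells: int, index: int) -> str:
        rows, cols = PLATE_SHAPES[wells]
        row, col = divmod(index, cols)
        return f"{row_label(row)}{col + 1}"

    if __name__ == "__main__":
        print(well_name(96, 0))       # A1
        print(well_name(96, 95))      # H12
        print(well_name(384, 383))    # P24
        print(well_name(1536, 1535))  # AF48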

A number of companies have developed robots specifically to handle SBS microplates. Such robots may be liquid handlers, which aspirate or dispense liquid samples from and to these plates, or "plate movers", which transport them between instruments.

Other companies have pushed integration even further: on top of interfacing to the specific consumables used in biology, some robots (Andrew by Andrew Alliance, see picture) have been designed with the capability of interfacing to the volumetric pipettes used by biologists and technical staff. Essentially, all the manual activity of liquid handling can be performed automatically, allowing humans to spend their time on more conceptual activities.

Instrument companies have designed plate readers which can detect specific biological, chemical or physical events in samples stored in these plates. These readers typically use optical and/or computer vision techniques to evaluate the contents of the microtiter plate wells.

One of the first applications of robotics in biology was peptide and oligonucleotide synthesis. One early example is the polymerase chain reaction (PCR), which is able to amplify DNA strands using a thermal cycler to micromanage DNA synthesis by adjusting temperature according to a pre-made computer program. Since then, automated synthesis has been applied to organic chemistry and expanded into three categories: reaction-block systems, robot-arm systems, and non-robotic fluidic systems. The primary objective of any automated workbench is high-throughput processing and cost reduction. This allows a synthetic laboratory to operate with fewer people working more efficiently.
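As an illustration of what such a pre-made thermal cycling program amounts to, the sketch below represents a generic temperature profile and sums its run time; the step temperatures are typical textbook values and the representation is purely hypothetical, not any instrument's actual interface.

    # A thermal cycler executes a pre-programmed temperature profile. This is a
    # purely illustrative representation of such a program; the step names and
    # temperatures are typical textbook values, not any instrument's actual API.

    PCR_PROGRAM = [("initial denaturation", 95.0, 120)]   # (step, deg C, seconds)
    for _ in range(30):                                   # 30 amplification cycles
        PCR_PROGRAM += [("denature", 95.0, 30),
                        ("anneal",   55.0, 30),
                        ("extend",   72.0, 60)]
    PCR_PROGRAM += [("final extension", 72.0, 300), ("hold", 4.0, 0)]

    total_minutes = sum(seconds for _, _, seconds in PCR_PROGRAM) / 60.0
    print(f"{len(PCR_PROGRAM)} steps, about {total_minutes:.0f} minutes")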

Pharmaceutical Applications

One major area where automated synthesis has been applied is structure determination in pharmaceutical research. Processes such as NMR and HPLC-MS can now have sample preparation done by a robotic arm. Additionally, structural protein analysis can be done automatically using a combination of NMR and X-ray crystallography. Crystallization often takes hundreds to thousands of experiments to create a protein crystal suitable for X-ray crystallography. An automated micropipette machine can allow nearly a million different crystals to be created at once and analyzed via X-ray crystallography.

Combinatorial Library Synthesis

Robotics has applications in combinatorial chemistry, which has had a great impact on the pharmaceutical industry. The use of robotics has allowed for the use of much smaller reagent quantities and a massive expansion of chemical libraries. The "parallel synthesis" method can be improved upon with automation. The main disadvantage of parallel synthesis is the amount of time it takes to develop a library, so automation is typically applied to make the process more efficient.

The main types of automation are classified by the type of solid-phase substrate, the methods for adding and removing reagents, and the design of the reaction chambers. Polymer resins may be used as a substrate for the solid phase. Parallel synthesis is not a true combinatorial method in the sense of "split-mix", where a peptide compound is split into different groups and reacted with different compounds, then mixed back together, split into more groups, and each group reacted with a different compound. Instead, the parallel synthesis method does not mix, but reacts different groups of the same peptide with different compounds, and allows the individual compound on each solid support to be identified. A popular implementation is the reaction block system, due to its relatively low cost and higher output of new compounds compared to other parallel synthesis methods. Parallel synthesis was developed by Mario Geysen and his colleagues; it is not a true type of combinatorial synthesis, but it can be incorporated into a combinatorial synthesis. Geysen's group synthesized 96 peptides on plastic pins coated with a solid support for solid-phase peptide synthesis. This method uses a rectangular block, moved by a robot, so that reagents can be pipetted by a robotic pipetting system. The block is separated into wells in which the individual reactions take place. The compounds are later cleaved from the solid phase of the well for further analysis. Another method is the closed reactor system, which uses a completely closed reaction vessel with a series of fixed connections for dispensing. Though closed reactor systems produce fewer compounds than other methods, their main advantage is control over the reagents and reaction conditions. Early closed reaction systems were developed for peptide synthesis, which required variations in temperature and a diverse range of reagents. Some closed reactor system robots have a temperature range of 200 °C and handle over 150 reagents.

Purification

Simulated distillation, a type of gas chromatography testing method used in the petroleum industry, can be automated via robotics. An older method used a system called ORCA (Optimized Robot for Chemical Analysis) for the analysis of petroleum samples by simulated distillation (SIMDIS). ORCA allowed for shorter analysis times and reduced the maximum temperature needed to elute compounds. One major advantage of automating purification is the scale at which separations can be done. Using microprocessors, ion-exchange separation can be conducted on a nanoliter scale in a short period of time.

Robotics has been implemented in liquid-liquid extraction (LLE) to streamline the preparation of biological samples using 96-well plates. This is an alternative to solid-phase extraction and protein precipitation, with the advantage of being more reproducible, and robotic assistance has made LLE comparable in speed to solid-phase extraction. The robots used for LLE can perform an entire extraction with quantities on the microliter scale in as little as ten minutes.

Advantages and Disadvantages

Advantages

One of the advantages of automation is faster processing, though it is not necessarily faster than a human operator. Repeatability and reproducibility are improved, as automated systems are less likely to have variances in reagent quantities and in reaction conditions. Productivity is typically increased, since human constraints, such as time constraints, are no longer a factor. Efficiency is generally improved, as robots can work continuously and reduce the amount of reagents used to perform a reaction; there is also a reduction in material waste. Automation can also establish safer working environments, since hazardous compounds do not have to be handled. Additionally, automation allows staff to focus on other tasks that are not repetitive.

Disadvantages

Typically, a single synthesis or sample assessment is expensive to set up, and start-up costs for automation can be high (but see "Low-cost laboratory robotics" above). Many techniques have not yet been developed for automation. Additionally, there is difficulty automating tasks where visual analysis, recognition, or comparison is required, such as color changes; this also limits the analysis to the available sensory inputs. One potential disadvantage is an increase in job losses, as automation may replace staff members who do tasks easily replicated by a robot. Some systems require the use of programming languages such as C++ or Visual Basic to run more complicated tasks.


Laboratory automation

From Wikipedia, the free encyclopedia
Automated laboratory equipment

Laboratory automation is a multi-disciplinary strategy to research, develop, optimize and capitalize on technologies in the laboratory that enable new and improved processes. Laboratory automation professionals are academic, commercial and government researchers, scientists and engineers who conduct research and develop new technologies to increase productivity, elevate experimental data quality, reduce lab process cycle times, or enable experimentation that otherwise would be impossible.

The most widely known application of laboratory automation technology is laboratory robotics. More generally, the field of laboratory automation comprises many different automated laboratory instruments, devices (the most common being autosamplers), software algorithms, and methodologies used to enable, expedite and increase the efficiency and effectiveness of scientific research in laboratories.

The application of technology in today's laboratories is required to achieve timely progress and remain competitive. Laboratories devoted to activities such as high-throughput screening, combinatorial chemistry, automated clinical and analytical testing, diagnostics, large-scale biorepositories, and many others, would not exist without advancements in laboratory automation.

An autosampler for liquid or gaseous samples based on a microsyringe

Some universities offer entire programs that focus on lab technologies. For example, Indiana University-Purdue University at Indianapolis offers a graduate program devoted to Laboratory Informatics. Also, the Keck Graduate Institute in California offers a graduate degree with an emphasis on development of assays, instrumentation and data analysis tools required for clinical diagnostics, high-throughput screening, genotyping, microarray technologies, proteomics, imaging and other applications.

History

At least since 1875, there have been reports of automated devices for scientific investigation. These first devices were mostly built by scientists themselves in order to solve problems in the laboratory. After the Second World War, companies started to provide automated equipment of greater and greater complexity.

Automation steadily spread in laboratories through the 20th century, but then a revolution took place: in the early 1980s, the first fully automated laboratory was opened by Dr. Masahide Sasaki. In 1993, Dr. Rod Markin at the University of Nebraska Medical Center created one of the world's first clinical automated laboratory management systems. In the mid-1990s, he chaired a standards group called the Clinical Testing Automation Standards Steering Committee (CTASSC) of the American Association for Clinical Chemistry, which later evolved into an area committee of the Clinical and Laboratory Standards Institute. In 2004, the National Institutes of Health (NIH) and more than 300 nationally recognized leaders in academia, industry, government, and the public completed the NIH Roadmap to accelerate medical discovery to improve health. The NIH Roadmap clearly identifies technology development as a mission critical factor in the Molecular Libraries and Imaging Implementation Group (see the first theme – New Pathways to Discovery – at https://web.archive.org/web/20100611171315/http://nihroadmap.nih.gov/).

Despite the success of Dr. Sasaki's laboratory and others of its kind, the multi-million dollar cost of such laboratories has prevented adoption by smaller groups. This is made all the more difficult because devices made by different manufacturers often cannot communicate with each other. However, recent advances based on the use of scripting languages like AutoIt have made it possible to integrate equipment from different manufacturers. Using this approach, many low-cost electronic devices, including open-source devices, become compatible with common laboratory instruments.

Some startups such as Emerald Cloud Lab and Strateos provide on-demand and remote laboratory access on a commercial scale. A 2017 study indicates that these commercial-scale, fully integrated automated laboratories can improve reproducibility and transparency in basic biomedical experiments, and that over nine in ten biomedical papers use methods currently available through these groups.

Low-cost laboratory automation

A large obstacle to the implementation of automation in laboratories has been its high cost. Many laboratory instruments are very expensive. This is justifiable in many cases, as such equipment can perform very specific tasks employing cutting-edge technology. However, there are devices employed in the laboratory that are not highly technological but are still very expensive. This is the case for many automated devices, which perform tasks that could easily be done by simple and low-cost devices like simple robotic arms, universal (open-source) electronic modules, or 3D printers.

So far, using such low-cost devices together with laboratory equipment has been considered very difficult. However, it has been demonstrated that such low-cost devices can substitute for the standard machines used in the laboratory without problems. It can be anticipated that more laboratories will take advantage of this new reality, as low-cost automation is very attractive to laboratories.

A technology that enables the integration of machines regardless of brand is scripting, more specifically, scripting involving the control of mouse clicks and keyboard entries, like AutoIt. By timing clicks and keyboard inputs, the different software interfaces controlling different devices can be perfectly synchronized.

Virtual screening

From Wikipedia, the free encyclopedia

Figure 1. Flow Chart of Virtual Screening

Virtual screening (VS) is a computational technique used in drug discovery to search libraries of small molecules in order to identify those structures which are most likely to bind to a drug target, typically a protein receptor or enzyme.

Virtual screening has been defined as "automatically evaluating very large libraries of compounds" using computer programs. As this definition suggests, VS has largely been a numbers game focusing on how the enormous chemical space of over 10^60 conceivable compounds can be filtered to a manageable number that can be synthesized, purchased, and tested. Although searching the entire chemical universe may be a theoretically interesting problem, more practical VS scenarios focus on designing and optimizing targeted combinatorial libraries and enriching libraries of available compounds from in-house compound repositories or vendor offerings. As the accuracy of the method has increased, virtual screening has become an integral part of the drug discovery process. Virtual screening can be used to select in-house database compounds for screening, choose compounds that can be purchased externally, and choose which compound should be synthesized next.

Methods

There are two broad categories of screening techniques: ligand-based and structure-based. The remainder of this page follows the structure of Figure 1, the flow chart of virtual screening.

Ligand-based

Given a set of structurally diverse ligands that bind to a receptor, a model of the receptor can be built by exploiting the collective information contained in such a set of ligands. These are known as pharmacophore models. A candidate ligand can then be compared to the pharmacophore model to determine whether it is compatible with it and therefore likely to bind.

Another approach to ligand-based virtual screening is to use 2D chemical similarity analysis methods to scan a database of molecules against one or more active ligand structures.

A popular approach to ligand-based virtual screening is based on searching for molecules with a shape similar to that of known actives, as such molecules will fit the target's binding site and hence are likely to bind the target. There are a number of prospective applications of this class of techniques in the literature. Pharmacophoric extensions of these 3D methods are also freely available as web servers.

Structure-based

Structure-based virtual screening involves docking of candidate ligands into a protein target followed by applying a scoring function to estimate the likelihood that the ligand will bind to the protein with high affinity. Webservers oriented to prospective virtual screening are available to all.

Hybrid methods

Hybrid methods that rely on both structural and ligand similarity have also been developed to overcome the limitations of traditional VLS approaches. These methodologies utilize evolution-based ligand-binding information to predict small-molecule binders and can employ both global structural similarity and pocket similarity. A global structural similarity based approach employs either an experimental structure or a predicted protein model to find structural similarity with proteins in the PDB holo-template library. Upon detecting significant structural similarity, a 2D fingerprint-based Tanimoto coefficient metric is applied to screen for small molecules that are similar to ligands extracted from the selected holo PDB templates. The predictions from this method have been experimentally assessed and show good enrichment in identifying active small molecules.
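As a small illustration of the fingerprint comparison step, the sketch below computes Tanimoto coefficients between a query fingerprint and a few library fingerprints and ranks the library by similarity; the fingerprints shown are made-up bit positions rather than real toolkit output.

    # Tanimoto (Jaccard) coefficient between two binary 2D fingerprints, the
    # similarity metric referred to above. Fingerprints are represented here
    # simply as sets of "on" bit positions; real fingerprints (e.g. Morgan/ECFP
    # bit vectors) would come from a cheminformatics toolkit.

    def tanimoto(fp_a: set[int], fp_b: set[int]) -> float:
        common = len(fp_a & fp_b)
        return common / (len(fp_a) + len(fp_b) - common)

    if __name__ == "__main__":
        query = {1, 5, 9, 42, 77, 100}                 # hypothetical bit positions
        library = {
            "cmpd-001": {1, 5, 9, 42, 80},
            "cmpd-002": {2, 6, 33},
            "cmpd-003": {1, 5, 9, 42, 77, 101},
        }
        ranked = sorted(library.items(),
                        key=lambda kv: tanimoto(query, kv[1]), reverse=True)
        for name, fp in ranked:
            print(f"{name}: {tanimoto(query, fp):.2f}")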

The method described above depends on global structural similarity and is not capable of a priori selecting a particular ligand-binding site in the protein of interest. Further, since these methods rely on 2D similarity assessment for ligands, they are not capable of recognizing stereochemical similarity of small molecules that are substantially different but demonstrate geometric shape similarity. To address these concerns, a new pocket-centric approach, PoLi, capable of targeting specific binding pockets in holo-protein templates, was developed and experimentally assessed.

Computing Infrastructure

The computation of pair-wise interactions between atoms, which is a prerequisite for the operation of many virtual screening programs, has computational complexity O(N^2), where N is the number of atoms in the system. Because of this quadratic scaling with respect to the number of atoms, the computing infrastructure may vary from a laptop computer for a ligand-based method to a mainframe for a structure-based method.
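The quadratic cost can be seen directly in a naive implementation, where every atom is paired with every other atom. The sketch below is purely illustrative and uses inverse distance as a stand-in for a real interaction or scoring term.

    import math
    import random

    # Naive pairwise interaction loop, illustrating why the cost grows as
    # O(N^2) with the number of atoms N: every atom is paired with every other.
    # "Interaction" here is just the inverse distance, as a placeholder.

    def pairwise_sum(coords: list[tuple[float, float, float]]) -> float:
        total = 0.0
        n = len(coords)
        for i in range(n):
            for j in range(i + 1, n):          # N*(N-1)/2 pairs
                r = math.dist(coords[i], coords[j])
                total += 1.0 / r
        return total

    if __name__ == "__main__":
        atoms = [(random.random(), random.random(), random.random()) for _ in range(500)]
        print(pairwise_sum(atoms))   # doubling the atom count roughly quadruples the work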

Ligand-based

Ligand-based methods typically require a fraction of a second for a single structure comparison operation. A single CPU is enough to perform a large screening within hours. However, several comparisons can be made in parallel in order to expedite the processing of a large database of compounds.

Structure-based

The size of the task requires a parallel computing infrastructure, such as a cluster of Linux systems, running a batch queue processor to handle the work, such as Sun Grid Engine or Torque PBS.

A means of handling the input from large compound libraries is needed. This requires a form of compound database that can be queried by the parallel cluster, delivering compounds in parallel to the various compute nodes. Commercial database engines may be too ponderous, and a high speed indexing engine, such as Berkeley DB, may be a better choice. Furthermore, it may not be efficient to run one comparison per job, because the ramp up time of the cluster nodes could easily outstrip the amount of useful work. To work around this, it is necessary to process batches of compounds in each cluster job, aggregating the results into some kind of log file. A secondary process, to mine the log files and extract high scoring candidates, can then be run after the whole experiment has been run.
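A minimal sketch of this batch-and-log pattern is shown below; the scoring function, file names and compound identifiers are placeholders rather than the output of any real docking program.

    import csv
    import random

    # Sketch of the batch-per-job pattern described above: each cluster job
    # scores one batch of compounds and writes a result log; a secondary pass
    # mines the logs for high-scoring candidates. The scoring function and
    # file names are placeholders, not any real docking program.

    def score(compound_id: str) -> float:
        random.seed(compound_id)               # stand-in for a docking score
        return random.uniform(-12.0, 0.0)

    def run_batch(compound_ids: list[str], log_path: str) -> None:
        with open(log_path, "w", newline="") as fh:
            writer = csv.writer(fh)
            for cid in compound_ids:
                writer.writerow([cid, f"{score(cid):.2f}"])

    def mine_logs(log_paths: list[str], cutoff: float = -9.0) -> list[tuple[str, float]]:
        hits = []
        for path in log_paths:
            with open(path, newline="") as fh:
                for cid, s in csv.reader(fh):
                    if float(s) <= cutoff:      # more negative = better score
                        hits.append((cid, float(s)))
        return sorted(hits, key=lambda h: h[1])

    if __name__ == "__main__":
        batch = [f"CMPD{i:08d}" for i in range(1000)]
        run_batch(batch, "batch_000.log")
        print(mine_logs(["batch_000.log"])[:5])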

Accuracy

The aim of virtual screening is to identify molecules of novel chemical structure that bind to the macromolecular target of interest. Thus, success of a virtual screen is defined in terms of finding interesting new scaffolds rather than the total number of hits. Interpretations of virtual screening accuracy should, therefore, be considered with caution. Low hit rates of interesting scaffolds are clearly preferable over high hit rates of already known scaffolds.

Most tests of virtual screening studies in the literature are retrospective. In these studies, the performance of a VS technique is measured by its ability to retrieve a small set of previously known molecules with affinity to the target of interest (active molecules or just actives) from a library containing a much higher proportion of assumed inactives or decoys. By contrast, in prospective applications of virtual screening, the resulting hits are subjected to experimental confirmation (e.g., IC50 measurements). There is consensus that retrospective benchmarks are not good predictors of prospective performance and consequently only prospective studies constitute conclusive proof of the suitability of a technique for a particular target.

Application to drug discovery

Virtual screening is very useful for identifying hit molecules as a starting point for medicinal chemistry. As the virtual screening approach has become a more vital and substantial technique within the medicinal chemistry industry, its use has grown rapidly.

Ligand-based methods

When the structure of the receptor is not known, ligand-based methods try to predict how the ligands will bind to it. Using pharmacophore features, donors and acceptors are identified for each ligand. Matching features are then overlaid, although it is unlikely that there is a single correct solution.

Pharmacophore models

This technique is used to merge the results of searches that use different reference compounds, with the same descriptors and similarity coefficient but different active compounds. The technique is beneficial because it is more efficient than using a single reference structure, and it gives the most accurate performance when the actives are diverse.

A pharmacophore is an ensemble of steric and electronic features that is needed for an optimal supramolecular interaction, or interactions, with a biological target structure in order to trigger its biological response. A representative set of actives is chosen, and most methods will then look for similar bindings. It is preferred to have multiple rigid molecules, and the ligands should be diversified; in other words, ensure they have different features that do not occur during the binding phase.

Structure

A predictive model is built from knowledge of known active and known inactive compounds. QSAR (quantitative structure-activity relationship) models are restricted to small homogeneous datasets, while SAR (structure-activity relationship) models treat data qualitatively and can be used with structural classes and more than one binding mode. These models prioritize compounds for lead discovery.

Machine Learning

In order to use machine learning for this model of virtual screening, there must be a training set with known active and known inactive compounds. A model of activity is then computed by way of substructural analysis, recursive partitioning, support vector machines, k-nearest neighbours, or neural networks. The final step is finding the probability that a compound is active and then ranking each compound based on its probability of being active.
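A minimal sketch of this train, score and rank workflow is shown below, using scikit-learn with randomly generated bit-vector fingerprints in place of real descriptors; every name and number in it is illustrative.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Sketch of the workflow described above: train on known actives/inactives,
    # compute the probability that each screening compound is active, and rank.
    # Fingerprints are random bit vectors here; in practice they would come
    # from a cheminformatics toolkit. Everything below is illustrative.

    rng = np.random.default_rng(0)
    n_bits = 512

    X_train = rng.integers(0, 2, size=(1000, n_bits))   # known compounds
    y_train = rng.integers(0, 2, size=1000)              # 1 = active, 0 = inactive
    X_screen = rng.integers(0, 2, size=(5000, n_bits))   # library to screen

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    p_active = model.predict_proba(X_screen)[:, 1]        # probability of activity
    ranking = np.argsort(p_active)[::-1]                  # best candidates first
    print("top 10 library indices:", ranking[:10])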

Substructural analysis in Machine Learning

The first machine learning model used on large datasets was substructural analysis, created in 1973. Each fragment substructure makes a continuous contribution to an activity of a specific type. Substructural analysis is a method that overcomes the difficulty of massive dimensionality when analyzing structures in drug design. An efficient substructure analysis is used for structures that have similarities to a multi-level building or tower. Geometry is used for numbering boundary joints for a given structure from the onset towards the climax. When the method of special static condensation and substitution routines is developed, this method proves to be more productive than previous substructure analysis models.

Recursive partitioning

Recursive partitioning is a method that creates a decision tree using qualitative data, splitting the classes with a low error of misclassification and repeating each step until no sensible splits can be found. However, recursive partitioning can have poor prediction ability, potentially creating fine models at the same rate.
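A minimal sketch of recursive partitioning is shown below, using scikit-learn's decision tree on two synthetic descriptors with a toy activity rule; it simply prints the learned splitting rules. The descriptor names and the rule are assumptions for illustration only.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Recursive partitioning in miniature: a decision tree is grown by repeatedly
    # splitting the data until no useful split remains (here limited by depth).
    # The two "descriptors" and the activity rule are synthetic.

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 1, size=(500, 2))                 # e.g. logP-like, MW-like
    y = ((X[:, 0] > 0.5) & (X[:, 1] < 0.4)).astype(int)  # toy activity rule

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=["descriptor_1", "descriptor_2"]))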

Structure-based methods: known protein-ligand docking

A ligand can be docked into an active site within a protein by using a docking search algorithm and a scoring function, in order to identify the most likely binding pose for an individual ligand while assigning a priority order.

High-throughput screening

From Wikipedia, the free encyclopedia
 
High-throughput screening robots

High-throughput screening (HTS) is a method for scientific experimentation especially used in drug discovery and relevant to the fields of biology and chemistry. Using robotics, data processing/control software, liquid handling devices, and sensitive detectors, high-throughput screening allows a researcher to quickly conduct millions of chemical, genetic, or pharmacological tests. Through this process one can rapidly identify active compounds, antibodies, or genes that modulate a particular biomolecular pathway. The results of these experiments provide starting points for drug design and for understanding the noninteraction or role of a particular location.

Assay plate preparation

A robot arm handles an assay plate

The key labware or testing vessel of HTS is the microtiter plate: a small container, usually disposable and made of plastic, that features a grid of small, open divots called wells. In general, microplates for HTS have either 96, 192, 384, 1536, 3456 or 6144 wells. These are all multiples of 96, reflecting the original 96-well microplate with its 8 x 12 array of wells at 9 mm spacing. Most of the wells contain test items, depending on the nature of the experiment. These could be different chemical compounds dissolved, e.g., in an aqueous solution of dimethyl sulfoxide (DMSO). The wells could also contain cells or enzymes of some type. (The other wells may be empty or contain pure solvent or untreated samples, intended for use as experimental controls.)

A screening facility typically holds a library of stock plates, whose contents are carefully catalogued, and each of which may have been created by the lab or obtained from a commercial source. These stock plates themselves are not directly used in experiments; instead, separate assay plates are created as needed. An assay plate is simply a copy of a stock plate, created by pipetting a small amount of liquid (often measured in nanoliters) from the wells of a stock plate to the corresponding wells of a completely empty plate.

Reaction observation

To prepare for an assay, the researcher fills each well of the plate with some biological entity that they wish to conduct the experiment upon, such as a protein, cells, or an animal embryo. After some incubation time has passed to allow the biological matter to absorb, bind to, or otherwise react (or fail to react) with the compounds in the wells, measurements are taken across all the plate's wells, either manually or by a machine. Manual measurements are often necessary when the researcher is using microscopy to (for example) seek changes or defects in embryonic development caused by the wells' compounds, looking for effects that a computer could not easily determine by itself. Otherwise, a specialized automated analysis machine can run a number of experiments on the wells (such as shining polarized light on them and measuring reflectivity, which can be an indication of protein binding). In this case, the machine outputs the result of each experiment as a grid of numeric values, with each number mapping to the value obtained from a single well. A high-capacity analysis machine can measure dozens of plates in the space of a few minutes like this, generating thousands of experimental datapoints very quickly.

Depending on the results of this first assay, the researcher can perform follow up assays within the same screen by "cherrypicking" liquid from the source wells that gave interesting results (known as "hits") into new assay plates, and then re-running the experiment to collect further data on this narrowed set, confirming and refining observations.

Automation systems

A carousel system to store assay plates for high storage capacity and high speed access

Automation is an important element in HTS's usefulness. Typically, an integrated robot system consisting of one or more robots transports assay-microplates from station to station for sample and reagent addition, mixing, incubation, and finally readout or detection. An HTS system can usually prepare, incubate, and analyze many plates simultaneously, further speeding the data-collection process. HTS robots that can test up to 100,000 compounds per day currently exist. Automatic colony pickers pick thousands of microbial colonies for high throughput genetic screening. The term uHTS or ultra-high-throughput screening refers (circa 2008) to screening in excess of 100,000 compounds per day.

Experimental design and data analysis

With the ability to rapidly screen diverse compounds (such as small molecules or siRNAs) to identify active compounds, HTS has led to an explosion in the rate of data generated in recent years. Consequently, one of the most fundamental challenges in HTS experiments is to glean biochemical significance from mounds of data, which relies on the development and adoption of appropriate experimental designs and analytic methods for both quality control and hit selection. HTS research is one of the fields with a feature described by John Blume, Chief Science Officer for Applied Proteomics, Inc., as follows: soon, if a scientist does not understand some statistics or rudimentary data-handling technologies, he or she may not be considered to be a true molecular biologist and, thus, will simply become "a dinosaur."

Quality control

High-quality HTS assays are critical in HTS experiments. The development of high-quality HTS assays requires the integration of both experimental and computational approaches for quality control (QC). Three important means of QC are (i) good plate design, (ii) the selection of effective positive and negative chemical/biological controls, and (iii) the development of effective QC metrics to measure the degree of differentiation so that assays with inferior data quality can be identified.  A good plate design helps to identify systematic errors (especially those linked with well position) and determine what normalization should be used to remove/reduce the impact of systematic errors on both QC and hit selection.

Effective analytic QC methods serve as a gatekeeper for excellent quality assays. In a typical HTS experiment, a clear distinction between a positive control and a negative reference such as a negative control is an index for good quality. Many quality-assessment measures have been proposed to measure the degree of differentiation between a positive control and a negative reference. Signal-to-background ratio, signal-to-noise ratio, signal window, assay variability ratio, and Z-factor have been adopted to evaluate data quality. Strictly standardized mean difference (SSMD) has recently been proposed for assessing data quality in HTS assays. 
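Two of the metrics named above have simple closed forms, shown in the sketch below for a set of positive- and negative-control wells; the control readings are simulated numbers used only for illustration.

    import numpy as np

    # Two of the QC statistics named above, computed from positive- and
    # negative-control wells. Formulas: Z' = 1 - 3*(sd_p + sd_n)/|mean_p - mean_n|
    # and SSMD = (mean_p - mean_n)/sqrt(sd_p^2 + sd_n^2). The control readings
    # below are made-up numbers for illustration.

    def z_factor(pos: np.ndarray, neg: np.ndarray) -> float:
        return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

    def ssmd(pos: np.ndarray, neg: np.ndarray) -> float:
        return (pos.mean() - neg.mean()) / np.sqrt(pos.var(ddof=1) + neg.var(ddof=1))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pos = rng.normal(loc=100.0, scale=8.0, size=32)   # positive-control wells
        neg = rng.normal(loc=20.0, scale=6.0, size=32)    # negative-control wells
        print(f"Z-factor: {z_factor(pos, neg):.2f}   SSMD: {ssmd(pos, neg):.1f}")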

Hit selection

A compound with a desired size of effects in an HTS is called a hit. The process of selecting hits is called hit selection. The analytic methods for hit selection in screens without replicates (usually in primary screens) differ from those with replicates (usually in confirmatory screens). For example, the z-score method is suitable for screens without replicates whereas the t-statistic is suitable for screens with replicates. The calculation of SSMD for screens without replicates also differs from that for screens with replicates.

For hit selection in primary screens without replicates, the easily interpretable measures are average fold change, mean difference, percent inhibition, and percent activity. However, they do not capture data variability effectively. The z-score method and SSMD can capture data variability, based on the assumption that every compound has the same variability as the negative reference in the screens. However, outliers are common in HTS experiments, and methods such as the z-score are sensitive to outliers and can be problematic. As a consequence, robust methods such as the z*-score method, SSMD*, the B-score method, and quantile-based methods have been proposed and adopted for hit selection.
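A minimal sketch of z-score and robust z*-score hit selection is shown below; it assumes most compounds on the plate are inactive, and the measurements and cutoff are illustrative.

    import numpy as np

    # Hit selection by z-score and robust z*-score, as described above. Here the
    # scores are computed against all compound wells on a plate, assuming most
    # compounds are inactive; the measurements and cutoff are illustrative.

    def z_scores(values: np.ndarray) -> np.ndarray:
        return (values - values.mean()) / values.std(ddof=1)

    def robust_z_scores(values: np.ndarray) -> np.ndarray:
        med = np.median(values)
        mad = 1.4826 * np.median(np.abs(values - med))   # MAD scaled to match sd
        return (values - med) / mad

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        plate = rng.normal(100.0, 10.0, size=320)
        plate[[5, 42, 199]] = [35.0, 30.0, 28.0]         # a few strong inhibitors
        z = robust_z_scores(plate)
        hits = np.flatnonzero(z <= -3.0)                 # e.g. a 3-MAD cutoff
        print("hit well indices:", hits)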

In a screen with replicates, we can directly estimate variability for each compound; as a consequence, we should use SSMD or the t-statistic, which do not rely on the strong assumption that the z-score and z*-score rely on. One issue with the use of the t-statistic and associated p-values is that they are affected by both sample size and effect size. They come from testing for no mean difference and thus are not designed to measure the size of compound effects. For hit selection, the major interest is the size of the effect in a tested compound. SSMD directly assesses the size of effects, and has been shown to be better than other commonly used effect sizes. The population value of SSMD is comparable across experiments, so the same cutoff for the population value of SSMD can be used to measure the size of compound effects.

Techniques for increased throughput and efficiency

Unique distributions of compounds across one or many plates can be employed either to increase the number of assays per plate or to reduce the variance of assay results, or both. The simplifying assumption made in this approach is that any N compounds in the same well will not typically interact with each other, or the assay target, in a manner that fundamentally changes the ability of the assay to detect true hits.

For example, imagine a plate wherein compound A is in wells 1-2-3, compound B is in wells 2-3-4, and compound C is in wells 3-4-5. In an assay of this plate against a given target, a hit in wells 2, 3, and 4 would indicate that compound B is the most likely agent, while also providing three measurements of compound B's efficacy against the specified target. Commercial applications of this approach involve combinations in which no two compounds ever share more than one well, to reduce the (second-order) possibility of interference between pairs of compounds being screened.
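
A toy decoder for the pooled-well example above is sketched below; the well numbers and compound-to-well assignments mirror the example and are purely illustrative.

```python
# Each compound occupies several wells; the compound whose entire well set
# scored as hits is the most likely active agent.
compound_wells = {
    "A": {1, 2, 3},
    "B": {2, 3, 4},
    "C": {3, 4, 5},
}

def decode_hits(hit_wells):
    """Return compounds all of whose wells are among the hit wells."""
    hit_wells = set(hit_wells)
    return [c for c, wells in compound_wells.items() if wells <= hit_wells]

print(decode_hits({2, 3, 4}))   # ['B'] -- three independent measurements of compound B
```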

Recent advances

Automation and low-volume assay formats were leveraged by scientists at the NIH Chemical Genomics Center (NCGC) to develop quantitative HTS (qHTS), a paradigm for pharmacologically profiling large chemical libraries through the generation of full concentration-response relationships for each compound. With accompanying curve-fitting and cheminformatics software, qHTS data yield the half-maximal effective concentration (EC50), maximal response, and Hill coefficient (nH) for the entire library, enabling the assessment of nascent structure-activity relationships (SAR).
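
As an illustration of the curve-fitting step, the sketch below fits a four-parameter logistic (Hill) model to a single compound's titration using SciPy; the concentrations and responses are invented, and production qHTS pipelines use dedicated curve-classification software rather than this bare fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, n_h):
    """Four-parameter logistic (Hill) model for a concentration-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n_h)

# Hypothetical 7-point titration for one compound (concentrations in micromolar)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
resp = np.array([2.0, 5.0, 15.0, 38.0, 70.0, 88.0, 95.0])

params, _ = curve_fit(hill, conc, resp, p0=[0.0, 100.0, 0.5, 1.0])
bottom, top, ec50, n_h = params
print(f"EC50 ~ {ec50:.2f} uM, maximal response ~ {top:.0f}, Hill coefficient ~ {n_h:.1f}")
```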

In March 2010, research was published demonstrating an HTS process that allows screening 1,000 times faster (100 million reactions in 10 hours) at one-millionth of the cost (using 10⁻⁷ times the reagent volume) of conventional techniques, using drop-based microfluidics. Drops of fluid separated by oil replace microplate wells and allow analysis and hit sorting while reagents flow through channels.

In 2010, researchers developed a silicon sheet of lenses that can be placed over microfluidic arrays to allow the fluorescence measurement of 64 different output channels simultaneously with a single camera. This process can analyze 200,000 drops per second.

Whereas traditional HTS drug discovery uses purified proteins or intact cells, an interesting recent development of the technology is the use of intact living organisms, such as the nematode Caenorhabditis elegans and the zebrafish (Danio rerio).

In 2016–2018, plate manufacturers began producing specialized surface chemistries that allow mass production of ultra-low-attachment, cell-repellent surfaces. These have facilitated the rapid development of HTS-amenable assays for cancer drug discovery in 3D tissues such as organoids and spheroids, a more physiologically relevant format.

Increasing utilization of HTS in academia for biomedical research

HTS is a relatively recent innovation, made feasible largely through modern advances in robotics and high-speed computer technology. It still takes a highly specialized and expensive screening lab to run an HTS operation, so in many cases a small- to moderate-size research institution will use the services of an existing HTS facility rather than set up one for itself.

There is a growing trend in academia for universities to operate their own drug discovery enterprises. Facilities that were previously found only in industry are now increasingly found at universities as well. UCLA, for example, features an open-access HTS laboratory, the Molecular Screening Shared Resources (MSSR, UCLA), which can routinely screen more than 100,000 compounds a day. The open-access policy ensures that researchers from all over the world can take advantage of this facility without lengthy intellectual property negotiations. With a compound library of over 200,000 small molecules, the MSSR has one of the largest compound decks of any university on the west coast. The MSSR also offers full functional genomics capabilities (genome-wide siRNA, shRNA, cDNA and CRISPR), which are complementary to small-molecule efforts: functional genomics leverages HTS capabilities to execute genome-wide screens that examine the function of each gene in the context of interest by either knocking each gene out or overexpressing it. Parallel access to a high-throughput small-molecule screen and a genome-wide screen enables researchers to perform target identification and validation for a given disease, or to determine the mode of action of a small molecule. The most accurate results are obtained with "arrayed" functional genomics libraries, i.e. libraries in which each well contains a single construct such as a single siRNA or cDNA. Functional genomics is typically paired with high-content screening using, for example, epifluorescence microscopy or laser-scanning cytometry.

The University of Illinois also has a facility for HTS, as does the University of Minnesota. The Life Sciences Institute at the University of Michigan houses the HTS facility in the Center for Chemical Genomics. Columbia University has an HTS shared resource facility with ~300,000 diverse small molecules and ~10,000 known bioactive compounds available for biochemical, cell-based and NGS-based screening. The Rockefeller University has an open-access HTS Resource Center, the HTSRC (The Rockefeller University, HTSRC), which offers a library of over 380,000 compounds. Northwestern University's High Throughput Analysis Laboratory supports target identification, validation, assay development, and compound screening. The non-profit Sanford Burnham Prebys Medical Discovery Institute also has a long-standing HTS facility in the Conrad Prebys Center for Chemical Genomics, which was part of the MLPCN. The non-profit Scripps Research Molecular Screening Center (SRMSC) continues to serve academia across institutes in the post-MLPCN era. The SRMSC uHTS facility maintains one of the largest library collections in academia, presently at well over 665,000 small-molecule entities, and routinely screens the full collection or sub-libraries in support of multi-PI grant initiatives.

In the United States, the National Institutes of Health or NIH has created a nationwide consortium of small-molecule screening centers to produce innovative chemical tools for use in biological research. The Molecular Libraries Probe Production Centers Network, or MLPCN, performs HTS on assays provided by the research community, against a large library of small molecules maintained in a central molecule repository.

Chemical biology

From Wikipedia, the free encyclopedia

Chemical biology is a scientific discipline spanning the fields of chemistry and biology. The discipline involves the application of chemical techniques, analysis, and often small molecules produced through synthetic chemistry, to the study and manipulation of biological systems. In contrast to biochemistry, which involves the study of the chemistry of biomolecules and regulation of biochemical pathways within and between cells, chemical biology deals with chemistry applied to biology (synthesis of biomolecules, simulation of biological systems etc.).

Introduction

Some forms of chemical biology attempt to answer biological questions by directly probing living systems at the chemical level. In contrast to research using biochemistry, genetics, or molecular biology, where mutagenesis can provide a new version of the organism, cell, or biomolecule of interest, chemical biology probes systems in vitro and in vivo with small molecules that have been designed for a specific purpose or identified on the basis of biochemical or cell-based screening.

Chemical biology is one of several interdisciplinary sciences that tend to differ from older, reductionist fields and whose goals are to achieve a description of scientific holism. Chemical biology has scientific, historical and philosophical roots in medicinal chemistry, supramolecular chemistry, bioorganic chemistry, pharmacology, genetics, biochemistry, and metabolic engineering.

Systems of interest

Enrichment techniques for proteomics

Chemical biologists work to improve proteomics through the development of enrichment strategies, chemical affinity tags, and new probes. Samples for proteomics often contain many peptide sequences, and the sequence of interest may be highly represented or of low abundance, which creates a barrier for its detection. Chemical biology methods can reduce sample complexity by selective enrichment using affinity chromatography. This involves targeting a peptide with a distinguishing feature such as a biotin label or a post-translational modification. Methods have been developed that include the use of antibodies, lectins to capture glycoproteins, immobilized metal ions to capture phosphorylated peptides, and enzyme substrates to capture select enzymes.

Enzyme probes

To investigate enzymatic activity as opposed to total protein, activity-based reagents have been developed to label the enzymatically active form of proteins (see Activity-based proteomics). For example, serine hydrolase- and cysteine protease-inhibitors have been converted to suicide inhibitors. This strategy enhances the ability to selectively analyze low abundance constituents through direct targeting. Enzyme activity can also be monitored through converted substrate. Identification of enzyme substrates is a problem of significant difficulty in proteomics and is vital to the understanding of signal transduction pathways in cells. A method that has been developed uses "analog-sensitive" kinases to label substrates using an unnatural ATP analog, facilitating visualization and identification through a unique handle.

Glycobiology

While DNA, RNA and proteins are all encoded at the genetic level, glycans (sugar polymers) are not encoded directly from the genome and fewer tools are available for their study. Glycobiology is therefore an area of active research for chemical biologists. For example, cells can be supplied with synthetic variants of natural sugars to probe their function. Carolyn Bertozzi's research group has developed methods for site-specifically reacting molecules at the surface of cells via synthetic sugars.

Combinatorial chemistry

Chemical biologists use automated synthesis of diverse small-molecule libraries in order to perform high-throughput analysis of biological processes. Such experiments may lead to the discovery of small molecules with antibiotic or chemotherapeutic properties. These combinatorial chemistry approaches are identical to those employed in the discipline of pharmacology.

Employing biology

Many research programs are also focused on employing natural biomolecules to perform biological tasks or to support a new chemical method. In this regard, chemical biology researchers have shown that DNA can serve as a template for synthetic chemistry, self-assembling proteins can serve as a structural scaffold for new materials, and RNA can be evolved in vitro to produce new catalytic function. Additionally, heterobifunctional (two-sided) synthetic small molecules such as dimerizers or PROTACs bring two proteins together inside cells, which can synthetically induce important new biological functions such as targeted protein degradation.

Peptide synthesis

Chemical synthesis of proteins is a valuable tool in chemical biology as it allows for the introduction of non-natural amino acids as well as residue specific incorporation of "posttranslational modifications" such as phosphorylation, glycosylation, acetylation, and even ubiquitination. These capabilities are valuable for chemical biologists as non-natural amino acids can be used to probe and alter the functionality of proteins, while post translational modifications are widely known to regulate the structure and activity of proteins. Although strictly biological techniques have been developed to achieve these ends, the chemical synthesis of peptides often has a lower technical and practical barrier to obtaining small amounts of the desired protein.

In order to make protein-sized polypeptide chains via the small peptide fragments made by synthesis, chemical biologists use the process of native chemical ligation. Native chemical ligation involves the coupling of a C-terminal thioester and an N-terminal cysteine residue, ultimately resulting in formation of a "native" amide bond. Other strategies that have been used for the ligation of peptide fragments using the acyl transfer chemistry first introduced with native chemical ligation include expressed protein ligation, sulfurization/desulfurization techniques, and use of removable thiol auxiliaries. Expressed protein ligation allows for the biotechnological installation of a C-terminal thioester using inteins, thereby allowing the appendage of a synthetic N-terminal peptide to the recombinantly-produced C-terminal portion. Both sulfurization/desulfurization techniques and the use of removable thiol auxiliaries involve the installation of a synthetic thiol moiety to carry out the standard native chemical ligation chemistry, followed by removal of the auxiliary/thiol.

Directed evolution

A primary goal of protein engineering is the design of novel peptides or proteins with a desired structure and chemical activity. Because our knowledge of the relationship between primary sequence, structure, and function of proteins is limited, rational design of new proteins with engineered activities is extremely challenging. In directed evolution, repeated cycles of genetic diversification followed by a screening or selection process, can be used to mimic natural selection in the laboratory to design new proteins with a desired activity.

Several methods exist for creating large libraries of sequence variants. Among the most widely used are subjecting DNA to UV radiation or chemical mutagens, error-prone PCR, degenerate codons, or recombination. Once a large library of variants is created, selection or screening techniques are used to find mutants with a desired attribute. Common selection/screening techniques include FACS, mRNA display, phage display, and in vitro compartmentalization. Once useful variants are found, their DNA sequence is amplified and subjected to further rounds of diversification and selection.
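
The overall diversify-select-amplify cycle can be caricatured in silico; the sketch below uses random point mutations as a stand-in for error-prone PCR and sequence identity to an arbitrary target as a stand-in for a screen, so it illustrates the logic of the cycle rather than any real laboratory protocol.

```python
import random

random.seed(0)
TARGET = "MKTAYIAKQR"            # a hypothetical "ideal" peptide sequence
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"

def fitness(seq):
    """Toy stand-in for a screen/selection readout: identity to the target."""
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq, rate=0.1):
    """Crude analogue of error-prone PCR: random point substitutions."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else aa for aa in seq)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:20]                                          # "selection"
    population = [mutate(random.choice(parents)) for _ in range(200)]  # "diversification"

best = max(population, key=fitness)
print(best, fitness(best))
```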

The development of directed evolution methods was honored in 2018 with the awarding of the Nobel Prize in Chemistry to Frances Arnold for evolution of enzymes, and George Smith and Gregory Winter for phage display.

Bioorthogonal reactions

Successful labeling of a molecule of interest requires specific functionalization of that molecule to react chemospecifically with an optical probe. For a labeling experiment to be considered robust, that functionalization must minimally perturb the system. Unfortunately, these requirements are often hard to meet. Many of the reactions normally available to organic chemists in the laboratory are unavailable in living systems. Water- and redox-sensitive reactions would not proceed, reagents prone to nucleophilic attack would offer no chemospecificity, and any reaction with a large kinetic barrier would not find enough energy in the relatively low-heat environment of a living cell. Thus, chemists have recently developed a panel of bioorthogonal reactions that proceed chemospecifically, despite the milieu of distracting reactive materials in vivo.

The coupling of a probe to a molecule of interest must occur within a reasonably short time frame; therefore, the kinetics of the coupling reaction should be highly favorable. Click chemistry is well suited to fill this niche, since click reactions are rapid, spontaneous, selective, and high-yielding. Unfortunately, the most famous "click reaction", a [3+2] cycloaddition between an azide and an acyclic alkyne, is copper-catalyzed, posing a serious problem for use in vivo due to copper's toxicity. To bypass the need for a catalyst, Carolyn R. Bertozzi's lab introduced inherent strain into the alkyne species by using a cyclic alkyne. In particular, cyclooctyne reacts with azido-molecules with distinctive vigor.

The most common method of installing bioorthogonal reactivity into a target biomolecule is through metabolic labeling. Cells are immersed in a medium where access to nutrients is limited to synthetically modified analogues of standard fuels such as sugars. As a consequence, these altered biomolecules are incorporated into the cells in the same manner as the unmodified metabolites. A probe is then incorporated into the system to image the fate of the altered biomolecules. Other methods of functionalization include enzymatically inserting azides into proteins, and synthesizing phospholipids conjugated to cyclooctynes.

Discovery of biomolecules through metagenomics

The advances in modern sequencing technologies in the late 1990s allowed scientists to investigate the DNA of communities of organisms in their natural environments ("eDNA"), without culturing individual species in the lab. This metagenomic approach enabled scientists to study a wide selection of organisms that had previously not been characterized, due in part to the lack of suitable growth conditions. Sources of eDNA include soils, the ocean, the subsurface, hot springs, hydrothermal vents, polar ice caps, hypersaline habitats, and extreme-pH environments. Of the many applications of metagenomics, researchers such as Jo Handelsman, Jon Clardy, and Robert M. Goodman explored metagenomic approaches toward the discovery of biologically active molecules such as antibiotics.

Overview of metagenomic methods

Functional or homology screening strategies have been used to identify genes that produce small bioactive molecules. Functional metagenomic studies are designed to search for specific phenotypes that are associated with molecules with specific characteristics. Homology metagenomic studies, on the other hand, are designed to examine genes in order to identify conserved sequences that have previously been associated with the expression of biologically active molecules.

Functional metagenomic studies enable the discovery of novel genes that encode biologically active molecules. These assays include top-agar overlay assays, in which antibiotics generate zones of growth inhibition against test microbes, and pH assays that screen for pH changes due to newly synthesized molecules using a pH indicator on an agar plate. Substrate-induced gene expression screening (SIGEX), a method to screen for the expression of genes that are induced by chemical compounds, has also been used to search for genes with specific functions. Homology-based metagenomic studies have led to the rapid discovery of genes that are homologous to previously known genes responsible for the biosynthesis of biologically active molecules. Once the genes are sequenced, scientists can compare thousands of bacterial genomes simultaneously. The advantage over functional metagenomic assays is that homology metagenomic studies do not require a host organism system to express the metagenomes, so this method can potentially save the time spent analyzing nonfunctional genomes. These studies have also led to the discovery of several novel proteins and small molecules. In addition, an in silico examination from the Global Ocean Metagenomic Survey found 20 new lantibiotic cyclases.
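
As a crude illustration of homology-based screening, the sketch below scores hypothetical eDNA contigs against a known biosynthesis-gene fragment by shared k-mer content; all sequences are invented, and real studies rely on tools such as BLAST or profile HMMs rather than this toy similarity measure.

```python
def kmers(seq, k=8):
    """Set of all overlapping k-mers in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def kmer_similarity(query, reference, k=8):
    """Fraction of the query's k-mers shared with a reference gene; a crude
    proxy for sequence homology."""
    q, r = kmers(query, k), kmers(reference, k)
    return len(q & r) / len(q) if q else 0.0

# Hypothetical reference: a fragment of a known biosynthesis gene; the eDNA contigs are made up.
known_gene = "ATGGCTAAAGGTCTGACCGATCGTACCGGTATCGAACTGGCTAAAGAT"
edna_contigs = {
    "contig_1": "TTGGCTAAAGGTCTGACCGATCGTACCGGTATCGAACTGGCTAAAGCT",
    "contig_2": "ATGCCCGGGTTTAAACCCGGGATATATCGCGCGTATATAGCGCGCATA",
}
for name, seq in edna_contigs.items():
    print(name, round(kmer_similarity(seq, known_gene), 2))
```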

Kinases

Posttranslational modification of proteins with phosphate groups by kinases is a key regulatory step throughout all biological systems. Phosphorylation events, either phosphorylation by protein kinases or dephosphorylation by phosphatases, result in protein activation or deactivation. These events have an impact on the regulation of physiological pathways, which makes the ability to dissect and study these pathways integral to understanding the details of cellular processes. There exist a number of challenges—namely the sheer size of the phosphoproteome, the fleeting nature of phosphorylation events and related physical limitations of classical biological and biochemical techniques—that have limited the advancement of knowledge in this area.

Through the use of small-molecule modulators of protein kinases, chemical biologists have gained a better understanding of the effects of protein phosphorylation. For example, nonselective and selective kinase inhibitors, such as a class of pyridinylimidazole compounds, are potent inhibitors useful in the dissection of MAP kinase signaling pathways. These pyridinylimidazole compounds function by targeting the ATP binding pocket. Although this approach, as well as related approaches with slight modifications, has proven effective in a number of cases, these compounds lack adequate specificity for more general applications. Another class of compounds, mechanism-based inhibitors, combines knowledge of kinase enzymology with previously utilized inhibition motifs. For example, a "bisubstrate analog" inhibits kinase action by binding both the conserved ATP binding pocket and a protein/peptide recognition site on the specific kinase. Research groups have also utilized ATP analogs as chemical probes to study kinases and identify their substrates.

The development of novel chemical means of incorporating phosphomimetic amino acids into proteins has provided important insight into the effects of phosphorylation events. Phosphorylation events have typically been studied by mutating an identified phosphorylation site (serine, threonine or tyrosine) to an amino acid, such as alanine, that cannot be phosphorylated. However, these techniques come with limitations, and chemical biologists have developed improved ways of investigating protein phosphorylation. By installing phospho-serine, phospho-threonine or analogous phosphonate mimics into native proteins, researchers are able to perform in vivo studies of the effects of phosphorylation by extending the time a phosphorylation event persists while minimizing the often-unfavorable effects of mutations. Expressed protein ligation has proven to be a successful technique for synthetically producing proteins that contain phosphomimetic molecules at either terminus. In addition, researchers have used unnatural amino acid mutagenesis at targeted sites within a peptide sequence.

Advances in chemical biology have also improved upon classical techniques for imaging kinase action. For example, the development of peptide biosensors (peptides containing incorporated fluorophores) has improved the temporal resolution of in vitro binding assays. One of the most useful techniques for studying kinase action is fluorescence resonance energy transfer (FRET). To utilize FRET for phosphorylation studies, fluorescent proteins are coupled to both a phosphoamino acid binding domain and a peptide that can be phosphorylated. Upon phosphorylation or dephosphorylation of the substrate peptide, a conformational change occurs that results in a change in fluorescence. FRET has also been used in tandem with fluorescence lifetime imaging microscopy (FLIM) or fluorescently conjugated antibodies and flow cytometry to provide quantitative results with excellent temporal and spatial resolution.

Biological fluorescence

Chemical biologists often study the functions of biological macromolecules using fluorescence techniques. The advantage of fluorescence over other techniques resides in its high sensitivity, non-invasiveness, safe detection, and ability to modulate the fluorescence signal. In recent years, the development of green fluorescent protein (GFP) by Roger Y. Tsien and others, along with hybrid systems and quantum dots, has enabled assessing protein location and function more precisely. Three main types of fluorophores are used: small organic dyes, green fluorescent proteins, and quantum dots. Small organic dyes are usually less than 1 kDa and have been modified to increase photostability and brightness and to reduce self-quenching. Quantum dots have very sharp emission wavelengths, high molar absorptivity, and high quantum yield. Neither organic dyes nor quantum dots can recognize the protein of interest without the aid of antibodies, so they require immunolabeling. Fluorescent proteins are genetically encoded and can be fused to the protein of interest. Another genetic tagging technique is the tetracysteine biarsenical system, which requires modifying the targeted sequence to include four cysteines that bind membrane-permeable biarsenical molecules, the green and red dyes "FlAsH" and "ReAsH", with picomolar affinity. Both fluorescent proteins and biarsenical tetracysteine tags can be expressed in live cells, but they present major limitations in ectopic expression and might cause a loss of function.

Fluorescent techniques have been used to assess a number of protein dynamics, including protein tracking, conformational changes, protein–protein interactions, protein synthesis and turnover, and enzyme activity, among others. Three general approaches for measuring protein net redistribution and diffusion are single-particle tracking, correlation spectroscopy, and photomarking methods. In single-particle tracking, the individual molecule must be both bright and sparse enough to be tracked from one frame to the next. Correlation spectroscopy analyzes the intensity fluctuations resulting from the migration of fluorescent objects into and out of a small volume at the focus of a laser. In photomarking, a fluorescent protein can be dequenched in a subcellular area with the use of intense local illumination, and the fate of the marked molecule can then be imaged directly. Michalet and coworkers used quantum dots for single-particle tracking using biotin-quantum dots in HeLa cells. One of the best ways to detect conformational changes in proteins is to label the protein of interest with two fluorophores in close proximity. FRET will respond to internal conformational changes resulting from reorientation of one fluorophore with respect to the other. Fluorescence can also be used to visualize enzyme activity, typically by using a quenched activity-based probe (qABP). Covalent binding of a qABP to the active site of the targeted enzyme provides direct evidence of whether the enzyme is responsible for the signal, upon release of the quencher and recovery of fluorescence.
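
For single-particle tracking specifically, a common downstream computation is the mean squared displacement (MSD) of a trajectory, whose slope gives an estimate of the diffusion coefficient for Brownian motion; the sketch below runs on a simulated 2-D trajectory with an assumed diffusion coefficient, not on real imaging data.

```python
import numpy as np

def mean_squared_displacement(track, max_lag=20):
    """MSD(tau) for a single 2-D trajectory; for pure Brownian motion
    MSD(tau) ~ 4*D*tau, so a linear fit gives an estimate of D."""
    track = np.asarray(track, float)
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1)) for lag in lags])
    return lags, msd

# Hypothetical trajectory: simulated 2-D diffusion with D = 0.25 (arbitrary units), dt = 1
rng = np.random.default_rng(2)
steps = rng.normal(0.0, np.sqrt(2 * 0.25), size=(1000, 2))
trajectory = np.cumsum(steps, axis=0)

lags, msd = mean_squared_displacement(trajectory)
D_est = np.polyfit(lags, msd, 1)[0] / 4.0
print(f"Estimated D ~ {D_est:.2f}")
```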

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...