
Tuesday, July 24, 2018

Docking (molecular)

From Wikipedia, the free encyclopedia
 
In the field of molecular modeling, docking is a method which predicts the preferred orientation of one molecule to a second when bound to each other to form a stable complex. Knowledge of the preferred orientation in turn may be used to predict the strength of association or binding affinity between two molecules using, for example, scoring functions.
 
Schematic illustration of docking a small molecule ligand (green) to a protein target (black) producing a stable complex.
 
Docking of a small molecule (green) into the crystal structure of the beta-2 adrenergic G-protein coupled receptor (PDB: 3SN6)

The associations between biologically relevant molecules such as proteins, peptides, nucleic acids, carbohydrates, and lipids play a central role in signal transduction. Furthermore, the relative orientation of the two interacting partners may affect the type of signal produced (e.g., agonism vs antagonism). Therefore, docking is useful for predicting both the strength and type of signal produced.

Molecular docking is one of the most frequently used methods in structure-based drug design, due to its ability to predict the binding conformation of small molecule ligands to the appropriate target binding site. Characterisation of the binding behaviour plays an important role in the rational design of drugs as well as in elucidating fundamental biochemical processes.[2]

Definition of problem

One can think of molecular docking as a problem of “lock-and-key”, in which one wants to find the correct relative orientation of the “key” which will open up the “lock” (where on the surface of the lock is the key hole, which direction to turn the key after it is inserted, etc.). Here, the protein can be thought of as the “lock” and the ligand can be thought of as a “key”. Molecular docking may be defined as an optimization problem, which would describe the “best-fit” orientation of a ligand that binds to a particular protein of interest. However, since both the ligand and the protein are flexible, a “hand-in-glove” analogy is more appropriate than “lock-and-key”.[3] During the course of the docking process, the ligand and the protein adjust their conformation to achieve an overall "best-fit" and this kind of conformational adjustment resulting in the overall binding is referred to as "induced-fit".[4]

Molecular docking research focusses on computationally simulating the molecular recognition process. It aims to achieve an optimized conformation for both the protein and ligand and relative orientation between protein and ligand such that the free energy of the overall system is minimized.

Docking approaches

Two approaches are particularly popular within the molecular docking community. One approach uses a matching technique that describes the protein and the ligand as complementary surfaces.[5][6][7] The second approach simulates the actual docking process in which the ligand-protein pairwise interaction energies are calculated.[8] Both approaches have significant advantages as well as some limitations. These are outlined below.

Shape complementarity

Geometric matching/shape complementarity methods describe the protein and ligand as a set of features that make them dockable.[9] These features may include molecular surface/complementary surface descriptors. In this case, the receptor’s molecular surface is described in terms of its solvent-accessible surface area and the ligand’s molecular surface is described in terms of its matching surface description. The complementarity between the two surfaces amounts to the shape-matching description that may help find the complementary pose of docking the target and the ligand molecules. Another approach is to describe the hydrophobic features of the protein using turns in the main-chain atoms. Yet another approach is to use a Fourier shape descriptor technique.[10][11][12] Whereas the shape complementarity based approaches are typically fast and robust, they cannot usually model the movements or dynamic changes in the ligand/protein conformations accurately, although recent developments allow these methods to investigate ligand flexibility. Shape complementarity methods can quickly scan through several thousand ligands in a matter of seconds to determine whether they can bind at the protein’s active site, and are usually scalable even to protein-protein interactions. They are also much more amenable to pharmacophore-based approaches, since they use geometric descriptions of the ligands to find optimal binding.

Simulation

Simulating the docking process is much more complicated. In this approach, the protein and the ligand are separated by some physical distance, and the ligand finds its position into the protein’s active site after a certain number of “moves” in its conformational space. The moves incorporate rigid body transformations such as translations and rotations, as well as internal changes to the ligand’s structure including torsion angle rotations. Each of these moves in the conformation space of the ligand induces a total energetic cost of the system. Hence, the system's total energy is calculated after every move.
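To make the move-and-score loop concrete, below is a minimal Python sketch of a Metropolis-style search over rigid-body ligand poses. The energy function, step sizes, and acceptance temperature are placeholders for illustration only; this is not the algorithm of any particular docking program.

import math
import random

def pose_energy(pose):
    # Placeholder scoring: a real program would evaluate the
    # protein-ligand interaction energy for this pose.
    x, y, z, rx, ry, rz = pose
    return (x - 1.0) ** 2 + (y + 0.5) ** 2 + z ** 2 + 0.1 * (rx ** 2 + ry ** 2 + rz ** 2)

def random_move(pose, t_step=0.5, r_step=0.2):
    # Perturb the pose: small random translation and rotation.
    return tuple(p + random.uniform(-s, s)
                 for p, s in zip(pose, (t_step,) * 3 + (r_step,) * 3))

def metropolis_dock(n_moves=10000, kT=1.0):
    pose = (0.0,) * 6                      # x, y, z translation + 3 rotation angles
    energy = pose_energy(pose)
    best_pose, best_energy = pose, energy
    for _ in range(n_moves):
        trial = random_move(pose)
        e_trial = pose_energy(trial)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if e_trial < energy or random.random() < math.exp(-(e_trial - energy) / kT):
            pose, energy = trial, e_trial
            if energy < best_energy:
                best_pose, best_energy = pose, energy
    return best_pose, best_energy

print(metropolis_dock())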

The obvious advantage of docking simulation is that ligand flexibility is easily incorporated, whereas shape complementarity techniques must use ingenious methods to incorporate flexibility in ligands. Also, it more accurately models reality, whereas shape complementarity techniques are more of an abstraction.

Clearly, simulation is computationally expensive, having to explore a large energy landscape. Grid-based techniques, optimization methods, and increased computer speed have made docking simulation more realistic.

Mechanics of docking

Docking flow-chart overview

To perform a docking screen, the first requirement is a structure of the protein of interest. Usually the structure has been determined using a biophysical technique such as X-ray crystallography or NMR spectroscopy, but it can also be derived from homology modeling. This protein structure and a database of potential ligands serve as inputs to a docking program. The success of a docking program depends on two components: the search algorithm and the scoring function.

Search algorithm

The search space in theory consists of all possible orientations and conformations of the protein paired with the ligand. However, in practice with current computational resources, it is impossible to exhaustively explore the search space—this would involve enumerating all possible distortions of each molecule (molecules are dynamic and exist in an ensemble of conformational states) and all possible rotational and translational orientations of the ligand relative to the protein at a given level of granularity. Most docking programs in use account for the whole conformational space of the ligand (flexible ligand), and several attempt to model a flexible protein receptor. Each "snapshot" of the pair is referred to as a pose.
A variety of conformational search strategies have been applied to the ligand and to the receptor. These include:

Ligand flexibility

Conformations of the ligand may be generated in the absence of the receptor and subsequently docked,[13] or conformations may be generated on-the-fly in the presence of the receptor binding cavity,[14] or with full rotational flexibility of every dihedral angle using fragment-based docking.[15] Force field energy evaluations are most often used to select energetically reasonable conformations,[16] but knowledge-based methods have also been used.[17]
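As an illustration of pre-generating ligand conformers and filtering them by force-field energy, here is a short sketch using the open-source RDKit toolkit (one possible choice among many); the molecule, conformer count, and 5 kcal/mol energy window are arbitrary.

# Sketch: generate ligand conformers and keep the energetically reasonable ones.
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.AddHs(Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin as an example
conf_ids = AllChem.EmbedMultipleConfs(mol, numConfs=50, randomSeed=42)

# MMFF94 minimization returns (convergence_flag, energy) per conformer.
results = AllChem.MMFFOptimizeMoleculeConfs(mol)
energies = [e for _, e in results]
e_min = min(energies)

# Keep conformers within an (arbitrary) 5 kcal/mol window of the minimum.
keep = [cid for cid, e in zip(conf_ids, energies) if e - e_min <= 5.0]
print(f"{len(keep)} of {len(conf_ids)} conformers retained for docking")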

Peptides are both highly flexible and relatively large molecules, which makes modeling their flexibility a challenging task. A number of methods have been developed to allow for efficient modeling of peptide flexibility during protein-peptide docking.[18]

Receptor flexibility

Computational capacity has increased dramatically over the last decade, making possible the use of more sophisticated and computationally intensive methods in computer-assisted drug design. However, dealing with receptor flexibility in docking methodologies is still a thorny issue.[19] The main reason behind this difficulty is the large number of degrees of freedom that have to be considered in this kind of calculation. Neglecting receptor flexibility, however, may in some cases lead to poor docking results in terms of binding pose prediction.[20]

Multiple static structures experimentally determined for the same protein in different conformations are often used to emulate receptor flexibility.[21] Alternatively, rotamer libraries of amino acid side chains that surround the binding cavity may be searched to generate alternate but energetically reasonable protein conformations.[22][23]

Scoring function

Docking programs generate a large number of potential ligand poses, of which some can be immediately rejected due to clashes with the protein. The remainder are evaluated using some scoring function, which takes a pose as input and returns a number indicating the likelihood that the pose represents a favorable binding interaction and ranks one ligand relative to another.
Most scoring functions are physics-based molecular mechanics force fields that estimate the energy of the pose within the binding site. The various contributions to binding can be written as an additive equation:

ΔGbind = ΔGsolvent + ΔGconf + ΔGint + ΔGrot + ΔGt/t + ΔGvib

The components consist of solvent effects, conformational changes in the protein and ligand, free energy due to protein-ligand interactions, internal rotations, association energy of ligand and receptor to form a single complex and free energy due to changes in vibrational modes.[24] A low (negative) energy indicates a stable system and thus a likely binding interaction.
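In many force-field scoring functions the interaction term is evaluated as a sum over ligand-protein atom pairs of van der Waals and electrostatic contributions. Purely for illustration, the sketch below computes such a toy pairwise term (Lennard-Jones plus Coulomb) for one pose; the parameters, charges, and coordinates are fabricated and do not correspond to any published force field.

import numpy as np

def interaction_energy(lig_xyz, lig_q, prot_xyz, prot_q,
                       epsilon=0.1, sigma=3.5, coulomb_k=332.0):
    # Toy pairwise score: Lennard-Jones + Coulomb over all ligand-protein atom pairs.
    # Units and parameters are illustrative only.
    d = np.linalg.norm(lig_xyz[:, None, :] - prot_xyz[None, :, :], axis=-1)
    lj = 4 * epsilon * ((sigma / d) ** 12 - (sigma / d) ** 6)
    coul = coulomb_k * np.outer(lig_q, prot_q) / d
    return float(np.sum(lj + coul))

# Tiny fabricated example: 3 ligand atoms, 4 protein atoms.
lig = np.array([[0.0, 0.0, 0.0], [1.4, 0.0, 0.0], [2.8, 0.0, 0.0]])
prot = np.array([[0.0, 4.0, 0.0], [1.5, 4.2, 0.0], [3.0, 4.0, 0.5], [1.5, 5.5, 0.0]])
lig_q = np.array([-0.3, 0.1, 0.2])
prot_q = np.array([0.4, -0.4, 0.1, -0.1])
print(interaction_energy(lig, lig_q, prot, prot_q))

A lower (more negative) total would indicate a more favorable pose, in line with the additive free-energy picture above.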

An alternative approach is to derive a knowledge-based statistical potential for interactions from a large database of protein-ligand complexes, such as the Protein Data Bank, and evaluate the fit of the pose according to this inferred potential.

There are a large number of structures from X-ray crystallography for complexes between proteins and high affinity ligands, but comparatively fewer for low affinity ligands, as the latter complexes tend to be less stable and therefore more difficult to crystallize. Scoring functions trained with these data can dock high affinity ligands correctly, but they will also give plausible docked conformations for ligands that do not bind. This gives a large number of false positive hits, i.e., ligands predicted to bind to the protein that actually don't when placed together in a test tube.

One way to reduce the number of false positives is to recalculate the energy of the top scoring poses using (potentially) more accurate but computationally more intensive techniques such as Generalized Born or Poisson-Boltzmann methods.[8]

Docking assessment

The interdependence between sampling and scoring function affects the docking capability in predicting plausible poses or binding affinities for novel compounds. Thus, an assessment of a docking protocol is generally required (when experimental data is available) to determine its predictive capability. Docking assessment can be performed using different strategies, such as:
  • docking accuracy (DA) calculation;
  • the correlation between a docking score and the experimental response or determination of the enrichment factor (EF);[25]
  • the distance between an ion-binding moiety and the ion in the active site;
  • the presence of induced-fit models.

Docking accuracy

Docking accuracy[26][27] represents one measure to quantify the fitness of a docking program in terms of its ability to predict the right pose of a ligand with respect to that experimentally observed.

Enrichment factor

Docking screens can also be evaluated by the enrichment of annotated ligands of known binders from among a large database of presumed non-binding, “decoy” molecules.[25] In this way, the success of a docking screen is evaluated by its capacity to enrich the small number of known active compounds in the top ranks of a screen from among a much greater number of decoy molecules in the database. The area under the receiver operating characteristic (ROC) curve is widely used to evaluate its performance.
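As a hedged illustration of these two metrics, the sketch below computes the enrichment factor for the top-ranked fraction of a screen and the ROC AUC, using fabricated scores and activity labels and assuming scikit-learn is available.

import numpy as np
from sklearn.metrics import roc_auc_score

def enrichment_factor(scores, is_active, fraction=0.01):
    # EF = (actives found in top fraction / size of top fraction)
    #      / (total actives / total compounds).
    order = np.argsort(scores)              # lower (more negative) score = better rank
    n_top = max(1, int(round(fraction * len(scores))))
    top_hits = np.sum(is_active[order][:n_top])
    return (top_hits / n_top) / (np.sum(is_active) / len(scores))

# Fabricated example: 1000 compounds, 20 actives given slightly better scores.
rng = np.random.default_rng(0)
is_active = np.zeros(1000, dtype=bool)
is_active[:20] = True
scores = rng.normal(0, 1, 1000) - 1.5 * is_active   # actives shifted toward lower scores

print("EF(1%) =", enrichment_factor(scores, is_active, 0.01))
# ROC AUC expects higher = more likely active, so negate docking scores.
print("ROC AUC =", roc_auc_score(is_active, -scores))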

Prospective

Resulting hits from docking screens are subjected to pharmacological validation (e.g. IC50, affinity or potency measurements). Only prospective studies constitute conclusive proof of the suitability of a technique for a particular target.[28]

Benchmarking

The potential of docking programs to reproduce binding modes as determined by X-ray crystallography can be assessed by a range of docking benchmark sets.

For small molecules, several benchmark data sets for docking and virtual screening exist, e.g. the Astex Diverse Set, consisting of high-quality protein-ligand X-ray crystal structures,[29] or the Directory of Useful Decoys (DUD) for evaluation of virtual screening performance.[25]

The ability of docking programs to reproduce peptide binding modes can be assessed with Lessons for Efficiency Assessment of Docking and Scoring (LEADS-PEP).[30]

Applications

A binding interaction between a small molecule ligand and an enzyme protein may result in activation or inhibition of the enzyme. If the protein is a receptor, ligand binding may result in agonism or antagonism. Docking is most commonly used in the field of drug design — most drugs are small organic molecules, and docking may be applied to:
  • hit identification – docking combined with a scoring function can be used to quickly screen large databases of potential drugs in silico to identify molecules that are likely to bind to the protein target of interest (see virtual screening).
  • lead optimization – docking can be used to predict where and in which relative orientation a ligand binds to a protein (also referred to as the binding mode or pose). This information may in turn be used to design more potent and selective analogs.
  • bioremediation – protein-ligand docking can also be used to predict pollutants that can be degraded by enzymes.

Virtual screening

From Wikipedia, the free encyclopedia
 
Figure 1. Flow Chart of Virtual Screening[1]

Virtual screening (VS) is a computational technique used in drug discovery to search libraries of small molecules in order to identify those structures which are most likely to bind to a drug target, typically a protein receptor or enzyme.

Virtual screening has been defined as "automatically evaluating very large libraries of compounds" using computer programs.[4] As this definition suggests, VS has largely been a numbers game focusing on how the enormous chemical space of over 10^60 conceivable compounds[5] can be filtered to a manageable number that can be synthesized, purchased, and tested. Although searching the entire chemical universe may be a theoretically interesting problem, more practical VS scenarios focus on designing and optimizing targeted combinatorial libraries and enriching libraries of available compounds from in-house compound repositories or vendor offerings. As the accuracy of the method has increased, virtual screening has become an integral part of the drug discovery process.[6][1] Virtual screening can be used to select in-house database compounds for screening, to choose compounds that can be purchased externally, and to choose which compound should be synthesized next.

Methods

There are two broad categories of screening techniques: ligand-based and structure-based.[7] The remainder of this page follows the flow chart of virtual screening in Figure 1.

Ligand-based

Given a set of structurally diverse ligands that bind to a receptor, a model of the receptor can be built by exploiting the collective information contained in such a set of ligands. These are known as pharmacophore models. A candidate ligand can then be compared to the pharmacophore model to determine whether it is compatible with it and therefore likely to bind.[8]

Another approach to ligand-based virtual screening is to use 2D chemical similarity analysis methods[9] to scan a database of molecules against one or more active ligand structures.
A popular approach to ligand-based virtual screening is based on searching for molecules with shapes similar to those of known actives, as such molecules will fit the target's binding site and hence will be likely to bind the target. There are a number of prospective applications of this class of techniques in the literature.[10][11] Pharmacophoric extensions of these 3D methods are also freely available as web servers.[12][13]
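A minimal sketch of such a 2D similarity scan, assuming the open-source RDKit toolkit and Morgan (ECFP-like) fingerprints compared with the Tanimoto coefficient; the query and library molecules below are arbitrary examples, not a recommended setup.

# Sketch: rank database molecules by 2D fingerprint similarity to a known active.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fingerprint(smiles):
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), radius=2, nBits=2048)

query_fp = fingerprint("CC(=O)Oc1ccccc1C(=O)O")          # a known active (aspirin here)
database = {"cmpd_1": "c1ccccc1C(=O)O",                   # toy "library"
            "cmpd_2": "CCN(CC)CC",
            "cmpd_3": "CC(=O)Nc1ccc(O)cc1"}

ranked = sorted(((DataStructs.TanimotoSimilarity(query_fp, fingerprint(smi)), name)
                 for name, smi in database.items()), reverse=True)
for sim, name in ranked:
    print(f"{name}: Tanimoto = {sim:.2f}")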

Structure-based

Structure-based virtual screening involves docking of candidate ligands into a protein target followed by applying a scoring function to estimate the likelihood that the ligand will bind to the protein with high affinity.[14][15][16] Webservers oriented to prospective virtual screening are available to all.[17][18]

Computing Infrastructure

The computation of pair-wise interactions between atoms, which is a prerequisite for the operation of many virtual screening programs, is of O(N^2) computational complexity, where N is the number of atoms in the system. Because of the quadratic scaling with respect to the number of atoms, the computing infrastructure may vary from a laptop computer for a ligand-based method to a mainframe for a structure-based method.

Ligand-based

Ligand-based methods typically require a fraction of a second for a single structure comparison operation. A single CPU is enough to perform a large screening within hours. However, several comparisons can be made in parallel in order to expedite the processing of a large database of compounds.

Structure-based

The size of the task requires a parallel computing infrastructure, such as a cluster of Linux systems, running a batch queue processor to handle the work, such as Sun Grid Engine or Torque PBS.

A means of handling the input from large compound libraries is needed. This requires a form of compound database that can be queried by the parallel cluster, delivering compounds in parallel to the various compute nodes. Commercial database engines may be too ponderous, and a high speed indexing engine, such as Berkeley DB, may be a better choice. Furthermore, it may not be efficient to run one comparison per job, because the ramp up time of the cluster nodes could easily outstrip the amount of useful work. To work around this, it is necessary to process batches of compounds in each cluster job, aggregating the results into some kind of log file. A secondary process, to mine the log files and extract high scoring candidates, can then be run after the whole experiment has been run.
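A minimal sketch of this batch-and-mine pattern follows; the file names, the scoring call, and the batch layout are hypothetical, and no particular queueing system or database engine is assumed.

# Sketch of the batch pattern described above: each cluster job scores one
# chunk of the library and writes results to its own log; a second pass
# mines the logs for top-scoring candidates. All names here are hypothetical.
import csv, glob, heapq

def run_batch(compound_ids, batch_index, score_fn):
    with open(f"scores_batch_{batch_index}.csv", "w", newline="") as fh:
        writer = csv.writer(fh)
        for cid in compound_ids:
            writer.writerow([cid, score_fn(cid)])   # one docking/scoring call per compound

def mine_logs(pattern="scores_batch_*.csv", top_n=100):
    rows = []
    for path in glob.glob(pattern):
        with open(path, newline="") as fh:
            rows.extend((float(score), cid) for cid, score in csv.reader(fh))
    return heapq.nsmallest(top_n, rows)             # lowest (best) scores first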

Accuracy

The aim of virtual screening is to identify molecules of novel chemical structure that bind to the macromolecular target of interest. Thus, the success of a virtual screen is defined in terms of finding interesting new scaffolds rather than the total number of hits. Interpretations of virtual screening accuracy should therefore be considered with caution. Low hit rates of interesting scaffolds are clearly preferable to high hit rates of already known scaffolds.

Most tests of virtual screening studies in the literature are retrospective. In these studies, the performance of a VS technique is measured by its ability to retrieve a small set of previously known molecules with affinity to the target of interest (active molecules or just actives) from a library containing a much higher proportion of assumed inactives or decoys. By contrast, in prospective applications of virtual screening, the resulting hits are subjected to experimental confirmation (e.g., IC50 measurements). There is consensus that retrospective benchmarks are not good predictors of prospective performance and consequently only prospective studies constitute conclusive proof of the suitability of a technique for a particular target.[19][20][21][22]

Application to drug discovery

Virtual screening is very useful for identifying hit molecules as a starting point for medicinal chemistry. As the approach has become a more vital and substantial technique within the medicinal chemistry industry, its use has increased rapidly.[23]

Ligand-based methods

When the receptor structure is not known, ligand-based methods try to predict how ligands will bind to the receptor from the known ligands alone. Pharmacophore features such as hydrogen-bond donors and acceptors are identified on each ligand, and matching features are overlaid, although it is unlikely that there is a single correct solution.[1]

Pharmacophore models

This technique merges the results of searches that use different (unlike) reference compounds with the same descriptors and similarity coefficient but different active compounds. It is beneficial because it is more effective than using a single reference structure and gives the most accurate performance when the actives are diverse.[1]

A pharmacophore is the ensemble of steric and electronic features that is needed for an optimal supramolecular interaction, or interactions, with a biological target structure in order to trigger its biological response. A representative set of actives is chosen, and most methods then look for similar binding features. It is preferred to have multiple rigid molecules, and the ligands should be diversified, in other words they should have different features that do not occur during the binding phase.[1]

Structure

A predictive model of compound activity is built from knowledge of known actives and known inactives. QSAR (quantitative structure-activity relationship) models are restricted to small homogeneous datasets, whereas SAR (structure-activity relationship) models treat the data qualitatively and can be used with broader structural classes and more than one binding mode. These models are used to prioritize compounds for lead discovery.[1]

Machine Learning

In order to use machine learning for this model of virtual screening, there must be a training set of known active and known inactive compounds. A model of activity is then computed by way of substructural analysis, recursive partitioning, support vector machines, k-nearest neighbors, or neural networks. The final step is estimating the probability that a compound is active and then ranking each compound by that probability.[1]
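A hedged sketch of that workflow using scikit-learn follows; the descriptors, model choice, and data below are placeholders rather than a recommended protocol.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Fabricated training data: rows are compounds, columns are precomputed
# descriptors/fingerprint bits; y marks known actives (1) and inactives (0).
rng = np.random.default_rng(1)
X_train = rng.integers(0, 2, size=(200, 128))
y_train = rng.integers(0, 2, size=200)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Score an (equally fabricated) screening library and rank by predicted
# probability of being active, as described above.
X_screen = rng.integers(0, 2, size=(1000, 128))
p_active = model.predict_proba(X_screen)[:, 1]
ranking = np.argsort(-p_active)          # best candidates first
print("Top 5 library indices:", ranking[:5])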

Substructural Analysis in Machine Learning

The first machine learning model used on large datasets was substructural analysis, created in 1973. Each fragment substructure makes a continuous contribution to an activity of a specific type.[1] Substructure analysis is a method that overcomes the difficulty of massive dimensionality when analyzing structures in drug design. An efficient substructure analysis is used for structures that have similarities to a multi-level building or tower. Geometry is used for numbering boundary joints for a given structure from the onset towards the climax. When the methods of special static condensation and substitution routines are developed, this approach proves to be more productive than previous substructure analysis models.[24]

Recursive Partitioning

Recursive partitioning is a method that creates a decision tree using qualitative data. It learns rules that split the classes apart with a low misclassification error, repeating each step until no sensible splits can be found. However, recursive partitioning can have poor prediction ability, potentially creating very fine models at the same rate.[1]

Structure-Based Methods: Known Protein-Ligand Docking

A ligand can be docked into an active site within a protein by using a docking search algorithm and a scoring function, which together identify the most likely pose for each individual ligand and assign a priority order.[

How to Build a Virtual Human

October 20, 2003 by Peter Plantec
Original link:  http://www.kurzweilai.net/how-to-build-a-virtual-human
Published in Virtual Humans, AMACOM, November 2003. Published on KurzweilAI.net October 20, 2003.

Virtual Humans is the first book with instructions on designing a “V-human,” or synthetic person. Using the programs on the included CD, you can create animated computer characters who can speak, dialogue intelligently, show facial emotions, have a personality and life story, and be used in real business projects. These excerpts explain how to get started.
 
About 30% of building a virtual human is in the engine. A good engine will make it easy for you to create a believable personality. It provides functions that allow things like handling complex sentences, bringing up the past and learning better responses if one doesn’t work. But in the end, it’s your artistry that gives the entity its charm.
There are many natural language approaches that can handle the job. Simple pattern-matching engines are the least sophisticated and most useful of them all. With the rash of recent interest, I’m not going to pretend I know all the nuances of all the engines out there. Instead, I’ll concentrate on using simple software to build complex personalities. Together we will build a clever virtual person using a mind engine kindly supplied by Yapanda Intelligence, Inc. of Chickasha, Oklahoma. I selected this one because it can drive a real-time 3D head animation with lip-synch. Nevertheless, the basic steps in creating a virtual personality are platform independent.

I’ve included some additional engines to play with. The most powerful is ALICE. She’s an implementation of Artificial Intelligence Markup Language (AIML). Alice source code is available to those of you who want to modify it and build your own Virtual Human engine, adding your own special features. I’ve also included a copy of Jacco Bikker’s WinAlice for PC users. It demonstrates some unique features such as the ability to bring up ancient history and to learn new responses from you.

I’ll talk more about the actual engines in chapter three. But it’s important to realize that the software you use to build your virtual human is just a tool for expressing your artistry.

The most important and least understood part of virtual humans, their personalities, is our focus. We are going to have some serious fun. Let’s look at some uses for virtual people.

Good For Business

From a business perspective virtual humans with a personality are a major boon. Imagine a person signing onto your web page. There’s already a cookie that contains significant information about them, gathered by your virtual host on the guest’s first visit. The encounter might go a bit like this:

Host: “Hey, Joanne, it’s nice to see you again.”

Joanne: “You remember me?”

Host: “Of course I do. But it’s been a while. I missed you.”

Joanne: “Sorry about that, I’ve been really busy.”

Host: “So did you read ‘The Age of Spiritual Machines’?”

Joanne: “Yeah, it was really interesting. Are you one of them?”

Host: “Not yet, I’m afraid, but I’m working on it. Before I forget, you should know about Greg Stock’s new book on how to live to be 200 plus years old!”

Joanne: “I read his last book and liked it. Can you send me a copy?”

Host: “Sure, we have it in stock. Same charge, same place?”

Joanne: “Yup. Also, do you have any books on Freestyle Landscape Quilting?”

Host: “I’ll check. Hold on a few more seconds. Okay, I found two…”

And so forth. You can see that Virtual humans bring back that personal touch so sorely missing in commerce today. Believe it or not, I’ve observed people from every level of sophistication and background respond positively to personal attention from a Virtual Human. It feels good.

Your marketing software can be made to generate marketing variables that can be fed to your virtual human host: Joanne’s buying patterns, personal information like her date of birth, etc. Trust is a big issue, so such data must be handled with respect for the client and used in clever ways. Imagine when Joanne comes online within a week of her birthday and Host sings happy birthday to her. Hokey? Yes. Appealing, you bet. I’ve also discovered that many people tolerate hokey behavior from V-people. It’s a bit like the ways we tolerate…even appreciate the squash and stretch exaggeration in animated film characters. Of course Host would not want to sing happy birthday to every customer. She has to know how to tell which is which. Later in the book we’ll look into using unobtrusive personality assessment to provide those cues. This is one of the most important and most neglected tools you have. You’ll see why later.

An advantage of rule-based approaches is that you can have multiple sets of rules, each one with responses specifically honed to a specific task or person or language. For example, when Joanne logs in, her cookie can initiate the uploading of a rule database tailored specifically to her general personality and buying patterns. That means that when a rule triggers, it will respond in a way likely to make Joanne comfortable while meeting her needs. Next a person from Korea logs on and the host switches to a Korean intelligence base, greeting the client in that language. One well designed host can handle orders in more than 20 languages. This clearly presents opportunities for small companies to expand internationally.

Depending on your type of business or usage, Virtual Human needs will vary. For example, voice-only virtual humans are already very active in phone information and ordering systems. They don’t have much personality yet, but we’re going to work on that. In fact there are a number of different types of virtual humans and we’ll be building one up from the simplest to one of the more complex with a 3D animated talking head. By taking it step by step you’ll be amazed at your own ability to master Virtual Human design.

A good Virtual Human should be able to cope with language. Changing language should be as easy as switching databases and voice engines. Monica Lamb, a Native American scientist and V-person developer has used Alice to build a V-person that teaches and speaks Mohawk.

At a minimum, your V-person will be able to handle general conversational input by voice or keyboard, parse that input to arrive at appropriate behaviors, and output behavior as text or speech, on-screen information, and/or machine commands to software or external devices. It should also have a face display capable of at least minimal emotional expression such as smile, frown and neutral. I prefer a 3D face capable of complex emotional expression that is part of the communication system. This is a tall order, but I believe we can handle it. Here’s an interesting example of how one creative company has used this technology in a mechanical robot:

Redgate Technologies is a company that thrives on invention. They became interested in Natural Language Processing (NLP) early on. They had invented a new chip technology to monitor and control complex technical systems. NLP was useful for interpreting the complex codes generated by their chips. Just for fun, they expanded their NLP engine to represent several personalities. They quickly discovered that a virtual human hooked into their system became a super-capable assistant to a human supervisor. Imagine one on a space station, keeping track of all mechanical systems and keeping the inhabitants company with casual conversation. For luck we won’t name her HAL.

A wonderful example of this V-person species is Redgate’s Sarha. She’s an innovative virtual human interface for industrial monitoring and control. Sarha stands for “Smart Anthropomorphic Robotic Hybrid Agent.” Redgate has used NLP pattern matching to monitor an entire industrial complex. The Virtual Human system they devised sends out queries to specialized monitoring modules using the special Redgate chips. She then reads and interprets the encoded feedback in spoken English, issuing warnings when conditions warrant. She can also take emergency action on her own, if necessary. Her supervisor communicates with her in spoken English, asking her to start processes or check specific conditions. In a demonstration of Sarha’s application to home security, she reported “Anthony, someone left the garage door open.” Anthony replied “Close it for me will you please, Sarha?” And of course she does.

The thing I like most about Sarha is her personality. She makes personal comments; even chides her operator, whom she knows by name. As a demonstration, Sarha was installed into a fully robotic interface that could move around, point to objects and complain about and avoid objects in her path. She was linked by microwave to a control computer she used to monitor her charges. She even gave a brief talk on those special chips Redgate designed to transmit monitoring data back to her. She reached into a bowl, pulled out a chip, pointed at it with a metal finger and started her spiel. Later she took questions. All the while she was monitoring various systems. She even brought online a loud monster generator in another room during the demonstration.

Perhaps one of the most important applications for Virtual Human technology is in teaching. I’ve found that young people have trust issues with the educational system. I can’t blame them when administrators waste millions on bad decisions but there aren’t enough books to go around. Virtual teachers seem separated from all this. It’s hard to attribute ulterior motives to an animated character, even if she is smart and talkative and knows you by name. Properly scripted, a V-teacher can get to know a student on a personal basis. The real human teacher can feed her personal tidbits she can bring up during a lesson:

“So Bill, is it true you threw the winning touchdown in Saturday’s game?”

“Yeah, how’d you know about that?”

“Hey, I keep on top of things. Congratulations. Now let’s teach you how to estimate the diameter of an oleic acid molecule.”

Young children can be fascinated by virtual people. I got a call from a retired engineer from rural New Mexico. He had spent a lot of time tweaking the voice input on his V-person so that she would understand his very bright 3-year-old granddaughter, and had a story to tell me. He’d been remarkably successful and the little girl spent hours in happy conversation with her virtual friend. One evening a few neighbors came by to play Canasta. While they were playing, the little girl came into the adjoining room and fired up her computer. In moments an animated conversation ensued. One of the neighbors, a devout fundamentalist Christian, became terrified and insisted he smash the girl’s computer immediately: it was inhabited by the devil. He refused, of course. He told me he’d been using the virtual character to teach his granddaughter everything from her ABCs to simple math. I gave him some unpublished information on how to get her to record the granddaughter’s responses to questions, so he could check on them later.

The point is, in creative hands virtual humans already have enormous potential and the platforms are constantly improving.

Blending art, technology and a little psychology allows us to take a functional leap, decades ahead of pure artificial intelligence. Although the simple VH software of today will eventually be replaced by highly sophisticated neural nets or entirely new kinds of computing, it will be a long time before they’ll have unique human-like personalities…if ever. Meanwhile, let’s give the evolution of technology a kick in the butt by building really smart, personable virtual people today.

Because creating a believable synthetic personality is more of an art than a science, it’s important that we get a feel for how we humans handle our conscious lives. It’s part philosophy, part psychology and, believe it or not, part quantum physics. We’ll start by comparing people and computers, without getting too philosophically crazed. Any discussion of the human mind must consider consciousness. It’s a danger zone and I already know the discussions to follow will dump me smack into the boiling kettle. I’ll walk you through the important parts. Disagree and send me a nice email if you like. Coming up in chapter two we’ll explore the nature of consciousness and why it’s an essential consideration in virtual human design.

Synthespians: Virtual Acting (Chapter 13)

with Ed Hooks

Virtual people have to convince us they have wheels spinning inside. They do, of course, have electrons spinning in service of the plot, but if they don’t show it on their faces, we just don’t buy it. We’re used to seeing people think. It’s true; thought is conveyed through action.

Although I’m remarkably opinionated about acting in animation, I’m not a certified expert on the subject–Ed Hooks is. He teaches acting classes for animators internationally, and has held workshops for companies such as Disney Animation (Sydney), Tippett Studio (Berkeley), Microsoft (Redmond, Washington), Electronic Arts (Los Angeles), BioWare (Edmonton, Canada), and PDI (Redwood City, California). Among his five books, Acting for Animators: The Complete Guide to Performance Animation (Heinemann; revised edition, September 2003) has been a major hit.

The Seven Essential Concepts in Face Acting

The following concepts are interpretations of Ed Hooks’ "Seven Essential Acting Concepts." We’ve adapted them here to focus on the V-people and their faces.

1. The face expresses thoughts beneath. The brain, real or artificial, is the most alive part of us. Thinking, awareness, and reasoning are active processes that affect what’s on our face. Emotion happens as a result of thinking. Because these characters don’t have a natural link between thinking and facial expression, your job as animator is to create those links. In effect, you want your synthetic brain to emulate recognizable human cognition on the face, which leads to the illusion of real and appropriate emotions.

2. Acting is reacting. Every facial expression is a reaction to something. Even the slightest head and hand movement in reaction to what’s happening can be most convincing. If the character tilts its head as you begin to speak to it, or nods on occasion in agreement, you get the distinct feeling of a living person paying attention. A double take shows surprise. Because you have very few body parts to work with, you have a superb challenge in front of you.

3. Know your character’s objective. Your character is never static. He is always moving, even if the movement is the occasional twitch, a shift of the eye, or a blink. Your objective is to endow your character with the illusion of life. As such, it is wise to follow Shakespeare’s advice, "Hold the mirror up to nature" (Hamlet, III. ii.17-21). Notice that when a person listens, she may tilt her head to the side or glance off in the distance as she contemplates and integrates new information. When she smiles and says nice things to you, her objective is to please. Always know what your character’s objective is because it is the roadmap linking behaviors to their goals. Knowing her personality and history are essential here.

4. Your character moves continuously from action to action. Your character is doing something 100 percent of the time. There must always be life! Even if she appears to be waiting, things are going on mentally. Make a list of boredom behaviors and use them. When people talk, a good emotion extraction engine will feed her cues on how to react to what’s being said. Her actions expressing emotional responses are fluid. They flow into each other forming a face story. You should be able to tell from the character’s expression how she’s reacting to what you’re saying. Say she takes a deep breath and you see the cords on her neck tighten. They then relax. Her body slumps a bit and perhaps she nods. Always in motion, she maintains the illusion of life.

5. All action begins with movement. You can’t even do math without your face moving, exposing wheels spinning beneath. Your eyes twitch. You glance at the ceiling, pondering. Your brow furrows as you struggle with the solution. Try this experiment: Ask a friend to lie as still as possible on the floor. No movement at all. Then, when he is absolutely stone still, ask him to multiply 36 by 38. Pay close attention to his eyes. You will note that they immediately begin to shift and move. It is impossible to carry out a mental calculation without the eyes moving. Sometimes movement on the screen needs to be a bit more overt than in real life. That’s okay, even essential. It nails down the emotion. Done right, people won’t notice the exaggeration, but will get the point.

6. Empathy is audience glue. The main transaction between humans and Virtual humans has to be emotion, not words. Words alone will lose them. You will catch a viewer’s attention if your character appears to be thinking, but you will engage your viewer emotionally if your character appears to be feeling. You must get across how this V-person feels about what’s going on. If you do it successfully, the audience will care about (empathize with) those feelings. I promise you it can be done. A great autonomous character can addict an audience in ways a static animation cannot. The transaction between audience and character is in real-time and directly motivated, much as it is on stage. This is a unique acting medium, which is part live performance and part animation. It’s an opportunity for you to push things–experiment with building empathy pathways.

7. Interaction requires negotiation. You want a little theatrical heat in any discourse with a V-person. To accomplish this, remember that your character always has choices. We all do, in every waking moment. The character has to decide when and whether to answer or initiate a topic. If your character is simply mouthing words, your audience response will be boredom. Whether they know it or not, people want to be entertained by your character. Antonin Artaud famously observed that "actors are athletes of the heart." Dead talk is not entertaining. There must be emotion. Recognize that you’re working with a theatrical situation and that the viewer will crave more than a static picture.

Sure, there are loads more acting concepts we could talk about, but these seven are the hard-rock core of it. You’re faced with a unique acting challenge because you have an animated character that is essentially alive. If that character is a cartoon or anime design and personality, you’ll have to read Preston Blair, for example, to learn the principles of exaggerated cartoon acting, and then incorporate these squash and stretch type actions into your character’s personality. If you take the easier road and use a photorealistic human actor, you still must make their actions a bit larger than life, but not as magnified as cartoons demand.

The stage you set will depend on the Virtual actor’s intention. If he’s there to guide a person around a no-nonsense corporate Web site, you’ll need to think hard about how much entertainment to inject. Certainly you need some. Intelligent Virtual actors in games situations–especially full-bodied ones–present marvelous opportunities to expand this new field of acting. You’ll know their intentions. Let them lead you to design their actions. Embellish their personalities, embroider their souls, and decorate their actions. Making them bigger than life will generally satisfy.

Synthespians: The Early Years

Next I want to tell you about the clever term "Synthespian," which unfortunately I didn’t coin. I do believe it should become a part of our language.

Diana Walczak and Jeff Kleiser produced some early experimental films featuring excellent solo performances by digital human characters. For example, Nestor Sextone for President premiered at SIGGRAPH in 1988. About a year later, Kleiser and Walczak presented the female Synthespian, Dozo, in a music video: "Don’t Touch Me." These were not intelligent agents, but they were good actors. "It was while we were writing Nestor’s speech to an assembled group of ‘synthetic thespians’ that we coined the term ‘Synthespian,’" explains Jeff Kleiser. Nestor Sextone had to be animated from digitized models sculpted by Diana Walczak.

As history will note, the field of digital animation is a close, almost incestuous one. Larry Weinberg, the fellow who later created Poser, worked out some neat software that allowed Jeff and Diana to link together digitized facial expressions created from multiple maquettes she’d sculpted to define visemes. That same software allowed them to animate Nestor’s emotional expression. I’ve put a copy of this wonderful classic bit of animation on the CD-ROM, with their blessing.

Note that this viseme-linking was an early part of the development chain leading to the morph targets you see in Poser and all the high-end animation suites today. Getting your digitized character to act was difficult in those days before bones, articulated joints, and morphing skin made movement realistic. Nestor was made up of interpenetrating parts that had to be cleverly animated to look like a gestalt character without any obvious cracks or breaks or parts sticking out.

In most cases, V-people don’t have a full body to work with, just a face, and perhaps hands. Body language is such an effective communications tool, but when we just don’t have it we end up putting twice as much effort into face and upper body acting. Fortunately a properly animated face can be wonderfully expressive, as shown in Figure 13-1.


Figure 13-1: Virtual actors can really show emotion

Synthespians All Have a Purpose

A Synthespian playing a living person is probably the trickiest circumstance you’ll encounter. Depending on the situation, you want to emulate that person’s real personality closely, or exaggerate it for comedic impact or political statement. If you exaggerate features and behavior heavily you’ve entered a new art form: interactive caricature or parody.

Let’s say we’ve built a synthetic Secretary of Defense Donald Rumsfeld. The interactive theatrical situation is that we are interrupting him while he is hectically planning an attack somewhere in the world. He might be impatient and have an attitude regarding our utter stupidity and lack of patriotism for bothering him at a time like this. His listening skills might be shallow. He might continually give off the dynamic that he has better things to do. By thus exaggerating his personality, we create interest and humor. As a user, you want to interact because you feel something interesting is happening. There is comic relief, and all the while this character is making a political statement. I suspect Rumsfeld would get a kick out of such a representation, as long as it’s done in good taste.

Action conveys personality, and you can’t set up a virtual actor without knowing the character well. For example, Kermit the Frog has a definite psychology behind him. As a Web host, he is just very happy to be there. He enjoys being in the spotlight, and his behavior strongly implies he doesn’t want to be any place else. He’s happy to show you around his Web site, and he might even break out in song along the way. Occasionally he’ll complain about Miss Piggy’s lack of attention or the disadvantages of his verdant complexion.

Think first about your intention and then the character’s intention. Mae West and Will Rogers wanted to make ‘em laugh. No matter what your purpose for a Synthespian, you want it to entertain. Sometimes it may be understated. Remember that cleverness is always in style. Notice the look people get on their faces when they think they’re being clever. It’s usually an understated cockiness that shows around the eyes. The intention is to be clever, the words are smart, but remember to add that subtle touch of smugness or self-satisfaction around the eyes and the corners of the mouth.

Note: There is a new book titled Emotions Revealed: Recognizing Faces and Feelings to Improve Communication and Emotional Life, by Paul Ekman (Times Books, 2003), which is well worth your time to read. Ekman, who is professor of psychology in the department of psychiatry at the University of California Medical School, San Francisco, is one of the world’s great geniuses on the subject of the expression of emotion in the human face. His new book has more than one hundred photographs of nuanced facial expression, complete with explanations for the variances.

As an aside, I used to train counter-terrorist agents in psychological survival. One way to spot a terrorist in a crowd is that they often have facial expressions that are inappropriate to the situation. I used Ekman’s work as a reference to help my agents recognize when facial expression and body language don’t match up, an indication often exhibited by potential terrorists. You can use Ekman’s work to make sure your V-human agents have appropriate expressions for the situation.

You Are the Character

When you’ve done your homework, you’ll know your character like you know yourself. You’ll identify with the character so intensely you will have the sensation of being that character. Stage actors learn to create characters by shifting from the third person to the first person reference. Instead of saying, "My character would be afraid in this situation," a stage actor might say, while portraying the character, "I feel afraid." In your case, you are creating a second-party character, but you’re empathizing with the personality and emotions of your own creation. There is an identity between the two of you that will be both fun and compelling.

Designing animation elements for the character requires feeling them. I remember watching my daughter as she animated a baby dragon early in her career. Her natural instinct was to get inside that baby dragon and be it. I smiled as I watched her body and face contort as she acted out each part of the sequence. Her instruction had not come from me…it was intuitive. At Disney, I’ve watched animators making faces in little round mirrors dangling from extension arms above their desks. They glance in the mirror, make a face and then look at the cel and try to capture what they’ve seen. That part hasn’t changed. For us it’s glance at the mirror, glance at the screen, and then tweak a spline or morph setting. You won’t be able to do all this with the simple animation tools I’ve given you for free. Those are just to get you hooked. If you intend to learn this stuff, get ready to invest heavily in time and commitment and a fair amount in coin as well. A small investment considering the return.

If You Want to Go Further

There are great animation schools, and this continent has some of the best. My favorite is at Sheridan College in Oakville, Ontario. But there are many good schools here in the United States as well. A few years ago, most of them were a waste of money. But things have improved. Do some Web research and find which school can best help you meet your goals. There is a long-term need for talented, well-trained character animators, and in general the pay for the talented is phenomenal.

If you’re a developer, you have to be familiar with all this stuff to manage it effectively. You’re responsible for the final product. If you have animators working for you, believe in them, give them freedom, but guide them toward your vision as well. The best animated characters reflect the wisdom, vision, and artistry of their prime artists and the producers behind them. A great producer is an artist, a business person, and a technician. It’s not easy to get there, and too many producers only have the business end down. As a producer, you have to understand the artistry of production. You have to feel the emotion of good animation. How else will you know what to approve and not approve? So learn it and you’ll be way above the crowd.

I want to thank Ed Hooks for contributing his wisdom to this chapter. Remember, what you’ve read here is just a taste of what you need to learn. If you’re lucky, you’ll find a way to take a live class with Ed, who now lives in the Chicago area. It will change your perspective forever.

In the upcoming chapter, I’m going to kick it up a notch with ways to give your character true awareness of his surroundings. Imagine your well-developed character, now able not only to listen and talk, but actually to see you, look you in the eyes, and recognize you without asking. You don’t want to miss this one.

Ed Hooks, author of Acting for Animators (Heinemann, Revised Second Edition 2003), has been a theatre professional for three decades and has taught acting to both animators and actors for PDI, Lucas Learning, Microsoft, Disney Animation, and other leading companies.

© 2004 Peter Plantec

High-throughput screening

From Wikipedia, the free encyclopedia

High-throughput screening robots

High-throughput screening (HTS) is a method for scientific experimentation especially used in drug discovery and relevant to the fields of biology and chemistry. Using robotics, data processing/control software, liquid handling devices, and sensitive detectors, high-throughput screening allows a researcher to quickly conduct millions of chemical, genetic, or pharmacological tests. Through this process one can rapidly identify active compounds, antibodies, or genes that modulate a particular biomolecular pathway. The results of these experiments provide starting points for drug design and for understanding the interaction or role of a particular biochemical process in biology.

Assay plate preparation


A robot arm handles an assay plate

The key labware or testing vessel of HTS is the microtiter plate: a small container, usually disposable and made of plastic, that features a grid of small, open divots called wells. In general, modern (circa 2013) microplates for HTS have either 384, 1536, or 3456 wells. These are all multiples of 96, reflecting the original 96-well microplate with its 8 x 12 grid of wells spaced 9 mm apart. Most of the wells contain test items, depending on the nature of the experiment. These could be different chemical compounds dissolved e.g. in an aqueous solution of dimethyl sulfoxide (DMSO). The wells could also contain cells or enzymes of some type. (The other wells may be empty or contain pure solvent or untreated samples, intended for use as experimental controls.)
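As a small illustration of how these formats relate to the original 8 x 12 layout, the sketch below prints commonly cited row-and-column grids and A1-style well labels; treat the exact grids as an assumption rather than a specification.

# Higher-density plates keep the same footprint and subdivide the original
# 8 x 12 = 96-well grid: 384 and 1536 double the rows and columns, and 3456
# subdivides further (grids below are the commonly cited ones).
import string

formats = {96: (8, 12), 384: (16, 24), 1536: (32, 48), 3456: (48, 72)}

def well_label(row, col):
    # A1-style label; rows beyond 'Z' get double letters (AA, AB, ...).
    letters = string.ascii_uppercase
    prefix = "" if row < 26 else letters[row // 26 - 1]
    return f"{prefix}{letters[row % 26]}{col + 1}"

for wells, (rows, cols) in formats.items():
    print(f"{wells:>4}-well plate: {rows} rows x {cols} columns, "
          f"first well {well_label(0, 0)}, last well {well_label(rows - 1, cols - 1)}")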

A screening facility typically holds a library of stock plates, whose contents are carefully catalogued, and each of which may have been created by the lab or obtained from a commercial source. These stock plates themselves are not directly used in experiments; instead, separate assay plates are created as needed. An assay plate is simply a copy of a stock plate, created by pipetting a small amount of liquid (often measured in nanoliters) from the wells of a stock plate to the corresponding wells of a completely empty plate.

Reaction observation

To prepare for an assay, the researcher fills each well of the plate with some biological entity that they wish to conduct the experiment upon, such as a protein, cells, or an animal embryo. After some incubation time has passed to allow the biological matter to absorb, bind to, or otherwise react (or fail to react) with the compounds in the wells, measurements are taken across all the plate's wells, either manually or by a machine. Manual measurements are often necessary when the researcher is using microscopy to (for example) seek changes or defects in embryonic development caused by the wells' compounds, looking for effects that a computer could not easily determine by itself. Otherwise, a specialized automated analysis machine can run a number of experiments on the wells (such as shining polarized light on them and measuring reflectivity, which can be an indication of protein binding). In this case, the machine outputs the result of each experiment as a grid of numeric values, with each number mapping to the value obtained from a single well. A high-capacity analysis machine can measure dozens of plates in the space of a few minutes like this, generating thousands of experimental datapoints very quickly.

Depending on the results of this first assay, the researcher can perform follow up assays within the same screen by "cherrypicking" liquid from the source wells that gave interesting results (known as "hits") into new assay plates, and then re-running the experiment to collect further data on this narrowed set, confirming and refining observations.

Automation systems


A carousel system to store assay plates for high storage capacity and high speed access

Automation is an important element in HTS's usefulness. Typically, an integrated robot system consisting of one or more robots transports assay-microplates from station to station for sample and reagent addition, mixing, incubation, and finally readout or detection. An HTS system can usually prepare, incubate, and analyze many plates simultaneously, further speeding the data-collection process. HTS robots that can test up to 100,000 compounds per day currently exist.[3][4] Automatic colony pickers pick thousands of microbial colonies for high throughput genetic screening.[5] The term uHTS or ultra-high-throughput screening refers (circa 2008) to screening in excess of 100,000 compounds per day.[6]

Experimental design and data analysis

With the ability to rapidly screen diverse compounds (such as small molecules or siRNAs) and identify active ones, HTS has led to an explosion in the rate of data generated in recent years.[7] Consequently, one of the most fundamental challenges in HTS experiments is to glean biochemical significance from mounds of data, which relies on the development and adoption of appropriate experimental designs and analytic methods for both quality control and hit selection.[8] HTS research is one of the fields described by John Blume, Chief Science Officer for Applied Proteomics, Inc., as follows: soon, if a scientist does not understand some statistics or rudimentary data-handling technologies, he or she may not be considered to be a true molecular biologist and, thus, will simply become "a dinosaur."[9]

Quality control

High-quality HTS assays are critical in HTS experiments. The development of high-quality HTS assays requires the integration of both experimental and computational approaches for quality control (QC). Three important means of QC are (i) good plate design, (ii) the selection of effective positive and negative chemical/biological controls, and (iii) the development of effective QC metrics to measure the degree of differentiation so that assays with inferior data quality can be identified.[10] A good plate design helps to identify systematic errors (especially those linked with well position) and determine what normalization should be used to remove/reduce the impact of systematic errors on both QC and hit selection.[8]

Effective analytic QC methods serve as a gatekeeper for excellent quality assays. In a typical HTS experiment, a clear distinction between a positive control and a negative reference such as a negative control is an index for good quality. Many quality-assessment measures have been proposed to measure the degree of differentiation between a positive control and a negative reference. Signal-to-background ratio, signal-to-noise ratio, signal window, assay variability ratio, and Z-factor have been adopted to evaluate data quality.[8][11] Strictly standardized mean difference (SSMD) has recently been proposed for assessing data quality in HTS assays.[12][13]
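For illustration, the two metrics most often quoted, the Z'-factor and the control-based SSMD, can be computed from a plate's control wells in a few lines of Python; the readings below are invented for the example:

from statistics import mean, stdev

def z_prime_factor(pos, neg):
    """Z'-factor: 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
    Values above roughly 0.5 are conventionally read as an excellent assay window."""
    return 1 - 3 * (stdev(pos) + stdev(neg)) / abs(mean(pos) - mean(neg))

def ssmd_controls(pos, neg):
    """SSMD between controls: (mean_pos - mean_neg) / sqrt(var_pos + var_neg),
    assuming the two sets of control wells are independent."""
    return (mean(pos) - mean(neg)) / (stdev(pos) ** 2 + stdev(neg) ** 2) ** 0.5

positive = [0.95, 0.91, 0.97, 0.93, 0.96]   # hypothetical positive-control wells
negative = [0.12, 0.15, 0.10, 0.14, 0.11]   # hypothetical negative-control wells
print(f"Z' = {z_prime_factor(positive, negative):.2f}, "
      f"SSMD = {ssmd_controls(positive, negative):.1f}")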

Hit selection

A compound with a desired size of effect in an HTS is called a hit. The process of selecting hits is called hit selection. The analytic methods for hit selection in screens without replicates (usually in primary screens) differ from those with replicates (usually in confirmatory screens). For example, the z-score method is suitable for screens without replicates whereas the t-statistic is suitable for screens with replicates. The calculation of SSMD for screens without replicates also differs from that for screens with replicates.[8]

For hit selection in primary screens without replicates, the easily interpretable measures are average fold change, mean difference, percent inhibition, and percent activity. However, they do not capture data variability effectively. The z-score method and SSMD can capture data variability, based on the assumption that every compound has the same variability as the negative reference in the screen.[14][15] However, outliers are common in HTS experiments, and methods such as the z-score are sensitive to outliers and can be problematic. As a consequence, robust methods such as the z*-score method, SSMD*, the B-score method, and quantile-based methods have been proposed and adopted for hit selection.[4][8][16][17]
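A minimal sketch of the difference between the plain z-score and an outlier-resistant z*-style variant that swaps mean/SD for median/MAD, using invented readings and a common but arbitrary cutoff of |z| >= 3:

from statistics import mean, median, stdev

def z_scores(values, reference):
    """Plain z-score: how many reference standard deviations each compound's
    reading lies from the reference mean."""
    mu, sd = mean(reference), stdev(reference)
    return [(v - mu) / sd for v in values]

def robust_z_scores(values, reference):
    """z*-style score: replace mean/SD with median/MAD so a few outlying
    reference wells do not distort every score. The factor 1.4826 rescales the
    MAD to be comparable with a standard deviation for roughly normal data."""
    med = median(reference)
    mad = 1.4826 * median(abs(v - med) for v in reference)
    return [(v - med) / mad for v in values]

# Hypothetical primary-screen readings and negative-reference wells:
compounds = [0.21, 0.95, 0.18, 0.22, 0.88]
neg_reference = [0.20, 0.23, 0.19, 0.21, 0.80]   # note one outlying reference well
hits = [i for i, z in enumerate(robust_z_scores(compounds, neg_reference)) if abs(z) >= 3]
print(hits)   # indices of wells whose robust score clears the |z*| >= 3 cutoff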

In a screen with replicates, we can directly estimate variability for each compound; as a consequence, we should use SSMD or the t-statistic, which do not rely on the strong assumption that the z-score and z*-score require. One issue with the use of the t-statistic and associated p-values is that they are affected by both sample size and effect size.[18] They come from testing for no mean difference, and thus are not designed to measure the size of compound effects. For hit selection, the major interest is the size of effect in a tested compound. SSMD directly assesses the size of effects.[19] SSMD has also been shown to be better than other commonly used effect sizes.[20] The population value of SSMD is comparable across experiments and, thus, we can use the same cutoff for the population value of SSMD to measure the size of compound effects.[21]
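As a sketch of the replicate case, the snippet below computes a simple method-of-moments SSMD estimate (mean of the per-replicate differences divided by their standard deviation) together with the paired t-statistic; the triplicate readings are invented, and this is an illustration of the idea rather than the exact estimators used in the cited work:

from statistics import mean, stdev
from math import sqrt

def replicate_stats(compound_reps, neg_ref_reps):
    """For a compound measured with replicates, take the per-replicate differences
    from the matched negative-reference values, then return (SSMD estimate,
    paired t-statistic). SSMD uses the SD of the differences; t uses the
    standard error, so it also grows with the number of replicates."""
    diffs = [c - n for c, n in zip(compound_reps, neg_ref_reps)]
    d_bar, s_d, n = mean(diffs), stdev(diffs), len(diffs)
    return d_bar / s_d, d_bar / (s_d / sqrt(n))

# Hypothetical triplicate readings for one compound and the negative-reference
# averages from the same three plates:
ssmd_hat, t_stat = replicate_stats([0.81, 0.78, 0.84], [0.20, 0.22, 0.19])
print(f"SSMD = {ssmd_hat:.1f}, t = {t_stat:.1f}")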

Techniques for increased throughput and efficiency

Unique distributions of compounds across one or many plates can be employed either to increase the number of assays per plate or to reduce the variance of assay results, or both. The simplifying assumption made in this approach is that any N compounds in the same well will not typically interact with each other, or the assay target, in a manner that fundamentally changes the ability of the assay to detect true hits.

For example, imagine a plate wherein compound A is in wells 1-2-3, compound B is in wells 2-3-4, and compound C is in wells 3-4-5. In an assay of this plate against a given target, a hit in wells 2, 3, and 4 would indicate that compound B is the most likely agent, while also providing three measurements of compound B's efficacy against the specified target. Commercial applications of this approach involve combinations in which no two compounds ever share more than one well, to reduce the (second-order) possibility of interference between pairs of compounds being screened.
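The decoding step for such a layout amounts to finding the compound all of whose wells registered as hits; a minimal sketch using the three-compound example above (the layout and function name are illustrative only):

# Layout from the example above: each compound is dispensed into several wells,
# and no two compounds occupy exactly the same set of wells.
layout = {"A": {1, 2, 3}, "B": {2, 3, 4}, "C": {3, 4, 5}}

def decode_hits(hit_wells: set[int], layout: dict[str, set[int]]) -> list[str]:
    """Return the compounds whose wells all registered as hits; such a compound
    is the most likely agent, with one measurement per occupied well."""
    return [name for name, wells in layout.items() if wells <= hit_wells]

print(decode_hits({2, 3, 4}, layout))   # ['B']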

Recent advances

Automation and low volume assay formats were leveraged by scientists at the NIH Chemical Genomics Center (NCGC) to develop quantitative HTS (qHTS), a paradigm to pharmacologically profile large chemical libraries through the generation of full concentration-response relationships for each compound. With accompanying curve-fitting and cheminformatics software, qHTS data yield the half-maximal effective concentration (EC50), maximal response, and Hill coefficient (nH) for every compound in the library, enabling the assessment of nascent structure-activity relationships (SAR).[22]
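As an illustration of the curve-fitting step, the sketch below fits the Hill equation to an invented seven-point titration using SciPy; it is not the NCGC software, and the data, parameter names, and starting values are assumptions:

import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, n_h):
    """Four-parameter Hill model of a concentration-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n_h)

# Hypothetical seven-point titration for one library compound (molar concentrations).
conc = np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3])
resp = np.array([2.0, 3.0, 8.0, 35.0, 78.0, 95.0, 98.0])   # percent activity

# Fit the curve; the initial guesses keep the optimiser in a sensible range.
(bottom, top, ec50, n_h), _ = curve_fit(hill, conc, resp, p0=[0.0, 100.0, 1e-6, 1.0])
print(f"EC50 = {ec50:.2e} M, Hill coefficient = {n_h:.2f}, max response = {top:.0f}%")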

In March 2010, research was published demonstrating an HTS process that allows screening 1,000 times faster (100 million reactions in 10 hours) at one-millionth the cost (using 10⁻⁷ times the reagent volume) of conventional techniques, using drop-based microfluidics.[22] Drops of fluid separated by oil replace microplate wells and allow analysis and hit sorting while reagents are flowing through channels.

In 2010, researchers developed a silicon sheet of lenses that can be placed over microfluidic arrays to allow the fluorescence measurement of 64 different output channels simultaneously with a single camera.[23] This process can analyze 200,000 drops per second.

Whereas traditional HTS drug discovery uses purified proteins or intact cells, a very interesting recent development of the technology is the use of intact living organisms, such as the nematode Caenorhabditis elegans and the zebrafish (Danio rerio).[24]

Increasing utilization of HTS in academia for biomedical research

HTS is a relatively recent innovation, made feasible largely through modern advances in robotics and high-speed computer technology. It still takes a highly specialized and expensive screening lab to run an HTS operation, so in many cases a small- to moderate-size research institution will use the services of an existing HTS facility rather than set up one for itself.

There is a trend in academia for universities to become their own drug discovery enterprises.[25] These facilities, which normally are found only in industry, are now increasingly found at universities as well. UCLA, for example, features an open-access HTS laboratory, the Molecular Screening Shared Resources (MSSR, UCLA), which can screen more than 100,000 compounds a day on a routine basis. The open-access policy ensures that researchers from all over the world can take advantage of this facility without lengthy intellectual-property negotiations. With a compound library of over 200,000 small molecules, the MSSR has one of the largest compound decks of any university on the west coast. The MSSR also offers full functional genomics capabilities (genome-wide siRNA, shRNA, cDNA and CRISPR) that complement its small-molecule efforts: functional genomics leverages HTS capabilities to execute genome-wide screens that examine the function of each gene in the context of interest by either knocking each gene out or overexpressing it. Parallel access to a high-throughput small-molecule screen and a genome-wide screen enables researchers to perform target identification and validation for a given disease, or to determine the mode of action of a small molecule. The most accurate results are obtained with "arrayed" functional genomics libraries, i.e. libraries in which each position contains a single construct, such as a single siRNA or cDNA. Functional genomics is typically paired with high-content screening using, for example, epifluorescence microscopy or laser scanning cytometry.

The University of Illinois also has a facility for HTS, as does the University of Minnesota. The Life Sciences Institute at the University of Michigan houses the HTS facility in the Center for Chemical Genomics. The Rockefeller University has an open-access HTS Resource Center HTSRC (The Rockefeller University, HTSRC), which offers a library of over 380,000 compounds. Northwestern University's High Throughput Analysis Laboratory supports target identification, validation, assay development, and compound screening. The non-profit Sanford Burnham Prebys Medical Discovery Institute also has a long-standing HTS facility in the Conrad Prebys Center for Chemical Genomics which was part of the MLPCN.

In the United States, the National Institutes of Health or NIH has created a nationwide consortium of small-molecule screening centers to produce innovative chemical tools for use in biological research. The Molecular Libraries Probe Production Centers Network, or MLPCN, performs HTS on assays provided by the research community, against a large library of small molecules maintained in a central molecule repository.

22 questions for a less toxic planet

Jeff Glorfeld is a former senior editor of The Age newspaper in Australia, and is now a freelance journalist based in California, US.
Original link:  https://cosmosmagazine.com/chemistry/22-questions-for-a-less-toxic-planet 
 

Understanding how chemicals really work in the environment is the first step to getting pollution under control, according to international experts. Jeff Glorfeld reports.


Chemicals in the environment interact with plants, animals and each other in unpredictable ways. Credit: Jose A. Bernat Bacete / Getty Images
Unchecked chemical-based environmental pollution threatens our supplies of food, water and energy, damages human health, leads to biodiversity loss, and heightens the advance of climate change and associated extreme weather, according to an international consortium of experts working under the auspices of the Society of Environmental Toxicology and Chemistry (SETAC). What’s more, research and regulatory bodies around the world fail to understand the magnitude of the problem.

The SETAC team has called for urgent action in establishing a fundamental change to the way we study and communicate the impacts and control of chemicals in the natural environment.

Their report, published in the journal Environmental Toxicology and Chemistry, has identified 22 research questions that matter the most to the broad community so that research and regulatory efforts can be applied to the most pressing problems.

The report centres on the United Nations' 2030 Agenda for Sustainable Development and its 17 sustainable development goals, which came into force in 2016 with the aim of ending poverty, protecting the planet and ensuring prosperity for all, and which depend for their success on a healthy and productive environment.

Although the report is European-based, its authors took input from SETAC members in Africa, the Asia Pacific region, Europe, Latin America and North America.

The report says many questions need to be addressed about the risks of chemicals in the environment, “and it will be impossible to tackle them all”.

“There is therefore an urgent need to identify the research questions that matter most to the broad community across sectors and multiple disciplines so that research and regulatory efforts can be focused on the most pressing ones.”

Our understanding of how chemicals affect the environment and human health is still poorly developed, the report says. For example, most research and regulation considers the impacts of individual substances, yet in the real environment chemicals are present with hundreds or thousands of other substances and influencing agents.

Studies to support research and regulation tend to focus on single species rather than populations and communities. Variations in the nature of the environment in time and space, which will affect chemical influence, are hardly accounted for in research and risk assessments.

“Considering chemicals in isolation can result in a simplistic assessment that doesn’t account for the complexity of the real world,” says one of the study’s lead authors, Alistair Boxall, from the University of York, in Britain.

“A fish won't be exposed to a single chemical but to hundreds if not thousands of chemicals,” he says. “Other pressures, such as temperature stress, will also be at play, and it is likely that these components work together to adversely affect ecosystem health.”

The report says Europe faces significant challenges regarding the risk assessment and management of chemicals and other factors, which constrains the region's ability to achieve sustainable development.

“This study is the first attempt to set a research agenda for the European research community for the assessment and management of stressor impacts on environmental quality,” it says. “The questions arising from this exercise are complex. To answer them, it will be necessary to adopt a systems approach for environmental risk assessment and management. In particular, it is important that we establish novel partnerships across sectors, disciplines, and policy areas, which requires new and effective collaboration, communication, and co-ordination.”

Boxall says studies similar to this one are being performed in North America, Latin America, Africa, Asia and Australasia. “Taken together, these exercises should help to focus global research into the impacts of chemicals in the environment.”

The researchers say they hope their study is a first step in a longer process. “The results of this project now need to be disseminated to the policy, business, and scientific communities. The output should be used for setting of research agendas and to inform the organisation of scientific networking activities to discuss these questions in more detail and identify pathways for future work.”

The 22 questions

  1. How can interactions among different stress factors operating at different levels of biological organisation be accounted for in environmental risk assessment?
  2. How do we improve risk assessment of environmental stressors to be more predictive across increasing environmental complexity and spatiotemporal scales?
  3. How can we define, distinguish, and quantify the effects of multiple stressors on ecosystems?
  4. How can we develop mechanistic modelling to extrapolate adverse effects across levels of biological organisation?
  5. How can we properly characterise the chemical use, emissions, fate, and exposure at different spatial and temporal scales?
  6. Which chemicals are the main drivers of mixture toxicity in the environment?
  7. What are the key ecological challenges arising from global megatrends?
  8. How can we develop, assess, and select the most effective mitigation measures for chemicals in the environment?
  9. How do sublethal effects alter individual fitness and propagate to the population and community levels?
  10. Biodiversity and ecosystem services: What are we trying to protect, where, when, why, and how?
  11. What approaches should be used to prioritise compounds for environmental risk assessment and management?
  12. How can monitoring data be used to determine whether current regulatory risk‐assessment schemes are effective for emerging contaminants?
  13. How can we improve in silico methods for environmental fate and effects estimation?
  14. How can we integrate evolutionary and ecological knowledge to better determine vulnerability of populations and communities to stressors?
  15. How do we create high‐throughput strategies for predicting environmentally relevant effects and processes?
  16. How can we better manage, use, and share data to develop more sustainable and safer products?
  17. Which interactions are not captured by currently accepted mixture toxicity models?
  18. How can we assess the environmental risk of emerging and future stressors?
  19. How can we integrate comparative risk assessment, life cycle analysis, and risk–benefit analysis to identify and design more sustainable alternatives?
  20. How can we improve the communication of risk to different stakeholders?
  21. How do we detect and characterise difficult‐to‐measure substances in the environment?
  22. Where are the hotspots of key contaminants around the globe?

Cooperative

From Wikipedia, the free encyclopedia ...