
OpenTox USA 2013 Speaker Program

OpenTox InterAction Meeting

Speaker Program

 

Organised in Collaboration with ToxBank

29 - 30 October 2013

Hamner Conference Center, North Carolina Biotechnology Center, Raleigh-Durham, USA

Tuesday, October 29

09:00 - Section A. The Data Foundation for Predictive Toxicology, chaired by Scott Auerbach (NIEHS)

OpenTox - Development of Open Standards, Open Source and Open Data supporting Predictive Toxicology, Barry Hardy (Douglas Connect)

One important goal of OpenTox is to support the development of an Open Standards-based predictive toxicology framework that provides unified access to toxicological data and models. OpenTox supports the development of tools for the integration of data, for the generation and validation of in silico models for toxic effects, libraries for the development and integration of modelling algorithms, and scientifically sound validation and reporting routines.

The OpenTox Application Programming Interface (API) is an important open standards development for software development purposes. It provides a specification against which development of global interoperable toxicology resources by the broader community can be carried out. The use of OpenTox API-compliant web services to communicate instructions between linked resources with URI addresses supports the use of a wide variety of commands to carry out operations such as data integration, algorithm use, model building and validation. The OpenTox Framework currently includes, with its APIs, services for compounds, datasets, features, algorithms, models, ontologies, tasks, validation, reporting, investigations, studies, assays, and authentication and authorisation, which may be combined into multiple applications satisfying a variety of different user needs. As OpenTox creates a semantic web for toxicology, it should be an ideal framework for incorporating toxicology data, ontology and modelling developments, thus supporting both a mechanistic framework for toxicology and best practices in statistical analysis and computational modelling.
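As a rough illustration of the style of interaction the API enables, the sketch below shows how a client might call OpenTox-style REST services from Python. The host name, dataset identifier and model identifier are hypothetical placeholders, and real deployments may additionally require authentication and authorisation tokens.

```python
# Minimal sketch of interacting with OpenTox-style REST services.
# The host, dataset id, and model id below are hypothetical placeholders;
# real deployments expose their own URIs and may require authentication.
import requests

BASE = "https://opentox.example.org"  # hypothetical OpenTox-compliant host

# Retrieve a dataset representation; OpenTox services typically interoperate
# through content negotiation over standard media types such as RDF.
dataset_uri = f"{BASE}/dataset/42"          # placeholder dataset URI
dataset = requests.get(dataset_uri, headers={"Accept": "application/rdf+xml"})
dataset.raise_for_status()

# Ask a model service to predict properties for that dataset by POSTing the
# dataset URI; resources are passed around by reference (URI).
model_uri = f"{BASE}/model/lazar-mutagenicity"   # placeholder model URI
task = requests.post(model_uri, data={"dataset_uri": dataset_uri})
task.raise_for_status()
print("Result or task URI:", task.text.strip())
```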

In this presentation we will review OpenTox-related developments supported by Open Standards:
a) Registration of data resources within OpenTox as demonstrated with Ambit;
b) ToxPredict - a web-based predictive toxicology application;
c) Bioclipse-OpenTox – integration of distributed OpenTox resources in a workbench evaluating toxicological properties and chemical liabilities;
d) Generation of OpenTox-compliant validated models through lazar;
e) Incorporation of metabolism data and predictions by SMARTCyp and XMetDB;
f) Incorporation of OpenTox resources in Taverna prediction workflows;
g) 3D Chemical Space Mapping and Visualisation with CheS-Mapper;
h) Development of ToxBank data warehouse infrastructure supporting integrated data analysis across functional and omics datasets.

 

The NTP DrugMatrix Database, Scott Auerbach (NIEHS)

DrugMatrix is the scientific community's largest molecular toxicology reference database and informatics system. DrugMatrix is populated with the comprehensive results of thousands of highly controlled and standardized toxicological experiments in which rats or primary rat hepatocytes were systematically treated with therapeutic, industrial, and environmental chemicals at both non-toxic and toxic doses. Following administration of these compounds in vivo, comprehensive studies of their effects were carried out at multiple time points and in multiple target organs. These studies included extensive pharmacology, clinical chemistry, hematology, histology, body and organ weights, and clinical observations. Additionally, a curation team extracted all relevant information on the compounds from the literature, the Physicians' Desk Reference, package inserts, and other relevant sources. The heart of the DrugMatrix database is large-scale gene expression data generated by extracting RNA from the toxicologically relevant organs and tissues and applying these RNAs to the GE Codelink™ 10,000 gene rat array and more recently the Affymetrix whole genome 230 2.0 rat GeneChip® array. DrugMatrix contains toxicogenomic profiles for 638 different compounds; these compounds include FDA approved drugs, drugs approved in Europe and Japan, withdrawn drugs, drugs in preclinical and clinical studies, biochemical standards, and industrial and environmental toxicants. Contained in the database are 148 scorable genomic signatures derived using MOSEK computational software that cover 96 distinct phenotypes. The signatures are informative of organ-specific pathology (e.g., hepatic steatosis) and mode of toxicological action (e.g., PXR activation in the liver). The phenotypes cover a number of common target tissues in toxicity testing (including liver, kidney, heart, bone marrow, spleen and skeletal muscle). The primary value that DrugMatrix provides to the toxicology community is in its capacity to use toxicogenomic data to perform rapid toxicological evaluations. Further value is provided by DrugMatrix ontologies that help characterize mechanisms of pharmacological/toxicological action and identify potential human toxicities. Overall, DrugMatrix allows a toxicologist to formulate a comprehensive picture of toxicity with greater efficiency than traditional methods.

 

ToxCast High Throughput Screening Assay Annotation and Data Analysis Pipeline, Matt Martin (US EPA)

The ToxCast assay library currently includes 328 high-throughput screening (HTS) assays with a total of 555 readouts or components. These assays assess a battery of cellular, pathway, protein and gene inhibitions and stimulations, generating a total of 826 assay endpoints (assay_component_endpoints). Selecting assay endpoints raises questions such as suitable cell type, assay design specifics, protein form, and detection method. To simplify these concepts, the BioAssay Ontology was applied to annotate 36 descriptors for assay design and target information and to create a scheme for identifying assay_component_endpoints. Assay design information is distinguished by assay design types (n=22), detection technology types (n=11), and cell types (n=36), which span 14 species and 18 tissues. ToxCast assay target information is separated by technological target mapped to gene(s) (n=381), technological target types (n=15), intended target genes (n=362), intended target families (n=23), intended target types (n=14), and biological process targets (n=11). Differentiating technological and intended target information creates a distinction between the measured target and the purpose or intention of running the assay. For example, a cell proliferation assay was run in T47D cells in which the primary readout and target was cell proliferation. However, the intention of the assay was to detect estrogen receptor agonists, as T47D cells are estrogen responsive. For purposes of quality control and data aggregation, we are now able to rapidly group this particular assay with other estrogen receptor targeting assays. In conjunction with assay annotation, we have also developed a data analysis pipeline that encompasses the many stages of HTS data processing, including raw data management, chemical and assay mapping, data normalization, outlier detection, concentration-response modeling and hit-calling. We have created an 8-level system for analyzing the millions of concentration-response curves generated as part of the ToxCast research program. Together, the assay annotation and data analysis pipeline foster data transparency and sharing, improving downstream modeling and increasing the odds of incorporating HTS data into the chemical safety decision process.
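As a purely illustrative sketch of the concentration-response modeling and hit-calling stages mentioned above (not the actual ToxCast pipeline code), the following snippet fits a three-parameter Hill model to a hypothetical response series and applies a naive efficacy cutoff.

```python
# Illustrative sketch: fit a Hill model to one concentration-response series
# and make a simple hit call. Data and cutoff are invented for demonstration.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ac50, slope):
    """Three-parameter Hill model for response as a function of concentration."""
    return top / (1.0 + (ac50 / conc) ** slope)

# Hypothetical normalized responses (% of control) at micromolar concentrations.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
resp = np.array([2.0, 5.0, 9.0, 22.0, 48.0, 71.0, 80.0])

params, _ = curve_fit(hill, conc, resp, p0=[80.0, 10.0, 1.0], maxfev=10000)
top, ac50, slope = params

# A naive hit call: active if the fitted top exceeds a response cutoff.
cutoff = 20.0
print(f"AC50 ~ {ac50:.1f} uM, top ~ {top:.1f}%, hit = {top > cutoff}")
```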

 

The CEBS Database, a Structure for Data Integration across Studies, Asif Rashid (NIEHS)

The Chemical Effects in Biological Systems (CEBS) Database at the US National Toxicology Program (NTP) is designed to archive legacy NTP data and to support integration across studies in CEBS. CEBS also houses public studies from industry, academic labs, and other government labs, making it a resource for the environmental health sciences. CEBS houses primary data from individual animals and from in vitro exposures of cultured cells. This presentation will give three examples of accessing CEBS by means of the user interface, discuss metadata in CEBS and the CEBS data dictionary, and briefly discuss a project to make CEBS legacy data available using an RDF triple store.
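To make the RDF triple-store idea concrete, the minimal sketch below loads a few study-level triples and queries them with SPARQL in Python; the vocabulary and data are invented for illustration and are not actual CEBS terms.

```python
# Illustrative sketch of querying study data exposed as RDF triples.
# The namespace, predicates and values below are hypothetical examples.
from rdflib import Graph

TURTLE = """
@prefix ex: <http://example.org/cebs/> .
ex:study123 ex:testArticle "Acetaminophen" ;
            ex:species     "Rattus norvegicus" ;
            ex:endpoint    ex:ALT_increase .
"""

g = Graph()
g.parse(data=TURTLE, format="turtle")

# Find studies that report a particular endpoint.
query = """
PREFIX ex: <http://example.org/cebs/>
SELECT ?study ?article WHERE {
    ?study ex:endpoint ex:ALT_increase ;
           ex:testArticle ?article .
}
"""
for study, article in g.query(query):
    print(study, article)
```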

  

Implementation of Systematic Review, Kristina Thayer (NTP)

The NTP Office of Health Assessment and Translation (OHAT) conducts literature-based evaluations to assess the evidence that environmental chemicals, physical substances, or mixtures (collectively referred to as "substances") cause adverse health effects and provides opinions on whether these substances may be of concern given what is known about current human exposure levels. OHAT also organizes state-of-the-science evaluations to address issues of importance in environmental health sciences. In 2012, OHAT began implementing systematic review methodology to carry out these evaluations to enhance transparency for reaching and communicating evidence assessment conclusions. The systematic review format provides increased transparency through a detailed protocol that outlines each step in an evaluation including procedures used to perform a literature search, determine whether studies are relevant for inclusion, extract data from studies, assess study quality, and synthesize data for reaching conclusions. The method for data synthesis includes steps to assess confidence within an evidence stream (i.e., human, animal, and other relevant data separately) and then to integrate across evidence streams to reach hazard identification conclusions. This presentation will discuss some of the basic concepts of systematic review and highlight the software-based information management tools utilized by OHAT in order to implement systematic review, visually display data, and manage data warehousing for manually curated human, animal, and in vitro studies in a format that facilitates data sharing and mining across agencies and the public.

 

COSMOS Database as a basis for ontology-driven data mining, Chihae Yang (Altamira)

The COSMOS consortium, a part of the SEURAT-1 cluster, has been engaged in the development of computational approaches to enable alternatives to animal testing. The COSMOS database provides for the management and sharing of chemical, biological and toxicity data to support these activities. This database uses a compound-centered data model that can store both chemical and toxicological data. The chemistry content includes a cosmetics inventory compiled from EU COSING and US PCPC for cosmetics ingredients and related chemicals. The database supports dose-level effects data from repeated dose toxicity studies, metabolism information and dermal penetration/absorption data. A user-friendly web-based interface provides a searching/retrieval system as well as links to other SEURAT resources. The data have been collected, curated, quality-controlled, stored and managed in a flexible and sustainable manner to support predictive modeling tasks. Many such modeling tasks begin by connecting biological effects and chemicals involved in pathways. The in vivo oral repeated dose toxicity database (oRepeatToxDB) is part of the COSMOS DB. The controlled vocabulary for oRepeatToxDB allows phenotypic effects to be linked to cellular events. Toxicity effects observed at target organ sites have been organized hierarchically to relate organs to tissues to cells, while also mapping biological processes to phenotypic effects. Thus, oRepeatToxDB is well suited for data mining to elucidate site/effect combinations and to serve as a basis for building a toxicity ontology that relates chemistry mechanistically to effects in toxicity pathways. A case study investigating the relationship between liver steatosis and fibrosis and the underlying morphological changes caused by, for example, analogs of retinoids as well as aromatic amines will be discussed from a mechanistic perspective. This methodology provides a systematic approach for investigating chemically-induced toxicity and elucidating the underlying mechanism, and may further guide studies to determine the mode of action for hepatotoxicity.
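The hierarchical site/effect organization described above can be pictured with a small, invented data structure; the terms below are illustrative examples rather than actual oRepeatToxDB vocabulary.

```python
# Illustrative sketch of a hierarchical organ -> tissue -> cell -> effect
# organization; all terms here are hypothetical examples.
SITE_HIERARCHY = {
    "liver": {
        "hepatic lobule": {
            "hepatocyte": ["steatosis", "hypertrophy", "necrosis"],
        },
        "portal tract": {
            "bile duct epithelial cell": ["hyperplasia"],
        },
    },
}

def effects_for_organ(organ):
    """Collect all phenotypic effects recorded anywhere under an organ."""
    effects = []
    for tissue in SITE_HIERARCHY.get(organ, {}).values():
        for cell_effects in tissue.values():
            effects.extend(cell_effects)
    return sorted(set(effects))

print(effects_for_organ("liver"))
```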

 

12:00 - Section B. Development and Application of Predictive Toxicology Models, chaired by Barry Hardy (Douglas Connect)

Evidence-based toxicology - the toolbox for quality assurance of Tox-21c tools, Thomas Hartung (Johns Hopkins) (see Slides PDF)

A mechanistic toxicology has evolved over the last decades that relies to a large extent on methodologies which substitute or complement traditional animal tests. The accompanying biotechnology and informatics revolution has made such technologies broadly available and useful. Regulatory toxicology has only slowly begun to embrace these new approaches. Major validation efforts, however, have delivered the evidence that new approaches do not necessarily lower safety standards and can be integrated into regulatory safety assessments, especially in integrated testing strategies. Political pressures, especially in the EU, such as the REACH legislation and the 7th amendment to the cosmetics legislation, further prompt the need for new approaches. In the US, especially the NAS vision report for a toxicology in the 21st century and its most recent adaptation by the US EPA for their toxicity testing strategy have initiated a debate on how to create a novel approach based on human cell cultures, lower species, high-throughput testing and modeling. The report suggests moving away from traditional (animal) testing to modern technologies based on pathways of toxicity. These pathways of toxicity could be modeled in relatively simple cell tests, which can be run by robots. The goal is to develop a public database for such pathways, the Human Toxome, to enable scientific collaboration and exchange.
The problem is that the respective science is only emerging. What will be needed is the Human Toxome as the comprehensive pathway list, an annotation of cell types, species, toxicant classes and hazards to these pathways, an integration of information in systems toxicology approaches, in-vitro-to-in-vivo extrapolation by reverse dosimetry and, finally, making sense of the data, most probably in a probabilistic way. Since September 2011, the NIH has been funding our Human Toxome project (humantoxome.com) through a transformative research grant. The project involves US EPA ToxCast, the Hamner Institute, Agilent and several members of the Tox-21c panel. The new approach is shaped around pro-estrogenic endocrine disruption as a test case.
Early on, the need for quality assurance of the new approaches, as a sparring partner for their development and implementation, was noted. The Evidence-based Toxicology Collaboration (EBTC, www.ebtox.com) was created in the US and Europe in 2011 and 2012, respectively. This collaboration of representatives from all stakeholder groups aims to develop tools of Evidence-based Medicine for toxicology, with the secretariat run by CAAT. Altogether, Tox-21c and its implementation activities, including the Human Toxome and the EBTC, promise a credible approach to revamp regulatory toxicology.

 

Supporting an Integrated Data Analysis across SEURAT-1 through the ToxBank Data Warehouse, Glenn Myatt (Leadscope)

The SEURAT-1 (Safety Evaluation Ultimately Replacing Animal Testing-1) research cluster comprises seven EU FP7 Health projects and is co-financed by Cosmetics Europe. The aim is to generate a proof-of-concept to show how the latest in vitro and in silico technologies, systems toxicology and toxicogenomics can be combined to deliver a test replacement for repeated dose systemic toxicity testing on animals. The SEURAT-1 strategy is to adopt a mode-of-action framework to describe repeated dose toxicity and to derive predictions of in vivo toxicity responses. ToxBank is the cross-cluster infrastructure project whose activities include the development of a data warehouse to provide a web-accessible shared repository of research data and protocols. Experiments are generating dose-response data over multiple timepoints using different omics platforms, including transcriptomics, proteomics, metabolomics, and epigenetics, over a variety of different cell lines and a common set of reference compounds. Experimental data is also being generated from functional assays and bioreactors, and supplemented with in silico approaches including kinetic information. This complex and heterogeneous data is being consolidated and harmonized through the ToxBank data warehouse. It is being organized in order to perform an integrated data analysis and ultimately predict repeated dose systemic toxicity. Core technologies used include the ISA-Tab universal data exchange format, Representational State Transfer (REST) web services, the W3C Resource Description Framework (RDF) and the OpenTox standards. We describe the design of the data warehouse based on cluster requirements, the implementation based on open standards, and illustrate its use with a data analysis case study.
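As a minimal sketch of how a client might retrieve study metadata from such a warehouse over REST, the snippet below fetches a tab-delimited ISA-Tab investigation file and reads a few of its fields. The host, path and access details are hypothetical; a real ToxBank deployment defines its own URIs and requires authenticated access.

```python
# Minimal sketch: pull an ISA-Tab investigation file from a (hypothetical)
# REST endpoint and read a few of its tab-delimited fields.
import requests

INVESTIGATION_URI = "https://toxbank.example.org/investigation/123/i_Investigation.txt"

resp = requests.get(INVESTIGATION_URI)
resp.raise_for_status()

# ISA-Tab investigation files are tab-delimited: a label in the first column,
# values in the following columns.
fields = {}
for line in resp.text.splitlines():
    parts = line.split("\t")
    if len(parts) >= 2 and parts[0] and parts[1]:
        fields[parts[0]] = parts[1].strip('"')

print(fields.get("Investigation Title"))
print(fields.get("Study Assay Technology Type"))
```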

 

Creation of microfluidic bioreactor to assess in vitro long-term fibrogenic toxicity of drugs and cosmetics on liver, Pau Sancho-Bru (Institut d’Investigacions Biomèdiques August Pi i Sunyer)

The European cosmetics industry (Cosmetics Europe), together with the European Commission, launched a Research Initiative focused on the "Safety Evaluation Ultimately Replacing Animal Testing (SEURAT)". SEURAT-1 aims to fill current gaps in scientific knowledge and accelerate the development of non-animal test methods. In this respect, HeMiBio is one of the six research projects funded under the SEURAT-1 cluster umbrella, with the specific aim of developing a device that simulates the complex structure of the human liver for preclinical toxicity testing. One of the frequent adverse outcomes in toxicity testing under chronic exposure is liver fibrosis. Liver fibrosis develops as a response to persistent liver injury. In most cases, hepatocytes are the primary target cells due to their high metabolic activity, but Hepatic Stellate Cells (HSC), Liver Sinusoidal Endothelial Cells (LSEC) or macrophages may also be directly targeted. Irrespective of the molecular initiating event, HSC activation and the acquisition of a myofibroblastic state is the key event leading to extracellular matrix deposition and liver fibrosis.
In order to evaluate this process in vitro in a repeated dose toxicity testing setting, we need to develop new tools able to mimic the complex interactions among liver cells that occur during a fibrogenic stimulus. In vitro fibrosis assessment will require: 1) expansion of current knowledge on the different liver cell types, particularly HSCs and LSECs; 2) a reproducible human hepatocyte, HSC and LSEC cell source; 3) co-culture systems to maintain cells in an optimal phenotype; and 4) a device able to keep liver cells in a co-culture setting while maintaining their interactions. This presentation will introduce the efforts taken by HeMiBio to accomplish these four goals and will focus specifically on the work performed to characterize primary HSCs and to develop methods for obtaining HSCs from induced pluripotent stem cells for toxicity testing.

 

Advanced Multi-Label Classification Methods for the Prediction of ToxCast Endpoints, Stefan Kramer (Johannes Gutenberg University of Mainz)

The data from the US EPA ToxCast program has been the subject of active discussion in the literature recently. In the talk, I will provide a balanced view on the quality and predictability of the data from a statistical point of view and present the results of large-scale computational experiments along those lines. I will show that, using so-called multi-label classification, dependencies among toxic effects in the ToxCast data set can be exploited successfully. A filtering step based on an internal leave-one-out cross-validation removes those endpoints that cannot be predicted better than random guessing, as well as those that do not benefit from a joint prediction together with other endpoints by multi-label classification. As a result of our experiments, we obtain a list of in vivo endpoints that can be predicted with some confidence and a set of related endpoints that benefit from a concerted prediction.
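The following sketch illustrates the general idea (it is not the authors' pipeline): a multi-label classifier that exploits dependencies among endpoints, combined with a crude per-endpoint screen against a majority-class baseline as a stand-in for the leave-one-out filter, all on synthetic data.

```python
# Illustrative sketch of multi-label classification with an endpoint filter;
# descriptors, endpoints and the filtering rule are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import ClassifierChain
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 64))   # hypothetical binary descriptors
Y = rng.integers(0, 2, size=(200, 3))    # three hypothetical in vivo endpoints
Y[:, 1] = (X[:, 0] & X[:, 1]) | Y[:, 0]  # give two endpoints shared signal

# Classifier chains exploit dependencies among labels by feeding earlier
# predictions into later ones.
chain = ClassifierChain(RandomForestClassifier(n_estimators=100, random_state=0))
chain.fit(X, Y)
print("joint predictions for first 3 rows:\n", chain.predict(X[:3]))

# Crude per-endpoint screen: keep endpoints whose cross-validated accuracy
# beats the majority-class baseline (a stand-in for the leave-one-out filter).
for j in range(Y.shape[1]):
    baseline = max(Y[:, j].mean(), 1 - Y[:, j].mean())
    acc = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                          X, Y[:, j], cv=5).mean()
    print(f"endpoint {j}: accuracy {acc:.2f} vs baseline {baseline:.2f}")
```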

 

Complementary PLS and KNN algorithms for improved 3D-QSDAR consensus modeling of AhR binding and other toxicological endpoints, Richard Beger (US FDA)

A consensus partial least squares (PLS)-similarity based k nearest neighbors (KNN) model was developed utilizing 3D-SDAR (three-dimensional spectral data-activity relationship) fingerprint descriptors for prediction of the log(1/EC50) values of a dataset of 94 aryl hydrocarbon receptor (AhR) binders. This consensus model was constructed from a PLS model utilizing 10 ppm x 10 ppm x 0.5 Å bins and 7 latent variables (test set R² of 0.617), and a KNN model using 2 ppm x 2 ppm x 0.5 Å bins and 6 neighbors (test set R² of 0.622). Compared to the individual models, an improvement in predictive performance of approximately 10.5% (test set R² of 0.685) was observed. Further experiments indicated that this improvement is likely an outcome of the complementarity of the information contained in 3D-SDAR matrices of different bin granularity. For similarly sized data sets of AhR binders the consensus KNN and PLS models compare favorably to earlier reports. The ability of PLS QSDAR models to provide structural interpretation was illustrated by a projection of the most frequently occurring bins onto the standard coordinate space, thus allowing identification of structural features related to toxicity. Consensus QSDAR modeling results for other toxicological endpoints will also be provided.

The views presented in this article are those of the authors and do not necessarily reflect those of the US Food and Drug Administration.  No official endorsement is intended nor should be inferred.
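The consensus step itself can be illustrated with a simple unweighted average of PLS and KNN predictions, as in the sketch below; the descriptors and activities are synthetic, and the 3D-SDAR binning is not reproduced.

```python
# Illustrative sketch of a PLS/KNN consensus regression on synthetic data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.random((94, 300))                            # stand-in for binned fingerprints
y = X[:, :10].sum(axis=1) + rng.normal(0, 0.3, 94)   # synthetic log(1/EC50) values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

pls = PLSRegression(n_components=7).fit(X_tr, y_tr)
knn = KNeighborsRegressor(n_neighbors=6).fit(X_tr, y_tr)

pred_pls = pls.predict(X_te).ravel()
pred_knn = knn.predict(X_te)
pred_consensus = (pred_pls + pred_knn) / 2.0         # simple unweighted consensus

for name, pred in [("PLS", pred_pls), ("KNN", pred_knn), ("consensus", pred_consensus)]:
    print(f"{name}: R2(test) = {r2_score(y_te, pred):.3f}")
```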

 

Wednesday, October 30

09:00 - Section C. Extrapolation and Integration of Predictive Toxicology Models, chaired by Rusty Thomas (US EPA)

In Vitro and In Vivo Extrapolation in Predictive Toxicology, Harvey Clewell (Hamner Institute)

The field of toxicology is currently undergoing a global paradigm shift toward the use of in vitro approaches for assessing the risks of chemicals and drugs, yielding results that are more rapid and more mechanistically based than current approaches relying primarily on in vivo testing. However, reliance on in vitro data entails a number of new challenges associated with translating the in vitro results to corresponding in vivo exposures. Physiologically-based pharmacokinetic (PBPK) models provide an effective framework for conducting quantitative in vitro to in vivo extrapolation (IVIVE). Their physiological structure facilitates the incorporation of in silico- and in vitro-derived chemical-specific parameters in order to predict in vivo absorption, distribution, metabolism and excretion. In particular, the combination of in silico and in vitro parameter estimation with PBPK modeling can be used to predict the in vivo exposure conditions that would produce chemical concentrations in the target tissue equivalent to the concentrations at which effects were observed in in vitro assays of tissue/organ toxicity. This presentation will describe the various elements of IVIVE and highlight key aspects of the process, including: (1) characterization of free concentration, metabolism, and cellular uptake in the toxicity assay; (2) conversion of in vitro data to equivalent PBPK model inputs; and (3) potential complications associated with metabolite toxicity. Two examples of PBPK-based IVIVE will be described: a simple approach using whole hepatocyte clearance and plasma binding that is suitable for a high-throughput environment, and a more complicated approach for a case of metabolite toxicity.
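One simplified reverse-dosimetry calculation of the first kind, under strong assumptions (complete oral absorption, linear kinetics, steady state, a well-stirred liver model), is sketched below; all parameter values are illustrative and are not taken from the presentation.

```python
# Simplified IVIVE sketch: estimate the oral dose whose steady-state plasma
# concentration matches an in vitro effect concentration. Parameters are
# illustrative placeholders, not values from the presentation.

BODY_WEIGHT_KG = 70.0
GFR_L_PER_H = 6.7                 # approximate adult glomerular filtration rate
LIVER_BLOOD_FLOW_L_PER_H = 90.0   # approximate hepatic blood flow

def css_per_unit_dose(cl_int_l_per_h, f_ub):
    """Steady-state plasma concentration (mg/L) from a 1 mg/kg/day oral dose."""
    dose_rate_mg_per_h = 1.0 * BODY_WEIGHT_KG / 24.0
    cl_renal = GFR_L_PER_H * f_ub
    cl_hepatic = (LIVER_BLOOD_FLOW_L_PER_H * f_ub * cl_int_l_per_h /
                  (LIVER_BLOOD_FLOW_L_PER_H + f_ub * cl_int_l_per_h))
    return dose_rate_mg_per_h / (cl_renal + cl_hepatic)

def oral_equivalent_dose(in_vitro_ac50_mg_per_l, cl_int_l_per_h, f_ub):
    """Oral dose (mg/kg/day) predicted to give a plasma Css equal to the in vitro AC50."""
    return in_vitro_ac50_mg_per_l / css_per_unit_dose(cl_int_l_per_h, f_ub)

# Example: hepatocyte-derived intrinsic clearance scaled to 30 L/h, 10% unbound.
print(f"{oral_equivalent_dose(0.5, cl_int_l_per_h=30.0, f_ub=0.1):.2f} mg/kg/day")
```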

 

High Throughput Human Exposure Forecasts for Environmental Chemical Risk, John Wambaugh (US EPA) 

Prioritization of thousands of environmental chemicals requires reliable methods for screening on both hazard and exposure potential. High-throughput biological activity assays allow the ToxCast™ and Tox21 projects to compare the in vitro activity of chemicals with known in vivo toxicity to those with little or no in vivo data. Additional in vitro assays characterize key aspects of pharmacokinetics and allow in vitro-in vivo extrapolation to predict human uptake (mg/kg BW/day) that might be sufficient to cause the observed in vitro bioactivity in vivo, thereby identifying potential hazard. Without similar capability to make quantitative, albeit uncertain, forecasts of exposure, the putative risk due to an arbitrary chemical cannot be rapidly evaluated. Using physico-chemical properties and provisional chemical use categories, most of the ~8,000 Tox21 chemicals have been evaluated with respect to exposure from near field sources, i.e. proximate exposures in the home that might result from, for example, the use of consumer products. A Bayesian methodology was used to infer ranges of exposures consistent with biomarkers measured in urine samples by the National Health and Nutrition Examination Survey (NHANES). For various demographic groups within NHANES, such as children aged 6-11 and 12-18, males, and females, we consider permutations of linear regression models, including as few as one and as many as seventeen available and seemingly relevant physico-chemical and use factors in order to select the most parsimonious model. Each demographic-specific linear regression provides a predictor, calibrated to the NHANES data, which can then be applied to the remainder of the Tox21 list. The variance of this calibration serves as an empirical determination of uncertainty. These exposure predictions are then directly compared to the doses predicted to cause bioactivity for 231 ToxCast chemicals for which in vitro-in vivo extrapolation to mg/kg BW/day doses has been performed. For chemicals without any exposure data, the models determined by this method can predict a likely range for the average human exposure due to near field sources. When combined with high throughput hazard data, these exposures can allow chemical research prioritization on the basis of probable risk. This abstract does not necessarily reflect U.S. EPA policy.
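The calibration idea can be sketched as a linear regression of inferred log exposure on a few physico-chemical and use predictors, with the residual spread serving as an empirical uncertainty band; the data and predictors below are synthetic stand-ins, not NHANES values.

```python
# Illustrative sketch of calibrating a simple exposure regression and using its
# residual spread as an empirical uncertainty band; data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 80
# Hypothetical predictors: log vapor pressure, log Kow, indicator for consumer use.
X = np.column_stack([rng.normal(-4, 2, n), rng.normal(3, 1.5, n), rng.integers(0, 2, n)])
log_exposure = -6.0 + 0.3 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 0.8, n)

model = LinearRegression().fit(X, log_exposure)
resid_sd = np.std(log_exposure - model.predict(X), ddof=X.shape[1] + 1)

# Predict a range for a new chemical with no biomonitoring data.
x_new = np.array([[-3.0, 4.0, 1.0]])
central = model.predict(x_new)[0]
low, high = central - 1.96 * resid_sd, central + 1.96 * resid_sd
print(f"predicted log10 exposure (mg/kg BW/day): {central:.2f} [{low:.2f}, {high:.2f}]")
```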

 

HAWC (Health Assessment Workspace Collaborative): a modular web-based interface to facilitate development of human health assessments of chemicals, Ivan Rusyn (UNC)

HAWC (https://hawcproject.org/) is a modular, cloud-ready, informatics-based system to synthesize multiple data sources into overall human health assessments of chemicals. Developed in collaboration with EPA/NCEA, this system seamlessly integrates and documents the overall workflow from literature search and review, data extraction and evidence synthesis, dose-response analysis and uncertainty characterization, to creation of customized reports. Crucial benefits of such a system include improved integrity of the data and analysis results, greater transparency, standardization of data presentation, and increased consistency. By including both a web-based workspace for assessment teams who can collaborate on the same assessment rather than share files and edits, and a complementary web-based portal for reviewers and stakeholders, all interested parties have dynamic access to completed and ongoing assessments.

 

Multi-scale data integration and modeling, Tom Knudsen (US EPA)

Multiscale modeling and simulation is an important approach for discovery and synthesis of biological design principles underlying the response of complex adaptive systems to perturbation. Virtual Tissue Models (VTMs) integrate empirical data with information that can be mapped to dynamic biological tissue architectures relevant to Adverse Outcome Pathway (AOP) elucidation. VTMs bring together in vitro data from diverse assay and high-throughput screening (HTS) platforms with biological information on dynamic cellular behaviors in a computational systems biology context. Such models might be useful to predict the potential impact of chemical perturbations on higher-order biological organization and functions. In silico VTMs engineered with CompuCell3D (www.compucell3d.org) and other resources are being used to simulate biological design principles underlying important developmental processes such as blood vessel development, somite formation, palatal fusion, male urethral morphogenesis, and limb-bud outgrowth. The simulations engage the normal biology of a complex adaptive system based on cellular-molecular knowledge and understanding of embryogenesis. Perturbing embryological VTMs with HTS and in vitro data has the potential to assess the plausibility of different tissue-level changes leading to AOPs relevant to human developmental toxicity. Embryological VTMs also enable different modeled exposures to be evaluated for early lifestage considerations. This abstract does not necessarily reflect US EPA policy.

 

11:30 - Section D. Application of Predictive Toxicology in Risk Assessment, chaired by Asish Mohapatra (Health Canada)

From bench to FDA, validation of in vitro methods: who is responsible for independent validation of human biology-based tests? Katya Tsaioun (Safer Medicines Trust)

Every organization involved in safety testing of new chemical entities, be it in industry, academia or government, agrees that the current system of toxicity testing is not as predictive of human outcomes as we need. Though the current paradigm has been "tried" for decades, it is hard to call it "true" to human outcomes when it misses 94% of human toxicities. So what are we as industry, government, charities and foundations doing about it? We will review the roles of all stakeholders in the validation and adoption of alternative methods: industry, regulatory authorities, alternative technology inventors (academic and industry) and independent, government and non-profit organizations. Paths to validation and acceptance of alternative methods in different industries will be discussed. The role of government initiatives will be discussed using the examples of the 2013 ban on animal testing for cosmetics and the REACH directive. A new paradigm for faster validation and industry adoption of alternative methods will be presented, with a consortium of independent organizations managing the process. Safer Medicines Trust is a non-profit organization whose mission is to improve patient safety by including human biology-based methods in the drug regulatory approval process. An outline of the independent pilot validation study proposed by Safer Medicines Trust will be presented for discussion.

 

In Silico Models for Screening New Drugs for QT Prolongation Potential using Human Clinical Trial Data, Luis G. Valerio, Jr (US FDA) and Kevin Cross (Leadscope)

A regulatory science priority at the US Food and Drug Administration (FDA) is to promote the development of new innovative predictive tools to support safety evaluations of regulated products. One class of predictive methods in current use as investigative and applied safety science tools is well-validated and well-defined computational (in silico) quantitative structure-activity relationship (QSAR) models. In this FDA Critical Path Initiative project, predictive in silico QSAR models were developed based on clinical trials of drugs reviewed at CDER. These models are being tested as clinical decision-support tools for CDER evaluations of the proarrhythmic potential of non-antiarrhythmic drugs. Assessing drug-induced proarrhythmia is of regulatory interest to public health since it may present as sudden cardiac death in patients. Classification models were built using two different QSAR predictive technologies. Drugs from clinical QT prolongation assessments and thorough QT (TQT) studies comprised the training sets of the models, and these data focus on the regulatory threshold of concern for QT/QTc prolongation using exposure-response testing in humans and risk-based evidence of torsade de pointes (TdP) per ICH E14 guidance. This presentation will cover the development of these unique QSAR models and the implications for their use in risk prioritization in a CDER regulatory safety and risk assessment setting for the evaluation of drug-induced proarrhythmia in humans. Based on external validation, the models perform exceptionally well, and take into account exposure-response relationships, metabolic activation, and assay sensitivity as well as other standards that establish a regulatory acceptable clinical trial. Predictive performance for both in silico technologies was observed to have high and desirable values, and was judged based on external validation whenever possible. The Leadscope clinical QT/QTc prolongation model, which was enriched with mechanistic knowledge of potent hERG blockers, demonstrated a concordance of 82%, sensitivity of 86%, specificity of 78%, positive predictive value (PPV) of 82%, and negative predictive value (NPV) of 82%. Y-scrambling results provided supporting evidence of the predictive power of the final model over random. A QSAR model was built with the Prous Institute’s Symmetry℠ technology to forecast TdP activity using a training set of high-risk drugs withdrawn from the market due to QT prolongation and TdP concerns, and which induce exceptionally high QT/QTc interval effects > 20 msec. Validation of this model demonstrated exceptional concordance of 91%, sensitivity of 87%, specificity of 92%, PPV of 85%, and NPV of 93%. Collectively, these in silico models represent innovative clinical decision-support tools to detect risk through computational assessment of drug molecular structure for forecasting the proarrhythmic potential of new drugs in a regulatory risk assessment setting.
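For readers less familiar with these performance measures, the short example below computes them from a hypothetical confusion matrix; the counts are invented for illustration and do not correspond to the models described above.

```python
# Worked example of the reported performance measures from a confusion matrix;
# the counts below are invented for illustration only.
def performance(tp, fp, tn, fn):
    return {
        "concordance": (tp + tn) / (tp + fp + tn + fn),   # overall accuracy
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# Hypothetical external validation counts for a binary QT-prolongation call.
metrics = performance(tp=40, fp=10, tn=40, fn=10)
for name, value in metrics.items():
    print(f"{name}: {value:.0%}")
```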

 

The (U.S.) National Library of Medicine: Collecting, Organizing, and Disseminating Information in Toxicology, Exposure Science, and Risk Assessment, Pertti Hakkinen (NIH)

The (U.S.) National Library of Medicine (NLM) is the world’s largest biomedical library, with the mission of collecting, organizing, preserving, and disseminating health-related information. NLM's resources are available for free to global users, with the Specialized Information Services Division (SIS, sis.nlm.nih.gov) responsible for information resources and services in toxicology, environmental health, chemistry, and other topics. This includes databases and special topics Web pages, with “what you need, when you need it, where you need it, and how you need it” access to many of its resources as downloads, Smartphone apps, and/or as Web pages optimized for mobile devices. One example of an NLM SIS resource is NLM’s TOXNET “one stop access” set of numerous databases (TOXicology Data NETwork, toxnet.nlm.nih.gov). For example, TOXNET’s TOXLINE provides bibliographic information covering the toxicological and other effects of chemicals, and incorporates content from NLM’s PubMed/MEDLINE. Also, TOXNET’s Hazardous Substances Data Bank (HSDB®) continues to be enhanced to include new materials. HSDB’s first set of nanomaterials was added in late 2009, and NLM SIS has since included additional nanomaterials and other substances of emerging interest to toxicologists and others. NLM SIS is also seeking to add state-of-the-science toxicology, exposure, and risk assessment information, and images (e.g., metabolic pathways) to HSDB. Further, NLM SIS recently developed an enhanced version of its ALTBIB Web portal to provide better access to information on in silico, in vitro, and improved (refined) animal testing methods, along with information on the testing strategies incorporating these methods and other approaches. Other efforts include providing improved access to genomics-related information, one example being the addition in 2011 of the Comparative Toxicogenomics Database (CTD) to TOXNET. Another area of interest is to help provide access to information from Tox21, ToxCast, ExpoCast, and other similar efforts around the world. A further set of NLM SIS–developed resources is the Enviro-Health Links collection of online pages (EHLs, sis.nlm.nih.gov/pathway.html). The EHLs are Web bibliographies of links to authoritative and trustworthy online resources in toxicology, environmental health, and everyday types of exposures such as indoor air pollution. The resources noted in the EHLs are selected from government agencies and non-governmental organizations. Many EHLs include sets of pre-formulated searches of all relevant SIS and NLM databases, allowing users to search for the most recent citations on a topic of interest. The “Toxicology Web Links” EHL includes an extensive collection of “.gov,” “.edu,” “.org,” and “.com” resources judged to meet the selection criteria for inclusion in an EHL. The resources noted in the Toxicology Web Links EHL are divided into “U.S. government,” “international,” “associations and societies,” and “additional resources” categories, and users of this EHL will note a strong emphasis on resources providing free online access to information.

 

Application of Predictive Tools to Environmental Assessment and Remediation, Asish Mohapatra (Health Canada) and Barry Hardy (Douglas Connect) (see Slides PDF)

Health Canada's Contaminated Sites group provides expert support to other federal departments in the area of human health risk assessment and contaminated sites remediation projects. The contaminants can include “Data Rich” and “Data Poor” chemicals. Data poor chemicals often have limited or incomplete experimental toxicology information for extrapolation to human health. At times, data poor chemicals may have undergone extensive toxicity testing; however, diverse datasets may make it difficult to extrapolate experimental toxicology information to make it biologically relevant to human health (e.g., perfluorinated chemicals). Persistence in the environment, fate and transport, and transformation and chemical interaction issues potentially have implications from a site remediation perspective. Chlorinated solvent clusters (perchloroethylene, trichloroethylene, vinyl chloride, etc.), petroleum hydrocarbons and metals can be considered “Data Rich” clusters. Of interest to the Health Canada contaminated sites group, to inform site remediation and risk management of contaminated sites, is the possible use of predictive toxicology tools in both Data Poor and Data Rich situations, based on various considerations of the availability of site-specific data, the location of those sites and the availability of resources.

The Health Canada contaminated sites group has developed a Remediation Technology Exposure Check List Tool. The tool uses a selection of technologies with flowcharts and a decision matrix for evaluating human health exposure concerns related to different remediation technologies. A decision analytic process to consider, screen and prioritize chemical contaminants to achieve remediation goals is complex because chemical contaminants can co-occur, interact or competitively bind with each other. For many such situations, insufficient experimental data is directly available to understand their interactions in the context of chemical fate, transport and transformation. Hence, predictive toxicology tools can potentially play an important role in better informing risk managers and site remediation experts so that they can use an extended set of evidence in their decision making and contaminated site management activities.

To address the knowledge gap, predictive tools can provide estimates, where applicable, based on chemical structure (e.g., Structure Activity Relationships (SARs), QSAR modelling, structural alerts). Additionally, clustering of chemicals based on available toxicity data and human biology datasets can be used to better inform assessments of chemicals that are undergoing remediation as well as of residual chemicals that are undergoing changes in the surface and sub-surface environments. When we integrate this evidence in the context of risk management of contaminated sites, we can envisage a potentially valuable contribution of predictive modelling, based on a combination of risk estimation, toxicokinetics, toxicodynamics, chemical toxicology and interactions (e.g., combined exposure issues from a site remediation perspective), to inform remediation projects so that better health risk management decisions can be taken and so that custodians can effectively utilize their limited resources.

In this study, we selected a small number of remediation sites and identified risk assessment, management and remediation issues faced by the sites in assessing the application of some remediation technologies. We reviewed current practices involved in the design and intended future use of the Remediation Check List Tool. We also reviewed the use of toxicology data resources, databases, and mapped predictive tools in providing value to decision making, including the potential use of human health risk characterization data and tools in considering remediation alternatives and a cost-benefit comparison analysis. The research included consideration of contaminant properties, reactivity, fate and transport, and exposure. An integrated data analysis methodology, carried out over multiple sources of in silico, in vitro, and in vivo data, and involving consensus models and a Weight of Evidence framework, was investigated. The methodology was applied to the case study and its site contaminants, and used to support risk-based predictions and recommendations for management of contaminated sites.
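One very simple way to picture a consensus combined with a Weight of Evidence framework is a reliability-weighted average across evidence streams, as in the sketch below; the streams, scores and weights are hypothetical.

```python
# Illustrative weight-of-evidence style consensus: combining risk predictions
# from several evidence streams with assigned reliability weights.
# The streams, scores and weights below are hypothetical.
predictions = {
    "in_silico_qsar":  {"risk_score": 0.7, "weight": 0.2},
    "in_vitro_assays": {"risk_score": 0.5, "weight": 0.3},
    "in_vivo_studies": {"risk_score": 0.4, "weight": 0.5},
}

total_weight = sum(p["weight"] for p in predictions.values())
consensus = sum(p["risk_score"] * p["weight"] for p in predictions.values()) / total_weight

print(f"weighted consensus risk score: {consensus:.2f}")
for stream, p in predictions.items():
    print(f"  {stream}: score {p['risk_score']}, weight {p['weight']}")
```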


HESI’s RISK21 Project: A Tiered and Transparent Human Health Risk Assessment System for the 21st Century, Tim Pastoor (Syngenta)

The RISK21 Roadmap is a straightforward, efficient, and systematic way to achieve a transparent assessment of human health risk from chemicals. The Roadmap is a problem-formulation based, exposure-driven, and tiered methodology that seeks to derive only as much data as is necessary to make a safety decision. The Roadmap begins with problem formulation, in which a clear statement is made about what information is needed to make a safety decision and the criteria that indicate when the problem is solved. Exposure evaluation is brought to the beginning of the assessment rather than the end, which in turn directly affects the need for toxicological information, including the type of studies to be done and the dose levels to be used. The effectiveness of the RISK21 Roadmap relies on the vast body of exposure and toxicity knowledge that has been built over the last 50 years to provide starting points of reference and precedence. Exposure and toxicity estimates are brought together on a highly visual Matrix that displays the degree of confidence in the estimates and gives guidance as to the most efficient way to generate new data to make a decision on safety. Two case studies were generated by the RISK21 project team. One is a data-rich case study that uses the RISK21 approach in the development of a new pesticide. The second case study focuses on a group of chemicals in drinking water that are data-poor and uses the RISK21 approach to prioritize which chemicals are of greatest concern. The RISK21 Matrix itself has the potential to widen the scope of risk communication beyond those with technical expertise, fostering understanding and communication. Being problem-formulation and exposure led, it homes in quickly on the scenarios and chemicals that will be of concern. The RISK21 Roadmap and Matrix will reduce the time, resources, and number of animals required to assess a vast number of chemicals. It represents a step forward in the goal of introducing new methodology into risk assessment in the 21st Century.
