
Some guidance on what is needed for predictive toxicology infrastructure

Posted by Barry Hardy at Mar 18, 2011 02:55 PM

OpenTox Coordinator Barry Hardy summarises some results and recommendations from the pre-AXLR8 OpenTox workshop discussions, held 30 May 2010 in Potsdam, Germany.

In May 2010 we held an OpenTox workshop in Potsdam with the theme "Integrating Predictive Toxicology Resources and Applications". The proceedings, with slides, are available at:

http://www.opentox.org/data/blogentries/public/opentoxworkshoppotsdam2010

Using a Knowledge Café format, we posed numerous questions about what is needed for predictive toxicology infrastructure. Here I summarise some of the main issues and recommendations that emerged from that discussion, which I think provide useful guidance. Additional comments and feedback are welcome!

Workshop participants noted that current purpose-driven tools have value but that their scope and extensibility are limited; hence there is a strong need in the field of predictive toxicology to build an interoperable framework and set of resources that can "talk to each other", "have the flexibility to be changed and extended to new developments", and "exchange information and metadata on experiments, data, analysis methods and models in a reliable way".
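As one concrete illustration of that kind of exchange, here is a minimal sketch, assuming a hypothetical OpenTox-style REST dataset URI and an RDF representation of its metadata; the address and payload shape are illustrative assumptions, not a documented endpoint.

```python
# Minimal sketch: fetching machine-readable metadata for a dataset from a
# hypothetical OpenTox-style REST service. The URI below is a placeholder.
import requests

dataset_uri = "https://example.org/opentox/dataset/42"  # hypothetical dataset URI

# Ask the service for an RDF representation of the dataset's metadata
# (title, features, provenance), which another service or client can parse and reuse.
response = requests.get(dataset_uri, headers={"Accept": "application/rdf+xml"}, timeout=30)
response.raise_for_status()

print(response.headers.get("Content-Type"))
print(response.text[:500])  # first part of the RDF payload
```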

For data resources, we need to be able to integrate data from different sources more reliably and efficiently, and to have transparent information available on the quality of, and uncertainties in, the data used to build and apply predictive toxicology models.

Overall, the quality of the computer science and engineering approaches deployed in the field of toxicology is relatively weak, at least in terms of our broader common adoption practices and their use in projects and resources. Approaches to resources, e.g., for data and models, are siloed and unsustainable. Addressing this issue requires a broad collaborative effort on resource interoperability, (working) standards and common terminology. Increased development and incorporation of common ontologies will enable a much broader range of potential user queries and applications across data, e.g., based on mechanism and on biological effect and pathway-driven questions related to toxic phenomena.
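For example, once data are annotated with a shared ontology, mechanism-driven questions can be expressed as standard queries. The sketch below uses rdflib and SPARQL; the namespace, class and predicate names are hypothetical placeholders rather than terms from any existing toxicology ontology.

```python
# Minimal sketch: a mechanism-driven SPARQL query over ontology-annotated data.
# The tox: namespace and its terms are hypothetical placeholders.
import rdflib

g = rdflib.Graph()
g.parse("toxicology_annotations.ttl", format="turtle")  # hypothetical annotated data file

query = """
PREFIX tox: <http://example.org/tox#>
SELECT ?compound ?assay
WHERE {
    ?compound tox:hasAssayResult ?result .
    ?result   tox:measuredIn      ?assay ;
              tox:indicatesEffect tox:OxidativeStress .
}
"""
# List compounds (and the assays concerned) linked to an oxidative-stress mechanism.
for compound, assay in g.query(query):
    print(compound, assay)
```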

Key infrastructure requirements for supporting biological research in predictive toxicology were identified as:

·        Availability of simple in vitro test systems with adequate in vivo functionality

·        Tools for assessing the quality of the cell system online

·        Availability of biomarkers of toxicity

·        Understanding mechanistic pathways discriminating adaptation from adversity

Improvements identified with regard to toxicity data resources were:

·        Increased availability of data (e.g. –omics, cell-related data)

·        Proper quality control of the available data

·        Tools allowing integration of the data

·        Need for development of defined structure rules applied to available databases

·        Need for establishment of links with available databases to provide mechanistic information

·        Establish links with biological and chemical R&D; these should not be considered distinct topics

·        Need for comprehensive, standard rules for incorporating omics data

·        Establish greater data coordination and collaboration between EU initiatives

·        Need to check the relevance of the data accessible through publicly available databases

·        Need for comprehensive linked set of data resources (with both raw and consolidated data), with flexible interfaces and a standard vocabulary

·        It would be useful to develop data resources where continuous data (e.g. EC3 values from the LLNA) are also captured as classes (e.g. non-sensitizer/weak/moderate/strong); a minimal binning sketch follows this list
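A minimal sketch of such binning is given below; the cut-off values are illustrative assumptions only and would need to be replaced by whatever classification scheme the data resource adopts (the function name ec3_to_class is likewise hypothetical).

```python
# Minimal sketch: capturing continuous LLNA EC3 values (%) as potency classes.
# The cut-offs below are illustrative placeholders, not an endorsed scheme.

def ec3_to_class(ec3_percent):
    """Map an EC3 value (in %) to a potency class; None means no EC3 was derived."""
    if ec3_percent is None:
        return "non-sensitizer"
    if ec3_percent < 1:       # illustrative cut-off
        return "strong"
    if ec3_percent < 10:      # illustrative cut-off
        return "moderate"
    return "weak"

for value in (None, 0.4, 5.0, 35.0):
    print(value, "->", ec3_to_class(value))
```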

Improvements identified with regard to common terminology and ontology in toxicology were:

·        Collaboratively finding a common language is crucial

·        Establishing platforms of stakeholders having the same needs

·        Developing mechanisms addressing confidentiality issues

·        Providing education and training, e.g., on SOPs, vocabulary etc.

For integrated testing strategies (ITS), we need to put in place procedures and guidelines for establishing and validating ITS approaches. Issues and proposals raised include:

·        Resolution of difficulties that can arise because some methods (both in silico and in vitro) provide a continuous potency measure (e.g. EC3) while others provide classes (e.g. binary yes/no responses)

·        Missing values in the data matrices used to build an ITS can be a problem (one simple way of handling them is sketched after this list)

·        Assigning weights to the tests used in an ITS is challenging, e.g. deciding whether the ITS should be directed more towards good sensitivity or good specificity

·        Methods for assessment of applicability domains of all tests (both in silico and in vitro) need development
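To make the missing-value and weighting points more concrete, here is a minimal sketch of a weighted combination of test outcomes in which absent results are simply excluded from the score; the test names, weights and decision threshold are hypothetical placeholders, not a validated ITS.

```python
# Minimal sketch: weighted combination of ITS test outcomes with missing values
# excluded from the score. Test names, weights and threshold are placeholders.

def its_score(results, weights, threshold=0.5):
    """Weighted average of available binary outcomes (1 = positive, 0 = negative)."""
    weighted_sum = 0.0
    weight_total = 0.0
    for test, outcome in results.items():
        if outcome is None:            # missing value: drop this line of evidence
            continue
        weighted_sum += weights[test] * outcome
        weight_total += weights[test]
    if weight_total == 0:
        return None, "no evidence available"
    score = weighted_sum / weight_total
    return score, ("positive" if score >= threshold else "negative")

# Example: an in silico alert, an in vitro assay with a missing result, and an in chemico assay.
results = {"qsar_alert": 1, "in_vitro_assay": None, "in_chemico_assay": 0}
weights = {"qsar_alert": 0.3, "in_vitro_assay": 0.5, "in_chemico_assay": 0.2}
print(its_score(results, weights))     # -> (0.6, 'positive')
```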

Key requirements for infrastructure supporting the advancement of methods towards successful validation and acceptance were identified as:

·        Methods available for everyone

·        Establishment of infrastructure requirements, design and implementation plan at start of projects

·        Provision of decision support tools for categorization and prioritization

·        Definition and integration of biological, chemical, medical, and toxicological ontologies, implemented into practical use cases and tools

·        Centralized and well curated data repositories

·        Creation of a repository of the models used by regulatory authorities

·        Reproducible exchange of chemical structures, formats, and descriptors

·        Availability of cheminformatics tools to handle isomers and salts (see the structure-handling sketch after this list)

·        Development of new tools supporting predictive models for mixtures

·        Development of improved tools for assessing the applicability domain of models

·        Improve the capability to reproduce models from the literature

·        Improve ability to provide reliable estimates of uncertainties in predictive models

·        Creation of a curation capacity to maintain quality data resources beyond the lifetime of projects

·        Increased capability to process unstructured or free text information into usable knowledge resources

·        Ability to reproduce or redo analysis results through the execution of all data processing steps all the way back to raw data sets

·        Development of new methods addressing dose-response relationships and dose-dependent predictions

·        Development of new toxicokinetic methods providing reliable extrapolation from in vitro/in silico predictions of hazard to in vivo exposure, e.g., determination of a reliable NOAEL

·        Reduction of uncertainty in finding data relevant to a given model and endpoint

·        Increased trust in in silico models, increased transparency, and fewer black-box approaches, leading in turn to increased understanding and acceptance of models by regulators

·        Interdisciplinary collaboration to improve communications, develop common ontologies

·        Development of alternative methods in hazard and exposure estimation that can be used in risk assessment

·        Harmonization and standardization

·        Interoperable linked resources

·        Open Standards

·        Accessible Open Data

·        Measures to improve acceptance and implementation

·        Manage lists of compounds (CAS numbers, InChIs, structures, properties) and maintain/refine them

·        Need to apply the validation principles recommended by the OECD for computational methods
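As an illustration of the structure-handling and identifier points above, the following is a minimal sketch, assuming RDKit is available, of standardising an input structure before exchange: common counter-ions are stripped and canonical SMILES, InChI and InChIKey are produced so that different resources describe the same parent compound consistently. It is a sketch of one possible approach, not a prescribed OpenTox procedure.

```python
# Minimal sketch: standardising a structure (salt stripping, canonical identifiers)
# with RDKit before exchanging it between resources.
from rdkit import Chem
from rdkit.Chem.SaltRemover import SaltRemover

def standardise(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None                            # unparsable input structure
    parent = SaltRemover().StripMol(mol)       # remove common counter-ions (e.g. Cl-, Na+)
    return {
        "canonical_smiles": Chem.MolToSmiles(parent),
        "inchi": Chem.MolToInchi(parent),
        "inchikey": Chem.MolToInchiKey(parent),
    }

# Example: the hydrochloride salt of ethylamine reduces to its parent amine.
print(standardise("CCN.Cl"))
```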

Participants proposed that OpenTox helps to fill a large current gap in how the EC deals with data development by FP projects. It is a good start, but the pace has to be maintained and increased. How will that be achieved? A funded mechanism for engaging and supporting other FP7 projects, to ensure resource sustainability, is critical.

Requirements for computational R&D identified were:

·        Need for flexible workflows that make it easier to interrogate in-house and external data sources, whether public or commercial

·        Need for workflows that easily integrate software to calculate the physicochemical and biological properties required to make toxicology predictions

·        Need for "universal" models to calculate properties (e.g., pKa, binding affinity) required by predictive models

·        Need for a minimum level of expertise in the field to be able to apply the model when relevant

·        There is benefit in combining different analysis methods, both knowledge-based and statistical, in the development of improved predictive models, as sketched below
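A minimal sketch of such a combination follows, pairing a knowledge-based structural alert with a statistical model score to produce a consensus call; the SMARTS pattern, probability threshold and decision rule are illustrative assumptions, not a validated model, and RDKit is assumed to be available.

```python
# Minimal sketch: combining a knowledge-based structural alert with a statistical
# model probability into a single consensus call. All rules here are illustrative.
from rdkit import Chem

NITRO_ALERT = Chem.MolFromSmarts("[N+](=O)[O-]")   # illustrative structural alert

def consensus(smiles, statistical_probability):
    """Flag positive if either line of evidence is strong; report both for transparency."""
    mol = Chem.MolFromSmiles(smiles)
    alert_hit = mol is not None and mol.HasSubstructMatch(NITRO_ALERT)
    call = "positive" if (alert_hit or statistical_probability >= 0.7) else "negative"
    return {"alert_hit": alert_hit,
            "statistical_probability": statistical_probability,
            "consensus": call}

# Example: nitrobenzene with a hypothetical statistical probability of 0.4.
print(consensus("c1ccccc1[N+](=O)[O-]", 0.4))
```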

(Q)SARs, i.e., (Quantitative) Structure-Activity Relationships, can be applied most straightforwardly to endpoints that rely on a single defined mechanism, where the molecular interaction and structure-activity relationship are more or less clear (e.g., skin sensitization, genotoxicity, mutagenicity). For complex endpoints such as reproductive and chronic toxicity, where the mechanism of action is unknown and multiple mechanisms can be assumed to operate for different types of molecules, the applicability is less clear. Nevertheless, in these cases (Q)SARs can still add value through their contribution to a Weight of Evidence approach.

The language of (Q)SAR developers differs from that of toxicologists, which presents a challenge. Toxicologists can have difficulty understanding the modelling and validation process even when a publication is given as a reference. (Q)SAR predictions should therefore be accompanied by information that helps the toxicologist interpret and analyse conflicting predictions. A ranking of predictions from different models in terms of confidence would be useful. A combined organisation of results from different methods, e.g., Cramer classification, decision trees and genotoxicity predictions, would be helpful to the risk assessor.

It might not be sufficient to describe only the sensitivity and specificity of the algorithm, since the relevance of the (Q)SAR for the selected endpoint may still be questionable. Can an indication be given that the algorithm is able to address the endpoint? Can information on the quality of the data used to create the model be provided with it? Can OpenTox provide a ranking of prediction results for an endpoint, from most to least reliable?
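One simple way to present such a ranking is sketched below; the model names, confidence values and the in-domain flag are hypothetical placeholders used only to show how out-of-domain predictions could be pushed down the list.

```python
# Minimal sketch: ranking predictions for one endpoint so the toxicologist sees
# the most reliable results first. All values below are hypothetical placeholders.
predictions = [
    {"model": "QSAR-A", "prediction": "sensitizer",     "confidence": 0.92, "in_domain": True},
    {"model": "QSAR-B", "prediction": "non-sensitizer", "confidence": 0.55, "in_domain": True},
    {"model": "QSAR-C", "prediction": "sensitizer",     "confidence": 0.78, "in_domain": False},
]

# In-domain predictions first, then by decreasing confidence.
ranked = sorted(predictions, key=lambda p: (p["in_domain"], p["confidence"]), reverse=True)
for p in ranked:
    print(f'{p["model"]}: {p["prediction"]} '
          f'(confidence {p["confidence"]}, in domain: {p["in_domain"]})')
```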

Harmonisation will strengthen the path of the most promising new alternative testing methods through development, validation and eventual regulatory acceptance. Regulators will need to acquire new scientific resources and knowledge to influence and evaluate this method development, and eventually to incorporate the relevant competencies and practices into their activities.

We need greater coordination and integration of activities across projects, e.g., between those funded within FP7, and more broadly with other programs, including internationally. We should increase performance, dissemination and exploitation of the results from national and international projects, and capitalize more quickly on the lessons learned and the innovation potential created by projects.

High-throughput screening tools such as the NCGC's robotic system used within the Tox21 program (www.epa.gov/ncct/Tox21/) - combined with a growing assortment of in vitro assays and computational methods - are revealing how chemicals interact with biological targets. Scientists increasingly believe these tools could generate more accurate assessments of human toxicity risk than those currently provided by animal tests. Moreover, in vitro and in silico analytical approaches are seen as the best hope for evaluating the enormous backlog of untested chemicals. Estimates vary, but tens of thousands of industrial chemicals are used in consumer products without any knowledge of their potential toxicity. Meanwhile, it takes years and millions of dollars to assess the risks of a single chemical using traditional animal testing methods.

In the predictive toxicology field, European initiatives need an integrated vision and strategy comparable with the Tox21 vision being pursued in the US. Currently, Europe is ambitious and progressive in its aims (Lisbon agenda) and legislation (REACH) but fragmented and inefficient in tackling the underlying problems and hurdles. Improved coordination and integration of activities and resources would strengthen achievement of implementation goals. OpenTox-based methods could provide sustainable, interoperable resources usable across research projects, industry and regulators; e.g., data resources created by projects could be sustained, integrated and made available in a more accessible and usable manner. Repositories for test chemicals and biomaterials, with associated information resources, could provide common services that would improve quality, reduce cost and strengthen the path to validation.
