
OpenTox Euro 2013 Speaker Program

OpenTox InterAction Meeting

Speaker Program

 

Organised in Collaboration with ToxBank

30 September - 2 October 2013

Johannes Gutenberg University of Mainz, Mainz, Germany

Monday, September 30, 2013

09:00 - Section A. Data Management and Analysis, chaired by Nina Jeliazkova (Ideaconsult Ltd)

XMetDB - Xenobiotics Metabolism Database, Patrik Rydberg (University of Copenhagen)

The Xenobiotics Metabolism Database is the first open-access database for human metabolism of drugs, drug-like compounds, and other xenobiotics. It has been developed to allow full programmatic access to the data through an open API, enabling toxicity models to integrate existing knowledge about metabolism. Besides a thorough description of XMetDB, this presentation will discuss the limitations of existing data and how community adoption of the database will enhance computational predictions of both metabolism and toxicity.
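
As an illustration of what programmatic access to such a resource typically looks like, the sketch below queries a metabolism record collection over HTTP; the endpoint URL and JSON field names are placeholders for this sketch, not the documented XMetDB API.

    # Minimal sketch of programmatic access to an open metabolism database.
    # The endpoint URL and response fields are hypothetical placeholders,
    # not the documented XMetDB API.
    import requests

    BASE_URL = "https://example.org/xmetdb/api"  # hypothetical endpoint

    def fetch_metabolism_records(smiles):
        """Return metabolism records for a compound given as SMILES."""
        resp = requests.get(f"{BASE_URL}/observations",
                            params={"smiles": smiles},
                            headers={"Accept": "application/json"},
                            timeout=30)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        records = fetch_metabolism_records("CC(=O)Nc1ccc(O)cc1")  # paracetamol
        for rec in records:
            print(rec.get("enzyme"), rec.get("product_smiles"))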

PathVisio 3: new features for pathway analysis and visualization, Martina Kutmon (Maastricht University)

In 2008, we presented the first version of our open-source pathway visualization and analysis tool PathVisio [1] (www.pathvisio.org). Since then, PathVisio has been used in a number of studies to create pathway maps, perform pathway statistics or visualize biological data on pathways [e.g. 2,3]. The core application of PathVisio has now been refactored using the OSGi framework (Open Service Gateway initiative) with the goal of achieving an improved, modular system that can easily be extended with plugins. PathVisio plugins are extensions of the core application that provide features relevant for a specific task. Plugins are accessible to users through the new plugin repository and can be installed through the plugin manager from within the application. This is an important aspect of usability that allows users to build an application with all the modules relevant for their work.
We will present use cases demonstrating how PathVisio 3 can be used in toxicology and cheminformatics; a small pathway-statistics sketch follows the list below. The following topics will be covered:
- General functionality of PathVisio and how to install plugins
- Using pathways from WikiPathways [4] (www.wikipathways.org), an open, public platform dedicated to the curation of biological pathways by and for the scientific community, in your analysis
- Visualizing multi-omics data on pathways
- Analyzing toxicology data sets from ArrayExpress (www.ebi.ac.uk/arrayexpress/) or TG-GATEs (toxico.nibio.go.jp/english/) with PathVisio
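
The following minimal sketch illustrates the kind of over-representation statistic that underlies pathway statistics in tools of this class; it is not PathVisio's implementation, and the gene identifiers and set sizes are invented.

    # Hypergeometric over-representation test for a single pathway.
    # This is a generic illustration of pathway statistics, not PathVisio code;
    # the gene identifiers below are made up.
    from scipy.stats import hypergeom

    background = {f"GENE{i}" for i in range(1, 20001)}          # all measured genes
    deregulated = {"GENE12", "GENE87", "GENE150", "GENE2001", "GENE42"}
    pathway = {"GENE12", "GENE87", "GENE150", "GENE999", "GENE5000", "GENE42"}

    N = len(background)                       # population size
    K = len(pathway & background)             # pathway genes in the background
    n = len(deregulated & background)         # deregulated genes
    k = len(deregulated & pathway)            # deregulated genes in the pathway

    # P(X >= k): probability of seeing at least k pathway hits by chance
    p_value = hypergeom.sf(k - 1, N, K, n)
    print(f"{k}/{K} pathway genes deregulated, p = {p_value:.3g}")
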
References
[1] Van Iersel, Martijn P., et al. "Presenting and exploring biological pathways with PathVisio." BMC Bioinformatics 9.1 (2008): 399. doi: 10.1186/1471-2105-9-399
[2] Jennen, Danyel GJ, et al. "Biotransformation pathway maps in WikiPathways enable direct visualization of drug metabolism related expression changes." Drug Discovery Today 15.19 (2010): 851-858. doi: 10.1016/j.drudis.2010.08.002
[3] Coort, Susan LM, et al. "Bioinformatics for the NuGO proof of principle study: analysis of gene expression in muscle of ApoE3* Leiden mice on a high-fat diet using PathVisio." Genes & Nutrition 3.3 (2008): 185-191. doi: 10.1007/s12263-008-0100-7
[4] Kelder, Thomas, et al. "WikiPathways: building research communities on biological pathways." Nucleic Acids Research 40.D1 (2012): D1301-D1307. doi: 10.1093/nar/gkr1074

ToxML: Community Based Development of a Common Data Exchange Standard for Toxicology, Mohammed Ali (Lhasa Ltd)

ToxML is an open standard based on Extensible Markup Language (XML) that consists of an XML Schema (XSD) defining the toxicology schema and lists of controlled vocabulary that ensure consistency of usage. The use of XML means that the data can be created, stored and transported in a structured format that is not bound to a specific software application or programming language. The data file model resulting from this approach is very versatile and allows for the aggregation of experimental data up to the compound level in the detail needed to support areas such as quantitative structure-activity relationship (QSAR) development. ToxML formats have been developed, so far, for 27 toxicity study types. These cover both in vivo and in vitro studies, and currently include the following super toxicity endpoints: genetic toxicity, carcinogenicity, skin sensitisation, skin penetration, in vivo repeat dose toxicity, in vivo single dose toxicity and ecotoxicity.
The OpenTox project was an early supporter of ToxML and understood the issues of comparing or combining disparate data originating from diverse and heterogeneous sources. ToxML addresses the need for a common data exchange standard that allows such data to be represented and communicated in a well-structured electronic format. The standard is maintained by a curation team overseen by the ToxML organisation and is published on a web site (www.toxml.org) together with tools to view, edit and download it. Contributions from the user community to the ongoing evolution of the standard are facilitated in an open forum via a wiki on the web site.
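
To illustrate how an XML-based exchange format of this kind can be consumed programmatically, the sketch below parses a small ToxML-like fragment with Python's standard library; the element and attribute names are invented for illustration and do not reproduce the published ToxML schema.

    # Parsing a ToxML-like XML fragment with the standard library.
    # Element and attribute names are illustrative only; consult the
    # published schema at www.toxml.org for the real structure.
    import xml.etree.ElementTree as ET

    fragment = """
    <Compound name="ExampleCompound">
      <Study type="GeneticToxicity">
        <Test system="Ames" result="negative"/>
        <Test system="MouseLymphoma" result="positive"/>
      </Study>
    </Compound>
    """

    root = ET.fromstring(fragment)
    print("Compound:", root.get("name"))
    for study in root.findall("Study"):
        for test in study.findall("Test"):
            print(f"  {study.get('type')}: {test.get('system')} -> {test.get('result')}")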

The ISA infrastructure: from experimental planning to data publication, Alejandra Gonzalez-Beltran (University of Oxford)

ISA stands for Investigation/Study/Assay; it is an infrastructure (http://isa-tools.org) composed of the generic, tabular ISA-TAB format and a set of software tools facilitating the management of bioscience experimental data [1]. The format supports the description of multi-omic experiments, and each software tool in the infrastructure facilitates one or more tasks related to data management in the life sciences, environmental or biomedical domains. The ISAcreator tool supports experimental planning based on design-of-experiment concepts and the description of the investigation, including protocols and assays. It can be configured, for example, to comply with minimum information checklists and to use certain ontologies for annotation by using the ISAconfigurator tool. There are also tools for storage and querying (BioInvestigation Index manager, web application and database), for interfacing with data analysis platforms (Risa for R/Bioconductor, GenomeSpace, Galaxy) and for conversion to multiple formats, including those required by public repositories (e.g. MAGE-TAB for ArrayExpress, PRIDE-XML for PRIDE) and the Resource Description Framework (RDF) for use in Semantic Web/Linked Data applications. Data publication platforms such as Nature Publishing Group's Scientific Data and BioMed Central's GigaScience database support the ISA-TAB format. The community of users and collaborators is grouped under the ISA commons (http://isacommons.org), and its members belong to academic and industrial organisations worldwide working in a wide range of domains. In the systems toxicology and toxicogenomics domains, the ToxBank Data Warehouse [2] relies on the ISA-TAB format and the OpenTox standards.
This presentation will describe the ISA infrastructure and provide specific examples of its advantages, with a focus on the toxicology domain.
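
A minimal sketch of reading the tabular ISA-TAB layout is shown below; it treats a study file as a plain tab-delimited table, and the file name and column labels are assumptions for illustration rather than the output of a specific ISA tool or configuration.

    # Reading an ISA-TAB study file as a tab-delimited table.
    # The file name and column names are illustrative assumptions; real studies
    # are typically handled with the dedicated ISA software tools.
    import csv

    def read_isatab_table(path):
        """Return a list of row dictionaries from a tab-delimited ISA-TAB file."""
        with open(path, newline="", encoding="utf-8") as handle:
            return list(csv.DictReader(handle, delimiter="\t"))

    if __name__ == "__main__":
        rows = read_isatab_table("s_example_study.txt")   # hypothetical study file
        for row in rows:
            print(row.get("Sample Name"), row.get("Characteristics[organism]"))
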
References
[1] Rocca-Serra et al. ISA software suite: supporting standards-compliant experimental annotation and enabling curation at the community level. Bioinformatics 2010.
[2] Kohonen et al. The ToxBank Data Warehouse: Supporting the Replacement of In Vivo Repeated Dose Systemic Toxicity Testing. Molecular Informatics 2013.

The Open Pharmacological Triple Store Concepts, Egon Willighagen (Maastricht University)

This presentation introduces the Open Pharmacological Space (OPS), the key output of the Innovative Medicines Initiative-funded Open PHACTS project. The goal of the OPS is to establish a semantic integration hub delivering services that support ongoing drug discovery programs in pharma, academia, and the public domain. Various data analysis platforms have been developed on top of the OPS and will be presented. The project is an effort of sixteen academic partners, nine pharmaceutical companies, four biotechs, and numerous associate partners, and provides open solutions that address technical data integration problems, answer community and scientific needs, and develop a sustainable future.
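
As a small, self-contained illustration of the triple-store idea behind such semantic integration (not the Open PHACTS data model or API), the sketch below builds a few compound-target triples with rdflib and queries them with SPARQL; the URIs and predicates are invented.

    # Toy RDF graph and SPARQL query illustrating triple-based integration.
    # URIs and predicates are invented for illustration; this is not the
    # Open PHACTS / OPS data model or API.
    from rdflib import Graph, Namespace, Literal

    EX = Namespace("http://example.org/ops/")
    g = Graph()
    g.add((EX.compound1, EX.bindsTo, EX.targetA))
    g.add((EX.compound1, EX.hasName, Literal("Example compound")))
    g.add((EX.targetA, EX.hasName, Literal("Example target")))

    query = """
    PREFIX ex: <http://example.org/ops/>
    SELECT ?compoundName ?targetName WHERE {
        ?compound ex:bindsTo ?target ;
                  ex:hasName ?compoundName .
        ?target   ex:hasName ?targetName .
    }
    """
    for row in g.query(query):
        print(f"{row.compoundName} binds {row.targetName}")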

11:30 - Section E1. Innovative Developments in Predictive Toxicology, chaired by Barry Hardy (Douglas Connect) and Stefan Kramer (Johannes Gutenberg University of Mainz)

HeMiBio - Generation of hepatic microfluidic bioreactors with a regenerative cell source of parenchymal and non-parenchymal liver cells for high throughput long-term hepatotoxicity testing, Stefan Heinz (Medicyte)

HeMiBio is one of the six building blocks of the European research initiative SEURAT-1 (Towards the Replacement of in vivo Repeated Dose Systemic Toxicity Testing). The research initiative follows the long-term target in chemical safety testing 'Safety Evaluation Ultimately Replacing Animal Testing' (SEURAT), which was presented by the HEALTH theme of the 7th European Framework Programme (FP7) in 2008. SEURAT-1 is co-funded by the European Commission Directorate-General for Research & Innovation and Cosmetics Europe - The Personal Care Association (previously named Colipa), with a total funding of EUR 50 million. The specific goal of HeMiBio is to develop a hepatic microfluidic bioreactor (HeMiBio) from a renewable source of human hepatocytes, hepatic sinusoidal endothelial cells (HSEC) and stellate cells (HSC), suitable for inclusion in a repeated dose toxicity testing strategy for pharmaceuticals and cosmetic ingredients. The successful creation of such a liver device requires (a) homotypic and heterotypic interactions between the three cell types to induce and maintain their functional, differentiated state, and (b) optimisation of the matrix, oxygenation conditions, nutrient transport and physiological shear forces. The objectives are (1) to engineer the cellular components incorporated in the bioreactor to enable specific and spatially defined enrichment of the different cells from iPSC and upcyte® cells (Medicyte, Germany) and, by gene editing, to allow non-invasive monitoring of the cellular state (differentiation and damage); (2) aside from the molecular sensors, to embed an array of electro-chemical sensors in the reactors to assess liver-specific function and cellular health under repeated dose toxicity conditions, dynamically and in a high-throughput way; (3) to build cells and sensors into bioreactors that will be sequentially upgraded from 2D to 3D microfluidic reactors, ultimately allowing full maintenance of mature functional hepatocytes, HSC and HSEC for more than 28 days; (4) as the ultimate goal is to use the device as a human-based alternative to rodent long-term hepatotoxicity studies, to provide proof of concept that the 3D devices reveal the hepatotoxicity of prototypical compounds known to be hepatotoxic in vivo; and (5) to provide evidence, through "-omics" and cell functionality studies, that liver-like cells are present, exposed and affected by the selected toxic compounds. These ambitious objectives will be achieved by a project team composed of academic and industrial partners from seven EU Member States with unique and complementary biology, physiology, toxicology and technical skills.

 

Approaches to analyze the eTOX database from a data mining perspective, Jörg Wichard (Bayer Healthcare)

The eTOX project aims to develop a drug safety database from the pharmaceutical industry's legacy toxicology reports and from public toxicology data. This database is also the starting point for the development of an in silico toxicity prediction system able to estimate the biological properties of a given compound with respect to a wide variety of relevant toxicological endpoints. An analysis of the database with respect to its content is therefore the first step. It starts with simple statistics, followed by the creation of an ontology that covers and unifies almost all included terms.

 

InCroMAP – a tool for the integrated analysis and pathway-centered visualization of cross-omics datasets, Johannes Eichner (University of Tuebingen)

Most of the established omics data visualization tools were specifically designed for the inspection of individual types of genomic features either in the context of their genetic locus (region-based visualization) or their interaction partners (network-based visualization). However, today a heterogeneous inventory of high-throughput techniques exists for the specific detection and quantitative measurement of a wide variety of genomic and epigenomic features. Specialized microarray platforms were developed for global expression analysis of mRNA or micro-RNA transcripts, genome-wide monitoring of promoter methylation status and abundance profiling of particular protein isoforms and modifications. Since complex interactions between multiple layers of gene regulation can only be inferred by integration of omics datasets across multiple platforms, novel analysis tools which provide appropriate visualizations are required.
Here, we present InCroMAP, a tool which was designed for the enrichment analysis and pathway-based visualization of omics datasets in which multiple biological layers were monitored in the same set of samples. In addition to the automatic recognition of identifiers used by the most common microarray manufacturers (e.g., Affymetrix, Agilent, etc.), we support generic formats to enable the import of processed omics data, provided that each measurement can be associated either with a certain gene or with a genomic region. Pathways of interest can then either be automatically downloaded from KEGG or imported from other sources in BioPAX format. Being rendered in an interactive graph viewer, the pathway nodes (i.e., genes) can be overlaid with expression data from mRNAs and multiple protein products. Additionally, miRNAs can be connected to a given pathway based on experimentally confirmed or predicted interactions with their target mRNAs. If desired, the tool also visualizes differential methylation of proximal gene promoters, which is by default computed based on the largest peak observed in the upstream region of the transcription start site.
In a typical use case of InCroMAP, the user first imports their preprocessed multi-level omics data in tabular format. Then, the deregulated genes for each platform are determined based on appropriate cutoffs, and relevant pathways related to the biological background of the experiments are inferred. For this purpose, InCroMAP employs a dedicated pathway enrichment algorithm which integrates deregulated genes across multiple platforms. The resulting pathways can then be selected for further visual inspection from a table in which each pathway is associated with a significance value. Alternatively, the metabolic overview function of InCroMAP can be used to generate an interactive global map of cellular metabolism, in which each subordinate metabolic pathway is colored according to the significance of its enrichment with deregulated genes. InCroMAP is freely available under LGPL3 and can be downloaded from www.cogsys.cs.uni-tuebingen.de/software/InCroMAP/.
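
As a simplified sketch of one step described above, assigning promoter methylation to a gene from the largest peak upstream of its transcription start site, the following uses invented coordinates, peaks and window size; it is not InCroMAP's implementation.

    # Assign differential promoter methylation to genes by picking the largest
    # methylation peak within a fixed window upstream of the TSS.
    # Coordinates, window size and example data are illustrative assumptions.
    UPSTREAM_WINDOW = 2000  # base pairs upstream of the TSS

    genes = {              # gene -> (chromosome, TSS position, strand)
        "GeneA": ("chr1", 10_000, "+"),
        "GeneB": ("chr1", 50_000, "-"),
    }
    peaks = [              # (chromosome, position, methylation difference)
        ("chr1", 9_200, 0.35),
        ("chr1", 9_800, 0.60),
        ("chr1", 51_500, -0.40),
    ]

    def promoter_methylation(gene):
        chrom, tss, strand = genes[gene]
        if strand == "+":
            lo, hi = tss - UPSTREAM_WINDOW, tss
        else:
            lo, hi = tss, tss + UPSTREAM_WINDOW
        in_window = [d for c, pos, d in peaks if c == chrom and lo <= pos <= hi]
        # "largest peak" here means the largest absolute methylation difference
        return max(in_window, key=abs, default=None)

    for name in genes:
        print(name, promoter_methylation(name))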

Tuesday, October 1, 2013

09:00 - Section B. Open Data, Open Source, and Open Standards for Toxicology, chaired by Egon Willighagen (Maastricht University)

AMBIT Web services: chemical data and models via OpenTox API, Nina Jeliazkova (IdeaConsult Ltd)

The AMBIT web services package is one of several existing independent implementations of the OpenTox API, providing data sharing and remote calculation capabilities. Initially a standalone application, it was considerably enhanced within the framework of the OpenTox project (Hardy et al., 2010), adding the ability to describe data, algorithm and model resources via corresponding ontologies and to build QSAR models via OpenTox API-compliant REST web services. All Toxtree modules for predicting the toxicological hazard of chemical compounds are also integrated within this package and are available as web services and web pages. AMBIT includes web service wrappers for external software packages, providing means for launching external (local or remote) calculation procedures and storing the results in the database. Examples include MOPAC, descriptor calculations, a large number of statistical procedures and machine learning algorithms, remote (third-party) web services, and new algorithms for tautomer generation and fast identification of activity cliffs.
AMBIT is an open source project, distributed as a web archive (war file), and can be deployed in an Apache Tomcat application server or any other compatible servlet container. AMBIT offers both a graphical interface via web pages and a programmatic interface, which are complementary options for end users working directly with the system and for developers who consider using AMBIT search, import/export and modelling functionality via scripts and workflows. This allows the development of multiple user interfaces, depending on specific requirements, while still using the same AMBIT package in the background. Among the recent developments using this approach are the redesigned QMRF repository (JRC, Italy) and the Xenobiotics Metabolism Database (XMetDB, Uppsala University).
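
The sketch below shows the general pattern of calling such an OpenTox-style REST service from a script; the base URL, resource path and parameter names are placeholders and should be checked against the AMBIT and OpenTox API documentation.

    # Generic pattern for querying an OpenTox-style REST service.
    # The base URL, resource path and parameter names are placeholders;
    # consult the AMBIT / OpenTox API documentation for the real resources.
    import requests

    BASE_URL = "https://ambit.example.org"   # placeholder for an AMBIT deployment

    def search_compounds(query, max_hits=10):
        """Search compounds and request the result as JSON."""
        resp = requests.get(f"{BASE_URL}/query/compound/search/all",
                            params={"search": query, "max": max_hits},
                            headers={"Accept": "application/json"},
                            timeout=30)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        hits = search_compounds("benzene")
        print(hits)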

Chemical decision support in toxicology and pharmacology, Ola Spjuth (Uppsala University)

This presentation will cover our research on chemical decision support in predictive toxicology and pharmacology, as well as other related open source and open data projects. Central to the presentation are the recent developments of the Bioclipse workbench in this field, including the Decision Support feature and the connection with OpenTox, which provides access to download and publish data and to consume predictive models using the OpenTox infrastructure.

Phenotype Database, Jildau Bouwman (TNO) (see Slides PDF)

Storing data in a structured and standardized way facilitates the comparison and meta-analysis of studies. The Phenotype Database (www.dbxp.org and test.dbnp.org) is a web-based application/database that is especially designed to store complex study designs such as cross-over designs. It contains templates which make it possible to customize the system, allowing the flexibility to capture all information available within a study (all metadata), and contains links to ontologies to ensure standardization of terms. Different types of data, including transcriptomics, clinical chemistry and proteomics, can be stored in platform-specific modules (for some data types, analysis and processing pipelines are available). The Phenotype Database software can be downloaded (https://github.com/PhenotypeFoundation/GSCF) and installed on your own server, and developers can adapt the open-source code. The Phenotype Database is used in a number of disciplines, where its generic components (a strong emphasis on metadata description, standardization, complete owner control, facilitating open access, and full support for multi-omics, in vivo and in vitro studies) are complemented with discipline-specific aspects. The nutritional instance of the system can be found at studies.dbnp.org and is used in several European projects (NU-AGE, COSMOS, Euro-DISH, Bioclaims, Nutritech). The owner of a study is in control of who can view the study (other persons or the whole world). The tox version is integrated with DIAMONDS (an infrastructure for data integration and analysis of computational chemistry data, together with toxicogenomics and molecular toxicology data). We will show that the system makes it possible to compare challenges across different studies, allowing us to answer questions that could not be answered by data from a single study alone.

Assessing compound carcinogenicity in vitro using connectivity mapping, Florian Caiment (Maastricht University)

One of the main challenges of toxicology is the accurate prediction of compound carcinogenicity. The default test model for assessing chemical carcinogenicity, the 2-year rodent cancer bioassay, is currently criticized because of its limited specificity. With increased societal attention and new legislation against animal testing, toxicologists urgently need an alternative to the current rodent bioassays for chemical cancer risk assessment. Toxicogenomics approaches propose to use global high-throughput technologies (transcriptomics, proteomics and metabolomics) to study the toxic effects of compounds on a biological system. Here, using only open data repositories (Open TG-GATEs, ArrayExpress, ...), we demonstrate the improvement of a transcriptomics assay based on primary human hepatocytes for predicting the putative liver carcinogenicity of several compounds by applying the connectivity mapping methodology (using the open sscMAP software). Our analyses underline that connectivity mapping is useful for predicting compound carcinogenicity by connecting in vivo expression profiles from human cancer tissue samples with in vitro toxicogenomics datasets. Furthermore, the importance of time and dose effects on carcinogenicity prediction is demonstrated, with the best predictions obtained for low dose and 24 hours of exposure to potential carcinogens.
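
To make the connectivity-mapping idea concrete, here is a deliberately simplified rank-based score between a query signature (up- and down-regulated genes) and a reference expression profile; it is not the sscMAP statistic, and the gene lists are invented.

    # Simplified connectivity score: do the query's up-regulated genes sit near
    # the top of the reference ranking and the down-regulated genes near the
    # bottom? This is an illustrative stand-in for the sscMAP statistic.

    # Reference profile: genes ordered from most up- to most down-regulated.
    reference_ranking = ["G5", "G1", "G9", "G2", "G7", "G3", "G8", "G4", "G6", "G10"]
    query_up = {"G5", "G9"}     # up-regulated in the query signature
    query_down = {"G4", "G10"}  # down-regulated in the query signature

    rank = {gene: i for i, gene in enumerate(reference_ranking)}  # 0 = most up
    n = len(reference_ranking)

    def mean_rank(genes):
        found = [rank[g] for g in genes if g in rank]
        return sum(found) / len(found)

    # Positive score: the reference profile mimics the query signature;
    # negative score: the reference profile reverses it.
    score = (mean_rank(query_down) - mean_rank(query_up)) / (n - 1)
    print(f"connectivity score = {score:+.2f}")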

Reference
Caiment et al., Assessing compound carcinogenicity in vitro using connectivity mapping, Carcinogenesis, 2013

 

The ChEMBL Database: Open data for use in Toxicity Prediction, Anne Hersey (EMBL-EBI)

The ChEMBL database (https://www.ebi.ac.uk/chembldb) is an open and freely available database containing quantitative bioactivity data manually extracted from the primary medicinal chemistry literature (1), which provides a useful resource for scientists working in chemical biology. The extracted data include bioactivity data from in vitro assays for compounds binding to protein targets, data from functional assays such as cell-based or whole-organism studies, in vivo pharmacology assays, and data from experiments to determine the ADMET properties of molecules. The bioactivity data are linked to a compound's chemical structure and can hence be used to build structure-activity relationships (SAR). As well as utilising the data to identify compounds that bind to therapeutic targets of interest, the ChEMBL database is being used to identify potential liability targets and to better understand the relationship between observed toxicities and chemical structures. This talk will cover an analysis of the ChEMBL data related to toxicology and the work that is currently in progress to extend the ChEMBL data to include information from the later phases of drug development, from pre-clinical and clinical development to marketed drugs.
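
The snippet below illustrates fetching bioactivity records for a target from the ChEMBL web services; the resource path and field names are given as plausible examples and should be verified against the current web-service documentation.

    # Fetch bioactivity data for a target from the ChEMBL web services.
    # Resource paths and field names are given as plausible examples and should
    # be verified against the current ChEMBL web-service documentation.
    import requests

    BASE_URL = "https://www.ebi.ac.uk/chembl/api/data"

    def fetch_activities(target_chembl_id, limit=20):
        resp = requests.get(f"{BASE_URL}/activity.json",
                            params={"target_chembl_id": target_chembl_id,
                                    "limit": limit},
                            timeout=30)
        resp.raise_for_status()
        return resp.json().get("activities", [])

    if __name__ == "__main__":
        for act in fetch_activities("CHEMBL240"):   # hERG, a common liability target
            print(act.get("molecule_chembl_id"),
                  act.get("standard_type"),
                  act.get("standard_value"),
                  act.get("standard_units"))
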
Reference
(1) ChEMBL: a large-scale bioactivity database for drug discovery, Nucleic Acids Res. 2012 January; 40(D1): D1100–D1107, Anna Gaulton, Louisa J. Bellis, A. Patricia Bento, Jon Chambers, Mark Davies, Anne Hersey, Yvonne Light, Shaun McGlinchey, David Michalovich, Bissan Al-Lazikani, and John P. Overington

11:30 - Section D. Systems Biology & Predictive Toxicology, chaired by Jürgen Borlak (Hannover Medical School)

Application of toxicogenomics and TG-GATEs database for drug safety screening, Takeki Uehara (Shionogi Pharmaceutical Research Center)

Toxicogenomics is a promising approach for identifying compounds with potential safety problems based on genome-wide gene expression profiles. In general, toxicogenomics has been applied in two broad research areas: mechanistic research and predictive toxicology. The toxicogenomic approach facilitates investigations of the molecular mode of action of toxic substances. The predictive toxicology approach, using predictive biomarker gene signatures, enables the efficient selection of drug candidates at an early stage of drug development, resulting in a significant reduction in the time and cost associated with the development of new molecular entities. In Japan, the Toxicogenomics Project (TGP; Uehara et al., 2010), a collaborative consortium involving the National Institute of Health Sciences, the National Institute of Biomedical Innovation, and 15 pharmaceutical companies, has constructed a large-scale toxicogenomics database named TG-GATEs. Data from about 20,000 microarrays were generated through both in vitro and in vivo experiments for over 150 compounds. Specifically, in the single-dose study, rats were treated at one of three dose levels with concurrent controls and sacrificed at 3, 6, 9, or 24 h after a single administration. In the repeated-dose study, rats were also treated at the three dose levels with concurrent controls, but sacrificed 24 h after the last dose of repeated administration for 3, 7, 14, or 28 days. Other data obtained include histopathological examination, blood chemistry, hematology, body weight, organ weight, and general symptoms. In vitro datasets contain information on the effects on rat and human hepatocytes of exposure to the same compounds. The online version of this database, Open TG-GATEs, is now available for free public download (toxico.nibio.go.jp/open-tggates/english/search.html). In this session, I will share our efforts in toxicogenomics research, with particular focus on the development of the TG-GATEs database and the practical application of toxicogenomics to drug safety screening.
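
Below is a small sketch of how the dose and time design described above might be filtered once an Open TG-GATEs sample attribute table has been downloaded; the file name and column labels are assumptions and may differ from the actual download.

    # Filter a downloaded Open TG-GATEs sample attribute table by compound,
    # dose level and sacrifice time. File name and column labels are assumed
    # for illustration and may differ from the actual download.
    import pandas as pd

    attributes = pd.read_csv("open_tggates_attributes.csv")   # hypothetical file

    subset = attributes[
        (attributes["COMPOUND_NAME"] == "acetaminophen")
        & (attributes["DOSE_LEVEL"] == "High")
        & (attributes["SACRIFICE_PERIOD"] == "24 hr")
        & (attributes["ORGAN"] == "Liver")
    ]
    print(f"{len(subset)} arrays match the selected design point")
    print(subset[["BARCODE", "SINGLE_REPEAT_TYPE"]].head())
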
Reference
Uehara T, Ono A, Maruyama T, et al. The Japanese toxicogenomics project: application of toxicogenomics. Mol Nutr Food Res. 2010; 54:218–27.

 

The Adverse Outcome Pathways Knowledge Base, Clemens Wittwehr (EU Joint Research Centre)

An Adverse Outcome Pathway (AOP) is an analytical construct that describes a sequential chain of causally linked events at different levels of biological organization that lead to an adverse health or ecotoxicological effect. AOPs are the central element of a toxicological knowledge framework being built to support chemical risk assessment based on mechanistic reasoning.

The AOP Knowledge Base (AOP-KB) is an IT system to capture, manage, share and add value to AOP information; the AOP-KB will consist of three modules:

• AOP Wiki, a text-based tool allowing the management of AOP-related knowledge (AOPs, Key Events, relationships between them) in a Wikipedia-like environment;

• Effectopedia, a graphical tool implementing quantitative models depicting the relationship between two events in an AOP;

• Intermediate Effects DB, an IUCLID-based system that manages information about intermediate effects triggered by chemicals.

The first available module, the AOP Wiki, leads AOP developers through the steps necessary to capture the scientific information needed to document an AOP, following and implementing OECD guidance on how to describe AOPs. The Wiki also provides a collaborative space for groups to develop AOPs independent of geography or organizational boundaries. While the environment is very similar to Wikipedia, a system of user-friendly tables, drop-down boxes and built-in functionality for automatic cross-referencing between related pages ensures that users can focus on content, while IT issues (e.g. web markup tags) are largely kept in the background.
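
Informally, the core structure of an AOP, a molecular initiating event, intermediate key events and an adverse outcome linked by key event relationships, can be sketched in a few lines of data-structure code; the names and fields below are illustrative and do not reflect the AOP-KB schema or the OECD template.

    # Minimal data structure for an Adverse Outcome Pathway: a chain of key
    # events connected by key event relationships. Field names and the example
    # pathway are illustrative, not the AOP-KB or OECD template.
    from dataclasses import dataclass

    @dataclass
    class KeyEvent:
        title: str
        level: str       # e.g. molecular, cellular, organ, organism

    @dataclass
    class KeyEventRelationship:
        upstream: KeyEvent
        downstream: KeyEvent
        evidence: str    # e.g. weak, moderate, strong

    mie = KeyEvent("Covalent protein binding", "molecular")
    ke1 = KeyEvent("Hepatocyte injury", "cellular")
    ao = KeyEvent("Liver fibrosis", "organ")

    aop = [KeyEventRelationship(mie, ke1, "strong"),
           KeyEventRelationship(ke1, ao, "moderate")]

    for ker in aop:
        print(f"{ker.upstream.title} -> {ker.downstream.title} ({ker.evidence})")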

The project is steered by the OECD Extended Advisory Group on Molecular Screening and Toxicogenomics (EAG MST) and is executed by the European Commission's Joint Research Centre (JRC) and the US Environmental Protection Agency (US-EPA).

 

The Systems Biology Simulation Core Library: A numerical method for the quantitative simulation of biochemical reaction networks, Alexander Dörr (University of Tuebingen)

In recent decades, pharmaceutical research has reached a state where drugs are designed in an inventive process supported by high-throughput and computer-aided methods. Despite the financial effort invested in genome sequencing and combinatorial chemistry, the development of new drugs is still impaired by high failure rates during clinical studies. Such setbacks are not necessarily caused by the binding properties of ligands at their targets but rather by insufficient pharmacokinetic properties and toxicity. In order to deal with this problem, the elimination process of drugs has to be assessed with dynamic models. To facilitate the sharing and reuse of such models, they can be encoded in XML-based standard description formats such as the Systems Biology Markup Language (SBML). For sound predictive capabilities, models have to be provided with kinetic equations and parameters that enable a simulation of their state over time. The quantitative results of a simulation express the change in the concentration of each metabolite in a model. This change can then be illustrated with charts and diagrams. Hence, the metabolism of an organism or the ADME behavior of drugs can be observed under different conditions.

An efficient and accurate algorithm is crucial for the simulation of dynamical systems. To this end, we derived a new algorithm, with its mathematical description, for the accurate interpretation and simulation of all currently existing levels and versions of SBML. To demonstrate the usefulness of the algorithm, we introduce an exhaustive reference implementation in Java that efficiently solves SBML models in terms of an ordinary differential equation framework. The Systems Biology Simulation Core Library is a platform-independent, well-tested, generic open-source library. The library is completely decoupled from any graphical user interface and can therefore easily be integrated into third-party programs. It includes several ordinary differential equation (ODE) solvers and an interpreter for SBML models. As an important design feature, the algorithm can be combined with existing numerical solvers in a plugin fashion, and it can easily be integrated into larger customized applications. The abstract class structure of the library supports the integration of further model formats, such as CellML, in addition to its SBML implementation. The algorithm has been successfully tested with the entire SBML Test Suite version 2.3.2 and all models of BioModels Database (release 23, October 2012).
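
The library itself is written in Java; as a language-neutral illustration of the kind of numerical task it performs for a reaction network, here is a toy two-species system integrated with SciPy, with invented species and rate constants.

    # Toy reaction network S -> P with Michaelis-Menten kinetics, integrated as
    # an ODE system. This only illustrates the numerical task the Simulation
    # Core Library solves for SBML models; constants and species are invented.
    import numpy as np
    from scipy.integrate import solve_ivp

    VMAX, KM = 1.0, 0.5          # illustrative kinetic parameters

    def rhs(t, y):
        s, p = y                 # substrate and product concentrations
        v = VMAX * s / (KM + s)  # Michaelis-Menten rate
        return [-v, v]

    solution = solve_ivp(rhs, t_span=(0.0, 10.0), y0=[2.0, 0.0],
                         t_eval=np.linspace(0.0, 10.0, 11))
    for t, s, p in zip(solution.t, *solution.y):
        print(f"t={t:4.1f}  [S]={s:.3f}  [P]={p:.3f}")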

 

14:00 - Section D. Systems Biology & Predictive Toxicology, chaired by Jürgen Borlak (Hannover Medical School)

Integrated Analysis of Toxicology Data supported by ToxBank, Barry Hardy (Douglas Connect) (see Slides PDF)

The SEURAT-1 (Safety Evaluation Ultimately Replacing Animal Testing-1) research cluster comprises seven EU FP7 Health projects and is co-financed by Cosmetics Europe. The aim is to generate a proof of concept showing how the latest in vitro and in silico technologies, systems toxicology and toxicogenomics can be combined to deliver a test replacement for repeated dose systemic toxicity testing on animals. The SEURAT-1 strategy is to adopt a mode-of-action framework to describe repeated dose toxicity and to derive predictions of in vivo toxicity responses. ToxBank is the cross-cluster infrastructure project whose activities include the development of a data warehouse providing a web-accessible shared repository of research data and protocols. Experiments are generating dose-response data over multiple timepoints using different omics platforms, including transcriptomics, proteomics, metabolomics and epigenetics, over a variety of cell lines and a common set of reference compounds. Experimental data are also being generated from functional assays and bioreactors, and supplemented with in silico approaches including kinetic information. This complex and heterogeneous data is being consolidated and harmonized through the ToxBank data warehouse and organized so as to perform an integrated data analysis and ultimately predict repeated dose systemic toxicity. Core technologies used include the ISA-Tab universal data exchange format, Representational State Transfer (REST) web services, the W3C Resource Description Framework (RDF) and the OpenTox standards. We describe the design of the data warehouse based on cluster requirements and its implementation based on open standards, and illustrate the approach using a data analysis case study.

 

Human Embryonic Stem Cell-Derived Hepatocytes as a Predictive Model for Drug Screening, Seokjoo Yoon (Korea Institute of Toxicology)

The substantial interest in human pluripotent stem cell-derived hepatocyte-like cells (HLCs) reflects the indispensable need for in vitro hepatocyte models, especially for toxicological assessment during drug development. Unexpected toxicity is one of the major causes of removal of potential drug candidates from pharmaceutical projects, and drug-induced human hepatotoxicity is the most frequently occurring reason among drug molecules. To date, the established in vitro tools have demonstrated a lack of predictive power, and no cell lines are available that reflect the complexity and function of the human liver. The recent progress in stem cell technologies offers a promising alternative for a variety of hepatic assays used in the drug development process, including toxicity assessment. In the last few years, several groups have established efficient protocols for the differentiation of human embryonic stem cells (hESCs) and induced pluripotent stem cells (iPSCs) into functional HLCs. Although a variety of hepatic functions, including albumin secretion, glycogen storage, hepatic uptake, and cytochrome P450 induction, have been demonstrated in human pluripotent stem cell (hPSC)-derived HLCs, the drug metabolism activity of hPSC-derived HLCs has been poorly evaluated. Therefore, determining the gene expression and enzymatic activities of detoxifying enzymes in hPSC-derived HLCs is an essential prerequisite for their use in toxicity testing. Here, we compared the expression of drug-metabolizing enzymes between hESC-derived HLCs and hepatocellular carcinoma cell (HCC) lines. Expression of phase I and phase II enzymes and phase III transporters involved in xenobiotic metabolism was observed both in hESC-derived HLCs and in HCCs. However, some key detoxifying enzymes, such as GSTA1 and GSTP1, were prominently expressed only in hESC-derived HLCs. These results indicate that hESC-derived HLCs may be a useful source for xenobiotic metabolism and toxicity prediction in drug screening.

 

Wednesday, October 2, 2013

09:00 - Section C. Visualization & Visual Analytics, chaired by Andreas Karwath (Johannes Gutenberg University of Mainz)

The Chemical Space Project, Jean-Louis Reymond (University of Berne)

Organic molecules consist of covalently bound atoms of carbon, hydrogen, oxygen, nitrogen, halogens, and a few other elements (S, P, Si). The ensemble of all possible molecules forms the chemical universe, or chemical space, which is believed to contain at least 10^60 molecules up to a MW of 500 Da of possible interest for drug discovery. Our aim is to explore this chemical space by enumeration, virtual screening and chemical synthesis to identify new drugs. I will discuss some of our latest advances: 1) enumerating chemical space by assembly of the chemical universe databases GDB; 2) visualisation of the chemical space of large databases in 2D and 3D using the MQN-mapplet application; 3) tools for browsing chemical space for predictive polypharmacology.
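
As a hedged illustration of the counted-feature idea behind MQN-style descriptors (the published MQN set comprises 42 integer counts), the sketch below computes a handful of simple counts with RDKit; the particular selection of counts is illustrative, not the published MQN definition.

    # A few simple integer descriptors in the spirit of MQN (molecular quantum
    # numbers). The real MQN set comprises 42 counts; the selection below is an
    # illustrative subset chosen for this sketch, not the published definition.
    from rdkit import Chem
    from rdkit.Chem import rdMolDescriptors

    def simple_counts(smiles):
        mol = Chem.MolFromSmiles(smiles)
        return {
            "heavy_atoms": mol.GetNumHeavyAtoms(),
            "rings": rdMolDescriptors.CalcNumRings(mol),
            "hbond_donors": rdMolDescriptors.CalcNumHBD(mol),
            "hbond_acceptors": rdMolDescriptors.CalcNumHBA(mol),
            "rotatable_bonds": rdMolDescriptors.CalcNumRotatableBonds(mol),
        }

    print(simple_counts("CC(=O)Oc1ccccc1C(=O)O"))   # aspirin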

 

Visual Analytics for the Comparison of Chemical and Biologic Data, Tatiana von Landesberger (Technische Universität Darmstadt)

In many research areas, multiple differing data sets need to be analyzed simultaneously in a comparative way, in particular to highlight differences between them, which can sometimes be subtle. A prominent example is the analysis of so-called phylogenetic trees in biology, describing hierarchical evolutionary relationships among a set of organisms. The simultaneous analysis of a collection of such trees leads to more insight into the evolutionary process. Another example is the comparison of chemical structures and their properties. In this talk, I present visual analytics approaches for the comparison of multiple phylogenetic trees and chemical structures. These approaches were developed in close cooperation with experts from the evolutionary biology domain.

 

Visual Analysis of Chemical Space with Scaffold Hunter, Nils Kriege (TU Dortmund)

The development of new drugs is a challenging and often tedious task. In recent years, an increasing amount of chemical and biological activity data has become available, e.g., in in-house or public databases. The drug discovery process can greatly benefit from computational methods to efficiently analyze these datasets. An essential problem is to elucidate the complex relationship between the structure of compounds and their properties, such as bioactivity or toxic side effects. This task is often not well suited to fully automated methods, but can greatly benefit from software tools that foster the systematic visual exploration of compound and bioactivity data. Scaffold Hunter is a highly interactive tool for the visual analysis of chemical compound datasets from a variety of sources; it provides analysis and integrated visualization of the associated chemical and biological activity space.
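
Scaffold Hunter organizes compounds by their scaffolds; as a small illustration of that underlying idea (not Scaffold Hunter code), the sketch below extracts Bemis-Murcko scaffolds with RDKit for a few example structures.

    # Group compounds by their Bemis-Murcko scaffold, the structural notion that
    # scaffold-based tools build on. Example SMILES are arbitrary; this is an
    # illustration of the concept, not Scaffold Hunter itself.
    from collections import defaultdict
    from rdkit import Chem
    from rdkit.Chem.Scaffolds import MurckoScaffold

    smiles_list = ["CC(=O)Nc1ccc(O)cc1",        # paracetamol
                   "CC(=O)Oc1ccccc1C(=O)O",     # aspirin
                   "O=C(O)c1ccccc1O"]           # salicylic acid

    by_scaffold = defaultdict(list)
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        scaffold = Chem.MolToSmiles(MurckoScaffold.GetScaffoldForMol(mol))
        by_scaffold[scaffold].append(smi)

    for scaffold, members in by_scaffold.items():
        print(scaffold or "(acyclic)", "->", members)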

CheS-Mapper: New Developments, Martin Gütlein (University of Freiburg)

Visualization is essential when analyzing chemical datasets in order to understand the relationship between the structure of chemical compounds, their physico-chemical properties, and biological or toxic effects. CheS-Mapper is a 3D molecular viewer that arranges a dataset in 3D space such that compounds with similar feature values are located close to each other. To this end, CheS-Mapper integrates various feature computation methods and multiple 3D embedding tools. Further available algorithms allow clustering, 3D building and 3D alignment of the chemical compounds.
CheS-Mapper is compatible with OpenTox and can directly load datasets from dataset web services.
The presentation will include a live demonstration of the software on artificial and real-world data. We will introduce CheS-Mapper and the underlying workflow, and present new developments such as extended highlighting techniques and the integration into the KNIME workbench.
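
The embedding step can be illustrated independently of CheS-Mapper: given a numeric feature matrix, a multidimensional scaling projection into three dimensions places compounds with similar features close together. The sketch below uses scikit-learn on a random, purely illustrative feature matrix.

    # Project a compound feature matrix into 3D so that similar compounds end up
    # close together, the kind of embedding a 3D chemical-space viewer relies on.
    # The feature matrix is random and purely illustrative.
    import numpy as np
    from sklearn.manifold import MDS

    rng = np.random.default_rng(0)
    features = rng.normal(size=(20, 8))        # 20 compounds, 8 numeric features

    embedding = MDS(n_components=3, random_state=0)
    coords_3d = embedding.fit_transform(features)

    print(coords_3d.shape)        # (20, 3): x, y, z coordinates per compound
    print(coords_3d[:3].round(2)) # coordinates of the first three compounds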

 

11:15 - Section E2. Innovative Developments in Predictive Toxicology, chaired by Barry Hardy (Douglas Connect) and Stefan Kramer (Johannes Gutenberg University of Mainz)

Integration of molecular detail from OMICS-technologies for prediction of toxicity, André Schrattenholz (ProteoSys AG)

Toxic interventions produce quite diverse phenotypic effects, which are accessible to measurement with modern OMICS technologies. These measurements cover a variety of nucleic acid species (RNAs and DNA) as well as mass spectrometry readouts (metabolites, proteins, lipids, sugars). Toxic effects depend on individual genetic, epigenetic and environmental predispositions and conditions.
Epigenetic and environmental effects are to a large extent present on the proteomic level. Proteins also show the most immediate molecular effects in time-resolved experiments. The kinetics of cellular responses requires special attention with regard to collection of OMICS data and in particular proteomic data by mass spectrometry-based methods, with its own set of quantitative, statistical and bioinformatic necessities. Data types and formats, data models and ontologies and interface solutions for integration will be discussed.
Working with human in vitro models requires the establishment of kinetically controlled SOPs for sample generation, processing and storage, and of metadata tracking frameworks (e.g. ISA-TAB). The aim is to define sets of biomarkers exactly describing molecular initiating events and the downstream key events which eventually cause adverse outcomes. In terms of molecular biomarkers, in a first stage a thorough statistical analysis of raw data will reveal consistent quantitative data signatures that are plausible across samples and conditions. This is pivotal to exclude effects of contamination and unrelated biological activity. A clear definition of biological and data acquisition criteria will result in the selection of validated data sets. In a second stage, the whole validated data set or selected subgroups of biomarker candidates will be searched according to biological criteria. In the toxicological projects investigated so far (SEURAT-1, ReProTect), one of the hallmarks was oxidative stress, contributing to a cascade of specific posttranslational modifications (oxidation, glycation), some of them directly accessible by mass spectrometry (e.g. the N-formylkynurenine modification).
Obviously, the number of pathways is relatively limited, and these pathways are organized in flexible and redundant feedback systems. Certain layers of omics analyses reflect the kinetics of reactions in these pathways of stress and escape responses better than others. Predictive modelling will require adequate incorporation of kinetic information and treatment of feedback and feed-forward mechanisms.

 

DNA Repair and Damage Response Following Exposure of Cells to Alkylating Carcinogens, Bernd Kaina (University Medical Center, Mainz)

Alkylating carcinogens are widely distributed in the environment and are present in food, beverages and tobacco. They are also formed endogenously in the stomach and gut. These agents induce a dozen different DNA lesions, some of which have been identified as carcinogenic, clastogenic, recombinogenic and cytotoxic. A critical DNA adduct is O6-methylguanine (O6MeG). This damage causes mutations and is responsible for most of the carcinogenic effects of simple alkylating agents. At the same time, O6MeG is a highly powerful cytotoxic lesion, giving rise to the induction of apoptosis, necrosis and autophagy. The damage is repaired by the suicide enzyme alkyltransferase (MGMT), which is a very important first-line defense mechanism and a biomarker of alkylating drug resistance, both in normal tissue and in tumors (it therefore also plays a key role in tumor therapy). MGMT knockout mice respond to alkylating agent treatment with a high yield of colon cancer. The same is true for MPG knockout mice defective in base excision repair, indicating that not only O6MeG but also non-repaired N-alkylation lesions give rise to mutations and cancer. Elimination of pre-transformed cells by apoptosis counteracts this process. We have shown that O6MeG is a very powerful trigger of apoptosis, which is executed via the death receptor and the mitochondrial damage pathway. The apoptotic response is triggered downstream by DNA double-strand breaks (DSBs) that are formed during the mismatch repair-dependent processing of O6MeG. These O6MeG-induced DSBs are repaired by homologous recombination (HR), which is a second-line defense against O6MeG-triggered cell death. Other players involved in DSB recognition and HR are NBS-1, ATM, Rad51, XRCC2 and XRCC3. In some cell types, the efficiency of O6MeG in triggering the p53-dependent death receptor pathway, which rests on p53-driven death receptor upregulation, is higher than that of the p53-independent endogenous mitochondrial pathway. However, p53 is also able to upregulate DNA repair genes, thus protecting against mutations and cell death. The implications for human defense against environmental carcinogens will be discussed. This work was supported by DFG KA724 and Deutsche Krebshilfe.
References: Batista et al. (2007) Cancer Res. 67, 11886-95; Naumann et al. (2009) Br. J. Cancer, 100, 322-33; Quiros et al. (2010) Cell Cycle, 9, 168-78; Roos et al. (2011) Cancer Res. 71, 4150-60; Barckhausen et al. (2013) Oncogene in press; Eich et al. (2013) Mol. Cancer Ther., in press.
