In this work, models of molecular networks based on ordinary differential equations are extended by terms that describe the interaction of the network with the environment it is embedded in. These terms model the effects of external stimuli on the molecular network. The usefulness of this extension is demonstrated with a model of a circadian clock which, extended with such terms, reproduces data from several experiments at the same time.
Once a model including external stimuli is set up, a framework is developed to calculate external stimuli that have a predefined, desired effect on the molecular network. For this purpose, the task of finding appropriate external stimuli is formulated as a mathematical optimal control problem, for which a broad range of solution methods is available. Several of these methods are discussed and worked out in order to compute a solution of the corresponding optimal control problem. The application of the framework to finding pharmacological intervention points or effective drug combinations is pointed out and discussed. Furthermore, the framework is related to existing network analysis tools, and their combination with the framework for finding dedicated external stimuli is discussed.
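In general terms, the stimulus-design task described here can be written as an optimal control problem of the following form (a generic sketch; the quadratic cost, the input matrix B and the target trajectory x* are illustrative assumptions rather than the specific formulation used in the thesis):

\[
\min_{u(\cdot)} \int_{0}^{T} \Bigl( \lVert x(t) - x^{*}(t) \rVert^{2} + \alpha\, \lVert u(t) \rVert^{2} \Bigr)\, dt
\quad \text{subject to} \quad
\dot{x}(t) = f\bigl(x(t)\bigr) + B\, u(t), \qquad x(0) = x_{0},
\]

where x(t) collects the concentrations of the network species, f describes the intrinsic network dynamics, u(t) is the vector of external stimuli entering the network through B, x*(t) is the desired expression or activity pattern, and α penalizes stimulus strength.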
The complete framework is verified with biological examples by comparing the calculated results with data from the literature. For this purpose, platelet aggregation is investigated based on a corresponding gene regulatory network, and associated receptors are detected. Furthermore, a transition from one type of T-helper cell to another is analyzed in a tumor setting, where missing agents are calculated that induce the corresponding switch in vitro. Next, a gene regulatory network of a myocardiocyte is investigated, where it is shown how the presented framework can be used to compare different treatment strategies quantitatively with respect to their beneficial and side effects. Moreover, a constitutively activated signaling pathway, which causes deleterious effects, is modeled, and intervention points with corresponding treatment strategies are determined that steer the gene regulatory network from a pathological expression pattern back to a physiological one.
Development and application of computational tools for RNA-Seq based transcriptome annotations
(2019)
In order to understand the regulation of gene expression in organisms, precise genome annotation is essential. In recent years, RNA-Seq has become a potent method for generating and improving genome annotations. However, this approach is time consuming and often inconsistently performed when done manually. In particular, the discovery of non-coding RNAs benefits strongly from the application of RNA-Seq data but requires significant amounts of expert knowledge and is labor-intensive. As part of my doctoral study, I developed a modular tool called ANNOgesic that can detect numerous transcribed genomic features, including non-coding RNAs, based on RNA-Seq data in a precise and automatic fashion, with a focus on bacterial and archaeal species. The software performs numerous analyses and generates several visualizations. It can generate high-resolution annotations that are hard to produce using traditional annotation tools based only on genome sequences. ANNOgesic can detect numerous novel genomic features, such as UTR-derived small non-coding RNAs, for which no other tool had been developed before. ANNOgesic is available under an open source license (ISCL) at https://github.com/Sung-Huan/ANNOgesic.
My doctoral work not only includes the development of ANNOgesic but also its application to annotate the transcriptome of Staphylococcus aureus HG003 - a strain which has been an insightful model in infection biology. Despite its potential as a model, a complete genome sequence and annotations have been lacking for HG003. In order to fill this gap, the annotations of this strain, including sRNAs and their functions, were generated using ANNOgesic by analyzing differential RNA-Seq data from 14 different samples (two media conditions with seven time points), as well as RNA-Seq data generated after transcript fragmentation. ANNOgesic was
also applied to annotate several bacterial and archaeal genomes, and as part of this its high performance was demonstrated. In summary, ANNOgesic is a powerful computational tool for RNA-Seq based annotations and has been successfully applied to several species.
Neurobiology is widely supported by bioinformatics. Due to the large amount of data generated on the biological side, a computational approach is required. This thesis presents four cases of bioinformatic tools applied in the service of neurobiology.
The first two tools presented belong to the field of image processing. In the first case, we make use of an algorithm based on the wavelet transform to assess calcium activity events in cultured neurons. We designed an open source tool to assist neurobiology researchers in the analysis of calcium imaging videos. Such analysis is usually done manually, which is time consuming and highly subjective. Our tool speeds up the work and offers the possibility of an unbiased detection of the calcium events. Even more important, our algorithm detects not only neuronal spiking activity but also local spontaneous activity, which is normally discarded because it is considered irrelevant. We showed that this activity plays a decisive role in neuronal calcium dynamics and is involved in important functions such as signal modulation, memory and learning.
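To make the idea concrete, the following minimal sketch detects transient events in a ΔF/F calcium trace by convolving it with a Ricker (Mexican-hat) wavelet and thresholding the response; the synthetic trace, the wavelet width and the threshold factor are illustrative assumptions, and this is not the published plugin's implementation:

    import numpy as np

    def ricker(points, width):
        # Mexican-hat (Ricker) wavelet, a standard kernel for transient detection
        t = np.arange(points) - (points - 1) / 2.0
        amp = 2.0 / (np.sqrt(3.0 * width) * np.pi ** 0.25)
        return amp * (1.0 - (t / width) ** 2) * np.exp(-t ** 2 / (2.0 * width ** 2))

    def detect_events(trace, width=10, k=5.0):
        # convolve the dF/F trace with the wavelet and flag samples whose response
        # exceeds k robust standard deviations (a toy event detector)
        response = np.convolve(trace, ricker(10 * width, width), mode="same")
        mad = np.median(np.abs(response - np.median(response))) + 1e-12
        above = response > np.median(response) + k * 1.4826 * mad
        return np.where(above[1:] & ~above[:-1])[0] + 1   # onsets of supra-threshold stretches

    # synthetic example: noisy baseline with two calcium transients
    rng = np.random.default_rng(0)
    t = np.arange(2000)
    trace = 0.05 * rng.standard_normal(t.size)
    for onset in (400, 1300):
        trace[onset:] += 0.8 * np.exp(-(t[onset:] - onset) / 80.0)
    print(detect_events(trace))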
The second project is a segmentation task. In our case we are interested in segmenting the neuron nuclei in electron microscopy images of C. elegans. Marking these structures is necessary in order to reconstruct the connectome of the organism. C. elegans is a great study case due to the simplicity of its nervous system (only 302 neurons). This worm, despite its simplicity, has taught us a lot about neuronal mechanisms. There is still a lot of information we can extract from C. elegans, and therein lies the importance of reconstructing its connectome. There is a current version of the C. elegans connectome, but it was produced by hand and on a single subject, which leaves considerable room for errors. By automating the segmentation of the electron microscopy images we guarantee an unbiased approach and will be able to verify the connectome on several subjects.
For the third project we moved from image processing to biological modeling. Because of the high complexity of even small biological systems, it is necessary to analyze them with the help of computational tools. The term in silico was coined to refer to such computational models of biological systems. We designed an in silico model of the TNF (tumor necrosis factor) ligand and its two principal receptors. This biological system is of high relevance because it is involved in the inflammation process. Inflammation is of utmost importance as a protection mechanism, but it can also lead to serious diseases (e.g. cancer). Chronic inflammation processes can be particularly dangerous in the brain. In order to better understand the dynamics that govern the TNF system, we created a model using the BioNetGen language, a rule-based language that allows one to simulate systems where multiple agents are governed by a single rule. Using our model we characterized the TNF system and formulated hypotheses about the relation of the ligand with each of the two receptors. These hypotheses can later be used to define drug targets in the system or possible treatments for chronic inflammation or a lacking inflammatory response.
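As a toy illustration of the kind of ligand-receptor dynamics involved, the sketch below integrates a plain mass-action model of one ligand binding two receptors with scipy; it deliberately uses ordinary ODEs instead of the rule-based BioNetGen formalism of the thesis, and all rate constants and concentrations are made-up values:

    from scipy.integrate import solve_ivp

    # toy mass-action model: ligand L binds receptors R1 and R2, forming complexes C1 and C2
    k_on1, k_off1 = 1.0, 0.1   # illustrative rates for a TNFR1-like receptor
    k_on2, k_off2 = 0.3, 0.5   # illustrative rates for a TNFR2-like receptor

    def rhs(t, y):
        L, R1, R2, C1, C2 = y
        v1 = k_on1 * L * R1 - k_off1 * C1
        v2 = k_on2 * L * R2 - k_off2 * C2
        return [-v1 - v2, -v1, -v2, v1, v2]

    y0 = [1.0, 0.5, 0.5, 0.0, 0.0]        # initial ligand and free receptor levels (arbitrary units)
    sol = solve_ivp(rhs, (0.0, 50.0), y0)
    L, R1, R2, C1, C2 = sol.y[:, -1]
    print(f"late-time occupancy: C1={C1:.3f}, C2={C2:.3f}")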
The final project deals with the protein folding problem. In our organism, proteins are folded all the time, because only in their folded conformation are proteins capable of doing their job (with very few exceptions). This folding process presents a great challenge for science because, in its general formulations, it has been shown to be NP-hard, meaning that no efficient (polynomial-time) algorithm for it is known. Nevertheless, somehow the body is capable of folding a protein in just milliseconds. This phenomenon puzzles not only biologists but also mathematicians. In mathematics, NP-complete problems have been studied for a long time, and it is known that an efficient solution to one of them would yield efficient solutions to all problems in NP. If we managed to understand how nature solves the protein folding problem, we might be able to apply this solution to many other problems. Our research intends to contribute to this discussion, unfortunately not by explaining how nature solves the protein folding problem, but by arguing that it does not solve the general problem at all. This seems contradictory, since the body folds proteins all the time, but our hypothesis is that organisms have learned to solve a simplified version of the problem. Nature does not solve the protein folding problem in its full complexity; it simply solves a small instance of the problem, an instance that is as tractable as a convex optimization problem. We formulate protein folding as an optimization problem to illustrate this claim and present some toy examples to illustrate the formulation. If our hypothesis is true, protein folding is a simple problem, and we then need to understand and model the conditions in the vicinity inside the cell at the moment the folding process occurs. Once we understand this starting conformation and its influence on the folding process, we will be able to design treatments for amyloid diseases such as Alzheimer's and Parkinson's.
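The following toy optimization illustrates the flavour of such a formulation: a short chain of beads in two dimensions with a penalty keeping bonded beads at unit distance and a weak pairwise attraction driving compaction, minimized with a local optimizer; the energy terms and parameters are invented for illustration and do not correspond to the thesis' actual formulation:

    import numpy as np
    from scipy.optimize import minimize

    N, BOND = 8, 1.0            # number of beads and target bond length of the toy chain

    def energy(flat):
        xy = flat.reshape(N, 2)
        bonds = np.linalg.norm(np.diff(xy, axis=0), axis=1)
        e = 100.0 * np.sum((bonds - BOND) ** 2)          # stiff bond-length penalty
        for i in range(N):                                # weak attraction between non-bonded beads
            for j in range(i + 2, N):
                r = np.linalg.norm(xy[i] - xy[j]) + 1e-9
                e += 4.0 * ((0.9 / r) ** 12 - (0.9 / r) ** 6)
        return e

    rng = np.random.default_rng(1)
    start = np.cumsum(rng.normal(scale=0.5, size=(N, 2)), axis=0).ravel()  # random extended chain
    result = minimize(energy, start, method="L-BFGS-B")
    print("final energy:", round(result.fun, 3))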
In summary, this thesis contributes to neurobiology research on four different fronts. Two are practical contributions with immediate benefits: the calcium imaging video analysis tool and the TNF in silico model. The neuron nuclei segmentation is a contribution for the near future, a step towards the full annotation of the C. elegans connectome and later towards the reconstruction of the connectomes of other species. Finally, the protein folding project is a first impulse to change the way we conceive the protein folding process in nature. We try to point future research in a novel direction, in which the amino acid code is not the most relevant characteristic of the process but rather the conditions within the cell.
The field of genetics faces many challenges and opportunities in both research and diagnostics due to the rise of next generation sequencing (NGS), a technology that makes it possible to sequence DNA ever faster and more cheaply.
NGS is not only used to analyze DNA, but also RNA, which is a very similar molecule also present in the cell, in both cases producing large amounts of data.
The large amount of data raises both infrastructure and usability problems: powerful computing infrastructures are required, and the data analysis involves many manual steps that are complicated to execute.
Both of those problems limit the use of NGS in the clinic and research, by producing a bottleneck both computationally and in terms of manpower, as for many analyses geneticists lack the required computing skills.
Over the course of this thesis we investigated how computer science can help to improve this situation and reduce the complexity of this type of analysis.
We looked at how to make the analysis more accessible in order to increase the number of people who can perform OMICS data analysis (OMICS is an umbrella term grouping various genome-scale data sources).
To approach this problem, we developed, in close collaboration with the Human Genetics Department at the University of Würzburg, a graphical NGS data analysis pipeline aimed at a diagnostics environment while still being useful in research.
The pipeline has been used in various research papers covering a range of subjects, including works with direct author participation in genomics, transcriptomics as well as epigenomics.
To further validate the graphical pipeline, a user survey was carried out which confirmed that it lowers the complexity of OMICS data analysis.
We also studied how the data analysis can be improved in terms of computing infrastructure by improving the performance of certain analysis steps.
We did this both in terms of speed improvements on a single computer (with notably variant calling being faster by up to 18 times), as well as with distributed computing to better use an existing infrastructure.
The improvements were integrated into the previously described graphical pipeline, which itself also was focused on low resource usage.
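As a rough sketch of the region-wise parallelization idea behind such speed-ups (a generic illustration using Python's standard library; the per-region worker is a placeholder and not the pipeline's actual variant caller):

    from multiprocessing import Pool

    REGIONS = [f"chr{c}" for c in list(range(1, 23)) + ["X", "Y"]]

    def call_variants(region):
        # placeholder for running a variant caller restricted to one genomic region,
        # e.g. by invoking an external tool with a region argument via subprocess
        return region, 0          # (region, number of variants found)

    if __name__ == "__main__":
        with Pool(processes=8) as pool:
            results = pool.map(call_variants, REGIONS)
        print(f"{len(results)} regions processed,",
              sum(n for _, n in results), "variants in total")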
As a major contribution, and to help with the future development of parallel and distributed applications for use in genetics and elsewhere, we also looked at how to make it easier to develop such applications.
Based on the parallel object programming model (POP), we created a Java language extension called POP-Java, which allows for easy and transparent distribution of objects.
Through this development, we brought the POP model to the cloud and to Hadoop clusters, and we present a new collaborative distributed computing model called FriendComputing.
The advances made in the different domains of this thesis have been published in various works specified in this document.
Biological systems such as cells or whole organisms are governed by complex regulatory networks of transcription factors, hormones and other regulators which determine the behavior of the system depending on internal and external stimuli. In mathematical models of these networks, genes are represented by interacting “nodes” whose “value” represents the activity of the gene.
Control processes in these regulatory networks are challenging to elucidate and quantify. Previous control centrality metrics, which aim to mathematically capture the ability of individual nodes to control biological systems, have been found to suffer from problems regarding biological plausibility.
This thesis presents a new approach to control centrality in biological networks. Three types of network control are distinguished: Total control centrality quantifies the impact of gene mutations and identifies potential pharmacological targets such as genes involved in oncogenesis (e.g. zinc finger protein GLI2 or bone morphogenetic proteins in chondrocytes). Dynamic control centrality describes relaying functions as observed in signaling cascades (e.g. control in mouse colon stem cells). Value control centrality measures the direct influence of the value of a node on the network (e.g. Indian hedgehog as an essential regulator of proliferation in chondrocytes). Well-defined network manipulations define all three centralities not only for nodes, but also for the interactions between them, enabling detailed insights into network pathways.
The calculation of the new metrics is made possible by substantial computational improvements in the simulation algorithms for several widely used mathematical modeling paradigms for genetic regulatory networks, which are implemented in the regulatory network simulation framework Jimena created for this thesis.
Applying the new metrics to biological networks and artificial random networks shows how these mathematical concepts correspond to experimentally verified gene functions and signaling pathways in immunity and cell differentiation. In contrast to controversial previous results, even from the Barabási group, all results indicate that the ability to control biological networks resides in only a few driver nodes characterized by a high number of connections to the rest of the network. Autoregulatory loops strongly increase the controllability of the network, i.e. its ability to control itself, and biological networks are characterized by high controllability in conjunction with high robustness against mutations, a combination that is best achieved in sparsely connected networks with densities (i.e. ratios of connections to nodes) around 2.0 - 3.0.
The new concepts thus considerably narrow the gap between network science and biology and can be used in various areas such as system modeling, plausibility trials and system analyses.
Medical applications discussed in this thesis include the search for oncogenes and pharmacological targets, as well as their functional characterization.
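A toy example of a "value control"-style measurement on a small Boolean network, in the spirit of the metrics described above: each node in turn is clamped to 1, and the Hamming distance between the reached fixed point and the unperturbed fixed point serves as a crude control score (the example network, the synchronous update scheme and the scoring are illustrative assumptions, not the Jimena algorithms themselves):

    # toy Boolean network: each node's update rule is a function of the current state
    RULES = {
        "A": lambda s: s["A"],                    # self-sustaining input node
        "B": lambda s: s["A"] or s["C"],
        "C": lambda s: s["B"] and not s["D"],
        "D": lambda s: s["D"],
    }

    def fixed_state(state, clamp=None, steps=50):
        state = dict(state)
        for _ in range(steps):                    # synchronous updates towards a fixed point
            if clamp:
                state.update(clamp)
            state = {n: int(bool(rule(state))) for n, rule in RULES.items()}
            if clamp:
                state.update(clamp)
        return state

    start = {n: 0 for n in RULES}
    free = fixed_state(start)
    for node in RULES:
        clamped = fixed_state(start, clamp={node: 1})
        impact = sum(free[n] != clamped[n] for n in RULES)   # Hamming distance as control score
        print(node, impact)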
Sequencing technologies are continuously evolving, enabling a previously unattained yield of experimental data as well as the development of experiments that were not feasible before. At the same time, specific databases, algorithms and software programs are being developed to analyze the newly generated data. While investigating bioinformatic methods for the identification and classification of somatic mutations in hematological diseases, a large variety of alternative software tools became apparent that can be used for the individual analysis steps. To date, no standard exists for the efficient analysis of mutations from next-generation sequencing (NGS) data. The different methods and pipelines generate candidates, most of which are identified by all approaches, whereas software-specific candidates are not detected consistently.
To enable a uniform and efficient analysis of NGS data, this work set out to develop a user-friendly and unified pipeline. For this purpose, the essential analyses such as base calling, alignment and the identification of mutations were examined first. Furthermore, various available software tools were tested and evaluated with respect to efficiency and performance, and possible improvements as well as simplifications of the existing analyses were presented and discussed. Through participation in consortia such as the Clinical Research Unit 216 (KFO 216) and the International Cancer Genome Consortium (ICGC), as well as in in-house projects, data sets for the entities multiple myeloma (MM), Burkitt lymphoma (BL) and follicular lymphoma (FL) were generated and analyzed. The selection of suitable software tools and the assembly of the pipeline are based on comparative analyses of these data as well as on results and experiences shared in the literature and in forums. Through the targeted development of scripts, biological and clinical questions could be addressed. These included a uniform annotation of gene names as well as the generation of gene mutation heatmaps from files that do not conform to the variant call file (VCF) syntax. Furthermore, regions of the genome not covered in the NGS data could be identified and analyzed. Based on these results, new projects for the detailed investigation of the distribution of recurrent mutations and functional assays for individual mutation candidates could be initiated.
Custom Python scripts thus extended the functionality of the pipeline and led to important insights in the biological interpretation of the sequencing data, such as the detection of three new molecular subgroups in MM. The extensions of the pipeline developed in this work thereby improved the efficiency of the analysis and the comparability of our data. Furthermore, a dedicated script enabled the analysis of previously disregarded regions in the NGS data.
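As a small illustration of how uncovered regions can be pulled out of a per-base coverage table (for example the output of samtools depth), the sketch below merges consecutive low-coverage positions into intervals; the file format assumption (chromosome, position and depth per line, reported for every position) and the depth threshold are illustrative, and this is not the script developed in the thesis:

    import csv

    MIN_DEPTH = 5   # positions below this depth count as "not covered" (illustrative threshold)

    def uncovered_regions(depth_file):
        # merge consecutive low-coverage positions from a 'chrom<TAB>pos<TAB>depth' table
        # (e.g. 'samtools depth -a' output) into (chrom, start, end) intervals
        regions, current = [], None
        with open(depth_file) as handle:
            for chrom, pos, depth in csv.reader(handle, delimiter="\t"):
                pos, depth = int(pos), int(depth)
                if depth >= MIN_DEPTH:
                    continue
                if current and current[0] == chrom and pos == current[2] + 1:
                    current = (chrom, current[1], pos)
                else:
                    if current:
                        regions.append(current)
                    current = (chrom, pos, pos)
            if current:
                regions.append(current)
        return regions

    # usage: print(uncovered_regions("sample.depth.tsv"))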
The central aim of this work was to use methods from microscopy, image processing and image recognition for the characterization of various phytoplankton organisms, in order to improve and simplify their analysis.
The first focus of the work was the analysis of phytoplankton communities, which serve as a marker in the assessment of freshwater quality. The conventional analysis is very laborious, as it is still carried out entirely by hand and requires specially trained personnel. The goal was to build a system for automated recognition in order to simplify the analysis. With the help of automated microscopy, plankton organisms of different extents could be captured better in a single image by integrating several focal planes. In addition, various fluorescence properties were integrated into the analysis. With a plugin developed for ImageJ, organisms can be separated from the image background and a large number of features can be calculated. Training neural networks makes it possible to discriminate different groups of plankton taxa. Moreover, further taxa can easily be integrated into the analysis and the recognition can be extended. The first analysis of mixed samples consisting of 10 different taxa showed an average recognition rate of 94.7% and an average false-positive rate of 5.5%. Compared with existing systems, the recognition rate could be improved and the false-positive rate clearly reduced. When the data set was extended to 22 taxa, care was taken to include species that pass through different stages in their growth or show higher similarities to the species already present, in order to reveal possible weaknesses of the system. This yielded a good recognition rate (86.8%), while the exclusion of non-planktonic particles (11.9%) was further improved. The comparison with other classification methods showed that neural networks are superior to other approaches for this problem. Similarly good classification rates could be achieved with support vector machines; however, these were clearly inferior to the neural network in discriminating unknown particles.
The second section presents the development of a simple method for the viability analysis of cyanobacteria that requires no further treatment of the samples. The red chlorophyll autofluorescence is used as a marker for living cells and a green unspecific fluorescence as a marker for dead cells. The assay was established and validated with the model organism Synechocystis sp. PCC 6803. The selection of a suitable filter set makes it possible to excite and observe both signals simultaneously and thus to distinguish directly between living and dead cells. The results obtained during the establishment of the assay were confirmed by plating, chlorophyll determination and measurement of the absorption spectrum. The use of automated microscopy and a newly written ImageJ plugin enabled a very accurate and fast analysis of the samples. Its use for monitoring a mutagenized culture aimed at increased temperature tolerance provided accurate and timely insights into the state of the culture. Further results indicate that the combination with absorption spectra may provide better insights into the vitality of the culture.
Localization microscopy is a class of super-resolution fluorescence microscopy techniques. Localization microscopy methods are characterized by stochastic temporal isolation of fluorophore emission, i.e., making the fluorophores blink such that no two fluorophores close to each other are likely to be photoactive at the same time. Well-known localization microscopy methods include dSTORM, STORM, PALM, FPALM, and GSDIM. The biological community has taken great interest in localization microscopy, since it can enhance the resolution of common fluorescence microscopy by an order of magnitude at little experimental cost.
However, localization microscopy has considerable computational cost since millions of individual stochastic emissions must be located with nanometer precision. The computational cost of this evaluation, and the organizational cost of implementing the complex algorithms, has impeded adoption of super-resolution microscopy for a long time.
In this work, I describe my algorithmic framework for evaluating localization microscopy data.
I demonstrate how my novel open-source software achieves real-time data evaluation, i.e., can evaluate data faster than the common experimental setups can capture them.
I show how this speed is attained on standard consumer-grade CPUs, removing the need for computing on expensive clusters or deploying graphics processing units.
The evaluation is performed with the widely accepted Gaussian PSF model and a Poissonian maximum-likelihood noise model.
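A compact sketch of this kind of fit, simulating one emitter and recovering its position by minimizing the Poisson negative log-likelihood of a pixelated Gaussian spot (region size, PSF width, photon numbers and the general-purpose optimizer are illustrative choices, not the optimized routines of the actual software):

    import numpy as np
    from scipy.optimize import minimize

    SIZE, SIGMA = 11, 1.3                       # ROI edge length and PSF width in pixels
    yy, xx = np.mgrid[0:SIZE, 0:SIZE]

    def expected_image(params):
        x0, y0, photons, bg = params
        g = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * SIGMA ** 2))
        return photons * g / (2 * np.pi * SIGMA ** 2) + bg

    def neg_log_likelihood(params, image):
        mu = np.clip(expected_image(params), 1e-9, None)
        return np.sum(mu - image * np.log(mu))  # Poisson NLL up to a parameter-free constant

    rng = np.random.default_rng(2)
    truth = (5.3, 4.7, 2000.0, 10.0)            # x, y, photon count, background per pixel
    image = rng.poisson(expected_image(truth)).astype(float)
    start = (SIZE / 2, SIZE / 2, image.sum(), float(image.min()))
    fit = minimize(neg_log_likelihood, start, args=(image,), method="Nelder-Mead")
    print("estimated position (pixels):", fit.x[:2])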
I extend the computational model to show how robust, optimal two-color evaluation is realized, allowing correlative microscopy between multiple proteins or structures. By employing cubic B-splines, I show how the evaluation of three-dimensional samples can be made simple and robust, taking an important step towards precise imaging of micrometer-thick samples.
I uncover the behavior and limits of localization algorithms in the face of increasing emission densities.
Finally, I present algorithms that extend localization microscopy to common biological problems.
I investigate cellular movement and motility by considering the in vitro movement of myosin-actin filaments. I show how SNAP-tag fusion proteins enable imaging with bright and stable organic fluorophores in live cells. By analyzing the internal structure of protein clusters, I show how localization microscopy can provide new quantitative approaches beyond pure imaging.
Dynamic interactions and their changes are at the forefront of current research in bioinformatics and systems biology. This thesis focuses on two particular dynamic aspects of cellular adaptation: miRNAs and metabolites.
miRNAs have an established role in hematopoiesis and megakaryocytopoiesis, and platelet miRNAs have potential as tools for understanding basic mechanisms of platelet function. The thesis highlights the possible role of miRNAs in regulating protein translation during the platelet lifespan, with relevance to platelet apoptosis, and identifies involved pathways and potential key regulatory molecules. Furthermore, corresponding miRNA/target mRNAs in murine platelets are identified. Moreover, key miRNAs involved in aortic aneurysm are predicted by similar techniques. The clinical relevance of miRNAs as biomarkers, as targets and resulting later translational therapeutics, and as tissue-specific restrictors of gene expression in cardiovascular diseases is also discussed.
In the second part of the thesis we highlight the importance of scientific software development in metabolic modelling and how it can support bioinformatics tool development, together with software feature analysis as performed on metabolic flux analysis applications. We proposed the “Butterfly” approach for efficient scientific software development. Using this approach, software applications were developed for quantitative metabolic flux analysis and efficient Mass Isotopomer Distribution Analysis (MIDA) in metabolic modelling, as well as for data management. “LS-MIDA” allows easy and efficient MIDA analysis and, with a more powerful algorithm and database, the software “Isotopo” allows efficient analysis of metabolic fluxes, for instance in pathogenic bacteria (Salmonella, Listeria). All three approaches have been published (see Appendices).
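As a small illustration of the bookkeeping behind mass isotopomer distribution analysis, the expected distribution of an n-carbon fragment under an assumed fractional 13C enrichment p follows a binomial law (a textbook simplification; the actual LS-MIDA and Isotopo analyses additionally handle natural isotope abundance and measured spectra):

    from math import comb

    def mass_isotopomer_distribution(n_carbons, p):
        # probability of the M+0 ... M+n mass isotopomers of an n-carbon fragment
        # when each carbon is 13C with independent probability p (binomial model)
        return [comb(n_carbons, k) * p ** k * (1 - p) ** (n_carbons - k)
                for k in range(n_carbons + 1)]

    # e.g. a 3-carbon fragment at 20 % enrichment
    print([round(x, 4) for x in mass_isotopomer_distribution(3, 0.20)])
    # -> [0.512, 0.384, 0.096, 0.008]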
Background
Phytoplankton communities are often used as a marker for the determination of fresh water quality. The routine analysis, however, is very time consuming and expensive as it is carried out manually by trained personnel. The goal of this work is to develop a system for an automated analysis.
Results
A novel open source system for the automated recognition of phytoplankton by the use of microscopy and image analysis was developed. It integrates the segmentation of the organisms from the background, the calculation of a large range of features, and a neural network for the classification of imaged organisms into different groups of plankton taxa. The analysis of samples containing 10 different taxa showed an average recognition rate of 94.7% and an average error rate of 5.5%. The presented system has a flexible framework which easily allows expanding it to include additional taxa in the future.
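The classification step can be pictured with the following minimal sketch, which trains a small feed-forward neural network on per-object feature vectors; the synthetic features, network size and taxa are stand-ins and do not reflect PlanktoVision's actual feature set or configuration:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # synthetic stand-in for per-object features (e.g. size, shape and fluorescence descriptors)
    rng = np.random.default_rng(3)
    n_taxa, per_taxon, n_features = 10, 60, 12
    X = np.vstack([rng.normal(loc=t, scale=1.5, size=(per_taxon, n_features))
                   for t in range(n_taxa)])
    y = np.repeat(np.arange(n_taxa), per_taxon)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", round(clf.score(X_test, y_test), 3))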
Conclusions
The implemented automated microscopy and the new open source image analysis system - PlanktoVision - showed classification results that were comparable or better than existing systems and the exclusion of non-plankton particles could be greatly improved. The software package is published as free software and is available to anyone to help make the analysis of water quality more reproducible and cost effective.
An essential topic for synthetic biologists is to understand the structure and function of biological processes and the proteins involved, and to plan experiments accordingly. Remarkable progress has been made towards this goal in recent years. However, efforts to collect and present all information on processes and functions are still cumbersome. The database tool GoSynthetic provides a new, simple and fast way to analyse biological processes using a hierarchical database. Four different search modes are implemented. Furthermore, protein interaction data, cross-links to organism-specific databases (17 organisms including six model organisms and their interactions), COG/KOG, GO and IntAct are warehoused. The built-in connection to technical and engineering terms enables simple switching between biological concepts and concepts from engineering, electronics and synthetic biology. The current version of GoSynthetic covers more than one million processes, proteins, COGs and GOs. It is illustrated by various application examples probing process differences and designing modifications.
Bioinformatics is an interdisciplinary science that addresses problems from all life sciences with the help of computational methods. Its aim is to enable the processing and interpretation of large amounts of data. It also supports the design of experiments in synthetic biology. Synthetic biology deals with the generation of new components and properties that arise from the treatment and manipulation of living organisms or parts thereof. A particularly interesting topic in this context are two-component systems (TCS). TCS are important signaling cascades in bacteria that are able to transmit information from the environment into a cell and to react to it. This dissertation deals with the assessment, use and further development of bioinformatic methods for the investigation of protein interactions and biological systems. The scientific contribution of this work can be divided into three aspects: (i) the investigation and assessment of bioinformatic methods and the continuation of the results of the preceding diploma thesis on protein-protein interaction prediction; (ii) the analysis of general evolutionary modification possibilities of TCS as well as of their design and specific differences; and (iii) the abstraction and transfer of the insights gained to technical and biological contexts, with the aim of simplifying the design of new experiments in synthetic biology and enabling comparability between technical and biological processes as well as between organisms. The results of the study showed that two-component systems are highly conserved in their architecture. Nevertheless, many specific properties and three general modification possibilities could be discovered. The investigations enabled the identification of new promoter sites and also allowed the characterization of different signal binding sites. In addition, previously missing components of TCS could be discovered, as well as new diverged TCS domains in the organism Mycoplasma. A combination of engineering approaches and synthetic biology simplified the targeted manipulation of TCS and other modular systems. The establishment of the presented two-level module classification enabled a more efficient analysis of modularly structured processes and thus allowed the molecular design of synthetic biological applications. For easy use of this approach, the freely accessible software GoSynthetic was developed. Concrete examples demonstrated the practical applicability of this analysis software. The presented classification of synthetic-biological and technical units is intended to simplify the planning of future design experiments and to open up new avenues for related areas. It is not the main task of bioinformatics to replace experiments, but to evaluate the resulting large amounts of data in a meaningful and efficient way, in order to gain new ideas for further analyses and alternative applications and to recognize flawed or incorrect approaches at an early stage. Bioinformatics offers modern technical procedures to complement familiar but often laborious experimental routes with new, promising approaches for structuring and evaluating large amounts of data.
New perspectives are fostered by making the testing procedure easier. The resulting time savings also lead to a reduction in costs.
The emergence of new strains of resistant pathogens makes the search for novel active compounds against this constantly spreading threat urgently necessary. The interdisciplinary Collaborative Research Centre 630 (SFB 630) of the University of Würzburg addresses this task by synthesizing novel xenobiotics and testing their efficacy. The present dissertation fits seamlessly into the different disciplines of the SFB 630: it forms an interface between the synthesis and the analysis of the effects of the isoquinoline alkaloid derivatives synthesized within the SFB 630. With the bioinformatic methods applied here, the most important metabolic pathways of S. epidermidis R62A, S. aureus USA300 and human cells were first reconstructed in so-called metabolic network models. Based on these models, enzyme activities could be calculated for different scenarios of added xenobiotics. The data required for this were obtained directly from gene expression analyses. This method was validated by metabolome measurements. For this purpose, S. aureus USA300 was treated with different concentrations of IQ-143 and processed according to the harvesting protocol presented in this dissertation. The results indicate that IQ-143 exerts strong effects on complex I of the respiratory chain; these results agree with those of the metabolic network analysis. For the compound IQ-238, despite its structural similarity to IQ-143, clearly different effects emerged: this compound causes a direct drop in the enzyme activities of glycolysis. An unspecific toxicity of these compounds based on their chemical structure could thereby be excluded. Furthermore, the methods already applied to bacteria for IQ-143 and IQ-238 could successfully be used to model the effects of methylene blue on different resistant strains of P. falciparum 3D7. It could be shown that methylene blue, in combination with other drugs against this parasite, on the one hand enhances the effect of the primary drug and, on the other hand, is also able to reduce existing resistances against the primary drug to a certain extent. The present work thus established a pipeline for identifying the metabolic effects of different compounds on different pathogens. This pipeline can be extended to other organisms at any time and therefore represents an important approach for elucidating the network effects of various potential drugs.
The apoptosis of liver cells depends on external signals such as components of the extracellular matrix and other cell-cell contacts, which are processed by a variety and multitude of nodes. Several of them were investigated in this work with respect to their effects on the system. Despite various external influences and natural selection, the system is optimized to adopt a small number of distinct and clearly distinguishable system states. The diverse influences and crosstalk mechanisms serve to optimize the existing system states. The model presented in this work shows two apoptotic and two non-apoptotic stable system states, whereby the degree of activation of a node can vary strongly up to the moment at which the overall system state itself changes (Philippi et al., BMC Systems Biology, 2009) [1]. Although this model is a simplification of the entire cellular network and its different states, it is able to operate independently of detailed kinetic data and parameters of the individual nodes. Nevertheless, the model allows apoptosis following stimulation with FasL to be modeled with good qualitative agreement. Furthermore, the model covers crosstalk possibilities of the collagen-integrin signaling pathway, and it also takes into account the effects of the genetic deletion of Bid as well as the consequences of a viral infection. In a second part, other possible applications are presented: hormonal signals in plants, virus infections and intracellular communication are modeled semi-quantitatively. Here, too, the models showed good agreement with the experimental data.
In recent years high-throughput experiments have provided a vast amount of data from all areas of molecular biology, including genomics, transcriptomics, proteomics and metabolomics. Its analysis using bioinformatics methods has developed accordingly, towards a systematic approach to understanding how genes and their resulting proteins give rise to biological form and function. They interact with each other and with other molecules in highly complex structures, which are explored in network biology. The in-depth knowledge of genes and proteins obtained from high-throughput experiments can be complemented by the architecture of molecular networks to gain a deeper understanding of biological processes.
This thesis provides methods and statistical analyses for the integration of molecular data into biological networks and the identification of functional modules, as well as their application to distinct biological data. The integrated network approach is implemented as a software package, termed BioNet, for the statistical language R. The package includes the statistics for the integration of transcriptomic and functional data with biological networks, the scoring of nodes and edges of these networks, as well as methods for subnetwork search and visualisation. The exact algorithm is extensively tested in a simulation study and outperforms existing heuristic methods for this NP-hard problem in accuracy and robustness. The variability of the resulting solutions is assessed on perturbed data, mimicking random or biased factors that obscure the biological signal, generated for the integrated data and the network. An optimal, robust module can be calculated using a consensus approach based on a resampling method. It optimally summarizes an ensemble of solutions in a robust consensus module, with the estimated variability indicated by confidence values for the nodes and edges.
The approach is subsequently applied to two gene expression data sets. The first application analyses gene expression data for acute lymphoblastic leukaemia (ALL) and differences between the subgroups with and without an oncogenic BCR/ABL gene fusion. In a second application gene expression and survival data from diffuse large B-cell lymphomas are examined. The identified modules include and extend already existing gene lists and signatures by further significant genes and their interactions. The most important novelty is that these genes are determined and visualised in the context of their interactions, as a functional module and not as a list of independent and unrelated transcripts. In a third application the integrative network approach is used to trace changes in tardigrade metabolism and to identify pathways responsible for their extreme resistance to environmental changes and endurance in an inactive tun state. For the first time a metabolic network approach is proposed to detect shifts in metabolic pathways, integrating transcriptome and metabolite data.
Concluding, the presented integrated network approach is an adequate technique to unite high-throughput experimental data on single molecules and their intermolecular dependencies. It is flexible enough to be applied to diverse data, ranging from gene expression changes over metabolite abundances to protein modifications, in combination with a suitable molecular network. The exact algorithm is accurate and robust in comparison to heuristic approaches and delivers an optimal, robust solution in the form of a consensus module with confidence values.
By the integration of diverse sources of information and a simultaneous inspection of a molecular event from different points of view, new and exhaustive insights into biological processes can be acquired.
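To give a flavour of the integrated-network idea, the sketch below scores each gene from a p-value and then greedily grows a connected module around the best-scoring node; this greedy heuristic and the made-up toy network stand in for, and are much weaker than, BioNet's exact maximum-weight connected subgraph algorithm:

    import math
    import networkx as nx

    TAU = 0.05   # significance level turning p-values into positive/negative node scores

    def node_score(p):
        # genes more significant than TAU get positive scores, others negative
        return math.log10(TAU) - math.log10(p)

    # made-up interaction network and per-gene p-values
    g = nx.Graph([("A", "B"), ("B", "C"), ("B", "D"), ("B", "E"), ("E", "F")])
    pvals = {"A": 0.001, "B": 0.02, "C": 0.3, "D": 0.0005, "E": 0.6, "F": 0.04}
    scores = {n: node_score(p) for n, p in pvals.items()}

    module = {max(scores, key=scores.get)}          # start from the best-scoring node
    while True:
        neighbours = {m for n in module for m in g.neighbors(n)} - module
        best = max(neighbours, key=lambda n: scores[n], default=None)
        if best is None or scores[best] <= 0:       # only add neighbours with positive score
            break
        module.add(best)
    print(sorted(module), "total score:", round(sum(scores[n] for n in module), 2))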
The phylum Tardigrada consists of about 1000 described species to date. The animals live in marine, freshwater and terrestrial ecosystems all over the world. Tardigrades are polyextremophiles: they are capable of resisting extreme temperature, pressure or radiation. In the event of desiccation, tardigrades enter a so-called tun stage. The reason for their great tolerance of extreme environmental conditions has not been discovered yet. Our Funcrypta project aims at finding answers to the question of what mechanisms underlie these adaptation capabilities, particularly with regard to the species Milnesium tardigradum. The first part of this thesis describes the establishment of expressed sequence tag (EST) libraries for different stages of M. tardigradum. From proteomics data we bioinformatically identified 144 proteins with a known function and additionally 36 proteins which seemed to be specific for M. tardigradum. The generation of a comprehensive web-based database allows us to merge the proteome and transcriptome data. Therefore we created an annotation pipeline for the functional annotation of the protein and nucleotide sequences. Additionally, we clustered the obtained proteome dataset and identified some tardigrade-specific proteins (TSPs) which did not show homology to known proteins. Moreover, we examined the heat shock proteins of M. tardigradum and their different expression levels depending on the state of the animals. In further bioinformatic analyses of the whole data set, we discovered promising proteins and pathways which are described to be correlated with stress tolerance, e.g. late embryogenesis abundant (LEA) proteins. Besides, we compared the tardigrades with nematodes, rotifers, yeast and man to identify shared and tardigrade-specific stress pathways. An analysis of the 5' and 3' untranslated regions (UTRs) demonstrates a strong usage of stabilising motifs like the 15-lipoxygenase differentiation control element (15-LOX-DICE) but also reveals a lack of other commonly used UTR motifs, e.g. AU-rich elements. The second part of this thesis focuses on the relatedness between several cryptic species within the tardigrade genus Paramacrobiotus. For this purpose, we used for the first time the sequence-structure information of the internal transcribed spacer 2 (ITS2) as a phylogenetic marker in tardigrades. This allowed the description of three new species which were indistinguishable using morphological characters or common molecular markers like the 18S ribosomal ribonucleic acid (rRNA) or the cytochrome c oxidase subunit I (COI). In a large in silico simulation study we also succeeded in showing the benefit of adding structure information to the ITS2 sequence for phylogenetic tree reconstruction. Beyond the genus Paramacrobiotus we used the ITS2 to corroborate a monophyletic DO-group (Sphaeropleales) within the Chlorophyceae. Additionally we redesigned another comprehensive database, the ITS2 database, resulting in a doubled number of sequence-structure pairs of the ITS2. In conclusion, this thesis provides first insights (6 first-author publications and 4 co-author publications) into the reasons for the enormous adaptation capabilities of tardigrades and offers a solution to the debate on the phylogenetic relatedness within the tardigrade genus Paramacrobiotus.
Applying microarray‐based techniques to study gene expression patterns: a bio‐computational approach
(2010)
The regulation and maintenance of iron homeostasis is critical to human health. As a constituent of hemoglobin, iron is essential for oxygen transport, and significant iron deficiency leads to anemia. Eukaryotic cells require iron for survival and proliferation. Iron is part of hemoproteins, iron-sulfur (Fe-S) proteins, and other proteins with functional groups that require iron as a cofactor. At the cellular level, iron uptake, utilization, storage, and export are regulated at different molecular levels (transcriptional, mRNA stability, translational, and posttranslational). Iron regulatory proteins (IRPs) 1 and 2 post-transcriptionally control mammalian iron homeostasis by binding to iron-responsive elements (IREs), conserved RNA stem-loop structures located in the 5'- or 3'-untranslated regions of genes involved in iron metabolism (e.g. FTH1, FTL, and TFRC). To identify novel IRE-containing mRNAs, we integrated biochemical, biocomputational, and microarray-based experimental approaches. Gene expression studies greatly contribute to our understanding of complex relationships in gene regulatory networks. However, the complexity of array design, production and handling is a limiting factor affecting data quality. The use of customized DNA microarrays improves overall data quality in many situations, however, only if analysis tools are available for these specifically designed microarrays.
Methods
In this project, the response to iron treatment was examined under different conditions using bioinformatic methods, in order to improve our understanding of the iron regulatory network. For these purposes we used microarray gene expression data. To identify novel IRE-containing mRNAs, biochemical, biocomputational, and microarray-based experimental approaches were integrated. IRP/IRE messenger ribonucleoproteins were immunoselected and their mRNA composition was analysed using an IronChip microarray enriched for genes predicted computationally to contain IRE-like motifs. Analysis of IronChip microarray data requires a specialized tool which can use all advantages of a customized microarray platform. A novel decision-tree-based algorithm was implemented in Perl in the IronChip Evaluation Package (ICEP).
Results
IRE-like motifs were identified from genomic nucleic acid databases by an algorithm combining primary nucleic acid sequence and RNA structural criteria. Depending on the choice of constraining criteria, such computational screens tend to generate a large number of false positives. To refine the search and reduce the number of false positive hits, additional constraints were introduced. The refined screen yielded 15 IRE-like motifs. A second approach made use of a reported list of 230 IRE-like sequences obtained from screening UTR databases. We selected 6 out of these 230 entries based on the ability of the lower IRE stem to form at least 6 out of 7 bp. Corresponding ESTs were spotted onto the human or mouse versions of the IronChip and the results were analysed using ICEP. Our data show that the immunoselection/microarray strategy is a feasible approach for screening bioinformatically predicted IRE genes and for the detection of novel IRE-containing mRNAs. In addition, we identified a novel IRE-containing gene, CDC14A (Sanchez M, et al. 2006). The IronChip Evaluation Package (ICEP) is a collection of Perl utilities and an easy-to-use data evaluation pipeline for the analysis of microarray data with a focus on data quality of custom-designed microarrays.
The package has been developed for the statistical and bioinformatic analysis of the custom cDNA microarray IronChip, but can easily be adapted to other cDNA or oligonucleotide-based microarray platforms. ICEP uses decision-tree-based algorithms to assign quality flags and performs robust analysis based on chip design properties regarding multiple repetitions, ratio cut-off, background and negative controls (Vainshtein Y, et al., 2010).
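As a deliberately crude illustration of the sequence part of such screens, the snippet below scans an RNA sequence for the canonical IRE signature of a bulged C followed, five nucleotides later, by the CAGUGH apical loop; it ignores all the base-pairing and structural criteria that the real screens additionally impose, and the example sequence is invented rather than a real UTR:

    import re

    # bulged C, five arbitrary upper-stem bases, then the CAGUGH apical loop (H = A, C or U)
    IRE_LOOP = re.compile(r"C.{5}CAGUG[ACU]")

    def find_ire_like(seq):
        # return (position, match) pairs for IRE-like motifs in an RNA sequence (5'->3')
        seq = seq.upper().replace("T", "U")
        return [(m.start(), m.group()) for m in IRE_LOOP.finditer(seq)]

    example = "GGGACUUCAACAGUGCUUGGACGGAAC"   # invented test sequence containing one hit
    print(find_ire_like(example))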
The human gut is home for thousands of microbes that are important for human life. As most of these cannot be cultivated, metagenomics is an important means to understand this important community. To perform comparative metagenomic analysis of the human gut microbiome, I have developed SMASH (Simple metagenomic analysis shell), a computational pipeline. SMASH can also be used to assemble and analyze single genomes, and has been successfully applied to the bacterium Mycoplasma pneumoniae and the fungus Chaetomium thermophilum. In the context of the MetaHIT (Metagenomics of the human intestinal tract) consortium our group is participating in, I used SMASH to validate the assembly and to estimate the assembly error rate of 576.7 Gb metagenome sequence obtained using Illumina Solexa technology from fecal DNA of 124 European individuals. I also estimated the completeness of the gene catalogue containing 3.3 million open reading frames obtained from these metagenomes. Finally, I used SMASH to analyze human gut metagenomes of 39 individuals from 6 countries encompassing a wide range of host properties such as age, body mass index and disease states. We find that the variation in the gut microbiome is not continuous but stratified into enterotypes. Enterotypes are complex host-microbial symbiotic states that are not explained by host properties, nutritional habits or possible technical biases. The concept of enterotypes might have far reaching implications, for example, to explain different responses to diet or drug intake. We also find several functional markers in the human gut microbiome that correlate with a number of host properties such as body mass index, highlighting the need for functional analysis and raising hopes for the application of microbial markers as diagnostic or even prognostic tools for microbiota-associated human disorders.
Genome sequence analysis
A combination of genome analysis applications has been established during this project. This offers an efficient platform to interactively compare similar genome regions and reveal differences between loci. Genes and operons can be rapidly analyzed and local collinear blocks (LCBs) categorized according to their function. The features of interest are parsed, recognized, and clustered into reports. Phylogenetic relationships can be readily examined, such as the evolution of critical factors or of a certain highly conserved region. The resulting platform-independent software packages (GENOVA and inGeno) have proven to be efficient and easy to handle in a number of projects. The capabilities of the software allowed the investigation of virulence factors, e.g. rsbU, strains' biological design, and in particular the storage and management of pathogenicity features. We have successfully investigated the genomes of Staphylococcus aureus strains (COL, N315, 8325, RN1HG, Newman), Listeria spp. (welshimeri, innocua and monocytogenes), E. coli strains (O157:H7 and MG1655) and Vaccinia strains (WR, Copenhagen, Lister, LIVP, GLV-1h68 and parental strains).
Metabolic network analysis
Our YANAsquare package offers a workbench to rapidly establish genome-scale metabolic networks, such as that of the bacterium Staphylococcus aureus, as well as metabolic networks of interest such as the murine phagosome lipid signalling network. YANAsquare recruits reactions from online databases using an integrated KEGG browser, which reduces the effort of building large metabolic networks. The involved calculation routines (a METATOOL-derived wrapper or a native Java implementation) readily obtain all possible flux modes (EM/EP) for metabolite fluxes within the network. Advanced layout algorithms visualize the topological structure of the network. In addition, the generated structure can be dynamically modified in the graphical interface. The generated network as well as the manipulated layout can be validated and stored (XML file: scheme of SBML level 2). This format can be further parsed and analyzed by other systems biology software, such as CellDesigner. Moreover, the integrated robustness-evaluation routine is able to examine the synthesis rates affected by each single mutation throughout the whole network. We have successfully applied the method to simulate single and multiple gene knockouts, and the affected fluxes are comprehensively revealed. Recently we applied the method to proteomic data and extracellular metabolite data of staphylococci, and the physiological changes in the flux distribution were studied. Calculations at different time points, including different conditions such as hypoxia or stress, show a good fit to experimental data. Moreover, using the proteomic data (enzyme amounts) calculated from 2D-Gel-EP experiments, our study provides a way to compare the fluxome and the enzyme expression.
Oncolytic vaccinia virus (VACV)
We investigated the genetic differences between the de novo sequence of the recombinant oncolytic virus GLV-1h68 and other related VACVs, including function predictions for all genome differences found. Our phylogenetic analysis indicates that GLV-1h68 is closest to Lister strains but has lost several ORFs present in its parental LIVP strain, including genes encoding CrmE and a viral Golgi anti-apoptotic protein, v-GAAP. Comparing viral genes in the Lister, WR and COP strains, the functions of viral genes were either strain-specific, tissue-specific or host-specific.
This helps to rationally design more optimized oncolytic virus strains to benefit cancer therapy in human patients. Identified differences from the comparison in open reading frames (ORFs) include genes for host-range selection, virulence and immune modulation proteins, e.g. ankyrin-like proteins, serine proteinase inhibitor SPI-2/CrmA, tumor necrosis factor (TNF) receptor homolog CrmC, semaphorin-like and interleukin-1 receptor homolog proteins. The contribution of foreign gene expression cassettes in the therapeutic and oncolytic virus GLV-1h68 was studied, including the F14.5L, J2R and A56R loci. The contribution of F14.5L inactivation to the reduced virulence is demonstrated by comparing the virulence data of GLV-1h68 with its F14.5L-null and revertant viruses. The comparison suggests that insertion of a foreign gene expression cassette in a nonessential locus in the viral genome is a practical way to attenuate VACVs, especially if the nonessential locus itself contains a virulence gene. This reduces the virulence of the virus without compromising too much the replication competency of the virus, the key to its oncolytic activity. The reduced pathogenicity of GLV-1h68 was confirmed by our experimental collaboration partners in male mice bearing C6 rat glioma and in immunocompetent mice bearing B16-F10 murine melanoma. In conclusion, bioinformatics and experimental data show that GLV-1h68 is a promising engineered VACV variant for anticancer therapy with tumor-specific replication, reduced pathogenicity and benign tissue tropism.
The topic of my doctoral research was the computational analysis of metagenomic data. A metagenome comprises the genomic information of all the microorganisms within a certain environment. The currently available metagenomic data sets cover only parts of these usually huge metagenomes due to the high technical and financial effort of such sequencing endeavors. During my thesis I developed bioinformatic tools and applied them to analyse genomic features of different metagenomic data sets and to search these sequence collections for enzymes of importance for biotechnological or pharmaceutical applications. In these studies nine metagenomic projects (with up to 41 subsamples) were analysed. These samples originated from diverse environments such as farm soil, acid mine drainage, microbial mats on whale bones, marine water, fresh water, water treatment sludges and the human gut flora. Additionally, data sets of conventionally retrieved sequence data were taken into account and compared with each other.