Dynamic interactions and their changes are at the forefront of current research in bioinformatics and systems biology. This thesis focuses on two particular dynamic aspects of cellular adaptation: miRNAs and metabolites.
miRNAs have an established role in hematopoiesis and megakaryocytopoiesis, and platelet miRNAs have potential as tools for understanding basic mechanisms of platelet function. The thesis highlights the possible role of miRNAs in regulating protein translation during the platelet lifespan, with relevance to platelet apoptosis, and identifies the pathways involved and potential key regulatory molecules. Furthermore, corresponding miRNA/target mRNA pairs in murine platelets are identified, and key miRNAs involved in aortic aneurysm are predicted by similar techniques. The clinical relevance of miRNAs as biomarkers, as targets for later translational therapeutics, and as tissue-specific restrictors of gene expression in cardiovascular diseases is also discussed.
In the second part of the thesis, we highlight the importance of scientific software development for metabolic modelling and show how it can support bioinformatics tool development, alongside software feature analysis such as that performed on metabolic flux analysis applications. We proposed the “Butterfly” approach for implementing scientific software efficiently. Using this approach, software applications were developed for quantitative metabolic flux analysis and efficient Mass Isotopomer Distribution Analysis (MIDA) in metabolic modelling, as well as for data management. “LS-MIDA” allows easy and efficient MIDA, and, with a more powerful algorithm and database, the software “Isotopo” allows efficient analysis of metabolic fluxes, for instance in pathogenic bacteria (Salmonella, Listeria). All three approaches have been published (see Appendices).
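The core computation behind MIDA can be sketched in a few lines. This is an illustrative example only, not the implementation of LS-MIDA or Isotopo; the peak intensities are invented, and the enrichment measure shown is one common convention (the label-weighted mean of the isotopologue fractions):

```python
# Illustrative sketch of Mass Isotopomer Distribution Analysis (MIDA):
# normalize raw mass-spectrometry intensities of an isotopologue series
# (M+0, M+1, ..., M+n) into fractional abundances.

def mass_isotopomer_distribution(intensities):
    """Convert raw peak intensities into fractional abundances."""
    total = sum(intensities)
    if total == 0:
        raise ValueError("no signal in isotopologue series")
    return [i / total for i in intensities]

def mean_label_enrichment(fractions):
    """Average 13C enrichment: label-weighted mean, scaled to [0, 1]."""
    n = len(fractions) - 1          # maximum number of labeled positions
    return sum(k * f for k, f in enumerate(fractions)) / n

raw = [500.0, 300.0, 150.0, 50.0]   # hypothetical intensities for M+0..M+3
mid = mass_isotopomer_distribution(raw)
print(mid)                           # [0.5, 0.3, 0.15, 0.05]
print(mean_label_enrichment(mid))    # 0.25
```

A real analysis would additionally correct for the natural abundance of heavy isotopes before interpreting the fractions as metabolic labeling.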
Development and application of computational tools for RNA-Seq based transcriptome annotations
(2019)
In order to understand the regulation of gene expression in organisms, precise genome annotation is essential. In recent years, RNA-Seq has become a potent method for generating and improving genome annotations. However, this approach is time-consuming and often inconsistently performed when done manually. In particular, the discovery of non-coding RNAs benefits strongly from the application of RNA-Seq data but requires significant amounts of expert knowledge and is labor-intensive. As part of my doctoral study, I developed a modular tool called ANNOgesic that can detect numerous transcribed genomic features, including non-coding RNAs, based on RNA-Seq data in a precise and automatic fashion, with a focus on bacterial and archaeal species. The software performs numerous analyses and generates several visualizations. It can generate high-resolution annotations that are hard to produce using traditional annotation tools based only on genome sequences. ANNOgesic can detect numerous novel genomic features, such as UTR-derived small non-coding RNAs, for which no tool existed before. ANNOgesic is available under an open source license (ISCL) at https://github.com/Sung-Huan/ANNOgesic.
My doctoral work not only includes the development of ANNOgesic but also its application to annotate the transcriptome of Staphylococcus aureus HG003 - a strain which has been an insightful model in infection biology. Despite its potential as a model, a complete genome sequence and annotations have been lacking for HG003. In order to fill this gap, the annotations of this strain, including sRNAs and their functions, were generated using ANNOgesic by analyzing differential RNA-Seq data from 14 different samples (two media conditions with seven time points), as well as RNA-Seq data generated after transcript fragmentation. ANNOgesic was
also applied to annotate several bacterial and archaeal genomes, and as part of this its high performance was demonstrated. In summary, ANNOgesic is a powerful computational tool for RNA-Seq based annotations and has been successfully applied to several species.
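One signal exploited in differential RNA-Seq annotation is that transcription start sites appear as sharp jumps in the 5'-enriched coverage. The toy function below illustrates that idea only; it is not ANNOgesic's actual algorithm, and the threshold parameters are invented:

```python
# Toy illustration (not ANNOgesic's actual algorithm): flag positions
# where read coverage rises by at least `factor`-fold over the preceding
# position, a pattern characteristic of transcription start sites in
# differential RNA-Seq data.

def candidate_tss(coverage, factor=3.0, min_cov=10):
    """Return 0-based positions with an abrupt coverage increase."""
    hits = []
    for pos in range(1, len(coverage)):
        prev, curr = coverage[pos - 1], coverage[pos]
        if curr >= min_cov and curr >= factor * max(prev, 1):
            hits.append(pos)
    return hits

cov = [0, 0, 1, 2, 40, 45, 44, 43, 5, 0]
print(candidate_tss(cov))   # [4]
```

A production tool compares enriched and unenriched libraries and models noise statistically; this sketch only shows the shape of the signal being searched for.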
Localization microscopy is a class of super-resolution fluorescence microscopy techniques characterized by stochastic temporal isolation of fluorophore emission, i.e., making the fluorophores blink so rapidly that no two nearby fluorophores are likely to be photoactive at the same time. Well-known localization microscopy methods include dSTORM, STORM, PALM, FPALM, and GSDIM. The biological community has taken great interest in localization microscopy, since it can enhance the resolution of common fluorescence microscopy by an order of magnitude at little experimental cost.
However, localization microscopy has considerable computational cost since millions of individual stochastic emissions must be located with nanometer precision. The computational cost of this evaluation, and the organizational cost of implementing the complex algorithms, has impeded adoption of super-resolution microscopy for a long time.
In this work, I describe my algorithmic framework for evaluating localization microscopy data.
I demonstrate how my novel open-source software achieves real-time data evaluation, i.e., can evaluate data faster than the common experimental setups can capture them.
I show how this speed is attained on standard consumer-grade CPUs, removing the need for computing on expensive clusters or deploying graphics processing units.
The evaluation is performed with the widely accepted Gaussian PSF model and a Poissonian maximum-likelihood noise model.
I extend the computational model to show how robust, optimal two-color evaluation is realized, allowing correlative microscopy between multiple proteins or structures. By employing cubic B-splines, I show how the evaluation of three-dimensional samples can be made simple and robust, taking an important step towards precise imaging of micrometer-thick samples.
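The estimation principle named above (a Gaussian PSF model with a Poissonian maximum-likelihood noise model) can be sketched compactly. This is a deliberately simplified illustration: it uses a brute-force grid search rather than the fast iterative optimizers real localization software employs, and the amplitude, width, and background values are invented:

```python
import numpy as np

# Minimal sketch: model the pixelated image of a single emitter as a 2D
# Gaussian PSF plus background, and pick the subpixel center minimizing
# the Poisson negative log-likelihood.

def gaussian_psf(shape, x0, y0, amplitude=200.0, sigma=1.3, background=2.0):
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    return background + amplitude * np.exp(
        -((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

def poisson_nll(model, data):
    # Negative log-likelihood, up to a data-dependent constant.
    return float(np.sum(model - data * np.log(model)))

def localize(data, step=0.05):
    """Grid-search the emitter center to `step`-pixel precision."""
    h, w = data.shape
    best, best_xy = np.inf, (0.0, 0.0)
    for x0 in np.arange(0, w, step):
        for y0 in np.arange(0, h, step):
            nll = poisson_nll(gaussian_psf(data.shape, x0, y0), data)
            if nll < best:
                best, best_xy = nll, (x0, y0)
    return best_xy

rng = np.random.default_rng(0)
truth = (3.4, 2.7)
image = rng.poisson(gaussian_psf((7, 7), *truth)).astype(float)
x_hat, y_hat = localize(image)
print(x_hat, y_hat)   # close to (3.4, 2.7)
```

With a few thousand photons per emission, such a fit recovers the center to a small fraction of a pixel, which is what makes nanometer-scale precision attainable from diffraction-limited images.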
I uncover the behavior and limits of localization algorithms in the face of increasing emission densities.
Finally, I present algorithms that extend localization microscopy to common biological problems.
I investigate cellular movement and motility by considering the in vitro movement of myosin-actin filaments. I show how SNAP-tag fusion proteins enable imaging with bright and stable organic fluorophores in live cells. By analyzing the internal structure of protein clusters, I show how localization microscopy can provide new quantitative approaches beyond pure imaging.
The field of genetics faces many challenges and opportunities in both research and diagnostics due to the rise of next-generation sequencing (NGS), a technology that allows DNA to be sequenced ever faster and more cheaply.
NGS is used to analyze not only DNA but also RNA, a closely related molecule also present in the cell; both applications produce large amounts of data.
This volume of data raises both infrastructure and usability problems: powerful computing infrastructure is required, and the analysis involves many manual steps that are complicated to execute.
Both of these problems limit the use of NGS in the clinic and in research by creating both a computational and a manpower bottleneck, as for many analyses geneticists lack the required computing skills.
Over the course of this thesis, we investigated how computer science can help improve this situation and reduce the complexity of this type of analysis.
We looked at how to make the analysis more accessible in order to increase the number of people who can perform OMICS data analysis (OMICS being an umbrella term grouping various genomics data sources).
To approach this problem, we developed, in close collaboration with the Human Genetics Department at the University of Würzburg, a graphical NGS data analysis pipeline aimed at a diagnostics environment while remaining useful in research.
The pipeline has been used in research papers covering various subjects, including works with direct author participation in genomics, transcriptomics, and epigenomics.
To further validate the graphical pipeline, a user survey was carried out which confirmed that it lowers the complexity of OMICS data analysis.
We also studied how the data analysis can be improved in terms of computing infrastructure by improving the performance of certain analysis steps.
We did this both through speed improvements on a single computer (notably, variant calling became up to 18 times faster) and through distributed computing to make better use of existing infrastructure.
The improvements were integrated into the previously described graphical pipeline, which itself also was focused on low resource usage.
As a major contribution, and to help with the future development of parallel and distributed applications, in genetics or elsewhere, we also looked at how to make such applications easier to develop.
Based on the parallel object programming model (POP), we created a Java language extension called POP-Java, which allows for easy and transparent distribution of objects.
Through this development, we brought the POP model to the cloud and to Hadoop clusters, and we present a new collaborative distributed computing model called FriendComputing.
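The core idea of transparent object distribution can be illustrated with a small proxy sketch. This is written in Python for illustration (POP-Java itself is a Java language extension), and the class names and the in-process "transport" are invented; a real system would serialize each call and ship it over the network:

```python
# Illustrative sketch of transparent object distribution: a proxy exposes
# the same interface as the real object but forwards every method call
# through a transport hook to wherever the object actually lives.

class Counter:
    def __init__(self):
        self.value = 0
    def increment(self, by=1):
        self.value += by
        return self.value

class Proxy:
    """Forward attribute access and calls to a (possibly remote) target."""
    def __init__(self, target, send):
        self._target = target
        self._send = send          # transport hook: here just a local call
    def __getattr__(self, name):
        method = getattr(self._target, name)
        def call(*args, **kwargs):
            return self._send(method, args, kwargs)
        return call

def local_transport(method, args, kwargs):
    # A real system would marshal the call over the network here.
    return method(*args, **kwargs)

counter = Proxy(Counter(), local_transport)
print(counter.increment())      # 1
print(counter.increment(by=4))  # 5
```

The point of the pattern is that the caller's code is identical whether the target object is local or remote; only the transport hook changes.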
The advances made in the different domains of this thesis have been published in various works specified in this document.
Applying microarray‐based techniques to study gene expression patterns: a bio‐computational approach
(2010)
The regulation and maintenance of iron homeostasis is critical to human health. As a constituent of hemoglobin, iron is essential for oxygen transport, and significant iron deficiency leads to anemia. Eukaryotic cells require iron for survival and proliferation. Iron is part of hemoproteins, iron-sulfur (Fe-S) proteins, and other proteins with functional groups that require iron as a cofactor. At the cellular level, iron uptake, utilization, storage, and export are regulated at different molecular levels (transcriptional, mRNA stability, translational, and posttranslational). Iron regulatory proteins (IRPs) 1 and 2 post-transcriptionally control mammalian iron homeostasis by binding to iron-responsive elements (IREs), conserved RNA stem-loop structures located in the 5'- or 3'-untranslated regions of genes involved in iron metabolism (e.g. FTH1, FTL, and TFRC). To identify novel IRE-containing mRNAs, we integrated biochemical, biocomputational, and microarray-based experimental approaches. Gene expression studies greatly contribute to our understanding of complex relationships in gene regulatory networks. However, the complexity of array design, production, and handling is a limiting factor affecting data quality. The use of customized DNA microarrays improves overall data quality in many situations, but only if analysis tools are available for these specifically designed microarrays.
Methods
In this project, the response to iron treatment was examined under different conditions using bioinformatics methods applied to microarray gene expression data, in order to improve our understanding of the iron regulatory network.
IRP/IRE messenger ribonucleoproteins were immunoselected and their mRNA composition was analysed using an IronChip microarray enriched for genes computationally predicted to contain IRE-like motifs. Analysis of IronChip microarray data requires a specialized tool that can exploit all the advantages of a customized microarray platform. A novel decision-tree-based algorithm was implemented in Perl as the IronChip Evaluation Package (ICEP).
Results
IRE-like motifs were identified from genomic nucleic acid databases by an algorithm combining primary nucleic acid sequence and RNA structural criteria. Depending on the choice of constraining criteria, such computational screens tend to generate a large number of false positives. To refine the search and reduce the number of false-positive hits, additional constraints were introduced. The refined screen yielded 15 IRE-like motifs. A second approach made use of a reported list of 230 IRE-like sequences obtained from screening UTR databases. We selected 6 of these 230 entries based on the ability of the lower IRE stem to form at least 6 out of 7 bp. Corresponding ESTs were spotted onto the human or mouse versions of the IronChip and the results were analysed using ICEP. Our data show that the immunoselection/microarray strategy is a feasible approach for screening bioinformatically predicted IRE genes and for detecting novel IRE-containing mRNAs. In addition, we identified a novel IRE-containing gene, CDC14A (Sanchez M, et al. 2006). The IronChip Evaluation Package (ICEP) is a collection of Perl utilities and an easy-to-use data evaluation pipeline for the analysis of microarray data, with a focus on data quality of custom-designed microarrays. The package was developed for the statistical and bioinformatical analysis of the custom cDNA microarray IronChip, but can easily be adapted to other cDNA- or oligonucleotide-based microarray platforms.
ICEP uses decision tree-based algorithms to assign quality flags and performs robust analysis based on chip design properties regarding multiple repetitions, ratio cut-off, background and negative controls (Vainshtein Y, et al., 2010).
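The lower-stem selection criterion described above (at least 6 of 7 base pairs in the lower IRE stem) is easy to express in code. The sketch below is a hypothetical re-implementation for illustration, not the screen actually used; the example sequences are invented, and G-U wobble pairs are assumed to count as valid pairs:

```python
# Hypothetical illustration of the lower-stem filter: keep an IRE-like
# candidate only if its 7-nt lower stem can form at least 6 base pairs
# (Watson-Crick or G-U wobble).

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
         ("G", "U"), ("U", "G")}

def stem_pairs(strand5, strand3):
    """Count pairs between the 5' strand and the reversed 3' strand."""
    return sum((a, b) in PAIRS
               for a, b in zip(strand5, reversed(strand3)))

def passes_lower_stem_filter(strand5, strand3, min_pairs=6):
    return stem_pairs(strand5, strand3) >= min_pairs

# Invented 7-nt strands: six pairing positions, one G/A mismatch.
print(passes_lower_stem_filter("GUCAAGU", "ACUUGAA"))  # True
```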
In this century, new experimental and computational techniques are adding an enormous amount of information and revealing many biological mysteries, yet the complexity of biological systems still raises new questions. Until now, the main approach to understanding a system has been to divide it into components that can be studied separately. The emerging paradigm is to combine the pieces of information in order to understand the system at a global level. In the present thesis we have tried to study infectious diseases with such a global 'Systems Biology' approach. In the first part, the apoptosis pathway is analyzed. Apoptosis (programmed cell death) is used as a countermeasure in different infections, for example viral infections. The interactions between death-domain-containing proteins are studied to address the following questions: i) how specificity is maintained, showing that it is induced through adaptors; ii) how proliferation/survival signals are induced during activation of apoptosis, suggesting the pivotal role of RIP. The model also allowed us to detect possible new interacting surfaces. The pathway is then studied at a global level in a time-step simulation to understand the evolution of the topology of activators and inhibitors of the pathway. Signal processing is further modeled in detail for the apoptosis pathway in M. musculus to predict the concentration time course of effector caspases. Experimental measurements of caspase-3 and of cell viability validate the model. The second part focuses on the phagosome, an organelle which plays an essential role in the removal of pathogens, as exemplified by M. tuberculosis. Again the problem is addressed in two main sections: i) to understand the processes that are inhibited by M. tuberculosis, we focused in section one on the phospholipid network, applying a time-step simulation; this network plays an important role in the inhibition or activation of actin polymerization on the phagosome membrane.
ii) Furthermore, actin polymers are suggested to play a role in the fusion of the phagosome with the lysosome. To check this hypothesis, an in silico model was developed; we find that the search time is reduced five-fold in the presence of actin polymers. The effects of actin polymer length, of the dimensions of lysosome and phagosome, and of other model parameters are also analyzed. After studying a pathway and then an organelle, the next step was to move to the system level. This was exemplified by the host-pathogen interactions between Bordetella pertussis and Bordetella bronchiseptica. The limited availability of quantitative information was the crucial factor behind the choice of the model type: a Boolean model was developed and used for dynamic simulation. The results predict important factors in Bordetella pathology, especially the importance of Th1-related, rather than Th2-related, responses in the clearance of the pathogen. Some of the quantitative predictions have been cross-checked against experimental results, such as the time course of infection in different mutant and wild-type mice. All these computational models were developed in the presence of limited kinetic data, and their success has been validated by comparison with experimental observations. The comparative models studied in chapters 6 and 9 can be used to explore new host-pathogen interactions: in chapter 6, the analysis of inhibitors and inhibitory paths in three organisms leads to the identification of regulatory hotspots in complex organisms, and in chapter 9 the identification of three phases in B. bronchiseptica and the inhibition of IFN-γ by the TTSS led us to explore similar phases and IFN-γ inhibition in B. pertussis. A further important use of these models is to identify new components playing an essential role in host-pathogen interactions: in silico deletions can point out components that can then be analyzed by experimental mutations.
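The Boolean modelling approach used here can be illustrated with a toy network. The node names and update rules below are invented for illustration and are not taken from the actual Bordetella model; the sketch only shows the mechanics of synchronous Boolean updating until a fixed point is reached:

```python
# Toy Boolean-network sketch: each node is True/False, and all nodes are
# updated synchronously from the previous state until nothing changes.

def simulate(rules, state, max_steps=50):
    """Synchronously update `state` with `rules` until a fixed point."""
    trajectory = [dict(state)]
    for _ in range(max_steps):
        state = {node: rule(state) for node, rule in rules.items()}
        if state == trajectory[-1]:          # fixed point reached
            break
        trajectory.append(dict(state))
    return trajectory

# Invented rules: the pathogen persists unless a Th1 response is mounted;
# IFN-gamma is induced by the pathogen but inhibited by its TTSS.
rules = {
    "pathogen": lambda s: s["pathogen"] and not s["th1"],
    "ifn_g":    lambda s: s["pathogen"] and not s["ttss"],
    "th1":      lambda s: s["ifn_g"],
    "ttss":     lambda s: s["pathogen"],
}
start = {"pathogen": True, "ifn_g": False, "th1": False, "ttss": False}
end = simulate(rules, start)[-1]
print(end)   # pathogen cleared: all nodes False
```

Even in this toy version, the early IFN-γ pulse (before the TTSS switches on) is what triggers Th1 and clears the pathogen, mirroring the kind of qualitative conclusion such models support despite the absence of kinetic parameters.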
The internal transcribed spacer 2 (ITS2) of the ribosomal gene repeat is an increasingly important phylogenetic marker whose RNA secondary structure is widely conserved across eukaryotic organisms. The ITS2 database aims to be a comprehensive resource on ITS2 sequence and secondary structure, based on direct thermodynamic as well as homology modelled RNA folds. Results: (a) A rebuild of the original ITS2 database generation scripts applied to a current NCBI dataset reveal more than 60,000 ITS2 structures. This more than doubles the contents of the original database and triples it when including partial structures. (b) The end-user interface was rewritten, extended and now features user-defined homology modelling. (c) Other possible RNA structure discovery methods (namely suboptimal and shape folding) prove helpful but are not able to replace homology modelling. (d) A use case of the ITS2 database in conjunction with other tools developed at the department gave insight into molecular phylogenetic analysis with ITS2.
To the same degree that computer science knowledge has found its way into the everyday scientific practice of all the life sciences, the focus of bioinformatics research has shifted toward more strongly mathematical and computer-science-oriented topics. Bioinformatics today is more than the computer-assisted processing of large amounts of biological data; it places a decisive focus on the modelling of complex biological systems. The theories applied come in particular from stochastics and statistics, machine learning, and theoretical computer science. In this dissertation I describe, through case studies, the systematic modelling of biological systems from a mathematical and computer-science standpoint, applying methods from these fields at different levels of biological abstraction. Starting from sequence information and proceeding via the transcriptome, the metabolome, and their regulatory interaction to the modelling of population effects, current biological questions are combined with mathematical and computational models and a wealth of experimental data. Particular attention is paid to the process of modelling and to the notion of a model as such within modern bioinformatics research. In detail, the projects (several publications) comprise the development of a new approach for embedding and visualizing multiple sequence and sequence-structure alignments, illustrated by a hemagglutinin alignment of different H5N1 variants, as well as the modelling of the transcriptome of A. thaliana, in which, with the help of a kernelized non-parametric meta-analysis, new genes involved in infection defense could be identified. Furthermore, using our software YANAsquare, we achieved a detailed investigation of the metabolism of L. monocytogenes under activation of the transcription factor prfA, whose predictions could be confirmed by experimental 13C isotopologue studies. A follow-up project targeted the relationship between the regulation of metabolism through the regulation of gene expression and the flux distribution of the metabolic steady-state network. The modelling of a complex organismic phenotype, the cell size development of the diatom Pseudo-nitzschia delicatissima, concludes the investigations.
Background
Phytoplankton communities are often used as a marker for the determination of fresh water quality. The routine analysis, however, is very time-consuming and expensive, as it is carried out manually by trained personnel. The goal of this work is to develop a system for automated analysis.
Results
A novel open source system for the automated recognition of phytoplankton by means of microscopy and image analysis was developed. It integrates the segmentation of the organisms from the background, the calculation of a large range of features, and a neural network for the classification of imaged organisms into different groups of plankton taxa. The analysis of samples containing 10 different taxa showed an average recognition rate of 94.7% and an average error rate of 5.5%. The presented system has a flexible framework which makes it easy to include additional taxa in the future.
Conclusions
The implemented automated microscopy and the new open source image analysis system - PlanktoVision - showed classification results that were comparable or better than existing systems and the exclusion of non-plankton particles could be greatly improved. The software package is published as free software and is available to anyone to help make the analysis of water quality more reproducible and cost effective.
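The recognition rate quoted above is derived from a classification confusion matrix. The sketch below shows that computation on invented numbers, purely to illustrate the metric; it is not the PlanktoVision evaluation code or data:

```python
# Small sketch of how a recognition rate is derived from a confusion
# matrix: correctly classified organisms divided by all organisms.
# The matrix entries below are invented for illustration.

def recognition_rate(confusion):
    """confusion[i][j] = number of taxon-i organisms classified as taxon j."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

confusion = [
    [18, 1, 1],   # taxon A: 18 correct, 2 misclassified
    [0, 19, 1],   # taxon B: 19 correct, 1 misclassified
    [1, 0, 19],   # taxon C: 19 correct, 1 misclassified
]
print(round(recognition_rate(confusion), 3))  # 0.933
```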
The central aim of this work was to use methods from microscopy, image processing, and image recognition to characterize various phytoplankton species, in order to improve and simplify their analysis.
The first focus of the work was the analysis of phytoplankton communities, which serve as markers in the assessment of fresh water quality. The conventional analysis is very laborious, as it is still carried out entirely by hand and requires specially trained personnel. The goal was to build a system for automated recognition in order to simplify the analysis. With the help of automated microscopy, it was possible to capture plankton organisms of different sizes better within a single image by integrating several focal planes. In addition, various fluorescence properties were incorporated into the analysis. With a plugin written for ImageJ, organisms can be separated from the image background and a large number of features can be calculated. By training neural networks, different groups of plankton taxa can be distinguished; moreover, additional taxa can easily be integrated into the analysis to extend the recognition. The first analysis of mixed samples consisting of 10 different taxa showed an average recognition rate of 94.7% and an average false-positive rate of 5.5%. Compared with existing systems, the recognition rate was improved and the false-positive rate clearly reduced. When extending the dataset to 22 taxa, care was taken to use species that pass through different growth stages or show greater similarity to the species already present, in order to expose potential weaknesses of the system. This yielded a good recognition rate (86.8%), with the exclusion of non-plankton particles (11.9%) further improved. A comparison with other classification methods showed that neural networks are superior to other approaches for this problem.
Similarly good classification rates could be achieved with support vector machines. However, these were clearly inferior to the neural network when it came to distinguishing unknown particles.
The second part presents the development of a simple method for the viability analysis of cyanobacteria that requires no further treatment of the samples. The red chlorophyll autofluorescence is used as a marker for living cells and a green unspecific fluorescence as a marker for dead cells. The assay was established and validated with the model organism Synechocystis sp. PCC 6803. The selection of a suitable filter set makes it possible to excite and observe both signals simultaneously and thus to distinguish directly between living and dead cells. The results obtained in establishing the assay could be confirmed by plating, chlorophyll determination, and measurement of the absorption spectrum. Through the use of automated microscopy and a newly written ImageJ plugin, a very accurate and fast analysis of the samples became possible. Its use in monitoring a culture mutagenized for increased temperature tolerance gave accurate and timely insight into the state of the culture. Further results indicate that combining the assay with absorption spectra may provide better insight into the vitality of the culture.