Dynamic interactions and their changes are at the forefront of current research in bioinformatics and systems biology. This thesis focuses on two particular dynamic aspects of cellular adaptation: miRNAs and metabolites.
miRNAs have an established role in hematopoiesis and megakaryocytopoiesis, and platelet miRNAs have potential as tools for understanding basic mechanisms of platelet function. The thesis highlights the possible role of miRNAs in regulating protein translation during the platelet lifespan, with relevance to platelet apoptosis, and identifies the pathways and potential key regulatory molecules involved. Furthermore, corresponding miRNA/target mRNA pairs in murine platelets are identified, and key miRNAs involved in aortic aneurysm are predicted by similar techniques. The clinical relevance of miRNAs in cardiovascular diseases is also discussed: as biomarkers, as targets for later translational therapeutics, and as tissue-specific restrictors of gene expression.
The second part of the thesis highlights the importance of scientific software development in metabolic modelling and how it can support bioinformatics tool development, including software feature analysis such as that performed on metabolic flux analysis applications. We proposed the “Butterfly” approach for implementing scientific software efficiently. Using this approach, software applications were developed for quantitative metabolic flux analysis, for efficient Mass Isotopomer Distribution Analysis (MIDA) in metabolic modelling, and for data management. “LS-MIDA” allows easy and efficient MIDA, and, with a more powerful algorithm and database, the software “Isotopo” allows efficient analysis of metabolic flows, for instance in pathogenic bacteria (Salmonella, Listeria). All three approaches have been published (see Appendices).
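The central normalization step of mass isotopomer distribution analysis can be sketched in a few lines. This is a minimal illustration of the idea only; the function names are ours, not the actual API of LS-MIDA or Isotopo:

```python
def mass_isotopomer_distribution(intensities):
    """Normalize raw intensities of the M+0, M+1, ... peaks of one
    fragment into fractional abundances that sum to 1."""
    total = sum(intensities)
    if total <= 0:
        raise ValueError("no signal in fragment")
    return [i / total for i in intensities]

def mean_label_enrichment(mid):
    """Average labelling per position: weighted mean of the isotopomer
    index, divided by the number of labelable positions."""
    n = len(mid) - 1  # peaks M+0 .. M+n
    return sum(k * f for k, f in enumerate(mid)) / n

# toy spectrum for a 3-carbon fragment (peaks M+0 .. M+3)
mid = mass_isotopomer_distribution([600.0, 250.0, 100.0, 50.0])
```

In a real workflow the measured intensities would additionally be corrected for natural isotope abundance before interpretation.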
Development and application of computational tools for RNA-Seq based transcriptome annotations
(2019)
In order to understand the regulation of gene expression in organisms, precise genome annotation is essential. In recent years, RNA-Seq has become a potent method for generating and improving genome annotations. However, this approach is time-consuming and often inconsistently performed when done manually. In particular, the discovery of non-coding RNAs benefits strongly from the application of RNA-Seq data but requires significant expert knowledge and is labor-intensive. As part of my doctoral study, I developed a modular tool called ANNOgesic that can detect numerous transcribed genomic features, including non-coding RNAs, based on RNA-Seq data in a precise and automatic fashion, with a focus on bacterial and archaeal species. The software performs numerous analyses and generates several visualizations. It can generate high-resolution annotations that are hard to produce using traditional annotation tools based only on genome sequences. ANNOgesic can detect numerous novel genomic features, such as UTR-derived small non-coding RNAs, for which no other tool had been developed before. ANNOgesic is available under an open source license (ISCL) at https://github.com/Sung-Huan/ANNOgesic.
My doctoral work not only includes the development of ANNOgesic but also its application to annotate the transcriptome of Staphylococcus aureus HG003 - a strain which has been an insightful model in infection biology. Despite its potential as a model, a complete genome sequence and annotations have been lacking for HG003. In order to fill this gap, the annotations of this strain, including sRNAs and their functions, were generated using ANNOgesic by analyzing differential RNA-Seq data from 14 different samples (two media conditions with seven time points), as well as RNA-Seq data generated after transcript fragmentation. ANNOgesic was
also applied to annotate several bacterial and archaeal genomes, and as part of this its high performance was demonstrated. In summary, ANNOgesic is a powerful computational tool for RNA-Seq based annotations and has been successfully applied to several species.
Localization microscopy is a class of super-resolution fluorescence microscopy techniques. Localization microscopy methods are characterized by stochastic temporal isolation of fluorophore emission, i.e., making the fluorophores blink such that no two fluorophores close to each other are likely to be photoactive at the same time. Well-known localization microscopy methods include dSTORM, STORM, PALM, FPALM, and GSDIM. The biological community has taken great interest in localization microscopy, since it can enhance the resolution of common fluorescence microscopy by an order of magnitude at little experimental cost.
However, localization microscopy has considerable computational cost since millions of individual stochastic emissions must be located with nanometer precision. The computational cost of this evaluation, and the organizational cost of implementing the complex algorithms, has impeded adoption of super-resolution microscopy for a long time.
In this work, I describe my algorithmic framework for evaluating localization microscopy data.
I demonstrate how my novel open-source software achieves real-time data evaluation, i.e., can evaluate data faster than the common experimental setups can capture them.
I show how this speed is attained on standard consumer-grade CPUs, removing the need for computing on expensive clusters or deploying graphics processing units.
The evaluation is performed with the widely accepted Gaussian PSF model and a Poissonian maximum-likelihood noise model.
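The estimator behind this evaluation can be sketched for a one-dimensional pixel trace: minimize the Poissonian negative log-likelihood of a Gaussian-plus-background model. This toy version uses a coarse grid search and noise-free data for clarity; the actual software fits all parameters with far faster numerical optimizers:

```python
import math

def expected_counts(x0, photons, sigma, background, pixels):
    """Expected photon count per pixel: Gaussian PSF plus constant background."""
    norm = photons / (sigma * math.sqrt(2 * math.pi))
    return [norm * math.exp(-0.5 * ((x - x0) / sigma) ** 2) + background
            for x in pixels]

def poisson_nll(observed, expected):
    """Negative Poisson log-likelihood, up to a constant (drops log k!)."""
    return sum(mu - k * math.log(mu) for k, mu in zip(observed, expected))

def locate(observed, pixels, photons, sigma, background):
    """Maximum-likelihood position estimate by coarse grid search."""
    candidates = [i / 100 for i in range(len(pixels) * 100)]
    return min(candidates,
               key=lambda x0: poisson_nll(
                   observed, expected_counts(x0, photons, sigma, background, pixels)))

pixels = list(range(11))
truth = expected_counts(5.3, 1000, 1.2, 5, pixels)  # noise-free "observation"
estimate = locate(truth, pixels, 1000, 1.2, 5)
```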
I extend the computational model to show how robust, optimal two-color evaluation is realized, allowing correlative microscopy between multiple proteins or structures. By employing cubic B-splines, I show how the evaluation of three-dimensional samples can be made simple and robust, taking an important step towards precise imaging of micrometer-thick samples.
I uncover the behavior and limits of localization algorithms in the face of increasing emission densities.
Finally, I present algorithms that extend localization microscopy to common biological problems.
I investigate cellular movement and motility by considering the in vitro movement of myosin-actin filaments. I show how SNAP-tag fusion proteins enable imaging with bright and stable organic fluorophores in live cells. By analyzing the internal structure of protein clusters, I show how localization microscopy can provide new quantitative approaches beyond pure imaging.
The field of genetics faces many challenges and opportunities in both research and diagnostics due to the rise of next-generation sequencing (NGS), a technology that allows DNA to be sequenced increasingly quickly and cheaply.
NGS is used to analyze not only DNA but also RNA, a closely related molecule present in the cell, in both cases producing large amounts of data.
This large amount of data raises both infrastructure and usability problems, as powerful computing infrastructures are required and the data analysis involves many manual steps that are complicated to execute.
Both of these problems limit the use of NGS in the clinic and in research by producing a bottleneck both computationally and in terms of manpower, as for many analyses geneticists lack the required computing skills.
Over the course of this thesis, we investigated how computer science can help to improve this situation and reduce the complexity of this type of analysis.
We looked at how to make the analysis more accessible in order to increase the number of people who can perform OMICS data analysis (OMICS being an umbrella term covering various genomic data sources).
To approach this problem, we developed, in close collaboration with the Human Genetics Department at the University of Würzburg, a graphical NGS data analysis pipeline aimed at a diagnostics environment while remaining useful in research.
The pipeline has been used in various research papers covering subjects in genomics, transcriptomics, and epigenomics, including works with direct author participation.
To further validate the graphical pipeline, a user survey was carried out which confirmed that it lowers the complexity of OMICS data analysis.
We also studied how the data analysis can be improved in terms of computing infrastructure by improving the performance of certain analysis steps.
We did this both in terms of speed improvements on a single computer (most notably, variant calling became up to 18 times faster) and with distributed computing to make better use of existing infrastructure.
The improvements were integrated into the previously described graphical pipeline, which itself also was focused on low resource usage.
As a major contribution, and to help with the future development of parallel and distributed applications, for use in genetics or elsewhere, we also looked at how to make such applications easier to develop.
Based on the parallel object programming model (POP), we created a Java language extension called POP-Java, which allows for easy and transparent distribution of objects.
Through this development, we brought the POP model to the cloud and to Hadoop clusters, and we present a new collaborative distributed computing model called FriendComputing.
The advances made in the different domains of this thesis have been published in various works specified in this document.
Applying microarray‐based techniques to study gene expression patterns: a bio‐computational approach
(2010)
The regulation and maintenance of iron homeostasis is critical to human health. As a constituent of hemoglobin, iron is essential for oxygen transport, and significant iron deficiency leads to anemia. Eukaryotic cells require iron for survival and proliferation. Iron is part of hemoproteins, iron-sulfur (Fe-S) proteins, and other proteins with functional groups that require iron as a cofactor. At the cellular level, iron uptake, utilization, storage, and export are regulated at different molecular levels (transcriptional, mRNA stability, translational, and posttranslational). Iron regulatory proteins (IRPs) 1 and 2 post-transcriptionally control mammalian iron homeostasis by binding to iron-responsive elements (IREs), conserved RNA stem-loop structures located in the 5'- or 3'-untranslated regions of genes involved in iron metabolism (e.g. FTH1, FTL, and TFRC). To identify novel IRE-containing mRNAs, we integrated biochemical, biocomputational, and microarray-based experimental approaches. Gene expression studies greatly contribute to our understanding of complex relationships in gene regulatory networks. However, the complexity of array design, production, and manipulation is a limiting factor affecting data quality. The use of customized DNA microarrays improves overall data quality in many situations, but only if analysis tools are available for these specifically designed microarrays.
Methods
In this project, the response to iron treatment was examined under different conditions using bioinformatical methods in order to improve our understanding of the iron regulatory network. For these purposes we used microarray gene expression data. To identify novel IRE-containing mRNAs, biochemical, biocomputational, and microarray-based experimental approaches were integrated.
IRP/IRE messenger ribonucleoproteins were immunoselected and their mRNA composition was analysed using an IronChip microarray enriched for genes computationally predicted to contain IRE-like motifs. Analysis of IronChip microarray data requires a specialized tool which can use all the advantages of a customized microarray platform. A novel decision-tree-based algorithm was implemented in Perl as the IronChip Evaluation Package (ICEP).
Results
IRE-like motifs were identified from genomic nucleic acid databases by an algorithm combining primary nucleic acid sequence and RNA structural criteria. Depending on the choice of constraining criteria, such computational screens tend to generate a large number of false positives. To refine the search and reduce the number of false positive hits, additional constraints were introduced. The refined screen yielded 15 IRE-like motifs. A second approach made use of a reported list of 230 IRE-like sequences obtained from screening UTR databases. We selected 6 of these 230 entries based on the ability of the lower IRE stem to form at least 6 out of 7 bp. Corresponding ESTs were spotted onto the human or mouse versions of the IronChip and the results were analysed using ICEP. Our data show that the immunoselection/microarray strategy is a feasible approach for screening bioinformatically predicted IRE genes and for the detection of novel IRE-containing mRNAs. In addition, we identified a novel IRE-containing gene, CDC14A (Sanchez M, et al. 2006). The IronChip Evaluation Package (ICEP) is a collection of Perl utilities and an easy-to-use data evaluation pipeline for the analysis of microarray data with a focus on data quality of custom-designed microarrays. The package has been developed for the statistical and bioinformatical analysis of the custom cDNA microarray IronChip, but can be easily adapted to other cDNA or oligonucleotide-based microarray platforms.
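A drastically simplified caricature of such a combined sequence/structure screen looks for the conserved IRE apical loop (CAGUGN) closed by a short base-paired upper stem. The stem length and pairing rules here are illustrative, not the thesis's actual screening criteria:

```python
# Watson-Crick pairs plus GU wobble, as allowed in RNA stems.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def can_pair(a, b):
    return (a, b) in PAIRS

def find_ire_like(seq, stem=5):
    """Scan an RNA sequence for the conserved IRE apical loop (CAGUGN)
    flanked by `stem` bases on each side that can close an upper stem.
    Returns 0-based start positions of candidate loops."""
    seq = seq.upper().replace("T", "U")
    hits = []
    for i in range(stem, len(seq) - stem - 6):
        if seq[i:i + 5] != "CAGUG":   # 6th loop base (N) is unconstrained
            continue
        left = seq[i - stem:i]
        right = seq[i + 6:i + 6 + stem]
        # stem pairs read outward: innermost left base pairs innermost right base
        if all(can_pair(left[stem - 1 - k], right[k]) for k in range(stem)):
            hits.append(i)
    return hits

hits = find_ire_like("AAAGGAACCAGUGUGUUCCAAA")
```

A real screen, as described above, adds thermodynamic folding and further constraints precisely because such a naive motif scan produces many false positives.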
ICEP uses decision tree-based algorithms to assign quality flags and performs robust analysis based on chip design properties regarding multiple repetitions, ratio cut-off, background and negative controls (Vainshtein Y, et al., 2010).
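The flag assignment can be sketched as a small decision tree; the thresholds and flag names below are illustrative, not ICEP's actual rules:

```python
def quality_flag(signal, background, replicates, neg_control,
                 min_snr=3.0, max_cv=0.3):
    """Assign a quality flag to one spot via a small decision tree.
    Thresholds and flag names are illustrative only."""
    if signal <= neg_control:
        return "FAIL_NEGATIVE_CONTROL"
    if background <= 0 or signal / background < min_snr:
        return "FAIL_LOW_SNR"
    mean = sum(replicates) / len(replicates)
    var = sum((r - mean) ** 2 for r in replicates) / len(replicates)
    cv = var ** 0.5 / mean if mean else float("inf")
    if cv > max_cv:  # poor agreement between repeated spots
        return "FAIL_REPLICATES"
    return "PASS"

flag = quality_flag(1000, 100, [950, 1000, 1050], 50)
```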
In this century, new experimental and computational techniques are adding an enormous amount of information, revealing many biological mysteries, yet the complexity of biological systems still raises new questions. Until now, the main approach to understanding a system has been to divide it into components that can be studied separately. The emerging new paradigm is to combine the pieces of information in order to understand the system at a global level. In the present thesis we have tried to study infectious diseases with such a global 'Systems Biology' approach. In the first part, the apoptosis pathway is analyzed. Apoptosis (programmed cell death) is used as a countermeasure in different infections, for example viral infections. The interactions between death-domain-containing proteins are studied to address the following questions: i) how specificity is maintained - showing that it is induced through adaptors; ii) how proliferation/survival signals are induced during activation of apoptosis - suggesting the pivotal role of RIP. The model also allowed us to detect new possible interacting surfaces. The pathway is then studied at a global level in a time-step simulation to understand the evolution of the topology of activators and inhibitors of the pathway. Signal processing is further modeled in detail for the apoptosis pathway in M. musculus to predict the concentration time course of effector caspases, and experimental measurements of caspase-3 and cell viability validate the model. The second part focuses on the phagosome, an organelle which plays an essential role in the removal of pathogens, as exemplified by M. tuberculosis. Again the problem is addressed in two main sections: i) to understand the processes that are inhibited by M. tuberculosis, we focused in section one on the phospholipid network, applying a time-step simulation, since this network plays an important role in the inhibition or activation of actin polymerization on the phagosome membrane.
ii) Furthermore, actin polymers are suggested to play a role in the fusion of the phagosome with the lysosome. To check this hypothesis, an in silico model was developed; we find that the search time is reduced 5-fold in the presence of actin polymers. Further, the effects of the length of the actin polymers, the dimensions of lysosome and phagosome, and other model parameters are analyzed. After studying a pathway and then an organelle, the next step was to move to the system level. This was exemplified by the host-pathogen interactions between Bordetella pertussis and Bordetella bronchiseptica. The limited availability of quantitative information was the crucial factor behind the choice of model type. A Boolean model was developed and used for a dynamic simulation. The results predict important factors playing a role in Bordetella pathology, especially the importance of Th1-related rather than Th2-related responses in the clearance of the pathogen. Some of the quantitative predictions have been cross-checked against experimental results, such as the time course of infection in different mutants and wild-type mice. All these computational models have been developed in the presence of limited kinetic data, and their success has been validated by comparison with experimental observations. The comparative models studied in chapters 6 and 9 can be used to explore new host-pathogen interactions. For example, in chapter 6 the analysis of inhibitors and inhibitory paths in three organisms leads to the identification of regulatory hotspots in complex organisms, and in chapter 9 the identification of three phases in B. bronchiseptica and the inhibition of IFN-γ by TTSS led us to explore similar phases and IFN-γ inhibition in B. pertussis. A further important use of these models is to identify new components playing an essential role in host-pathogen interactions: in silico deletions can point out such components, which can then be analyzed further by experimental mutations.
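A Boolean model of the kind used for the Bordetella simulation can be sketched with a synchronous update scheme. The toy immune-response rules below are hypothetical and only illustrate the mechanics, not the thesis's actual model:

```python
def simulate(rules, state, steps=20):
    """Synchronously update a Boolean network until a fixed point is
    reached or `steps` updates have been made; returns the trajectory."""
    trajectory = [dict(state)]
    for _ in range(steps):
        new = {node: rule(trajectory[-1]) for node, rule in rules.items()}
        trajectory.append(new)
        if new == trajectory[-2]:  # fixed point reached
            break
    return trajectory

# Toy motif (hypothetical): the pathogen induces Th1, which clears it.
rules = {
    "pathogen": lambda s: s["pathogen"] and not s["Th1"],
    "Th1":      lambda s: s["pathogen"] or s["Th1"],
    "Th2":      lambda s: s["pathogen"] and not s["Th1"],
}
start = {"pathogen": True, "Th1": False, "Th2": False}
final = simulate(rules, start)[-1]
```

Even such a coarse model yields qualitative predictions, here clearance of the pathogen by the Th1 branch, which is the style of reasoning used for the Bordetella system.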
The internal transcribed spacer 2 (ITS2) of the ribosomal gene repeat is an increasingly important phylogenetic marker whose RNA secondary structure is widely conserved across eukaryotic organisms. The ITS2 database aims to be a comprehensive resource on ITS2 sequence and secondary structure, based on direct thermodynamic as well as homology-modelled RNA folds. Results: (a) A rebuild of the original ITS2 database generation scripts applied to a current NCBI dataset reveals more than 60,000 ITS2 structures. This more than doubles the contents of the original database, and triples it when partial structures are included. (b) The end-user interface was rewritten, extended, and now features user-defined homology modelling. (c) Other possible RNA structure discovery methods (namely suboptimal and shape folding) prove helpful but are not able to replace homology modelling. (d) A use case of the ITS2 database in conjunction with other tools developed at the department gave insight into molecular phylogenetic analysis with ITS2.
Background
Phytoplankton communities are often used as a marker for the determination of freshwater quality. The routine analysis, however, is very time-consuming and expensive, as it is carried out manually by trained personnel. The goal of this work is to develop a system for an automated analysis.
Results
A novel open source system for the automated recognition of phytoplankton by the use of microscopy and image analysis was developed. It integrates the segmentation of the organisms from the background, the calculation of a large range of features, and a neural network for the classification of imaged organisms into different groups of plankton taxa. The analysis of samples containing 10 different taxa showed an average recognition rate of 94.7% and an average error rate of 5.5%. The presented system has a flexible framework which easily allows expanding it to include additional taxa in the future.
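The pipeline's stages, segmentation, feature calculation, and classification, can be caricatured as follows; a nearest-centroid rule stands in for PlanktoVision's neural network, and the features and taxa are toy values:

```python
def features(mask):
    """Simple shape features from a binary segmentation mask:
    area and bounding-box fill ratio."""
    coords = [(r, c) for r, row in enumerate(mask)
                     for c, v in enumerate(row) if v]
    area = len(coords)
    rs = [r for r, _ in coords]
    cs = [c for _, c in coords]
    box = (max(rs) - min(rs) + 1) * (max(cs) - min(cs) + 1)
    return (area, area / box)

def classify(sample, centroids):
    """Nearest-centroid classifier (a toy stand-in for the trained network)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda taxon: dist(sample, centroids[taxon]))

centroids = {"diatom": (9.0, 1.0), "filament": (5.0, 0.4)}  # toy taxa
square = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
label = classify(features(square), centroids)
```

The real system computes a much larger feature set and trains a neural network on labelled plankton images, but the data flow is the same.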
Conclusions
The implemented automated microscopy and the new open source image analysis system - PlanktoVision - showed classification results that were comparable or better than existing systems and the exclusion of non-plankton particles could be greatly improved. The software package is published as free software and is available to anyone to help make the analysis of water quality more reproducible and cost effective.
Neurobiology is widely supported by bioinformatics. Due to the large amount of data generated on the biological side, a computational approach is required. This thesis presents four different cases of bioinformatic tools applied in the service of neurobiology.
The first two tools presented belong to the field of image processing. In the first case, we make use of an algorithm based on the wavelet transform to assess calcium activity events in cultured neurons. We designed an open source tool to assist neurobiology researchers in the analysis of calcium imaging videos. Such analysis is usually done manually, which is time-consuming and highly subjective. Our tool speeds up the work and offers the possibility of unbiased detection of calcium events. Even more importantly, our algorithm detects not only neuron spiking activity but also local spontaneous activity, which is normally discarded as irrelevant. We showed that this activity is a determinant of calcium dynamics in neurons and is involved in important functions like signal modulation, memory, and learning.
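The event-detection idea can be sketched with a single-level Haar transform and a robust threshold; this is a crude stand-in for the tool's actual wavelet algorithm:

```python
import math

def haar_details(signal):
    """Single-level Haar wavelet detail coefficients of an even-length trace."""
    return [(signal[2 * i + 1] - signal[2 * i]) / math.sqrt(2)
            for i in range(len(signal) // 2)]

def detect_events(signal, k=3.0):
    """Flag transients where the Haar detail exceeds k times the robust
    noise scale (median absolute deviation of the details)."""
    d = haar_details(signal)
    mad = sorted(abs(x) for x in d)[len(d) // 2]
    threshold = k * mad / 0.6745  # MAD -> standard deviation for Gaussian noise
    return [2 * i for i, x in enumerate(d) if abs(x) > threshold]

# toy fluorescence trace with one sharp calcium transient at frame 5
trace = [0.0, 0.1, 0.0, 0.1, 0.0, 5.0, 0.0, 0.1, 0.0, 0.1]
events = detect_events(trace)
```

A full detector would work across several wavelet scales so that both fast spikes and slower local events survive the thresholding.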
The second project is a segmentation task. In our case we are interested in segmenting the neuron nuclei in electron microscopy images of C. elegans. Marking these structures is necessary in order to reconstruct the connectome of the organism. C. elegans is a great study case due to the simplicity of its nervous system (only 302 neurons). This worm, despite its simplicity, has taught us a lot about neuronal mechanisms. There is still a lot of information we can extract from C. elegans, and therein lies the importance of reconstructing its connectome. There is a current version of the C. elegans connectome, but it was done by hand and on a single subject, which leaves considerable room for error. By automating the segmentation of the electron microscopy images we guarantee an unbiased approach and will be able to verify the connectome on several subjects.
For the third project we moved from image processing to biological modeling. Because of the high complexity of even small biological systems, it is necessary to analyze them with the help of computational tools; the term in silico was coined to refer to such computational models of biological systems. We designed an in silico model of the TNF (tumor necrosis factor) ligand and its two principal receptors. This biological system is highly relevant because it is involved in the inflammation process. Inflammation is of utmost importance as a protection mechanism, but it can also lead to complicated diseases (e.g. cancer), and chronic inflammation processes can be particularly dangerous in the brain. In order to better understand the dynamics that govern the TNF system, we created a model using the BioNetGen language, a rule-based language that allows one to simulate systems where multiple agents are governed by a single rule. Using our model we characterized the TNF system and formed hypotheses about the relation of the ligand with each of the two receptors. These hypotheses can later be used to define drug targets in the system or possible treatments for chronic inflammation or a deficient inflammatory response.
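The actual model is written in the rule-based BioNetGen language; as a rough illustration of the underlying mass-action dynamics, one ligand binding two receptor species can be integrated by hand (rate constants and concentrations are hypothetical):

```python
def simulate_binding(L0, R1_0, R2_0, k1, k2, dt=0.001, steps=10000):
    """Euler integration of irreversible mass-action binding of a ligand L
    to two receptor species R1 and R2 (for illustration only)."""
    L, R1, R2, C1, C2 = L0, R1_0, R2_0, 0.0, 0.0
    for _ in range(steps):
        v1 = k1 * L * R1   # rate of L + R1 -> C1
        v2 = k2 * L * R2   # rate of L + R2 -> C2
        L  -= (v1 + v2) * dt
        R1 -= v1 * dt
        R2 -= v2 * dt
        C1 += v1 * dt
        C2 += v2 * dt
    return L, R1, R2, C1, C2

# faster binding to receptor 1 (k1 > k2) shifts complex formation towards C1
L, R1, R2, C1, C2 = simulate_binding(1.0, 1.0, 1.0, k1=2.0, k2=0.5)
```

Rule-based tools like BioNetGen generate and integrate such reaction networks automatically, which is what makes models with many interacting agents tractable.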
The final project deals with the protein folding problem. In our organism, proteins are folded all the time, because only in their folded conformation are proteins capable of doing their job (with very few exceptions). This folding process presents a great challenge for science because it has been shown to be an NP-hard problem. NP stands for nondeterministic polynomial time; in practice this means that no efficient general algorithm for such problems is known. Nevertheless, somehow the body is capable of folding a protein in just milliseconds. This phenomenon puzzles not only biologists but also mathematicians. In mathematics, NP problems have been studied for a long time, and it is known that an efficient solution to one NP-complete problem would solve many others. If we manage to understand how nature solves the protein folding problem, then we might be able to apply this solution to many other problems. Our research intends to contribute to this discussion - unfortunately, not by explaining how nature solves the protein folding problem, but by arguing that it does not solve the problem at all. This seems contradictory, since I just mentioned that the body folds proteins all the time, but our hypothesis is that organisms have learned to solve a simplified version of the problem. Nature does not solve the protein folding problem in its full complexity; it simply solves a small instance of the problem - an instance which is as simple as a convex optimization problem. We formulate the protein folding problem as an optimization problem and present some toy examples to illustrate the formulation. If our hypothesis is true, it means that protein folding is a simple problem, so we just need to understand and model the conditions in the vicinity inside the cell at the moment the folding process occurs.
Once we understand this starting conformation and its influence on the folding process, we will be able to design treatments for amyloid diseases such as Alzheimer's and Parkinson's.
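The optimization view can be illustrated with a toy convex "energy" over a few torsion-like variables, minimized by plain gradient descent. This only illustrates the claim that a convex instance is easy; it is not a folding algorithm, and the energy terms are invented:

```python
def minimize(grad, x0, lr=0.1, steps=500):
    """Plain gradient descent; converges for smooth convex objectives."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Toy convex energy: springs pulling each torsion-like variable towards a
# "native" value, plus a weak smoothing term coupling neighbours.
native = [0.5, -0.2, 0.3]

def grad(x):
    g = [2 * (xi - ni) for xi, ni in zip(x, native)]
    for i in range(len(x) - 1):
        d = x[i] - x[i + 1]
        g[i] += 0.2 * d
        g[i + 1] -= 0.2 * d
    return g

solution = minimize(grad, [0.0, 0.0, 0.0])
```

For a convex objective like this, any local descent method finds the global minimum quickly, which is the sense in which a restricted folding instance could be "simple".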
In summary, this thesis contributes to the neurobiology research field on four different fronts. Two are practical contributions with immediate benefits: the calcium imaging video analysis tool and the TNF in silico model. The neuron nuclei segmentation is a contribution for the near future, a step towards the full annotation of the C. elegans connectome and, later, the reconstruction of the connectomes of other species. Finally, the protein folding project is a first impulse to change the way we conceive of the protein folding process in nature. We try to point future research in a novel direction, in which the most relevant characteristic of the process is not the amino acid sequence but the conditions within the cell.
Insights into the evolution of protein domains give rise to improvements of function prediction
(2005)
The growing number of uncharacterised sequences in public databases has turned the prediction of protein function into a challenging research field. Traditional annotation methods are often error-prone due to the small subset of proteins with experimentally verified function. The goal of this thesis was to analyse the function and evolution of protein domains in order to understand molecular processes in the cell. The focus was on signalling domains of poorly understood function, as well as on functional sites of protein domains in general. Glucosaminidases (GlcNAcases) represent key enzymes in signal transduction pathways. Together with glucosamine transferases, they serve as molecular switches, similar to kinases and phosphatases. Little was known about the molecular function and structure of the GlcNAcases. In this thesis, the GlcNAcases were identified as remote homologues of N-acetyltransferases. By comparing the homologous sequences, I was able to predict functional sites of the GlcNAcase family and to identify the GlcNAcases as the first member of the acetyltransferase superfamily with a distinct catalytic mechanism that is not involved in the transfer of acetyl groups. In a similar approach, the sensor domain of a plant hormone receptor was studied. I was able to predict putative ligand-binding sites by comparing evolutionary constraints in functionally diverged subfamilies. Most of the putative ligand-binding sites have been experimentally confirmed in the meantime. Given the importance of enzymes in cellular signalling, one might expect that substitutions rendering catalytic amino acids inactive would not be tolerated. Nevertheless, by scanning catalytic positions of the protein tyrosine phosphatase families, I found many inactive domains among single-domain and tandem-domain phosphatases in metazoan proteomes.
In addition, I found that inactive phosphatases are conserved throughout evolution, which raised the question of the function of these catalytically inactive phosphatase domains. An analysis of evolutionary site rates of amino acid substitutions revealed a cluster of conserved residues in the apparently redundant domain of tandem phosphatases. This putative regulatory center might be responsible for the experimentally verified dimerization of the active and inactive domain, which controls the catalytic activity of the active phosphatase domain. Moreover, based on different evolutionary site rates within the phosphatase family, I detected a subgroup of inactive phosphatases which presumably functions in substrate recognition. The characterization of these new regulatory modules in the phosphatase family raised the question whether inactivation of enzymes is a more general evolutionary mechanism to enlarge signalling pathways and whether inactive domains are also found in other enzyme families. A large-scale analysis of substitutions at catalytic positions of enzymatic domains was performed in this work. I identified many domains with inactivating substitutions in various enzyme families. Signalling domains harbour a particularly high occurrence of catalytically inactive domains, indicating that these domains have evolved to modulate existing regulatory pathways. Furthermore, it was shown that inactivation of enzymes by single substitutions happened multiple times independently in evolution. The surprising variability of amino acids at catalytic positions was decisive for a subsequent analysis of the diversity of functional sites in general. Using functional residues extracted from structural complexes, I could show that functional sites of protein domains vary not only in their type of amino acid but also in their structural location within the domain.
In the process of evolution, protein domains have arisen from duplication events and subsequently adapted to new binding partners and developed new functions, which is reflected in the high variability of functional sites. However, great differences exist between domain families. The analysis demonstrated that functional sites of nuclear domains are more conserved than functional sites of extracellular domains. Furthermore, the type of ligand influences the degree of conservation, for example ion binding sites are more conserved than peptide binding sites. The work presented in this thesis has led to the detection of functional sites in various protein domains involved in signalling pathways and it has resulted in insights into the molecular function of those domains. In addition, properties of functional sites of protein domains were revealed. This knowledge can be used in the future to improve the prediction of protein function and to identify functional sites of proteins.
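The scan for inactivating substitutions at catalytic positions can be sketched as follows; the motif and alignment columns are illustrative, not the actual catalytic signatures used in the thesis:

```python
# Catalytic residues of a hypothetical enzyme family, given as 0-based
# alignment columns mapped to their expected amino acids
# (loosely modelled on a phosphatase-like CX5R motif, for illustration).
CATALYTIC = {5: "C", 11: "R"}

def inactivating_substitutions(aligned_seq, catalytic=CATALYTIC):
    """Return (column, expected, observed) for each catalytic position
    where the aligned sequence deviates from the expected residue."""
    return [(col, aa, aligned_seq[col])
            for col, aa in catalytic.items()
            if aligned_seq[col] != aa]

def is_putatively_inactive(aligned_seq):
    return bool(inactivating_substitutions(aligned_seq))

active   = "AAAAACGGGGGRAAA"   # C at column 5, R at column 11
inactive = "AAAAASGGGGGRAAA"   # C -> S at the catalytic cysteine
```

Scaled up over whole proteomes and combined with conservation analysis, this kind of scan is what separates genuinely inactive domains from alignment noise.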
Genome sequence analysis
A combination of genome analysis applications has been established during this project. It offers an efficient platform to interactively compare similar genome regions and reveal differences between loci. Genes and operons can be rapidly analyzed and local collinear blocks (LCBs) categorized according to their function. The features of interest are parsed, recognized, and clustered into reports. Phylogenetic relationships can be readily examined, such as the evolution of critical factors or of a certain highly conserved region. The resulting platform-independent software packages (GENOVA and inGeno) have proven to be efficient and easy to handle in a number of projects. The capabilities of the software allowed the investigation of virulence factors, e.g. rsbU, strains' biological design, and in particular pathogenicity feature storage and management. We have successfully investigated the genomes of Staphylococcus aureus strains (COL, N315, 8325, RN1HG, Newman), Listeria spp. (welshimeri, innocua and monocytogenes), E. coli strains (O157:H7 and MG1655) and Vaccinia strains (WR, Copenhagen, Lister, LIVP, GLV-1h68 and parental strains).
Metabolic network analysis
Our YANAsquare package offers a workbench to rapidly establish genome-scale metabolic networks, such as that of the bacterium Staphylococcus aureus, as well as metabolic networks of interest such as the murine phagosome lipid signalling network. YANAsquare recruits reactions from online databases using an integrated KEGG browser, which reduces the effort of building large metabolic networks. The involved calculation routines (a METATOOL-derived wrapper or a native Java implementation) readily obtain all possible flux modes (EM/EP) for metabolite fluxes within the network. Advanced layout algorithms visualize the topological structure of the network. In addition, the generated structure can be dynamically modified in the graphic interface.
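At steady state, admissible flux distributions v satisfy S v = 0, where S is the stoichiometric matrix; this constraint underlies the flux modes mentioned above. A minimal pure-Python null-space computation for a toy linear pathway (not YANAsquare's actual EM/EP algorithm, which enumerates elementary modes):

```python
def null_space(S, tol=1e-9):
    """Basis of {v : S v = 0} via Gauss-Jordan elimination (pure Python)."""
    rows = [row[:] for row in S]
    m, n = len(rows), len(rows[0])
    pivots, r = [], 0
    for c in range(n):
        pivot = next((i for i in range(r, m) if abs(rows[i][c]) > tol), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        piv = rows[r][c]
        rows[r] = [x / piv for x in rows[r]]
        for i in range(m):
            if i != r and abs(rows[i][c]) > tol:
                f = rows[i][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    basis = []
    for free in (c for c in range(n) if c not in pivots):
        v = [0.0] * n
        v[free] = 1.0
        for i, p in enumerate(pivots):
            v[p] = -rows[i][free]
        basis.append(v)
    return basis

# Toy pathway: R1: -> A, R2: A -> B, R3: B ->   (metabolites A, B)
S = [[1.0, -1.0, 0.0],
     [0.0, 1.0, -1.0]]
modes = null_space(S)
```

For this chain the only steady-state mode carries equal flux through all three reactions, which matches the intuition that what enters the pathway must leave it.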
The generated network as well as the manipulated layout can be validated and stored (XML file following the SBML level-2 scheme). This format can be further parsed and analyzed by other systems biology software, such as CellDesigner. Moreover, the integrated robustness-evaluation routine can examine how the synthesis rates throughout the whole network are affected by each single mutation. We have successfully applied the method to simulate single and multiple gene knockouts, comprehensively revealing the affected fluxes. Recently we applied the method to proteomic data and extracellular metabolite data of Staphylococci to study the physiological changes in the flux distribution. Calculations at different time points, including different conditions such as hypoxia or stress, show a good fit to experimental data. Moreover, using the proteomic data (enzyme amounts) calculated from 2D gel electrophoresis experiments, our study provides a way to compare the fluxome and the enzyme expression.
Oncolytic vaccinia virus (VACV): We investigated the genetic differences between the de novo sequence of the recombinant oncolytic virus GLV-1h68 and other related VACVs, including function predictions for all genome differences found. Our phylogenetic analysis indicates that GLV-1h68 is closest to Lister strains but has lost several ORFs present in its parental LIVP strain, including genes encoding CrmE and a viral Golgi anti-apoptotic protein, v-GAAP. Comparing viral genes in the Lister, WR and COP strains showed that the functions of viral genes were either strain-specific, tissue-specific or host-specific. This helps to rationally design more optimized oncolytic virus strains for cancer therapy in human patients. Differences identified in the comparison of open reading frames (ORFs) include genes for host-range selection, virulence and immune modulation proteins, e.g.
ankyrin-like proteins, the serine proteinase inhibitor SPI-2/CrmA, the tumor necrosis factor (TNF) receptor homolog CrmC, and semaphorin-like and interleukin-1 receptor homolog proteins. The contribution of foreign gene expression cassettes in the therapeutic and oncolytic virus GLV-1h68 was studied, including the F14.5L, J2R and A56R loci. The contribution of F14.5L inactivation to the reduced virulence was demonstrated by comparing the virulence data of GLV-1h68 with its F14.5L-null and revertant viruses. The comparison suggests that insertion of a foreign gene expression cassette into a nonessential locus of the viral genome is a practical way to attenuate VACVs, especially if the nonessential locus itself contains a virulence gene. This reduces the virulence of the virus without overly compromising its replication competency, the key to its oncolytic activity. The reduced pathogenicity of GLV-1h68 was confirmed by our experimental collaboration partners in male mice bearing C6 rat glioma and in immunocompetent mice bearing B16-F10 murine melanoma. In conclusion, bioinformatics and experimental data show that GLV-1h68 is a promising engineered VACV variant for anticancer therapy, with tumor-specific replication, reduced pathogenicity and benign tissue tropism.
Biological systems such as cells or whole organisms are governed by complex regulatory networks of transcription factors, hormones and other regulators which determine the behavior of the system depending on internal and external stimuli. In mathematical models of these networks, genes are represented by interacting “nodes” whose “value” represents the activity of the gene.
Control processes in these regulatory networks are challenging to elucidate and quantify. Previous control centrality metrics, which aim to mathematically capture the ability of individual nodes to control biological systems, have been found to suffer from problems regarding biological plausibility.
This thesis presents a new approach to control centrality in biological networks. Three types of network control are distinguished: Total control centrality quantifies the impact of gene mutations and identifies potential pharmacological targets such as genes involved in oncogenesis (e.g. zinc finger protein GLI2 or bone morphogenetic proteins in chondrocytes). Dynamic control centrality describes relaying functions as observed in signaling cascades (e.g. control in mouse colon stem cells). Value control centrality measures the direct influence of the value of a node on the network (e.g. Indian hedgehog as an essential regulator of proliferation in chondrocytes). Well-defined network manipulations define all three centralities not only for nodes, but also for the interactions between them, enabling detailed insights into network pathways.
The calculation of the new metrics is made possible by substantial computational improvements in the simulation algorithms for several widely used mathematical modeling paradigms for genetic regulatory networks, which are implemented in the regulatory network simulation framework Jimena created for this thesis.
Applying the new metrics to biological networks and artificial random networks shows how these mathematical concepts correspond to experimentally verified gene functions and signaling pathways in immunity and cell differentiation. In contrast to controversial previous results, including those from the Barabási group, all results indicate that the ability to control biological networks resides in only a few driver nodes characterized by a high number of connections to the rest of the network. Autoregulatory loops strongly increase the controllability of the network, i.e. its ability to control itself, and biological networks are characterized by high controllability in conjunction with high robustness against mutations, a combination best achieved in sparsely connected networks with densities (i.e. connection-to-node ratios) of around 2.0 to 3.0.
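The knockout-based intuition behind such control metrics can be sketched on a toy Boolean network. The wiring, the clamping and the impact measure below are hypothetical simplifications of the thesis' centralities (which Jimena computes for several modeling paradigms); they only illustrate why a highly connected hub controls more of the network than a leaf node:

```python
# Toy 4-node Boolean network (hypothetical wiring, for illustration):
# hub "A" activates B and C; D needs B AND C; A autoregulates,
# a motif the thesis links to higher controllability.
RULES = {
    "A": lambda s: s["A"],            # autoregulatory loop
    "B": lambda s: s["A"],
    "C": lambda s: s["A"],
    "D": lambda s: s["B"] and s["C"],
}

def simulate(state, clamp=None, steps=20):
    """Synchronous update until a fixed point (or step limit); `clamp`
    pins a node to a fixed value, modelling a knockout."""
    state = dict(state)
    for _ in range(steps):
        if clamp:
            state.update(clamp)
        new = {n: int(f(state)) for n, f in RULES.items()}
        if clamp:
            new.update(clamp)
        if new == state:
            break
        state = new
    return state

def knockout_impact(node, start):
    """Fraction of nodes whose steady state changes when `node` is
    clamped to 0 -- a crude stand-in for total control centrality."""
    base = simulate(start)
    ko = simulate(start, clamp={node: 0})
    return sum(base[n] != ko[n] for n in RULES) / len(RULES)

start = {"A": 1, "B": 0, "C": 0, "D": 0}
print(knockout_impact("A", start))  # hub: flips every node
print(knockout_impact("D", start))  # leaf: flips only itself
```

In this sketch the hub scores 1.0 and the leaf 0.25, mirroring the finding that control resides in a few highly connected driver nodes.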
The new concepts thus considerably narrow the gap between network science and biology and can be used in various areas such as system modeling, plausibility checks and system analyses.
Medical applications discussed in this thesis include the search for oncogenes and pharmacological targets, as well as their functional characterization.
The topic of my doctoral research was the computational analysis of metagenomic data. A metagenome comprises the genomic information from all the microorganisms within a certain environment. The currently available metagenomic data sets cover only parts of these usually huge metagenomes due to the high technical and financial effort of such sequencing endeavors. During my thesis I developed bioinformatic tools and applied them to analyse genomic features of different metagenomic data sets and to search these sequence collections for enzymes of importance for biotechnological or pharmaceutical applications. In these studies nine metagenomic projects (with up to 41 subsamples) were analysed. The samples originated from diverse environments such as farm soil, acid mine drainage, microbial mats on whale bones, marine water, fresh water, water treatment sludges and the human gut flora. Additionally, data sets of conventionally retrieved sequence data were taken into account and compared with each other.
The phylum Tardigrada consists of about 1000 described species to date. The animals live in marine, freshwater and terrestrial ecosystems all over the world. Tardigrades are polyextremophiles: they are capable of resisting extreme temperature, pressure or radiation. In the event of desiccation, tardigrades enter a so-called tun stage. The reason for their great tolerance of extreme environmental conditions has not yet been discovered. Our Funcrypta project aims at answering the question of which mechanisms underlie these adaptation capabilities, particularly with regard to the species Milnesium tardigradum. The first part of this thesis describes the establishment of expressed sequence tag (EST) libraries for different stages of M. tardigradum. From proteomics data we bioinformatically identified 144 proteins with a known function and additionally 36 proteins which seemed to be specific for M. tardigradum. A comprehensive web-based database allows us to merge the proteome and transcriptome data; for this purpose we created an annotation pipeline for the functional annotation of the protein and nucleotide sequences. Additionally, we clustered the obtained proteome dataset and identified some tardigrade-specific proteins (TSPs) which did not show homology to known proteins. Moreover, we examined the heat shock proteins of M. tardigradum and their different expression levels depending on the state of the animals. In further bioinformatic analyses of the whole data set, we discovered promising proteins and pathways which are described as correlated with stress tolerance, e.g. late embryogenesis abundant (LEA) proteins. Besides, we compared the tardigrades with nematodes, rotifers, yeast and man to identify shared and tardigrade-specific stress pathways.
An analysis of the 5′ and 3′ untranslated regions (UTRs) demonstrates a strong usage of stabilising motifs like the 15-lipoxygenase differentiation control element (15-LOX-DICE) but also reveals a lack of other commonly used UTR motifs, e.g. AU-rich elements. The second part of this thesis focuses on the relatedness between several cryptic species within the tardigrade genus Paramacrobiotus. For the first time, we used the sequence-structure information of the internal transcribed spacer 2 (ITS2) as a phylogenetic marker in tardigrades. This allowed the description of three new species which were indistinguishable using morphological characters or common molecular markers like the 18S ribosomal ribonucleic acid (rRNA) or the cytochrome c oxidase subunit I (COI). In a large in silico simulation study we also succeeded in showing the benefit of adding structure information to the ITS2 sequence for phylogenetic tree reconstruction. Beyond the genus Paramacrobiotus, we used the ITS2 to corroborate a monophyletic DO-group (Sphaeropleales) within the Chlorophyceae. Additionally, we redesigned another comprehensive database, the ITS2 database, doubling its number of ITS2 sequence-structure pairs. In conclusion, this thesis provides first insights (6 first-author publications and 4 co-author publications) into the reasons for the enormous adaptation capabilities of tardigrades and offers a solution to the debate on the phylogenetic relatedness within the tardigrade genus Paramacrobiotus.
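The idea of adding structure information to a sequence marker can be illustrated with a toy distance over aligned sequence-structure pairs. This Hamming-style measure is a hypothetical simplification; the thesis uses dedicated ITS2 sequence-structure alignment and tree reconstruction, not this sketch:

```python
def seq_struct_distance(seq1, str1, seq2, str2):
    """Toy pairwise distance over aligned ITS2 sequence-structure
    pairs: a position counts as different if either the nucleotide
    or the secondary-structure state ('(', ')', '.') differs.
    Illustrative only, not the ITS2 database tool chain."""
    assert len(seq1) == len(seq2) == len(str1) == len(str2)
    diffs = sum(
        (a != b) or (x != y)
        for a, b, x, y in zip(seq1, seq2, str1, str2)
    )
    return diffs / len(seq1)

# Two hypothetical aligned ITS2 fragments: identical sequence,
# one structural difference that sequence-only markers would miss.
d = seq_struct_distance("GCAUGC", "((..))",
                        "GCAUGC", "((...)")
print(d)
```

Because the two fragments have identical sequences, a sequence-only marker would report distance 0; the combined measure still separates them, which is the benefit exploited for the cryptic Paramacrobiotus species.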
An essential topic for synthetic biologists is to understand the structure and function of biological processes and the proteins involved, and to plan experiments accordingly. Remarkable progress has been made in recent years towards this goal. However, efforts to collect and present all information on processes and functions are still cumbersome. The database tool GoSynthetic provides a new, simple and fast way to analyse biological processes using a hierarchical database. Four different search modes are implemented. Furthermore, protein interaction data, cross-links to organism-specific databases (17 organisms including six model organisms and their interactions), COG/KOG, GO and IntAct data are warehoused. The built-in connection to technical and engineering terms enables simple switching between biological concepts and concepts from engineering, electronics and synthetic biology. The current version of GoSynthetic covers more than one million processes, proteins, COGs and GOs. Its use is illustrated by various application examples probing process differences and designing modifications.
In this work, models of molecular networks consisting of ordinary differential equations are extended by terms that describe the interaction of the molecular network with the environment it is embedded in. These terms model the effects of external stimuli on the molecular network. The usability of this extension is demonstrated with a model of a circadian clock that, extended with such terms, simultaneously reproduces data from several experiments.
Once the model including external stimuli is set up, a framework is developed to calculate external stimuli that have a predefined desired effect on the molecular network. For this purpose, the task of finding appropriate external stimuli is formulated as a mathematical optimal control problem, for which many solution methods are available. Several methods are discussed and worked out to calculate a solution of the corresponding optimal control problem. The application of the framework to find pharmacological intervention points or effective drug combinations is pointed out and discussed. Furthermore, the framework is related to existing network analysis tools, and their combination with the framework to find dedicated external stimuli is discussed.
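The core loop of such a framework (simulate the stimulated network, then choose the stimulus that best produces a desired end state) can be sketched with a toy one-gene model. Forward Euler integration and a brute-force search stand in for the optimal control methods actually used; model, parameters and candidate grid are all hypothetical:

```python
def simulate(u, x0=0.0, dt=0.01, T=5.0):
    """Forward-Euler simulation of a minimal one-gene model
    dx/dt = -x + u, where the constant external stimulus u is the
    control input (a toy stand-in for the stimulated ODE network)."""
    x = x0
    for _ in range(int(T / dt)):
        x += dt * (-x + u)
    return x

def find_stimulus(target, candidates):
    """Pick the stimulus whose end state lies closest to the target:
    a brute-force surrogate for solving the optimal control problem."""
    return min(candidates, key=lambda u: (simulate(u) - target) ** 2)

best = find_stimulus(target=0.8, candidates=[i / 10 for i in range(21)])
print(best)  # 0.8
```

In the thesis, the scalar state becomes a gene regulatory network, the constant u becomes a time-dependent stimulus, and the brute-force search is replaced by proper optimal control methods; the simulate-and-score structure stays the same.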
The complete framework is verified with biological examples by comparing the calculated results with data from the literature. For this purpose, platelet aggregation is investigated based on a corresponding gene regulatory network, and associated receptors are detected. Furthermore, a transition from one type of T-helper cell to another is analyzed in a tumor setting, where missing agents are calculated to induce the corresponding switch in vitro. Next, a gene regulatory network of a myocardiocyte is investigated, showing how the presented framework can be used to quantitatively compare different treatment strategies with respect to their beneficial effects and side effects. Moreover, a constitutively activated signaling pathway, which thus causes maleficent effects, is modeled, and intervention points with corresponding treatment strategies are determined that steer the gene regulatory network from a pathological expression pattern back to a physiological one.
Background: The incidence of Non-Hodgkin lymphoma (NHL), one of the most frequently observed cancers, continues to rise. Diffuse large B-cell lymphoma (DLBCL) is the most common of the NHLs. There are two subgroups of DLBCL with different gene expression patterns: ABC ("activated B-cell-like DLBCL") and GCB ("germinal center B-cell-like DLBCL"). Without therapy, patients often die within a few months, with the ABC type exhibiting the more aggressive behaviour. A further B-cell lymphoma is the mantle cell lymphoma (MCL); it is rare, shows a very poor prognosis, and there is no cure yet. Methods: In this project these B-cell lymphomas were examined with bioinformatics methods to find new characteristics or undiscovered events on the molecular level, in order to improve the understanding and therapy of lymphomas. For this purpose we used survival, gene expression and comparative genomic hybridization (CGH) data. Such clinical studies yield large data sets from which previously unknown trends can be revealed. Results (MCL): The published proliferation signature correlates directly with survival. Exploratory analyses of gene expression and CGH data of MCL samples (n=71) revealed a valid grouping according to the median of the proliferation signature values. The second axis of a correspondence analysis distinguishes between good and bad prognosis. Statistical testing (moderated t-test, Wilcoxon rank-sum test) showed differences in the cell cycle and delivered a network of kinases responsible for the difference between good and bad prognosis. A set of seven genes (CENPE, CDC20, HPRT1, CDC2, BIRC5, ASPM, IGF2BP3) predicted survival similarly well as the proliferation signature with its 20 genes. Furthermore, some chromosomal bands could be associated with prognosis in the explorative analysis (chromosome 9: 9p24, 9p23, 9p22, 9p21, 9q33 and 9q34).
Results (DLBCL): A new normalization of the gene expression data of DLBCL patients revealed a better separation of risk groups by the signature-based predictor published in 2002. We achieved a similarly good separation with only six genes. Exploratory analysis of the gene expression data confirmed the subgroups ABC and GCB. We recognized a clear difference between early and late cell cycle stages among cell cycle genes, which can separate ABC and GCB. Together with gene sets that identify ABC and GCB, the classical lymphoma genes and the best separating genes (ASB13, BCL2, BCL6, BCL7A, CCND2, COL3A1, CTGF, FN1, FOXP1, IGHM, IRF4, LMO2, LRMP, MAPK10, MME, MYBL1, NEIL1 and SH3BP5) form a network which can classify and explain the ABC and GCB groups. Altogether these findings are useful for diagnosis, prognosis and therapy (cytostatic drugs).
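The median split on a signature score used for the MCL grouping can be sketched directly: average the expression of the signature genes per sample, then split the cohort at the median. Gene names are taken from the seven-gene set above, but the expression values are invented for illustration:

```python
from statistics import median

def signature_score(expression, signature_genes):
    """Mean expression of a gene signature per sample (toy version of
    a proliferation signature score)."""
    return {
        sample: sum(values[g] for g in signature_genes) / len(signature_genes)
        for sample, values in expression.items()
    }

def median_split(scores):
    """Split samples at the cohort median into high- and low-score
    groups, mirroring the median split of the proliferation signature."""
    m = median(scores.values())
    return {s: ("high" if v > m else "low") for s, v in scores.items()}

# Hypothetical expression of two signature genes in four samples:
expr = {
    "s1": {"CDC20": 9.1, "BIRC5": 8.7},
    "s2": {"CDC20": 5.0, "BIRC5": 5.4},
    "s3": {"CDC20": 8.8, "BIRC5": 9.0},
    "s4": {"CDC20": 4.9, "BIRC5": 5.1},
}
groups = median_split(signature_score(expr, ["CDC20", "BIRC5"]))
print(groups)  # {'s1': 'high', 's2': 'low', 's3': 'high', 's4': 'low'}
```

In the study the resulting high/low groups are then compared against survival; here the split itself is all that is illustrated.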
In recent years, high-throughput experiments have provided a vast amount of data from all areas of molecular biology, including genomics, transcriptomics, proteomics and metabolomics. Their analysis using bioinformatics methods has developed accordingly, towards a systematic approach to understand how genes and their resulting proteins give rise to biological form and function. They interact with each other and with other molecules in highly complex structures, which are explored in network biology. The in-depth knowledge of genes and proteins obtained from high-throughput experiments can be complemented by the architecture of molecular networks to gain a deeper understanding of biological processes. This thesis provides methods and statistical analyses for the integration of molecular data into biological networks and the identification of functional modules, as well as their application to distinct biological data sets. The integrated network approach is implemented as a software package, termed BioNet, for the statistical language R. The package includes the statistics for the integration of transcriptomic and functional data with biological networks, the scoring of nodes and edges of these networks, as well as methods for subnetwork search and visualisation. The exact algorithm is extensively tested in a simulation study and outperforms existing heuristic methods for this NP-hard problem in accuracy and robustness. The variability of the resulting solutions is assessed on perturbed data, generated for both the integrated data and the network, mimicking random or biased factors that obscure the biological signal. An optimal, robust module can be calculated using a consensus approach based on resampling: it optimally summarizes an ensemble of solutions in a robust consensus module, with the estimated variability indicated by confidence values for nodes and edges. The approach is subsequently applied to two gene expression data sets.
The first application analyses gene expression data for acute lymphoblastic leukaemia (ALL) and differences between the subgroups with and without an oncogenic BCR/ABL gene fusion. In a second application, gene expression and survival data from diffuse large B-cell lymphomas are examined. The identified modules include and extend already existing gene lists and signatures by further significant genes and their interactions. The most important novelty is that these genes are determined and visualised in the context of their interactions, as a functional module and not as a list of independent and unrelated transcripts. In a third application, the integrative network approach is used to trace changes in tardigrade metabolism and to identify pathways responsible for their extreme resistance to environmental changes and endurance in an inactive tun state. For the first time a metabolic network approach is proposed to detect shifts in metabolic pathways by integrating transcriptome and metabolite data. In conclusion, the presented integrated network approach is an adequate technique to unite high-throughput experimental data on single molecules and their intermolecular dependencies. It is flexible enough to be applied to diverse data, ranging from gene expression changes over metabolite abundances to protein modifications, in combination with a suitable molecular network. The exact algorithm is accurate and robust in comparison to heuristic approaches and delivers an optimal, robust solution in the form of a consensus module with confidence values. By integrating diverse sources of information and simultaneously inspecting a molecular event from different points of view, new and exhaustive insights into biological processes can be acquired.
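The two building blocks of such an integrated network approach, scoring nodes from p-values and searching for a high-scoring connected subnetwork, can be sketched as follows. The scoring function and the greedy search are simplified stand-ins (BioNet fits a beta-uniform mixture model and solves the maximum-weight connected subgraph problem exactly); graph and p-values are invented:

```python
import math

def node_score(p, tau=0.05):
    """Signed node score from a p-value: positive below the threshold
    tau, negative above. A simplified stand-in for BioNet's
    beta-uniform-mixture scoring."""
    return math.log(tau) - math.log(p)

def greedy_module(graph, scores, seed):
    """Greedily grow a connected subnetwork from `seed`, adding the
    best-scoring neighbour while it still has a positive score. A
    heuristic illustration; BioNet computes the optimum exactly."""
    module = {seed}
    total = scores[seed]
    while True:
        frontier = {n for m in module for n in graph[m]} - module
        if not frontier:
            break
        best = max(frontier, key=lambda n: scores[n])
        if scores[best] <= 0:
            break
        module.add(best)
        total += scores[best]
    return module, total

# Hypothetical interaction network with per-gene p-values:
graph = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B"], "D": ["B"]}
pvals = {"A": 0.001, "B": 0.01, "C": 0.4, "D": 0.02}
scores = {g: node_score(p) for g, p in pvals.items()}
module, total = greedy_module(graph, scores, "A")
print(sorted(module))  # ['A', 'B', 'D']
```

The non-significant gene C is excluded even though it is connected, while D is pulled in by its significant neighbours: exactly the module-versus-gene-list distinction emphasized above.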
The human gut is home to thousands of microbes that are important for human life. As most of these cannot be cultivated, metagenomics is an important means to understand this community. To perform comparative metagenomic analysis of the human gut microbiome, I have developed SMASH (Simple metagenomic analysis shell), a computational pipeline. SMASH can also be used to assemble and analyze single genomes, and has been successfully applied to the bacterium Mycoplasma pneumoniae and the fungus Chaetomium thermophilum. In the context of the MetaHIT (Metagenomics of the human intestinal tract) consortium our group participates in, I used SMASH to validate the assembly and to estimate the assembly error rate of 576.7 Gb of metagenome sequence obtained using Illumina Solexa technology from the fecal DNA of 124 European individuals. I also estimated the completeness of the gene catalogue of 3.3 million open reading frames obtained from these metagenomes. Finally, I used SMASH to analyze the human gut metagenomes of 39 individuals from 6 countries encompassing a wide range of host properties such as age, body mass index and disease state. We find that the variation in the gut microbiome is not continuous but stratified into enterotypes. Enterotypes are complex host-microbial symbiotic states that are not explained by host properties, nutritional habits or possible technical biases. The concept of enterotypes might have far-reaching implications, for example to explain different responses to diet or drug intake. We also find several functional markers in the human gut microbiome that correlate with host properties such as body mass index, highlighting the need for functional analysis and raising hopes for the application of microbial markers as diagnostic or even prognostic tools for microbiota-associated human disorders.
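Enterotype analyses of this kind typically start from a distance between per-sample taxon abundance profiles; the Jensen-Shannon divergence is one commonly used choice. The sketch below computes it for invented genus profiles (the clustering step itself, e.g. partitioning around medoids, is omitted, and these are not MetaHIT data):

```python
import math

def jsd(p, q):
    """Jensen-Shannon divergence between two abundance profiles
    (lists of relative abundances over the same taxa)."""
    m = [(a + b) / 2 for a, b in zip(p, q)]

    def kl(x, y):
        # Kullback-Leibler divergence, skipping zero-abundance taxa.
        return sum(a * math.log2(a / b) for a, b in zip(x, y) if a > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical relative abundances (Bacteroides, Prevotella, Ruminococcus):
sample_a = [0.70, 0.10, 0.20]   # Bacteroides-dominated
sample_b = [0.10, 0.70, 0.20]   # Prevotella-dominated
sample_c = [0.65, 0.15, 0.20]   # similar to sample_a
print(jsd(sample_a, sample_b))  # large: different community types
print(jsd(sample_a, sample_c))  # small: same community type
```

Clustering samples on such a distance matrix and finding well-separated groups rather than a continuum is what motivates the enterotype concept described above.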