Staphylococcus aureus (SA) causes nosocomial infections, including life-threatening sepsis by multi-resistant strains (MRSA). It has the ability to form biofilms to protect itself from the host immune system and from anti-staphylococcal drugs. The biofilm and planktonic lifestyles are regulated by a complex quorum sensing (QS) system with agr as a central regulator. To study biofilm formation and QS mechanisms in SA, a Boolean network was built (94 nodes, 184 edges) including two-component systems such as agr, sae and arl. Important proteins such as Sar, Rot and SigB were included as further nodes in the model. System analysis showed that there are only two stable states, biofilm-forming versus planktonic, each with a clearly different subnetwork turned on; validation against gene expression data confirmed this. Network consistency was first tested against previous knowledge and the literature. Furthermore, the predicted node activity of different in silico knock-out strains agreed well with corresponding microarray experiments and data sets. Additional validation included the expression of further nodes (Northern blots) and the biofilm production of different knock-out strains compared in biofilm adherence assays. The model faithfully reproduces the behaviour of QS signalling mutants. The integrated model also allows prediction of various other network mutations and is supported by experimental data from different strains. Furthermore, the well-connected hub proteins elucidate how the QS network integrates different inputs. In silico as well as in vitro experiments showed that the sae locus is also a central modulator of biofilm production: sae knock-out strains formed stronger biofilms, and the wild-type phenotype was rescued by sae complementation. To elucidate how sae influences biofilm formation, the network was used to construct Venn diagrams revealing nodes that are regulated by sae and changed in biofilms.
In these Venn diagrams, nucleases and extracellular proteins emerged as promising nodes, and the network revealed DNase to be of great importance. Therefore, the amount of DNase produced by different SA mutants was measured qualitatively, attempts were made to dissolve biofilms with corresponding amounts of DNase, and the concentrations of nucleic acids, proteins and polysaccharides were measured in the biofilms of different SA mutants.
With its thorough validation, the network model provides a powerful tool to study QS and biofilm formation in SA, including successful predictions of knock-out mutant behaviour, QS signalling and biofilm formation. This includes implications for the behaviour of MRSA strains and mutants. Key regulatory mutation combinations (agr–, sae–, sae–/agr–, sigB+, sigB+/sae–) were tested directly in the model as well as in experiments. High connectivity was a good guide for identifying master regulators, whose detailed behaviour was studied both in vitro and in the model. Together, both lines of evidence support in particular a refined regulatory role for sae and agr, with involvement in biofilm repression and/or SA dissemination. By examining the composition of different mutant biofilms and the reaction cascade that connects sae to the biofilm-forming ability of SA, and by postulating that nucleases might play an important role in it, first steps were taken towards proving and explaining the regulatory links leading from sae to biofilms. Furthermore, differences between the biofilms of different mutant SA strains were found, pointing towards a new understanding of biofilms, including how to better regulate, fight and exploit their different properties.
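The two-attractor behaviour described above can be illustrated with a minimal synchronous Boolean network. The rules below are a hypothetical three-node toy (agr activates sae, and both repress biofilm formation), not the actual 94-node model:

```python
# Toy synchronous Boolean network: iterate the update rules until the state
# repeats, i.e. until a stable state (attractor) is reached.

def step(state, rules):
    """Apply every Boolean update rule once (synchronous update)."""
    return {node: rule(state) for node, rule in rules.items()}

def find_stable_state(state, rules, max_steps=100):
    """Follow the trajectory until a state repeats; return that state."""
    seen = set()
    for _ in range(max_steps):
        key = tuple(sorted(state.items()))
        if key in seen:
            return state
        seen.add(key)
        state = step(state, rules)
    return state

# Hypothetical toy logic, not the published model's rules.
rules = {
    "agr":     lambda s: s["agr"],                       # QS input held fixed
    "sae":     lambda s: s["agr"],                       # sae downstream of agr
    "biofilm": lambda s: not s["agr"] and not s["sae"],  # repressed by both
}

planktonic = find_stable_state({"agr": True, "sae": True, "biofilm": False}, rules)
biofilm    = find_stable_state({"agr": False, "sae": False, "biofilm": False}, rules)
```

Depending on the agr input, the toy network settles into one of two stable states, mirroring the planktonic-versus-biofilm dichotomy of the full model; an in silico knock-out corresponds to clamping a node's rule to a constant.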
Lifetime techniques are applied to diverse fields of study including materials sciences, semiconductor physics, biology, molecular biophysics and photochemistry.
Here we present DDRS4PALS, a software for the acquisition and simulation of lifetime spectra using the DRS4 evaluation board (Paul Scherrer Institute, Switzerland) for time-resolved measurements and the digitization of detector output pulses. Artifact-afflicted pulses can be corrected or rejected prior to the lifetime calculation, enabling the generation of high-quality lifetime spectra, which are crucial for a sound analysis, i.e. the decomposition into the true information. Moreover, the pulses can be streamed to an (external) hard drive during the measurement and subsequently loaded in offline mode without a connection to the hardware. This allows the generation of various lifetime spectra at different configurations from one single measurement and, hence, a meaningful comparison in terms of analyzability and quality. Parallel processing and an integrated JavaScript-based language provide convenient options to accelerate and automate time-consuming processes such as lifetime spectrum simulations.
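As a sketch of what such a simulation involves, a lifetime spectrum can be modelled as a sum of exponential decay components convolved with a Gaussian instrument response function (IRF). This is a generic textbook construction with illustrative parameters, not DDRS4PALS's actual simulation code:

```python
import numpy as np

def lifetime_spectrum(t, components, irf_fwhm_ps, counts=1_000_000):
    """Ideal multi-exponential decay convolved with a Gaussian IRF,
    normalized so the spectrum contains `counts` events in total."""
    sigma = irf_fwhm_ps / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    decay = np.zeros_like(t)
    for tau_ps, intensity in components:       # ideal decay components
        decay += intensity / tau_ps * np.exp(-t / tau_ps)
    irf = np.exp(-0.5 * ((t - t.mean()) / sigma) ** 2)
    spectrum = np.convolve(decay, irf / irf.sum(), mode="same")
    return counts * spectrum / spectrum.sum()

# Illustrative two-component spectrum: 25 ps channels, 230 ps IRF FWHM.
t = np.arange(0.0, 10_000.0, 25.0)
spectrum = lifetime_spectrum(t, [(180.0, 0.85), (400.0, 0.15)], irf_fwhm_ps=230.0)
```

Decomposing such a spectrum back into its lifetime components is the "true information" extraction mentioned above, and its reliability depends directly on the pulse quality entering the histogram.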
Development, Simulation and Evaluation of Mobile Wireless Networks in Industrial Applications
(2023)
Many industrial automation solutions use wireless communication and rely on the availability and quality of the wireless channel. At the same time, the wireless medium is highly congested, and guaranteeing the availability of wireless channels is becoming increasingly difficult. In this work we show that ad-hoc networking solutions can be used to provide new communication channels and improve the performance of mobile automation systems. These ad-hoc networking solutions describe different communication strategies, but avoid relying on network infrastructure by utilizing the Peer-to-Peer (P2P) channel between communicating entities.
This work is a step towards the effective application of low-range communication technologies (e.g. Visible Light Communication (VLC), radar communication, mmWave communication) in industrial settings. Implementing infrastructure networks with these technologies is unrealistic, since their low communication range would necessitate a high number of Access Points (APs) to achieve full coverage. Ad-hoc networks, however, do not require any network infrastructure. In this work, different ad-hoc networking solutions for the industrial use case are presented, and tools and models for their examination are proposed.
The main use case investigated in this work is Automated Guided Vehicles (AGVs) for industrial applications. These mobile devices drive throughout the factory, transporting crates, goods or tools, or assisting workers. In most implementations they must exchange data with a Central Control Unit (CCU) and with one another. Predicting whether a certain communication technology is suitable for an application is very challenging, since the applications and the resulting requirements are very heterogeneous. The proposed models and simulation tools enable the simulation of the complex interaction of mobile robotic clients and a wireless communication network. The goal is to predict the characteristics of a networked AGV fleet.
The proposed tools were used to implement, test and examine different ad-hoc networking solutions for industrial applications using AGVs. These communication solutions handle time-critical as well as delay-tolerant communication. Additionally, a control method for the AGVs is proposed, which optimizes the communication and in turn increases the transport performance of the AGV fleet. This work therefore provides not only tools for further research on industrial ad-hoc systems, but also first implementations of ad-hoc systems which address many of the most pressing issues in industrial applications.
Nowadays, data centers are becoming increasingly dynamic due to the widespread adoption of virtualization technologies. Systems can scale their capacity on demand by growing and shrinking their resources dynamically based on the current load. However, the complexity and performance of modern data centers are influenced not only by the software architecture, middleware, and computing resources, but also by network virtualization, network protocols, network services, and configuration. The field of network virtualization is not as mature as server virtualization, and there are multiple competing approaches and technologies. Performance modeling and prediction techniques provide a powerful tool for analyzing the performance of modern data centers. However, given the wide variety of network virtualization approaches, no common approach exists for modeling and evaluating the performance of virtualized networks.
The performance community has proposed multiple formalisms and models for evaluating the performance of infrastructures based on different network virtualization technologies. The existing performance models can be divided into two main categories: coarse-grained analytical models and highly detailed simulation models. Analytical performance models are normally defined at a high level of abstraction; they abstract away many details of the real network and therefore have limited predictive power. Simulation models, on the other hand, are normally focused on a selected networking technology and take into account many specific performance-influencing factors, resulting in detailed models that are tightly bound to a given technology, infrastructure setup, or protocol stack.
Existing models are also inflexible: they provide a single solution method without giving the user any means to influence the solution accuracy and solution overhead. To gain flexibility in the performance prediction, the user is required to build multiple different performance models to obtain multiple performance predictions, each with a different focus, different performance metrics, prediction accuracy, and solving time.
The goal of this thesis is to develop a modeling approach that does not require the user to have experience in any of the applied performance modeling formalisms. The approach offers flexibility in modeling and analysis by balancing between (a) the generic character and low overhead of coarse-grained analytical models and (b) the higher prediction accuracy of more detailed simulation models.
The contributions of this thesis intersect with several technologies and research areas: software engineering, model-driven software development, domain-specific modeling, performance modeling and prediction, networking and data center networks, network virtualization, Software-Defined Networking (SDN), and Network Function Virtualization (NFV). The main contributions of this thesis constitute the Descartes Network Infrastructure (DNI) approach and include:
• Novel modeling abstractions for virtualized network infrastructures. This includes two meta-models that define modeling languages for modeling data center network performance. The DNI and miniDNI meta-models provide means for representing network infrastructures at two different abstraction levels. Regardless of which variant of the DNI meta-model is used, the modeling language provides generic modeling elements that allow describing the majority of existing and future network technologies, while at the same time abstracting factors that have little influence on the overall performance. I focus on SDN and NFV as examples of modern virtualization technologies.
• A network deployment meta-model: an interface between DNI and other meta-models that allows defining mappings between DNI and other descriptive models. The integration with other domain-specific models allows capturing behaviors that are not reflected in the DNI model, for example, software bottlenecks, server virtualization, and middleware overheads.
• Flexible model solving with model transformations. The transformations enable solving a DNI model by transforming it into a predictive model. The model transformations vary in size and complexity depending on the amount of data abstracted in the transformation process and provided to the solver. In this thesis, I contribute six transformations that transform DNI models into various predictive models based on the following modeling formalisms: (a) OMNeT++ simulation, (b) Queueing Petri Nets (QPNs), and (c) Layered Queueing Networks (LQNs). For each of these formalisms, multiple predictive models are generated (e.g., models with different levels of detail): two for OMNeT++, two for QPNs, and two for LQNs. Some predictive models can be solved using multiple alternative solvers, resulting in up to ten different automated solving methods for a single DNI model.
• A model extraction method that supports the modeler in the modeling process by automatically prefilling the DNI model with the network traffic data. The contributed traffic profile abstraction and optimization method provides a trade-off by balancing between the size and the level of detail of the extracted profiles.
• A method for selecting feasible solving methods for a DNI model. The method proposes a set of solvers based on trade-off analysis characterizing each transformation with respect to various parameters such as its specific limitations, expected prediction accuracy, expected run-time, required resources in terms of CPU and memory consumption, and scalability.
• An evaluation of the approach in the context of two realistic systems. I evaluate the approach with a focus on factors such as the prediction of network capacity and interface throughput, applicability, and the flexibility of trading off prediction accuracy against solving time. Although the approach does not aim to maximize prediction accuracy, I demonstrate that in the majority of cases the prediction error is low: up to 20% for uncalibrated models and up to 10% for calibrated models, depending on the solving technique.
In summary, this thesis presents the first approach to flexible run-time performance prediction in data center networks, including networks based on SDN. It provides the ability to flexibly balance between performance prediction accuracy and solving overhead. The approach offers the following key benefits:
• It is possible to predict the impact of changes in the data center network on performance. Such changes include changes in network topology, hardware configuration, traffic load, and application deployment.
• DNI can successfully model and predict the performance of multiple different network infrastructures, including proactive SDN scenarios.
• The prediction process is flexible, that is, it balances the granularity of the predictive models against the solving time. Decreased prediction accuracy is usually rewarded with savings in solving time and in the resources required for solving.
• Users can conduct performance analyses using multiple different prediction methods without requiring expertise and experience in each of the modeling formalisms.
The components of the DNI approach can also be applied to scenarios that are not considered in this thesis. The approach is generalizable, for example: (a) networks outside of data centers may be analyzed with DNI as long as the background traffic profile is known; (b) uncalibrated DNI models may serve as a basis for design-time performance analysis; (c) the method for extracting and compacting traffic profiles may be used for other, non-network workloads as well.
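To make the accuracy/overhead trade-off concrete, the coarse-grained analytical end of the spectrum can be as simple as approximating a single network link as an M/M/1 queue. This is a generic queueing-theory illustration, not the DNI meta-model or one of its transformations:

```python
def mm1_link(arrival_rate_pps, service_rate_pps):
    """Closed-form M/M/1 prediction for one network link:
    utilization rho and mean sojourn time W = 1 / (mu - lambda)."""
    rho = arrival_rate_pps / service_rate_pps
    if rho >= 1.0:
        raise ValueError("link is unstable (utilization >= 1)")
    mean_latency_s = 1.0 / (service_rate_pps - arrival_rate_pps)
    return rho, mean_latency_s

# A link serving 10,000 packets/s that carries 8,000 packets/s:
rho, latency = mm1_link(arrival_rate_pps=8_000.0, service_rate_pps=10_000.0)
# rho = 0.8, mean per-packet latency = 0.0005 s (0.5 ms)
```

Such a formula is solved instantly but ignores protocol behaviour and traffic burstiness; a packet-level simulation of the same link captures those effects at the cost of far more solving time, which is exactly the trade-off the transformations expose to the user.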
How genomic and ecological traits shape island biodiversity - insights from individual-based models
(2020)
Life on oceanic islands provides a playground and a comparatively easy-to-study basis
for the understanding of biodiversity in general. Island biota feature many
fascinating patterns: endemic species, species radiations and species with
peculiar trait syndromes. However, classic and current island biogeography
theory does not yet consider all the factors necessary to explain many of these
patterns. In response to this, there is currently a shift in island biogeography
research to systematically consider species traits and thus gain a more
functional perspective. Despite this recent development, a set of species
characteristics remains largely ignored in island biogeography, namely genomic
traits. Evidence suggests that genomic factors could explain many of the
speciation and adaptation patterns found in nature and thus may be highly
informative to explain the fascinating and iconic phenomena known for oceanic
islands, including species radiations and susceptibility to biotic invasions.
Unfortunately, the current lack of comprehensive, meaningful data makes studying
these factors challenging. Even with paleontological data and space-for-time
rationales, data are bound to be incomplete due to the very environmental
processes taking place on oceanic islands, such as landslides and volcanism,
and lack causal information due to the focus on correlative approaches. As a
promising alternative, integrative mechanistic models can explicitly consider
essential underlying eco-evolutionary mechanisms. In fact, these models have
been shown to be applicable to a variety of different systems and study questions.
In this thesis, I therefore examined present mechanistic island models to
identify how they might be used to address some of the current open questions in
island biodiversity research. Since none of the models simultaneously considered
speciation and adaptation at a genomic level, I developed a new genome- and
niche-explicit, individual-based model. I used this model to address three
different phenomena of island biodiversity: environmental variation, insular
species radiations and species invasions.
Using only a single model I could show that small-bodied species with flexible
genomes are successful under environmental variation, that a complex combination
of dispersal abilities, reproductive strategies and genomic traits affect the
occurrence of species radiations and that invasions are primarily driven by the
intensity of introductions and the trait characteristics of invasive
species. This highlights how the consideration of functional traits can promote
the understanding of some of the understudied phenomena in island biodiversity.
The results presented in this thesis exemplify the generality of integrative
models which are built on first principles. Thus, by applying such models to
various complex study questions, they are able to unveil multiple biodiversity
dynamics and patterns. The combination of several models, such as the one I
developed, into an eco-evolutionary model ensemble could further help to
identify fundamental eco-evolutionary principles. I conclude the thesis with an
outlook on how to use and extend my model to investigate geomorphological
dynamics in archipelagos and to allow dynamic genomes, which would further
increase the model's generality.
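The following toy loop sketches the individual-based principle such models build on: individuals carry a heritable niche trait, reproduce with mutation, and survive only if their trait matches the island's environmental optimum. All parameters are invented for illustration, and the sketch is far simpler than the genome- and niche-explicit model developed in the thesis:

```python
import random

def simulate(generations=100, optimum=0.8, tolerance=0.25, capacity=50, seed=1):
    """Minimal individual-based model: selection on a single niche trait."""
    rng = random.Random(seed)
    population = [rng.random() for _ in range(capacity)]  # founder traits in [0, 1]
    for _ in range(generations):
        # Each individual produces two offspring with small trait mutations.
        offspring = [min(1.0, max(0.0, t + rng.gauss(0.0, 0.02)))
                     for t in population for _ in range(2)]
        # Viability selection: survive only near the environmental optimum.
        survivors = [t for t in offspring if abs(t - optimum) < tolerance]
        if not survivors:
            return None  # extinction
        population = rng.sample(survivors, min(capacity, len(survivors)))
    return sum(population) / len(population)

mean_trait = simulate()
```

After repeated selection, the surviving population's mean trait lies within the tolerance band around the optimum; extending the individuals with explicit genomes, dispersal and speciation rules leads toward models of the kind developed here.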
Neurobiology is widely supported by bioinformatics. Due to the large amount of data generated on the biological side, a computational approach is required. This thesis presents four different cases of bioinformatic tools applied in the service of neurobiology.
The first two tools presented belong to the field of image processing. In the first case, we make use of an algorithm based on the wavelet transformation to assess calcium activity events in cultured neurons. We designed an open-source tool to assist neurobiology researchers in the analysis of calcium imaging videos. Such analysis is usually done manually, which is time-consuming and highly subjective. Our tool speeds up the work and offers the possibility of an unbiased detection of calcium events. Even more importantly, our algorithm detects not only neuron spiking activity but also local spontaneous activity, which is normally discarded as irrelevant. We showed that this activity is a determinant of calcium dynamics in neurons and is involved in important functions like signal modulation, memory and learning.
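As a generic SciPy sketch of the underlying idea (not the thesis' actual algorithm), wavelet-based peak detection can locate calcium transients in a noisy fluorescence trace:

```python
import numpy as np
from scipy.signal import find_peaks_cwt

# Synthetic calcium trace: fast-rise, slow-decay transients on a noisy baseline.
rng = np.random.default_rng(0)
t = np.arange(2000)
trace = 0.05 * rng.standard_normal(t.size)      # baseline noise
for onset in (300, 900, 1500):                  # three calcium events
    tail = t >= onset
    trace[tail] += np.exp(-(t[tail] - onset) / 80.0)

# Continuous-wavelet-transform peak detection over a range of event widths.
events = find_peaks_cwt(trace, widths=np.arange(20, 80))
```

Because the wavelet transform matches events across multiple scales, it is more robust to noise and baseline drift than simple amplitude thresholding, which is what makes this family of methods attractive for unbiased event detection.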
The second project is a segmentation task. In our case, we are interested in segmenting the neuron nuclei in electron microscopy images of C. elegans. Marking these structures is necessary in order to reconstruct the connectome of the organism. C. elegans is a great study case due to the simplicity of its nervous system (only 302 neurons). This worm, despite its simplicity, has taught us a lot about neuronal mechanisms, and there is still much information we can extract from it; therein lies the importance of reconstructing its connectome. A version of the C. elegans connectome exists, but it was produced by hand and from a single subject, which leaves considerable room for error. By automating the segmentation of the electron microscopy images, we guarantee an unbiased approach and will be able to verify the connectome on several subjects.
For the third project, we moved from image processing to biological modeling. Because of the high complexity of even small biological systems, it is necessary to analyze them with the help of computational tools; the term in silico was coined to refer to such computational models of biological systems. We designed an in silico model of the TNF (tumor necrosis factor) ligand and its two principal receptors. This biological system is highly relevant because it is involved in the inflammation process. Inflammation is of utmost importance as a protection mechanism, but it can also lead to serious diseases (e.g. cancer), and chronic inflammation processes can be particularly dangerous in the brain. In order to better understand the dynamics that govern the TNF system, we created a model using the BioNetGen language, a rule-based language that allows one to simulate systems where multiple agents are governed by a single rule. Using our model, we characterized the TNF system and formulated hypotheses about the relation of the ligand with each of the two receptors. These hypotheses can later be used to define drug targets in the system or possible treatments for chronic inflammation or a lacking inflammatory response.
The final project deals with the protein folding problem. In our organism, proteins are folded all the time, because only in their folded conformation are proteins capable of doing their job (with very few exceptions). This folding process presents a great challenge for science, because in its general form it has been shown to be an NP-hard problem, i.e. a problem that, as far as we know, cannot be solved efficiently. Nevertheless, the body somehow manages to fold a protein in just milliseconds. This phenomenon puzzles not only biologists but also mathematicians. In mathematics, such problems have been studied for a long time, and it is known that an efficient solution to one NP-complete problem would yield efficient solutions to all problems in NP. If we manage to understand how nature solves the protein folding problem, we might be able to apply this solution to many other problems. Our research intends to contribute to this discussion, not, however, by explaining how nature solves the protein folding problem, but by arguing that it does not solve the full problem at all. This seems contradictory, since the body folds proteins all the time, but our hypothesis is that organisms have learned to solve a simplified version of the problem. Nature does not solve protein folding in its full complexity; it simply solves a small instance of the problem, one as simple as a convex optimization problem. We formulate the protein folding problem as an optimization problem to support this claim and present some toy examples to illustrate the formulation. If our hypothesis is true, protein folding is a simple problem, and we then need to understand and model the conditions in the vicinity inside the cell at the moment the folding process occurs.
Once we understand this starting conformation and its influence on the folding process, we will be able to design treatments for amyloid diseases such as Alzheimer's and Parkinson's.
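The convex-optimization framing can be made concrete with a toy chain: a quadratic energy pulls each pair of consecutive "residues" toward a preferred bond vector, and the minimum-energy conformation is found numerically. This is a deliberately simplified illustration of the claim, not the thesis' actual formulation:

```python
import numpy as np
from scipy.optimize import minimize

# Preferred bond vectors between consecutive residues (a right-angled zigzag).
d = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])

def energy(flat):
    """Convex quadratic energy: squared deviation of each bond from its target."""
    x = flat.reshape(-1, 2)
    return np.sum((np.diff(x, axis=0) - d) ** 2)

result = minimize(energy, np.zeros(8))  # start from a collapsed chain
chain = result.x.reshape(-1, 2)
chain -= chain[0]                       # anchor the first residue at the origin
```

Because the energy is convex, any local search finds the global minimum, and the folded chain's bonds reproduce the target vectors. Real force fields are highly non-convex, which is precisely why the hypothesis that the cell effectively restricts folding to an easy instance matters.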
In summary, this thesis contributes to the neurobiology research field on four different fronts. Two are practical contributions with immediate benefits: the calcium imaging video analysis tool and the TNF in silico model. The neuron nuclei segmentation is a contribution for the near future, a step towards the full annotation of the C. elegans connectome and, later, the reconstruction of the connectomes of other species. Finally, the protein folding project is a first impulse to change the way we conceive of the protein folding process in nature. We try to point future research in a novel direction, where the most relevant characteristic of the process is not the amino acid code but the conditions within the cell.
In this PhD thesis, we develop models for the numerical simulation of epitaxial crystal growth, as realized, e.g., in molecular beam epitaxy (MBE). The basic idea is to use a discrete lattice gas representation of the crystal structure, and to apply kinetic Monte Carlo (KMC) simulations for the description of the growth dynamics. The main advantage of the KMC approach is the possibility to account for atomistic details and at the same time cover MBE relevant time scales in the simulation. In chapter 1, we describe the principles of MBE, pointing out relevant physical processes and the influence of experimental control parameters. We discuss various methods used in the theoretical description of epitaxial growth. Subsequently, the underlying concepts of the KMC method and the lattice gas approach are presented. Important aspects concerning the design of a lattice gas model are considered, e.g. the solid-on-solid approximation or the choice of an appropriate lattice topology. A key element of any KMC simulation is the selection of allowed events and the evaluation of Arrhenius rates for thermally activated processes. We discuss simplifying schemes that are used to approximate the corresponding energy barriers if detailed knowledge about the barriers is not available. Finally, the efficient implementation of the MC kinetics using a rejection-free algorithm is described. In chapter 2, we present a solid-on-solid lattice gas model which aims at the description of II-VI(001) semiconductor surfaces like CdTe(001). The model accounts for the zincblende structure and the relevant surface reconstructions of Cd- and Te-terminated surfaces. Particles at the surface interact via anisotropic nearest and next nearest neighbor interactions, whereas interactions in the bulk are isotropic. The anisotropic surface interactions reflect known properties of CdTe(001) like the small energy difference between the c(2x2) and (2x1) vacancy structures of Cd-terminated surfaces. 
A key element of the model is the presence of additional Te atoms in a weakly bound Te* state, which is motivated by experimental observations of Te coverages exceeding one monolayer at low temperatures and high Te fluxes. The true mechanism of binding excess Te to the surface is still unclear. Here, we use a mean-field approach assuming a Te* reservoir with limited occupation. In chapter 3, we perform KMC simulations of atomic layer epitaxy (ALE) of CdTe(001). We study the self-regulation of the ALE growth rate and demonstrate how the interplay of the Te* reservoir occupation with the surface kinetics results in two different regimes: at high temperatures the growth rate is limited to one half layer of CdTe per ALE cycle, whereas at low enough temperatures each cycle adds a complete layer. The temperature where the transition between the two regimes occurs depends mainly on the particle fluxes. The temperature dependence of the growth rate and the flux dependence of the transition temperature are in good qualitative agreement with experimental results. Comparing the macroscopic activation energy for Te* desorption in our model with experimental values we find semiquantitative agreement. In chapter 4, we study the formation of nanostructures with alternating stripes during submonolayer heteroepitaxy of two different adsorbate species on a given substrate. We evaluate the influence of two mechanisms: kinetic segregation due to chemically induced diffusion barriers, and strain relaxation by alternating arrangement of the adsorbate species. KMC simulations of a simple cubic lattice gas with weak inter-species binding energy show that kinetic effects are sufficient to account for stripe formation during growth. The dependence of the stripe width on control parameters is investigated. We find an Arrhenius temperature dependence, in agreement with experimental investigations of phase separation in binary or ternary material systems. 
Canonical MC simulations show that the observed stripes are not stable under equilibrium conditions: the adsorbate species separate into very large domains. Off-lattice simulations which account for the lattice misfit of the involved particle species show that, under equilibrium conditions, the competition between binding and strain energy results in regular stripe patterns with a well-defined width depending on both misfit and binding energies. In KMC simulations, the stripe-formation and the experimentally reported ramification of adsorbate islands are reproduced. To clarify the origin of the island ramification, we investigate an enhanced lattice gas model whose parameters are fitted to match characteristic off-lattice diffusion barriers. The simulation results show that a satisfactory explanation of experimental observations within the lattice gas framework requires a detailed incorporation of long-range elastic interactions. In the appendix we discuss supplementary topics related to the lattice gas simulations in chapter 4.
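The rejection-free KMC step described in chapter 1 can be sketched as follows: each allowed event gets an Arrhenius rate, one event is drawn with probability proportional to its rate, and simulated time advances by an exponentially distributed waiting time. The barrier values and attempt frequency below are illustrative numbers, not the model's fitted parameters:

```python
import math
import random

KB_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius(barrier_ev, temperature_k, attempt_freq_hz=1e13):
    """Rate of a thermally activated process."""
    return attempt_freq_hz * math.exp(-barrier_ev / (KB_EV * temperature_k))

def kmc_step(events, temperature_k, rng):
    """Rejection-free step: pick one event proportional to its rate
    and return it together with the elapsed (exponential) waiting time."""
    rates = [arrhenius(barrier, temperature_k) for _, barrier in events]
    total = sum(rates)
    threshold = rng.random() * total
    cumulative = 0.0
    for (name, _), rate in zip(events, rates):
        cumulative += rate
        if threshold < cumulative:
            break
    dt = -math.log(rng.random()) / total  # exponential waiting time
    return name, dt

# Illustrative event list: (process, energy barrier in eV).
events = [("hop", 0.6), ("desorb", 1.4), ("attach", 0.3)]
name, dt = kmc_step(events, temperature_k=500.0, rng=random.Random(42))
```

Every iteration performs exactly one event, so no computation is wasted on rejected trial moves; the cost is maintaining the current list of event rates, which is what a rejection-free implementation of this type must handle efficiently.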
This thesis analyzes the 2001-2006 labor market reforms in Germany. The aim of this work is twofold. First, an overview of the most important reform measures and the intended effects is given. Second, two specific and very fundamental amendments, namely the merging of unemployment assistance and social benefits, as well as changes in the duration of unemployment insurance benefits, are analyzed in detail to evaluate their effects on individuals and the entire economy. Using a matching model with optimal search intensity and Semi-Markov methods, the effects of these two amendments on the duration of unemployment, optimal search intensity and unemployment are analyzed.
The present dissertation investigates the management of RFID implementations in retail trade. Our work contributes to this field by investigating important aspects that have so far received little attention in the scientific literature. We therefore perform three studies on three important aspects of managing RFID implementations. In our first study, we evaluate customer acceptance of pervasive retail systems using privacy calculus theory. The results of this study reveal the most important aspects a retailer has to consider when implementing pervasive retail systems. In our second study, we analyze RFID-enabled robotic inventory taking with the help of a simulation model. The results show that retailers should implement robotic inventory taking if the accuracy rates of the robots are as high as the robots' manufacturers claim. In our third and last study, we evaluate the potential of RFID data for supporting managerial decision making. We propose three novel methods for extracting useful information from RFID data and propose a generic information extraction process. Our work is geared towards practitioners who want to improve their RFID-enabled processes and towards scientists conducting RFID-based research.
This paper presents a measurement of the polarisation of tau leptons produced in Z/γ* → ττ decays, performed with a dataset of proton-proton collisions at √s = 8 TeV corresponding to an integrated luminosity of 20.2 fb⁻¹, recorded with the ATLAS detector at the LHC in 2012. The Z/γ* → ττ decays are reconstructed from a hadronically decaying tau lepton with a single charged particle in the final state, accompanied by a tau lepton that decays leptonically. The tau polarisation is inferred from the relative fraction of energy carried by charged and neutral hadrons in the hadronic tau decays. The polarisation is measured in a fiducial region that corresponds to the kinematic region accessible to this analysis. The tau polarisation extracted over the full phase space within the Z/γ* mass range of 66 GeV < m(Z/γ*) < 116 GeV is found to be P_τ = −0.14 ± 0.02 (stat) ± 0.04 (syst). It is in agreement with the Standard Model prediction of P_τ = −0.1517 ± 0.0019, obtained from the ALPGEN event generator interfaced with the PYTHIA 6 parton shower modelling and the TAUOLA tau decay library.