The present dissertation investigates the management of RFID implementations in retail trade. Our work contributes to this field by investigating important aspects that have so far received little attention in the scientific literature. We therefore perform three studies on three important aspects of managing RFID implementations. In our first study, we evaluate customer acceptance of pervasive retail systems using privacy calculus theory. The results reveal the most important aspects a retailer has to consider when implementing pervasive retail systems. In our second study, we analyze RFID-enabled robotic inventory taking with the help of a simulation model. The results show that retailers should implement robotic inventory taking if the accuracy rates of the robots are as high as their manufacturers claim. In our third and last study, we evaluate the potential of RFID data for supporting managerial decision making. We propose three novel methods to extract useful information from RFID data and propose a generic information extraction process. Our work is geared towards practitioners who want to improve their RFID-enabled processes and towards scientists conducting RFID-based research.
Neurobiology is widely supported by bioinformatics. Due to the large amount of data generated on the biological side, a computational approach is required. This thesis presents four cases of bioinformatic tools applied in the service of neurobiology.
The first two tools presented belong to the field of image processing. In the first case, we make use of an algorithm based on the wavelet transformation to assess calcium activity events in cultured neurons. We designed an open-source tool to assist neurobiology researchers in the analysis of calcium imaging videos. Such analysis is usually done manually, which is time-consuming and highly subjective. Our tool speeds up the work and offers the possibility of unbiased detection of the calcium events. Even more importantly, our algorithm detects not only neuron spiking activity but also local spontaneous activity, which is normally discarded as irrelevant. We showed that this activity is a determinant of the calcium dynamics in neurons and is involved in important functions like signal modulation, memory, and learning.
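A wavelet-based event detector of this kind can be sketched as follows. This is an illustrative reconstruction, not the thesis tool: the wavelet scale, the prominence threshold, and the synthetic trace are all assumed values.

```python
import numpy as np
from scipy.signal import find_peaks

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet, a classic choice for transient detection."""
    x = np.linspace(-points / 2, points / 2, points)
    return (1 - (x / a) ** 2) * np.exp(-x ** 2 / (2 * a ** 2))

def detect_calcium_events(trace, scale=20, prominence=1.0):
    # Convolving with the zero-mean wavelet suppresses the slow baseline
    # and emphasises transients near the chosen scale.
    response = np.convolve(trace, ricker(10 * scale, scale), mode="same")
    peaks, _ = find_peaks(response, prominence=prominence)
    return peaks

# Synthetic trace: slow baseline plus two exponential calcium transients
t = np.arange(1000)
trace = 0.1 * np.sin(2 * np.pi * t / 1000)
for onset in (300, 700):
    trace[onset:] += np.exp(-np.arange(1000 - onset) / 50.0)

events = detect_calcium_events(trace)
```

The baseline-suppression property is what makes such a detector attractive for the local spontaneous activity mentioned above: slow drifts are nearly invisible to a zero-mean wavelet, while transients at the matched scale stand out.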
The second project is a segmentation task. In our case we are interested in segmenting the neuron nuclei in electron microscopy images of C. elegans. Marking these structures is necessary in order to reconstruct the connectome of the organism. C. elegans is a great study case due to the simplicity of its nervous system (only 302 neurons). This worm, despite its simplicity, has taught us a lot about neuronal mechanisms. There is still a lot of information we can extract from C. elegans, and therein lies the importance of reconstructing its connectome. There is a current version of the C. elegans connectome, but it was done by hand and on a single subject, which leaves considerable room for error. By automating the segmentation of the electron microscopy images we guarantee an unbiased approach, and we will be able to verify the connectome on several subjects.
For the third project we moved from image processing applications to biological modeling. Because of the high complexity of even small biological systems it is necessary to analyze them with the help of computational tools. The term in silico was coined to refer to such computational models of biological systems. We designed an in silico model of the TNF (Tumor necrosis factor) ligand and its two principal receptors. This biological system is of high relevance because it is involved in the inflammation process. Inflammation is of most importance as protection mechanism but it can also lead to complicated diseases (e.g. cancer). Chronic inflammation processes can be particularly dangerous in the brain. In order to better understand the dynamics that govern the TNF system we created a model using the BioNetGen language. This is a rule based language that allows one to simulate systems where multiple agents are governed by a single rule. Using our model we characterized the TNF system and hypothesized about the relation of the ligand with each of the two receptors. Our hypotheses can be later used to define drug targets in the system or possible treatments for chronic inflammation or lack of the inflammatory response.
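The thesis model is rule-based (BioNetGen); as a minimal mass-action analogue of one ligand binding two receptors, the dynamics can be sketched as ordinary differential equations. The rate constants and initial concentrations below are arbitrary illustrations, not the parameters of the thesis model.

```python
import numpy as np
from scipy.integrate import odeint

# Hypothetical mass-action sketch: ligand L binds receptors R1 and R2,
# forming complexes C1 and C2 (all rates are assumed, illustrative values).
k_on1, k_off1 = 1.0, 0.1   # L + R1 <-> C1 (higher affinity)
k_on2, k_off2 = 0.5, 0.2   # L + R2 <-> C2 (lower affinity)

def rhs(y, t):
    L, R1, R2, C1, C2 = y
    v1 = k_on1 * L * R1 - k_off1 * C1   # net binding flux to receptor 1
    v2 = k_on2 * L * R2 - k_off2 * C2   # net binding flux to receptor 2
    return [-v1 - v2, -v1, -v2, v1, v2]

y0 = [1.0, 1.0, 1.0, 0.0, 0.0]          # normalised initial concentrations
t = np.linspace(0, 50, 500)
sol = odeint(rhs, y0, t)
```

Mass conservation (ligand plus its complexes, each receptor plus its complex) is preserved by construction, which is a quick sanity check for any such model; with these assumed affinities, the higher-affinity complex dominates at equilibrium.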
The final project deals with the protein folding problem. In our bodies, proteins are folded all the time, because only in their folded conformation are proteins capable of doing their job (with very few exceptions). This folding process presents a great challenge for science because, in its general form, it has been shown to be NP-complete. NP-complete problems are the hardest problems in the class NP (nondeterministic polynomial time), and no efficient algorithm for solving them is known. Nevertheless, the body somehow folds a protein in just milliseconds. This phenomenon puzzles not only biologists but also mathematicians. In mathematics, NP-complete problems have been studied for a long time, and it is known that an efficient solution to any one of them would yield efficient solutions to all problems in NP. If we manage to understand how nature solves the protein folding problem, then we might be able to apply this solution to many other problems. Our research intends to contribute to this discussion: unfortunately, not by explaining how nature solves the protein folding problem, but by arguing that it does not solve the general problem at all. This seems contradictory, since I just mentioned that the body folds proteins all the time, but our hypothesis is that organisms have learned to solve a simplified version of the problem. Nature does not solve the protein folding problem in its full complexity; it simply solves a small instance of the problem, an instance that is as simple as a convex optimization problem. We formulate protein folding as an optimization problem to support our claim and present some toy examples to illustrate the formulation. If our hypothesis is true, it means that protein folding is a simple problem, and we just need to understand and model the conditions in the vicinity inside the cell at the moment the folding process occurs.
Once we understand this starting conformation and its influence on the folding process, we will be able to design treatments for amyloid diseases such as Alzheimer's and Parkinson's.
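The optimization view can be illustrated with a deliberately small toy example, not the thesis formulation: a short bead chain whose energy penalizes stretched bonds and violated target contacts, minimized numerically. The chain length, contact list, and penalty form are all assumptions for illustration; the contact terms make the problem only approximately convex, but for a chain this short a local optimizer handles it easily.

```python
import numpy as np
from scipy.optimize import minimize

n = 6                        # beads in the toy chain (hypothetical)
contacts = [(0, 5), (1, 4)]  # assumed "native" contacts, target distance 1.0

def energy(flat):
    x = flat.reshape(n, 2)
    # harmonic bonds between consecutive beads (target length 1.0)
    e = sum((np.linalg.norm(x[i + 1] - x[i]) - 1.0) ** 2 for i in range(n - 1))
    # harmonic penalties pulling the native contacts to distance 1.0
    e += sum((np.linalg.norm(x[i] - x[j]) - 1.0) ** 2 for i, j in contacts)
    return e

rng = np.random.default_rng(0)
x0 = rng.normal(size=2 * n)   # random "unfolded" starting conformation
res = minimize(energy, x0, method="BFGS")
```

The point of the toy example mirrors the hypothesis above: once the starting conformation and the relevant contacts are fixed, "folding" reduces to descending a well-behaved energy landscape rather than searching an exponential conformation space.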
In summary, this thesis contributes to the field of neurobiology research on four different fronts. Two are practical contributions with immediate benefits: the calcium imaging video analysis tool and the TNF in silico model. The neuron nuclei segmentation is a contribution for the near future, a step towards the full annotation of the C. elegans connectome and, later, towards the reconstruction of the connectomes of other species. Finally, the protein folding project is a first impulse to change the way we conceive of the protein folding process in nature. We try to point future research in a novel direction, where the most relevant characteristic of the process is not the amino acid code but the conditions within the cell.
Nowadays, data centers are becoming increasingly dynamic due to the common adoption of virtualization technologies. Systems can scale their capacity on demand by growing and shrinking their resources dynamically based on the current load. However, the complexity and performance of modern data centers are influenced not only by the software architecture, middleware, and computing resources, but also by network virtualization, network protocols, network services, and configuration. The field of network virtualization is not as mature as server virtualization, and there are multiple competing approaches and technologies. Performance modeling and prediction techniques provide a powerful tool to analyze the performance of modern data centers. However, given the wide variety of network virtualization approaches, no common approach exists for modeling and evaluating the performance of virtualized networks.
The performance community has proposed multiple formalisms and models for evaluating the performance of infrastructures based on different network virtualization technologies. The existing performance models can be divided into two main categories: coarse-grained analytical models and highly detailed simulation models. Analytical performance models are normally defined at a high level of abstraction; they abstract away many details of the real network and therefore have limited predictive power. Simulation models, on the other hand, are normally focused on a selected networking technology and take into account many specific performance-influencing factors, resulting in detailed models that are tightly bound to a given technology, infrastructure setup, or protocol stack.
Existing models are inflexible; that is, they provide a single solution method without giving the user any means to influence the solution accuracy and solution overhead. To achieve flexibility in performance prediction, the user is required to build multiple different performance models to obtain multiple performance predictions. Each prediction may then have a different focus, different performance metrics, prediction accuracy, and solving time.
The goal of this thesis is to develop a modeling approach that does not require the user to have experience in any of the applied performance modeling formalisms. The approach offers flexibility in modeling and analysis by balancing between (a) the generic character and low overhead of coarse-grained analytical models, and (b) the higher prediction accuracy of more detailed simulation models.
The contributions of this thesis intersect with technologies and research areas such as software engineering, model-driven software development, domain-specific modeling, performance modeling and prediction, networking and data center networks, network virtualization, Software-Defined Networking (SDN), and Network Function Virtualization (NFV). The main contributions of this thesis compose the Descartes Network Infrastructure (DNI) approach and include:
• Novel modeling abstractions for virtualized network infrastructures. This includes two meta-models that define modeling languages for modeling data center network performance. The DNI and miniDNI meta-models provide means for representing network infrastructures at two different abstraction levels. Regardless of which variant of the DNI meta-model is used, the modeling language provides generic modeling elements that allow describing the majority of existing and future network technologies, while at the same time abstracting factors that have little influence on the overall performance. I focus on SDN and NFV as examples of modern virtualization technologies.
• Network deployment meta-model: an interface between DNI and other meta-models that allows defining mappings between DNI and other descriptive models. The integration with other domain-specific models allows capturing behaviors that are not reflected in the DNI model, for example software bottlenecks, server virtualization, and middleware overheads.
• Flexible model solving with model transformations. The transformations enable solving a DNI model by transforming it into a predictive model. The model transformations vary in size and complexity depending on the amount of data abstracted in the transformation process and provided to the solver. In this thesis, I contribute six transformations that transform DNI models into various predictive models based on the following modeling formalisms: (a) OMNeT++ simulation, (b) Queueing Petri Nets (QPNs), and (c) Layered Queueing Networks (LQNs). For each of these formalisms, multiple predictive models are generated (e.g., models with different levels of detail): two for OMNeT++, two for QPNs, and two for LQNs. Some predictive models can be solved using multiple alternative solvers, resulting in up to ten different automated solving methods for a single DNI model.
• A model extraction method that supports the modeler in the modeling process by automatically prefilling the DNI model with the network traffic data. The contributed traffic profile abstraction and optimization method provides a trade-off by balancing between the size and the level of detail of the extracted profiles.
• A method for selecting feasible solving methods for a DNI model. The method proposes a set of solvers based on trade-off analysis characterizing each transformation with respect to various parameters such as its specific limitations, expected prediction accuracy, expected run-time, required resources in terms of CPU and memory consumption, and scalability.
• An evaluation of the approach in the context of two realistic systems. I evaluate the approach with a focus on factors such as prediction of network capacity and interface throughput, applicability, and flexibility in trading off prediction accuracy against solving time. Although the approach does not focus on maximizing prediction accuracy, I demonstrate that in the majority of cases the prediction error is low: up to 20% for uncalibrated models and up to 10% for calibrated models, depending on the solving technique.
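A trade-off-based solver selection step like the one described in the contributions above can be sketched as a simple scoring function. The solver names, metrics, and thresholds below are illustrative assumptions, not the DNI implementation.

```python
# Hypothetical catalogue of solving methods with rough trade-off metrics;
# all names and numbers are illustrative, not taken from the thesis.
solvers = [
    {"name": "OMNeT++ (detailed)", "accuracy": 0.95, "runtime_s": 3600, "scalable": False},
    {"name": "QPN (SimQPN)",       "accuracy": 0.90, "runtime_s": 600,  "scalable": True},
    {"name": "LQN (LQNS)",         "accuracy": 0.85, "runtime_s": 30,   "scalable": True},
]

def feasible(s, max_runtime_s, min_accuracy, need_scalability=False):
    return (s["runtime_s"] <= max_runtime_s and
            s["accuracy"] >= min_accuracy and
            (s["scalable"] or not need_scalability))

def select(max_runtime_s, min_accuracy, need_scalability=False):
    # among the feasible solvers, pick the most accurate one
    ok = [s for s in solvers
          if feasible(s, max_runtime_s, min_accuracy, need_scalability)]
    return max(ok, key=lambda s: s["accuracy"])["name"] if ok else None

print(select(900, 0.8))   # picks "QPN (SimQPN)": most accurate within the budget
```

The design point such a step illustrates is exactly the flexibility argued for above: the same model can be routed to a fast, coarse solver or a slow, accurate one depending on the user's constraints.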
In summary, this thesis presents the first approach to flexible run-time performance prediction in data center networks, including networks based on SDN. It provides the ability to flexibly balance performance prediction accuracy against solving overhead. The approach provides the following key benefits:
• It is possible to predict the impact of changes in the data center network on performance. The changes include changes in network topology, hardware configuration, traffic load, and application deployment.
• DNI can successfully model and predict the performance of multiple different network infrastructures, including proactive SDN scenarios.
• The prediction process is flexible; that is, it balances the granularity of the predictive models against the solving time. Decreased prediction accuracy is usually rewarded with savings in solving time and in the resources required for solving.
• Users can conduct performance analyses using multiple different prediction methods without requiring expertise and experience in each of the modeling formalisms.
The components of the DNI approach can also be applied to scenarios not considered in this thesis. The approach is generalizable, as in the following examples: (a) networks outside of data centers may be analyzed with DNI as long as the background traffic profile is known; (b) uncalibrated DNI models may serve as a basis for design-time performance analysis; (c) the method for extracting and compacting traffic profiles may also be used for other, non-network workloads.
Following early experiences in aviation, medical simulation has rapidly evolved into one of the most novel educational tools of the last three decades. In addition to its use in training individuals or teams in crisis resource management, simulation has been studied as a tool to evaluate the technical and non-technical skills of individuals as well as, more recently, of entire medical teams.
It is usually fairly difficult to obtain clinical reference data from critical events with which to refute claims that the management of an actual event fell below what could reasonably be expected. We demonstrated the use of rank-order statistics to calculate quantiles with confidence limits for the management times of critical obstetrical events, using data from realistic simulation. This approach could be used to describe the distribution of treatment times and thereby assist in deciding what performance may constitute an outlier. It can also identify particular challenges of clinical practice and inform the development of educational curricula. While information derived from simulation has to be interpreted with a high degree of caution in a clinical context, it may represent a further 'added value', an important step in establishing simulation as a training tool, and a source of information that could be used in an appropriate clinical context for adverse events. Large amounts of data (such as from a simulation registry) would allow the calculation of acceptable confidence intervals for the required outcome parameters as well as actual tolerance limits.
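The rank-order-statistics idea can be sketched as the standard distribution-free confidence interval for a quantile, in which order-statistic ranks are chosen from the binomial distribution. This is the generic textbook construction, not the exact procedure of the study; the sample data are simulated for illustration.

```python
import numpy as np
from scipy import stats

def quantile_ci(sample, p=0.5, conf=0.95):
    """Distribution-free CI for the p-quantile: choose order-statistic ranks
    l and u so that P(X_(l) <= q_p <= X_(u)) is approximately >= conf, using
    the fact that the count of observations below q_p is Binomial(n, p)."""
    x = np.sort(np.asarray(sample))
    n = len(x)
    alpha = 1.0 - conf
    l = int(stats.binom.ppf(alpha / 2, n, p))        # lower rank (0-based)
    u = int(stats.binom.ppf(1 - alpha / 2, n, p))    # upper rank (0-based)
    return x[max(l, 0)], x[min(u, n - 1)]

# e.g. hypothetical management times (minutes) of a simulated critical event
rng = np.random.default_rng(1)
times = rng.lognormal(mean=1.5, sigma=0.4, size=120)
lo, hi = quantile_ci(times, p=0.5)
```

Because the interval endpoints are order statistics of the observed times, the method makes no assumption about the shape of the treatment-time distribution, which is exactly what makes it attractive when only simulation data are available.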
In this thesis two main projects are presented, both aiming at the overall goal of particle detector development. In the first part of the thesis, detailed shielding studies are discussed, focused on the shielding section of the planned New Small Wheel as part of the ATLAS detector upgrade. These studies supported the discussions within the upgrade community and the decisions made on the final design of the New Small Wheel. The second part of the thesis covers the design, construction, and functional demonstration of a test facility for gaseous detectors at the University of Würzburg. Additional studies on the trigger system of the facility are presented; in particular, the precision and reliability of the reference timing signals were investigated.
Staphylococcus aureus (SA) causes nosocomial infections, including life-threatening sepsis by multi-resistant strains (MRSA). It has the ability to form biofilms to protect itself from the host immune system and from anti-staphylococcal drugs. The biofilm and planktonic lifestyles are regulated by a complex quorum-sensing (QS) system with agr as a central regulator. To study biofilm formation and QS mechanisms in SA, a Boolean network was built (94 nodes, 184 edges) including two-component systems such as agr, sae, and arl. Important proteins such as Sar, Rot, and SigB were included as further nodes in the model. System analysis showed that there are only two stable states, biofilm-forming versus planktonic, with clearly different subnetworks turned on. Validation against gene expression data confirmed this. Network consistency was tested first against previous knowledge and the literature. Furthermore, the predicted node activity of different in silico knock-out strains agreed well with corresponding microarray experiments and data sets. Additional validation included the expression of further nodes (Northern blots) and biofilm production compared between different knock-out strains in biofilm adherence assays. The model faithfully reproduces the behaviour of QS signalling mutants. The integrated model also allows prediction of various other network mutations and is supported by experimental data from different strains. Furthermore, the well-connected hub proteins elucidate how the integration of different inputs is achieved by the QS network. In both in silico and in vitro experiments it was found that the sae locus is also a central modulator of biofilm production: sae knock-out strains showed stronger biofilms, and the wild-type phenotype was rescued by sae complementation. To elucidate the way in which sae influences biofilm formation, the network was used and Venn diagrams were made, revealing nodes regulated by sae and changed in biofilms.
In these Venn diagrams, nucleases and extracellular proteins were found to be promising nodes, and the network revealed DNase to be of great importance. Therefore, the amount of DNase produced by different SA mutants was measured qualitatively, attempts were made to dissolve biofilms with corresponding amounts of DNase, and the concentrations of nucleic acids, proteins, and polysaccharides were measured in biofilms of different SA mutants.
With its thorough validation, the network model provides a powerful tool to study QS and biofilm formation in SA, including successful predictions of the behaviour of different knock-out mutants, QS signalling, and biofilm formation. This includes implications for the behaviour of MRSA strains and mutants. Key regulatory mutation combinations (agr–, sae–, sae–/agr–, sigB+, sigB+/sae–) were tested directly in the model as well as in experiments. High connectivity was a good guide to identifying master regulators, whose detailed behaviour was studied both in vitro and in the model. Together, both lines of evidence support in particular a refined regulatory role for sae and agr, with involvement in biofilm repression and/or SA dissemination. By examining the composition of different mutant biofilms as well as the reaction cascade that connects sae to the biofilm-forming ability of SA, and by postulating that nucleases might play an important role in this, first steps were taken in proving and explaining the regulatory links leading from sae to biofilms. Furthermore, differences between the biofilms of different mutant SA strains were found, leading us in perspective towards a new understanding of biofilms, including knowledge of how to better regulate, fight, and exploit their different properties.
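The two-attractor behaviour of such a Boolean network can be illustrated with a deliberately tiny sketch: three nodes standing in for the 94-node model, with update rules that are illustrative only, not the thesis network.

```python
# Toy 3-node Boolean sketch (illustrative, not the 94-node thesis model):
# an agr-like quorum signal activates dispersal genes and represses
# biofilm genes; iterating the update rules reaches a fixed point.
def step(state):
    agr, biofilm, dispersal = state
    return (agr,        # quorum input treated as a fixed external condition
            not agr,    # biofilm programme on when the quorum signal is off
            agr)        # dispersal programme follows the quorum signal

def attractor(state):
    """Iterate the synchronous update until a previously seen state recurs."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state)
    return state

planktonic = attractor((True, True, False))   # quorum on  -> dispersal state
biofilm = attractor((False, False, False))    # quorum off -> biofilm state
```

Even this toy shows the qualitative point made above: depending on the input, the dynamics settle into one of two stable states with complementary subnetworks switched on, which is how the full model separates the biofilm-forming from the planktonic programme.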
Continuously increasing energy prices have considerably influenced the cost of living over the last decades. At the same time, increasingly extreme weather conditions have been reported: drought-filled summers as well as autumns and winters with heavier rainfall and worsening storms. These are possibly the harbingers of the expected global climate change. Considering the depletability of fossil energy sources and rising distrust of nuclear power, investigations into new and innovative renewable energy sources are necessary to prepare for the future.
In addition to wind, hydro, and biomass technologies, electricity generated by the direct conversion of incident sunlight is one of the most promising approaches. Since the syntheses and detailed studies of organic semiconducting polymers and fullerenes were intensified, a new kind of solar cell fabrication has become conceivable. In addition to classical vacuum deposition techniques, organic cells can now also be processed from solution, even on flexible substrates like plastic, fabric, or paper.
An organic solar cell is a complex electrical device whose charge carrier generation is influenced, for instance, by light interference. Charge carrier recombination and transport mechanisms are also important to its performance. Together with the Coulomb interaction, this results in a specific distribution of the charge carriers and the electric field, which finally yields the measured current-voltage characteristics. Changes of certain parameters result in a complex response of the investigated device due to interactions between the physical processes. Consequently, it is necessary to find a way to predict the response of such a device, for example to temperature changes, in a general manner.
In this work, a numerical, one-dimensional simulation has been developed based on the drift-diffusion equations for electrons, holes, and excitons. The generation and recombination rates of the individual species are defined according to a detailed balance approach. The Coulomb interaction between the charge carriers is accounted for through the Poisson equation. This results in a system of differential equations that cannot be solved analytically. With numerical approaches, valid solutions describing the macroscopic processes in organic solar cells can be found. An additional optical simulation is used to determine the spatially resolved charge carrier generation rates due to interference.
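The electrostatic building block of such a solver can be sketched as a one-dimensional finite-difference Poisson solve. The geometry, units, and boundary conditions below are simplified illustrations, not the thesis implementation.

```python
import numpy as np

# Minimal 1D Poisson sketch: d^2(phi)/dx^2 = -rho/eps with phi(0)=phi(L)=0,
# discretised on interior grid points by central finite differences.
def solve_poisson_1d(rho, dx, eps=1.0):
    n = len(rho)
    # tridiagonal second-derivative operator (Dirichlet boundaries implied)
    A = (np.diag(-2.0 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1)) / dx ** 2
    return np.linalg.solve(A, -rho / eps)

n, L = 101, 1.0
dx = L / (n + 1)
x = np.linspace(dx, L - dx, n)      # interior grid points
rho = np.ones(n)                    # uniform charge density (illustrative)
phi = solve_poisson_1d(rho, dx)     # analytic solution: phi = x(L - x)/2
```

In a full drift-diffusion solver this Poisson step is iterated self-consistently with the carrier continuity equations; here it only illustrates the numerical machinery. For the uniform density the discrete solution reproduces the analytic parabola x(L - x)/2 exactly, since second-order central differences are exact for quadratics.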
Concepts regarding organic semiconductors and solar cells are introduced in the first part of this work. Each chapter builds on the previous ones and logically outlines the basic physics, device architectures, models of charge carrier generation and recombination, as well as the mathematical and numerical approaches used to obtain valid simulation results.
In the second part, the simulation is used to elaborate on issues of current interest in organic solar cell research. This includes a basic understanding of how the open-circuit voltage is generated and which processes limit its value. S-shaped current-voltage characteristics are explained by finite surface recombination velocities at the metal electrodes, which pile up local space charges. The power conversion efficiency is identified as a trade-off between charge carrier accumulation and charge extraction. This leads to an optimum of the power conversion efficiency at moderate to high charge carrier mobilities. Differences between recombination rates determined by different interpretations of identical experimental results are attributed to spatially inhomogeneous recombination, relevant for almost all low-mobility semiconductor devices.
This thesis deals with the chaotic dynamics of nonlinear networks consisting of semiconductor lasers which have time-delayed self-feedbacks or mutual couplings. These semiconductor lasers are simulated numerically by the Lang-Kobayashi equations. The central issue is how the chaoticity of the lasers, measured by the maximal Lyapunov exponent, changes when the delay time is changed. It is analysed how this change of chaoticity with increasing delay time depends on the reflectivity of the mirror for the self-feedback or the strength of the mutual coupling, respectively. The consequences of the different types of chaos for the effect of chaos synchronization of mutually coupled semiconductor lasers are deduced and discussed. At the beginning of this thesis, the master stability formalism for the stability analysis of nonlinear networks with delay is explained. After the description of the Lang-Kobayashi equations and their linearizations as a model for the numerical simulation of semiconductor lasers with time-delayed couplings, the artificial sub-Lyapunov exponent $\lambda_{0}$ is introduced. It is explained how the sign of the sub-Lyapunov exponent can be determined by experiments. The notions of "strong chaos" and "weak chaos" are introduced and distinguished by the different scaling properties of the maximal Lyapunov exponent with the delay time. The sign of the sub-Lyapunov exponent $\lambda_{0}$ is shown to determine the occurrence of strong or weak chaos. The transition sequence "weak to strong chaos and back to weak chaos" upon monotonically increasing the coupling strength $\sigma$ of a single laser's self-feedback is shown for numerical calculations of the Lang-Kobayashi equations. At the transition between strong and weak chaos, the sub-Lyapunov exponent vanishes, $\lambda_{0}=0$, resulting in a special scaling behaviour of the maximal Lyapunov exponent with the delay time.
Transitions between strong and weak chaos by changing $\sigma$ can also be found for the Rössler and Lorenz dynamics. The connection between the sub-Lyapunov exponent and the time-dependent eigenvalues of the Jacobian of the internal laser dynamics is analysed. Counterintuitively, the difference between strong and weak chaos is not directly visible from the trajectory, although the difference of the trajectories induces the transitions between the two types of chaos. In addition, it is shown that a linear measure like the auto-correlation function cannot unambiguously reveal the difference between strong and weak chaos either. Although the auto-correlations after one delay time are significantly higher for weak chaos than for strong chaos, it is not possible to detect a qualitative difference. If two time-scale-separated self-feedbacks are present, the shorter feedback has to be taken into account for the definition of a new sub-Lyapunov exponent $\lambda_{0,s}$, which in this case determines the occurrence of strong or weak chaos. If the two self-feedbacks have comparable delay times, the sub-Lyapunov exponent $\lambda_{0}$ remains the criterion for strong or weak chaos. It is shown that the sub-Lyapunov exponent scales with the square root of the effective pump current $\sqrt{p-1}$, both in its magnitude and in the position of the critical coupling strengths. For networks with several distinct sub-Lyapunov exponents, it is shown that the maximal sub-Lyapunov exponent of the network determines whether the network's maximal Lyapunov exponent scales strongly or weakly with increasing delay time. As a consequence, complete synchronization is excluded for arbitrary networks which contain at least one strongly chaotic laser. Furthermore, it is demonstrated that the sub-Lyapunov exponent of a driven laser depends on the number of incoherently superimposed inputs from unsynchronized input lasers.
For networks of delay-coupled lasers operating in weak chaos, the condition $|\gamma_{2}|<\mathrm{e}^{-\lambda_{\mathrm{m}}\,\tau}$ for stable chaos synchronization is deduced using the master stability formalism. Hence, synchronization of any network depends only on the properties of a single laser with self-feedback and on the eigenvalue gap of the coupling matrix. The characteristics of the master stability function for the Lang-Kobayashi dynamics are described, and the master stability function is refined to allow for precise practical prediction of synchronization. The prediction of synchronization with the master stability function is demonstrated for bidirectional and unidirectional networks. Furthermore, the master stability function is extended to two distinct delay times. Finally, symmetries and resonances for certain values of the ratio of the delay times are shown for the master stability function of the Lang-Kobayashi equations.
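The synchronization criterion $|\gamma_{2}|<\mathrm{e}^{-\lambda_{\mathrm{m}}\,\tau}$ can be illustrated numerically: given a coupling matrix, compare its eigenvalue gap with the exponential factor. The coupling matrices and the values of $\lambda_{\mathrm{m}}$ and $\tau$ below are arbitrary examples, not parameters from the thesis.

```python
import numpy as np

def can_synchronize(G, lambda_m, tau):
    """Check |gamma_2| < exp(-lambda_m * tau), where gamma_2 is the
    second-largest eigenvalue modulus of the row-normalised coupling matrix."""
    moduli = np.sort(np.abs(np.linalg.eigvals(G)))[::-1]
    return moduli[1] < np.exp(-lambda_m * tau)

# bidirectional ring of four lasers (rows normalised to unit coupling)
ring = np.array([[0.0, 0.5, 0.0, 0.5],
                 [0.5, 0.0, 0.5, 0.0],
                 [0.0, 0.5, 0.0, 0.5],
                 [0.5, 0.0, 0.5, 0.0]])

# all-to-all coupling of four lasers (rows normalised to unit coupling)
all_to_all = (np.ones((4, 4)) - np.eye(4)) / 3.0
```

For these example values, the four-laser ring fails the criterion because its spectrum contains a second eigenvalue of modulus one (the ring is bipartite, with eigenvalues 1, 0, -1, 0), whereas all-to-all coupling leaves a large gap (eigenvalues 1 and -1/3) and passes it.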
Understanding the emergence of species' ranges is one of the most fundamental challenges in ecology. Early on, geographical barriers were identified as obvious natural constraints to the spread of species. However, many range borders occur along gradually changing landscapes where no sharp barriers are obvious. Mechanistic explanations for this seeming contradiction incorporate environmental gradients that affect either the spatio-temporal variability of conditions or the increasing fragmentation of habitat. Additionally, biological mechanisms like Allee effects (i.e. decreased growth rates at low population sizes or densities), condition-dependent dispersal, and biological interactions with other species have been shown to severely affect the location of range margins. The role of dispersal has been the focus of many studies dealing with range border formation. Dispersal is known to be highly plastic and evolvable, even over short ecological time-scales. However, only a few studies have concentrated on the impact of evolving dispersal on range dynamics. This thesis aims at filling this gap. I study the influence of evolving dispersal rates on the persistence of spatially structured populations in environmental gradients and its consequences for the establishment of range borders. More specifically, I investigate scenarios of range formation in equilibrium, periods of range expansion, and range shifts under global climate change ...
This thesis analyzes the 2001-2006 labor market reforms in Germany. The aim of this work is twofold. First, an overview of the most important reform measures and their intended effects is given. Second, two specific and very fundamental amendments, namely the merging of unemployment assistance and social benefits, as well as changes in the duration of unemployment insurance benefits, are analyzed in detail to evaluate their effects on individuals and the entire economy. Using a matching model with optimal search intensity and Semi-Markov methods, the effects of these two amendments on the duration of unemployment, on optimal search intensity, and on overall unemployment are analyzed.