The ongoing and evolving usage of networks presents two critical challenges for current and future networks: (1) effectively managing the vast and continually increasing data traffic, and (2) handling the substantial number of end devices resulting from the rapid adoption of the Internet of Things. Besides these challenges, there is a mandatory need for reduced energy consumption, more efficient resource usage, and streamlined processes without losing service quality. We comprehensively address these challenges, tackling the monitoring and quality assessment of streaming applications, a leading contributor to total Internet traffic, as well as conducting an exhaustive analysis of the network performance within a Long Range Wide Area Network (LoRaWAN), one of the rapidly emerging LPWAN solutions.
This work aims at elucidating chemical processes involving homogeneous catalysis and photo–physical relaxation of excited molecules in the solid state. Furthermore, compounds with supposedly small singlet–triplet gaps and therefore biradicaloid character are investigated with respect to their electro–chemical behavior. The work on hydroboration catalysis via a reduced 9,10–diboraanthracene (DBA) was performed in collaboration with the Wagner group in Frankfurt, more specifically Dr. Sven Prey, who performed all laboratory experiments. The investigation of delayed luminescence properties of arylboronic esters in their solid state was conducted in collaboration with the Marder group in Würzburg. The author of this work took part in the synthesis of the investigated compounds under the supervision of Dr. Zhu Wu. The final project was a collaboration with the group of Anukul Jana from Hyderabad, India, who provided the experimental data.
Various X-ray microscopy concepts have by now become established in the laboratory and today provide revealing insights into a wide range of sample systems. Here, "laboratory scale" refers to analysis methods that can be operated as a stand-alone instrument. In particular, they are independent of beam generation at a large-scale synchrotron research facility with its otherwise kilometer-sized electron storage ring. Many of the technical innovations in the laboratory are transfers of techniques developed at synchrotrons; others are based on the consistent further development of established concepts. Resolution alone is not decisive for the specific suitability of a microscopy system as a whole. The energy spectrum used for imaging should likewise be matched to the sample system. In addition, a tomography system must be able to preserve its imaging performance in 3D acquisitions.
After an overview of various X-ray microscopy techniques, this thesis focuses on source-based nano-CT in projection magnification as a promising technology for materials analysis. Here, higher photon energies can be used than in competing approaches, as required for the investigation of more strongly absorbing samples, e.g. those with a high metal content. In an otherwise ideal CT instrument, the component limiting resolution and performance is the X-ray source; design innovations there promise the greatest leaps in performance. In this context, it is discussed whether brilliance is a suitable measure for evaluating the performance of X-ray sources, what difficulties its practical measurement is subject to, and how this affects the comparability of the values. Using Monte Carlo simulations, it is shown how the brilliance of different X-ray source designs can be determined theoretically and compared, demonstrated for three modern X-ray source concepts suitable for microscopy. This thesis further addresses the performance limits of transmission X-ray sources. Based on a coupled Monte Carlo and FEM simulation of a nanofocus X-ray source, it is examined whether established literature models are still applicable to modern source designs. From these simulations, a new way of determining the performance limits of nanofocus X-ray sources is derived, along with the advantage modern structured targets offer in this regard.
Finally, the construction of a new laboratory-scale nano-CT instrument based on the previously discussed nanofocus X-ray source and projection magnification is presented and its performance validated. It is specifically designed to enable high-resolution 3D measurements of material systems that were not feasible with previous methods due to insufficient resolution or photon energy. The practical performance of the instrument is therefore validated on real samples and questions from materials science and semiconductor inspection. In particular, the presented measurements of defects in microchips from the automotive sector were previously not possible in this form.
Development, Simulation and Evaluation of Mobile Wireless Networks in Industrial Applications
(2023)
Many industrial automation solutions use wireless communication and rely on the availability and quality of the wireless channel. At the same time, the wireless medium is highly congested and guaranteeing the availability of wireless channels is becoming increasingly difficult. In this work we show that ad-hoc networking solutions can be used to provide new communication channels and improve the performance of mobile automation systems. These ad-hoc networking solutions describe different communication strategies, but avoid relying on network infrastructure by utilizing the Peer-to-Peer (P2P) channel between communicating entities.
This work is a step towards the effective implementation of low-range communication technologies (e.g. Visible Light Communication (VLC), radar communication, mmWave communication) in industrial applications. Implementing infrastructure networks with these technologies is unrealistic, since the low communication range would necessitate a high number of Access Points (APs) to yield full coverage. However, ad-hoc networks do not require any network infrastructure. In this work, different ad-hoc networking solutions for the industrial use case are presented, and tools and models for their examination are proposed.
The main use case investigated in this work are Automated Guided Vehicles (AGVs) for industrial applications. These mobile devices drive throughout the factory, transporting crates, goods or tools, or assisting workers. In most implementations they must exchange data with a Central Control Unit (CCU) and between one another. Predicting whether a certain communication technology is suitable for an application is very challenging, since the applications and the resulting requirements are very heterogeneous.
The proposed models and simulation tools enable the simulation of the complex interaction of mobile robotic clients and a wireless communication network. The goal is to predict the characteristics of a networked AGV fleet.
The proposed tools were used to implement, test and examine different ad-hoc networking solutions for industrial applications using AGVs. These communication solutions handle time-critical and delay-tolerant communication. Additionally, a control method for the AGVs is proposed which optimizes the communication and in turn increases the transport performance of the AGV fleet. Therefore, this work provides not only tools for the further research of industrial ad-hoc systems, but also first implementations of ad-hoc systems which address many of the most pressing issues in industrial applications.
The fact that photovoltaics is a key technology for climate-neutral energy production can be taken as a given. The question of to what extent perovskites will be used for photovoltaic technologies has not yet been fully answered. From a photophysical point of view, however, they have the potential to make a useful contribution to the energy sector. It remains to be seen, though, whether perovskite-based modules will be able to compete with established technologies in terms of durability and cost efficiency. Ionic migration poses an additional challenge. In the present work, primarily the interaction between ionic redistribution, capacitive properties and recombination dynamics was investigated. This was done using impedance spectroscopy, OCVD and I-V characteristics as well as extensive numerical drift-diffusion simulations. The combination of experimental and numerical methods proved to be very fruitful. A suitable model for the description of solar cells with respect to mobile ions was introduced in chapter 4.4. The formal mathematical description of the model was brought, by non-dimensionalization, into a numerically solvable form. The implementation was carried out in the Julia language. Through intelligent use of the structural properties of the sparse systems of equations, automatic differentiation and efficient integration methods, the simulation tool is not only remarkably fast in finding the solution, but also scales quasi-linearly with the grid resolution. The software package was released under an open source license. In conventional semiconductor diodes, capacitance measurements are often used to determine the space charge density.
In the first experimental chapter 5, it is shown that although this is also possible in the presence of the ionic migration found in perovskites, it cannot be directly interpreted as doping-related, since the space charge distribution strongly depends on the preconditioning and can be manipulated by an externally applied voltage. The exact form of this behavior depends on the perovskite composition. This shows, among other things, that experimental results can only be interpreted within the framework of conventional semiconductors to a very limited extent. Nevertheless, the built-in potential of the solar cell can be determined if the experiments are carried out properly. A statement concerning the type and charge of the mobile ions is not possible without further effort, while their number can be determined. The simulations were applied to experimental data in chapter 6. Thus, it could be shown that mobile ions make a significant contribution to the OCVD of perovskite solar cells. j-V characteristics and OCVD transients measured as a function of temperature and illumination intensity could be modeled quantitatively and simultaneously using a single global set of parameters. The simulations further made it possible to derive a simple experimental procedure to determine the concentration and the diffusivity of the mobile ions. The possibility of describing different experiments in a uniform temperature-dependent manner strongly supports the model of mobile ions in perovskites. In summary, this work has made an important contribution to the elucidation of ionic contributions to the (photo)electrical properties of perovskite solar cells. Established experimental techniques for conventional semiconductors have been reinterpreted with respect to ionic mass transport, and new methods have been proposed to draw conclusions on the properties of ionic transport. As a result, the published simulation tools can be used for a number of further studies.
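The quasi-linear scaling with grid resolution mentioned above comes from exploiting the sparse (here: tridiagonal) structure of the discretized equations. As a minimal illustration, and not the published Julia package, which couples drift-diffusion and ionic transport, the following Python sketch solves a 1D Poisson problem, the electrostatic building block of a drift-diffusion model, in O(n) with the Thomas algorithm; all names and parameters are illustrative.

```python
import numpy as np

def thomas_solve(lower, diag, upper, rhs):
    """Solve a tridiagonal linear system in O(n) (Thomas algorithm).

    lower/upper: sub-/super-diagonals (length n-1), diag: main diagonal (length n).
    """
    n = len(diag)
    c = np.empty(n)
    d = np.empty(n)
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i - 1] * c[i - 1]
        c[i] = upper[i] / m if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i - 1] * d[i - 1]) / m
    x = np.empty(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# Discretized -phi'' = 1 on (0,1) with phi(0) = phi(1) = 0: the classic
# tridiagonal system arising in Poisson/drift-diffusion solvers.
n = 99
h = 1.0 / (n + 1)
phi = thomas_solve(-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1),
                   h**2 * np.ones(n))
xs = np.arange(1, n + 1) * h
# Central differences are exact for the quadratic solution x*(1-x)/2.
assert np.allclose(phi, xs * (1 - xs) / 2)
```

Because only the three nonzero bands are stored and visited, doubling the grid resolution roughly doubles the work, which is the essence of the quasi-linear scaling claimed above.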
In this doctoral thesis we cover the performance evaluation of next-generation data plane architectures, composed of complex software as well as programmable hardware components that allow fine-granular configuration. In the scope of the thesis we propose mechanisms to monitor the performance of individual components and model key performance indicators of software-based packet processing solutions. We present novel approaches towards network abstraction that allow the integration of heterogeneous data plane technologies into a single network while maintaining total transparency between control and data plane. Finally, we investigate a full, complex system consisting of multiple software-based solutions and perform a detailed performance analysis. We employ simulative approaches to investigate overload control mechanisms that allow efficient operation under adverse conditions. The contributions of this work build the foundation for future research in the areas of network softwarization and network function virtualization.
We consider a multi-species gas mixture described by a kinetic model. More precisely, we are interested in models with BGK interaction operators. Several extensions to the standard BGK model are studied.
Firstly, we allow the collision frequency to vary not only in time and space but also with the microscopic velocity. In the standard BGK model, the dependence on the microscopic velocity is neglected for reasons of simplicity. We allow for a more physical description by reintroducing this dependence. But even though the structure of the equations remains the same, the so-called target functions in the relaxation term become more sophisticated, as they are defined by a variational procedure.
Secondly, we include quantum effects (for constant collision frequencies). This approach influences again the resulting target functions in the relaxation term depending on the respective type of quantum particles.
In this thesis, we present a numerical method for simulating such models. We use implicit-explicit time discretizations in order to take care of the stiff relaxation part due to possibly large collision frequencies. The key new ingredient is an implicit solver which minimizes a certain potential function. This procedure mimics the theoretical derivation in the models. We prove that theoretical properties of the model are preserved at the discrete level such as conservation of mass, total momentum and total energy, positivity of distribution functions and a proper entropy behavior. We provide an array of numerical tests illustrating the numerical scheme as well as its usefulness and effectiveness.
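The implicit treatment of the stiff relaxation term can be illustrated in a simplified setting. For a space-homogeneous BGK equation with a fixed target function M (in the thesis's scheme, the target is instead obtained by minimizing a potential function over the conserved moments), the implicit Euler update has a closed form and stays stable for arbitrarily large collision frequencies. A hedged Python sketch, not the scheme from the thesis:

```python
import numpy as np

def bgk_imex_step(f, M, dt, nu):
    """One implicit Euler step for df/dt = nu * (M - f) with fixed target M.

    Because the relaxation step leaves the moments defining M unchanged in a
    BGK scheme, the implicit update reduces to this closed form and remains
    stable for arbitrarily large collision frequency nu.
    """
    return (f + dt * nu * M) / (1.0 + dt * nu)

v = np.linspace(-5.0, 5.0, 64)                   # microscopic velocity grid
M = np.exp(-v**2 / 2.0) / np.sqrt(2.0 * np.pi)   # Maxwellian target (illustrative)
f = np.where(np.abs(v) < 2.0, 0.25, 0.0)         # non-equilibrium initial data
f = bgk_imex_step(f, M, dt=1.0, nu=1e8)          # one step in the stiff regime
assert np.max(np.abs(f - M)) < 1e-6              # relaxed to the target
```

With an explicit scheme, the step size would have to resolve 1/nu; the implicit form removes this restriction, which is the motivation for the implicit-explicit discretization described above.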
Incisional hernia is a common complication after laparotomy. It is treated by surgical mesh implantation, a procedure that requires detailed anatomical knowledge. Following the ethical imperative, a cost-efficient model was developed that imitates the human situs and on which a retromuscular mesh implantation can be performed. The high-fidelity model consists mainly of two-component silicone. The model was developed and validated within this study. The sequential triangular test was used to determine the number of test subjects. After 6 novices (final-year medical students) and 6 experts (board-certified visceral surgeons) had each operated on one model, content, construct and criterion validity were examined. The model and the performance of the operation were then assessed with three methods. First, the participants completed a questionnaire on the realism of the model directly after the operation. In addition, three blinded raters scored the operations using the Competency Assessment Tool (CAT), a modified version of the questionnaire by Miskovic, on the following subscales: "use of instruments", "handling of tissue", "near misses and errors", "quality of the end product". Finally, the operated models were autopsied and scored with regard to the "end results".
The results show that an incisional hernia repair with mesh implantation can be performed authentically on the SUBsON model. The test subjects rated the model as realistic. Reliability was good to excellent in all categories. The experts outperformed the novices in all CAT subscales. Examination of criterion validity revealed a paradoxical effect: in the dissection of the fatty triangle, the novices performed significantly better (p < 0.05) than the experts. Possible explanations for this are manifold.
The performance differences between novices and experts confirm the construct validity of the model and questionnaire as well as the realism of the model. This study revealed deficits, especially among experts, in anatomical knowledge regarding the dissection of the fatty triangle. In the future, the model can be used to practice mesh implantation and the dissection of the fatty triangle, as well as to evaluate surgical performance.
Introduction
Facial lacerations account for nearly one third of all lacerations treated in the emergency department (Singer et al., 2006). The majority are not treated by plastic surgeons (Lee, Cho, et al., 2015), which makes a solid basic training of junior physicians indispensable. A common method for teaching practical skills is the conventional "see one, do one" approach, which is often judged insufficient (Zahiri et al., 2015). In contrast, numerous benefits have been documented for Peyton's four-step approach (Herrmann-Werner et al., 2013; Krautter et al., 2015). Using a custom-developed silicone face model, both teaching methods were compared with regard to their learning outcomes for communication skills and manual skills, long-term retention, the duration of the procedure, and a correct procedural sequence in the treatment of facial wounds.
Materials and Methods
At the time of participation in the study, the students (n = 20 at a power of 0.8) were either in their final clinical year (11th/12th semester) (n = 10) or in their clinical clerkship (10th semester) (n = 10). The exclusion criterion was an outpatient facial suture already performed independently.
The cohort taught by the conventional method served as the control group (CG) and the cohort taught by Peyton's method as the experimental group (EG); both were instructed by a video tutorial before performing the suture under local anesthesia on the face model. After 7 days, the operation was performed a second time without instruction. The operations were filmed and rated by three blinded raters using the scales "use of instruments", "handling of tissue", "near misses and errors" and "quality of the end product" of the Competency Assessment Tool (CAT) (1 = novice to 4 = experienced), which in turn were divided into 12 items (Miskovic et al., 2013). The calculation of differences included manual skills, long-term retention, communication, and differences between training levels. In addition, adherence to the correct procedural sequence was checked, and the time to completion was measured and compared between the teaching methods. To validate the CAT, the reliability of the scales and the inter-rater reliability were calculated.
Results
Both the reliability of the scales and the inter-rater reliability showed satisfactory results.
Regarding differences at the scale level, the EG achieved significantly better results than the CG for the mean values of all four scales (p < 0.05). These results were confirmed in the analysis of individual items. Comparing the two operation days, the EG showed a significant increase in performance (p < 0.05). Regarding communication skills, the EG was superior on one of the two associated items (p < 0.05). A detailed look at the level of training revealed that students in their final clinical year performed better overall than those in the clerkship. Moreover, the EG cohort adhered to the correct procedural sequence significantly more often (p < 0.05) and, descriptively, needed less time to perform the procedure.
Conclusion
Peyton's method proved superior to the conventional method for learning facial suturing, both in quality and with regard to performing the steps in the correct order. There is also evidence that Peyton's method promotes long-term retention of what has been learned and increases the speed of execution. The results thus support the use of Peyton's method for learning complex surgical skills.
Outlook
In the future, firmly integrating Peyton's method into the curriculum could improve medical training. There is a need for further research, particularly with regard to sustainable and time-efficient learning. Further studies on learning communication skills with Peyton's four-step approach would also be desirable.
The present study examined expectations, acceptance, and the physical and psychological feasibility of an overweight simulation for children and adolescents. Participation in the simulation allowed the subjects to experience typical everyday problems of obese people realistically.
A total of 58 students aged 13 to 16 took part in the project and completed the overweight simulation.
The results showed positive expectations towards the simulation. It also became clear that the vast majority perceived the course positively and thus accepted it. The exercises also proved to be physically and psychologically feasible.
Sound training is essential in interventional cardiology in order to perform the sometimes complex procedures successfully and safely. In percutaneous coronary intervention (PCI), errors can occur in handling the guidewire, leading on the one hand to wire loss and on the other to distal coronary vessel perforation. It is therefore sensible to train the technique of catheter exchange without inadequate wire movement on a model before the first intervention in the catheterization laboratory. The DACH-BOSS simulator was developed for this purpose, allowing catheter exchange to be trained.
The validity of the model was examined in a study of 10 medical students (S) and 10 prospective interventional cardiologists (F). Each participant completed a training series of 25 procedures. To determine the training effect, the mean scores of the first 3 and the last 3 procedures of each subject in the student and advanced groups were compared. To determine construct validity, a third group of 5 experts (E, > 1000 PCIs) performed 3 procedures each. The extent of wire movement and the time required were scored and expressed as a skills score.
In the first 3 procedures, the experts achieved significantly higher scores than the students or the advanced group (E: 12.9 ± 1.0; S: 7.1 ± 2.6, p = 0.001; F: 8.3 ± 2.0, p = 0.001; Mann-Whitney U). Novices and advanced trainees showed a learning curve over the 25 training procedures; on average, the student group improved from 7.1 ± 2.6 to 12.2 ± 2 (p = 0.007, Wilcoxon) and the advanced group from 8.3 ± 2.0 to 13.2 ± 1.0 (p = 0.005, Wilcoxon).
The DACH-BOSS simulator is thus a valid model for training catheter exchange without inadequate wire movement. Prospective interventional cardiologists can practice and learn this important step of the procedure on the model. Whether the skills acquired on the simulator transfer to the clinical procedure must be examined in further studies.
How genomic and ecological traits shape island biodiversity - insights from individual-based models
(2020)
Life on oceanic islands provides a playground and a comparatively easily studied basis
for the understanding of biodiversity in general. Island biota feature many
fascinating patterns: endemic species, species radiations and species with
peculiar trait syndromes. However, classic and current island biogeography
theory does not yet consider all the factors necessary to explain many of these
patterns. In response to this, there is currently a shift in island biogeography
research to systematically consider species traits and thus gain a more
functional perspective. Despite this recent development, a set of species
characteristics remains largely ignored in island biogeography, namely genomic
traits. Evidence suggests that genomic factors could explain many of the
speciation and adaptation patterns found in nature and thus may be highly
informative to explain the fascinating and iconic phenomena known for oceanic
islands, including species radiations and susceptibility to biotic invasions.
Unfortunately, the current lack of comprehensive meaningful data makes studying
these factors challenging. Even with paleontological data and space-for-time
rationales, data is bound to be incomplete due to the very environmental
processes taking place on oceanic islands, such as land slides and volcanism,
and lacks causal information due to the focus on correlative approaches. As a
promising alternative, integrative mechanistic models can explicitly consider
essential underlying eco-evolutionary mechanisms. In fact, these models have
been shown to be applicable to a variety of different systems and study questions.
In this thesis, I therefore examined present mechanistic island models to
identify how they might be used to address some of the current open questions in
island biodiversity research. Since none of the models simultaneously considered
speciation and adaptation at a genomic level, I developed a new genome- and
niche-explicit, individual-based model. I used this model to address three
different phenomena of island biodiversity: environmental variation, insular
species radiations and species invasions.
Using only a single model I could show that small-bodied species with flexible
genomes are successful under environmental variation, that a complex combination
of dispersal abilities, reproductive strategies and genomic traits affect the
occurrence of species radiations and that invasions are primarily driven by the
intensity of introductions and the trait characteristics of invasive
species. This highlights how the consideration of functional traits can promote
the understanding of some of the understudied phenomena in island biodiversity.
The results presented in this thesis exemplify the generality of integrative
models which are built on first principles. Thus, by applying such models to
various complex study questions, they are able to unveil multiple biodiversity
dynamics and patterns. The combination of several models, such as the one I
developed, into an eco-evolutionary model ensemble could further help to identify
fundamental eco-evolutionary principles. I conclude the thesis with an outlook
on how to use and extend my developed model to investigate geomorphological
dynamics in archipelagos and to allow dynamic genomes, which would further
increase the model's generality.
This dissertation deals with the development of a novel high-fidelity, full-procedural simulation model for open umbilical hernia repair with preperitoneal mesh implantation in underlay position (NANEP model). The training model was used in a specially designed surgical course in the Department of General and Visceral Surgery of the University Hospital of Würzburg. The aim was to validate the model by examining content validity, construct validity and differential validity. The video-recorded operations of the subjects were scored on the Catlive internet platform using the Competency Assessment Tool. The learning gain was measured and analyzed. The operated models were autopsied to examine criterion validity.
The present dissertation investigates the management of RFID implementations in retail trade. Our work contributes to this by investigating important aspects that have so far received little attention in the scientific literature. We therefore perform three studies on three important aspects of managing RFID implementations. In our first study, we evaluate customer acceptance of pervasive retail systems using privacy calculus theory. The results of this study reveal the most important aspects a retailer has to consider when implementing pervasive retail systems. In our second study, we analyze RFID-enabled robotic inventory taking with the help of a simulation model. The results show that retailers should implement robotic inventory taking if the accuracy rates of the robots are as high as the robots' manufacturers claim. In our third and last study, we evaluate the potential of RFID data for supporting managerial decision making. We propose three novel methods to extract useful information from RFID data and propose a generic information extraction process. Our work is geared towards practitioners who want to improve their RFID-enabled processes and towards scientists conducting RFID-based research.
Lifetime techniques are applied to diverse fields of study including materials sciences, semiconductor physics, biology, molecular biophysics and photochemistry.
Here we present DDRS4PALS, a software for the acquisition and simulation of lifetime spectra using the DRS4 evaluation board (Paul Scherrer Institute, Switzerland) for time-resolved measurements and digitization of detector output pulses. Artifact-afflicted pulses can be corrected or rejected prior to the lifetime calculation to enable the generation of high-quality lifetime spectra, which are crucial for a profound analysis, i.e. the decomposition into the true information. Moreover, the pulses can be streamed to an (external) hard drive during the measurement and subsequently loaded in offline mode without being connected to the hardware. This allows the generation of various lifetime spectra at different configurations from one single measurement and, hence, a meaningful comparison in terms of analyzability and quality. Parallel processing and an integrated JavaScript-based language provide convenient options to accelerate and automate time-consuming processes such as lifetime spectra simulations.
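Pulse-quality gating of the kind described (correcting or rejecting artifact-afflicted pulses before the lifetime calculation) can be sketched in a few lines. The criteria below (digitizer saturation and pile-up) and all thresholds are hypothetical illustrations, not the actual DDRS4PALS implementation:

```python
import numpy as np

def accept_pulse(samples, adc_min=-500.0, adc_max=500.0, frac=0.5):
    """Hypothetical quality gate: reject saturated or piled-up pulses.

    samples: 1-D array of digitized detector output (arbitrary units,
    negative-going pulse). frac: fraction of the pulse minimum used to
    count separate excursions (more than one excursion -> pile-up).
    """
    s = np.asarray(samples, dtype=float)
    if s.min() <= adc_min or s.max() >= adc_max:  # digitizer saturation
        return False
    deep = s < frac * s.min()                      # samples inside an excursion
    rising_edges = np.count_nonzero(np.diff(deep.astype(int)) == 1)
    n_excursions = rising_edges + int(deep[0])
    return n_excursions == 1                       # exactly one pulse expected

t = np.arange(200)
clean = -np.exp(-(t - 50) ** 2 / 20.0)             # single detector pulse
pileup = clean - np.exp(-(t - 120) ** 2 / 20.0)    # two overlapping events
assert accept_pulse(clean)
assert not accept_pulse(pileup)
```

Gating of this sort, applied before timing extraction, is what keeps distorted pulses from smearing the lifetime spectrum; the real software applies considerably more elaborate corrections.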
Neurobiology is widely supported by bioinformatics. Due to the large amount of data generated on the biological side, a computational approach is required. This thesis presents four different bioinformatic tools applied in the service of neurobiology.
The first two tools presented belong to the field of image processing. In the first case, we use an algorithm based on the wavelet transform to assess calcium activity events in cultured neurons. We designed an open-source tool to assist neurobiology researchers in the analysis of calcium imaging videos. Such analysis is usually done manually, which is time-consuming and highly subjective. Our tool speeds up the work and offers the possibility of an unbiased detection of the calcium events. Even more importantly, our algorithm detects not only neuron spiking activity but also local spontaneous activity, which is normally discarded as irrelevant. We showed that this activity plays a determining role in neuronal calcium dynamics and is involved in important functions such as signal modulation, memory, and learning.
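Wavelet-based event detection of this kind can be sketched with SciPy's continuous-wavelet-transform peak finder; the synthetic fluorescence trace and all parameters below are illustrative assumptions, not the thesis tool's actual algorithm.

```python
import numpy as np
from scipy.signal import find_peaks_cwt

# synthetic dF/F trace: baseline noise plus three calcium transients
# (instantaneous rise, exponential decay with a 40-sample time constant)
rng = np.random.default_rng(0)
t = np.arange(1000)
trace = 0.05 * rng.standard_normal(1000)
for onset in (200, 500, 800):
    trace[t >= onset] += np.exp(-(t[t >= onset] - onset) / 40.0)

# detect events with a CWT peak finder; `widths` spans the
# expected transient scales (in samples)
events = find_peaks_cwt(trace, widths=np.arange(10, 60))
```

The CWT ridge-line approach makes the detection robust against high-frequency noise without a hand-tuned amplitude threshold, which is one way to obtain the unbiased detection the text mentions.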
The second project is a segmentation task: we are interested in segmenting the neuron nuclei in electron microscopy images of C. elegans. Marking these structures is necessary in order to reconstruct the connectome of the organism. C. elegans is an excellent study case due to the simplicity of its nervous system (only 302 neurons). This worm, despite its simplicity, has taught us a lot about neuronal mechanisms, and there is still much information to be extracted from it; therein lies the importance of reconstructing its connectome. A version of the C. elegans connectome exists, but it was produced by hand and from a single subject, which leaves considerable room for error. By automating the segmentation of the electron microscopy images we guarantee an unbiased approach, and we will be able to verify the connectome on several subjects.
For the third project we moved from image processing to biological modeling. Because of the high complexity of even small biological systems, it is necessary to analyze them with the help of computational tools; the term in silico was coined to refer to such computational models of biological systems. We designed an in silico model of the TNF (tumor necrosis factor) ligand and its two principal receptors. This biological system is highly relevant because it is involved in the inflammation process. Inflammation is of great importance as a protection mechanism, but it can also lead to serious diseases (e.g. cancer), and chronic inflammation processes can be particularly dangerous in the brain. In order to better understand the dynamics that govern the TNF system, we created a model using the BioNetGen language, a rule-based language that allows simulating systems in which multiple agents are governed by a single rule. Using our model we characterized the TNF system and formulated hypotheses about the interaction of the ligand with each of the two receptors. These hypotheses can later be used to define drug targets in the system or possible treatments for chronic inflammation or a lacking inflammatory response.
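The thesis model is rule-based (BioNetGen); as a hedged illustration of the underlying idea of one ligand competing for two receptors, here is a toy mass-action ODE sketch. All species names, concentrations, and rate constants are hypothetical and not taken from the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy mass-action model: ligand L binds receptors R1 and R2,
# forming complexes C1 and C2. All rate constants are hypothetical.
k_on1, k_off1 = 1.0, 0.1    # L + R1 <-> C1
k_on2, k_off2 = 0.5, 0.05   # L + R2 <-> C2

def rhs(t, y):
    L, R1, R2, C1, C2 = y
    v1 = k_on1 * L * R1 - k_off1 * C1   # net binding flux to R1
    v2 = k_on2 * L * R2 - k_off2 * C2   # net binding flux to R2
    return [-v1 - v2, -v1, -v2, v1, v2]

y0 = [1.0, 0.5, 0.5, 0.0, 0.0]          # initial concentrations
sol = solve_ivp(rhs, (0.0, 50.0), y0, rtol=1e-8, atol=1e-10)
L, R1, R2, C1, C2 = sol.y[:, -1]        # near-equilibrium state
```

A rule-based language such as BioNetGen generates exactly this kind of reaction network automatically from binding rules, which is what makes it convenient for systems with many interacting species.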
The final project deals with the protein folding problem. In our organism proteins are folded all the time, because only in their folded conformation are proteins capable of doing their job (with very few exceptions). This folding process presents a great challenge for science because it has been shown to be NP-hard, meaning that no efficient (polynomial-time) algorithm for solving it is known. Nevertheless, the body somehow manages to fold a protein in just milliseconds. This phenomenon puzzles not only biologists but also mathematicians. In mathematics, such problems have been studied for a long time, and it is known that an efficient solution to one NP-complete problem would yield efficient solutions to all of them. If we manage to understand how nature solves the protein folding problem, then we might be able to apply this solution to many other problems. Our research intends to contribute to this discussion: unfortunately, not by explaining how nature solves the protein folding problem, but by arguing that it does not solve the problem at all. This seems contradictory, since the body folds proteins all the time, but our hypothesis is that organisms have learned to solve a simplified version of the problem. Nature does not solve protein folding in its full complexity; it solves only a small instance of the problem, an instance as simple as a convex optimization problem. We formulate protein folding as an optimization problem to support this claim and present some toy examples of the formulation. If our hypothesis is true, protein folding is a simple problem, and we just need to understand and model the conditions in the vicinity inside the cell at the moment the folding process occurs.
Once we understand this starting conformation and its influence on the folding process, we will be able to design treatments for amyloid diseases such as Alzheimer's and Parkinson's.
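A toy version of "folding as a simple optimization problem" can be sketched as follows. The one-dimensional "native" conformation and the harmonic (hence convex) energy function are illustrative assumptions of ours, not the thesis' actual formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical native conformation of a 5-bead chain (1-D coordinates)
native = np.array([0.0, 1.0, 2.0, 1.0, 0.0])

def energy(x):
    """Convex toy energy: harmonic restraints pulling neighbour-to-
    neighbour distances toward their native values, plus a weak
    quadratic term that pins the absolute position."""
    d_native = np.diff(native)
    d = np.diff(x)
    return np.sum((d - d_native) ** 2) + 0.01 * np.sum(x ** 2)

x0 = np.zeros(5)                     # fully "unfolded" start
res = minimize(energy, x0, method="BFGS")
```

Because the energy is convex, any descent method reaches the global minimum quickly; this is the sense in which a restricted instance of folding can be "simple" even though the general problem is intractable.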
In summary, this thesis contributes to neurobiology research on four different fronts. Two are practical contributions with immediate benefits: the calcium imaging video analysis tool and the TNF in silico model. The neuron nuclei segmentation is a contribution for the near future, a step towards the full annotation of the C. elegans connectome and later the reconstruction of the connectomes of other species. Finally, the protein folding project is a first impulse to change the way we conceive the protein folding process in nature; we try to point future research in a novel direction, in which the most relevant characteristic of the process is not the amino acid code but the conditions within the cell.
This paper presents a measurement of the polarisation of tau leptons produced in Z/gamma* -> tau tau decays, performed with a dataset of proton-proton collisions at sqrt(s) = 8 TeV, corresponding to an integrated luminosity of 20.2 fb^-1 recorded with the ATLAS detector at the LHC in 2012. The Z/gamma* -> tau tau decays are reconstructed from a hadronically decaying tau lepton with a single charged particle in the final state, accompanied by a tau lepton that decays leptonically. The tau polarisation is inferred from the relative fraction of energy carried by charged and neutral hadrons in the hadronic tau decays. The polarisation is measured in a fiducial region that corresponds to the kinematic region accessible to this analysis. The tau polarisation extracted over the full phase space within the Z/gamma* mass range of 66 < m_Z/gamma* < 116 GeV is found to be P_tau = -0.14 +/- 0.02 (stat) +/- 0.04 (syst). It is in agreement with the Standard Model prediction of P_tau = -0.1517 +/- 0.0019, which is obtained from the ALPGEN event generator interfaced with the PYTHIA 6 parton shower modelling and the TAUOLA tau decay library.
The present study focuses on testing and further developing the method of multi-agent systems for forecasting purposes in the retail sector. The concrete objective of the work is the design of an integrative system for simulating possible future scenarios of (spatial) consumer behaviour. Agent-based modelling makes it possible to flexibly integrate the hitherto prevailing top-down approaches into a bottom-up model. The most important structure-shaping impulses in the retail system, and thus on consumers, currently stem from the digitalization of the sales process. The customers' "space-time cage" is being expanded, and certain constraints of spatial and temporal binding within the purchasing process disappear. The classical temporal sequence of shopping behaviour is dissolving; information gathering increasingly takes place digitally. Instead, the product's utility takes centre stage, and associated services such as information, customer service, and logistics are combined flexibly. Against this background, agent-based simulation represents a dynamic approach that addresses a number of the deficits of traditional, static methods and offers diverse possibilities for analysing the interactions between consumer behaviour and spatial retail structures. Given the increasing digitalization of the purchasing process, the resulting information on consumer behaviour, and ever more complex research questions, the use of multi-agent simulations in retail companies can be expected to grow considerably in the coming years.
Nowadays, data centers are becoming increasingly dynamic due to the widespread adoption of virtualization technologies. Systems can scale their capacity on demand by growing and shrinking their resources dynamically based on the current load. However, the complexity and performance of modern data centers are influenced not only by the software architecture, middleware, and computing resources, but also by network virtualization, network protocols, network services, and configuration. The field of network virtualization is not as mature as server virtualization, and there are multiple competing approaches and technologies. Performance modeling and prediction techniques provide a powerful tool for analyzing the performance of modern data centers. However, given the wide variety of network virtualization approaches, no common approach exists for modeling and evaluating the performance of virtualized networks.
The performance community has proposed multiple formalisms and models for evaluating the performance of infrastructures based on different network virtualization technologies. The existing performance models can be divided into two main categories: coarse-grained analytical models and highly detailed simulation models. Analytical performance models are normally defined at a high level of abstraction; they abstract away many details of the real network and therefore have limited predictive power. Simulation models, on the other hand, are normally focused on a selected networking technology and take into account many specific performance-influencing factors, resulting in detailed models that are tightly bound to a given technology, infrastructure setup, or protocol stack.
Existing models are also inflexible: they provide a single solution method without giving the user means to influence the trade-off between solution accuracy and solution overhead. To obtain flexibility in performance prediction, the user is thus required to build multiple different performance models, each yielding a separate prediction with its own focus, performance metrics, prediction accuracy, and solving time.
The goal of this thesis is to develop a modeling approach that does not require the user to have experience in any of the applied performance modeling formalisms. The approach offers flexibility in modeling and analysis by balancing between (a) the generic character and low overhead of coarse-grained analytical models and (b) the higher prediction accuracy of detailed simulation models.
The contributions of this thesis intersect with several technologies and research areas: software engineering, model-driven software development, domain-specific modeling, performance modeling and prediction, networking and data center networks, network virtualization, Software-Defined Networking (SDN), and Network Function Virtualization (NFV). The main contributions of this thesis compose the Descartes Network Infrastructure (DNI) approach and include:
• Novel modeling abstractions for virtualized network infrastructures. This includes two meta-models that define modeling languages for modeling data center network performance. The DNI and miniDNI meta-models represent network infrastructures at two different abstraction levels. Regardless of which variant of the DNI meta-model is used, the modeling language provides generic modeling elements that can describe the majority of existing and future network technologies, while at the same time abstracting factors that have little influence on the overall performance. I focus on SDN and NFV as examples of modern virtualization technologies.
• A network deployment meta-model: an interface between DNI and other meta-models that allows defining mappings between DNI and other descriptive models. The integration with other domain-specific models allows capturing behaviors that are not reflected in the DNI model, for example software bottlenecks, server virtualization, and middleware overheads.
• Flexible model solving through model transformations. The transformations enable solving a DNI model by transforming it into a predictive model. The transformations vary in size and complexity depending on how much data is abstracted away in the transformation process and how much is provided to the solver. In this thesis, I contribute six transformations that turn DNI models into predictive models based on the following formalisms: (a) OMNeT++ simulation, (b) Queueing Petri Nets (QPNs), and (c) Layered Queueing Networks (LQNs). For each of these formalisms, two predictive models with different levels of detail are generated. Some predictive models can be solved using multiple alternative solvers, resulting in up to ten different automated solving methods for a single DNI model.
• A model extraction method that supports the modeler by automatically prefilling the DNI model with network traffic data. The contributed traffic profile abstraction and optimization method provides a trade-off between the size of the extracted profiles and their level of detail.
• A method for selecting feasible solving methods for a DNI model. The method proposes a set of solvers based on a trade-off analysis that characterizes each transformation with respect to parameters such as its specific limitations, expected prediction accuracy, expected run time, required CPU and memory resources, and scalability.
• An evaluation of the approach in the context of two realistic systems. I evaluate the approach with a focus on factors such as the prediction of network capacity and interface throughput, applicability, and flexibility in trading off prediction accuracy against solving time. Although the approach does not aim to maximize prediction accuracy, I demonstrate that in the majority of cases the prediction error is low: up to 20% for uncalibrated models and up to 10% for calibrated models, depending on the solving technique.
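The trade-off-based solver selection among the contributions above can be sketched as a simple filter-and-rank step: infeasible candidates are dropped against resource constraints, and the rest are ordered by a weighted score combining expected error and run time. The candidate names and all numbers below are purely hypothetical, not figures from the thesis.

```python
# Hypothetical solver candidates, each characterized by expected
# prediction error (%), expected run time (s), and memory need (MB).
CANDIDATES = [
    ("omnet_detailed", 5, 3600, 4096),
    ("omnet_coarse",  10,  600, 1024),
    ("qpn_simqpn",    12,  120,  512),
    ("lqn_lqns",      20,    5,  128),
]

def select_solvers(max_runtime_s, max_memory_mb, w_error=1.0, w_time=0.01):
    """Drop candidates violating the constraints, then rank the rest
    by a weighted score (lower is better)."""
    feasible = [c for c in CANDIDATES
                if c[2] <= max_runtime_s and c[3] <= max_memory_mb]
    return sorted(feasible, key=lambda c: w_error * c[1] + w_time * c[2])

# e.g. the user allows at most 15 minutes and 2 GB for solving
best = select_solvers(max_runtime_s=900, max_memory_mb=2048)
```

Varying the weights shifts the recommendation between accuracy-oriented and speed-oriented solving, which is the flexibility the DNI approach advertises.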
In summary, this thesis presents the first approach to flexible run-time performance prediction in data center networks, including networks based on SDN. It provides the ability to flexibly balance between performance prediction accuracy and solving overhead. The approach offers the following key benefits:
• It is possible to predict the impact of changes in the data center network on performance, including changes in network topology, hardware configuration, traffic load, and application deployment.
• DNI can successfully model and predict the performance of multiple different network infrastructures, including proactive SDN scenarios.
• The prediction process is flexible, that is, it balances the granularity of the predictive models against the solving time. Decreased prediction accuracy is usually rewarded with savings in solving time and in the resources required for solving.
• Users can conduct performance analyses using multiple different prediction methods without needing expertise and experience in each of the modeling formalisms.
The components of the DNI approach can also be applied to scenarios not considered in this thesis. The approach is generalizable, for example in the following cases: (a) networks outside of data centers may be analyzed with DNI as long as the background traffic profile is known; (b) uncalibrated DNI models may serve as a basis for design-time performance analysis; (c) the method for extracting and compacting traffic profiles may be used for other, non-network workloads as well.
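As an illustration of the kind of traffic profile extraction and compaction mentioned above, here is a minimal sketch: packet arrivals are binned into a per-second rate series, and adjacent bins with similar rates are merged into a piecewise-constant profile. The binning scheme and merge heuristic are our own assumptions for illustration, not the thesis' method.

```python
import numpy as np

def compact_profile(timestamps_s, bin_s=1.0, tolerance=0.2):
    """Compact a packet-arrival trace into a piecewise-constant rate
    profile: bin arrivals into per-second rates, then merge adjacent
    bins whose rates differ by less than `tolerance` (relative)."""
    t = np.asarray(timestamps_s)
    n_bins = int(np.ceil(t.max() / bin_s)) or 1
    counts, _ = np.histogram(t, bins=n_bins, range=(0.0, n_bins * bin_s))
    rates = counts / bin_s                     # packets per second
    segments = []                              # list of (duration_s, rate)
    for r in rates:
        if segments and abs(r - segments[-1][1]) <= tolerance * max(segments[-1][1], 1.0):
            dur, prev = segments[-1]
            # merge: extend duration, keep duration-weighted mean rate
            segments[-1] = (dur + bin_s, (prev * dur + r * bin_s) / (dur + bin_s))
        else:
            segments.append((bin_s, float(r)))
    return segments

# e.g. a 10 s trace: ~100 pkt/s for 5 s, then ~200 pkt/s for 5 s
ts = np.concatenate([np.linspace(0, 5, 500, endpoint=False),
                     np.linspace(5, 10, 1000, endpoint=False)])
profile = compact_profile(ts)
```

The `tolerance` parameter embodies the size-versus-detail trade-off: a larger tolerance yields fewer, coarser segments, while a tolerance of zero preserves every bin.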