TY - JOUR
A1 - Gehrke, Alexander
A1 - Balbach, Nico
A1 - Rauch, Yong-Mi
A1 - Degkwitz, Andreas
A1 - Puppe, Frank
T1 - Erkennung von handschriftlichen Unterstreichungen in Alten Drucken
JF - Bibliothek Forschung und Praxis
N2 - Die Erkennung handschriftlicher Artefakte wie Unterstreichungen in Buchdrucken ermöglicht Rückschlüsse auf das Rezeptionsverhalten und die Provenienzgeschichte und wird auch für eine OCR benötigt. Dabei soll zwischen handschriftlichen Unterstreichungen und waagerechten Linien im Druck (z. B. Trennlinien usw.) unterschieden werden, da letztere nicht ausgezeichnet werden sollen. Im Beitrag wird ein Ansatz basierend auf einem auf Unterstreichungen trainierten neuronalen Netz gemäß der U-Net-Architektur vorgestellt, dessen Ergebnisse in einem zweiten Schritt mit heuristischen Regeln nachbearbeitet werden. Die Evaluationen zeigen, dass Unterstreichungen sehr gut erkannt werden, wenn bei der Binarisierung der Scans nicht zu viele Pixel der Unterstreichung wegen geringem Kontrast verloren gehen. Zukünftig sollen die Worte oberhalb der Unterstreichung mit OCR transkribiert werden und auch andere Artefakte wie handschriftliche Notizen in alten Drucken erkannt werden.
N2 - The recognition of handwritten artefacts like underlines in historical printings allows inference on the reception and provenance history and is necessary for OCR (optical character recognition). In this context it is important to differentiate between handwritten and printed lines, since the latter are common in printings, but should be ignored. We present an approach based on neural nets with the U-Net architecture, whose segmentation results are post-processed with heuristic rules. The evaluations show that handwritten underlines are very well recognized if the binarisation of the scans is adequate. Future work includes transcription of the underlined words with OCR and recognition of other artefacts like handwritten notes in historical printings.
T2 - Recognition of handwritten underlines in historical printings
KW - Brüder Grimm Privatbibliothek
KW - Erkennung handschriftlicher Artefakte
KW - Convolutional Neural Network
KW - regelbasierte Nachbearbeitung
KW - Grimm brothers personal library
KW - handwritten artefact recognition
KW - convolutional neural network
KW - rule based post processing
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-193377
SN - 1865-7648
SN - 0341-4183
N1 - Dieser Beitrag ist mit Zustimmung des Rechteinhabers aufgrund einer (DFG-geförderten) Allianz- bzw. Nationallizenz frei zugänglich.
VL - 43
IS - 3
SP - 447
EP - 452
ER -
TY - JOUR
A1 - Ali, Qasim
A1 - Montenegro, Sergio
T1 - Decentralized control for scalable quadcopter formations
JF - International Journal of Aerospace Engineering
N2 - An innovative framework has been developed for the teamwork of two quadcopter formations, each having its specified formation geometry, assigned task, and matching control scheme. Position control for quadcopters in one of the formations has been implemented through a Linear Quadratic Regulator Proportional Integral (LQR PI) control scheme based on explicit model following. Quadcopters in the other formation are controlled through an LQR PI servomechanism control scheme. These two control schemes are compared in terms of their performance and control effort. Both formations are commanded by their respective ground stations through virtual leaders.
Quadcopters in both formations are able to track desired trajectories as well as hover at desired points for a selected time duration. In case of communication loss between the ground station and any of the quadcopters, the neighboring quadcopter provides the command data, received from the ground station, to the affected unit. The proposed control schemes have been validated through extensive simulations using MATLAB®/Simulink®, which provided favorable results.
KW - scalable quadcopter
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-146704
VL - 2016
ER -
TY - JOUR
A1 - Ali, Qasim
A1 - Montenegro, Sergio
T1 - Explicit Model Following Distributed Control Scheme for Formation Flying of Mini UAVs
JF - IEEE Access
N2 - A centralized heterogeneous formation flight position control scheme has been formulated using an explicit model following design, based on a Linear Quadratic Regulator Proportional Integral (LQR PI) controller. The leader quadcopter is a stable reference model with desired dynamics whose output is perfectly tracked by the two wingmen quadcopters. The leader itself is controlled through the pole placement control method with desired stability characteristics, while the two followers are controlled through a robust and adaptive LQR PI control method. The selected 3-D formation geometry and static stability are maintained under a number of possible perturbations. With this control scheme, the formation geometry may also be switched to any arbitrary shape during flight, provided a suitable collision avoidance mechanism is incorporated. In case of communication loss between the leader and any of the followers, the other follower provides the data, received from the leader, to the affected follower. The stability of the closed-loop system has been analyzed using singular values. The proposed approach for the tightly coupled formation flight of mini unmanned aerial vehicles has been validated with the help of extensive simulations using MATLAB/Simulink, which provided promising results.
KW - quadcopter
KW - robustness
KW - intelligent vehicles
KW - rotors
KW - mathematical model
KW - aerodynamics
KW - adaptation models
KW - vehicle dynamics
KW - unmanned aerial vehicle
KW - distributed control
KW - formation flight
KW - model following
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-146061
N1 - (c) 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works
VL - 4
SP - 397
EP - 406
ER -
TY - JOUR
A1 - Lugrin, Jean-Luc
A1 - Latoschik, Marc Erich
A1 - Habel, Michael
A1 - Roth, Daniel
A1 - Seufert, Christian
A1 - Grafe, Silke
T1 - Breaking Bad Behaviors: A New Tool for Learning Classroom Management Using Virtual Reality
JF - Frontiers in ICT
N2 - This article presents an immersive virtual reality (VR) system for training classroom management skills, with a specific focus on learning to manage disruptive student behavior in face-to-face, one-to-many teaching scenarios. The core of the system is a real-time 3D virtual simulation of a classroom populated by twenty-four semi-autonomous virtual students. The system has been designed as a companion tool for classroom management seminars in a syllabus for primary and secondary school teachers.
This will allow lecturers to link theory with practice using the medium of VR. The system is therefore designed for two users: a trainee teacher and an instructor supervising the training session. The teacher is immersed in a real-time 3D simulation of a classroom by means of a head-mounted display and headphones. The instructor operates a graphical desktop console, which renders a view of the class and the teacher, whose avatar movements are captured by a markerless tracking system. This console includes a 2D graphics menu with convenient behavior and feedback control mechanisms to provide human-guided training sessions. The system is built using low-cost consumer hardware and software. Its architecture and technical design are described in detail. A first evaluation confirms its conformance to critical usability requirements (i.e., safety and comfort, believability, simplicity, acceptability, extensibility, affordability, and mobility). Our initial results are promising and constitute the necessary first step toward a possible investigation of the efficiency and effectiveness of such a system in terms of learning outcomes and experience.
KW - virtual reality training
KW - immersive classroom management
KW - immersive classroom
KW - virtual agent interaction
KW - student simulation
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-147945
VL - 3
IS - 26
ER -
TY - JOUR
A1 - Baumeister, Joachim
A1 - Striffler, Albrecht
A1 - Brandt, Marc
A1 - Neumann, Michael
T1 - Collaborative Decision Support and Documentation in Chemical Safety with KnowSEC
JF - Journal of Cheminformatics
N2 - To protect human health and the environment, the European Union implemented the REACH regulation for chemical substances. REACH is an acronym for Registration, Evaluation, Authorization, and Restriction of Chemicals. Under REACH, the authorities have the task of assessing chemical substances, especially those that might pose a risk to human health or the environment. The work under REACH is scientifically, technically, and procedurally a complex and knowledge-intensive task that is jointly performed by the European Chemicals Agency and member state authorities in Europe. The assessment of substances under REACH conducted at the German Environment Agency is supported by the knowledge-based system KnowSEC, which is used for screening, documentation, and decision support when working on chemical substances. The software KnowSEC integrates advanced semantic technologies and strong problem-solving methods. It allows for collaborative work on substances in the context of the European REACH regulation. We discuss the applied methods and process models, and we report on experiences with the implementation and use of the system.
KW - decision support
KW - knowledge-based systems
KW - ontologies
KW - expert systems
KW - semantic technologies
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-146575
VL - 8
IS - 21
ER -
TY - JOUR
A1 - Strohmeier, Michael
A1 - Walter, Thomas
A1 - Rothe, Julian
A1 - Montenegro, Sergio
T1 - Ultra-wideband based pose estimation for small unmanned aerial vehicles
JF - IEEE Access
N2 - This paper proposes a 3-D local pose estimation system for a small Unmanned Aerial Vehicle (UAV) with a weight limit of 200 g and a very small footprint of 10 cm × 10 cm.
The system is realized by fusing 3-D position estimations from an Ultra-Wide Band (UWB) transceiver network with Inertial Measurement Unit (IMU) sensor data and data from a barometric pressure sensor. The 3-D position from the UWB network is estimated using Multi-Dimensional Scaling (MDS) and range measurements between the transceivers. The range measurements are obtained using Double-Sided Two-Way Ranging (DS-TWR), thus eliminating the need for an additional clock synchronization mechanism. The sensor fusion is accomplished using a loosely coupled Extended Kalman Filter (EKF) architecture. Extensive evaluation of the proposed system shows that a position accuracy with a Root-Mean-Square Error (RMSE) of 0.20 cm can be obtained. The orientation angle can be estimated with an RMSE of 1.93°.
KW - UAV
KW - navigation
KW - pose estimation
KW - distance measurement
KW - DecaWave
KW - extended Kalman filter
KW - UWB
Y1 - 2018
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-177503
VL - 6
ER -
TY - JOUR
A1 - Baier, Pablo A.
A1 - Baier-Saip, Jürgen A.
A1 - Schilling, Klaus
A1 - Oliveira, Jauvane C.
T1 - Simulator for Minimally Invasive Vascular Interventions: Hardware and Software
JF - Presence
N2 - In the present work, a simulation system is proposed that can be used as an educational tool by physicians for training basic skills of minimally invasive vascular interventions. In order to accomplish this objective, initially the physical model of the wire proposed by Konings has been improved. As a result, a simpler and more stable method was obtained to calculate the equilibrium configuration of the wire. In addition, a geometrical method is developed to perform relaxations. It is particularly useful when the wire is hindered in the physical method because of the boundary conditions. Then a recipe is given to merge the physical and the geometrical methods, resulting in efficient relaxations. Moreover, tests have shown that the shape of the virtual wire agrees with the experiment. The proposed algorithm allows real-time execution, and furthermore, the hardware needed to assemble the simulator has a low cost.
KW - simulation system
KW - educational tool
KW - invasive vascular interventions
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-140580
SN - 1531-3263
VL - 25
IS - 2
ER -
TY - THES
A1 - Hirth, Matthias Johannes Wilhem
T1 - Modeling Crowdsourcing Platforms - A Use-Case Driven Approach
T1 - Modellierung von Crowdsourcing-Plattformen anhand von Anwendungsfällen
N2 - Computer systems have replaced human workforce in many parts of everyday life, but there still exists a large number of tasks that cannot be automated yet. This also includes tasks which we consider to be rather simple, like the categorization of image content or subjective ratings. Traditionally, these tasks have been completed by designated employees or outsourced to specialized companies. However, recently the crowdsourcing paradigm has been applied more and more to complete such human-labor-intensive tasks. Crowdsourcing aims at leveraging the huge number of Internet users all around the globe, who form a potentially highly available, low-cost, and easily accessible workforce. To enable the distribution of work on a global scale, new web-based services emerged, so-called crowdsourcing platforms, that act as mediators between employers posting tasks and workers completing them. However, the crowdsourcing approach, especially the large anonymous worker crowd, results in two types of challenges.
On the one hand, there are technical challenges like the dimensioning of crowdsourcing platform infrastructure or the interconnection of crowdsourcing platforms and machine clouds to build hybrid services. On the other hand, there are conceptual challenges like identifying reliable workers or migrating traditional offline work to the crowdsourcing environment. To tackle these challenges, this monograph analyzes and models current crowdsourcing systems to optimize crowdsourcing workflows and the underlying infrastructure. First, a categorization of crowdsourcing tasks and platforms is developed to derive generalizable properties. Based on this categorization and an exemplary analysis of a commercial crowdsourcing platform, models for different aspects of crowdsourcing platforms and crowdsourcing mechanisms are developed. A special focus is put on quality assurance mechanisms for crowdsourcing tasks, where the models are used to assess the suitability and costs of existing approaches for different types of tasks. Further, a novel quality assurance mechanism solely based on user interactions is proposed and its feasibility is shown. The findings from the analysis of existing platforms, the derived models, and the developed quality assurance mechanisms are finally used to derive best practices for two crowdsourcing use-cases, crowdsourcing-based network measurements and crowdsourcing-based subjective user studies. These two exemplary use-cases cover aspects typical for a large range of crowdsourcing tasks and illustrate the potential benefits, but also the resulting challenges, of using crowdsourcing. With the ongoing digitalization and globalization of the labor markets, the crowdsourcing paradigm is expected to gain even more importance in the next years. This is already evident in the newly emerging fields of crowdsourcing, like enterprise crowdsourcing or mobile crowdsourcing. The models developed in the monograph enable platform providers to optimize their current systems and employers to optimize their workflows to increase their commercial success. Moreover, the results help to improve the general understanding of crowdsourcing systems, a key for identifying necessary adaptations and future improvements.
N2 - Computer haben menschliche Arbeitskräfte mittlerweile in vielen Bereichen des täglichen Lebens ersetzt. Dennoch gibt es immer noch eine große Anzahl von Aufgaben, die derzeit nicht oder nur teilweise automatisierbar sind. Hierzu gehören auch solche, welche als sehr einfach erachtet werden, beispielsweise das Kategorisieren von Bildinhalten oder subjektive Bewertungen. Traditionell wurden diese Aufgaben vorwiegend von eigens angestellten Mitarbeitern oder über Outsourcing gelöst. In den vergangenen Jahren wurde hierfür jedoch immer häufiger Crowdsourcing verwendet, wobei die große Anzahl an weltweiten Internetnutzern als hoch verfügbare, kostengünstige und einfach zu erreichende Arbeiterschaft eingesetzt wird. Um eine weltweite Verteilung von Arbeit zu ermöglichen, hat sich eine neue Art von Internetdienstleistern entwickelt, die sogenannten Crowdsourcingplattformen. Diese dienen als Vermittler zwischen Arbeitgebern, welche Aufgaben auf den Plattformen einstellen, und Arbeitnehmern, welche diese Aufgaben bearbeiten. Hierbei ergeben sich zwei Arten von Herausforderungen.
Einerseits entstehen Herausforderungen technischer Art, wie etwa Fragen bezüglich der Dimensionierung der Plattforminfrastruktur oder der Realisierung von Programmierschnittstellen zur Verbindung von Crowdsourcingplattformen mit anderen Cloudanbietern. Andererseits ergeben sich konzeptionelle Herausforderungen, wie etwa die Identifikation vertrauenswürdiger Arbeitnehmer oder Methoden zur Migration von traditionellen Arbeitsaufgaben in Crowdsourcing-basierte Arbeit. In dieser Monographie werden beide Arten von Herausforderungen adressiert. Hierzu werden aktuelle Crowdsourcingsysteme analysiert und modelliert, um, basierend auf den gewonnenen Erkenntnissen, Arbeitsabläufe im Crowdsourcing und die den Systemen zugrunde liegende Infrastruktur zu optimieren. Zunächst wird hierfür eine Kategorisierung von Crowdsourcing-Aufgaben und -Plattformen entwickelt, um generalisierbare Eigenschaften abzuleiten. Basierend auf dieser Kategorisierung und einer beispielhaften Analyse einer kommerziellen Crowdsourcingplattform werden Modelle entwickelt, die verschiedene Aspekte der Plattformen sowie der eingesetzten Mechanismen abbilden. Hierbei wird ein besonderer Fokus auf die Verlässlichkeit von Qualitätssicherungsmechanismen, deren Kosten und Einsetzbarkeit für verschiedene Aufgabentypen gelegt. Ferner wird ein neuer Qualitätssicherungsmechanismus vorgestellt und evaluiert, welcher lediglich auf den Interaktionen der Crowdsourcingarbeitnehmer mit der Nutzeroberfläche basiert. Die Erkenntnisse aus der Analyse existierender Plattformen, den abgeleiteten Modellen und dem entwickelten Qualitätssicherungsmechanismus fließen schließlich in konkrete Designempfehlungen für zwei exemplarische Crowdsourcinganwendungsfälle ein. Die beiden gewählten Anwendungsfälle decken Aspekte einer Vielzahl von Crowdsourcingaufgaben ab und zeigen sowohl die Vorteile als auch die Herausforderungen beim Einsatz von Crowdsourcing. Aufgrund der zunehmenden Digitalisierung und Globalisierung des Arbeitsmarktes ist es zu erwarten, dass Crowdsourcing in den nächsten Jahren noch weiter an Bedeutung gewinnt. Dies zeigt sich bereits daran, dass Crowdsourcingansätze mittlerweile vermehrt in Unternehmen oder im mobilen Umfeld eingesetzt werden. Die Modelle aus dieser Monographie ermöglichen Plattformbetreibern eine Optimierung ihrer Systeme und Arbeitgebern eine Optimierung ihrer Arbeitsabläufe. Weiterhin helfen die gewonnenen Erkenntnisse, das prinzipielle Verständnis von Crowdsourcingsystemen zu verbessern, was wiederum eine Grundvoraussetzung für das Erkennen von Anpassungsbedarf und Optimierungspotential ist.
T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 02/16
KW - Open Innovation
KW - Leistungsbewertung
KW - Optimierung
KW - Modellierung
KW - Studie
KW - Crowdsourcing
KW - Quality of Experience
KW - Modelling
KW - Performance Evaluation
KW - Optimization
KW - Modellierung
KW - Nutzerstudie
KW - Crowdsourcing
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-140726
SN - 1432-8801
ER -
TY - THES
A1 - Rygielski, Piotr
T1 - Flexible Modeling of Data Center Networks for Capacity Management
T1 - Elastische Modellierung von Rechenzentren-Netzen zwecks Kapazitätsverwaltung
N2 - Nowadays, data centers are becoming increasingly dynamic due to the common adoption of virtualization technologies. Systems can scale their capacity on demand by growing and shrinking their resources dynamically based on the current load.
However, the complexity and performance of modern data centers are influenced not only by the software architecture, middleware, and computing resources, but also by network virtualization, network protocols, network services, and configuration. The field of network virtualization is not as mature as server virtualization and there are multiple competing approaches and technologies. Performance modeling and prediction techniques provide a powerful tool to analyze the performance of modern data centers. However, given the wide variety of network virtualization approaches, no common approach exists for modeling and evaluating the performance of virtualized networks. The performance community has proposed multiple formalisms and models for evaluating the performance of infrastructures based on different network virtualization technologies. The existing performance models can be divided into two main categories: coarse-grained analytical models and highly detailed simulation models. Analytical performance models are normally defined at a high level of abstraction; they abstract many details of the real network and therefore have limited predictive power. On the other hand, simulation models are normally focused on a selected networking technology and take into account many specific performance-influencing factors, resulting in detailed models that are tightly bound to a given technology, infrastructure setup, or protocol stack. Existing models are inflexible, that is, they provide a single solution method without giving the user means to influence the solution accuracy and overhead. To allow for flexibility in the performance prediction, the user is required to build multiple different performance models to obtain multiple performance predictions. Each performance prediction may then have a different focus, different performance metrics, prediction accuracy, and solving time. The goal of this thesis is to develop a modeling approach that does not require the user to have experience in any of the applied performance modeling formalisms. The approach offers flexibility in modeling and analysis by balancing between (a) the generic character and low overhead of coarse-grained analytical models, and (b) the more detailed simulation models with higher prediction accuracy. The contributions of this thesis intersect with technologies and research areas such as software engineering, model-driven software development, domain-specific modeling, performance modeling and prediction, networking and data center networks, network virtualization, Software-Defined Networking (SDN), and Network Function Virtualization (NFV). The main contributions of this thesis constitute the Descartes Network Infrastructure (DNI) approach and include:
• Novel modeling abstractions for virtualized network infrastructures. This includes two meta-models that define modeling languages for modeling data center network performance. The DNI and miniDNI meta-models provide means for representing network infrastructures at two different abstraction levels. Regardless of which variant of the DNI meta-model is used, the modeling language provides generic modeling elements that allow describing the majority of existing and future network technologies, while at the same time abstracting factors that have low influence on the overall performance. I focus on SDN and NFV as examples of modern virtualization technologies.
• Network deployment meta-model: an interface between DNI and other meta-models that allows defining mappings between DNI and other descriptive models. The integration with other domain-specific models allows capturing behaviors that are not reflected in the DNI model, for example, software bottlenecks, server virtualization, and middleware overheads.
• Flexible model solving with model transformations. The transformations enable solving a DNI model by transforming it into a predictive model. The model transformations vary in size and complexity depending on the amount of data abstracted in the transformation process and provided to the solver. In this thesis, I contribute six transformations that transform DNI models into various predictive models based on the following modeling formalisms: (a) OMNeT++ simulation, (b) Queueing Petri Nets (QPNs), (c) Layered Queueing Networks (LQNs). For each of these formalisms, multiple predictive models are generated (e.g., models with different levels of detail): (a) two for OMNeT++, (b) two for QPNs, (c) two for LQNs. Some predictive models can be solved using multiple alternative solvers, resulting in up to ten different automated solving methods for a single DNI model.
• A model extraction method that supports the modeler in the modeling process by automatically prefilling the DNI model with the network traffic data. The contributed traffic profile abstraction and optimization method provides a trade-off by balancing between the size and the level of detail of the extracted profiles.
• A method for selecting feasible solving methods for a DNI model. The method proposes a set of solvers based on a trade-off analysis characterizing each transformation with respect to various parameters such as its specific limitations, expected prediction accuracy, expected run-time, required resources in terms of CPU and memory consumption, and scalability.
• An evaluation of the approach in the context of two realistic systems. I evaluate the approach with a focus on factors such as prediction of network capacity and interface throughput, applicability, and flexibility in trading off between prediction accuracy and solving time. Despite not focusing on the maximization of the prediction accuracy, I demonstrate that in the majority of cases the prediction error is low: up to 20% for uncalibrated models and up to 10% for calibrated models, depending on the solving technique.
In summary, this thesis presents the first approach to flexible run-time performance prediction in data center networks, including networks based on SDN. It provides the ability to flexibly balance between performance prediction accuracy and solving overhead. The approach provides the following key benefits:
• It is possible to predict the impact of changes in the data center network on the performance. The changes include changes in network topology, hardware configuration, traffic load, and application deployment.
• DNI can successfully model and predict the performance of multiple different network infrastructures, including proactive SDN scenarios.
• The prediction process is flexible, that is, it provides a balance between the granularity of the predictive models and the solving time. The decreased prediction accuracy is usually rewarded with savings in solving time and in the resources required for solving.
• The users are enabled to conduct performance analysis using multiple different prediction methods without requiring expertise and experience in each of the modeling formalisms.
The components of the DNI approach can also be applied to scenarios that are not considered in this thesis. The approach is generalizable and applicable to the following examples: (a) networks outside of data centers may be analyzed with DNI as long as the background traffic profile is known; (b) uncalibrated DNI models may serve as a basis for design-time performance analysis; (c) the method for extracting and compacting traffic profiles may be used for other, non-network workloads as well.
N2 - Durch Virtualisierung werden moderne Rechenzentren immer dynamischer. Systeme sind in der Lage, ihre Kapazität hoch- und runterzuskalieren, um die ankommende Last zu bedienen. Die Komplexität der modernen Systeme in Rechenzentren wird nicht nur von der Softwarearchitektur, Middleware und Rechenressourcen, sondern auch von der Netzwerkvirtualisierung beeinflusst. Netzwerkvirtualisierung ist noch nicht so ausgereift wie die Virtualisierung von Rechenressourcen und es existieren derzeit unterschiedliche Netzwerkvirtualisierungstechnologien. Man kann aber keine der Technologien als Standardvirtualisierung für Netzwerke bezeichnen. Die Auswahl von Ansätzen durch Performanzanalyse von Netzwerken stellt eine Herausforderung dar, weil existierende Ansätze sich mehrheitlich auf einzelne Virtualisierungstechniken fokussieren und es keinen universellen Ansatz für Performanzanalyse gibt, der alle Techniken in Betracht zieht. Die Forschungsgemeinschaft bietet verschiedene Performanzmodelle und Formalismen für die Evaluierung der Performanz von virtualisierten Netzwerken an. Die bekannten Ansätze können in zwei Gruppen aufgegliedert werden: grobdetaillierte analytische Modelle und feindetaillierte Simulationsmodelle. Die analytischen Performanzmodelle abstrahieren viele Details und liefern daher nur beschränkt nutzbare Performanzvorhersagen. Auf der anderen Seite fokussiert sich die Gruppe der simulationsbasierenden Modelle auf bestimmte Teile des Systems (z.B. Protokoll, Typ von Switches) und ignoriert dadurch das große Bild der Systemlandschaft. ...
KW - Modellierung
KW - Leistungsbewertung
KW - Netzwerk
KW - Meta-modeling
KW - Model transformation
KW - Performance analysis
KW - Simulation
Y1 - 2017
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-146235
ER -
TY - THES
A1 - Reitwießner, Christian
T1 - Multiobjective Optimization and Language Equations
T1 - Mehrkriterielle Optimierung und Sprachgleichungen
N2 - Praktische Optimierungsprobleme beinhalten oft mehrere gleichberechtigte, sich jedoch widersprechende Kriterien. Beispielsweise will man bei einer Reise zugleich möglichst schnell ankommen, sie soll aber auch nicht zu teuer sein. Im ersten Teil dieser Arbeit wird die algorithmische Beherrschbarkeit solcher mehrkriterieller Optimierungsprobleme behandelt. Es werden zunächst verschiedene Lösungsbegriffe diskutiert und auf ihre Schwierigkeit hin verglichen. Interessanterweise stellt sich heraus, dass diese Begriffe für ein einkriterielles Problem stets gleich schwer sind, sie sich ab zwei Kriterien allerdings stark unterscheiden können (außer es gilt P = NP). In diesem Zusammenhang wird auch die Beziehung zwischen Such- und Entscheidungsproblemen im Allgemeinen untersucht. Schließlich werden neue und verbesserte Approximationsalgorithmen für verschiedene Varianten des Problems des Handlungsreisenden gefunden.
Dabei wird mit Mitteln der Diskrepanztheorie eine Technik entwickelt, die ein grundlegendes Hindernis der mehrkriteriellen Optimierung aus dem Weg schafft: gegebene Lösungen so zu kombinieren, dass die neue Lösung in allen Kriterien möglichst ausgewogen ist und gleichzeitig die Struktur der Lösungen nicht zu stark zerstört wird. Der zweite Teil der Arbeit widmet sich verschiedenen Aspekten von Gleichungssystemen für (formale) Sprachen. Einerseits werden konjunktive und Boolesche Grammatiken untersucht. Diese sind Erweiterungen der kontextfreien Grammatiken um explizite Durchschnitts- und Komplementoperationen. Es wird unter anderem gezeigt, dass man bei konjunktiven Grammatiken die Vereinigungsoperation stark einschränken kann, ohne dabei die erzeugte Sprache zu ändern. Außerdem werden bestimmte Schaltkreise untersucht, deren Gatter keine Wahrheitswerte, sondern Mengen von Zahlen berechnen. Für diese Schaltkreise wird das Äquivalenzproblem betrachtet, also die Frage, ob zwei gegebene Schaltkreise die gleiche Menge berechnen oder nicht. Es stellt sich heraus, dass, abhängig von den erlaubten Gattertypen, die Komplexität des Äquivalenzproblems stark variiert und für verschiedene Komplexitätsklassen vollständig ist, also als (parametrisierter) Vertreter für diese Klassen stehen kann.
N2 - Practical optimization problems often comprise several incomparable and conflicting objectives. When booking a trip using several means of transport, for instance, it should be fast and at the same time not too expensive. The first part of this thesis is concerned with the algorithmic solvability of such multiobjective optimization problems. Several solution notions are discussed and compared with respect to their difficulty. Interestingly, these solution notions are always equally difficult for a single-objective problem, while they can differ considerably already for two objectives (unless P = NP). In this context, the difference between search and decision problems is also investigated in general. Furthermore, new and improved approximation algorithms for several variants of the traveling salesperson problem are presented. Using tools from discrepancy theory, a general technique is developed that helps to avoid an obstacle that is often hindering in multiobjective approximation: the problem of combining two solutions such that the new solution is balanced in all objectives and also mostly retains the structure of the original solutions. The second part of this thesis is dedicated to several aspects of systems of equations for (formal) languages. Firstly, conjunctive and Boolean grammars are studied, which are extensions of context-free grammars by explicit intersection and complementation operations, respectively. Among other results, it is shown that one can considerably restrict the union operation on conjunctive grammars without changing the generated language. Secondly, certain circuits are investigated whose gates do not compute Boolean values but sets of natural numbers. For these circuits, the equivalence problem is studied, i.e., the problem of deciding whether two given circuits compute the same set or not. It is shown that, depending on the allowed types of gates, this problem is complete for several different complexity classes and can thus be seen as a (parametrized) representative for all those classes.
KW - Mehrkriterielle Optimierung
KW - Approximationsalgorithmus
KW - Travelling-salesman-Problem
KW - Boolesche Grammatik
KW - Theoretische Informatik
KW - Komplexitätstheorie
KW - Boolean Grammar
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-70146
ER -
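
Editorial aside on the Reitwießner record above: the "incomparable and conflicting objectives" in its abstract refer to the standard Pareto-optimality notion of multiobjective optimization. The following minimal Python sketch is purely illustrative and is not taken from the thesis; the function name and the example trip data are assumptions chosen to mirror the trip-booking example in the abstract.

# Illustrative sketch only (not from the thesis): compute the Pareto front
# of a small two-objective instance, both objectives to be minimized.

def pareto_front(solutions):
    """Return the solutions that are not dominated by any other solution.

    A solution b dominates a if b is no worse in every objective and
    strictly better in at least one objective.
    """
    front = []
    for a in solutions:
        dominated = any(
            all(x <= y for x, y in zip(b, a)) and any(x < y for x, y in zip(b, a))
            for b in solutions
        )
        if not dominated:
            front.append(a)
    return front

if __name__ == "__main__":
    # Hypothetical trips described as (travel time in hours, price in euros).
    trips = [(2.0, 120.0), (3.5, 80.0), (2.5, 150.0), (5.0, 60.0), (3.5, 90.0)]
    print(pareto_front(trips))  # [(2.0, 120.0), (3.5, 80.0), (5.0, 60.0)]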