TY - CHAP A1 - Truman, Samuel A1 - von Mammen, Sebastian T1 - Interactive Self-Assembling Agent Ensembles T2 - Proceedings of the 1st Games Technology Summit N2 - In this paper, we bridge the gap between procedural content generation (PCG) and user-generated content (UGC) by proposing and demonstrating an interactive agent-based model of self-assembling ensembles that can be directed through user input. We motivate these efforts by considering the opportunities technology provides to pursue game designs based on corresponding game design frameworks. We present three different use cases of the proposed model that emphasize its potential to (1) self-assemble into predefined 3D graphical assets, (2) define new structures in the context of virtual environments by self-assembling layers on the surfaces of arbitrary 3D objects, and (3) allow novel structures to self-assemble considering only the model’s configuration and no external dependencies. To address the performance restrictions in computer games, we realized the prototypical model implementation by means of an efficient entity component system (ECS). We conclude the paper with an outlook on future steps to further explore novel interactive, dynamic PCG mechanics and to ensure their efficiency. KW - procedural content generation KW - user-generated content KW - game mechanics KW - agent-based models KW - self-assembly Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-246032 ER - TY - JOUR A1 - Hein, Rebecca M. A1 - Latoschik, Marc Erich A1 - Wienrich, Carolin T1 - Inter- and transcultural learning in social virtual reality: a proposal for an inter- and transcultural virtual object database to be used in the implementation, reflection, and evaluation of virtual encounters JF - Multimodal Technologies and Interaction N2 - Visual stimuli are frequently used to improve memory, language learning or perception, and understanding of metacognitive processes. However, in virtual reality (VR), there are few systematically and empirically derived databases. This paper proposes the first collection of virtual objects based on empirical evaluation for inter- and transcultural encounters between English- and German-speaking learners. We used explicit and implicit measurement methods to identify cultural associations and the degree of stereotypical perception for each virtual stimulus (n = 293) through two online studies, including native German and English-speaking participants. The analysis resulted in a final well-describable database of 128 objects (called InteractionSuitcase). In future applications, the objects can be used as a great interaction or conversation asset and behavioral measurement tool in social VR applications, especially in the field of foreign language education. For example, encounters can use the objects to describe their culture, or teachers can intuitively assess stereotyped attitudes of the encounters. KW - virtual stimuli KW - implicit association test KW - virtual reality KW - social VR KW - InteractionSuitcase Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-278974 SN - 2414-4088 VL - 6 IS - 7 ER - TY - THES A1 - Ostermayer, Ludwig T1 - Integration of Prolog and Java with the Connector Architecture CAPJa T1 - Integration von Prolog und Java mit Hilfe der Connector Architecture CAPJa N2 - Modern software is often realized as a modular combination of subsystems for, e. g., knowledge management, visualization, verification, or the interaction with users.
As a result, software libraries from possibly different programming languages have to work together. The case is even more complex if different programming paradigms have to be combined. This type of diversification of programming languages and paradigms in just one software application can only be mastered by mechanisms for a seamless integration of the involved programming languages. However, the integration of the common logic programming language Prolog and the popular object-oriented programming language Java is complicated by various interoperability problems which stem on the one hand from the paradigmatic gap between the programming languages, and on the other hand, from the diversity of the available Prolog systems. The subject of the thesis is the investigation of novel mechanisms for the integration of logic programming in Prolog and object-oriented programming in Java. We are particularly interested in an object-oriented, uniform approach which is not specific to just one Prolog system. Therefore, we have first identified several important criteria for the seamless integration of Prolog and Java from the object-oriented perspective. The main contribution of the thesis is a novel integration framework called the Connector Architecture for Prolog and Java (CAPJa). The framework is completely implemented in Java and imposes no modifications to the Java Virtual Machine or Prolog. CAPJa provides a semi-automated mechanism for the integration of Prolog predicates into Java. For compact, readable, and object-oriented queries to Prolog, CAPJa exploits lambda expressions with conditional and relational operators in Java. The communication between Java and Prolog is based on a fully automated mapping of Java objects to Prolog terms, and vice versa. In Java, an extensible system of gateways provides connectivity with various Prolog systems and, moreover, makes any connected Prolog system easily interchangeable, without major adaptation in Java. N2 - Moderne Software ist oft modular zusammengesetzt aus Subsystemen zur Wissensverwaltung, Visualisierung, Verifikation oder Benutzerinteraktion. Dabei müssen Programmbibliotheken aus möglicherweise verschiedenen Programmiersprachen miteinander zusammenarbeiten. Noch komplizierter ist der Fall, wenn auch noch verschiedene Programmierparadigmen miteinander kombiniert werden. Diese Art der Diversifikation an Programmiersprachen und -paradigmen in nur einer Software kann nur von nahtlosen Integrationsmechanismen für die beteiligten Programmiersprachen gemeistert werden. Gerade die Einbindung der gängigen Logikprogrammiersprache Prolog und der populären objektorientierten Programmiersprache Java wird durch zahlreiche Kompatibilitätsprobleme erschwert, welche auf der einen Seite von paradigmatischen Unterschieden der beiden Programmiersprachen herrühren und auf der anderen Seite von der Vielfalt der erhältlichen Prologimplementierungen. Gegenstand dieser Arbeit ist die Untersuchung von neuartigen Mechanismen für die Zusammenführung von Logikprogrammierung in Prolog und objektorientierter Programmierung in Java. Besonders interessiert uns dabei ein objektorientierter, einheitlicher Ansatz, der nicht auf eine konkrete Prologimplementierung festgelegt ist. Aus diesem Grund haben wir zunächst wichtige Kriterien für die nahtlose Integration von Prolog und Java aus der objektorientierten Sicht identifiziert. Der Hauptbeitrag dieser Arbeit ist ein neuartiges Integrationssystem, welches Connector Architecture for Prolog and Java (CAPJa) heißt.
Das System ist komplett in Java implementiert und benötigt keine Anpassungen der Java Virtual Machine oder Prolog. CAPJa stellt einen halbautomatischen Mechanismus zur Vernetzung von Prolog Prädikaten mit Java zur Verfügung. Für kompakte, lesbare und objektorientierte Anfragen an Prolog nutzt CAPJa Lambdaausdrücke mit logischen und relationalen Operatoren in Java. Die Kommunikation zwischen Java und Prolog basiert auf einer automatisierten Abbildung von Java Objekten auf Prolog Terme, und umgekehrt. In Java bietet ein erweiterbares System von Schnittstellen Konnektivität zu einer Vielzahl an Prologimplementierungen und macht darüber hinaus jede verbundene Prologimplementierung einfach austauschbar, und zwar ohne größere Anpassung in Java. KW - Logische Programmierung KW - Objektorientierte Programmierung KW - PROLOG KW - Java KW - Multi-Paradigm Programming KW - Logic Programming KW - Object-Oriented Programming KW - Multi-Paradigm Programming Framework Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-150713 ER - TY - JOUR A1 - Riedmann, Anna A1 - Schaper, Philipp A1 - Lugrin, Birgit T1 - Integration of a social robot and gamification in adult learning and effects on motivation, engagement and performance JF - AI & Society N2 - Learning is a central component of human life and essential for personal development. Therefore, utilizing new technologies in the learning context and exploring their combined potential are considered essential to support self-directed learning in a digital age. A learning environment can be expanded by various technical and content-related aspects. Gamification in the form of elements from video games offers a potential concept to support the learning process. This can be supplemented by technology-supported learning. While the use of tablets is already widespread in the learning context, the integration of a social robot can provide new perspectives on the learning process. However, simply adding new technologies such as social robots or gamification to existing systems may not automatically result in a better learning environment. In the present study, game elements as well as a social robot were integrated separately and conjointly into a learning environment for basic Spanish skills, with a follow-up on retained knowledge. This allowed us to investigate the respective and combined effects of both expansions on motivation, engagement and learning effect. This approach should provide insights into the integration of both additions in an adult learning context. We found that the additions of game elements and the robot did not significantly improve learning, engagement or motivation. Based on these results and a literature review, we outline relevant factors for meaningful integration of gamification and social robots in learning environments in adult learning. KW - social robot KW - gamification KW - technology-supported learning KW - adult learning Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-324208 SN - 0951-5666 ER - TY - JOUR A1 - Li, Ningbo A1 - Guan, Lianwu A1 - Gao, Yanbin A1 - Du, Shitong A1 - Wu, Menghao A1 - Guang, Xingxing A1 - Cong, Xiaodan T1 - Indoor and outdoor low-cost seamless integrated navigation system based on the integration of INS/GNSS/LIDAR system JF - Remote Sensing N2 - Global Navigation Satellite System (GNSS) provides accurate positioning data for vehicular navigation in open outdoor environments.
In an indoor environment, Light Detection and Ranging (LIDAR) Simultaneous Localization and Mapping (SLAM) establishes a two-dimensional map and provides positioning data. However, LIDAR can only provide relative positioning data and it cannot directly provide the latitude and longitude of the current position. As a consequence, GNSS/Inertial Navigation System (INS) integrated navigation could be employed outdoors, while the indoor part makes use of INS/LIDAR integrated navigation, and the corresponding switching navigation will make the indoor and outdoor positioning consistent. In addition, when the vehicle enters the garage, the GNSS signal will be blurred for a while and then disappear. Ambiguous GNSS satellite signals will lead to the continuous distortion or overall drift of the positioning trajectory in the indoor condition. Therefore, an INS/LIDAR seamless integrated navigation algorithm and a switching algorithm based on the vehicle navigation system are designed. According to the experimental data, the positioning accuracy of the INS/LIDAR navigation algorithm in the simulated environmental experiment is 50% higher than that of the Dead Reckoning (DR) algorithm. Besides, the switching algorithm developed based on the INS/LIDAR integrated navigation algorithm can achieve an 80% success rate in navigation mode switching. KW - vehicular navigation KW - GNSS/INS integrated navigation KW - INS/LIDAR integrated navigation KW - switching navigation Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-216229 SN - 2072-4292 VL - 12 IS - 19 ER - TY - JOUR A1 - Schlör, Daniel A1 - Ring, Markus A1 - Hotho, Andreas T1 - iNALU: Improved Neural Arithmetic Logic Unit JF - Frontiers in Artificial Intelligence N2 - Neural networks have to capture mathematical relationships in order to learn various tasks. They approximate these relations implicitly and therefore often do not generalize well. The recently proposed Neural Arithmetic Logic Unit (NALU) is a novel neural architecture which is able to explicitly represent the mathematical relationships by the units of the network to learn operations such as summation, subtraction or multiplication. Although NALUs have been shown to perform well on various downstream tasks, an in-depth analysis reveals practical shortcomings by design, such as the inability to multiply or divide negative input values or training stability issues for deeper networks. We address these issues and propose an improved model architecture. We evaluate our model empirically in various settings from learning basic arithmetic operations to more complex functions. Our experiments indicate that our model solves stability issues and outperforms the original NALU model in terms of arithmetic precision and convergence. KW - neural networks KW - machine learning KW - arithmetic calculations KW - neural architecture KW - experimental evaluation Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-212301 SN - 2624-8212 VL - 3 ER - TY - JOUR A1 - Lopez-Arreguin, A. J. R. A1 - Montenegro, S. T1 - Improving engineering models of terramechanics for planetary exploration JF - Results in Engineering N2 - This short letter proposes more consolidated explicit solutions for the forces and torques acting on typical rover wheels, which can be used as a method to determine their average mobility characteristics in planetary soils.
The closed-loop solutions are based on one of the verified methods, but in contrast to the previous ones, the observables are decoupled, requiring fewer physical parameters to be measured. As a result, we show that with knowledge of terrain properties, wheel driving performance relies on a single observable only. Because of their generality, the formulated equations established here can have further implications in autonomy and control of rovers or planetary soil characterization. KW - Wheel KW - Terramechanics KW - Forces KW - Torque KW - Robotics Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-202490 VL - 3 ER - TY - CHAP A1 - Sanusi, Khaleel Asyraaf Mat A1 - Klemke, Roland T1 - Immersive Multimodal Environments for Psychomotor Skills Training T2 - Proceedings of the 1st Games Technology Summit N2 - Modern immersive multimodal technologies enable learners to become fully immersed in various learning situations in a way that feels like experiencing an authentic learning environment. These environments also allow the collection of multimodal data, which can be used with artificial intelligence to further improve the immersion and learning outcomes. The use of artificial intelligence has been widely explored for the interpretation of multimodal data collected from multiple sensors, thus giving insights to support learners’ performance by providing personalised feedback. In this paper, we present a conceptual approach for creating immersive learning environments, integrated with a multi-sensor setup to help learners improve their psychomotor skills in a remote setting. KW - immersive learning technologies KW - multimodal learning KW - sensor devices KW - artificial intelligence KW - psychomotor training Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-246016 ER - TY - RPRT A1 - Vomhoff, Viktoria A1 - Geißler, Stefan A1 - Hoßfeld, Tobias T1 - Identification of Signaling Patterns in Mobile IoT Signaling Traffic T2 - Würzburg Workshop on Next-Generation Communication Networks (WueWoWas'22) N2 - We attempt to identify sequences of signaling dialogs in order to strengthen our understanding of the signaling behavior of IoT devices by examining a dataset containing over 270,000 distinct IoT devices whose signaling traffic has been observed over a 31-day period in a 2G network [4]. We propose a set of rules that allows the assembly of signaling dialogs into so-called sessions in order to identify common patterns and lay the foundation for future research in the areas of traffic modeling and anomaly detection. KW - Datennetz KW - IoT Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-280819 ER - TY - THES A1 - Aschenbrenner, Doris T1 - Human Robot Interaction Concepts for Human Supervisory Control and Telemaintenance Applications in an Industry 4.0 Environment T1 - Mensch-Roboter-Interaktionskonzepte für Fernsteuerungs- und Fernwartungsanwendungen in einer Industrie 4.0 Umgebung N2 - While teleoperation of technically highly sophisticated systems has already been a wide field of research, especially for space and robotics applications, the automation industry has not yet benefited from its results. Besides the established fields of application, production lines with industrial robots and the surrounding plant components also need to be remotely accessible. This is especially critical for maintenance or if an unexpected problem cannot be solved by the local specialists.
Special machine manufacturers, especially robotics companies, sell their technology worldwide. Some factories, for example in emerging economies, lack qualified personnel for repair and maintenance tasks. When a severe failure occurs, an expert of the manufacturer needs to fly there, which leads to long down times of the machine or even the whole production line. With the development of data networks, a huge part of those travels can be omitted, if appropriate teleoperation equipment is provided. This thesis describes the development of a telemaintenance system, which was established in an active production line for research purposes. The customer production site of Braun in Marktheidenfeld, a factory which belongs to Procter & Gamble, consists of a six-axis cartesian industrial robot by KUKA Industries, a two-component injection molding system and an assembly unit. The plant produces plastic parts for electric toothbrushes. In the research projects "MainTelRob" and "Bayern.digital", during which this plant was utilised, the Zentrum für Telematik e.V. (ZfT) and its project partners develop novel technical approaches and procedures for modern telemaintenance. The term "telemaintenance" hereby refers to the integration of computer science and communication technologies into the maintenance strategy. It is particularly interesting for high-grade capital-intensive goods like industrial robots. Typical telemaintenance tasks are for example the analysis of a robot failure or difficult repair operations. The service department of KUKA Industries is responsible for the worldwide distributed customers who own more than one robot. Currently such tasks are offered via phone support and service staff which travels abroad. They want to expand their service activities on telemaintenance and struggle with the high demands of teleoperation especially regarding security infrastructure. In addition, the facility in Marktheidenfeld has to keep up with the high international standards of Procter & Gamble and wants to minimize machine downtimes. Like 71.6 % of all German companies, P&G sees a huge potential for early information on their production system, but complains about the insufficient quality and the lack of currentness of data. The main research focus of this work lies on the human machine interface for all human tasks in a telemaintenance setup. This thesis provides own work in the use of a mobile device in context of maintenance, describes new tools on asynchronous remote analysis and puts all parts together in an integrated telemaintenance infrastructure. With the help of Augmented Reality, the user performance and satisfaction could be raised. A special regard is put upon the situation awareness of the remote expert realized by different camera viewpoints. 
In detail the work consists of: - Support of maintenance tasks with a mobile device - Development and evaluation of a context-aware inspection tool - Comparison of a new touch-based mobile robot programming device to the former teach pendant - Study on Augmented Reality support for repair tasks with a mobile device - Condition monitoring for a specific plant with industrial robot - Human computer interaction for remote analysis of a single plant cycle - A big data analysis tool for a multitude of cycles and similar plants - 3D process visualization for a specific plant cycle with additional virtual information - Network architecture in hardware, software and network infrastructure - Mobile device computer supported collaborative work for telemaintenance - Motor exchange telemaintenance example in running production environment - Augmented reality supported remote plant visualization for better situation awareness N2 - Die Fernsteuerung technisch hochentwickelter Systeme ist seit vielen Jahren ein breites Forschungsfeld, vor allem im Bereich von Weltraum- und Robotikanwendungen. Allerdings hat die Automatisierungsindustrie bislang zu wenig von den Ergebnissen dieses Forschungsgebiets profitiert. Auch Fertigungslinien mit Industrierobotern und weiterer Anlagenkomponenten müssen über die Ferne zugänglich sein, besonders bei Wartungsfällen oder wenn unvorhergesehene Probleme nicht von den lokalen Spezialisten gelöst werden können. Hersteller von Sondermaschinen wie Robotikfirmen verkaufen ihre Technologie weltweit. Kunden dieser Firmen besitzen beispielsweise Fabriken in Schwellenländern, wo es an qualifizierten Personal für Reparatur und Wartung mangelt. Wenn ein ernster Fehler auftaucht, muss daher ein Experte des Sondermaschinenherstellers zum Kunden fliegen. Das führt zu langen Stillstandzeiten der Maschine. Durch die Weiterentwicklung der Datennetze könnte ein großer Teil dieser Reisen unterbleiben, wenn eine passende Fernwartungsinfrastruktur vorliegen würde. Diese Arbeit beschreibt die Entwicklung eines Fernwartungssystems, welches in einer aktiven Produktionsumgebung für Forschungszwecke eingerichtet wurde. Die Fertigungsanlage des Kunden wurde von Procter & Gamble in Marktheidenfeld zur Verfügung gestellt und besteht aus einem sechsachsigen, kartesischen Industrieroboter von KUKA Industries, einer Zweikomponentenspritzgussanlage und einer Montageeinheit. Die Anlage produziert Plastikteile für elektrische Zahnbürsten. Diese Anlage wurde im Rahmen der Forschungsprojekte "MainTelRob" und "Bayern.digital" verwendet, in denen das Zentrum für Telematik e.V. (ZfT) und seine Projektpartner neue Ansätze und Prozeduren für moderne Fernwartungs-Technologien entwickeln. Fernwartung bedeutet für uns die umfassende Integration von Informatik und Kommunikationstechnologien in der Wartungsstrategie. Das ist vor allem für hochentwickelte, kapitalintensive Güter wie Industrierobotern interessant. Typische Fernwartungsaufgaben sind beispielsweise die Analyse von Roboterfehlermeldungen oder schwierige Reparaturmaßnahmen. Die Service-Abteilung von KUKA Industries ist für die weltweit verteilten Kunden zuständig, die teilweise auch mehr als einen Roboter besitzen. Aktuell werden derartige Aufgaben per Telefonauskunft oder mobilen Servicekräften, die zum Kunden reisen, erledigt. Will man diese komplizierten Aufgaben durch Fernwartung ersetzen um die Serviceaktivitäten auszuweiten muss man mit den hohen Anforderungen von Fernsteuerung zurechtkommen, besonders in Bezug auf Security Infrastruktur. 
Eine derartige umfassende Herangehensweise an Fernwartung bietet aber auch einen lokalen Mehrwert beim Kunden: Die Fabrik in Marktheidenfeld muss den hohen internationalen Standards von Procter & Gamble folgen und will daher die Stillstandzeiten weiter verringern. Wie 71,6 Prozent aller deutschen Unternehmen sieht auch P&G Marktheidenfeld ein großes Potential für frühe Informationen aus ihrem Produktionssystem, haben aber aktuell noch Probleme mit der Aktualität und Qualität dieser Daten. Der Hauptfokus der hier vorgestellten Forschung liegt auf der Mensch-Maschine-Schnittstelle für alle Aufgaben eines umfassenden Fernwartungskontextes. Diese Arbeit stellt die eigene Arbeiten bei der Verwendung mobiler Endgeräte im Kontext der Wartung und neue Softwarewerkzeuge für die asynchrone Fernanalyse vor und integriert diese Aspekte in eine Fernwartungsinfrastruktur. In diesem Kontext kann gezeigt werden, dass der Einsatz von Augmented Reality die Nutzerleistung und gleichzeitig die Zufriedenheit steigern kann. Dabei wird auf das sogenannte "situative Bewusstsein" des entfernten Experten besonders Wert gelegt. Im Detail besteht die Arbeit aus: - Unterstützung von Wartungsaufgaben mit mobilen Endgeräten - Entwicklung und Evaluation kontextsensitiver Inspektionssoftware - Vergleich von touch-basierten Roboterprogrammierung mit der Vorgängerversion des Programmierhandgeräts - Studien über die Unterstützung von Reparaturaufgaben durch Augmented Reality - Zustandsüberwachung für eine spezielle Anlage mit Industrieroboter - Mensch-Maschine Interaktion für die Teleanalyse eines Produktionszyklus - Grafische Big Data Analyse einer Vielzahl von Produktionszyklen - 3D Prozess Visualisierung und Anreicherung mit virtuellen Informationen - Hardware, Software und Netzwerkarchitektur für die Fernwartung - Computerunterstützte Zusammenarbeit mit Verwendung mobiler Endgeräte für die Fernwartung - Fernwartungsbeispiel: Durchführung eines Motortauschs in der laufenden Produktion - Augmented Reality unterstütze Visualisierung des Anlagenkontextes für die Steigerung des situativen Bewusstseins T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 13 KW - Fernwartung KW - Robotik KW - Mensch-Maschine-Schnittstelle KW - Erweiterte Realität KW - Situation Awareness KW - Industrie 4.0 KW - Industrial internet Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-150520 SN - 978-3-945459-18-8 ER - TY - RPRT A1 - Funda, Christoph A1 - Konheiser, Tobias A1 - German, Reinhard A1 - Hielscher, Kai-Steffen T1 - How to Model and Predict the Scalability of a Hardware-In-The-Loop Test Bench for Data Re-Injection? T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - This paper describes a novel application of an empirical network calculus model based on measurements of a hardware-in-the-loop (HIL) test system. The aim is to predict the performance of a HIL test bench for open-loop re-injection in the context of scalability. HIL test benches are distributed computer systems including software, hardware, and networking devices. They are used to validate complex technical systems, but have not yet been system under study themselves. Our approach is to use measurements from the HIL system to create an empirical model for arrival and service curves. 
We predict the performance and design the previously unknown parameters of the HIL simulator with network calculus (NC), namely the buffer sizes and the minimum needed pre-buffer time for the playback buffer. We furthermore show, that it is possible to estimate the CPU load from arrival and service-curves based on the utilization theorem, and hence estimate the scalability of the HIL system in the context of the number of sensor streams. KW - hardware-in-the-loop simulation KW - computer performance evaluation KW - network calculus KW - scalability evaluation Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322150 ER - TY - THES A1 - Duelli, Michael T1 - Heuristic Design and Provisioning of Resilient Multi-Layer Networks T1 - Heuristische Planung und Betrieb von ausfallsicheren Mehrschichtnetzen N2 - To jointly provide different services/technologies, like IP and Ethernet or IP and SDH/SONET, in a single network, equipment of multiple technologies needs to be deployed to the sites/Points of Presence (PoP) and interconnected with each other. Therein, a technology may provide transport functionality to other technologies and increase the number of available resources by using multiplexing techniques. By providing its own switching functionality, each technology creates connections in a logical layer which leads to the notion of multi-layer networks. The design of such networks comprises the deployment and interconnection of components to suit to given traffic demands. To prevent traffic loss due to failures of networking equipment, protection mechanisms need to be established. In multi-layer networks, protection usually can be applied in any of the considered layers. In turn, the hierarchical structure of multi-layer networks also bears shared risk groups (SRG). To achieve a cost-optimal resilient network, an appropriate combination of multiplexing techniques, technologies, and their interconnections needs to be found. Thus, network design is a combinatorial problem with a large parameter and solution space. After the design stage, the resources of a multi-layer network can be provided to traffic demands. Especially, dynamic capacity provisioning requires interaction of sites and layers, as well as accurate retrieval of constraint information. In recent years, generalized multiprotocol label switching (GMPLS) and path computation elements (PCE) have emerged as possible approaches for these challenges. Like the design, the provisioning of multi-layer networks comprises a variety of optimization parameters, like blocking probability, resilience, and energy efficiency. In this work, we introduce several efficient heuristics to approach the considered optimization problems. We perform capital expenditure (CAPEX)-aware design of multi-layer networks from scratch, based on IST NOBEL phase 2 project's cost and equipment data. We comprise traffic and resilience requirements in different and multiple layers as well as different network architectures. On top of the designed networks, we consider the dynamic provisioning of multi-layer traffic based on the GMPLS and PCE architecture. We evaluate different PCE deployments, information retrieval strategies, and re-optimization. Finally, we show how information about provisioning utilization can be used to provide a feedback for network design. N2 - Um in einem Netz verschiedene Dienste/Schichten, z.B. 
IP und Ethernet oder IP und SDH/SONET, parallel anbieten zu können, müssen Komponenten mehrerer Technologien an den Standorten verbaut und miteinander verbunden werden. Hierbei kann eine Technologie eine Transportschicht für andere Technologien fungieren und die Zahl der verfügbaren Ressourcen durch Multiplex Techniken erhöhen. Durch die Bereitstellung eigener Switching Funktionalität erzeugt jede Technologie Verbindungen in einer logischen Schicht. Dies führt zu der Bezeichnung Mehrschichtnetz (engl. multi-layer network). Die Planung solcher Netze hat das Verbauen und Verbinden von Komponenten zum Ziel, so dass gegebene Verkehrsströme realisiert werden können. Um Unterbrechungen der Verkehrsströme aufgrund von Fehlern in den Netzkomponenten zu verhinden, müssen Schutzmechanismen eingebaut werden. In Mehrschichtnetzen können solche Schutzmechanismen in jeder beliebigen Schicht betrachtet werden. Allerdings birgt die hierarchische Struktur von Mehrschichtnetzen das Risiko von shared risk groups (SRG). Um ein Kosten-optimales ausfallsicheres Netz zu erhalten, muss eine passende Kombination aus Multiplex Techniken, Technologien und deren Verbindungen gefunden werden. Die Netzplanung ist daher ein kombinatorisches Problem mit einem großen Parameter- und Lösungsraum. Nach der Planungsphase können die Ressourcen eines Mehrschichtnetzes für Verkehrsströme vorgehalten werden. Die Betrachtung von dynamischer Kapazitätsanforderungen erfordert die Interaktion von Knoten und Schichten sowie akkurate Gewinnung von Informationen zur Auslastung. In jüngster Zeit sind das Generalized Multiprotocol Label Switching (GMPLS) und das Path Computation Element (PCE) als mögliche Lösungsansätze für diese Herausforderungen entstanden. Wie in der Planung beinhaltet auch der Betrieb von Mehrschichtnetzen eine Vielzahl von Optimierungsparametern, wie die Blockierungswahrscheinlichkeit, Ausfallsicherheit und Energie-Effizienz. In dieser Arbeit, führen wir verschiedene effiziente Heuristiken ein, um die betrachteten Optimierungsprobleme anzugehen. Wir planen Mehrschichtnetze von Grund auf und minimieren hinsichtlich der Anschaffungskosten basierend auf den Kosten- und Komponenten-Daten des IST NOBEL Phase 2 Projekts. Wir berücksichtigen Anforderungen der Verkehrsströme und Ausfallsicherheit in verschiedenen Schichten und in mehreren Schichten gleichzeitig sowie verschiedene Netzarchitekturen. Aufsetzend auf dem geplanten Netz betrachten wir den Betrieb von Mehrschichtnetzen mit dynamischen Verkehr basiert auf der GMPLS und PCE Architektur. Wir bewerten verschiedene PCE Installationen, Strategien zur Informationsgewinnung und Re-Optimierung. Abschließend, zeigen wir wie Information über die Auslastung im Betrieb genutzt werden kann um Rückmeldung an die Netzplanung zu geben. 
T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 02/12 KW - Mehrschichtsystem KW - Planung KW - Ressourcenmanagement KW - Ausfallsicheres System KW - Mehrschichtnetze KW - Pfadberechnungselement KW - Ausfallsicherheit KW - Multi-Layer KW - Design KW - Resilience KW - Resource Management KW - Path Computation Element Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-69433 ER - TY - THES A1 - Schmidt, Marco T1 - Ground Station Networks for Efficient Operation of Distributed Small Satellite Systems T1 - Effizienter Betrieb von Verteilten Kleinsatelliten-Systemen mit Bodenstationsnetzwerken N2 - The field of small satellite formations and constellations has attracted growing attention, based on recent advances in small satellite engineering. The utilization of distributed space systems allows the realization of innovative applications and will enable improved temporal and spatial resolution in observation scenarios. On the other hand, this new paradigm poses a variety of research challenges. In this monograph, new networking concepts for space missions are presented, using networks of ground stations. The developed approaches combine ground station resources in a coordinated way to achieve more robust and efficient communication links. Within this thesis, the following topics were elaborated to improve the performance in distributed space missions: Appropriate scheduling of contact windows in a distributed ground system is a necessary process to avoid low utilization of ground stations. The theoretical basis for the novel concept of redundant scheduling was elaborated in detail. In addition to the presented algorithm, a scheduling system was implemented, and its performance was tested extensively with real-world scheduling problems. In the scope of data management, a system was developed which autonomously synchronizes data frames in ground station networks and uses this information to detect and correct transmission errors. The system was validated with hardware-in-the-loop experiments, demonstrating the benefits of the developed approach. N2 - Satellitenformationen und Konstellationen rücken immer mehr in den Fokus aktueller Forschung, ausgelöst durch die jüngsten Fortschritte in der Kleinsatelliten-Entwicklung. Der Einsatz von verteilten Weltraumsystemen ermöglicht die Realisierung von innovativen Anwendungen auf Basis von hoher zeitlicher und räumlicher Auflösung in Observationsszenarien. Allerdings bringt dieses neue Paradigma der Raumfahrttechnik auch Herausforderungen in verschiedenen Forschungsfeldern mit sich. In dieser Dissertation werden neue Netzwerk-Konzepte für Raumfahrtmissionen unter Einsatz von Bodenstationsnetzwerken vorgestellt. Die präsentierten Verfahren koordinieren verfügbare Bodenstationsressourcen, um einen robusten und effizienten Kommunikationslink zu ermöglichen. In dieser Arbeit werden dabei folgende Themenfelder behandelt, um die Performance in verteilten Raumfahrtmissionen zu steigern: Das Verteilen von Kontaktfenstern (sogenanntes Scheduling) in verteilten Bodenstationssystemen ist ein notwendiger Prozess, um eine niedrige Auslastung der Stationen zu vermeiden. Die theoretische Grundlage für das Konzept des redundanten Scheduling wurde erarbeitet. Zusätzlich wurde das Verfahren in Form eines Scheduling-Systems implementiert und dessen Performance ausführlich an real-world Szenarien getestet.
Im Rahmen des Themenfeldes Data Management wurde ein System entwickelt, welches autonom Datenframes in Bodenstationsnetzwerken synchronisieren kann. Die in den Datenframes enthaltene Information wird genutzt um Übertragungsfehler zu erkennen und zu korrigieren. Das System wurde mit Hardware-in-the-loop Experimenten validiert und die Vorteile des entwickelten Verfahrens wurden gezeigt. T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 6 KW - Kleinsatellit KW - Bodenstation KW - Verteiltes System KW - Scheduling KW - Ground Station Networks KW - Small Satellites KW - Distributed Space Systems Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-64999 SN - 978-3-923959-77-8 ER - TY - JOUR A1 - Maiwald, Ferdinand A1 - Bruschke, Jonas A1 - Schneider, Danilo A1 - Wacker, Markus A1 - Niebling, Florian T1 - Giving historical photographs a new perspective: introducing camera orientation parameters as new metadata in a large-scale 4D application JF - Remote Sensing N2 - The ongoing digitization of historical photographs in archives allows investigating the quality, quantity, and distribution of these images. However, the exact interior and exterior camera orientations of these photographs are usually lost during the digitization process. The proposed method uses content-based image retrieval (CBIR) to filter exterior images of single buildings in combination with metadata information. The retrieved photographs are automatically processed in an adapted structure-from-motion (SfM) pipeline to determine the camera parameters. In an interactive georeferencing process, the calculated camera positions are transferred into a global coordinate system. As all image and camera data are efficiently stored in the proposed 4D database, they can be conveniently accessed afterward to georeference newly digitized images by using photogrammetric triangulation and spatial resection. The results show that the CBIR and the subsequent SfM are robust methods for various kinds of buildings and different quantity of data. The absolute accuracy of the camera positions after georeferencing lies in the range of a few meters likely introduced by the inaccurate LOD2 models used for transformation. The proposed photogrammetric method, the database structure, and the 4D visualization interface enable adding historical urban photographs and 3D models from other locations. KW - historical images KW - 4D-GIS KW - content-based image retrieval KW - Structure-from-Motion KW - camera orientation KW - feature matching Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-311103 SN - 2072-4292 VL - 15 IS - 7 ER - TY - THES A1 - Reith, Steffen T1 - Generalized Satisfiability Problems T1 - Verallgemeinerte Erfüllbarkeitsprobleme N2 - In the last 40 years, complexity theory has grown to a rich and powerful field in theoretical computer science. The main task of complexity theory is the classification of problems with respect to their consumption of resources (e.g., running time or required memory). To study the computational complexity (i.e., consumption of resources) of problems, similar problems are grouped into so called complexity classes. During the systematic study of numerous problems of practical relevance, no efficient algorithm for a great number of studied problems was found. Moreover, it was unclear whether such algorithms exist. 
A major breakthrough in this situation was the introduction of the complexity classes P and NP and the identification of hardest problems in NP. These hardest problems of NP are nowadays known as NP-complete problems. One prominent example of an NP-complete problem is the satisfiability problem of propositional formulas (SAT). Here we get a propositional formula as an input and it must be decided whether an assignment for the propositional variables exists, such that this assignment satisfies the given formula. The intensive study of NP led to numerous related classes, e.g., the classes of the polynomial-time hierarchy PH, P, #P, PP, NL, L and #L. During the study of these classes, problems related to propositional formulas were often identified to be complete problems for these classes. Hence some questions arise: Why is SAT so hard to solve? Are there modifications of SAT which are complete for other well-known complexity classes? In the context of these questions a result by E. Post is extremely useful. He identified and characterized all classes of Boolean functions being closed under superposition. It is possible to study problems which are connected to generalized propositional logic by using this result, which was done in this thesis. Hence, many different problems connected to propositional logic were studied and classified with respect to their computational complexity, clearing the borderline between easy and hard problems. N2 - In den letzten 40 Jahren hat sich die Komplexitätstheorie zu einem reichen und mächtigen Gebiet innerhalb der theoretischen Informatik entwickelt. Dabei ist die hauptsächliche Aufgabenstellung der Komplexitätstheorie die Klassifikation von Problemen bezüglich des Bedarfs von Rechenzeit oder Speicherplatz zu ihrer Lösung. Um die Komplexität von Problemen (d.h. den Bedarf von Resourcen) einzuordnen, werden Probleme mit ähnlichem Ressourcenbedarf in gleiche sogenannte Komplexitätsklassen einsortiert. Bei der systematischen Untersuchung einer Vielzahl von praktisch relevanten Problemen wurden jedoch keine effizienten Algorithmen für viele der untersuchten Probleme gefunden und es ist unklar, ob solche Algorithmen überhaupt existieren. Ein Durchbruch bei der Untersuchung dieser Problematik war die Einführung der Komplexitätsklassen P und NP und die Identifizierung von schwersten Problemen in NP. Diese schwierigsten Probleme von NP sind heute als sogenannte NP-vollständige Probleme bekannt. Ein prominentes Beispiel für ein NP-vollständiges Problem ist das Erfüllbarkeitsproblem für aussagenlogische Formeln (SAT). Hier ist eine aussagenlogische Formel als Eingabe gegeben und es muss bestimmt werden, ob eine Belegung der Wahrheitswertevariablen existiert, so dass diese Belegung die gegebene Formel erfüllt. Das intensive Studium der Klasse NP führte zu einer Vielzahl von anderen Komplexitätsklassen, wie z.B. die der Polynomialzeithierarchie PH, P, #P, PP, NL, L oder #L. Beim Studium dieser Klassen wurden sehr oft Probleme im Zusammenhang mit aussagenlogischen Formeln als schwierigste (vollständige) Probleme für diese Klassen identifiziert. Deshalb stellt sich folgende Frage: Welche Eigenschaften des Erfüllbarkeitsproblems SAT bewirken, dass es eines der schwersten Probleme der Klasse NP ist? Gibt es Einschränkungen oder Verallgemeinerungen des Erfüllbarkeitsproblems, die vollständig für andere bekannte Komplexitätsklassen sind? Im Zusammenhang mit solchen Fragestellungen ist ein Ergebnis von E. Post von extremem Nutzen. 
Er identifizierte und charakterisierte alle Klassen von Booleschen Funktionen, die unter Superposition abgeschlossen sind. Mit Hilfe dieses Resultats ist es möglich, Probleme im Zusammenhang mit verallgemeinerten Aussagenlogiken zu studieren, was in der vorliegenden Arbeit durchgeführt wurde. Dabei wurde eine Vielzahl von verschiedenen Problemen, die in Zusammenhang mit der Aussagenlogik stehen, studiert und bezüglich ihrer Komplexität klassifiziert. Dadurch wird die Grenzlinie zwischen einfach lösbaren Problemen und schweren Problemen sichtbar. KW - Erfüllbarkeitsproblem KW - Komplexitätstheorie KW - Boolesche Funktionen KW - Isomorphie KW - abgeschlossene Klassen KW - Zählprobleme KW - Computational complexity KW - Boolean functions KW - Boolean isomorphism KW - Boolean equivalence KW - Dichotomy KW - counting problems Y1 - 2001 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-74 ER - TY - THES A1 - Glaßer, Christian T1 - Forbidden-Patterns and Word Extensions for Concatenation Hierarchies T1 - Verbotsmuster und Worterweiterungen für Konkatenationshierarchien N2 - Starfree regular languages can be built up from alphabet letters by using only Boolean operations and concatenation. The complexity of these languages can be measured with the so-called dot-depth. This measure leads to concatenation hierarchies like the dot-depth hierarchy (DDH) and the closely related Straubing-Thérien hierarchy (STH). The question of whether the single levels of these hierarchies are decidable is still open and is known as the dot-depth problem. In this thesis we prove/reprove the decidability of some lower levels of both hierarchies. More precisely, we characterize these levels in terms of patterns in finite automata (subgraphs in the transition graph) that are not allowed. Therefore, such characterizations are called forbidden-pattern characterizations. The main results of the thesis are as follows: (1) a forbidden-pattern characterization for level 3/2 of the DDH (this implies the decidability of this level), (2) the decidability of the Boolean hierarchy over level 1/2 of the DDH, and (3) the definition of decidable hierarchies having close relations to the DDH and STH. Moreover, we prove/reprove the decidability of the levels 1/2 and 3/2 of both hierarchies in terms of forbidden-pattern characterizations. We show the decidability of the Boolean hierarchies over level 1/2 of the DDH and over level 1/2 of the STH. A technique which uses word extensions plays the central role in the proofs of these results. With this technique it is possible to treat the levels 1/2 and 3/2 of both hierarchies in a uniform way. Furthermore, it can be used to prove the decidability of the mentioned Boolean hierarchies. Among other things we provide a combinatorial tool that allows us to partition words of arbitrary length into factors of bounded length such that every second factor u leads to a loop with label u in a given finite automaton. N2 - Sternfreie reguläre Sprachen können aus Buchstaben unter Verwendung Boolescher Operationen und Konkatenation aufgebaut werden. Die Komplexität solcher Sprachen lässt sich durch die sogenannte "Dot-Depth" messen. Dieses Maß führt zu Konkatenationshierarchien wie der Dot-Depth-Hierarchie (DDH) und der Straubing-Thérien-Hierarchie (STH). Die Frage nach der Entscheidbarkeit der einzelnen Stufen dieser Hierarchien ist als (immer noch offenes) Dot-Depth-Problem bekannt. In dieser Arbeit beweisen wir die Entscheidbarkeit einiger unterer Stufen beider Hierarchien.
Genauer gesagt charakterisieren wir diese Stufen durch das Verbot von bestimmten Mustern in endlichen Automaten. Solche Charakterisierungen werden Verbotsmustercharakterisierungen genannt. Die Hauptresultate der Arbeit lassen sich wie folgt zusammenfassen: (1) Verbotsmustercharakterisierung der Stufe 3/2 der DDH (dies hat die Entscheidbarkeit dieser Stufe zur Folge), (2) Entscheidbarkeit der Booleschen Hierarchie über der Stufe 1/2 der DDH und (3) Definition von entscheidbaren Hierarchien mit engen Verbindungen zur DDH und STH. Darüber hinaus beweisen wir die Entscheidbarkeit der Stufen 1/2 und 3/2 beider Hierarchien (wieder mittels Verbotsmustercharakterisierungen) und die der Booleschen Hierarchien über den Stufen 1/2 der DDH bzw. STH. Dabei stützen sich die Beweise größtenteils auf eine Technik, die von Eigenschaften bestimmter Worterweiterungen Gebrauch macht. Diese Technik erlaubt eine einheitliche Vorgehensweise bei der Untersuchung der Stufen 1/2 und 3/2 beider Hierarchien. Außerdem wird sie in den Beweisen der Entscheidbarkeit der genannten Booleschen Hierarchien verwendet. Unter anderem wird ein kombinatorisches Hilfsmittel zur Verfügung gestellt, das es erlaubt, Wörter beliebiger Länge in Faktoren beschränkter Länge zu zerlegen, so dass jeder zweite Faktor u zu einer u-Schleife in einem gegebenen endlichen Automaten führt. KW - Automatentheorie KW - Formale Sprache KW - Entscheidbarkeit KW - Reguläre Sprache KW - Berechenbarkeit KW - Theoretische Informatik KW - reguläre Sprachen KW - endliche Automaten KW - Dot-Depth Problem KW - Entscheidbarkeit KW - Verbotsmuster KW - Worterweiterungen KW - Theoretical Computer Science KW - regular languages KW - finite automata KW - dot-depth problem KW - decidability KW - forbidden patterns KW - word extensions Y1 - 2001 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-1179153 ER - TY - JOUR A1 - Toepfer, Martin A1 - Corovic, Hamo A1 - Fette, Georg A1 - Klügl, Peter A1 - Störk, Stefan A1 - Puppe, Frank T1 - Fine-grained information extraction from German transthoracic echocardiography reports JF - BMC Medical Informatics and Decision Making N2 - Background Information extraction techniques that get structured representations out of unstructured data make a large amount of clinically relevant information about patients accessible for semantic applications. These methods typically rely on standardized terminologies that guide this process. Many languages and clinical domains, however, lack appropriate resources and tools, as well as evaluations of their applications, especially if detailed conceptualizations of the domain are required. For instance, German transthoracic echocardiography reports have not been targeted sufficiently before, despite their importance for clinical trials. This work therefore aimed at the development and evaluation of an information extraction component with a fine-grained terminology that enables the recognition of almost all relevant information stated in German transthoracic echocardiography reports at the University Hospital of Würzburg. Methods A domain expert validated and iteratively refined an automatically inferred base terminology. The terminology was used by an ontology-driven information extraction system that outputs attribute-value pairs. The final component has been mapped to the central elements of a standardized terminology, and it has been evaluated on documents with different layouts.
Results The final system achieved state-of-the-art precision (micro average .996) and recall (micro average .961) on 100 test documents that represent more than 90 % of all reports. In particular, principal aspects as defined in a standardized external terminology were recognized with F1 = .989 (micro average) and F1 = .963 (macro average). As a result of keyword matching and restraint concept extraction, the system obtained high precision also on unstructured or exceptionally short documents, and documents with uncommon layout. Conclusions The developed terminology and the proposed information extraction system allow the extraction of fine-grained information from German semi-structured transthoracic echocardiography reports with very high precision and high recall on the majority of documents at the University Hospital of Würzburg. Extracted results populate a clinical data warehouse which supports clinical research. Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-125509 VL - 15 IS - 91 ER - TY - RPRT A1 - Dworzak, Manuel A1 - Großmann, Marcel A1 - Le, Duy Thanh T1 - Federated Learning for Service Placement in Fog and Edge Computing T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - Service orchestration requires enormous attention and is a struggle nowadays. Of course, virtualization provides a base level of abstraction for services to be deployable on a lot of infrastructures. With container virtualization, the trend to migrate applications to a micro-services level in order to be executable in Fog and Edge Computing environments increases manageability and maintenance efforts rapidly. Similarly, network virtualization adds effort to calibrate IP flows for Software-Defined Networks and eventually route them by means of Network Function Virtualization. Nevertheless, there are concepts like MAPE-K to support micro-service distribution in next-generation cloud and network environments. We want to explore how a service distribution can be improved by adopting machine learning concepts for infrastructure or service changes. Therefore, we show how federated machine learning is integrated into a cloud-to-fog continuum without burdening single nodes. KW - fog computing KW - SDN KW - orchestration KW - federated learning Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322193 ER - TY - JOUR A1 - Krenzer, Adrian A1 - Makowski, Kevin A1 - Hekalo, Amar A1 - Fitting, Daniel A1 - Troya, Joel A1 - Zoller, Wolfram G. A1 - Hann, Alexander A1 - Puppe, Frank T1 - Fast machine learning annotation in the medical domain: a semi-automated video annotation tool for gastroenterologists JF - BioMedical Engineering OnLine N2 - Background Machine learning, especially deep learning, is becoming more and more relevant in research and development in the medical domain. For all the supervised deep learning applications, data is the most critical factor in securing successful implementation and sustaining the progress of the machine learning model. Especially gastroenterological data, which often involves endoscopic videos, are cumbersome to annotate. Domain experts are needed to interpret and annotate the videos. To support those domain experts, we generated a framework. With this framework, instead of annotating every frame in the video sequence, experts are just performing key annotations at the beginning and the end of sequences with pathologies, e.g., visible polyps.
Subsequently, non-expert annotators supported by machine learning add the missing annotations for the frames in-between. Methods In our framework, an expert reviews the video and annotates a few video frames to verify the object’s annotations for the non-expert. In a second step, a non-expert has visual confirmation of the given object and can annotate all following and preceding frames with AI assistance. After the expert has finished, relevant frames will be selected and passed on to an AI model. This information allows the AI model to detect and mark the desired object on all following and preceding frames with an annotation. Therefore, the non-expert can adjust and modify the AI predictions and export the results, which can then be used to train the AI model. Results Using this framework, we were able to reduce workload of domain experts on average by a factor of 20 on our data. This is primarily due to the structure of the framework, which is designed to minimize the workload of the domain expert. Pairing this framework with a state-of-the-art semi-automated AI model enhances the annotation speed further. Through a prospective study with 10 participants, we show that semi-automated annotation using our tool doubles the annotation speed of non-expert annotators compared to a well-known state-of-the-art annotation tool. Conclusion In summary, we introduce a framework for fast expert annotation for gastroenterologists, which reduces the workload of the domain expert considerably while maintaining a very high annotation quality. The framework incorporates a semi-automated annotation system utilizing trained object detection models. The software and framework are open-source. KW - object detection KW - machine learning KW - deep learning KW - annotation KW - endoscopy KW - gastroenterology KW - automation Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-300231 VL - 21 IS - 1 ER - TY - THES A1 - Budig, Benedikt T1 - Extracting Spatial Information from Historical Maps: Algorithms and Interaction T1 - Extraktion räumlicher Informationen aus historischen Landkarten: Algorithmen und Interaktion N2 - Historical maps are fascinating documents and a valuable source of information for scientists of various disciplines. Many of these maps are available as scanned bitmap images, but in order to make them searchable in useful ways, a structured representation of the contained information is desirable. This book deals with the extraction of spatial information from historical maps. This cannot be expected to be solved fully automatically (since it involves difficult semantics), but is also too tedious to be done manually at scale. The methodology used in this book combines the strengths of both computers and humans: it describes efficient algorithms to largely automate information extraction tasks and pairs these algorithms with smart user interactions to handle what is not understood by the algorithm. The effectiveness of this approach is shown for various kinds of spatial documents from the 16th to the early 20th century. N2 - Historische Landkarten sind faszinierende Dokumente und eine wertvolle Informationsquelle für Wissenschaftler verschiedener Fächer. Viele dieser Karten liegen als gescannte Bitmap-Bilder vor, aber um sie auf nützliche Art durchsuchbar zu machen ist eine strukturierte Repräsentation der enthaltenen Informationen wünschenswert. Dieses Buch beschäftigt sich mit der Extraktion räumlicher Informationen aus historischen Landkarten. 
Man kann nicht erwarten, dass dies vollautomatisch geschieht (da komplizierte Semantik involviert ist), aber es ist auch zu aufwändig, um im großen Stil manuell durchgeführt zu werden. Die Methodik, die in diesem Buch verwendet wird, kombiniert die Stärken von Computern und Menschen: Es werden effiziente Algorithmen beschrieben, die Extraktionsaufgaben weitgehend automatisieren, und dazu passende Nutzerinteraktionen entworfen, mit denen Fälle gelöst werden, die die Algorithmen nicht verstehen. Die Effektivität dieses Ansatzes wird anhand verschiedener räumlicher Dokumente aus dem 16. bis frühen 20. Jahrhundert gezeigt. KW - Karte KW - Effizienter Algorithmus KW - Interaktion KW - Information Extraction KW - Smart User Interaction KW - Historical Maps KW - Itineraries KW - Deep Georeferencing KW - Benutzerinteraktion KW - Historische Landkarten KW - Itinerare KW - Georeferenzierung KW - Historische Karte KW - Raumdaten Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-160955 SN - 978-3-95826-092-4 SN - 978-3-95826-093-1 N1 - Parallel erschienen als Druckausgabe in Würzburg University Press, ISBN 978-3-95826-092-4, 32,90 Euro. PB - Würzburg University Press CY - Würzburg ET - 1. Auflage ER - TY - THES A1 - Niebler, Thomas T1 - Extracting and Learning Semantics from Social Web Data T1 - Extraktion und Lernen von Semantik aus Social Web-Daten N2 - Making machines understand natural language is a dream of mankind that has existed for a very long time. Early attempts at programming machines to converse with humans in a supposedly intelligent way relied on phrase lists and simple keyword matching. However, such approaches cannot provide semantically adequate answers, as they do not consider the specific meaning of the conversation. Thus, if we want to enable machines to actually understand language, we need to be able to access semantically relevant background knowledge. For this, it is possible to query so-called ontologies, which are large networks containing knowledge about real-world entities and their semantic relations. However, creating such ontologies is a tedious task, as often extensive expert knowledge is required. Thus, we need to find ways to automatically construct and update ontologies that fit human intuition of semantics and semantic relations. More specifically, we need to determine semantic entities and find relations between them. While this is usually done on large corpora of unstructured text, previous work has shown that we can at least facilitate the first issue of extracting entities by considering special data such as tagging data or human navigational paths. Here, we do not need to detect the actual semantic entities, as they are already provided because of the way those data are collected. Thus, we can mainly focus on the problem of assessing the degree of semantic relatedness between tags or web pages. However, there exist several issues which need to be overcome if we want to approximate human intuition of semantic relatedness. For this, it is necessary to represent words and concepts in a way that allows easy and highly precise semantic characterization. This also largely depends on the quality of data from which these representations are constructed. In this thesis, we extract semantic information from both tagging data created by users of social tagging systems and human navigation data in different semantic-driven social web systems.
Our main goal is to construct high-quality and robust vector representations of words which can then be used to measure the relatedness of semantic concepts. First, we show that navigation in the social media systems Wikipedia and BibSonomy is driven by a semantic component. After this, we discuss and extend methods to model the semantic information in tagging data as low-dimensional vectors. Furthermore, we show that tagging pragmatics influences different facets of tagging semantics. We then investigate the usefulness of human navigational paths in several different settings on Wikipedia and BibSonomy for measuring semantic relatedness. Finally, we propose a metric-learning based algorithm to adapt pre-trained word embeddings to datasets containing human judgment of semantic relatedness. This work contributes to the field of studying semantic relatedness between words by proposing methods to extract semantic relatedness from web navigation, to learn high-quality and low-dimensional word representations from tagging data, and to learn semantic relatedness from any kind of vector representation by exploiting human feedback. Applications first and foremost lie in ontology learning for the Semantic Web, but also in semantic search and query expansion. N2 - Einer der großen Träume der Menschheit ist es, Maschinen dazu zu bringen, natürliche Sprache zu verstehen. Frühe Versuche, Computer dahingehend zu programmieren, dass sie mit Menschen vermeintlich intelligente Konversationen führen können, basierten hauptsächlich auf Phrasensammlungen und einfachen Stichwortabgleichen. Solche Ansätze sind allerdings nicht in der Lage, inhaltlich adäquate Antworten zu liefern, da der tatsächliche Inhalt der Konversation nicht erfasst werden kann. Folgerichtig ist es notwendig, dass Maschinen auf semantisch relevantes Hintergrundwissen zugreifen können, um diesen Inhalt zu verstehen. Solches Wissen ist beispielsweise in Ontologien vorhanden. Ontologien sind große Datenbanken von vernetztem Wissen über Objekte und Gegenstände der echten Welt sowie über deren semantische Beziehungen. Das Erstellen solcher Ontologien ist eine sehr kostspielige und aufwändige Aufgabe, da oft tiefgreifendes Expertenwissen benötigt wird. Wir müssen also Wege finden, um Ontologien automatisch zu erstellen und aktuell zu halten, und zwar in einer Art und Weise, dass dies auch menschlichem Empfinden von Semantik und semantischer Ähnlichkeit entspricht. Genauer gesagt ist es notwendig, semantische Entitäten und deren Beziehungen zu bestimmen. Während solches Wissen üblicherweise aus Textkorpora extrahiert wird, ist es möglich, zumindest das erste Problem - semantische Entitäten zu bestimmen - durch Benutzung spezieller Datensätze zu umgehen, wie zum Beispiel Tagging- oder Navigationsdaten. In diesen Arten von Datensätzen ist es nicht notwendig, Entitäten zu extrahieren, da sie bereits aufgrund inhärenter Eigenschaften bei der Datenakquise vorhanden sind. Wir können uns also hauptsächlich auf die Bestimmung von semantischen Relationen und deren Intensität fokussieren. Trotzdem müssen hier noch einige Hindernisse überwunden werden. Beispielsweise ist es notwendig, Repräsentationen für semantische Entitäten zu finden, so dass es möglich ist, sie einfach und semantisch hochpräzise zu charakterisieren. Dies hängt allerdings auch erheblich von der Qualität der Daten ab, aus denen diese Repräsentationen konstruiert werden.
In der vorliegenden Arbeit extrahieren wir semantische Informationen sowohl aus Taggingdaten, von Benutzern sozialer Taggingsysteme erzeugt, als auch aus Navigationsdaten von Benutzern semantikgetriebener Social Media-Systeme. Das Hauptziel dieser Arbeit ist es, hochqualitative und robuste Vektordarstellungen von Worten zu konstruieren, die dann dazu benutzt werden können, die semantische Ähnlichkeit von Konzepten zu bestimmen. Als erstes zeigen wir, dass Navigation in Social Media Systemen unter anderem durch eine semantische Komponente getrieben wird. Danach diskutieren und erweitern wir Methoden, um die semantische Information in Taggingdaten als niedrigdimensionale sogenannte “Embeddings” darzustellen. Darüber hinaus demonstrieren wir, dass die Taggingpragmatik verschiedene Facetten der Taggingsemantik beeinflusst. Anschließend untersuchen wir, inwieweit wir menschliche Navigationspfade zur Bestimmung semantischer Ähnlichkeit benutzen können. Hierzu betrachten wir mehrere Datensätze, die Navigationsdaten in verschiedenen Rahmenbedingungen beinhalten. Als letztes stellen wir einen neuartigen Algorithmus vor, um bereits trainierte Word Embeddings im Nachhinein an menschliche Intuition von Semantik anzupassen. Diese Arbeit steuert wertvolle Beiträge zum Gebiet der Bestimmung von semantischer Ähnlichkeit bei: Es werden Methoden vorgestellt, um hochqualitative semantische Information aus Web-Navigation und Taggingdaten zu extrahieren, diese mittels niedrigdimensionaler Vektordarstellungen zu modellieren und selbige schließlich besser an menschliches Empfinden von semantischer Ähnlichkeit anzupassen, indem aus genau diesem Empfinden gelernt wird. Anwendungen liegen in erster Linie darin, Ontologien für das Semantic Web zu lernen, allerdings auch in allen Bereichen, die Vektordarstellungen von semantischen Entitäten benutzen. KW - Semantik KW - Maschinelles Lernen KW - Soziale Software KW - Semantics KW - User Behavior KW - Social Web KW - Machine Learning Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-178666 ER - TY - JOUR A1 - Wienrich, Carolin A1 - Latoschik, Marc Erich T1 - eXtended Artificial Intelligence: New Prospects of Human-AI Interaction Research JF - Frontiers in Virtual Reality N2 - Artificial Intelligence (AI) covers a broad spectrum of computational problems and use cases. Many of those implicate profound and sometimes intricate questions of how humans interact or should interact with AIs. Moreover, many users or future users do have abstract ideas of what AI is, significantly depending on the specific embodiment of AI applications. Human-centered design approaches would suggest evaluating the impact of different embodiments on human perception of and interaction with AI. Such an approach is difficult to realize due to the sheer complexity of application fields and embodiments in reality. However, here XR opens new possibilities to research human-AI interactions. The article’s contribution is twofold: First, it provides a theoretical treatment and model of human-AI interaction based on an XR-AI continuum as a framework for and a perspective of different approaches of XR-AI combinations. It motivates XR-AI combinations as a method to learn about the effects of prospective human-AI interfaces and shows why the combination of XR and AI fruitfully contributes to a valid and systematic investigation of human-AI interactions and interfaces.
Second, the article provides two exemplary experiments investigating the aforementioned approach for two distinct AI-systems. The first experiment reveals an interesting gender effect in human-robot interaction, while the second experiment reveals an Eliza effect of a recommender system. Here the article introduces two paradigmatic implementations of the proposed XR testbed for human-AI interactions and interfaces and shows how a valid and systematic investigation can be conducted. In sum, the article opens new perspectives on how XR benefits human-centered AI design and development. KW - human-artificial intelligence interface KW - human-artificial intelligence interaction KW - XR-artificial intelligence continuum KW - XR-artificial intelligence combination KW - research methods KW - human-centered, human-robot KW - recommender system Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-260296 VL - 2 ER - TY - JOUR A1 - Loda, Sophia A1 - Krebs, Jonathan A1 - Danhof, Sophia A1 - Schreder, Martin A1 - Solimando, Antonio G. A1 - Strifler, Susanne A1 - Rasche, Leo A1 - Kortüm, Martin A1 - Kerscher, Alexander A1 - Knop, Stefan A1 - Puppe, Frank A1 - Einsele, Hermann A1 - Bittrich, Max T1 - Exploration of artificial intelligence use with ARIES in multiple myeloma research JF - Journal of Clinical Medicine N2 - Background: Natural language processing (NLP) is a powerful tool supporting the generation of Real-World Evidence (RWE). There is no NLP system that enables the extensive querying of parameters specific to multiple myeloma (MM) out of unstructured medical reports. We therefore created a MM-specific ontology to accelerate the information extraction (IE) out of unstructured text. Methods: Our MM ontology consists of extensive MM-specific and hierarchically structured attributes and values. We implemented “A Rule-based Information Extraction System” (ARIES) that uses this ontology. We evaluated ARIES on 200 randomly selected medical reports of patients diagnosed with MM. Results: Our system achieved a high F1-Score of 0.92 on the evaluation dataset with a precision of 0.87 and recall of 0.98. Conclusions: Our rule-based IE system enables the comprehensive querying of medical reports. The IE accelerates the extraction of data and enables clinicians to faster generate RWE on hematological issues. RWE helps clinicians to make decisions in an evidence-based manner. Our tool easily accelerates the integration of research evidence into everyday clinical practice. KW - natural language processing KW - ontology KW - artificial intelligence KW - multiple myeloma KW - real world evidence Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-197231 SN - 2077-0383 VL - 8 IS - 7 ER - TY - JOUR A1 - Ali, Qasim A1 - Montenegro, Sergio T1 - Explicit Model Following Distributed Control Scheme for Formation Flying of Mini UAVs JF - IEEE Access N2 - A centralized heterogeneous formation flight position control scheme has been formulated using an explicit model following design, based on a Linear Quadratic Regulator Proportional Integral (LQR PI) controller. The leader quadcopter is a stable reference model with desired dynamics whose output is perfectly tracked by the two wingmen quadcopters. The leader itself is controlled through the pole placement control method with desired stability characteristics, while the two followers are controlled through a robust and adaptive LQR PI control method. 
Selected 3-D formation geometry and static stability are maintained under a number of possible perturbations. With this control scheme, formation geometry may also be switched to any arbitrary shape during flight, provided a suitable collision avoidance mechanism is incorporated. In case of communication loss between the leader and any of the followers, the other follower provides the data, received from the leader, to the affected follower. The stability of the closed-loop system has been analyzed using singular values. The proposed approach for the tightly coupled formation flight of mini unmanned aerial vehicles has been validated with the help of extensive simulations using MATLAB/Simulink, which provided promising results. KW - quadcopter KW - robustness KW - intelligent vehicles KW - rotors KW - mathematical model KW - aerodynamics KW - adaptation models KW - vehicle dynamics KW - unmanned aerial vehicle KW - distributed control KW - formation flight KW - model following Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-146061 N1 - (c) 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works VL - 4 IS - 397-406 ER - TY - RPRT A1 - Sertbas Bülbül, Nurefsan A1 - Ergenc, Doganalp A1 - Fischer, Mathias T1 - Evaluating Dynamic Path Reconfiguration for Time Sensitive Networks T2 - Würzburg Workshop on Next-Generation Communication Networks (WueWoWas'22) N2 - In time-sensitive networks (TSN) based on 802.1Qbv, i.e., the Time-Aware Shaper (TAS) protocol, precise transmission schedules and paths are used to ensure end-to-end deterministic communication. Such resource reservations for data flows are usually established at the startup time of an application and remain untouched until the flow ends. There is no way to migrate existing flows easily to alternative paths without inducing additional delay or wasting resources. Therefore, some of the new flows cannot be embedded due to capacity limitations on certain links, which leads to sub-optimal flow assignment. As future networks will need to support a large number of low-latency flows, accommodating new flows at runtime and adapting existing flows accordingly becomes a challenging problem. In this extended abstract, we summarize our previously published paper [1]. We combine software-defined networking (SDN), which provides better control of network flows, with TSN to be able to seamlessly migrate time-sensitive flows. For that, we formulate an optimization problem and propose different dynamic path configuration strategies under deterministic communication requirements. Our simulation results indicate that regularly reconfiguring the flow assignments can improve the latency of time-sensitive flows and can increase the number of flows embedded in the network by around 4% in worst-case scenarios while still satisfying individual flow deadlines. KW - Datennetz KW - SDN KW - dynamic flow migration KW - reconfiguration KW - TSN KW - path computation Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-280743 ER - TY - JOUR A1 - Oberdörfer, Sebastian A1 - Heidrich, David A1 - Birnstiel, Sandra A1 - Latoschik, Marc Erich T1 - Enchanted by Your Surrounding?
Measuring the Effects of Immersion and Design of Virtual Environments on Decision-Making JF - Frontiers in Virtual Reality N2 - Impaired decision-making leads to the inability to distinguish between advantageous and disadvantageous choices. The impairment of a person’s decision-making is a common goal of gambling games. Given the recent trend of gambling using immersive Virtual Reality, it is crucial to investigate the effects of both immersion and the virtual environment (VE) on decision-making. In a novel user study, we measured decision-making using three virtual versions of the Iowa Gambling Task (IGT). The versions differed with regard to the degree of immersion and design of the virtual environment. Since emotions affect decision-making, we further measured the positive and negative affect of participants. A higher visual angle on a stimulus leads to an increased emotional response. Thus, we kept the visual angle on the Iowa Gambling Task the same between our conditions. Our results revealed no significant impact of immersion or the VE on the IGT. We further found no significant difference between the conditions with regard to positive and negative affect. This suggests that neither the medium used nor the design of the VE causes an impairment of decision-making. However, in combination with a recent study, we provide first evidence that a higher visual angle on the IGT leads to an impairment effect. KW - virtual reality KW - virtual environments KW - immersion KW - decision-making KW - iowa gambling task Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-260101 VL - 2 ER - TY - RPRT A1 - Großmann, Marcel A1 - Homeyer, Tobias T1 - Emulation of Multipath Transmissions in P4 Networks with Kathará T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - Packets sent over a network can either get lost or reach their destination. Protocols like TCP try to solve this problem by resending the lost packets. However, retransmissions consume a lot of time and are cumbersome for the transmission of critical data. Multipath solutions are quite common to address this reliability issue and are available on almost every layer of the ISO/OSI model. We propose a solution based on a P4 network to duplicate packets in order to send them to their destination via multiple routes. The last network hop ensures that only a single copy of the traffic is further forwarded to its destination by adopting a concept similar to Bloom filters. Besides, if fast delivery is requested, we provide a P4 prototype, which randomly forwards the packets over different transmission paths. For reproducibility, we implement our approach in a container-based network emulation system called Kathará. KW - P4 KW - multipath KW - emulation KW - Kathará Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322095 ER - TY - THES A1 - Menth, Michael T1 - Efficient admission control and routing for resilient communication networks T1 - Effiziente Zugangskontrolle und Verkehrslenkung für ausfallsichere Kommunikationsnetze N2 - This work is subdivided into two main areas: resilient admission control and resilient routing. The work gives an overview of the state of the art of quality of service mechanisms in communication networks and proposes a categorization of admission control (AC) methods.
These approaches are investigated regarding performance, more precisely, regarding the potential resource utilization by dimensioning the capacity for a network with a given topology, traffic matrix, and a required flow blocking probability. In case of a failure, the affected traffic is rerouted over backup paths, which increases the traffic rate on the respective links. To guarantee the effectiveness of admission control also in failure scenarios, the increased traffic rate must be taken into account for capacity dimensioning and leads to resilient AC. Capacity dimensioning is not feasible for existing networks with already given link capacities. For the application of resilient AC in this case, the size of distributed AC budgets must be adapted according to the traffic matrix in such a way that the maximum blocking probability for all flows is minimized and that the capacity of all links is not exceeded by the admissible traffic rate in any failure scenario. Several algorithms for the solution of that problem are presented and compared regarding their efficiency and fairness. A prototype for resilient AC was implemented in the laboratories of Siemens AG in Munich within the scope of the project KING. Resilience requires additional capacity on the backup paths for failure scenarios. The amount of this backup capacity depends on the routing and can be minimized by routing optimization. New protection switching mechanisms are presented that divert the traffic quickly around outage locations. They are simple and can be implemented, e.g., by MPLS technology. The Self-Protecting Multi-Path (SPM) is a multi-path consisting of disjoint partial paths. The traffic is distributed over all faultless partial paths according to an optimized load balancing function both in the working case and in failure scenarios. Performance studies show that the network topology and the traffic matrix also influence the amount of required backup capacity significantly. The example of the COST-239 network illustrates that conventional shortest path routing may need 50% more capacity than the optimized SPM if all single link and node failures are protected. N2 - Diese Arbeit gliedert sich in zwei Hauptteile: Ausfallsichere Zugangskontrolle und ausfallsichere Verkehrslenkung. Die Arbeit beschreibt zu Beginn den Stand der Technik für Dienstgütemechanismen in Kommunikationsnetzen und nimmt eine Kategorisierung von Zugangskontrollmethoden vor. Diese Ansätze werden hinsichtlich ihrer potentiellen Auslastung der Leitungskapazitäten untersucht, indem die Kapazität für ein Netz mit gegebener Topologie, Verkehrsmatrix und geforderten Flussblockierwahrscheinlichkeiten dimensioniert wird. Im Falle eines Fehlers werden betroffene Flüsse automatisch über Ersatzpfade umgeleitet, was die Verkehrslast auf deren Übertragungsleitungen erhöht. Um die Wirksamkeit der Zugangskontrolle auch in Fehlerfällen zu gewährleisten, muss diese erhöhte Verkehrslast bei der Dimensionierung berücksichtigt werden, was zu einer ausfallsicheren Zugangskontrolle führt. Kapazitätsdimensionierung ist in bereits existierenden Netzen mit festen Linkbandbreiten nicht mehr möglich. Für die Anwendung von ausfallsicherer Zugangskontrolle in diesem Fall muss die Größe von verteilten Zugangskontrollbudgets gemäß der Verkehrsmatrix so angepasst werden, dass die maximale Blockierwahrscheinlichkeit aller Flüsse minimiert wird und die Kapazität aller Links in keinem Fehlerfall durch die zulässige Verkehrsrate überschritten wird.
Es werden unterschiedliche Algorithmen für die Lösung dieses Problems vorgeschlagen und hinsichtlich ihrer Effizienz und Fairness verglichen. Ein Prototyp für ausfallsichere Zugangskontrolle wurde im Rahmen des KING-Projektes in den Labors der Siemens AG in München implementiert. Ausfallsicherheit benötigt Zusatzkapazitäten auf den Ersatzpfaden für Fehlerfälle. Die Menge der Zusatzkapazität hängt von der Verkehrslenkung ab und kann durch Optimierung verringert werden. Es werden neue Mechanismen für Ersatzschaltungen vorgestellt, die den Verkehr schnell um Fehlerstellen im Netz herumleiten können. Sie zeichnen sich durch ihre Einfachheit aus und können z.B. in MPLS-Technologie implementiert werden. Der "Self-Protecting Multi-Path" (SPM) ist ein Multipfad, der aus disjunkten Teilpfaden besteht. Der Verkehr wird sowohl im Normalbetrieb als auch in Ausfallszenarien über alle intakten Teilpfade gemäß einer optimierten Lastverteilungsfunktion weitergeleitet. Leistungsuntersuchungen zeigen, dass die Menge an benötigter Zusatzkapazität deutlich von der Netztopologie und der Verkehrsmatrix abhängt. Das Beispiel des COST-239 Netzes veranschaulicht, dass herkömmliche Verkehrslenkung auf den kürzesten Wegen 50% mehr Kapazität benötigen kann als der optimierte SPM, wenn alle einzelnen Übertragungsleitungs- und Vermittlungsknotenausfälle geschützt werden. T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 03/04 KW - Kommunikation KW - Netzwerk KW - Ausfallsicheres System KW - Kommunikationsnetze KW - Ausfallsicherheit KW - Zugangskontrolle KW - Verkehrslenkung KW - Communication Networks KW - Resilience KW - Admission Control KW - Routing Y1 - 2004 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-9949 ER - TY - JOUR A1 - Madeira, Octavia A1 - Gromer, Daniel A1 - Latoschik, Marc Erich A1 - Pauli, Paul T1 - Effects of Acrophobic Fear and Trait Anxiety on Human Behavior in a Virtual Elevated Plus-Maze JF - Frontiers in Virtual Reality N2 - The Elevated Plus-Maze (EPM) is a well-established apparatus to measure anxiety in rodents, i.e., animals exhibiting an increased relative time spent in the closed vs. the open arms are considered anxious. To examine whether such anxiety-modulated behaviors are conserved in humans, we re-translated this paradigm to a human setting using virtual reality in a Cave Automatic Virtual Environment (CAVE) system. In two studies, we examined whether the EPM exploration behavior of humans is modulated by their trait anxiety and also assessed the individuals’ levels of acrophobia (fear of heights), claustrophobia (fear of confined spaces), sensation seeking, and the reported anxiety when on the maze. First, we constructed an exact virtual copy of the animal EPM adjusted to human proportions. In analogy to animal EPM studies, participants (N = 30) freely explored the EPM for 5 min. In the second study (N = 61), we redesigned the EPM to make it more human-adapted and to differentiate influences of trait anxiety and acrophobia by introducing various floor textures and lowering the walls of the closed arms to the height of standard handrails. In the first experiment, hierarchical regression analyses of exploration behavior revealed the expected association between open arm avoidance and trait anxiety, and an even stronger association with acrophobic fear. In the second study, results revealed that acrophobia was associated with avoidance of open arms with mesh-floor texture, whereas for trait anxiety, claustrophobia, and sensation seeking, no effect was detected.
Also, subjects’ fear rating was moderated by all psychometrics but trait anxiety. In sum, both studies consistently indicate that humans show no general open arm avoidance analogous to rodents and that human EPM behavior is modulated most strongly by acrophobic fear, whereas trait anxiety plays a subordinate role. Thus, we conclude that the criteria for cross-species validity are insufficiently met in this case. Despite the exploratory nature, our studies provide in-depth insights into human exploration behavior on the virtual EPM. KW - elevated plus-maze KW - EPM KW - anxiety KW - virtual reality KW - translational neuroscience KW - acrophobia KW - trait anxiety Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-258709 VL - 2 ER - TY - RPRT A1 - Odhah, Najib A1 - Grass, Eckhard A1 - Kraemer, Rolf T1 - Effective Rate of URLLC with Short Block-Length Information Theory T2 - Würzburg Workshop on Next-Generation Communication Networks (WueWoWas'22) N2 - Shannon channel capacity estimation, based on large packet lengths, is used in traditional Radio Resource Management (RRM) optimization. This is good for the normal transmission of data in a wired or wireless system. For industrial automation and control, rather short packets are used due to the low-latency requirements. Using Shannon’s formula leads in this case to inaccurate RRM solutions; thus, another formula should be used to optimize radio resources in short block-length packet transmission, which is the basis of Ultra-Reliable Low-Latency Communications (URLLCs). The stringent requirement of delay Quality of Service (QoS) for URLLCs requires a link-level channel model rather than a physical-level channel model. After finding the basic and accurate formula of the achievable rate of short block-length packet transmission, the RRM optimization problem can be accurately formulated and solved under the new constraints of URLLCs. In this short paper, the current mathematical models, which are used in formulating the effective transmission rate of URLLCs, will be briefly explained. Then, using this rate in RRM for URLLC will be discussed. KW - Datennetz KW - URLLC KW - RRM KW - delay QoS exponent KW - decoding error rate KW - delay bound violation probability KW - short block-length KW - effective Bandwidth Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-280859 ER - TY - JOUR A1 - Dumic, Emil A1 - Bjelopera, Anamaria A1 - Nüchter, Andreas T1 - Dynamic point cloud compression based on projections, surface reconstruction and video compression JF - Sensors N2 - In this paper, we present a new dynamic point cloud compression based on different projection types and bit depths, combined with a surface reconstruction algorithm and video compression for the obtained geometry and texture maps. Texture maps have been compressed after creating Voronoi diagrams. The video compression used is specific to geometry (FFV1) and texture (H.265/HEVC). Decompressed point clouds are reconstructed using a Poisson surface reconstruction algorithm. Comparison with the original point clouds was performed using point-to-point and point-to-plane measures. Comprehensive experiments show better performance for some projection maps: cylindrical, Miller and Mercator projections.
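To make the projection step above concrete, the following is a minimal sketch in Python (an assumed equirectangular mapping with hypothetical array names, not the paper's actual pipeline, which evaluates several map projections such as cylindrical, Miller and Mercator) of how a point cloud could be rasterized into a panoramic depth map that a video codec can then compress:

```python
import numpy as np

def equirectangular_depth_map(points, width=1024, height=512):
    """Project an Nx3 point cloud (x, y, z) to a panoramic depth image.

    Illustrative assumption: azimuth maps to columns, elevation to rows,
    and each pixel stores the range of the closest point (simple z-buffer).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.arctan2(y, x)                                    # -pi .. pi
    elevation = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))

    cols = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    rows = ((np.pi / 2 - elevation) / np.pi * (height - 1)).astype(int)

    depth = np.full((height, width), np.inf)
    np.minimum.at(depth, (rows, cols), r)   # keep the nearest point per pixel
    depth[np.isinf(depth)] = 0.0            # mark empty pixels
    return depth

# Toy usage; a real cloud would come from a scanner or a decoded frame.
cloud = np.random.randn(10000, 3)
panorama = equirectangular_depth_map(cloud)
```

Each frame of a dynamic point cloud would yield one such geometry map (plus a texture map), and the resulting image sequences could then be fed to lossless (FFV1) and lossy (H.265/HEVC) video codecs as described above.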
KW - 3DTK toolkit KW - map projections KW - point cloud compression KW - point-to-point measure KW - point-to-plane measure KW - Poisson surface reconstruction KW - octree Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-252231 SN - 1424-8220 VL - 22 IS - 1 ER - TY - JOUR A1 - Buchin, Kevin A1 - Buchin, Maike A1 - Byrka, Jaroslaw A1 - Nöllenburg, Martin A1 - Okamoto, Yoshio A1 - Silveira, Rodrigo I. A1 - Wolff, Alexander T1 - Drawing (Complete) Binary Tanglegrams JF - Algorithmica N2 - A binary tanglegram is a drawing of a pair of rooted binary trees whose leaf sets are in one-to-one correspondence; matching leaves are connected by inter-tree edges. For applications, for example, in phylogenetics, it is essential that both trees are drawn without edge crossings and that the inter-tree edges have as few crossings as possible. It is known that finding a tanglegram with the minimum number of crossings is NP-hard and that the problem is fixed-parameter tractable with respect to that number. We prove that under the Unique Games Conjecture there is no constant-factor approximation for binary trees. We show that the problem is NP-hard even if both trees are complete binary trees. For this case, we give an O(n³)-time 2-approximation and a new, simple fixed-parameter algorithm. We show that the maximization version of the dual problem for binary trees can be reduced to a version of MaxCut for which the algorithm of Goemans and Williamson yields a 0.878-approximation. KW - NP-hardness KW - crossing minimization KW - binary tanglegram KW - approximation algorithm KW - fixed-parameter tractability Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-124622 VL - 62 ER - TY - THES A1 - Houshiar, Hamidreza T1 - Documentation and mapping with 3D point cloud processing T1 - Dokumentation und Kartierung mittels 3D-Punktwolkenverarbeitung N2 - 3D point clouds are a de facto standard for 3D documentation and modelling. The advances in laser scanning technology broaden the usability and access to 3D measurement systems. 3D point clouds are used in many disciplines such as robotics, 3D modelling, archeology and surveying. Scanners are able to acquire up to a million points per second to represent the environment with a dense point cloud. This represents the captured environment with a very high degree of detail. The combination of laser scanning technology with photography adds color information to the point clouds. Thus the environment is represented more realistically. Full 3D models of environments, without any occlusion, require multiple scans. Merging point clouds is a challenging process. This thesis presents methods for point cloud registration based on the panorama images generated from the scans. Image representation of point clouds introduces 2D image processing methods to 3D point clouds. Several projection methods for the generation of panorama maps of point clouds are presented in this thesis. Additionally, methods for point cloud reduction and compression based on the panorama maps are proposed. Due to the large amounts of data generated from the 3D measurement systems, these methods are necessary to improve the point cloud processing, transmission and archiving. This thesis introduces point cloud processing methods as a novel framework for the digitisation of archeological excavations. The framework replaces the conventional documentation methods for excavation sites.
It employs point clouds for the generation of the digital documentation of an excavation with the help of an archeologist on-site. The 3D point cloud is used not only for data representation but also for analysis and knowledge generation. Finally, this thesis presents an autonomous indoor mobile mapping system. The mapping system focuses on the sensor placement planning method. Capturing a complete environment requires several scans. The sensor placement planning method solves for the minimum number of required scans to digitise large environments. Combining this method with a navigation system on a mobile robot platform enables it to acquire data fully autonomously. This thesis introduces a novel hole detection method for point clouds to detect obscured parts of a captured environment. The sensor placement planning method selects the next scan position with the most coverage of the obscured environment. This reduces the required number of scans. The navigation system on the robot platform consists of path planning, path following and obstacle avoidance. This guarantees the safe navigation of the mobile robot platform between the scan positions. The sensor placement planning method is designed as a stand-alone process that could be used with a mobile robot platform for autonomous mapping of an environment or as an assistant tool for the surveyor on scanning projects. N2 - 3D-Punktwolken sind der de facto Standard bei der Dokumentation und Modellierung in 3D. Die Fortschritte in der Laserscanningtechnologie erweitern die Verwendbarkeit und die Verfügbarkeit von 3D-Messsystemen. 3D-Punktwolken werden in vielen Disziplinen verwendet, wie z.B. in der Robotik, 3D-Modellierung, Archäologie und Vermessung. Scanner sind in der Lage, bis zu einer Million Punkte pro Sekunde zu erfassen, um die Umgebung mit einer dichten Punktwolke abzubilden und mit einem hohen Detaillierungsgrad darzustellen. Die Kombination der Laserscanningtechnologie mit Methoden der Photogrammetrie fügt den Punktwolken Farbinformationen hinzu. Somit wird die Umgebung realistischer dargestellt. Vollständige 3D-Modelle der Umgebung ohne Verschattungen benötigen mehrere Scans. Punktwolken zusammenzufügen ist eine anspruchsvolle Aufgabe. Diese Arbeit stellt Methoden zur Punktwolkenregistrierung basierend auf aus den Scans erzeugten Panoramabildern vor. Die Darstellung einer Punktwolke als Bild bringt Methoden der 2D-Bildverarbeitung an 3D-Punktwolken heran. Der Autor stellt mehrere Projektionsmethoden zur Erstellung von Panoramabildern aus 3D-Punktwolken vor. Außerdem werden Methoden zur Punktwolkenreduzierung und -kompression basierend auf diesen Panoramabildern vorgeschlagen. Aufgrund der großen Datenmenge, die von 3D-Messsystemen erzeugt wird, sind diese Methoden notwendig, um die Punktwolkenverarbeitung, -übertragung und -archivierung zu verbessern. Diese Arbeit präsentiert Methoden der Punktwolkenverarbeitung als neuartige Ablaufstruktur für die Digitalisierung von archäologischen Ausgrabungen. Durch diesen Ablauf werden konventionelle Methoden auf Ausgrabungsstätten ersetzt. Er verwendet Punktwolken für die Erzeugung der digitalen Dokumentation einer Ausgrabung mithilfe eines Archäologen vor Ort. Die 3D-Punktwolke kommt nicht nur für die Anzeige der Daten, sondern auch für die Analyse und Wissensgenerierung zum Einsatz. Schließlich stellt diese Arbeit ein autonomes Indoor-Mobile-Mapping-System mit Fokus auf der Positionsplanung des Messgeräts vor.
Die Positionsplanung bestimmt die minimal benötigte Anzahl an Scans, um großflächige Umgebungen zu digitalisieren. Kombiniert mit einem Navigationssystem auf einer mobilen Roboterplattform ermöglicht diese Methode die vollautonome Datenerfassung. Diese Arbeit stellt eine neuartige Erkennungsmethode für Lücken in Punktwolken vor, um verdeckte Bereiche der erfassten Umgebung zu bestimmen. Die Positionsplanung bestimmt als nächste Scanposition diejenige mit der größten Abdeckung der verdeckten Umgebung. Das Navigationssystem des Roboters besteht aus der Pfadplanung, der Pfadverfolgung und einer Hindernisvermeidung um eine sichere Fortbewegung der mobilen Roboterplattform zwischen den Scanpositionen zu garantieren. Die Positionsplanungsmethode wurde als eigenständiges Verfahren entworfen, das auf einer mobilen Roboterplattform zur autonomen Kartierung einer Umgebung zum Einsatz kommen oder einem Vermesser bei einem Scanprojekt als Unterstützung dienen kann. T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 12 KW - 3D Punktwolke KW - Robotik KW - Registrierung KW - 3D Pointcloud KW - Feature Based Registration KW - Compression KW - Computer Vision KW - Robotics KW - Panorama Images Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-144493 SN - 978-3-945459-14-0 ER - TY - THES A1 - Klein, Dominik Werner T1 - Design and Evaluation of Components for Future Internet Architectures T1 - Entwurf und Bewertung von Komponenten für zukünftige Internet Architekturen N2 - Die derzeitige Internetarchitektur wurde nicht in einem geplanten Prozess konzipiert und entwickelt, sondern hat vielmehr eine evolutionsartige Entwicklung hinter sich. Auslöser für die jeweiligen Evolutionsschritte waren dabei meist aufstrebende Anwendungen, welche neue Anforderungen an die zugrundeliegende Netzarchitektur gestellt haben. Um diese Anforderungen zu erfüllen, wurden häufig neuartige Dienste oder Protokolle spezifiziert und in die bestehende Architektur integriert. Dieser Prozess ist jedoch meist mit hohem Aufwand verbunden und daher sehr träge, was die Entwicklung und Verbreitung innovativer Dienste beeinträchtigt. Derzeitig diskutierte Konzepte wie Software-Defined Networking (SDN) oder Netzvirtualisierung (NV) werden als eine Möglichkeit angesehen, die Altlasten der bestehenden Internetarchitektur zu lösen. Beiden Konzepten gemein ist die Idee, logische Netze über dem physikalischen Substrat zu betreiben. Diese logischen Netze sind hochdynamisch und können so flexibel an die Anforderungen der jeweiligen Anwendungen angepasst werden. Insbesondere erlaubt das Konzept der Virtualisierung intelligentere Netzknoten, was innovative neue Anwendungsfälle ermöglicht. Ein häufig in diesem Zusammenhang diskutierter Anwendungsfall ist die Mobilität sowohl von Endgeräten als auch von Diensten an sich. Die Mobilität der Dienste wird hierbei ausgenutzt, um die Zugriffsverzögerung oder die belegten Ressourcen im Netz zu reduzieren, indem die Dienste zum Beispiel in für den Nutzer geographisch nahe Datenzentren migriert werden. Neben den reinen Mechanismen bezüglich Dienst- und Endgerätemobilität sind in diesem Zusammenhang auch geeignete Überwachungslösungen relevant, welche die vom Nutzer wahrgenommene Dienstgüte bewerten können. Diese Lösungen liefern wichtige Entscheidungshilfen für die Migration oder überwachen mögliche Effekte der Migration auf die erfahrene Dienstgüte beim Nutzer. 
Im Falle von Video Streaming ermöglicht ein solcher Anwendungsfall die flexible Anpassung der Streaming-Topologie für mobile Nutzer, um so die Videoqualität unabhängig vom Zugangsnetz aufrechterhalten zu können. Im Rahmen dieser Doktorarbeit wird der beschriebene Anwendungsfall am Beispiel einer Video-Streaming-Anwendung näher analysiert und auftretende Herausforderungen werden diskutiert. Des Weiteren werden Lösungsansätze vorgestellt und bezüglich ihrer Effizienz ausgewertet. Im Detail beschäftigt sich die Arbeit mit der Leistungsanalyse von Mechanismen für die Dienstmobilität und entwickelt eine Architektur zur Optimierung der Dienstmobilität. Im Bereich Endgerätemobilität werden Verbesserungen entwickelt, welche die Latenz zwischen Endgerät und Dienst reduzieren oder die Konnektivität unabhängig vom Zugangsnetz gewährleisten. Im letzten Teilbereich wird eine Lösung zur Überwachung der Videoqualität im Netz entwickelt und bezüglich ihrer Genauigkeit analysiert. N2 - Today’s Internet architecture was not designed from scratch but was driven by new services that emerged during its development. Hence, it is often described as a patchwork where additional patches are applied in case new services require modifications to the existing architecture. This process, however, is rather slow and hinders the development of innovative network services with certain architecture or network requirements. Currently discussed technologies like Software-Defined Networking (SDN) or Network Virtualization (NV) are seen as key enabling technologies to overcome this rigid best effort legacy of the Internet. Both technologies offer the possibility to create virtual networks that accommodate the specific needs of certain services. These logical networks are operated on top of a physical substrate and facilitate flexible network resource allocation as physical resources can be added and removed depending on the current network and load situation. In addition, the clear separation and isolation of networks foster the development of application-aware networks that fulfill the special requirements of emerging applications. A prominent use case that benefits from these extended capabilities of the network is denoted as service component mobility. Services hosted on Virtual Machines (VMs) follow their consuming mobile endpoints, so that access latency as well as consumed network resources are reduced. Especially for applications like video streaming, which consume a large fraction of the available resources, this is an important means to relieve the resource constraints and eventually provide better service quality. Service and endpoint mobility both allow an adaptation of the used paths between an offered service, i.e., video streaming, and the consuming users in case the service quality drops due to network problems. To make evidence-based adaptations in case of quality drops, a scalable monitoring component is required that is able to monitor the service quality for video streaming applications with reliable accuracy. This monograph details challenges that arise when deploying a certain service, i.e., video streaming, in a future virtualized network architecture and discusses possible solutions. In particular, this work evaluates the performance of mechanisms enabling service mobility and presents an optimized architecture for service mobility.
Concerning endpoint mobility, improvements are developed that reduce the latency between endpoints and consumed services and ensure connectivity regardless of the used mobile access network. In the last part, a network-based video quality monitoring solution is developed and its accuracy is evaluated. T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 01/14 KW - Leistungsbewertung KW - Netzwerkmanagement KW - Virtuelles Netzwerk KW - Mobiles Internet KW - Service Mobility KW - Endpoint Mobility KW - Video Quality Monitoring Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-93134 SN - 1432-8801 ER - TY - JOUR A1 - Steininger, Michael A1 - Kobs, Konstantin A1 - Davidson, Padraig A1 - Krause, Anna A1 - Hotho, Andreas T1 - Density-based weighting for imbalanced regression JF - Machine Learning N2 - In many real-world settings, imbalanced data impedes model performance of learning algorithms, like neural networks, mostly for rare cases. This is especially problematic for tasks focusing on these rare occurrences. For example, when estimating precipitation, extreme rainfall events are scarce but important considering their potential consequences. While there are numerous well-studied solutions for classification settings, most of them cannot be applied to regression easily. Of the few solutions for regression tasks, barely any have explored cost-sensitive learning, which is known to have advantages compared to sampling-based methods in classification tasks. In this work, we propose a sample weighting approach for imbalanced regression datasets called DenseWeight and a cost-sensitive learning approach for neural network regression with imbalanced data called DenseLoss based on our weighting scheme. DenseWeight weights data points according to their target value rarities through kernel density estimation (KDE). DenseLoss adjusts each data point’s influence on the loss according to DenseWeight, giving rare data points more influence on model training compared to common data points. We show on multiple differently distributed datasets that DenseLoss significantly improves model performance for rare data points through its density-based weighting scheme. Additionally, we compare DenseLoss to the state-of-the-art method SMOGN, finding that our method mostly yields better performance. Our approach provides more control over model training as it enables us to actively decide on the trade-off between focusing on common or rare cases through a single hyperparameter, allowing the training of better models for rare data points. KW - supervised learning KW - imbalanced regression KW - cost-sensitive learning KW - sample weighting KW - kernel density estimation Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-269177 SN - 1573-0565 VL - 110 IS - 8 ER - TY - JOUR A1 - Seufert, Anika A1 - Schröder, Svenja A1 - Seufert, Michael T1 - Delivering User Experience over Networks: Towards a Quality of Experience Centered Design Cycle for Improved Design of Networked Applications JF - SN Computer Science N2 - To deliver the best user experience (UX), the human-centered design cycle (HCDC) serves as a well-established guideline to application developers. However, it does not yet cover network-specific requirements, which become increasingly crucial, as most applications deliver experience over the Internet. The missing network-centric view is provided by Quality of Experience (QoE), which could team up with UX towards an improved overall experience.
By considering QoE aspects during the development process, applications can become network-aware by design. In this paper, the Quality of Experience Centered Design Cycle (QoE-CDC) is proposed, which provides guidelines on how to design applications with respect to network-specific requirements and QoE. Its practical value is showcased for popular application types and validated by outlining the design of a new smartphone application. We show that combining HCDC and QoE-CDC will result in an application design that reaches a high UX and avoids QoE degradation. KW - user experience KW - human-centered design KW - design cycle KW - application design KW - quality of experience Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-271762 SN - 2661-8907 VL - 2 IS - 6 ER - TY - THES A1 - Nogatz, Falco T1 - Defining and Implementing Domain-Specific Languages with Prolog T1 - Definition und Implementierung domänenspezifischer Sprachen mit Prolog N2 - The landscape of today’s programming languages is manifold. With the diversity of applications, the difficulty of adequately addressing and specifying the used programs increases. This often leads to newly designed and implemented domain-specific languages. They enable domain experts to express knowledge in their preferred format, resulting in more readable and concise programs. Due to its flexible and declarative syntax without reserved keywords, the logic programming language Prolog is particularly suitable for defining and embedding domain-specific languages. This thesis addresses the questions and challenges that arise when integrating domain-specific languages into Prolog. We compare the two approaches to define them either externally or internally, and provide assisting tools for each. The grammar of a formal language is usually defined in the extended Backus–Naur form. In this work, we handle this formalism as a domain-specific language in Prolog, and define term expansions that allow translating it into equivalent definite clause grammars. We present the package library(dcg4pt) for SWI-Prolog, which enriches them with an additional argument to automatically process the term’s corresponding parse tree. To simplify the work with definite clause grammars, we visualise their application with a web-based tracer. The external integration of domain-specific languages requires the programmer to keep the grammar, parser, and interpreter in sync. In many cases, domain-specific languages can instead be directly embedded into Prolog by providing appropriate operator definitions. In addition, we propose syntactic extensions for Prolog to expand its expressiveness, for instance to state logic formulas with their connectives verbatim. This allows the use of all tools that were originally written for Prolog, for instance code linters and editors with syntax highlighting. We present the package library(plammar), a standard-compliant parser for Prolog source code, written in Prolog. It is able to automatically infer from example sentences the required operator definitions with their classes and precedences as well as the required Prolog language extensions. As a result, we can automatically answer the question: Is it possible to model these example sentences as valid Prolog clauses, and how?
We discuss and apply the two approaches to internal and external integrations for several domain-specific languages, namely the extended Backus–Naur form, GraphQL, XPath, and a controlled natural language to represent expert rules in if-then form. The created toolchain with library(dcg4pt) and library(plammar) yields new application opportunities for static Prolog source code analysis, which we also present. N2 - Die Landschaft der heutigen Programmiersprachen ist vielfältig. Mit ihren unterschiedlichen Anwendungsbereichen steigt zugleich die Schwierigkeit, die eingesetzten Programme adäquat anzusprechen und zu spezifizieren. Immer häufiger werden hierfür domänenspezifische Sprachen entworfen und implementiert. Sie ermöglichen Domänenexperten, Wissen in ihrem bevorzugten Format auszudrücken, was zu lesbareren Programmen führt. Durch ihre flexible und deklarative Syntax ohne vorbelegte Schlüsselwörter ist die logische Programmsprache Prolog besonders geeignet, um domänenspezifische Sprachen zu definieren und einzubetten. Diese Arbeit befasst sich mit den Fragen und Herausforderungen, die sich bei der Integration von domänenspezifischen Sprachen in Prolog ergeben. Wir vergleichen die zwei Ansätze, sie entweder extern oder intern zu definieren, und stellen jeweils Hilfsmittel zur Verfügung. Die Grammatik einer formalen Sprache wird häufig in der erweiterten Backus–Naur–Form definiert. Diesen Formalismus behandeln wir in dieser Arbeit als eine domänenspezifische Sprache in Prolog und definieren Termexpansionen, die es erlauben, ihn in äquivalente Definite Clause Grammars für Prolog zu übersetzen. Durch das Modul library(dcg4pt) werden sie um ein zusätzliches Argument erweitert, das den Syntaxbaum eines Terms automatisch erzeugt. Um die Arbeit mit Definite Clause Grammars zu erleichtern, visualisieren wir ihre Anwendung in einem webbasierten Tracer. Meist können domänenspezifische Sprachen jedoch auch mittels passender Operatordefinitionen direkt in Prolog eingebettet werden. Dies ermöglicht die Verwendung aller Werkzeuge, die ursprünglich für Prolog geschrieben wurden, z.B. zum Code-Linting und Syntax-Highlighting. In dieser Arbeit stellen wir den standardkonformen Prolog-Parser library(plammar) vor. Er ist in Prolog geschrieben und in der Lage, aus Beispielsätzen automatisch die erforderlichen Operatoren mit ihren Klassen und Präzedenzen abzuleiten. Um die Ausdruckskraft von Prolog noch zu erweitern, schlagen wir Ergänzungen zum ISO Standard vor. Sie erlauben es, weitere Sprachen direkt einzubinden, und werden ebenfalls von library(plammar) identifiziert. So ist es bspw. möglich, logische Formeln direkt mit den bekannten Symbolen für Konjunktion, Disjunktion, usw. als Prolog-Programme anzugeben. Beide Ansätze der internen und externen Integration werden für mehrere domänen-spezifische Sprachen diskutiert und beispielhaft für GraphQL, XPath, die erweiterte Backus–Naur–Form sowie Expertenregeln in Wenn–Dann–Form umgesetzt. Die vorgestellten Werkzeuge um library(dcg4pt) und library(plammar) ergeben zudem neue Anwendungsmöglichkeiten auch für die statische Quellcodeanalyse von Prolog-Programmen. 
KW - PROLOG KW - Domänenspezifische Sprache KW - logic programming KW - knowledge representation KW - definite clause grammars Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-301872 ER - TY - JOUR A1 - Müller, Konstantin A1 - Leppich, Robert A1 - Geiß, Christian A1 - Borst, Vanessa A1 - Pelizari, Patrick Aravena A1 - Kounev, Samuel A1 - Taubenböck, Hannes T1 - Deep neural network regression for normalized digital surface model generation with Sentinel-2 imagery JF - IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing N2 - In recent history, normalized digital surface models (nDSMs) have been constantly gaining importance as a means to solve large-scale geographic problems. High-resolution surface models are precious, as they can provide detailed information for a specific area. However, measurements with a high resolution are time consuming and costly. Only a few approaches exist to create high-resolution nDSMs for extensive areas. This article explores approaches to extract high-resolution nDSMs from low-resolution Sentinel-2 data, allowing us to derive large-scale models. We thereby utilize the advantages of Sentinel 2 being open access, having global coverage, and providing steady updates through a high repetition rate. Several deep learning models are trained to overcome the gap in producing high-resolution surface maps from low-resolution input data. With U-Net as a base architecture, we extend the capabilities of our model by integrating tailored multiscale encoders with differently sized kernels in the convolution as well as conformed self-attention inside the skip connection gates. Using pixelwise regression, our U-Net base models can achieve a mean height error of approximately 2 m. Moreover, through our enhancements to the model architecture, we reduce the model error by more than 7%. KW - Deep learning KW - multiscale encoder KW - sentinel KW - surface model Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-349424 SN - 1939-1404 VL - 16 ER - TY - JOUR A1 - Ali, Qasim A1 - Montenegro, Sergio T1 - Decentralized control for scalable quadcopter formations JF - International Journal of Aerospace Engineering N2 - An innovative framework has been developed for teamwork of two quadcopter formations, each having its specified formation geometry, assigned task, and matching control scheme. Position control for quadcopters in one of the formations has been implemented through a Linear Quadratic Regulator Proportional Integral (LQR PI) control scheme based on explicit model following scheme. Quadcopters in the other formation are controlled through LQR PI servomechanism control scheme. These two control schemes are compared in terms of their performance and control effort. Both formations are commanded by respective ground stations through virtual leaders. Quadcopters in formations are able to track desired trajectories as well as hovering at desired points for selected time duration. In case of communication loss between ground station and any of the quadcopters, the neighboring quadcopter provides the command data, received from the ground station, to the affected unit. Proposed control schemes have been validated through extensive simulations using MATLAB®/Simulink® that provided favorable results. 
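As a rough illustration of the LQR part of such a position controller (a sketch only, using an assumed per-axis double-integrator model and assumed weights, not the authors' actual quadcopter model or their LQR PI and model-following augmentation):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed toy per-axis model: state x = [position, velocity], input u = acceleration.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Assumed weights; Q penalizes tracking error, R penalizes control effort.
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])

# Solve the continuous-time algebraic Riccati equation and form the LQR gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P

def control(state, reference):
    """State-feedback law u = -K (x - x_ref) for a single axis."""
    return float(-K @ (state - reference))

# Example: a follower at 0.5 m tracking a (virtual) leader position of 1.0 m.
u = control(np.array([0.5, 0.0]), np.array([1.0, 0.0]))
```

A full formation controller as described above would additionally include the integral (PI) term, the servomechanism or explicit model-following structure, and the exchange of leader data between the quadcopters.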
KW - scalable quadcopter Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-146704 VL - 2016 ER - TY - RPRT A1 - Raffeck, Simon A1 - Geißler, Stefan A1 - Hoßfeld, Tobias T1 - DBM: Decentralized Burst Mitigation for Self-Organizing LoRa Deployments T2 - Würzburg Workshop on Next-Generation Communication Networks (WueWoWas'22) N2 - This work proposes a novel approach to disperse dense transmission intervals and reduce bursty traffic patterns without the need for centralized control. Furthermore, by keeping the mechanism as close to the Long Range Wide Area Network (LoRaWAN) standard as possible, the suggested mechanism can be deployed within existing networks and can even be co-deployed with other devices. KW - Datennetz KW - LoRa Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-280809 ER - TY - RPRT A1 - Rossi, Angelo Pio A1 - Maurelli, Francesco A1 - Unnithan, Vikram A1 - Dreger, Hendrik A1 - Mathewos, Kedus A1 - Pradhan, Nayan A1 - Corbeanu, Dan-Andrei A1 - Pozzobon, Riccardo A1 - Massironi, Matteo A1 - Ferrari, Sabrina A1 - Pernechele, Claudia A1 - Paoletti, Lorenzo A1 - Simioni, Emanuele A1 - Pajola, Maurizio A1 - Santagata, Tommaso A1 - Borrmann, Dorit A1 - Nüchter, Andreas A1 - Bredenbeck, Anton A1 - Zevering, Jasper A1 - Arzberger, Fabian A1 - Reyes Mantilla, Camilo Andrés T1 - DAEDALUS - Descent And Exploration in Deep Autonomy of Lava Underground Structures BT - Open Space Innovation Platform (OSIP) Lunar Caves-System Study N2 - The DAEDALUS mission concept aims at exploring and characterising the entrance and initial part of Lunar lava tubes with a compact, tightly integrated spherical robotic device, with a complementary payload set and autonomous capabilities. The mission concept addresses specifically the identification and characterisation of potential resources for future ESA exploration, the local environment of the subsurface and its geologic and compositional structure. A sphere is ideally suited to protect sensors and scientific equipment in rough, uneven environments. It will house laser scanners, cameras and ancillary payloads. The sphere will be lowered into the skylight and will explore the entrance shaft, associated caverns and conduits. Lidar (light detection and ranging) systems produce 3D models with high spatial accuracy independent of lighting conditions and visible features. Hence, this will be the primary exploration toolset within the sphere. The additional payload that can be accommodated in the robotic sphere consists of camera systems with panoramic lenses and scanners such as multi-wavelength or single-photon scanners. A moving mass will trigger movements. The tether for lowering the sphere will be used for data communication and powering the equipment during the descent phase. Furthermore, the tether-sphere connector will host a WiFi access point, such that data from the conduit can be transferred to the surface relay station. During the exploration phase, the robot will be disconnected from the cable and will use wireless communication. Emergency autonomy software will ensure that, in case of loss of communication, the robot will continue the nominal mission.
T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 21 KW - Lunar Caves KW - Spherical Robot KW - Lunar Exploration KW - Mapping KW - 3D Laser Scanning KW - Mond KW - Daedalus-Projekt KW - Lava Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-227911 SN - 978-3-945459-33-1 SN - 1868-7466 ER - TY - JOUR A1 - Du, Shitong A1 - Lauterbach, Helge A. A1 - Li, Xuyou A1 - Demisse, Girum G. A1 - Borrmann, Dorit A1 - Nüchter, Andreas T1 - Curvefusion — A Method for Combining Estimated Trajectories with Applications to SLAM and Time-Calibration JF - Sensors N2 - Mapping and localization of mobile robots in an unknown environment are essential for most high-level operations like autonomous navigation or exploration. This paper presents a novel approach for combining estimated trajectories, namely curvefusion. The robot used in the experiments is equipped with a horizontally mounted 2D profiler, a constantly spinning 3D laser scanner and a GPS module. The proposed algorithm first combines trajectories from different sensors to optimize poses of the planar three degrees of freedom (DoF) trajectory, which is then fed into continuous-time simultaneous localization and mapping (SLAM) to further improve the trajectory. While state-of-the-art multi-sensor fusion methods mainly focus on probabilistic methods, our approach instead adopts a deformation-based method to optimize poses. To this end, a similarity metric for curved shapes is introduced into the robotics community to fuse the estimated trajectories. Additionally, a shape-based point correspondence estimation method is applied to the multi-sensor time calibration. Experiments show that the proposed fusion method achieves comparatively better accuracy, even if the error of the trajectory before fusion is large, which demonstrates that our method can still maintain a certain degree of accuracy in an environment where typical pose estimation methods perform poorly. In addition, the proposed time-calibration method also achieves high accuracy in estimating point correspondences. KW - mapping KW - continuous-time SLAM KW - deformation-based method KW - time calibration Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-219988 SN - 1424-8220 VL - 20 IS - 23 ER - TY - RPRT A1 - Metzger, Florian T1 - Crowdsensed QoE for the community - a concept to make QoE assessment accessible N2 - In recent years, several community testbeds as well as participatory sensing platforms have successfully established themselves to provide open data to everyone interested. Each of them has a specific goal in mind, ranging from collecting radio coverage data to collecting environmental and radiation data. Such data can be used by the community in their decision making, whether it is subscribing to a specific mobile phone service that provides good coverage in an area or finding a sunny and warm region for the summer holidays. However, the existing platforms usually limit themselves to directly measurable network QoS. If such a crowdsourced data set provided more in-depth derived measures, this would enable even better decision making. A community-driven crowdsensing platform that derives spatial application-layer user experience from resource-friendly bandwidth estimates would be such a case; video streaming services come to mind as a prime example.
In this paper, we present a concept for such a system based on an initial prototype that eases the collection of data necessary to determine mobile-specific QoE at large scale. In addition, we reason why the simple quality metric proposed here can hold its own. KW - Quality of Experience KW - Crowdsourcing KW - Crowdsensing KW - QoE Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-203748 N1 - Originally written in 2017, but never published. ER - TY - THES A1 - Fink, Martin T1 - Crossings, Curves, and Constraints in Graph Drawing T1 - Kreuzungen, Kurven und Constraints beim Zeichnen von Graphen N2 - In many cases, problems, data, or information can be modeled as graphs. Graphs can be used as a tool for modeling in any case where connections between distinguishable objects occur. Any graph consists of a set of objects, called vertices, and a set of connections, called edges, such that any edge connects a pair of vertices. For example, a social network can be modeled by a graph by transforming the users of the network into vertices and friendship relations between users into edges. Physical networks like computer networks or transportation networks, for example, the metro network of a city, can also be seen as graphs. To make graphs, and thereby the modeled data, easily understandable for users, we need a visualization. Graph drawing deals with algorithms for visualizing graphs. This thesis investigates in particular the use of crossings and curves in graph drawing problems under additional constraints. The constraints that occur in the problems investigated in this thesis restrict, in particular, the positions of (a part of) the vertices; this is done either as a hard constraint or as an optimization criterion. N2 - Viele Probleme, Informationen oder Daten lassen sich mit Hilfe von Graphen modellieren. Graphen können überall dort eingesetzt werden, wo Verbindungen zwischen unterscheidbaren Objekten auftreten. Ein Graph besteht aus einer Menge von Objekten, genannt Knoten, und einer Menge von Verbindungen, genannt Kanten, zwischen je einem Paar von Knoten. Ein soziales Netzwerk lässt sich etwa als Graph modellieren, indem die teilnehmenden Personen als Knoten und Freundschaftsbeziehungen als Kanten dargestellt werden. Physikalische Netzwerke wie etwa Computernetze oder Transportnetze - wie beispielsweise das U-Bahnliniennetz einer Stadt - lassen sich ebenfalls als Graph auffassen. Um Graphen und die damit modellierten Daten gut erfassen zu können, benötigen wir eine Visualisierung. Das Graphenzeichnen befasst sich mit dem Entwickeln von Algorithmen zur Visualisierung von Graphen. Diese Dissertation beschäftigt sich insbesondere mit dem Einsatz von Kreuzungen und Kurven beim Zeichnen von Graphen unter Nebenbedingungen (Constraints). Die in den untersuchten Problemen auftretenden Nebenbedingungen sorgen unter anderem dafür, dass die Lage eines Teils der Knoten - als feste Anforderung oder als Optimierungskriterium - vorgegeben ist.
KW - Graphenzeichnen KW - Kreuzung KW - Kurve KW - Graph KW - graph drawing KW - crossing minimization KW - curves KW - labeling KW - metro map KW - Kreuzungsminimierung KW - Landkartenbeschriftung KW - U-Bahnlinienplan Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-98235 SN - 978-3-95826-002-3 (print) SN - 978-3-95826-003-0 (online) PB - Würzburg University Press ER - TY - JOUR A1 - Steininger, Michael A1 - Abel, Daniel A1 - Ziegler, Katrin A1 - Krause, Anna A1 - Paeth, Heiko A1 - Hotho, Andreas T1 - ConvMOS: climate model output statistics with deep learning JF - Data Mining and Knowledge Discovery N2 - Climate models are the tool of choice for scientists researching climate change. Like all models, they suffer from errors, particularly systematic and location-specific representation errors. One way to reduce these errors is model output statistics (MOS), where the model output is fitted to observational data with machine learning. In this work, we assess the use of convolutional deep learning climate MOS approaches and present the ConvMOS architecture, which is specifically designed based on the observation that there are systematic and location-specific errors in the precipitation estimates of climate models. We apply ConvMOS models to the simulated precipitation of the regional climate model REMO, showing that a combination of per-location model parameters for reducing location-specific errors and global model parameters for reducing systematic errors is indeed beneficial for MOS performance. We find that ConvMOS models can reduce errors considerably and perform significantly better than three commonly used MOS approaches and plain ResNet and U-Net models in most cases. Our results show that non-linear MOS models underestimate the number of extreme precipitation events, which we alleviate by training models specialized towards extreme precipitation events with the imbalanced regression method DenseLoss. While we consider climate MOS, we argue that aspects of ConvMOS may also be beneficial in other domains with geospatial data, such as air pollution modeling or weather forecasts. KW - Klima KW - Modell KW - Deep learning KW - Neuronales Netz KW - climate KW - neural networks KW - model output statistics Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-324213 SN - 1384-5810 VL - 37 IS - 1 ER - TY - JOUR A1 - Glémarec, Yann A1 - Lugrin, Jean-Luc A1 - Bosser, Anne-Gwenn A1 - Buche, Cédric A1 - Latoschik, Marc Erich T1 - Controlling the stage: a high-level control system for virtual audiences in Virtual Reality JF - Frontiers in Virtual Reality N2 - This article presents a novel method for controlling a virtual audience system (VAS) in a Virtual Reality (VR) application, called STAGE, which was originally designed for supervised public speaking training in university seminars dedicated to the preparation and delivery of scientific talks. We are interested in creating pedagogical narratives: narratives encompass affective phenomena, and rather than organizing events that change the course of a training scenario, pedagogical plans using our system focus on organizing the affects it arouses in the trainees.
Efficiently controlling a virtual audience towards a specific training objective while evaluating the speaker’s performance presents a challenge for a seminar instructor: controlling the virtual audience, evaluating the speaker’s performance, and adjusting the audience so that it quickly reacts to the user’s behaviors and interactions places high cognitive and physical demands on the instructor. It is indeed a critical limitation of a number of existing systems that they rely on a Wizard of Oz approach, where the tutor drives the audience in reaction to the user’s performance. We address this problem by integrating with a VAS a high-level control component for tutors, which allows using predefined audience behavior rules, defining custom ones, as well as intervening during run-time for finer control of the unfolding of the pedagogical plan. At its core, this component offers a tool to program, select, modify and monitor interactive training narratives using a high-level representation. STAGE offers the following features: i) a high-level API to program pedagogical narratives focusing on a specific public speaking situation and training objectives, ii) an interactive visualization interface, iii) computation and visualization of user metrics, iv) a semi-autonomous virtual audience composed of virtual spectators with automatic reactions to the speaker and surrounding spectators while following the pedagogical plan, and v) the possibility for the instructor to embody a virtual spectator to ask questions or guide the speaker from within the Virtual Environment. We present here the design and implementation of the tutoring system and its integration in STAGE, and discuss its reception by end-users. KW - virtual reality KW - virtual agent KW - behavior perception KW - public speaking KW - education Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-284601 SN - 2673-4192 VL - 3 ER - TY - THES A1 - Löffler, Andre T1 - Constrained Graph Layouts: Vertices on the Outer Face and on the Integer Grid T1 - Graphzeichnen unter Nebenbedingungen: Knoten auf der Außenfacette und mit ganzzahligen Koordinaten N2 - Constraining graph layouts - that is, restricting the placement of vertices and the routing of edges to obey certain constraints - is common practice in graph drawing. In this book, we discuss algorithmic results on two different restriction types: placing vertices on the outer face and on the integer grid. For the first type, we look into the outer k-planar and outer k-quasi-planar graphs, as well as give a linear-time algorithm to recognize full and closed outer k-planar graphs using Monadic Second-order Logic. For the second type, we consider the problem of transferring a given planar drawing onto the integer grid while preserving the original drawing's topology; we also generalize a variant of Cauchy's rigidity theorem for orthogonal polyhedra of genus 0 to those of arbitrary genus. N2 - Das Einschränken von Zeichnungen von Graphen, sodass diese bestimmte Nebenbedingungen erfüllen - etwa solche, die das Platzieren von Knoten oder den Verlauf von Kanten beeinflussen - ist im Graphzeichnen allgegenwärtig. In dieser Arbeit befassen wir uns mit algorithmischen Resultaten zu zwei speziellen Einschränkungen, nämlich dem Platzieren von Knoten entweder auf der Außenfacette oder auf ganzzahligen Koordinaten.
Für die erste Einschränkung untersuchen wir die außen k-planaren und außen k-quasi-planaren Graphen und geben einen auf monadischer Prädikatenlogik zweiter Stufe basierenden Algorithmus an, der überprüft, ob ein Graph voll außen k-planar ist. Für die zweite Einschränkung untersuchen wir das Problem, eine gegebene planare Zeichnung eines Graphen auf das ganzzahlige Koordinatengitter zu transportieren, ohne dabei die Topologie der Zeichnung zu verändern; außerdem generalisieren wir eine Variante von Cauchys Starrheitssatz für orthogonale Polyeder von Geschlecht 0 auf solche von beliebigem Geschlecht. KW - Graphenzeichnen KW - Komplexität KW - Algorithmus KW - Algorithmische Geometrie KW - Kombinatorik KW - Planare Graphen KW - Polyeder KW - Konvexe Zeichnungen Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-215746 SN - 978-3-95826-146-4 SN - 978-3-95826-147-1 N1 - Parallel erschienen als Druckausgabe in Würzburg University Press, ISBN 978-3-95826-146-4, 32,90 EUR PB - Würzburg University Press CY - Würzburg ET - 1. Auflage ER - TY - JOUR A1 - Latoschik, Marc Erich A1 - Wienrich, Carolin T1 - Congruence and plausibility, not presence: pivotal conditions for XR experiences and effects, a novel approach JF - Frontiers in Virtual Reality N2 - Presence is often considered the most important quale describing the subjective feeling of being in a computer-generated and/or computer-mediated virtual environment. The identification and separation of orthogonal presence components, i.e., the place illusion and the plausibility illusion, has been an accepted theoretical model describing Virtual Reality (VR) experiences for some time. This perspective article challenges this presence-oriented VR theory. First, we argue that a place illusion cannot be the major construct to describe the much wider scope of virtual, augmented, and mixed reality (VR, AR, MR; or XR for short). Second, we argue that there is no plausibility illusion but merely plausibility, and we derive the place illusion caused by the congruent and plausible generation of spatial cues and similarly for all the current model’s so-defined illusions. Finally, we propose congruence and plausibility to become the central essential conditions in a novel theoretical model describing XR experiences and effects. KW - XR KW - experience KW - presence KW - congruence KW - plausibility KW - coherence KW - theory KW - prediction Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-284787 SN - 2673-4192 VL - 3 ER - TY - JOUR A1 - Böhler, Elmar A1 - Creignou, Nadia A1 - Galota, Matthias A1 - Reith, Steffen A1 - Schnoor, Henning A1 - Vollmer, Heribert T1 - Complexity Classifications for Different Equivalence and Audit Problems for Boolean Circuits JF - Logical Methods in Computer Science N2 - We study Boolean circuits as a representation of Boolean functions and consider different equivalence, audit, and enumeration problems. For a number of restricted sets of gate types (bases), we obtain efficient algorithms, while for all other gate types we show that these problems are at least NP-hard. KW - hierarchy KW - satisfiability problems Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-131121 VL - 8 IS - 3:27 SP - 1 EP - 25 ER - TY - THES A1 - Kosub, Sven T1 - Complexity and Partitions T1 - Komplexität von Partitionen N2 - Computational complexity theory usually investigates the complexity of sets, i.e., the complexity of partitions into two parts.
But often it is more appropriate to represent natural problems by partitions into more than two parts. A particularly interesting class of such problems consists of classification problems for relations. For instance, a binary relation R typically defines a partitioning of the set of all pairs (x,y) into four parts, classifiable according to the cases where R(x,y) and R(y,x) hold, only R(x,y) or only R(y,x) holds, or even neither R(x,y) nor R(y,x) is true. By means of concrete classification problems such as Graph Embedding or Entailment (for propositional logic), this thesis systematically develops tools, in the shape of the Boolean hierarchy of NP-partitions and its refinements, for the qualitative analysis of the complexity of partitions generated by NP-relations. The Boolean hierarchy of NP-partitions is introduced as a generalization of the well-known and well-studied Boolean hierarchy (of sets) over NP. Whereas the latter hierarchy has a very simple structure, the situation is much more complicated for the case of partitions into at least three parts. To get an idea of this hierarchy, alternative descriptions of the partition classes are given in terms of finite, labeled lattices. Based on these characterizations, the Embedding Conjecture is established, providing complete information on the structure of the hierarchy. This conjecture is supported by several results. A natural extension of the Boolean hierarchy of NP-partitions emerges from the lattice characterization of its classes by considering partition classes generated by finite, labeled posets. It turns out that all significant ideas translate from the case of lattices. The induced refined Boolean hierarchy of NP-partitions enables us to capture the complexity of certain relations (such as Graph Embedding) more accurately and to describe projectively closed partition classes. N2 - Die klassische Komplexitätstheorie untersucht in erster Linie die Komplexität von Mengen, d.h. von Zerlegungen (Partitionen) einer Grundmenge in zwei Teile. Häufig werden aber natürliche Fragestellungen viel angemessener durch Zerlegungen in mehr als zwei Teile abgebildet. Eine besonders interessante Klasse solcher Fragestellungen sind Klassifikationsprobleme für Relationen. Zum Beispiel definiert eine Binärrelation R typischerweise eine Zerlegung der Menge aller Paare (x,y) in vier Teile, klassifizierbar danach, ob R(x,y) und R(y,x), R(x,y) aber nicht R(y,x), nicht R(x,y) aber dafür R(y,x) oder weder R(x,y) noch R(y,x) gilt. Anhand konkreter Klassifikationsprobleme, wie zum Beispiel der Einbettbarkeit von Graphen und der Folgerbarkeit für aussagenlogische Formeln, werden in der Dissertation Instrumente für eine qualitative Analyse der Komplexität von Partitionen, die von NP-Relationen erzeugt werden, in Form der Booleschen Hierarchie der NP-Partitionen und ihrer Erweiterungen systematisch entwickelt. Die Boolesche Hierarchie der NP-Partitionen wird als Verallgemeinerung der bereits bekannten und wohluntersuchten Booleschen Hierarchie über NP eingeführt. Während die letztere Hierarchie eine sehr einfache Struktur aufweist, stellt sich die Boolesche Hierarchie der NP-Partitionen im Falle von Zerlegungen in mindestens 3 Teile als sehr viel komplizierter heraus. Um einen Überblick über diese Hierarchien zu erlangen, werden alternative Beschreibungen der Klassen der Hierarchien mittels endlicher, bewerteter Verbände angegeben.
Darauf aufbauend wird die Einbettungsvermutung aufgestellt, die uns die vollständige Information über die Struktur der Hierarchie liefert. Diese Vermutung wird mit verschiedenen Resultaten untermauert. Eine Erweiterung der Booleschen Hierarchie der NP-Partitionen ergibt sich auf natürliche Weise aus der Charakterisierung ihrer Klassen durch Verbände. Dazu werden Klassen betrachtet, die von endlichen, bewerteten Halbordnungen erzeugt werden. Es zeigt sich, dass die wesentlichen Konzepte vom Verbandsfall übertragen werden können. Die entstehende Verfeinerung der Booleschen Hierarchie der NP-Partitionen ermöglicht die exaktere Analyse der Komplexität bestimmter Relationen (wie zum Beispiel der Einbettbarkeit von Graphen) und die Beschreibung projektiv abgeschlossener Partitionenklassen. KW - Partition KW - Boolesche Hierarchie KW - Komplexitätsklasse NP KW - Theoretische Informatik KW - Komplexitätstheorie KW - NP KW - Boolesche Hierarchie KW - Partitionen KW - Verbände KW - Halbordnungen KW - Theoretical computer science KW - computational complexity KW - NP KW - Boolean hierarchy KW - partitions KW - lattices KW - posets Y1 - 2001 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-2808 ER -