TY - JOUR
A1 - Li, Ningbo
A1 - Guan, Lianwu
A1 - Gao, Yanbin
A1 - Du, Shitong
A1 - Wu, Menghao
A1 - Guang, Xingxing
A1 - Cong, Xiaodan
T1 - Indoor and outdoor low-cost seamless integrated navigation system based on the integration of INS/GNSS/LIDAR system
JF - Remote Sensing
N2 - The Global Navigation Satellite System (GNSS) provides accurate positioning data for vehicular navigation in open outdoor environments. In indoor environments, Light Detection and Ranging (LIDAR) Simultaneous Localization and Mapping (SLAM) establishes a two-dimensional map and provides positioning data. However, LIDAR SLAM provides only relative positioning data and cannot directly supply the latitude and longitude of the current position. Consequently, GNSS/Inertial Navigation System (INS) integrated navigation can be employed outdoors, while INS/LIDAR integrated navigation is used indoors, and a corresponding switching scheme keeps indoor and outdoor positioning consistent. In addition, when the vehicle enters a garage, the GNSS signal is degraded for a while before it disappears; such ambiguous GNSS satellite signals lead to continuous distortion or overall drift of the indoor positioning trajectory. Therefore, an INS/LIDAR seamless integrated navigation algorithm and a switching algorithm for the vehicle navigation system are designed. According to the experimental data, the positioning accuracy of the INS/LIDAR navigation algorithm in the simulated environmental experiment is 50% higher than that of the Dead Reckoning (DR) algorithm. Moreover, the switching algorithm developed on top of the INS/LIDAR integrated navigation algorithm achieves an 80% success rate in navigation mode switching.
KW - vehicular navigation
KW - GNSS/INS integrated navigation
KW - INS/LIDAR integrated navigation
KW - switching navigation
Y1 - 2020
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-216229
SN - 2072-4292
VL - 12
IS - 19
ER -
TY - RPRT
A1 - Lhamo, Osel
A1 - Nguyen, Giang T.
A1 - Fitzek, Frank H. P.
T1 - Virtual Queues for QoS Compliance of Haptic Data Streams in Teleoperation
T2 - Würzburg Workshop on Next-Generation Communication Networks (WueWoWas'22)
N2 - The Tactile Internet aims to allow perceived real-time interactions between humans and machines. This requires satisfying the stringent latency requirements of haptic data streams whose data rates vary drastically as a result of perceptual codecs, which poses a complex problem for the underlying network infrastructure: fulfilling a pre-defined level of Quality of Service (QoS). However, novel networking hardware with data plane programming capability allows packets to be processed differently and opens up new opportunities. For example, a dynamic and network-aware resource management strategy can help satisfy the QoS requirements of flows of different priorities without wasting precious bandwidth. This paper introduces virtual queues for service differentiation between different types of traffic streams, leveraging the Protocol Independent Switch Architecture (PISA). We propose coordinating the management of all queues and dynamically adapting their sizes to minimize packet loss and delay due to network congestion and to ensure QoS compliance.
KW - Datennetz
KW - data plane programming
KW - software defined network
KW - P4
KW - virtual queue
KW - haptic data
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-280762
ER -
TY - JOUR
A1 - Lesch, Veronika
A1 - König, Maximilian
A1 - Kounev, Samuel
A1 - Stein, Anthony
A1 - Krupitzer, Christian
T1 - Tackling the rich vehicle routing problem with nature-inspired algorithms
JF - Applied Intelligence
N2 - In the last decades, the classical Vehicle Routing Problem (VRP), i.e., assigning a set of orders to vehicles and planning their routes, has been intensively researched. Since the mere assignment of orders to vehicles and the planning of their routes is already an NP-complete problem, algorithms applied in practice often fail to take into account the constraints and restrictions of real-world applications, the so-called rich VRP (rVRP), and are limited to single aspects. In this work, we incorporate the most relevant real-world constraints and requirements. We propose a two-stage strategy and a Timeline algorithm for time windows and pause times, and apply a Genetic Algorithm (GA) and Ant Colony Optimization (ACO) individually to the problem to find optimal solutions. Our evaluation on eight different problem instances against four state-of-the-art algorithms shows that our approach handles all given constraints in a reasonable time.
KW - logistics
KW - rich vehicle routing problem
KW - ant-colony optimization
KW - genetic algorithm
KW - real-world application
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-268942
SN - 1573-7497
VL - 52
ER -
TY - THES
A1 - Lehrieder, Frank
T1 - Performance Evaluation and Optimization of Content Distribution using Overlay Networks
T1 - Leistungsbewertung und Optimierung von Overlay Netzwerken zum Verteilen großer Datenmengen
N2 - This work presents a performance evaluation and optimization of so-called overlay networks for content distribution in the Internet. Chapter 1 describes the importance of such networks in today's Internet, for example, for the transmission of video content. The focus of this work is on overlay networks based on the peer-to-peer principle. These are characterized by the fact that users who download content also contribute to the distribution process by sharing parts of the data with other users. This enables efficient content distribution because each user not only consumes resources in the system but also contributes resources of their own. Chapter 2 of the monograph contains a detailed description of the functionality of today's most popular overlay network, BitTorrent. It explains the various components and their interaction. This is followed by an illustration of why such overlay networks are problematic for Internet service providers (ISPs). The reason lies in the large amount of inter-ISP traffic that these overlay networks produce. Since this inter-ISP traffic leads to high costs for ISPs, they try to reduce it through improved mechanisms for overlay networks. One optimization approach is the use of topology awareness within the overlay networks. It provides users of the overlay networks with information about the underlying physical network topology, which allows them to avoid inter-ISP traffic by exchanging data preferentially with other users connected to the same ISP. Another approach to saving inter-ISP traffic is caching. Here, the ISP provides additional computers in its network, called caches, which store copies of popular content. The users of this ISP can then obtain such content from the cache. This avoids retrieving the content from locations outside the ISP's network and thus saves costly inter-ISP traffic. The third chapter of the thesis presents the results of a comprehensive measurement study of the overlay networks found in today's Internet. After a short description of the measurement methodology, the results of the measurements are described. They cover a variety of characteristics of current P2P overlay networks in the Internet, including the popularity of content, i.e., how many users are interested in specific content, the evolution of this popularity, and the size of the files. The distribution of users across the Internet is investigated in detail, with special attention to the number of users that exchange a particular file within the same ISP. On the basis of these measurement results, an estimate of the traffic savings that can be achieved through topology awareness is derived. This new estimate is of scientific and practical importance, since it is not limited to individual ISPs and files but considers the whole Internet and the total amount of data exchanged in overlay networks. Finally, the characteristics of regional content are considered, whose popularity is limited to certain parts of the Internet. This is, for example, the case for videos in German, Italian, or French. Chapter 4 of the thesis is devoted to the optimization of overlay networks for content distribution through caching. It presents a deterministic flow model that describes the influence of caches. On the basis of this model, it derives an estimate of the inter-ISP traffic generated by an overlay network and of the share that caches can save. The results show that the influence of caches depends on the structure of the overlay networks and that caches can, under certain circumstances, even increase inter-ISP traffic. The described model is thus an important tool for ISPs to decide for which overlay networks caches are useful and to dimension them appropriately. Chapter 5 summarizes the work and emphasizes the importance of the findings. In addition, it explains how the findings can be applied to the optimization of future overlay networks, with special attention to the growing importance of video-on-demand and real-time video transmission.
T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 01/13
KW - Leistungsbewertung
KW - Verteiltes System
KW - Overlay-Netz
KW - Overlay Netzwerke
KW - Overlay networks
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-76018
ER -
TY - RPRT
A1 - Le, Duy Thanh
A1 - Großmann, Marcel
A1 - Krieger, Udo R.
T1 - Cloudless Resource Monitoring in a Fog Computing System Enabled by an SDN/NFV Infrastructure
T2 - Würzburg Workshop on Next-Generation Communication Networks (WueWoWas'22)
N2 - Today's advanced Internet-of-Things applications raise technical challenges for cloud, edge, and fog computing. The design of an efficient, virtualized, context-aware, self-configuring orchestration system for a fog computing system constitutes a major development effort within this very innovative area of research. In this paper, we describe the architecture and relevant implementation aspects of a cloudless resource monitoring system interworking with an SDN/NFV infrastructure. It realizes the basic monitoring component of the fundamental MAPE-K principles employed in autonomic computing. Here, we present the hierarchical layering and functionality of the underlying fog nodes used to generate a working prototype of an intelligent, self-managed orchestrator for advanced IoT applications and services. The resulting system can automatically monitor various performance aspects of the resource allocation among the multiple hosts of a fog computing system interconnected by SDN.
KW - Datennetz
KW - fog computing
KW - SDN/NFV
KW - container virtualization
KW - autonomic orchestration
KW - docker
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-280723
ER -
TY - JOUR
A1 - Latoschik, Marc Erich
A1 - Wienrich, Carolin
T1 - Congruence and plausibility, not presence: pivotal conditions for XR experiences and effects, a novel approach
JF - Frontiers in Virtual Reality
N2 - Presence is often considered the most important quale describing the subjective feeling of being in a computer-generated and/or computer-mediated virtual environment. The identification and separation of orthogonal presence components, i.e., the place illusion and the plausibility illusion, has been an accepted theoretical model describing Virtual Reality (VR) experiences for some time. This perspective article challenges this presence-oriented VR theory. First, we argue that a place illusion cannot be the major construct to describe the much wider scope of virtual, augmented, and mixed reality (VR, AR, MR; or XR for short). Second, we argue that there is no plausibility illusion but merely plausibility, and we derive the place illusion as being caused by the congruent and plausible generation of spatial cues, and similarly for all the illusions so defined in the current model. Finally, we propose congruence and plausibility as the central essential conditions in a novel theoretical model describing XR experiences and effects.
KW - XR
KW - experience
KW - presence
KW - congruence
KW - plausibility
KW - coherence
KW - theory
KW - prediction
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-284787
SN - 2673-4192
VL - 3
ER -
TY - JOUR
A1 - Landeck, Maximilian
A1 - Alvarez Igarzábal, Federico
A1 - Unruh, Fabian
A1 - Habenicht, Hannah
A1 - Khoshnoud, Shiva
A1 - Wittmann, Marc
A1 - Lugrin, Jean-Luc
A1 - Latoschik, Marc Erich
T1 - Journey through a virtual tunnel: Simulated motion and its effects on the experience of time
JF - Frontiers in Virtual Reality
N2 - This paper examines the relationship between time and motion perception in virtual environments. Previous work has shown that the perception of motion can affect the perception of time. We developed a virtual environment that simulates motion in a tunnel and measured its effects on the estimation of the duration of time, the speed at which perceived time passes, and the illusion of self-motion, also known as vection. When large areas of the visual field move in the same direction, vection can occur; observers often perceive this as self-motion rather than motion of the environment. To generate different levels of vection and investigate its effects on time perception, we developed an abstract procedural tunnel generator. The generator can simulate different speeds and densities of tunnel sections (visibly distinguishable sections that form the virtual tunnel), as well as the degree of embodiment of the user avatar (with or without virtual hands). We exposed participants to various tunnel simulations with different durations, speeds, and densities in a remote desktop study and a virtual reality (VR) laboratory study. Time passed subjectively faster under high-speed and high-density conditions in both studies. The experience of self-motion was also stronger under high-speed and high-density conditions. Both studies revealed a significant correlation between the perceived passage of time and perceived self-motion. Subjects in the virtual reality study reported a stronger self-motion experience, a faster perceived passage of time, and shorter time estimates than subjects in the desktop study. Our results suggest that a virtual tunnel simulation can manipulate time perception in virtual reality. In future work, we will explore these results for the development of virtual reality applications for therapeutic approaches, which could be particularly useful in treating disorders such as depression, autism, and schizophrenia that are associated with distortions in time perception. For example, the tunnel could be applied therapeutically to reset patients' time perception by exposing them to it under different conditions, such as increasing or decreasing perceived time.
KW - passage of time
KW - illusion of self-motion
KW - vection
KW - virtual tunnel
KW - therapeutic application
KW - virtual reality
KW - extended reality (XR)
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-301519
SN - 2673-4192
VL - 3
ER -
TY - JOUR
A1 - Krupitzer, Christian
A1 - Eberhardinger, Benedikt
A1 - Gerostathopoulos, Ilias
A1 - Raibulet, Claudia
T1 - Introduction to the special issue “Applications in Self-Aware Computing Systems and their Evaluation”
JF - Computers
N2 - The joint 1st Workshop on Evaluations and Measurements in Self-Aware Computing Systems (EMSAC 2019) and Workshop on Self-Aware Computing (SeAC) was held as part of the FAS* conference alliance in conjunction with the 16th IEEE International Conference on Autonomic Computing (ICAC) and the 13th IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO) in Umeå, Sweden, on 20 June 2019. The goal of this one-day workshop was to bring together researchers and practitioners from academia and industry to share their solutions, ideas, visions, and doubts regarding self-aware computing systems in general and the evaluation and measurement of such systems in particular. The workshop aimed to enable discussions, partnerships, and collaborations among the participants. This special issue follows the theme of the workshop. It contains extended versions of workshop presentations as well as additional contributions.
KW - self-aware computing systems
KW - quality evaluation
KW - measurements
KW - quality assurance
KW - autonomous
KW - self-adaptive
KW - self-managing systems
Y1 - 2020
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-203439
SN - 2073-431X
VL - 9
IS - 1
ER -
TY - JOUR
A1 - Krenzer, Adrian
A1 - Makowski, Kevin
A1 - Hekalo, Amar
A1 - Fitting, Daniel
A1 - Troya, Joel
A1 - Zoller, Wolfram G.
A1 - Hann, Alexander
A1 - Puppe, Frank
T1 - Fast machine learning annotation in the medical domain: a semi-automated video annotation tool for gastroenterologists
JF - BioMedical Engineering OnLine
N2 - Background: Machine learning, especially deep learning, is becoming more and more relevant in research and development in the medical domain. For all supervised deep learning applications, data is the most critical factor in securing successful implementation and sustaining the progress of the machine learning model. Gastroenterological data in particular, which often involve endoscopic videos, are cumbersome to annotate, and domain experts are needed to interpret and annotate the videos. To support these domain experts, we developed a framework in which, instead of annotating every frame in a video sequence, experts only perform key annotations at the beginning and the end of sequences with pathologies, e.g., visible polyps; non-expert annotators supported by machine learning then add the missing annotations for the frames in between. Methods: In our framework, an expert reviews the video and annotates a few video frames to verify the object's annotations for the non-expert. In a second step, the non-expert has visual confirmation of the given object and can annotate all following and preceding frames with AI assistance. After the expert has finished, relevant frames are selected and passed on to an AI model, which detects and marks the desired object on all following and preceding frames. The non-expert can then adjust and modify the AI predictions and export the results, which can in turn be used to train the AI model. Results: Using this framework, we were able to reduce the workload of domain experts on our data by a factor of 20 on average. This is primarily due to the structure of the framework, which is designed to minimize the workload of the domain expert. Pairing the framework with a state-of-the-art semi-automated AI model enhances the annotation speed further. A prospective study with 10 participants shows that semi-automated annotation using our tool doubles the annotation speed of non-expert annotators compared to a well-known state-of-the-art annotation tool. Conclusion: In summary, we introduce a framework for fast expert annotation for gastroenterologists, which reduces the workload of the domain expert considerably while maintaining very high annotation quality. The framework incorporates a semi-automated annotation system utilizing trained object detection models. The software and framework are open source.
KW - object detection
KW - machine learning
KW - deep learning
KW - annotation
KW - endoscopy
KW - gastroenterology
KW - automation
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-300231
VL - 21
IS - 1
ER -
TY - JOUR
A1 - Krenzer, Adrian
A1 - Heil, Stefan
A1 - Fitting, Daniel
A1 - Matti, Safa
A1 - Zoller, Wolfram G.
A1 - Hann, Alexander
A1 - Puppe, Frank
T1 - Automated classification of polyps using deep learning architectures and few-shot learning
JF - BMC Medical Imaging
N2 - Background: Colorectal cancer (CRC) is a leading cause of cancer-related deaths worldwide. The best method to prevent CRC is a colonoscopy. However, not all colon polyps carry the risk of becoming cancerous; therefore, polyps are classified using different classification systems, and further treatment and procedures are based on the classification of the polyp. Nevertheless, classification is not easy. We therefore suggest two novel automated classification systems assisting gastroenterologists in classifying polyps based on the NICE and Paris classifications. Methods: We build two classification systems: one classifies polyps based on their shape (Paris), the other based on their texture and surface patterns (NICE). A two-step process for the Paris classification is introduced: first, the polyp is detected and cropped in the image; second, it is classified based on the cropped area with a transformer network. For the NICE classification, we design a few-shot learning algorithm based on the Deep Metric Learning approach. The algorithm creates an embedding space for polyps, which allows classification from a few examples, to account for the scarcity of NICE-annotated images in our database. Results: For the Paris classification, we achieve an accuracy of 89.35%, surpassing all papers in the literature and establishing a new state-of-the-art and baseline accuracy for other publications on a public data set. For the NICE classification, we achieve a competitive accuracy of 81.13% and thereby demonstrate the viability of the few-shot learning paradigm for polyp classification in data-scarce environments. Additionally, we show different ablations of the algorithms. Finally, we further elaborate on the explainability of the system by showing heat maps of the neural activations. Conclusion: Overall, we introduce two polyp classification systems to assist gastroenterologists. We achieve state-of-the-art performance in the Paris classification and demonstrate the viability of the few-shot learning paradigm in the NICE classification, addressing the prevalent data scarcity issues faced in medical machine learning.
KW - machine learning
KW - deep learning
KW - endoscopy
KW - gastroenterology
KW - automation
KW - image classification
KW - transformer
KW - deep metric learning
KW - few-shot learning
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-357465
VL - 23
ER -