TY - THES A1 - Loh, Frank T1 - Monitoring the Quality of Streaming and Internet of Things Applications T1 - Monitoring der Qualität von Video-Streaming- und Internet-der-Dinge-Anwendungen N2 - The ongoing and evolving usage of networks presents two critical challenges for current and future networks that require attention: (1) the task of effectively managing the vast and continually increasing data traffic and (2) the need to address the substantial number of end devices resulting from the rapid adoption of the Internet of Things. Besides these challenges, there is a pressing need to reduce energy consumption, use resources more efficiently, and streamline processes without sacrificing service quality. We address these challenges comprehensively, tackling the monitoring and quality assessment of streaming applications, a leading contributor to total Internet traffic, and conducting an exhaustive analysis of network performance within a Long Range Wide Area Network (LoRaWAN), one of the most rapidly emerging LPWAN solutions. N2 - Die fortlaufende und sich weiterentwickelnde Nutzung von Netzwerken stellt zwei entscheidende Herausforderungen für aktuelle und zukünftige Netzwerke dar, die Aufmerksamkeit erfordern: (1) die Aufgabe, den enormen und kontinuierlich wachsenden Datenverkehr effektiv zu verwalten, und (2) die Notwendigkeit, die aus der raschen Einführung des Internets der Dinge resultierende große Anzahl von Endgeräten zu bewältigen. Neben diesen Herausforderungen besteht ein zwingender Bedarf an einer Reduzierung des Energieverbrauchs, einer effizienteren Ressourcennutzung und optimierten Prozessen ohne Einbußen bei der Servicequalität. Wir gehen diese Herausforderungen umfassend an und befassen uns mit der Überwachung und Qualitätsbewertung von Streaming-Anwendungen, die einen wesentlichen Beitrag zum gesamten Internetverkehr leisten, sowie mit der Durchführung einer generellen Analyse der Netzwerkleistung innerhalb eines Long Range Wide Area Network (LoRaWAN), einer der am schnellsten wachsenden LPWAN-Lösungen. T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 01/24 KW - Leistungsbewertung KW - Simulation KW - LoRaWAN KW - Quality of Experience KW - Streaming KW - Video Streaming Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-350969 SN - 1432-8801 N1 - Formal korrigierte, inhaltlich identische Version zur Dissertation unter https://doi.org/10.25972/OPUS-34783. ET - korrigierte Version ER - TY - THES A1 - Loh, Frank T1 - Monitoring the Quality of Streaming and Internet of Things Applications T1 - Monitoring der Qualität von Video-Streaming- und Internet-der-Dinge-Anwendungen N2 - The ongoing and evolving usage of networks presents two critical challenges for current and future networks that require attention: (1) the task of effectively managing the vast and continually increasing data traffic and (2) the need to address the substantial number of end devices resulting from the rapid adoption of the Internet of Things. Besides these challenges, there is a pressing need to reduce energy consumption, use resources more efficiently, and streamline processes without sacrificing service quality. 
We address these challenges comprehensively, tackling the monitoring and quality assessment of streaming applications, a leading contributor to total Internet traffic, and conducting an exhaustive analysis of network performance within a Long Range Wide Area Network (LoRaWAN), one of the most rapidly emerging LPWAN solutions. N2 - Die fortlaufende und sich weiterentwickelnde Nutzung von Netzwerken stellt zwei entscheidende Herausforderungen für aktuelle und zukünftige Netzwerke dar, die Aufmerksamkeit erfordern: (1) die Aufgabe, den enormen und kontinuierlich wachsenden Datenverkehr effektiv zu verwalten, und (2) die Notwendigkeit, die aus der raschen Einführung des Internets der Dinge resultierende große Anzahl von Endgeräten zu bewältigen. Neben diesen Herausforderungen besteht ein zwingender Bedarf an einer Reduzierung des Energieverbrauchs, einer effizienteren Ressourcennutzung und optimierten Prozessen ohne Einbußen bei der Servicequalität. Wir gehen diese Herausforderungen umfassend an und befassen uns mit der Überwachung und Qualitätsbewertung von Streaming-Anwendungen, die einen wesentlichen Beitrag zum gesamten Internetverkehr leisten, sowie mit der Durchführung einer generellen Analyse der Netzwerkleistung innerhalb eines Long Range Wide Area Network (LoRaWAN), einer der am schnellsten wachsenden LPWAN-Lösungen. T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 01/24 KW - Leistungsbewertung KW - Simulation KW - LoRaWAN KW - Quality of Experience KW - Streaming KW - Video Streaming KW - Energy Efficiency Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-347831 SN - 1432-8801 N1 - Eine formal korrigierte, inhaltlich identische Version dieser Dissertation ist unter https://doi.org/10.25972/OPUS-35096 erschienen. ER - TY - THES A1 - Drobczyk, Martin T1 - Ultra-Wideband Wireless Network for Enhanced Intra-Spacecraft Communication T1 - Drahtloses Ultra-Breitband-Netzwerk für verbesserte Intra-Spacecraft-Kommunikation N2 - Wireless communication networks already form an integral part of both the private and industrial sectors and are successfully replacing existing wired networks. They enable the development of novel applications and offer greater flexibility and efficiency. Although some efforts are already underway in the aerospace sector to deploy wireless communication networks on board spacecraft, none of these projects have yet succeeded in replacing the hard-wired state-of-the-art architecture for intra-spacecraft communication. The advantages are evident: reducing the wiring harness saves time, mass, and costs, and makes the whole integration process more flexible. It also allows for easier scaling when interconnecting different systems. This dissertation deals with the design and implementation of a wireless network architecture to enhance intra-spacecraft communications by breaking with the state-of-the-art standards that have existed in the space industry for decades. The potential and benefits of this novel wireless network architecture are evaluated, and an innovative design using ultra-wideband technology is presented. It is combined with a Medium Access Control (MAC) layer tailored to low-latency, deterministic networks, supporting even mission-critical applications. As demonstrated by the Wireless Compose experiment on the International Space Station (ISS), this technology is not limited to communications but also enables novel positioning applications. 
To address the technological challenges, extensive studies have been carried out on electromagnetic compatibility, space radiation, and data robustness. The architecture was evaluated from various perspectives and successfully demonstrated in space. Overall, this research highlights how a wireless network can improve and potentially replace existing state-of-the-art communication systems on board spacecraft in future missions. Moreover, it will help to adapt and ultimately accelerate the implementation of wireless networks in space systems. N2 - Drahtlose Kommunikationsnetzwerke sind sowohl im privaten als auch im industriellen Bereich bereits ein fester Bestandteil und ersetzen erfolgreich bestehende drahtgebundene Netzwerke. Sie ermöglichen die Entwicklung neuer Anwendungen und bieten mehr Flexibilität und Effizienz. Obwohl in der Raumfahrt bereits einige Anstrengungen unternommen wurden, um drahtlose Kommunikationsnetzwerke an Bord von Raumfahrzeugen einzusetzen, ist es bisher noch keinem dieser Projekte gelungen, die moderne drahtgebundene Architektur für die Kommunikation innerhalb von Raumfahrzeugen zu ersetzen. Die Vorteile liegen auf der Hand: Die Reduzierung des Kabelbaums spart Zeit, Masse und Kosten und macht den gesamten Integrationsprozess flexibler. Außerdem ist eine einfachere Skalierung möglich, wenn verschiedene Systeme miteinander verbunden werden. Diese Dissertation befasst sich mit dem Entwurf und der Implementierung einer drahtlosen Netzwerkarchitektur zur Verbesserung der Kommunikation innerhalb von Raumfahrzeugen, indem mit den seit Jahrzehnten in der Raumfahrtindustrie bestehenden Standards gebrochen wird. Das Potential und die Vorteile dieser neuartigen drahtlosen Netzwerkarchitektur werden bewertet und ein innovatives Design mit Ultrabreitbandtechnologie wird vorgestellt. Es wird mit einer Medium Access Control (MAC) Schicht kombiniert, die für Netzwerke mit niedriger Latenz und Determinismus ausgelegt ist und sogar missionskritische Anwendungen unterstützt. Wie das Wireless Compose-Experiment auf der Internationalen Raumstation ISS gezeigt hat, ist diese Technologie nicht auf Kommunikation beschränkt, sondern ermöglicht auch neuartige Positionierungsanwendungen. Um die technologischen Herausforderungen zu meistern, wurden umfangreiche Studien zur elektromagnetischen Verträglichkeit, Weltraumstrahlung und Robustheit der Daten durchgeführt. Die Architektur wurde aus verschiedenen Blickwinkeln bewertet und erfolgreich im Weltraum demonstriert. Insgesamt zeigt diese Forschung, wie ein drahtloses Netzwerk die bestehenden hochmodernen Kommunikationssysteme an Bord von Raumfahrzeugen bei zukünftigen Missionen verbessern und möglicherweise ersetzen kann. Darüber hinaus wird sie dazu beitragen, die Implementierung von drahtlosen Netzwerken in Raumfahrtsystemen an die jeweiligen Gegebenheiten anzupassen und damit letztlich die Integration dieser Systeme zu beschleunigen. 
KW - Raumfahrttechnik KW - ISS KW - Funktechnik KW - Ultraweitband KW - Avionik KW - Intra-Spacecraft Communication KW - Localization KW - Wireless Network KW - In-Orbit demonstration Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-359564 ER - TY - THES A1 - Kobs, Konstantin T1 - Think outside the Black Box: Model-Agnostic Deep Learning with Domain Knowledge T1 - Think outside the Black Box: Modellagnostisches Deep Learning mit Domänenwissen N2 - Deep Learning (DL) models are trained on a downstream task by feeding (potentially preprocessed) input data through a trainable Neural Network (NN) and updating its parameters to minimize the loss function between the predicted and the desired output. While this general framework has mainly remained unchanged over the years, the architectures of the trainable models have greatly evolved. Even though it is undoubtedly important to choose the right architecture, we argue that it is also beneficial to develop methods that address other components of the training process. We hypothesize that utilizing domain knowledge can be helpful to improve DL models in terms of performance and/or efficiency. Such model-agnostic methods can be applied to any existing or future architecture. Furthermore, the black box nature of DL models motivates the development of techniques to understand their inner workings. Considering the rapid advancement of DL architectures, it is again crucial to develop model-agnostic methods. In this thesis, we explore six principles that incorporate domain knowledge to understand or improve models. They are applied either on the input or output side of the trainable model. Each principle is applied to at least two DL tasks, leading to task-specific implementations. To understand DL models, we propose to use Generated Input Data coming from a controllable generation process that requires knowledge about the data properties. This way, we can understand the model’s behavior by analyzing how it changes when one specific high-level input feature changes in the generated data. On the output side, Gradient-Based Attribution methods create a gradient at the end of the NN and then propagate it back to the input, indicating which low-level input features have a large influence on the model’s prediction. The resulting input features can be interpreted by humans using domain knowledge. To improve the trainable model in terms of downstream performance, data and compute efficiency, or robustness to unwanted features, we explore principles that each address one of the training components besides the trainable model. Input Masking and Augmentation directly modifies the training input data, integrating knowledge about the data and its impact on the model’s output. We also explore the use of Feature Extraction using Pretrained Multimodal Models, which can be seen as a beneficial preprocessing step to extract useful features. When no training data is available for the downstream task, using such features and domain knowledge expressed in other modalities can result in a Zero-Shot Learning (ZSL) setting, completely eliminating the trainable model. The Weak Label Generation principle produces new desired outputs using knowledge about the labels, providing either a good pretraining dataset or even an exclusive training dataset to solve the downstream task. Finally, improving and choosing the right Loss Function is another principle we explore in this thesis. 
Here, we enrich existing loss functions with knowledge about label interactions or utilize and combine multiple task-specific loss functions in a multitask setting. We apply the principles to classification, regression, and representation tasks as well as to image and text modalities. We propose, apply, and evaluate existing and novel methods to understand and improve the model. Overall, this thesis introduces and evaluates methods that complement the development and choice of DL model architectures. N2 - Deep-Learning-Modelle (DL-Modelle) werden trainiert, indem potenziell vorverarbeitete Eingangsdaten durch ein trainierbares Neuronales Netz (NN) geleitet und dessen Parameter aktualisiert werden, um die Verlustfunktion zwischen der Vorhersage und der gewünschten Ausgabe zu minimieren. Während sich dieser allgemeine Ablauf kaum geändert hat, haben sich die verwendeten NN-Architekturen erheblich weiterentwickelt. Auch wenn die Wahl der Architektur für die Aufgabe zweifellos wichtig ist, schlagen wir in dieser Arbeit vor, Methoden für andere Komponenten des Trainingsprozesses zu entwickeln. Wir vermuten, dass die Verwendung von Domänenwissen hilfreich bei der Verbesserung von DL-Modellen bezüglich ihrer Leistung und/oder Effizienz sein kann. Solche modellagnostischen Methoden sind dann bei jeder bestehenden oder zukünftigen NN-Architektur anwendbar. Die Black-Box-Natur von DL-Modellen motiviert zudem die Entwicklung von Methoden, die zum Verständnis der Funktionsweise dieser Modelle beitragen. Angesichts der schnellen Architektur-Entwicklung ist es wichtig, modellagnostische Methoden zu entwickeln. In dieser Arbeit untersuchen wir sechs Prinzipien, die Domänenwissen verwenden, um Modelle zu verstehen oder zu verbessern. Sie werden auf Trainingskomponenten im Eingang oder Ausgang des Modells angewendet. Jedes Prinzip wird dann auf mindestens zwei DL-Aufgaben angewandt, was zu aufgabenspezifischen Implementierungen führt. Um DL-Modelle zu verstehen, verwenden wir kontrolliert generierte Eingangsdaten, was Wissen über die Dateneigenschaften benötigt. So können wir das Verhalten des Modells verstehen, indem wir die Ausgabeänderung bei der Änderung von abstrahierten Eingabefeatures beobachten. Wir untersuchen zudem gradientenbasierte Attribution-Methoden, die am Ausgang des NN einen Gradienten anlegen und zur Eingabe zurückführen. Eingabefeatures mit großem Einfluss auf die Modellvorhersage können so identifiziert und von Menschen mit Domänenwissen interpretiert werden. Um Modelle zu verbessern (in Bezug auf die Ergebnisgüte, Daten- und Recheneffizienz oder Robustheit gegenüber ungewollten Eingaben), untersuchen wir Prinzipien, die jeweils eine Trainingskomponente neben dem trainierbaren Modell betreffen. Das Maskieren und Augmentieren von Eingangsdaten modifiziert direkt die Trainingsdaten und integriert dabei Wissen über ihren Einfluss auf die Modellausgabe. Die Verwendung von vortrainierten multimodalen Modellen zur Featureextraktion kann als ein Vorverarbeitungsschritt angesehen werden. Bei fehlenden Trainingsdaten können die Features und Domänenwissen in anderen Modalitäten als Zero-Shot Setting das trainierbare Modell gänzlich eliminieren. Das Weak-Label-Generierungs-Prinzip erzeugt neue gewünschte Ausgaben anhand von Wissen über die Labels, was zu einem Pretrainings- oder exklusiven Trainingsdatensatz führt. Schließlich ist die Verbesserung und Auswahl der Verlustfunktion ein weiteres untersuchtes Prinzip. 
Hier reichern wir bestehende Verlustfunktionen mit Wissen über Label-Interaktionen an oder kombinieren mehrere aufgabenspezifische Verlustfunktionen als Multi-Task-Ansatz. Wir wenden die Prinzipien auf Klassifikations-, Regressions- und Repräsentationsaufgaben sowie Bild- und Textmodalitäten an. Wir stellen bestehende und neue Methoden vor, wenden sie an und evaluieren sie für das Verstehen und Verbessern von DL-Modellen, was die Entwicklung und Auswahl von DL-Modellarchitekturen ergänzt. KW - Deep learning KW - Neuronales Netz KW - Maschinelles Lernen KW - Machine Learning KW - Model-Agnostic KW - Domain Knowledge Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-349689 ER - TY - THES A1 - Zink, Johannes T1 - Algorithms for Drawing Graphs and Polylines with Straight-Line Segments T1 - Algorithmen zum Zeichnen von Graphen und Polygonzügen mittels Strecken N2 - Graphs provide a key means to model relationships between entities. They consist of vertices representing the entities, and edges representing relationships between pairs of entities. To help people grasp the structure of a graph, it is almost inevitable to visualize it. We call such a visualization a graph drawing. Moreover, we have a straight-line graph drawing if each vertex is represented as a point (or a small geometric object, e.g., a rectangle) and each edge is represented as a line segment between its two vertices. A polyline is a very simple straight-line graph drawing, where the vertices form a sequence according to which they are connected by edges. An example of a polyline in practice is a GPS trajectory. The underlying road network, in turn, can be modeled as a graph. This book addresses problems that arise when working with straight-line graph drawings and polylines. In particular, we study algorithms for recognizing certain graphs representable with line segments, for generating straight-line graph drawings, and for abstracting polylines. In the first part, we first examine how, and in what time, we can decide whether a given graph is a stick graph, that is, whether its vertices can be represented as vertical and horizontal line segments on a diagonal line, which intersect if and only if there is an edge between them. We then consider the visual complexity of graphs. Specifically, we investigate, for certain classes of graphs, how many line segments are necessary for any straight-line graph drawing, and whether three (or more) different slopes of the line segments are sufficient to draw all edges. Last, we study the question of how to assign (ordered) colors to the vertices of a graph with both directed and undirected edges such that no neighboring vertices get the same color and colors are ascending along directed edges. Here, the special property of the considered graphs is that the vertices can be represented as intervals that overlap if and only if there is an edge between them. The latter problem is motivated by an application in automated drawing of cable plans with vertical and horizontal line segments, which we cover in the second part. We describe an algorithm that takes the abstract description of a cable plan as input and generates a drawing that takes into account the special properties of these cable plans, like plugs and groups of wires. We then experimentally evaluate the quality of the resulting drawings. In the third part, we study the problem of abstracting (or simplifying) a single polyline and a bundle of polylines. 
In this problem, the objective is to remove as many vertices as possible from the given polyline(s) while keeping each resulting polyline sufficiently similar to its original course (according to a given similarity measure). N2 - Graphen stellen ein wichtiges Mittel dar, um Beziehungen zwischen Objekten zu modellieren. Sie bestehen aus Knoten, die die Objekte repräsentieren, und Kanten, die Beziehungen zwischen Paaren von Objekten abbilden. Um Menschen die Struktur eines Graphen zu vermitteln, ist es nahezu unumgänglich, den Graphen zu visualisieren. Eine solche Visualisierung nennen wir Graphzeichnung. Eine Graphzeichnung ist geradlinig, wenn jeder Knoten als ein Punkt (oder ein kleines geometrisches Objekt, z. B. ein Rechteck) und jede Kante als eine Strecke zwischen ihren beiden Knoten dargestellt ist. Eine sehr einfache geradlinige Graphzeichnung, bei der alle Knoten eine Folge bilden, entlang der die Knoten durch Kanten verbunden sind, nennen wir Polylinie. Ein Beispiel für eine Polylinie in der Praxis ist eine GPS-Trajektorie. Das zugrundeliegende Straßennetzwerk wiederum kann als Graph repräsentiert werden. In diesem Buch befassen wir uns mit Fragen, die sich bei der Arbeit mit geradlinigen Graphzeichnungen und Polylinien stellen. Insbesondere untersuchen wir Algorithmen zum Erkennen von bestimmten mit Strecken darstellbaren Graphen, zum Generieren von geradlinigen Graphzeichnungen und zum Abstrahieren von Polylinien. Im ersten Teil schauen wir uns zunächst an, wie und in welcher Zeit wir entscheiden können, ob ein gegebener Graph ein Stickgraph ist, das heißt, ob sich seine Knoten als vertikale und horizontale Strecken auf einer diagonalen Geraden darstellen lassen, die sich genau dann schneiden, wenn zwischen ihnen eine Kante liegt. Anschließend betrachten wir die visuelle Komplexität von Graphen. Konkret untersuchen wir für bestimmte Graphklassen, wie viele Strecken für jede geradlinige Graphzeichnung notwendig sind, und, ob drei (oder mehr) verschiedene Streckensteigungen ausreichend sind, um alle Kanten zu zeichnen. Zuletzt beschäftigen wir uns mit der Frage, wie wir den Knoten eines Graphen mit gerichteten und ungerichteten Kanten (geordnete) Farben zuweisen können, sodass keine benachbarten Knoten dieselbe Farbe haben und Farben entlang gerichteter Kanten aufsteigend sind. Hierbei ist die spezielle Eigenschaft der betrachteten Graphen, dass sich die Knoten als Intervalle darstellen lassen, die sich genau dann überschneiden, wenn eine Kante zwischen ihnen verläuft. Das letztgenannte Problem ist motiviert von einer Anwendung beim automatisierten Zeichnen von Kabelplänen mit vertikalen und horizontalen Streckenverläufen, womit wir uns im zweiten Teil befassen. Wir beschreiben einen Algorithmus, welcher die abstrakte Beschreibung eines Kabelplans entgegennimmt und daraus eine Zeichnung generiert, welche die speziellen Eigenschaften dieser Kabelpläne, wie Stecker und Gruppen von zusammengehörigen Drähten, berücksichtigt. Anschließend evaluieren wir die Qualität der so erzeugten Zeichnungen experimentell. Im dritten Teil befassen wir uns mit dem Abstrahieren bzw. Vereinfachen einer einzelnen Polylinie und eines Bündels von Polylinien. Bei diesem Problem sollen aus einer oder mehreren gegebenen Polylinie(n) so viele Knoten wie möglich entfernt werden, wobei jede resultierende Polylinie ihrem ursprünglichen Verlauf (nach einem gegebenen Maß) hinreichend ähnlich bleiben muss. 
KW - Graphenzeichnen KW - Algorithmische Geometrie KW - Algorithmus KW - Algorithmik KW - Polygonzüge KW - graph drawing KW - complexity KW - algorithms KW - straight-line segments KW - polylines KW - graphs KW - Strecken KW - Graphen Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-354756 ER - TY - RPRT A1 - Vomhoff, Viktoria A1 - Geissler, Stefan A1 - Gebert, Steffen A1 - Hossfeld, Tobias T1 - Towards Understanding the Global IPX Network from an MVNO Perspective T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - In this paper, we work towards understanding the global IPX network from the perspective of a Mobile Virtual Network Operator (MVNO). To this end, we provide a brief description of the global architecture of mobile carriers. We provide initial results with respect to mapping the vast and complex interconnection network enabling global roaming from the point of view of a single MVNO. Finally, we provide preliminary results regarding the quality of service observed under global roaming conditions. KW - global IPX network Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322121 ER - TY - RPRT A1 - Navade, Piyush A1 - Maile, Lisa A1 - German, Reinhard T1 - Multiple DCLC Routing Algorithms for Ultra-Reliable and Time-Sensitive Applications T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - This paper discusses the problem of finding multiple shortest disjoint paths in modern communication networks, which is essential for ultra-reliable and time-sensitive applications. Dijkstra’s algorithm has been a popular solution for the shortest path problem, but repeatedly applying it to find multiple paths does not scale. The Multiple Disjoint Path Algorithm (MDPAlg), published in 2021, proposes the use of a single full graph to construct multiple disjoint paths. This paper proposes modifications to the algorithm to include a delay constraint, which is important in time-sensitive applications. Different delay-constrained least-cost (DCLC) routing algorithms are compared in a comprehensive manner to evaluate the benefits of the adapted MDPAlg algorithm. Fault tolerance, and thereby reliability, is ensured by generating multiple link-disjoint paths from source to destination. KW - Dijkstra’s algorithm KW - shortest path routing KW - disjoint multi-paths KW - delay constrained KW - least cost Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322177 ER - TY - RPRT A1 - Simon, Manuel A1 - Gallenmüller, Sebastian A1 - Carle, Georg T1 - Never Miss Twice - Add-On-Miss Table Updates in Software Data Planes T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - State management at line rate is crucial for critical applications in next-generation networks. P4 is a language used in software-defined networking to program the data plane. The data plane can profit in many circumstances when it is allowed to manage its state without any detour over a controller. This work builds on a previous study by investigating the potential and performance of add-on-miss state insertions by the data plane. The state-keeping capabilities of P4 are limited regarding the amount of data and the update frequency. 
We follow the tentative specification of the upcoming Portable NIC Architecture and implement these changes in the software P4 target T4P4S. We show that insertions are possible with only a slight overhead compared to lookups and evaluate the influence of the rate of insertions on their latency. KW - SDN KW - state management KW - P4 KW - Add-on-Miss Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322071 ER - TY - RPRT A1 - Brisch, Fabian A1 - Kassler, Andreas A1 - Vestin, Jonathan A1 - Pieska, Marcus A1 - Amend, Markus T1 - Accelerating Transport Layer Multipath Packet Scheduling for 5G-ATSSS T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - Utilizing multiple access networks such as 5G, 4G, and Wi-Fi simultaneously can lead to increased robustness, resiliency, and capacity for mobile users. However, transparently implementing packet distribution over multiple paths within the core of the network faces multiple challenges, including scalability to a large number of customers, low latency, and high-capacity packet processing requirements. In this paper, we offload congestion-aware multipath packet scheduling to a smartNIC. However, such hardware acceleration faces multiple challenges due to programming language and platform limitations. We implement multipath schedulers of different complexity in P4 in order to cope with dynamically changing path capacities. Using testbed measurements, we show that our CMon scheduler, which monitors path congestion in the data plane and dynamically adjusts scheduling weights for the different paths based on path state information, can process more than 3.5 Mpps at a latency of 25 μs. KW - multipath packet scheduling KW - P4 KW - MP-DCCP KW - 5G KW - ATSSS Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322052 ER - TY - RPRT A1 - Hasslinger, Gerhard A1 - Ntougias, Konstantinos A1 - Hasslinger, Frank A1 - Hohlfeld, Oliver T1 - Performance Analysis of Basic Web Caching Strategies (LFU, LRU, FIFO, ...) with Time-To-Live Data Validation T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - Web caches often use a Time-to-live (TTL) limit to validate data consistency with web servers. We study the impact of TTL constraints on the hit ratio of basic strategies in caches of fixed size. We derive analytical results and confirm their accuracy in comparison to simulations. We propose a score-based caching method with awareness of the current TTL per data item for improving the hit ratio close to the upper bound. KW - LRU KW - LFU KW - FIFO caching strategies KW - hit ratio analysis and simulation KW - TTL validation of data consistency Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322048 ER - TY - RPRT A1 - Funda, Christoph A1 - Marín García, Pablo A1 - German, Reinhard A1 - Hielscher, Kai-Steffen T1 - Online Algorithm for Arrival & Service Curve Estimation T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - This paper presents a novel concept to extend state-of-the-art buffer monitoring with additional measures to estimate service curves. 
The online algorithm for service-curve estimation replaces state-of-the-art timestamp logging, as we expect it to overcome its main disadvantages: generating a huge amount of data and using a lot of CPU resources to store the data to a file during operation. We prove the accuracy of the online algorithm offline with timestamp data and compare the derived bounds to the measured delay and backlog. We also provide a proof-of-concept of the online algorithm, implement it in LabVIEW, and compare its performance to timestamp logging in terms of CPU load and log-file size. However, the implementation is still work in progress. KW - hardware-in-the-loop streaming system KW - network calculus KW - service-curve estimation KW - performance monitoring Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322112 ER - TY - RPRT A1 - Mazigh, Sadok Mehdi A1 - Beausencourt, Marcel A1 - Bode, Max Julius A1 - Scheffler, Thomas T1 - Using P4-INT on Tofino for Measuring Device Performance Characteristics in a Network Lab T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - This paper presents a prototypical implementation of the In-band Network Telemetry (INT) specification in P4 and demonstrates a use case where a Tofino switch is used to measure device and network performance in a lab setting. This work is based on research activities in the area of P4 data plane programming conducted at the network lab of HTW Berlin. KW - P4-INT Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322084 ER - TY - RPRT A1 - Nguyen, Kien A1 - Loh, Frank A1 - Hoßfeld, Tobias T1 - Challenges of Serverless Deployment in Edge-MEC-Cloud T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - Emerging serverless computing may meet the edge cloud in a beneficial manner, as the two offer flexibility and dynamicity in optimizing finite hardware resources. However, the lack of a proper study of a joint platform leaves a gap in the literature about the consumption and performance of such an integration. To this end, this paper identifies the key questions and proposes a methodology to answer them. KW - Edge-MEC-Cloud Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322025 ER - TY - RPRT A1 - Raffeck, Simon A1 - Geißler, Stefan A1 - Hoßfeld, Tobias T1 - Towards Understanding the Signaling Traffic in 5G Core Networks T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - The Fifth Generation (5G) communication technology, its infrastructure and architecture, though already deployed in campus and small-scale networks, is still undergoing continuous changes and research. Especially in light of future large-scale deployments and industrial use cases, a detailed analysis of the performance and utilization with regard to latency and service time constraints is crucial. To this end, a fine-granular investigation of the Network Function (NF) based core system and the duration of all the tasks performed by these services is necessary. This work presents the first steps towards analyzing the signaling traffic in 5G core networks, and introduces a tool to automatically extract sequence diagrams and service times for NF tasks from traffic traces. 
KW - signaling traffic KW - 5G core network Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322106 ER - TY - RPRT A1 - Großmann, Marcel A1 - Homeyer, Tobias T1 - Emulation of Multipath Transmissions in P4 Networks with Kathará T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - Packets sent over a network can either get lost or reach their destination. Protocols like TCP try to solve this problem by resending the lost packets. However, retransmissions consume a lot of time and are cumbersome for the transmission of critical data. Multipath solutions are quite common to address this reliability issue and are available on almost every layer of the ISO/OSI model. We propose a solution based on a P4 network to duplicate packets in order to send them to their destination via multiple routes. The last network hop ensures that only a single copy of the traffic is further forwarded to its destination by adopting a concept similar to Bloom filters. In addition, if fast delivery is requested, we provide a P4 prototype that randomly forwards packets over different transmission paths. For reproducibility, we implement our approach in a container-based network emulation system called Kathará. KW - P4 KW - multipath KW - emulation KW - Kathará Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322095 ER - TY - RPRT A1 - Grigorjew, Alexej A1 - Schumann, Lukas Kilian A1 - Diederich, Philip A1 - Hoßfeld, Tobias A1 - Kellerer, Wolfgang T1 - Understanding the Performance of Different Packet Reception and Timestamping Methods in Linux T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - This document briefly presents some well-known packet reception techniques for network packets in Linux systems. Further, it compares their performance when measuring packet timestamps with respect to throughput and accuracy. Both software and hardware timestamps are compared, and various parameters are examined, including frame size, link speed, network interface card, and CPU load. The results indicate that hardware timestamping offers significantly better accuracy with no downsides, and that packet reception techniques that avoid system calls offer superior measurement throughput. KW - packet reception method KW - timestamping method KW - Linux Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322064 ER - TY - THES A1 - Kanbar, Farah T1 - Asymptotic and Stationary Preserving Schemes for Kinetic and Hyperbolic Partial Differential Equations T1 - Asymptotische und Stationäre Erhaltungsverfahren für Kinetische und Hyperbolische Partielle Differentialgleichungen N2 - In this thesis, we are interested in numerically preserving stationary solutions of balance laws. We start by developing finite volume well-balanced schemes for the system of Euler equations and the system of MHD equations with gravitational source term. Since fluid models and kinetic models are related, this leads us to investigate asymptotic preserving (AP) schemes for kinetic equations and their ability to preserve stationary solutions. Kinetic models typically have a stiff term; thus, AP schemes are needed to capture good solutions of the model. For such kinetic models, equilibrium solutions are reached only after a long time. Thus we need a new technique to numerically preserve stationary solutions for AP schemes. 
We find a criterion for stationary preserving (SP) schemes for kinetic equations which states that AP schemes under a particular discretization are also SP. In an attempt to mimic our result for kinetic equations in the context of fluid models, for the isentropic Euler equations we developed an AP scheme in the limit of the Mach number going to zero. Our AP scheme is proven to have an SP property under the condition that the pressure is a function of the density and the latter is obtained as a solution of an elliptic equation. The properties of the schemes we developed and their criteria are validated numerically by various test cases from the literature. N2 - In dieser Arbeit interessieren wir uns für die numerische Erhaltung stationärer Lösungen von Erhaltungsgleichungen. Wir beginnen mit der Entwicklung von well-balanced Finite-Volumen Verfahren für das System der Euler-Gleichungen und das System der MHD-Gleichungen mit Gravitationsquellterm. Da Strömungsmodelle und kinetische Modelle miteinander verwandt sind, untersuchen wir asymptotisch erhaltende (AP) Verfahren für kinetische Gleichungen und ihre Fähigkeit, stationäre Lösungen zu erhalten. Kinetische Modelle haben typischerweise einen steifen Term, so dass AP Verfahren erforderlich sind, um gute Lösungen des Modells zu erhalten. Bei solchen kinetischen Modellen werden Gleichgewichtslösungen erst nach langer Zeit erreicht. Daher benötigen wir eine neue Technik, um stationäre Lösungen für AP Verfahren numerisch zu erhalten. Wir finden ein Kriterium für stationär-erhaltende (SP) Verfahren für kinetische Gleichungen, das besagt, dass AP Verfahren unter einer bestimmten Diskretisierung auch SP sind. In dem Versuch, unser Ergebnis für kinetische Gleichungen im Kontext von Strömungsmodellen nachzuahmen, haben wir für die isentropen Euler-Gleichungen ein AP Verfahren für den Grenzwert der Mach-Zahl gegen Null entwickelt. Unser AP Verfahren hat nachweislich eine SP Eigenschaft unter der Bedingung, dass der Druck eine Funktion der Dichte ist und letztere als Lösung einer elliptischen Gleichung erhalten wird. Die Eigenschaften der von uns entwickelten Verfahren und ihre Kriterien werden anhand verschiedener Testfälle aus der Literatur numerisch validiert. N2 - In this thesis, we are interested in numerically preserving stationary solutions of balance laws. We start by developing finite volume well-balanced schemes for the system of Euler equations and the system of Magnetohydrodynamics (MHD) equations with gravitational source term. Since fluid models and kinetic models are related, this leads us to investigate Asymptotic Preserving (AP) schemes for kinetic equations and their ability to preserve stationary solutions. In an attempt to mimic our result for kinetic equations in the context of fluid models, for the isentropic Euler equations we developed an AP scheme in the limit of the Mach number going to zero. The properties of the schemes we developed and their criteria are validated numerically by various test cases from the literature. 
KW - Angewandte Mathematik KW - Hyperbolische Differentialgleichung KW - Kinetische Gleichung KW - Euler-Lagrange-Gleichung KW - Magnetohydrodynamische Gleichung KW - Euler equations KW - isentropic Euler equations KW - MHD equations KW - kinetic equations KW - well-balanced scheme KW - asymptotic preserving KW - stationary preserving KW - hyperbolic partial differential equations Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-301903 SN - 978-3-95826-210-2 SN - 978-3-95826-211-9 N1 - Parallel erschienen als Druckausgabe in Würzburg University Press, ISBN 978-3-95826-210-2, 29,80 EUR. PB - Würzburg University Press CY - Würzburg ET - 1. Auflage ER - TY - THES A1 - Steininger, Michael T1 - Deep Learning for Geospatial Environmental Regression T1 - Deep Learning für Regressionsmodelle mit georäumlichen Umweltdaten N2 - Environmental issues have emerged especially since humans began burning fossil fuels, which led to air pollution and climate change that harm the environment. These issues’ substantial consequences evoked strong efforts towards assessing the state of our environment. Various environmental machine learning (ML) tasks aid these efforts. These tasks concern environmental data but are otherwise common ML tasks, i.e., datasets are split (training, validation, test), hyperparameters are optimized on validation data, and test set metrics measure a model’s generalizability. This work focuses on the following environmental ML tasks: Regarding air pollution, land use regression (LUR) estimates air pollutant concentrations at locations where no measurements are available based on measured locations and each location’s land use (e.g., industry, streets). For LUR, this work uses data from London (modeled) and Zurich (measured). Concerning climate change, a common ML task is model output statistics (MOS), where a climate model’s output for a study area is altered to better fit Earth observations and provide more accurate climate data. This work uses the regional climate model (RCM) REMO and Earth observations from the E-OBS dataset for MOS. Another task regarding climate is grain size distribution interpolation, where soil properties at locations without measurements are estimated based on the few measured locations. This can provide climate models with soil information that is important for hydrology. For this task, data from Lower Franconia is used. Such environmental ML tasks commonly have a number of properties: (i) geospatiality, i.e., their data refers to locations relative to the Earth’s surface. (ii) The environmental variables to estimate or predict are usually continuous. (iii) Data can be imbalanced due to relatively rare extreme events (e.g., extreme precipitation). (iv) Multiple related potential target variables can be available per location, since measurement devices often contain different sensors. (v) Labels are spatially often only sparsely available since conducting measurements at all locations of interest is usually infeasible. These properties present challenges but also opportunities when designing ML methods for such tasks. In the past, environmental ML tasks have been tackled with conventional ML methods, such as linear regression or random forests (RFs). However, the field of ML has made tremendous leaps beyond these classic models through deep learning (DL). In DL, models use multiple layers of neurons, producing increasingly higher-level feature representations with growing layer depth. 
DL has made previously infeasible ML tasks feasible, improved the performance for many tasks in comparison to existing ML models significantly, and eliminated the need for manual feature engineering in some domains due to its ability to learn features from raw data. To harness these advantages for environmental domains, it is promising to develop novel DL methods for environmental ML tasks. This thesis presents methods for dealing with special challenges and exploiting opportunities inherent to environmental ML tasks in conjunction with DL. To this end, the proposed methods explore the following techniques: (i) Convolutions as in convolutional neural networks (CNNs) to exploit reoccurring spatial patterns in geospatial data. (ii) Posing the problems as regression tasks to estimate the continuous variables. (iii) Density-based weighting to improve estimation performance for rare and extreme events. (iv) Multi-task learning to make use of multiple related target variables. (v) Semi-supervised learning to cope with label sparsity. Using these techniques, this thesis considers four research questions: (i) Can air pollution be estimated without manual feature engineering? This is answered positively by the introduction of the CNN-based LUR model MapLUR as well as the off-the-shelf LUR solution OpenLUR. (ii) Can colocated pollution data improve spatial air pollution models? Multi-task learning for LUR is developed for this, showing potential for improvements with colocated data. (iii) Can DL models improve the quality of climate model outputs? The proposed DL climate MOS architecture ConvMOS demonstrates this. Additionally, semi-supervised training of multilayer perceptrons (MLPs) for grain size distribution interpolation is presented, which can provide improved input data. (iv) Can DL models be taught to better estimate climate extremes? To this end, density-based weighting for imbalanced regression (DenseLoss) is proposed and applied to the DL architecture ConvMOS, improving climate extremes estimation. These methods show how DL techniques in particular can be developed for environmental ML tasks with their special characteristics in mind. This allows for better models than previously possible with conventional ML, leading to more accurate assessment and better understanding of the state of our environment. N2 - Umweltprobleme sind vor allem seit der Verbrennung fossiler Brennstoffe durch den Menschen entstanden. Dies hat zu Luftverschmutzung und Klimawandel geführt, was die Umwelt schädigt. Die schwerwiegenden Folgen dieser Probleme haben starke Bestrebungen ausgelöst, den Zustand unserer Umwelt zu untersuchen. Verschiedene Ansätze des maschinellen Lernens (ML) im Umweltbereich unterstützen diese Bestrebungen. Bei diesen Aufgaben handelt es sich um gewöhnliche ML-Aufgaben, z. B. werden die Datensätze aufgeteilt (Training, Validierung, Test), Hyperparameter werden auf den Validierungsdaten optimiert, und die Metriken auf den Testdaten messen die Generalisierungsfähigkeit eines Modells, aber sie befassen sich mit Umweltdaten. Diese Arbeit konzentriert sich auf die folgenden Umwelt-ML-Aufgaben: In Bezug auf Luftverschmutzung schätzt Land Use Regression (LUR) die Luftschadstoffkonzentration an Orten, an denen keine Messungen verfügbar sind, auf Basis von gemessenen Orten und der Landnutzung (z. B. Industrie, Straßen) der Orte. Bei LUR werden in dieser Arbeit Daten aus London (modelliert) und Zürich (gemessen) verwendet. 
Im Zusammenhang mit dem Klimawandel ist eine häufige ML-Aufgabe Model Output Statistics (MOS), bei der die Ausgaben eines Klimamodells so angepasst werden, dass sie mit Erdbeobachtungen besser übereinstimmen. Dadurch werden genauere Klimadaten erzeugt. Diese Arbeit verwendet das regionale Klimamodell REMO und Erdbeobachtungen aus dem E-OBS-Datensatz für MOS. Eine weitere Aufgabe im Zusammenhang mit dem Klima ist die Interpolation von Korngrößenverteilungen. Hierbei werden Bodeneigenschaften an Orten ohne Messungen auf Basis von wenigen gemessenen Orten geschätzt, um Klimamodelle mit Bodeninformationen zu versorgen, die für die Hydrologie wichtig sind. Für diese Aufgabe werden in dieser Arbeit Bodenmessungen aus Unterfranken herangezogen. Solche Umwelt-ML-Aufgaben haben oft eine Reihe von Eigenschaften: (i) Georäumlichkeit, d. h. ihre Daten beziehen sich auf Standorte relativ zur Erdoberfläche. (ii) Die zu schätzenden oder vorherzusagenden Umweltvariablen sind normalerweise kontinuierlich. (iii) Daten können unbalanciert sein, was auf relativ seltene Extremereignisse (z. B. extreme Niederschläge) zurückzuführen ist. (iv) Pro Standort können mehrere verwandte potenzielle Zielvariablen verfügbar sein, da Messgeräte oft verschiedene Sensoren enthalten. (v) Zielwerte sind räumlich oft nur spärlich vorhanden, da die Durchführung von Messungen an allen gewünschten Orten in der Regel nicht möglich ist. Diese Eigenschaften stellen eine Herausforderung, aber auch eine Chance bei der Entwicklung von ML-Methoden für derlei Aufgaben dar. In der Vergangenheit wurden ML-Aufgaben im Umweltbereich mit konventionellen ML-Methoden angegangen, wie z. B. lineare Regression oder Random Forests (RFs). In den letzten Jahren hat der Bereich ML jedoch durch Deep Learning (DL) enorme Fortschritte über diese klassischen Modelle hinaus gemacht. Bei DL verwenden die Modelle mehrere Schichten von Neuronen, die mit zunehmender Schichtungstiefe immer abstraktere Merkmalsdarstellungen erzeugen. DL hat zuvor undurchführbare ML-Aufgaben realisierbar gemacht, die Leistung für viele Aufgaben im Vergleich zu bestehenden ML-Modellen erheblich verbessert und die Notwendigkeit für manuelles Feature-Engineering in einigen Bereichen aufgrund seiner Fähigkeit, Features aus Rohdaten zu lernen, eliminiert. Um diese Vorteile für ML-Aufgaben in der Umwelt nutzbar zu machen, ist es vielversprechend, geeignete DL-Methoden für diesen Bereich zu entwickeln. In dieser Arbeit werden Methoden zur Bewältigung der besonderen Herausforderungen und zur Nutzung der Möglichkeiten von Umwelt-ML-Aufgaben in Verbindung mit DL vorgestellt. Zu diesem Zweck werden in den vorgeschlagenen Methoden die folgenden Techniken untersucht: (i) Faltungen wie in Convolutional Neural Networks (CNNs), um wiederkehrende räumliche Muster in Geodaten zu nutzen. (ii) Probleme als Regressionsaufgaben stellen, um die kontinuierlichen Variablen zu schätzen. (iii) Dichtebasierte Gewichtung zur Verbesserung der Schätzungen bei seltenen und extremen Ereignissen. (iv) Multi-Task-Lernen, um mehrere verwandte Zielvariablen zu nutzen. (v) Halbüberwachtes Lernen, um auch mit wenigen bekannten Zielwerten zurechtzukommen. Mithilfe dieser Techniken werden in der Arbeit vier Forschungsfragen untersucht: (i) Kann Luftverschmutzung ohne manuelles Feature Engineering geschätzt werden? Dies wird durch die Einführung des CNN-basierten LUR-Modells MapLUR sowie der automatisierten LUR-Lösung OpenLUR positiv beantwortet. 
(ii) Können kolokalisierte Verschmutzungsdaten räumliche Luftverschmutzungsmodelle verbessern? Hierfür wird Multi-Task-Learning für LUR entwickelt, das Potenzial für Verbesserungen mit kolokalisierten Daten zeigt. (iii) Können DL-Modelle die Qualität der Ausgaben von Klimamodellen verbessern? Die vorgeschlagene DL-MOS-Architektur ConvMOS demonstriert das. Zusätzlich wird halbüberwachtes Training von Multilayer Perceptrons (MLPs) für die Interpolation von Korngrößenverteilungen vorgestellt, das verbesserte Eingabedaten liefern kann. (iv) Kann man DL-Modellen beibringen, Klimaextreme besser abzuschätzen? Zu diesem Zweck wird eine dichtebasierte Gewichtung für unbalancierte Regression (DenseLoss) vorgeschlagen und auf die DL-Architektur ConvMOS angewendet, um die Schätzung von Klimaextremen zu verbessern. Diese Methoden zeigen, wie speziell DL-Techniken für Umwelt-ML-Aufgaben unter Berücksichtigung ihrer besonderen Eigenschaften entwickelt werden können. Dies ermöglicht bessere Modelle als konventionelles ML bisher erlaubt hat, was zu einer genaueren Bewertung und einem besseren Verständnis des Zustands unserer Umwelt führt. KW - Deep learning KW - Modellierung KW - Umwelt KW - Geospatial KW - Environmental KW - Regression KW - Neuronales Netz KW - Maschinelles Lernen KW - Geoinformationssystem Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-313121 ER - TY - THES A1 - Eismann, Simon T1 - Performance Engineering of Serverless Applications and Platforms T1 - Performanz Engineering von Serverless Anwendungen und Plattformen N2 - Serverless computing is an emerging cloud computing paradigm that offers a high-level application programming model with utilization-based billing. It enables the deployment of cloud applications without managing the underlying resources or worrying about other operational aspects. Function-as-a-Service (FaaS) platforms implement serverless computing by allowing developers to execute code on demand in response to events, with continuous scaling, while having to pay only for the time used with sub-second metering. Cloud providers have further introduced many fully managed services for databases, messaging buses, and storage that also implement a serverless computing model. Applications composed of these fully managed services and FaaS functions are quickly gaining popularity in both industry and academia. However, due to this rapid adoption, much information surrounding serverless computing is inconsistent and often outdated as the serverless paradigm evolves. This makes the performance engineering of serverless applications and platforms challenging, as there are many open questions, such as: What types of applications is serverless computing well suited for, and what are its limitations? How should serverless applications be designed, configured, and implemented? Which design decisions impact the performance properties of serverless platforms and how can they be optimized? These and many other open questions can be traced back to an inconsistent understanding of serverless applications and platforms, which could present a major roadblock in the adoption of serverless computing. 
In this thesis, we address the lack of performance knowledge surrounding serverless applications and platforms from multiple angles: we conduct empirical studies to further the understanding of serverless applications and platforms, we introduce automated optimization methods that simplify the operation of serverless applications, and we enable the analysis of design tradeoffs of serverless platforms by extending white-box performance modeling. N2 - Serverless Computing ist ein neues Cloud-Computing-Paradigma, das ein High-Level-Anwendungsprogrammiermodell mit nutzungsbasierter Abrechnung bietet. Es ermöglicht die Bereitstellung von Cloud-Anwendungen, ohne dass die zugrunde liegenden Ressourcen verwaltet werden müssen oder man sich um andere betriebliche Aspekte kümmern muss. FaaS-Plattformen implementieren Serverless Computing, indem sie Entwicklern die Möglichkeit geben, Code nach Bedarf als Reaktion auf Ereignisse mit kontinuierlicher Skalierung auszuführen, während sie nur für die genutzte Zeit mit sekundengenauer Abrechnung zahlen müssen. Cloud-Anbieter haben darüber hinaus viele vollständig verwaltete Dienste für Datenbanken, Messaging-Busse und Orchestrierung eingeführt, die ebenfalls ein Serverless Computing-Modell implementieren. Anwendungen, die aus diesen vollständig verwalteten Diensten und FaaS-Funktionen bestehen, werden sowohl in der Industrie als auch in der Wissenschaft immer beliebter. Aufgrund dieser schnellen Verbreitung sind jedoch viele Informationen zum Serverless Computing inkonsistent und oft veraltet, da sich das Serverless Paradigma weiterentwickelt. Dies macht das Performanz-Engineering von Serverless Anwendungen und Plattformen zu einer Herausforderung, da es viele offene Fragen gibt, wie zum Beispiel: Für welche Arten von Anwendungen ist Serverless Computing gut geeignet und wo liegen seine Grenzen? Wie sollten Serverless Anwendungen konzipiert, konfiguriert und implementiert werden? Welche Designentscheidungen wirken sich auf die Performanzeigenschaften von Serverless Plattformen aus und wie können sie optimiert werden? Diese und viele andere offene Fragen lassen sich auf ein uneinheitliches Verständnis von Serverless Anwendungen und Plattformen zurückführen, was ein großes Hindernis für die Einführung von Serverless Computing darstellen könnte. In dieser Arbeit adressieren wir den Mangel an Performanzwissen zu Serverless Anwendungen und Plattformen aus mehreren Blickwinkeln: Wir führen empirische Studien durch, um das Verständnis von Serverless Anwendungen und Plattformen zu fördern, wir stellen automatisierte Optimierungsmethoden vor, die das benötigte Wissen für den Betrieb von Serverless Anwendungen reduzieren, und wir erweitern die White-Box-Performanzmodellierung für die Analyse von Designkompromissen von Serverless Plattformen. KW - Leistungsbewertung KW - software performance KW - Cloud Computing Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-303134 ER - TY - THES A1 - Nogatz, Falco T1 - Defining and Implementing Domain-Specific Languages with Prolog T1 - Definition und Implementierung domänenspezifischer Sprachen mit Prolog N2 - The landscape of today’s programming languages is manifold. With the diversity of applications, the difficulty of adequately addressing and specifying the programs in use increases. This often leads to the design and implementation of new domain-specific languages. They enable domain experts to express knowledge in their preferred format, resulting in more readable and concise programs. 
Due to its flexible and declarative syntax without reserved keywords, the logic programming language Prolog is particularly suitable for defining and embedding domain-specific languages. This thesis addresses the questions and challenges that arise when integrating domain-specific languages into Prolog. We compare the two approaches to define them either externally or internally, and provide assisting tools for each. The grammar of a formal language is usually defined in the extended Backus–Naur form. In this work, we handle this formalism as a domain-specific language in Prolog, and define term expansions that allow translating it into equivalent definite clause grammars. We present the package library(dcg4pt) for SWI-Prolog, which enriches them with an additional argument to automatically process the term’s corresponding parse tree. To simplify the work with definite clause grammars, we visualise their application with a web-based tracer. The external integration of domain-specific languages requires the programmer to keep the grammar, parser, and interpreter in sync. In many cases, domain-specific languages can instead be directly embedded into Prolog by providing appropriate operator definitions. In addition, we propose syntactic extensions for Prolog to expand its expressiveness, for instance to state logic formulas with their connectives verbatim. This allows the use of all tools that were originally written for Prolog, for instance code linters and editors with syntax highlighting. We present the package library(plammar), a standard-compliant parser for Prolog source code, written in Prolog. It is able to automatically infer from example sentences the required operator definitions with their classes and precedences as well as the required Prolog language extensions. As a result, we can automatically answer the question: Is it possible to model these example sentences as valid Prolog clauses, and how? We discuss and apply the two approaches to internal and external integrations for several domain-specific languages, namely the extended Backus–Naur form, GraphQL, XPath, and a controlled natural language to represent expert rules in if-then form. The created toolchain with library(dcg4pt) and library(plammar) yields new application opportunities for static Prolog source code analysis, which we also present. N2 - Die Landschaft der heutigen Programmiersprachen ist vielfältig. Mit ihren unterschiedlichen Anwendungsbereichen steigt zugleich die Schwierigkeit, die eingesetzten Programme adäquat anzusprechen und zu spezifizieren. Immer häufiger werden hierfür domänenspezifische Sprachen entworfen und implementiert. Sie ermöglichen Domänenexperten, Wissen in ihrem bevorzugten Format auszudrücken, was zu lesbareren Programmen führt. Durch ihre flexible und deklarative Syntax ohne vorbelegte Schlüsselwörter ist die logische Programmsprache Prolog besonders geeignet, um domänenspezifische Sprachen zu definieren und einzubetten. Diese Arbeit befasst sich mit den Fragen und Herausforderungen, die sich bei der Integration von domänenspezifischen Sprachen in Prolog ergeben. Wir vergleichen die zwei Ansätze, sie entweder extern oder intern zu definieren, und stellen jeweils Hilfsmittel zur Verfügung. Die Grammatik einer formalen Sprache wird häufig in der erweiterten Backus–Naur–Form definiert. Diesen Formalismus behandeln wir in dieser Arbeit als eine domänenspezifische Sprache in Prolog und definieren Termexpansionen, die es erlauben, ihn in äquivalente Definite Clause Grammars für Prolog zu übersetzen.
Durch das Modul library(dcg4pt) werden sie um ein zusätzliches Argument erweitert, das den Syntaxbaum eines Terms automatisch erzeugt. Um die Arbeit mit Definite Clause Grammars zu erleichtern, visualisieren wir ihre Anwendung in einem webbasierten Tracer. Meist können domänenspezifische Sprachen jedoch auch mittels passender Operatordefinitionen direkt in Prolog eingebettet werden. Dies ermöglicht die Verwendung aller Werkzeuge, die ursprünglich für Prolog geschrieben wurden, z.B. zum Code-Linting und Syntax-Highlighting. In dieser Arbeit stellen wir den standardkonformen Prolog-Parser library(plammar) vor. Er ist in Prolog geschrieben und in der Lage, aus Beispielsätzen automatisch die erforderlichen Operatoren mit ihren Klassen und Präzedenzen abzuleiten. Um die Ausdruckskraft von Prolog noch zu erweitern, schlagen wir Ergänzungen zum ISO-Standard vor. Sie erlauben es, weitere Sprachen direkt einzubinden, und werden ebenfalls von library(plammar) identifiziert. So ist es bspw. möglich, logische Formeln direkt mit den bekannten Symbolen für Konjunktion, Disjunktion usw. als Prolog-Programme anzugeben. Beide Ansätze der internen und externen Integration werden für mehrere domänenspezifische Sprachen diskutiert und beispielhaft für GraphQL, XPath, die erweiterte Backus–Naur–Form sowie Expertenregeln in Wenn–Dann–Form umgesetzt. Die vorgestellten Werkzeuge um library(dcg4pt) und library(plammar) ergeben zudem neue Anwendungsmöglichkeiten auch für die statische Quellcodeanalyse von Prolog-Programmen. KW - PROLOG KW - Domänenspezifische Sprache KW - logic programming KW - knowledge representation KW - definite clause grammars Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-301872 ER - TY - RPRT A1 - Bösch, Carolin A1 - Stieler, Malena A1 - Lydon, Salomon A1 - Hesse, Martin A1 - Ali, Hassan A1 - Finzel, Matthias A1 - Faraz Ali, Syed A1 - Salian, Yash A1 - Alnoor, Hiba A1 - John, Jeena A1 - Lakkad, Harsh A1 - Bhosale, Devraj A1 - Jafarian, Timon A1 - Parvathi, Uma A1 - Ezzatpoor, Narges A1 - Datar, Tanuja T1 - Venus Research Station N2 - Because of the extreme conditions in its atmosphere, Venus has been less explored than, for example, Mars. In the past, only a few probes have been able to survive on the surface for very short periods and send back data. The atmosphere, too, is far from being fully explored. It could even be that building blocks of life can be found in the more moderate layers of the planet’s atmosphere. It can therefore be assumed that the planet Venus will increasingly become a focus of exploration. One way to collect significantly more data in situ is to build and operate an atmospheric research station over an extended period of time. Such a station could carry out measurements at different positions and at different times and thus significantly expand our knowledge of the planet. In this work, the design of a Venus Research Station floating within the Venusian atmosphere is presented, complemented by the design of deployable atmospheric Scouts. The design of these components is carried out at a conceptual level.
T3 - Raumfahrttechnik und Extraterrestrik - 4 KW - Venus KW - Research Station Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-328695 SN - 2747-9374 ER - TY - THES A1 - Bleier, Michael T1 - Underwater Laser Scanning - Refractive Calibration, Self-calibration and Mapping for 3D Reconstruction T1 - Laserscanning unter Wasser - Refraktive Kalibrierung, Selbstkalibrierung und Kartierung zur 3D Rekonstruktion N2 - There is great interest in affordable, precise and reliable metrology underwater: Archaeologists want to document artifacts in situ with high detail. In marine research, biologists require tools to monitor coral growth, and geologists need recordings to model sediment transport. Furthermore, for offshore construction projects, maintenance, and inspection, millimeter-accurate measurements of defects and offshore structures are essential. While the process of digitizing individual objects and complete sites on land is well understood and standard methods, such as Structure from Motion or terrestrial laser scanning, are regularly applied, precise underwater surveying with high resolution is still a complex and difficult task. Applying optical scanning techniques in water is challenging due to reduced visibility caused by turbidity and light absorption. However, optical underwater scanners provide significant advantages in terms of achievable resolution and accuracy compared to acoustic systems. This thesis proposes an underwater laser scanning system and the algorithms for creating dense and accurate 3D scans in water. It is based on laser triangulation, and the main optical components are an underwater camera and a cross-line laser projector. The prototype is configured with a motorized yaw axis for capturing scans from a tripod. Alternatively, it is mounted to a moving platform for mobile mapping. The main focus lies on the refractive calibration of the underwater camera and laser projector, the image processing, and the 3D reconstruction. For highest accuracy, the refraction at the individual media interfaces must be taken into account. This is addressed by an optimization-based calibration framework using a physical-geometric camera model derived from an analytical formulation of a ray-tracing projection model. In addition to scanning underwater structures, this work presents the 3D acquisition of semi-submerged structures and the correction of refraction effects. As in-situ calibration in water is complex and time-consuming, the challenge of transferring an in-air scanner calibration to water without re-calibration is investigated, as well as self-calibration techniques for structured light. The system was successfully deployed in various configurations for both static scanning and mobile mapping. An evaluation of the calibration and 3D reconstruction using reference objects and a comparison of free-form surfaces in clear water demonstrate the high accuracy potential in the range of one millimeter to less than one centimeter, depending on the measurement distance. Mobile underwater mapping and motion compensation based on visual-inertial odometry are demonstrated using a new optical underwater scanner based on fringe projection. Continuous registration of individual scans allows the acquisition of 3D models from an underwater vehicle. RGB images captured in parallel are used to create 3D point clouds of underwater scenes in full color.
3D maps are useful to the operator during the remote control of underwater vehicles and provide the building blocks to enable offshore inspection and surveying tasks. The advancing automation of the measurement technology will allow non-experts to use it, significantly reduce acquisition time, and increase accuracy, making underwater metrology more cost-effective. N2 - Das Interesse an präziser, zuverlässiger und zugleich kostengünstiger Unterwassermesstechnik ist groß. Beispielsweise wollen Archäologen Artefakte in situ mit hoher Detailtreue dokumentieren und in der Meeresforschung benötigen Biologen Messwerkzeuge zur Beobachtung des Korallenwachstums. Auch Geologen sind auf Messdaten angewiesen, um Sedimenttransporte zu modellieren. Darüber hinaus ist für die Errichtung von Offshore-Bauwerken sowie deren Wartung und Inspektion eine millimetergenaue Vermessung von vorhandenen Strukturen und Defekten unerlässlich. Während die Digitalisierung einzelner Objekte und ganzer Areale an Land gut erforscht ist und verschiedene Standardmethoden, wie zum Beispiel Structure from Motion oder terrestrisches Laserscanning, regelmäßig eingesetzt werden, ist die präzise und hochauflösende Unterwasservermessung nach wie vor eine komplexe und schwierige Aufgabe. Die Anwendung optischer Messtechnik im Wasser ist aufgrund der eingeschränkten Sichttiefe durch Trübung und Lichtabsorption eine Herausforderung. Optische Unterwasserscanner bieten jedoch Vorteile hinsichtlich der erreichbaren Auflösung und Genauigkeit gegenüber akustischen Systemen. In dieser Arbeit werden ein Unterwasser-Laserscanning-System und die Algorithmen zur Erzeugung von 3D-Scans mit hoher Punktdichte im Wasser vorgestellt. Es basiert auf Lasertriangulation und die optischen Hauptkomponenten sind eine Unterwasserkamera und ein Kreuzlinienlaserprojektor. Das System ist mit einer motorisierten Drehachse ausgestattet, um Scans von einem Stativ aus aufzunehmen. Alternativ kann es von einer beweglichen Plattform aus für mobile Kartierung eingesetzt werden. Das Hauptaugenmerk liegt auf der refraktiven Kalibrierung der Unterwasserkamera und des Laserprojektors, der Bildverarbeitung und der 3D-Rekonstruktion. Um höchste Genauigkeit zu erreichen, muss die Brechung an den einzelnen Medienübergängen berücksichtigt werden. Dies wird durch ein physikalisch-geometrisches Kameramodell, das auf einer analytischen Beschreibung der Strahlenverfolgung basiert, und ein optimierungsbasiertes Kalibrierverfahren erreicht. Neben dem Scannen von Unterwasserstrukturen wird in dieser Arbeit auch die 3D-Erfassung von teilweise im Wasser befindlichen Strukturen und die Korrektur der dabei auftretenden Brechungseffekte vorgestellt. Da die Kalibrierung im Wasser komplex und zeitintensiv ist, wird die Übertragung einer Kalibrierung des Scanners in Luft auf die Bedingungen im Wasser ohne Neukalibrierung sowie die Selbstkalibrierung für Lichtschnittverfahren untersucht. Das System wurde in verschiedenen Konfigurationen sowohl für statisches Scannen als auch für die mobile Kartierung erfolgreich eingesetzt. Die Validierung der Kalibrierung und der 3D-Rekonstruktion anhand von Referenzobjekten und der Vergleich von Freiformflächen in klarem Wasser zeigen das hohe Genauigkeitspotenzial im Bereich von einem Millimeter bis weniger als einem Zentimeter in Abhängigkeit von der Messdistanz. Die mobile Unterwasserkartierung und Bewegungskompensation anhand visuell-inertialer Odometrie wird mit einem neuen optischen Unterwasserscanner auf Basis der Streifenprojektion demonstriert.
Dabei ermöglicht die kontinuierliche Registrierung von Einzelscans die Erfassung von 3D-Modellen von einem Unterwasserfahrzeug aus. Mit Hilfe von parallel aufgenommenen RGB-Bildern werden dabei farbige 3D-Punktwolken der Unterwasserszenen erstellt. Diese 3D-Karten dienen beispielsweise dem Bediener bei der Fernsteuerung von Unterwasserfahrzeugen und bilden die Grundlage für Offshore-Inspektions- und Vermessungsaufgaben. Die fortschreitende Automatisierung der Messtechnik wird somit auch eine Verwendung durch Nichtfachleute ermöglichen und gleichzeitig die Erfassungszeit erheblich verkürzen und die Genauigkeit verbessern, was die Vermessung im Wasser kostengünstiger und effizienter macht. T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 28 KW - Selbstkalibrierung KW - Punktwolke KW - Bildverarbeitung KW - 3D Reconstruction KW - Self-calibration KW - Underwater Scanning KW - Underwater Mapping KW - Dreidimensionale Rekonstruktion KW - 3D-Rekonstruktion Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322693 SN - 978-3-945459-45-4 ER - TY - RPRT A1 - Großmann, Marcel A1 - Le, Duy Thanh T1 - Visualization of Network Emulation Enabled by Kathará T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - In network research, reproducibility of experiments is not always easy to achieve. Infrastructures are cumbersome to set up or are not available due to vendor-specific devices. Emulators try to overcome those issues to a given extent and are available in different service models. Unfortunately, using emulators requires time-consuming effort and a deep understanding of their functionality. First, we analyze to what extent currently available open-source emulators support network configurations and how user-friendly they are. With these insights, we describe how an easy-to-use emulator is implemented and may run as a Network Emulator as a Service (NEaaS). Here, virtualization plays a major role in deploying a NEaaS based on Kathará. KW - Network Emulator KW - Visualized Kathará KW - Containerization Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322189 ER - TY - RPRT A1 - Dworzak, Manuel A1 - Großmann, Marcel A1 - Le, Duy Thanh T1 - Federated Learning for Service Placement in Fog and Edge Computing T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - Service orchestration requires enormous attention and remains a struggle nowadays. Of course, virtualization provides a base level of abstraction for services to be deployable on many infrastructures. With container virtualization, the trend of migrating applications to the micro-service level so that they can run in Fog and Edge Computing environments rapidly increases manageability and maintenance efforts. Similarly, network virtualization adds effort to calibrate IP flows for Software-Defined Networks and eventually route them by means of Network Function Virtualization. Nevertheless, there are concepts like MAPE-K to support micro-service distribution in next-generation cloud and network environments. We want to explore how service distribution can be improved by adopting machine learning concepts for infrastructure or service changes. Therefore, we show how federated machine learning is integrated into a cloud-to-fog continuum without burdening single nodes.
KW - fog computing KW - SDN KW - orchestration KW - federated learning Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322193 ER - TY - THES A1 - Krenzer, Adrian T1 - Machine learning to support physicians in endoscopic examinations with a focus on automatic polyp detection in images and videos T1 - Maschinelles Lernen zur Unterstützung von Ärzten bei endoskopischen Untersuchungen mit Schwerpunkt auf der automatisierten Polypenerkennung in Bildern und Videos N2 - Deep learning enables enormous progress in many computer vision-related tasks. Artificial Intelligence (AI) steadily yields new state-of-the-art results in the field of detection and classification. Thereby, AI performance partly equals or exceeds human performance. Those achievements impacted many domains, including medical applications. One particular field of medical applications is gastroenterology. In gastroenterology, machine learning algorithms are used to assist examiners during interventions. One of the most critical concerns for gastroenterologists is the development of Colorectal Cancer (CRC), which is one of the leading causes of cancer-related deaths worldwide. Detecting polyps in screening colonoscopies is the essential procedure to prevent CRC. To this end, the gastroenterologist uses an endoscope to screen the whole colon for polyps during a colonoscopy. Polyps are mucosal growths that can vary in severity. This thesis supports gastroenterologists in their examinations with automated detection and classification systems for polyps. The main contribution is a real-time polyp detection system. This system is ready to be installed in any gastroenterology practice worldwide using open-source software. The system achieves state-of-the-art detection results and is currently evaluated in a clinical trial in four different centers in Germany. The thesis presents two additional key contributions: One is a polyp detection system with extended vision tested in an animal trial. Polyps often hide behind folds or in uninvestigated areas. Therefore, the polyp detection system with extended vision uses an endoscope assisted by two additional cameras to see behind those folds. If a polyp is detected, the endoscopist receives a visual signal. While the detection system handles the two additional camera inputs, the endoscopist focuses on the main camera as usual. The second comprises two polyp classification models, one for classification based on shape (Paris) and the other based on surface and texture (NBI International Colorectal Endoscopic (NICE) classification). Both classifications help the endoscopist with the treatment of and the decisions about the detected polyp. The key algorithms of the thesis achieve state-of-the-art performance. Notably, the polyp detection system, tested on a highly demanding video data set, shows an F1 score of 90.25 % while working in real time. The results exceed those of all real-time systems in the literature. Furthermore, the first preliminary results of the clinical trial of the polyp detection system suggest a high Adenoma Detection Rate (ADR). In the preliminary study, all polyps were detected by the polyp detection system, and the system achieved a high usability score of 96.3 (max 100). The Paris classification model achieved an F1 score of 89.35 %, which is state-of-the-art. The NICE classification model achieved an F1 score of 81.13 %. Furthermore, a large data set for polyp detection and classification was created during this thesis.
Therefore, a fast and robust annotation system called Fast Colonoscopy Annotation Tool (FastCAT) was developed. The system simplifies the annotation process for gastroenterologists. Thereby, the gastroenterologists only annotate key parts of the endoscopic video. Afterward, those video parts are pre-labeled by a polyp detection AI to speed up the process. After the AI has pre-labeled the frames, non-experts correct and finish the annotation. This annotation process is fast and ensures high quality. FastCAT reduces the overall workload of the gastroenterologist on average by a factor of 20 compared to an open-source state-of-the-art annotation tool. N2 - Deep Learning ermöglicht enorme Fortschritte bei vielen Aufgaben im Bereich der Computer Vision. Künstliche Intelligenz (KI) liefert ständig neue Spitzenergebnisse im Bereich der Erkennung und Klassifizierung. Dabei erreicht oder übertrifft die Leistung von KI teilweise die menschliche Leistung. Diese Errungenschaften wirken sich auf viele Bereiche aus, darunter auch auf medizinische Anwendungen. Ein besonderer Bereich der medizinischen Anwendungen ist die Gastroenterologie. In der Gastroenterologie werden Algorithmen des maschinellen Lernens eingesetzt, um den Untersucher bei medizinischen Eingriffen zu unterstützen. Eines der größten Probleme für Gastroenterologen ist die Entwicklung von Darmkrebs, der weltweit eine der häufigsten krebsbedingten Todesursachen ist. Die Erkennung von Polypen bei Darmspiegelungen ist das wichtigste Verfahren zur Vorbeugung von Darmkrebs. Dabei untersucht der Gastroenterologe den Dickdarm im Rahmen einer Koloskopie, um z.B. Polypen zu finden. Polypen sind Schleimhautwucherungen, die unterschiedlich stark ausgeprägt sein können. Diese Arbeit unterstützt Gastroenterologen bei ihren Untersuchungen mit automatischen Erkennungssystemen und Klassifizierungssystemen für Polypen. Der Hauptbeitrag ist ein Echtzeitpolypenerkennungssystem. Dieses System kann in jeder gastroenterologischen Praxis weltweit mit Open-Source-Software installiert werden. Das System erzielt Erkennungsergebnisse auf dem neuesten Stand der Technik und wird derzeit in einer klinischen Studie in vier verschiedenen Praxen in Deutschland evaluiert. In dieser Arbeit werden zwei weitere wichtige Beiträge vorgestellt: Zum einen ein Polypenerkennungssystem mit erweiterter Sicht, das in einem Tierversuch getestet wurde. Polypen verstecken sich oft hinter Falten oder in nicht untersuchten Bereichen. Daher verwendet das Polypenerkennungssystem mit erweiterter Sicht ein Endoskop, das von zwei zusätzlichen Kameras unterstützt wird, um hinter diese Falten zu sehen. Wenn ein Polyp entdeckt wird, erhält der Endoskopiker ein visuelles Signal. Während das Erkennungssystem die beiden zusätzlichen Kameraeingaben verarbeitet, konzentriert sich der Endoskopiker wie gewohnt auf die Hauptkamera. Das zweite sind zwei Polypenklassifizierungsmodelle, eines für die Klassifizierung anhand der Form (Paris) und das andere anhand der Oberfläche und Textur (NICE-Klassifizierung). Beide Klassifizierungen helfen dem Endoskopiker bei der Behandlung und Entscheidung über den erkannten Polypen. Die Schlüsselalgorithmen der Dissertation erreichen eine Leistung, die dem neuesten Stand der Technik entspricht. Herausragend ist, dass das auf einem anspruchsvollen Videodatensatz getestete Polypenerkennungssystem einen F1-Wert von 90,25 % aufweist, während es in Echtzeit arbeitet. Die Ergebnisse übertreffen alle Echtzeitsysteme für Polypenerkennung in der Literatur.
Darüber hinaus deuten die ersten vorläufigen Ergebnisse einer klinischen Studie des Polypenerkennungssystems auf eine hohe Adenomdetektionsrate (ADR) hin. In dieser Studie wurden alle Polypen durch das Polypenerkennungssystem erkannt, und das System erreichte eine hohe Nutzerfreundlichkeit von 96,3 (maximal 100). Bei der automatischen Klassifikation von Polypen basierend auf der Paris-Klassifikation erreichte das in dieser Arbeit entwickelte System einen F1-Wert von 89,35 %, was dem neuesten Stand der Technik entspricht. Das NICE-Klassifikationsmodell erreichte einen F1-Wert von 81,13 %. Darüber hinaus wurde im Rahmen dieser Arbeit ein großer Datensatz zur Polypenerkennung und -klassifizierung erstellt. Dafür wurde ein schnelles und robustes Annotationssystem namens FastCAT entwickelt. Das System vereinfacht den Annotationsprozess für Gastroenterologen. Die Gastroenterologen annotieren dabei nur die wichtigsten Teile des endoskopischen Videos. Anschließend werden diese Videoteile von einer Polypenerkennungs-KI vorverarbeitet, um den Prozess zu beschleunigen. Nachdem die KI die Bilder vorbeschriftet hat, korrigieren und vervollständigen Nicht-Experten die Annotationen. Dieser Annotationsprozess ist schnell und gewährleistet eine hohe Qualität. FastCAT reduziert die Gesamtarbeitsbelastung des Gastroenterologen im Durchschnitt um den Faktor 20 im Vergleich zu einem Open-Source-Annotationstool auf dem neuesten Stand der Technik. KW - Deep Learning KW - Maschinelles Lernen KW - Maschinelles Sehen KW - Machine Learning KW - Object Detection KW - Medical Image Analysis KW - Computer Vision Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-319119 ER - TY - RPRT A1 - Martino, Luigi A1 - Deutschmann, Jörg A1 - Hielscher, Kai-Steffen A1 - German, Reinhard T1 - Towards a 5G Satellite Communication Framework for V2X T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - In recent years, satellite communication has been expanding its field of application in the world of computer networks. This paper aims to provide an overview of how a typical scenario involving 5G Non-Terrestrial Networks (NTNs) for vehicle-to-everything (V2X) applications is characterized. In particular, a first implementation of a system that integrates them will be described. Such a framework will later be used to evaluate the performance of applications such as Vehicle Monitoring (VM), Remote Driving (RD), Voice Over IP (VoIP), and others. Different configuration scenarios such as Low Earth Orbit and Geostationary Orbit will be considered. KW - 5G KW - non-terrestrial networks KW - satellite communication Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322148 ER - TY - RPRT A1 - Rauber, Christof A. O. A1 - Brechtel, Lukas A1 - Schotten, Hans D. T1 - JCAS-Enabled Sensing as a Service in 6th-Generation Mobile Communication Networks T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - The introduction of new types of frequency spectrum in 6G technology facilitates the convergence of conventional mobile communications and radar functions. Thus, the mobile network itself becomes a versatile sensor system. This enables mobile network operators to offer a sensing service in addition to conventional data and telephony services.
The potential benefits are expected to accrue to various stakeholders, including individuals, the environment, and society in general. The paper discusses technological development, possible integration, and use cases, as well as future development areas. KW - Sensing-aaS KW - JCAS KW - 6G Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322135 ER - TY - RPRT A1 - Loh, Frank A1 - Raffeck, Simon A1 - Geißler, Stefan A1 - Hoßfeld, Tobias T1 - Paving the Way for an Energy Efficient and Sustainable Future Internet of Things T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - In this work, we describe the network from data collection to data processing and storage as a system based on different layers. We outline the different layers and highlight major tasks and dependencies with regard to energy consumption and energy efficiency. With this view, we can work out challenges and questions a future system architect must answer to provide a more sustainable, green, resource-friendly, and energy-efficient application or system. Therefore, all system layers must be considered both individually and as a whole for future IoT solutions. This requires, in particular, novel sustainability metrics in addition to current Quality of Service and Quality of Experience metrics to provide a powerful, user-satisfying, and sustainable network. KW - Internet of Things KW - energy efficiency KW - sustainability Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322161 ER - TY - RPRT A1 - Funda, Christoph A1 - Konheiser, Tobias A1 - German, Reinhard A1 - Hielscher, Kai-Steffen T1 - How to Model and Predict the Scalability of a Hardware-In-The-Loop Test Bench for Data Re-Injection? T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - This paper describes a novel application of an empirical network calculus model based on measurements of a hardware-in-the-loop (HIL) test system. The aim is to predict the performance of a HIL test bench for open-loop re-injection in the context of scalability. HIL test benches are distributed computer systems including software, hardware, and networking devices. They are used to validate complex technical systems, but have not yet been the system under study themselves. Our approach is to use measurements from the HIL system to create an empirical model for arrival and service curves. We predict the performance and design the previously unknown parameters of the HIL simulator with network calculus (NC), namely the buffer sizes and the minimum needed pre-buffer time for the playback buffer. We furthermore show that it is possible to estimate the CPU load from arrival and service curves based on the utilization theorem, and hence estimate the scalability of the HIL system in the context of the number of sensor streams. KW - hardware-in-the-loop simulation KW - computer performance evaluation KW - network calculus KW - scalability evaluation Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322150 ER - TY - JOUR A1 - Steininger, Michael A1 - Abel, Daniel A1 - Ziegler, Katrin A1 - Krause, Anna A1 - Paeth, Heiko A1 - Hotho, Andreas T1 - ConvMOS: climate model output statistics with deep learning JF - Data Mining and Knowledge Discovery N2 - Climate models are the tool of choice for scientists researching climate change.
Like all models, they suffer from errors, particularly systematic and location-specific representation errors. One way to reduce these errors is model output statistics (MOS), where the model output is fitted to observational data with machine learning. In this work, we assess the use of convolutional Deep Learning climate MOS approaches and present the ConvMOS architecture, which is specifically designed based on the observation that there are systematic and location-specific errors in the precipitation estimates of climate models. We apply ConvMOS models to the simulated precipitation of the regional climate model REMO, showing that a combination of per-location model parameters for reducing location-specific errors and global model parameters for reducing systematic errors is indeed beneficial for MOS performance. We find that ConvMOS models can reduce errors considerably and perform significantly better than three commonly used MOS approaches and plain ResNet and U-Net models in most cases. Our results show that non-linear MOS models underestimate the number of extreme precipitation events, which we alleviate by training models specialized towards extreme precipitation events with the imbalanced regression method DenseLoss. While we consider climate MOS, we argue that aspects of ConvMOS may also be beneficial in other domains with geospatial data, such as air pollution modeling or weather forecasts. KW - Klima KW - Modell KW - Deep learning KW - Neuronales Netz KW - climate KW - neural networks KW - model output statistics Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-324213 SN - 1384-5810 VL - 37 IS - 1 ER - TY - JOUR A1 - Kempf, Sebastian A1 - Krug, Markus A1 - Puppe, Frank T1 - KIETA: Key-insight extraction from scientific tables JF - Applied Intelligence N2 - An important but very time-consuming part of the research process is literature review. An already large and still growing body of publications, as well as a steadily increasing publication rate, continues to worsen the situation. Consequently, automating this task as far as possible is desirable. Experimental results of systems are key insights of high importance during literature review and are usually represented in the form of tables. Our pipeline KIETA exploits these tables to contribute to the endeavor of automation by extracting them and the knowledge they contain from scientific publications. The pipeline is split into multiple steps to guarantee modularity and analyzability, as well as agnosticism regarding the specific scientific domain up until the knowledge extraction step, which is based upon an ontology. Additionally, a dataset of corresponding articles has been manually annotated with information regarding table and knowledge extraction. Experiments show promising results that signal the possibility of an automated system, while also indicating limits of extracting knowledge from tables without any context. KW - table extraction KW - table understanding KW - ontology KW - key-insight extraction KW - information extraction Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-324180 SN - 0924-669X VL - 53 IS - 8 ER - TY - JOUR A1 - Renaut, Léo A1 - Frei, Heike A1 - Nüchter, Andreas T1 - Lidar pose tracking of a tumbling spacecraft using the smoothed normal distribution transform JF - Remote Sensing N2 - Lidar sensors enable precise pose estimation of an uncooperative spacecraft at close range.
In this context, the iterative closest point (ICP) algorithm is usually employed as a tracking method. However, when the size of the point clouds increases, the required computation time of the ICP can become a limiting factor. The normal distribution transform (NDT) is an alternative algorithm which can be more efficient than the ICP, but suffers from robustness issues. In addition, lidar sensors are also subject to motion blur effects when tracking a spacecraft tumbling with a high angular velocity, leading to a loss of precision in the relative pose estimation. This work introduces a smoothed formulation of the NDT to improve the algorithm’s robustness while maintaining its efficiency. Additionally, two strategies are investigated to mitigate the effects of motion blur. The first consists of undistorting the point cloud, while the second is a continuous-time formulation of the NDT. Hardware-in-the-loop tests at the European Proximity Operations Simulator demonstrate the capability of the proposed methods to precisely track an uncooperative spacecraft under realistic conditions within tens of milliseconds, even when the spacecraft tumbles with a significant angular rate. KW - pose tracking KW - uncooperative space rendezvous KW - lidar KW - normal distribution transform Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-313738 SN - 2072-4292 VL - 15 IS - 9 ER - TY - JOUR A1 - Maiwald, Ferdinand A1 - Bruschke, Jonas A1 - Schneider, Danilo A1 - Wacker, Markus A1 - Niebling, Florian T1 - Giving historical photographs a new perspective: introducing camera orientation parameters as new metadata in a large-scale 4D application JF - Remote Sensing N2 - The ongoing digitization of historical photographs in archives allows investigating the quality, quantity, and distribution of these images. However, the exact interior and exterior camera orientations of these photographs are usually lost during the digitization process. The proposed method uses content-based image retrieval (CBIR) to filter exterior images of single buildings in combination with metadata information. The retrieved photographs are automatically processed in an adapted structure-from-motion (SfM) pipeline to determine the camera parameters. In an interactive georeferencing process, the calculated camera positions are transferred into a global coordinate system. As all image and camera data are efficiently stored in the proposed 4D database, they can be conveniently accessed afterward to georeference newly digitized images by using photogrammetric triangulation and spatial resection. The results show that the CBIR and the subsequent SfM are robust methods for various kinds of buildings and different quantities of data. The absolute accuracy of the camera positions after georeferencing lies in the range of a few meters, likely introduced by the inaccurate LOD2 models used for the transformation. The proposed photogrammetric method, the database structure, and the 4D visualization interface enable adding historical urban photographs and 3D models from other locations.
KW - historical images KW - 4D-GIS KW - content-based image retrieval KW - Structure-from-Motion KW - camera orientation KW - feature matching Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-311103 SN - 2072-4292 VL - 15 IS - 7 ER - TY - JOUR A1 - Bräuer-Burchardt, Christian A1 - Munkelt, Christoph A1 - Bleier, Michael A1 - Heinze, Matthias A1 - Gebhart, Ingo A1 - Kühmstedt, Peter A1 - Notni, Gunther T1 - Underwater 3D scanning system for cultural heritage documentation JF - Remote Sensing N2 - Three-dimensional capturing of underwater archeological sites or sunken shipwrecks can support important documentation purposes. In this study, a novel 3D scanning system based on structured illumination is introduced, which supports cultural heritage documentation and measurement tasks in underwater environments. The newly developed system consists of two monochrome measurement cameras, a projection unit that produces aperiodic sinusoidal fringe patterns, two flashlights, a color camera, an inertial measurement unit (IMU), and an electronic control box. The opportunities and limitations of the measurement principles of the 3D scanning system are discussed and compared to other 3D recording methods such as laser scanning, ultrasound, and photogrammetry, in the context of underwater applications. Some possible operational scenarios concerning cultural heritage documentation are introduced and discussed. A report on application activities in water basins and offshore environments is given, including measurement examples and the results of the accuracy measurements. The study shows that the new 3D scanning system can be used both for the topographic documentation of underwater sites and for generating detailed true-scale 3D models, including the texture and color information of objects that must remain under water. KW - underwater 3D scanning KW - structured light illumination KW - object reconstruction KW - 3D model generation KW - site mapping Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-311116 SN - 2072-4292 VL - 15 IS - 7 ER - TY - JOUR A1 - Wang, Xiaoliang A1 - Liu, Xuan A1 - Xiao, Yun A1 - Mao, Yue A1 - Wang, Nan A1 - Wang, Wei A1 - Wu, Shufan A1 - Song, Xiaoyong A1 - Wang, Dengfeng A1 - Zhong, Xingwang A1 - Zhu, Zhu A1 - Schilling, Klaus A1 - Damaren, Christopher T1 - On-orbit verification of RL-based APC calibrations for micrometre level microwave ranging system JF - Mathematics N2 - Micrometre-level ranging accuracy between satellites on orbit relies on the high-precision calibration of the antenna phase center (APC), which is currently accomplished through properly designed calibration maneuvers and batch estimation algorithms. However, unmodeled perturbations of the space dynamics and sensor-induced uncertainty complicate the situation in reality; ranging accuracy deteriorates especially outside the antenna main lobe when maneuvers are performed. This paper proposes an on-orbit APC calibration method that uses a reinforcement learning (RL) process, aiming to provide a micrometre-level, high-accuracy ranging datum for onboard instruments. The RL process used here is an improved Temporal Difference advantage actor critic algorithm (TDAAC), which mainly relies on two neural networks (NNs) for the critic and actor functions. The output of the TDAAC algorithm autonomously balances the amplitude of the APC calibration maneuvers and the APC observation sensitivity, with the objective of maximal APC estimation accuracy.
The RL-based APC calibration method proposed here was fully tested in software and in on-ground experiments, reaching an APC calibration accuracy of less than 2 mrad, as well as on on-orbit maneuver data from 11–12 April 2022, where it achieved a calibration accuracy of 1–1.5 mrad after RL training. The proposed RL-based APC algorithm may be extended to proof-mass calibration scenarios with action feedback to the attitude determination and control system (ADCS), showing its flexibility for spacecraft payload applications in the future. KW - reinforcement learning KW - antenna phase center calibration KW - K band ranging (KBR) KW - micrometre level microwave ranging KW - MSC: 49M37 KW - MSC: 65K05 KW - MSC: 90C30 KW - MSC: 90C40 Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-303970 SN - 2227-7390 VL - 11 IS - 4 ER - TY - JOUR A1 - Fischer, Norbert A1 - Hartelt, Alexander A1 - Puppe, Frank T1 - Line-level layout recognition of historical documents with background knowledge JF - Algorithms N2 - Digitization and transcription of historic documents offer new research opportunities for humanists and are the topics of many edition projects. However, manual work is still required for the main phases of layout recognition and the subsequent optical character recognition (OCR) of early printed documents. This paper describes and evaluates how deep learning approaches recognize text lines and can be extended to layout recognition using background knowledge. The evaluation was performed on five corpora of early prints from the 15th and 16th centuries, representing a variety of layout features. While the main text with standard layouts could be recognized in the correct reading order with a precision and recall of up to 99.9%, complex layouts were also recognized at a rate as high as 90% by using background knowledge, the full potential of which was revealed when many pages of the same source were transcribed. KW - layout recognition KW - background knowledge KW - historical document analysis KW - fully convolutional neural networks KW - baseline detection KW - text line detection Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-310938 SN - 1999-4893 VL - 16 IS - 3 ER - TY - JOUR A1 - Wienrich, Carolin A1 - Carolus, Astrid A1 - Markus, André A1 - Augustin, Yannik A1 - Pfister, Jan A1 - Hotho, Andreas T1 - Long-term effects of perceived friendship with intelligent voice assistants on usage behavior, user experience, and social perceptions JF - Computers N2 - Social patterns and roles can develop when users talk to intelligent voice assistants (IVAs) daily. The current study investigates whether users assign different roles to devices and how this affects their usage behavior, user experience, and social perceptions. Since social roles take time to establish, we equipped 106 participants with Alexa or Google assistants and some smart home devices and observed their interactions for nine months. We analyzed diverse subjective (questionnaire) and objective (interaction) data. By combining social science and data science analyses, we identified two distinct clusters—users who assigned a friendship role to IVAs over time and users who did not. Interestingly, these clusters exhibited significant differences in their usage behavior, user experience, and social perceptions of the devices. For example, participants who assigned a role to IVAs attributed more friendship to them, used them more frequently, reported more enjoyment during interactions, and perceived more empathy for IVAs.
In addition, these users had distinct personal requirements; for example, they reported more loneliness. This study provides valuable insights into the role-specific effects and consequences of voice assistants. Recent developments in conversational language models such as ChatGPT suggest that the findings of this study could make an important contribution to the design of dialogic human–AI interactions. KW - intelligent voice assistant KW - smart speaker KW - social relationship KW - social role KW - long-term analysis KW - social interaction KW - human–computer interaction KW - anthropomorphism Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-313552 SN - 2073-431X VL - 12 IS - 4 ER - TY - JOUR A1 - Dresia, Kai A1 - Kurudzija, Eldin A1 - Deeken, Jan A1 - Waxenegger-Wilfing, Günther T1 - Improved wall temperature prediction for the LUMEN rocket combustion chamber with neural networks JF - Aerospace N2 - Accurate calculations of the heat transfer and the resulting maximum wall temperature are essential for the optimal design of reliable and efficient regenerative cooling systems. However, predicting the heat transfer of supercritical methane flowing in cooling channels of a regeneratively cooled rocket combustor presents a significant challenge. High-fidelity CFD calculations provide sufficient accuracy but are computationally too expensive to be used within elaborate design optimization routines. In a previous work, it has been shown that a surrogate model based on neural networks is able to predict the maximum wall temperature along straight cooling channels with convincing precision when trained with data from CFD simulations for simple cooling channel segments. In this paper, the methodology is extended to cooling channels with curvature. The predictions of the extended model are tested against CFD simulations with different boundary conditions for the representative LUMEN combustor contour with varying geometries and heat flux densities. The high accuracy of the extended model’s predictions suggests that it will be a valuable tool for designing and analyzing regenerative cooling systems with greater efficiency and effectiveness. KW - neural network KW - surrogate model KW - heat transfer KW - machine learning KW - LUMEN KW - rocket engine KW - regenerative cooling Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-319169 SN - 2226-4310 VL - 10 IS - 5 ER - TY - JOUR A1 - Salihoglu, Rana A1 - Srivastava, Mugdha A1 - Liang, Chunguang A1 - Schilling, Klaus A1 - Szalay, Aladar A1 - Bencurova, Elena A1 - Dandekar, Thomas T1 - PRO-Simat: Protein network simulation and design tool JF - Computational and Structural Biotechnology Journal N2 - PRO-Simat is a simulation tool for analysing protein interaction networks, their dynamic change, and pathway engineering. It provides GO enrichment, KEGG pathway analyses, and network visualisation from an integrated database of more than 8 million protein-protein interactions across 32 model organisms and the human proteome. We integrated dynamical network simulation using the Jimena framework, which quickly and efficiently simulates Boolean genetic regulatory networks. On the website, it enables simulation outputs with in-depth analysis of the type, strength, duration, and pathway of the protein interactions. Furthermore, the user can efficiently edit and analyse the effect of network modifications and engineering experiments.
In case studies, applications of PRO-Simat are demonstrated: (i) understanding mutually exclusive differentiation pathways in Bacillus subtilis, (ii) making Vaccinia virus oncolytic by switching on its viral replication mainly in cancer cells and triggering cancer cell apoptosis, and (iii) optogenetic control of nucleotide-processing protein networks to operate DNA storage. Multilevel communication between components is critical for efficient network switching, as demonstrated by a general census of prokaryotic and eukaryotic networks and by comparing their design with synthetic networks using PRO-Simat. The tool is available at https://prosimat.heinzelab.de/ as a web-based query server. KW - network simulation KW - protein analysis KW - signalling pathways KW - dynamic protein-protein interactions KW - optogenetics KW - oncolytic virus KW - DNA storage Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-350034 SN - 2001-0370 VL - 21 ER - TY - JOUR A1 - Rack, Christian A1 - Fernando, Tamara A1 - Yalcin, Murat A1 - Hotho, Andreas A1 - Latoschik, Marc Erich T1 - Who is Alyx? A new behavioral biometric dataset for user identification in XR JF - Frontiers in Virtual Reality N2 - Introduction: This paper addresses the need for reliable user identification in Extended Reality (XR), focusing on the scarcity of public datasets in this area. Methods: We present a new dataset collected from 71 users who played the game “Half-Life: Alyx” on an HTC Vive Pro for 45 min across two separate sessions. The dataset includes motion and eye-tracking data, along with physiological data from a subset of 31 users. Benchmark performance is established using two state-of-the-art deep learning architectures, Convolutional Neural Networks (CNN) and Gated Recurrent Units (GRU). Results: The best model achieved a mean accuracy of 95% for user identification within 2 min when trained on the first session and tested on the second. Discussion: The dataset is freely available and serves as a resource for future research in XR user identification, thereby addressing a significant gap in the field. Its release aims to facilitate advancements in user identification methods and promote reproducibility in XR research. KW - dataset KW - behaviometric KW - deep learning KW - physiological dataset KW - user identification Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-353979 SN - 2673-4192 VL - 4 ER - TY - JOUR A1 - Greubel, André A1 - Andres, Daniela A1 - Hennecke, Martin T1 - Analyzing reporting on ransomware incidents: a case study JF - Social Sciences N2 - Knowledge about ransomware is important for protecting sensitive data and for participating in public debates about suitable IT security regulation. However, as of now, this topic has received little to no attention in most school curricula. As such, it is desirable to analyze what citizens can learn about this topic outside of formal education, e.g., from news articles. This analysis is relevant both for examining the public discourse about ransomware and for identifying what aspects of the topic should be included in the limited time available for it in formal education. Thus, this paper was motivated by both educational and media research. The central goal is to explore how the media reports on this topic and, additionally, to identify potential misconceptions that could stem from this reporting.
To do so, we conducted an exploratory case study into the reporting of 109 media articles regarding a high-impact ransomware event: the shutdown of the Colonial Pipeline (located in the east of the USA). We analyzed how the articles introduced central terminology, what details were provided, what details were not, and what (mis-)conceptions readers might receive from them. Our results show that an introduction of the terminology and technical concepts of security is insufficient for a complete understanding of the incident. Most importantly, the articles may convey four misconceptions about ransomware that are likely to lead to misleading conclusions about the responsibility for the incident and about possible political and technical options to prevent such attacks in the future. KW - media analysis KW - informal education KW - IT security KW - ransomware KW - misconceptions Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-313746 SN - 2076-0760 VL - 12 IS - 5 ER - TY - JOUR A1 - Krenzer, Adrian A1 - Banck, Michael A1 - Makowski, Kevin A1 - Hekalo, Amar A1 - Fitting, Daniel A1 - Troya, Joel A1 - Sudarevic, Boban A1 - Zoller, Wolfgang G. A1 - Hann, Alexander A1 - Puppe, Frank T1 - A real-time polyp-detection system with clinical application in colonoscopy using deep convolutional neural networks JF - Journal of Imaging N2 - Colorectal cancer (CRC) is a leading cause of cancer-related deaths worldwide. The best method to prevent CRC is a colonoscopy. During this procedure, the gastroenterologist searches for polyps. However, there is a potential risk of polyps being missed by the gastroenterologist. Automated detection of polyps can assist the gastroenterologist during a colonoscopy. There are already publications in the literature examining the problem of polyp detection. Nevertheless, most of these systems are only used in the research context and are not implemented for clinical application. Therefore, we introduce the first fully open-source automated polyp-detection system that scores best on current benchmark data and is implemented ready for clinical application. To create the polyp-detection system (ENDOMIND-Advanced), we combined our own data, collected from different hospitals and practices in Germany, with open-source datasets to create a dataset with over 500,000 annotated images. ENDOMIND-Advanced leverages a post-processing technique based on video detection to work in real time with a stream of images. It is integrated into a prototype ready for application in clinical interventions. We achieve better performance compared to the best system in the literature and score an F1-score of 90.24% on the open-source CVC-VideoClinicDB benchmark. KW - machine learning KW - deep learning KW - endoscopy KW - gastroenterology KW - automation KW - object detection KW - video object detection KW - real-time Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-304454 SN - 2313-433X VL - 9 IS - 2 ER - TY - JOUR A1 - Hossfeld, Tobias A1 - Heegaard, Poul E. A1 - Kellerer, Wolfgang T1 - Comparing the scalability of communication networks and systems JF - IEEE Access N2 - Scalability is often mentioned in the literature, but a stringent definition is missing. In particular, there is no general scalability assessment which clearly indicates whether a system scales or not, or whether one system scales better than another.
The key contribution of this article is the definition of a scalability index (SI), which quantifies whether a system scales in comparison to another system, a hypothetical system (e.g., a linear system), or the theoretically optimal system. The suggested SI generalizes different metrics from the literature, which are special cases of our SI. The primary target of our scalability framework is, however, the benchmarking of two systems, which does not require any reference system. The SI is demonstrated and evaluated for different use cases: (1) the performance of an IoT load balancer depending on the system load, (2) the availability of a communication system depending on the size and structure of the network, (3) scalability comparison of different location selection mechanisms in fog computing with respect to delays and energy consumption, and (4) comparison of time-sensitive networking (TSN) mechanisms in terms of efficiency and utilization. Finally, we discuss how to use and how not to use the SI and give recommendations and guidelines in practice. To the best of our knowledge, this is the first work which provides a general SI for the comparison and benchmarking of systems, which is the primary target of our scalability analysis. KW - communication networks KW - performance KW - availability KW - scalability Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-349403 VL - 11 ER - TY - RPRT A1 - Herrmann, Martin A1 - Rizk, Amr T1 - On Data Plane Multipath Scheduling for Connected Mobility Applications T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - Cooperative, connected and automated mobility (CCAM) systems depend on reliable communication to provide their service and, more crucially, to ensure the safety of users. One way to ensure the reliability of data transmission is to use multiple transmission technologies in combination with redundant flows. In this paper, we describe a system requiring multipath communication in the context of CCAM. To this end, we introduce a data plane-based scheduler that uses replication and integration modules to provide redundant and transparent multipath communication. We provide an analytical model for the full replication module of the system and give an overview of how and where the data-plane scheduler components can be realized. KW - multipath scheduling KW - connected mobility applications Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-353444 N1 - Ursprüngliche Version unter https://doi.org/10.25972/OPUS-32203 ET - aktualisierte Version ER - TY - RPRT A1 - Herrmann, Martin A1 - Rizk, Amr T1 - On Data Plane Multipath Scheduling for Connected Mobility Applications T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - Cooperative, connected and automated mobility (CCAM) systems depend on reliable communication to provide their service and, more crucially, to ensure the safety of users. One way to ensure the reliability of data transmission is to use multiple transmission technologies in combination with redundant flows. In this paper, we describe a system requiring multipath communication in the context of CCAM. To this end, we introduce a data plane-based scheduler that uses replication and integration modules to provide redundant and transparent multipath communication.
We provide an analytical model for the full replication module of the system and give an overview of how and where the data-plane scheduler components can be realized. KW - multipath scheduling KW - connected mobility applications Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322033 N1 - Aktualisierte Version unter https://doi.org/10.25972/OPUS-35344 ER - TY - JOUR A1 - Müller, Konstantin A1 - Leppich, Robert A1 - Geiß, Christian A1 - Borst, Vanessa A1 - Pelizari, Patrick Aravena A1 - Kounev, Samuel A1 - Taubenböck, Hannes T1 - Deep neural network regression for normalized digital surface model generation with Sentinel-2 imagery JF - IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing N2 - In recent years, normalized digital surface models (nDSMs) have steadily gained importance as a means of solving large-scale geographic problems. High-resolution surface models are valuable, as they can provide detailed information for a specific area. However, high-resolution measurements are time-consuming and costly. Only a few approaches exist to create high-resolution nDSMs for extensive areas. This article explores approaches to extract high-resolution nDSMs from low-resolution Sentinel-2 data, allowing us to derive large-scale models. We thereby utilize the advantages of Sentinel-2: it is open access, has global coverage, and provides steady updates through a high repetition rate. Several deep learning models are trained to bridge the gap in producing high-resolution surface maps from low-resolution input data. With U-Net as a base architecture, we extend the capabilities of our model by integrating tailored multiscale encoders with differently sized convolution kernels as well as conformed self-attention inside the skip-connection gates. Using pixelwise regression, our U-Net base models achieve a mean height error of approximately 2 m. Moreover, through our enhancements to the model architecture, we reduce the model error by more than 7%. KW - Deep learning KW - multiscale encoder KW - sentinel KW - surface model Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-349424 SN - 1939-1404 VL - 16 ER - TY - JOUR A1 - Liman, Leon A1 - May, Bernd A1 - Fette, Georg A1 - Krebs, Jonathan A1 - Puppe, Frank T1 - Using a clinical data warehouse to calculate and present key metrics for the radiology department: implementation and performance evaluation JF - JMIR Medical Informatics N2 - Background: Due to the importance of radiologic examinations, such as X-rays or computed tomography scans, for many clinical diagnoses, the optimal use of the radiology department is one of the primary goals of many hospitals. Objective: This study aims to calculate the key metrics of this use by creating a radiology data warehouse solution into which data from radiology information systems (RISs) can be imported and then queried using a query language as well as a graphical user interface (GUI). Methods: Using a simple configuration file, the developed system allowed for the processing of radiology data exported from any kind of RIS as a Microsoft Excel, comma-separated values (CSV), or JavaScript Object Notation (JSON) file. These data were then imported into a clinical data warehouse. Additional values based on the radiology data were calculated during this import process by implementing one of several provided interfaces.
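The multiscale-encoder idea from the Müller et al. record above can be sketched briefly in Python with PyTorch; the kernel sizes, channel counts, and four-band input below are illustrative assumptions rather than the authors' exact architecture.

    # Illustrative sketch (assumed details, not the authors' exact model):
    # an encoder block with parallel convolutions of different kernel sizes,
    # plus a pixelwise regression head producing one height value per pixel.
    import torch
    import torch.nn as nn

    class MultiscaleBlock(nn.Module):
        def __init__(self, in_ch, out_ch):
            super().__init__()
            # Parallel 3x3, 5x5, and 7x7 branches capture structures at
            # different spatial scales; padding preserves the spatial size.
            self.branches = nn.ModuleList(
                nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (3, 5, 7)
            )
            self.fuse = nn.Conv2d(3 * out_ch, out_ch, 1)  # 1x1 fusion conv

        def forward(self, x):
            feats = torch.cat([branch(x) for branch in self.branches], dim=1)
            return torch.relu(self.fuse(feats))

    head = nn.Conv2d(32, 1, kernel_size=1)   # regression head, no activation
    x = torch.randn(1, 4, 64, 64)            # hypothetical 4-band input patch
    print(head(MultiscaleBlock(4, 32)(x)).shape)  # torch.Size([1, 1, 64, 64])

Because the head performs unbounded pixelwise regression rather than classification, the output can be compared directly against reference heights, which is what allows a mean height error in meters to be reported.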
Afterward, the query language and GUI of the data warehouse were used to configure and calculate reports on these data. For the most common types of requested reports, a web interface was created to display the resulting numbers as graphics. Results: The tool was successfully tested with the data of four different German hospitals from 2018 to 2021, comprising a total of 1,436,111 examinations. User feedback was positive, since all queries could be answered whenever the available data were sufficient. The initial processing of the radiology data for use with the clinical data warehouse took between 7 minutes and 1 hour 11 minutes, depending on the amount of data provided by each hospital. Calculating three reports of different complexity on the data of each hospital took 1-3 seconds for reports with up to 200 individual calculations and up to 1.5 minutes for reports with up to 8,200 individual calculations. Conclusions: A system was developed whose main advantage is that it is generic with respect to the export formats of different RISs as well as to the configuration of queries for various reports. The queries could be configured easily using the GUI of the data warehouse, and their results could be exported to the standard formats Excel and CSV for further processing. KW - data warehouse KW - eHealth KW - hospital data KW - electronic health records KW - radiology KW - statistics and numerical data KW - medical records Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-349411 SN - 2291-9694 VL - 11 ER - TY - JOUR A1 - Seufert, Anika A1 - Poignée, Fabian A1 - Seufert, Michael A1 - Hoßfeld, Tobias T1 - Share and multiply: modeling communication and generated traffic in private WhatsApp groups JF - IEEE Access N2 - Group-based communication is a highly popular communication paradigm, which is especially prominent in mobile instant messaging (MIM) applications, such as WhatsApp. Chat groups in MIM applications facilitate the sharing of various types of messages (e.g., text, voice, image, video) among a large number of participants. As each message has to be transmitted to every other member of the group, the traffic multiplies accordingly, which has a massive impact on the underlying communication networks. However, most chat groups are private, and network operators cannot obtain deep insights into MIM communication via network measurements due to end-to-end encryption. Thus, the generation of traffic is not well understood, given that it depends on the size of the communication groups, the speed of communication, and the types of messages exchanged. In this work, we provide a large data set of 5,956 private WhatsApp chat histories, which contains over 76 million messages from more than 117,000 users. We describe and model the properties of chat groups and users, and the communication within these chat groups, which gives unprecedented insights into private MIM communication. In addition, we conduct exemplary measurements for the most popular message types, which enable the provided models to estimate the traffic over time in a chat group. KW - communication models KW - group-based communication KW - mobile instant messaging KW - mobile messaging application KW - private chat groups KW - WhatsApp Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-349430 VL - 11 ER - TY - JOUR A1 - Helmer, Philipp A1 - Rodemers, Philipp A1 - Hottenrott, Sebastian A1 - Leppich, Robert A1 - Helwich, Maja A1 - Pryss, Rüdiger A1 - Kranke, Peter A1 - Meybohm, Patrick A1 - Winkler, Bernd E.
A1 - Sammeth, Michael T1 - Evaluating blood oxygen saturation measurements by popular fitness trackers in postoperative patients: a prospective clinical trial JF - iScience N2 - Blood oxygen saturation is an important clinical parameter, especially in postoperative hospitalized patients. In clinical practice, it is monitored by arterial blood gas (ABG) analysis and/or pulse oximetry, neither of which is suitable for long-term continuous monitoring of patients during the entire hospital stay or beyond. Technological advances developed recently for consumer-grade fitness trackers could, at least in theory, help to fill this gap, but benchmarks on the applicability and accuracy of these technologies in hospitalized patients are currently lacking. We therefore conducted a prospective clinical trial with 201 patients at the postanaesthesia care unit under controlled settings, comparing in total more than 1,000 blood oxygen saturation measurements by fitness trackers of three brands with the ABG gold standard and with pulse oximetry. Our results suggest that, despite an overall still tolerable measuring accuracy, comparatively high dropout rates severely limit the possibilities of employing fitness trackers, particularly during the immediate postoperative period of hospitalized patients. Highlights: • The accuracy of O2 measurements by fitness trackers is tolerable (RMSE ≲ 4%) • Correlation with arterial blood gas measurements is fair to moderate (PCC = [0.46; 0.64]) • Dropout rates of fitness trackers during O2 monitoring are high (∼1/3 of values missing) • Fitness trackers cannot be recommended for O2 measurement during critical monitoring KW - multidisciplinary KW - health sciences KW - clinical measurement in health technology KW - bioelectronics KW - fitness trackers Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-349913 SN - 2589-0042 VL - 26 IS - 11 ER - TY - THES A1 - Dombrovski, Veaceslav T1 - Software Framework to Support Operations of Nanosatellite Formations T1 - Software Framework für die Unterstützung des Betriebs von Nanosatelliten-Formationen N2 - Since the first CubeSat launch in 2003, the hardware and software complexity of nanosatellites has increased continuously. To keep up with this growing mission complexity while retaining the primary advantages of a CubeSat mission, a new approach for the overall space and ground software architecture and protocol configuration is elaborated in this work. The aim of this thesis is to propose a uniform software and protocol architecture as a basis for the development, testing, simulation, and operation of multiple pico-/nanosatellites based on ultra-low-power components. In contrast to single-CubeSat missions, current and upcoming nanosatellite formation missions require faster and more straightforward development, pre-flight testing, and calibration procedures, as well as the simultaneous operation of multiple satellites. A dynamic and decentralized Compass mission network consisting of uniformly accessible nodes was established in multiple active CubeSat missions. The Compass middleware was developed to unify the communication and functional interfaces between all involved mission-related software and hardware components. All systems can access each other via dynamic routes to perform service-based machine-to-machine (M2M) communication. With the proposed model-based communication approach, all states, abilities, and functionalities of a system are accessed in a uniform way.
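A minimal Python sketch of what such model-based, uniform access to a node's states and functions could look like follows; the class, field names, and values are hypothetical illustrations, not the Compass middleware's actual interface.

    # Illustrative sketch (hypothetical interface, not the Compass
    # middleware): every mission node exposes its states and functions
    # through the same introspectable model, so a satellite and a ground
    # station are accessed in a uniform way.

    class NodeModel:
        """Uniform model of one mission node's states and functions."""

        def __init__(self, name, states, functions):
            self.name = name
            self.states = states          # state name -> current value
            self.functions = functions    # function name -> callable

        def get(self, state):
            return self.states[state]

        def invoke(self, function, *args):
            return self.functions[function](*args)

    # Two very different systems, accessed through the same interface.
    satellite = NodeModel("sat-1",
                          {"battery_v": 7.9, "mode": "nominal"},
                          {"set_mode": lambda m: f"mode set to {m}"})
    ground = NodeModel("groundstation",
                       {"elevation_deg": 34.0},
                       {"track": lambda target: f"tracking {target}"})

    for node in (satellite, ground):
        print(node.name, node.states)
    print(satellite.invoke("set_mode", "safe"))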
The Tiny scripting language was designed to allow dynamic code execution on ultra-low-power components as a basis for constraint-based in-orbit scheduling and experiment execution. The implemented Compass Operations front-end provides far-reaching monitoring and control capabilities for all ground and space systems. Its integrated constraint-based operations task scheduler allows the recording of complex satellite operations, which are conducted automatically during overpasses. The outcome of this thesis became an enabling technology for the UWE-3, UWE-4, and NetSat CubeSat missions. N2 - Seit dem Launch des ersten CubeSats im Jahr 2003, hat die Komplexität der Nanosatelliten stetig zugenommen. Um mit den wachsenden Anforderungen Schritt zu halten und gleichzeitig nicht auf die Hauptvorteile einer CubeSat Mission zu verzichten, wird eine einheitliche Protokoll- und Softwarearchitektur für den gesamten Weltraum- und Bodensegment vorgeschlagen. Diese Arbeit schlägt eine einheitliche Software- und Protokoll-Architektur vor als Basis für Softwareentwicklung, Tests und Betrieb von mehreren Pico-/Nanosatelliten. Im Gegensatz zu Missionen mit nur einem CubeSat, erfordern künftige Nanosatelliten-Formationen eine schnellere und einfachere Entwicklung, Vorflug-Tests, Kalibrierungsvorgänge sowie die Möglichkeit mehrere Satelliten gleichzeitig zu betreiben. Ein dynamisches und dezentrales Compass Missionsnetzwerk wurde in mehreren CubeSat-Missionen realisiert, bestehend aus einheitlich zugänglichen Knoten. Die Compass-Middleware wurde entwickelt, um sowohl die Kommunikation als auch funktionale Schnittstellen zwischen allen beteiligten Software und Hardware Systemen in einer Mission zu vereinheitlichen: Rechner des Bedienpersonals, Bodenstationen, Mission-Server, Testeinrichtungen, Simulationen und Subsysteme aller Satelliten. Mit dem Ansatz der modellbasierten Kommunikation wird auf alle Zustände und Funktionen eines Systems einheitlich zugegriffen. Die entwickelte Tiny Skriptsprache ermöglicht die Ausführung von dynamischem Code auf energiesparenden Systemen, um so in-orbit Scheduler zu realisieren. Das Compass Operations Front-End bietet zahlreiche grafische Komponenten, mit denen alle Weltraum- und Bodensegment-Systeme einheitlich überwacht, kontrolliert und bedient werden. Der integrierte Betrieb-Scheduler ermöglicht die Aufzeichnung von komplexen Satellitenbetrieb-Aufgaben, die beim Überflug automatisch ausgeführt werden. Die Ergebnisse dieser Arbeit wurden zur Enabling-Technologie für UWE-3, UWE-4 und NetSat Missionen. T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 23 KW - Kleinsatellit KW - Softwaresystem KW - Kommunikationsprotokoll KW - Betriebssystem KW - Compiler KW - Satellite formation KW - Distributed computing KW - Compass framework KW - Model based mission realization KW - Model based communication Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-249314 SN - 978-3-945459-38-6 ER -
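A rough Python sketch of constraint-based operations scheduling in the spirit of the Dombrovski record above closes this section; the task structure, constraint predicates, and telemetry values are hypothetical and stand in for the Compass scheduler's actual mechanisms.

    # Illustrative sketch (hypothetical structures, not the Compass
    # implementation): recorded operations tasks run automatically during
    # an overpass, but only while their constraints hold.

    def run_overpass(tasks, state):
        """Execute each recorded task whose constraint accepts the state."""
        for name, constraint, action in tasks:
            if constraint(state):
                action(state)
            else:
                print(f"skipped {name}: constraint not met")

    tasks = [
        ("download_telemetry",
         lambda s: s["elevation_deg"] > 10,   # needs usable pass geometry
         lambda s: print("downloading telemetry")),
        ("run_experiment",
         lambda s: s["battery_v"] > 7.4,      # needs sufficient power margin
         lambda s: print("running experiment")),
    ]

    # Hypothetical telemetry at the start of an overpass.
    run_overpass(tasks, {"elevation_deg": 23.0, "battery_v": 7.2})

Recording tasks together with explicit constraints, rather than as fixed command sequences, is what lets operations proceed unattended during overpasses while still respecting power and visibility limits.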