004 Datenverarbeitung; Informatik
Has Fulltext
- yes (32)
Year of publication
- 2023 (32)
Document Type
- Working Paper (19)
- Journal article (10)
- Doctoral Thesis (3)
Language
- English (32)
Keywords
- P4 (3)
- 5G (2)
- Deep learning (2)
- SDN (2)
- connected mobility applications (2)
- multipath scheduling (2)
- network calculus (2)
- 3D Reconstruction (1)
- 3D-Rekonstruktion (1)
- 4D-GIS (1)
- 5G core network (1)
- 6G (1)
- ATSSS (1)
- Add-on-Miss (1)
- Bildverarbeitung (1)
- Computer Vision (1)
- Containerization (1)
- Deep Learning (1)
- Dijkstra’s algorithm (1)
- Domänenspezifische Sprache (1)
- Dreidimensionale Rekonstruktion (1)
- Edge-MEC-Cloud (1)
- FIFO caching strategies (1)
- IT security (1)
- Internet of Things (1)
- JCAS (1)
- Kathará (1)
- Klima (1)
- LFU (1)
- LRU (1)
- Linux (1)
- MP-DCCP (1)
- Machine Learning (1)
- Maschinelles Lernen (1)
- Maschinelles Sehen (1)
- Medical Image Analysis (1)
- Modell (1)
- Network Emulator (1)
- Neuronales Netz (1)
- Object Detection (1)
- P4-INT (1)
- PROLOG <Programmiersprache> (1)
- Punktwolke (1)
- Selbstkalibrierung (1)
- Self-calibration (1)
- Sensing-aaS (1)
- Structure-from-Motion (1)
- TTL validation of data consistency (1)
- Underwater Mapping (1)
- Underwater Scanning (1)
- Visualized Kathará (1)
- WhatsApp (1)
- anthropomorphism (1)
- availability (1)
- background knowledge (1)
- baseline detection (1)
- camera orientation (1)
- climate (1)
- communication models (1)
- communication networks (1)
- computer performance evaluation (1)
- content-based image retrieval (1)
- data warehouse (1)
- definite clause grammars (1)
- delay constrained (1)
- disjoint multi-paths (1)
- eHealth (1)
- electronic health records (1)
- emulation (1)
- energy efficiency (1)
- feature matching (1)
- federated learning (1)
- fog computing (1)
- fully convolutional neural networks (1)
- global IPX network (1)
- group-based communication (1)
- hardware-in-the-loop simulation (1)
- hardware-in-the-loop streaming system (1)
- historical document analysis (1)
- historical images (1)
- hit ratio analysis and simulation (1)
- hospital data (1)
- human–computer interaction (1)
- informal education (1)
- information extraction (1)
- intelligent voice assistant (1)
- key-insight extraction (1)
- knowledge representation (1)
- layout recognition (1)
- least cost (1)
- logic programming (1)
- long-term analysis (1)
- media analysis (1)
- medical records (1)
- misconceptions (1)
- mobile instant messaging (1)
- mobile messaging application (1)
- model output statistics (1)
- multipath (1)
- multipath packet scheduling (1)
- multiscale encoder (1)
- neural networks (1)
- non-terrestrial networks (1)
- ontology (1)
- orchestration (1)
- packet reception method (1)
- performance (1)
- performance monitoring (1)
- private chat groups (1)
- radiology (1)
- ransomware (1)
- satellite communication (1)
- scalability (1)
- scalability evaluation (1)
- sentinel (1)
- service-curve estimation (1)
- shortest path routing (1)
- signaling traffic (1)
- smart speaker (1)
- social interaction (1)
- social relationship (1)
- social role (1)
- state management (1)
- statistics and numerical data (1)
- surface model (1)
- sustainability (1)
- table extraction (1)
- table understanding (1)
- text line detection (1)
- timestamping method (1)
Institute
- Institut für Informatik (32)
EU-Project number / Contract (GA) number
- 101069547 (1)
There is great interest in affordable, precise and reliable metrology underwater:
Archaeologists want to document artifacts in situ with high detail.
In marine research, biologists require the tools to monitor coral growth and geologists need recordings to model sediment transport.
Furthermore, for offshore construction, maintenance, and inspection projects, millimeter-accurate measurements of defects and offshore structures are essential.
While the process of digitizing individual objects and complete sites on land is well understood and standard methods, such as Structure from Motion or terrestrial laser scanning, are regularly applied, precise underwater surveying with high resolution is still a complex and difficult task.
Applying optical scanning techniques in water is challenging due to reduced visibility caused by turbidity and light absorption.
However, optical underwater scanners provide significant advantages in terms of achievable resolution and accuracy compared to acoustic systems.
This thesis proposes an underwater laser scanning system and the algorithms for creating dense and accurate 3D scans in water.
It is based on laser triangulation and the main optical components are an underwater camera and a cross-line laser projector.
The prototype is configured with a motorized yaw axis for capturing scans from a tripod.
Alternatively, it is mounted to a moving platform for mobile mapping.
The main focus lies on the refractive calibration of the underwater camera and laser projector, the image processing, and the 3D reconstruction.
For highest accuracy, the refraction at the individual media interfaces must be taken into account.
This is addressed by an optimization-based calibration framework using a physical-geometric camera model derived from an analytical formulation of a ray-tracing projection model.
In addition to scanning underwater structures, this work presents the 3D acquisition of semi-submerged structures and the correction of refraction effects.
As in-situ calibration in water is complex and time-consuming, the challenge of transferring an in-air scanner calibration to water without re-calibration is investigated, as well as self-calibration techniques for structured light.
The system was successfully deployed in various configurations for both static scanning and mobile mapping.
An evaluation of the calibration and 3D reconstruction using reference objects and a comparison of free-form surfaces in clear water demonstrate the high accuracy potential in the range of one millimeter to less than one centimeter, depending on the measurement distance.
Mobile underwater mapping and motion compensation based on visual-inertial odometry are demonstrated using a new optical underwater scanner based on fringe projection.
Continuous registration of individual scans allows the acquisition of 3D models from an underwater vehicle.
RGB images captured in parallel are used to create 3D point clouds of underwater scenes in full color.
3D maps are useful to the operator during the remote control of underwater vehicles and provide the building blocks to enable offshore inspection and surveying tasks.
The advancing automation of the measurement technology will allow non-experts to use it, significantly reduce acquisition time, and increase accuracy, making underwater metrology more cost-effective.
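The refractive calibration described above rests on tracing camera rays through the air–glass–water interfaces with Snell's law. The following Python sketch illustrates that core geometric step for a flat port; the refractive indices, interface positions, and function names are illustrative assumptions, not the thesis' calibrated model.

```python
import math

def refract(d, n, n1, n2):
    """Snell's law in vector form: refract unit direction d at a surface
    with unit normal n (pointing toward the incident side); n1, n2 are
    the refractive indices of the incident / transmitted media."""
    cos_i = -(d[0]*n[0] + d[1]*n[1] + d[2]*n[2])
    r = n1 / n2
    k = 1.0 - r*r*(1.0 - cos_i*cos_i)
    if k < 0:
        return None  # total internal reflection
    c = r*cos_i - math.sqrt(k)
    return tuple(r*d[i] + c*n[i] for i in range(3))

def trace_flat_port(d, z_front=0.05, thickness=0.01,
                    n_air=1.0, n_glass=1.49, n_water=1.333):
    """Trace a camera ray (origin (0,0,0), unit direction d, viewing
    along +z) through a flat glass port into water; returns the exit
    point on the back of the port and the ray direction in water."""
    normal = (0.0, 0.0, -1.0)                     # port normal, toward camera
    d_glass = refract(d, normal, n_air, n_glass)  # air -> glass
    t = z_front / d[2]
    p = (t*d[0], t*d[1], z_front)                 # hit point on front face
    t2 = thickness / d_glass[2]
    p2 = (p[0] + t2*d_glass[0], p[1] + t2*d_glass[1], z_front + thickness)
    d_water = refract(d_glass, normal, n_glass, n_water)  # glass -> water
    return p2, d_water
```

Because water is optically denser than air, the ray bends toward the port normal, so a naive in-air pinhole model systematically overestimates lateral offsets; a refractive calibration compensates for exactly this effect.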
Utilizing multiple access networks such as 5G, 4G, and Wi-Fi simultaneously can lead to increased robustness, resiliency, and capacity for mobile users. However, transparently implementing packet distribution over multiple paths within the core of the network faces multiple challenges, including scalability to a large number of customers, low-latency requirements, and high-capacity packet processing. In this paper, we offload congestion-aware multipath packet scheduling to a smartNIC. However, such hardware acceleration faces multiple challenges due to programming language and platform limitations. We implement multipath schedulers of different complexity in P4 in order to cope with dynamically changing path capacities. Using testbed measurements, we show that our CMon scheduler, which monitors path congestion in the data plane and dynamically adjusts scheduling weights for the different paths based on path state information, can process more than 3.5 Mpps with 25 μs latency.
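The paper's schedulers are implemented in P4 on a smartNIC; purely as a language-agnostic illustration, the Python sketch below mimics the CMon idea of monitoring per-path congestion in the forwarding logic and weighting packet dispatch accordingly. The class name, the EWMA congestion signal, and the weight function are illustrative assumptions, not the paper's design.

```python
import random

class CMonLikeScheduler:
    """Toy congestion-aware weighted scheduler: a path's dispatch
    weight shrinks as its monitored congestion (e.g., queue delay)
    grows, so traffic shifts toward less congested paths."""
    def __init__(self, paths):
        self.congestion = {p: 0.0 for p in paths}

    def observe(self, path, delay):
        # EWMA of per-path delay as a simple congestion signal
        self.congestion[path] = 0.9 * self.congestion[path] + 0.1 * delay

    def pick(self):
        # weight ~ 1/(1 + congestion); randomized weighted choice
        weights = {p: 1.0 / (1.0 + c) for p, c in self.congestion.items()}
        x = random.random() * sum(weights.values())
        for p, w in weights.items():
            x -= w
            if x <= 0:
                return p
        return p  # guard against float rounding
```

A real data-plane implementation must do this with the constrained arithmetic and state (registers, no floating point) available in P4, which is part of the challenge the paper addresses.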
Service orchestration requires enormous attention and remains a challenge today. Virtualization, of course, provides a base level of abstraction that makes services deployable on many infrastructures. With container virtualization, the trend of migrating applications to the micro-service level so that they can run in fog and edge computing environments rapidly increases manageability and maintenance effort. Similarly, network virtualization adds effort to calibrate IP flows for Software-Defined Networks and eventually route them by means of Network Function Virtualization. Nevertheless, there are concepts like MAPE-K to support micro-service distribution in next-generation cloud and network environments. We want to explore how service distribution can be improved by adopting machine learning concepts that respond to infrastructure or service changes. To this end, we show how federated machine learning can be integrated into a cloud-to-fog continuum without burdening individual nodes.
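The federated learning integration is described above only at the concept level. As one concrete illustration of the principle, federated averaging (FedAvg) aggregates locally trained model parameters weighted by local dataset size, so raw data never leaves a fog node. This is a minimal sketch of the aggregation step, not the paper's implementation:

```python
def fed_avg(node_params, node_sizes):
    """Federated averaging: combine per-node parameter vectors,
    weighting each node by the size of its local dataset. Only
    parameters travel to the aggregator, never the raw data."""
    total = sum(node_sizes)
    dim = len(node_params[0])
    return [sum(params[i] * size
                for params, size in zip(node_params, node_sizes)) / total
            for i in range(dim)]
```

In a cloud-to-fog continuum, the aggregator can itself run on any node, which keeps the training load distributed instead of burdening a single machine.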
Digitization and transcription of historic documents offer new research opportunities for humanists and are the topics of many edition projects. However, manual work is still required for the main phases of layout recognition and the subsequent optical character recognition (OCR) of early printed documents. This paper describes and evaluates how deep learning approaches recognize text lines and how they can be extended to layout recognition using background knowledge. The evaluation was performed on five corpora of early prints from the 15th and 16th centuries, representing a variety of layout features. While main text with standard layouts could be recognized in the correct reading order with precision and recall of up to 99.9%, even complex layouts were recognized at rates as high as 90% by using background knowledge, whose full potential was revealed when many pages of the same source were transcribed.
How to Model and Predict the Scalability of a Hardware-In-The-Loop Test Bench for Data Re-Injection?
(2023)
This paper describes a novel application of an empirical network calculus model based on measurements of a hardware-in-the-loop (HIL) test system. The aim is to predict the performance of a HIL test bench for open-loop re-injection in the context of scalability. HIL test benches are distributed computer systems including software, hardware, and networking devices. They are used to validate complex technical systems, but have not yet been the system under study themselves. Our approach is to use measurements from the HIL system to create an empirical model for arrival and service curves. We predict the performance and design the previously unknown parameters of the HIL simulator with network calculus (NC), namely the buffer sizes and the minimum pre-buffer time needed for the playback buffer. We furthermore show that it is possible to estimate the CPU load from arrival and service curves based on the utilization theorem, and hence to estimate the scalability of the HIL system with respect to the number of sensor streams.
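The deterministic NC bounds behind such buffer-sizing predictions are classical: for a token-bucket arrival curve α(t) = b + r·t and a rate-latency service curve β(t) = R·(t − T)⁺, the worst-case delay is T + b/R and the worst-case backlog (hence the minimum safe buffer size) is b + r·T. A minimal sketch with illustrative numbers, not the paper's measured parameters:

```python
def nc_bounds(b, r, R, T):
    """Network-calculus bounds for token-bucket arrivals (burst b,
    sustained rate r) crossing a rate-latency server (rate R,
    latency T); requires stability, i.e. r <= R."""
    assert r <= R, "unstable: arrival rate exceeds service rate"
    delay_bound = T + b / R      # worst-case per-packet delay
    backlog_bound = b + r * T    # worst-case buffer occupancy
    return delay_bound, backlog_bound

# illustrative sensor stream: 1200-byte bursts at 1 MB/s through a
# 2 MB/s server with 1 ms latency
delay, backlog = nc_bounds(b=1200, r=1e6, R=2e6, T=1e-3)
```

Sizing the playback buffer at or above the backlog bound guarantees loss-free re-injection for any arrival pattern conforming to the arrival curve.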
This paper presents a novel concept to extend state-of-the-art buffer monitoring with additional measures to estimate service curves. The online algorithm for service-curve estimation replaces state-of-the-art timestamp logging, as we expect it to overcome the main disadvantages of generating a huge amount of data and consuming considerable CPU resources to store the data to a file during operation. We verify the accuracy of the online algorithm offline with timestamp data and compare the derived bounds to the measured delay and backlog. We also provide a proof of concept of the online algorithm, implement it in LabVIEW, and compare its performance to timestamp logging in terms of CPU load and log-file size. However, the implementation is still a work in progress.
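One standard way to estimate a service curve from traffic measurements (an illustrative textbook approach, not necessarily the paper's online algorithm) is min-plus deconvolution of the cumulative departure trace by the cumulative arrival trace, which yields the largest service curve consistent with the observations:

```python
def estimate_service_curve(A, D):
    """Empirical service-curve estimate via min-plus deconvolution:
    beta(t) = max over s of D(s + t) - A(s), where A and D are
    cumulative arrival and departure counts sampled once per tick."""
    n = len(A)
    return [max(D[s + t] - A[s] for s in range(n - t)) for t in range(n)]
```

For a trace of a rate-2 server that starts serving one tick late, the estimate recovers the rate-latency shape (R = 2, T = 1), from which delay and backlog bounds follow as in classical NC.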
Knowledge about ransomware is important for protecting sensitive data and for participating in public debates about suitable regulation regarding its security. However, as of now, this topic has received little to no attention in most school curricula. It is therefore desirable to analyze what citizens can learn about the topic outside of formal education, e.g., from news articles. This analysis is relevant both for studying the public discourse about ransomware and for identifying which aspects of the topic should be covered in the limited time available in formal education. Thus, this paper was motivated by both educational and media research. The central goal is to explore how the media reports on this topic and, additionally, to identify potential misconceptions that could stem from this reporting. To do so, we conducted an exploratory case study of the reporting in 109 media articles on a high-impact ransomware event: the shutdown of the Colonial Pipeline (located in the east of the USA). We analyzed how the articles introduced central terminology, what details were provided, what details were not, and what (mis-)conceptions readers might take away from them. Our results show that the articles' introduction of security terminology and technical concepts is insufficient for a complete understanding of the incident. Most importantly, the articles may lead to four misconceptions about ransomware that are likely to produce misleading conclusions about the responsibility for the incident and about possible political and technical options to prevent such attacks in the future.
Understanding the Performance of Different Packet Reception and Timestamping Methods in Linux
(2023)
This document briefly presents some well-known packet reception techniques for network packets in Linux systems and compares their performance when measuring packet timestamps with respect to throughput and accuracy. Both software and hardware timestamps are compared, and various parameters are examined, including frame size, link speed, network interface card, and CPU load. The results indicate that hardware timestamping offers significantly better accuracy with no downsides, and that packet reception techniques that avoid system calls offer superior measurement throughput.
Packets sent over a network can either get lost or reach their destination. Protocols like TCP try to solve this problem by resending the lost packets. However, retransmissions consume a lot of time and are cumbersome for the transmission of critical data. Multipath solutions are a common way to address this reliability issue and are available on almost every layer of the ISO/OSI model. We propose a solution based on a P4 network that duplicates packets in order to send them to their destination via multiple routes. The last network hop ensures that only a single copy of the traffic is forwarded onward to its destination by adopting a concept similar to Bloom filters. In addition, if fast delivery is requested, we provide a P4 prototype that randomly forwards the packets over different transmission paths. For reproducibility, we implement our approach in a container-based network emulation system called Kathará.
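The last-hop deduplication idea can be sketched outside P4 as well. The toy Python Bloom filter below forwards only the first copy of each packet identifier; the hash count, bit-array size, and class name are illustrative choices, and a real P4 data plane would realize the same structure with register arrays and hash externs.

```python
import hashlib

class BloomDedup:
    """Toy Bloom-filter deduplicator, mimicking a last-hop switch that
    forwards only the first copy of a duplicated packet. False
    positives (wrongly suppressing a first copy) are possible but
    rare; duplicates are never forwarded twice."""
    def __init__(self, m=1024, k=3):
        self.bits = [False] * m
        self.m, self.k = m, k

    def _indices(self, pkt_id):
        # derive k bit positions from k salted hashes of the packet id
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{pkt_id}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def forward(self, pkt_id):
        idx = list(self._indices(pkt_id))
        seen = all(self.bits[i] for i in idx)
        for i in idx:
            self.bits[i] = True
        return not seen  # forward only the first observed copy
```

The trade-off is inherent to Bloom filters: no per-packet state is stored, so memory stays constant regardless of traffic volume, at the cost of a tunable false-positive rate.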
In network research, reproducibility of experiments is not always easy to achieve. Infrastructures are cumbersome to set up or are unavailable due to vendor-specific devices. Emulators try to overcome these issues to some extent and are available in different service models. Unfortunately, using emulators requires time-consuming effort and a deep understanding of their functionality. First, we analyze to what extent currently available open-source emulators support network configurations and how user-friendly they are. With these insights, we describe how an easy-to-use emulator can be implemented and run as a Network Emulator as a Service (NEaaS). Virtualization plays a major role in deploying such a NEaaS based on Kathará.