004 Data Processing; Computer Science
The ongoing digitization of historical photographs in archives allows investigating the quality, quantity, and distribution of these images. However, the exact interior and exterior camera orientations of these photographs are usually lost during the digitization process. The proposed method uses content-based image retrieval (CBIR) in combination with metadata information to filter exterior images of single buildings. The retrieved photographs are automatically processed in an adapted structure-from-motion (SfM) pipeline to determine the camera parameters. In an interactive georeferencing process, the calculated camera positions are transferred into a global coordinate system. As all image and camera data are efficiently stored in the proposed 4D database, they can be conveniently accessed afterward to georeference newly digitized images using photogrammetric triangulation and spatial resection. The results show that CBIR and the subsequent SfM are robust methods for various kinds of buildings and different quantities of data. The absolute accuracy of the camera positions after georeferencing lies in the range of a few meters, likely introduced by the inaccurate LOD2 models used for the transformation. The proposed photogrammetric method, the database structure, and the 4D visualization interface enable adding historical urban photographs and 3D models from other locations.
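The georeferencing step described above amounts to estimating a seven-parameter similarity transform (scale, rotation, translation) between the local SfM frame and the global coordinate system. A minimal sketch of such an estimate using the Umeyama method in NumPy (an illustration of the general technique, not the authors' implementation):

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R, translation t with dst = s * R @ src + t
    (Umeyama's closed-form solution from point correspondences)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)          # cross-covariance of centered sets
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Applied to the camera positions, `src` would hold the SfM coordinates and `dst` the global coordinates of matched reference points (e.g., from the LOD2 models).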
An important but very time-consuming part of the research process is the literature review. An already large and still growing body of publications, as well as a steadily increasing publication rate, continues to worsen the situation. Consequently, automating this task as far as possible is desirable. Experimental results of systems are key insights of high importance during literature review and are usually represented in the form of tables. Our pipeline KIETA exploits these tables to contribute to the endeavor of automation by extracting them and their contained knowledge from scientific publications. The pipeline is split into multiple steps to guarantee modularity, analyzability, and agnosticism regarding the specific scientific domain up until the knowledge extraction step, which is based upon an ontology. Additionally, a dataset of corresponding articles has been manually annotated with information regarding table and knowledge extraction. Experiments show promising results that signal the possibility of an automated system, while also indicating the limits of extracting knowledge from tables without any context.
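The ontology-based final step, mapping extracted table cells onto typed facts, can be illustrated with a toy sketch. All names here (`ONTOLOGY`, `table_to_facts`, the column headers) are hypothetical and not KIETA's actual API:

```python
# Hypothetical sketch: map a header-row table onto ontology-typed facts.
ONTOLOGY = {          # maps raw column headers to canonical ontology concepts
    "model": "System",
    "acc.": "Accuracy",
    "f1": "F1Score",
}

def table_to_facts(table):
    """table: list of rows, first row = headers.
    Returns (subject, predicate, value) triples for recognized columns."""
    headers = [ONTOLOGY.get(h.strip().lower()) for h in table[0]]
    facts = []
    for row in table[1:]:
        subject = row[0]                     # first column names the system
        for concept, cell in zip(headers[1:], row[1:]):
            if concept is not None:          # skip columns unknown to the ontology
                facts.append((subject, concept, cell))
    return facts
```

Unmapped headers are simply dropped, mirroring the abstract's point that extraction stays domain-agnostic until the ontology is consulted.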
Climate models are the tool of choice for scientists researching climate change. Like all models, they suffer from errors, particularly systematic and location-specific representation errors. One way to reduce these errors is model output statistics (MOS), where the model output is fitted to observational data with machine learning. In this work, we assess the use of convolutional Deep Learning climate MOS approaches and present the ConvMOS architecture, which is specifically designed based on the observation that there are systematic and location-specific errors in the precipitation estimates of climate models. We apply ConvMOS models to the simulated precipitation of the regional climate model REMO, showing that a combination of per-location model parameters for reducing location-specific errors and global model parameters for reducing systematic errors is indeed beneficial for MOS performance. We find that ConvMOS models can reduce errors considerably and perform significantly better than three commonly used MOS approaches and plain ResNet and U-Net models in most cases. Our results show that non-linear MOS models underestimate the number of extreme precipitation events, which we alleviate by training models specialized towards extreme precipitation events with the imbalanced regression method DenseLoss. While we consider climate MOS, we argue that aspects of ConvMOS may also be beneficial in other domains with geospatial data, such as air pollution modeling or weather forecasts.
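The core idea of per-location parameters can be sketched without the convolutional architecture: fit a separate linear correction at every grid cell. This is an illustrative linear analogue of MOS, not the ConvMOS network itself:

```python
import numpy as np

def fit_mos(sim, obs):
    """Per-location linear MOS: obs = a * sim + b, fitted independently at
    every grid cell. sim, obs have shape (time, lat, lon)."""
    sim_m, obs_m = sim.mean(0), obs.mean(0)
    cov = ((sim - sim_m) * (obs - obs_m)).mean(0)
    var = ((sim - sim_m) ** 2).mean(0)
    a = cov / var            # local scale correction
    b = obs_m - a * sim_m    # local bias correction
    return a, b

def apply_mos(sim, a, b):
    """Correct simulated fields with the fitted per-cell parameters."""
    return a * sim + b
```

A global correction, as in ConvMOS, would add parameters shared across all cells; here each cell is corrected independently.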
There is great interest in affordable, precise and reliable metrology underwater:
Archaeologists want to document artifacts in situ with high detail.
In marine research, biologists require tools to monitor coral growth, and geologists need recordings to model sediment transport.
Furthermore, for offshore construction projects, maintenance, and inspection, millimeter-accurate measurements of defects and offshore structures are essential.
While the process of digitizing individual objects and complete sites on land is well understood, and standard methods such as Structure from Motion or terrestrial laser scanning are regularly applied, precise underwater surveying with high resolution is still a complex and difficult task.
Applying optical scanning techniques in water is challenging due to reduced visibility caused by turbidity and light absorption.
However, optical underwater scanners provide significant advantages in terms of achievable resolution and accuracy compared to acoustic systems.
This thesis proposes an underwater laser scanning system and the algorithms for creating dense and accurate 3D scans in water.
It is based on laser triangulation and the main optical components are an underwater camera and a cross-line laser projector.
The prototype is configured with a motorized yaw axis for capturing scans from a tripod.
Alternatively, it is mounted to a moving platform for mobile mapping.
The main focus lies on the refractive calibration of the underwater camera and laser projector, the image processing and 3D reconstruction.
For the highest accuracy, the refraction at the individual media interfaces must be taken into account.
This is addressed by an optimization-based calibration framework using a physical-geometric camera model derived from an analytical formulation of a ray-tracing projection model.
In addition to scanning underwater structures, this work presents the 3D acquisition of semi-submerged structures and the correction of refraction effects.
As in-situ calibration in water is complex and time-consuming, the challenge of transferring an in-air scanner calibration to water without re-calibration is investigated, as well as self-calibration techniques for structured light.
The system was successfully deployed in various configurations for both static scanning and mobile mapping.
An evaluation of the calibration and 3D reconstruction using reference objects and a comparison of free-form surfaces in clear water demonstrate the high accuracy potential in the range of one millimeter to less than one centimeter, depending on the measurement distance.
Mobile underwater mapping and motion compensation based on visual-inertial odometry is demonstrated using a new optical underwater scanner based on fringe projection.
Continuous registration of individual scans allows the acquisition of 3D models from an underwater vehicle.
RGB images captured in parallel are used to create 3D point clouds of underwater scenes in full color.
3D maps are useful to the operator during the remote control of underwater vehicles and provide the building blocks to enable offshore inspection and surveying tasks.
The advancing automation of the measurement technology will allow non-experts to use it, significantly reduce acquisition time, and increase accuracy, making underwater metrology more cost-effective.
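The refractive projection model described above hinges on Snell's law at each media interface. A minimal sketch of refracting a ray at a flat interface (with assumed refractive indices; this is the underlying physics, not the thesis's calibration framework):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n (pointing
    toward the incoming ray), passing from index n1 to n2 (Snell's law)."""
    d, n = np.asarray(d, float), np.asarray(n, float)
    cos_i = -d @ n                       # cosine of the incidence angle
    r = n1 / n2
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None                      # total internal reflection
    return r * d + (r * cos_i - np.sqrt(k)) * n
```

Chaining this function across the air, housing-glass, and water interfaces gives the kind of ray-tracing projection the calibration framework optimizes over.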
Service orchestration requires enormous attention and is a struggle nowadays. Of course, virtualization provides a base level of abstraction that makes services deployable on many infrastructures. With container virtualization, the trend of migrating applications to the micro-service level so that they can run in Fog and Edge Computing environments rapidly increases manageability and maintenance efforts. Similarly, network virtualization adds the effort of calibrating IP flows for Software-Defined Networks and eventually routing them by means of Network Function Virtualization. Nevertheless, there are concepts such as MAPE-K to support micro-service distribution in next-generation cloud and network environments. We want to explore how service distribution can be improved by adopting machine learning concepts for infrastructure or service changes. Therefore, we show how federated machine learning is integrated into a cloud-to-fog continuum without burdening single nodes.
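Federated averaging, the standard aggregation step behind federated learning, can be sketched as a weighted average of locally trained parameters. This is a generic illustration of the concept, not the paper's integration:

```python
import numpy as np

def federated_average(local_weights, sample_counts):
    """FedAvg: combine per-node model parameters, weighted by local data size,
    so no fog/edge node has to ship its raw data to a central cloud."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(local_weights, sample_counts))
```

Each node trains on its own data and only the parameter vectors travel across the cloud-to-fog continuum, which keeps the load on single nodes small.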
In network research, reproducibility of experiments is not always easy to achieve. Infrastructures are cumbersome to set up or are not available due to vendor-specific devices. Emulators try to overcome these issues to a certain extent and are available in different service models. Unfortunately, using emulators requires time-consuming effort and a deep understanding of their functionality. First, we analyze to what extent currently available open-source emulators support network configurations and how user-friendly they are. With these insights, we describe how an easy-to-use emulator is implemented and may run as a Network Emulator as a Service (NEaaS). Here, virtualization plays a major role in deploying a NEaaS based on Kathará.
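Kathará, on which the NEaaS builds, describes a topology declaratively in a lab.conf file. A minimal lab (hypothetical machine names) connecting two hosts through one collision domain might look like:

```
# lab.conf: machines pc1 and pc2, each with interface 0 on collision domain "A"
pc1[0]="A"
pc2[0]="A"
```

Running `kathara lstart` in the lab directory boots the containers, which is the kind of workflow a NEaaS front end would automate.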
This paper discusses the problem of finding multiple shortest disjoint paths in modern communication networks, which is essential for ultra-reliable and time-sensitive applications. Dijkstra’s algorithm has been a popular solution to the shortest-path problem, but using it repeatedly to find multiple paths does not scale. The Multiple Disjoint Path Algorithm (MDPAlg), published in 2021, proposes the use of a single full graph to construct multiple disjoint paths. This paper proposes modifications to the algorithm to include a delay constraint, which is important in time-sensitive applications. Different delay-constrained least-cost routing algorithms are compared comprehensively to evaluate the benefits of the adapted MDPAlg algorithm. Fault tolerance, and thereby reliability, is ensured by generating multiple link-disjoint paths from source to destination.
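The repetitive-Dijkstra baseline that MDPAlg improves on, running Dijkstra once per path and deleting the links already used, can be sketched as follows (an illustration of the baseline, not MDPAlg itself):

```python
import heapq

def dijkstra(graph, src, dst):
    """graph: {node: {neighbor: cost}}. Returns (cost, path) or (inf, None)."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), None

def link_disjoint_paths(graph, src, dst, k):
    """Greedy baseline: run Dijkstra up to k times, deleting the links of
    each found path so the next path is link-disjoint from all previous ones."""
    g = {u: dict(nbrs) for u, nbrs in graph.items()}   # work on a copy
    paths = []
    for _ in range(k):
        cost, path = dijkstra(g, src, dst)
        if path is None:
            break
        paths.append((cost, path))
        for u, v in zip(path, path[1:]):               # remove both directions
            g[u].pop(v, None)
            g.get(v, {}).pop(u, None)
    return paths
```

Each additional path costs a full Dijkstra run over the remaining graph, which is exactly the scalability problem the single-graph MDPAlg approach avoids.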
In this work, we describe the network, from data collection to data processing and storage, as a system composed of different layers. We outline these layers and highlight major tasks and dependencies with regard to energy consumption and energy efficiency. With this view, we can work out the challenges and questions a future system architect must answer to provide a more sustainable, green, resource-friendly, and energy-efficient application or system. To this end, all system layers must be considered both individually and as a whole for future IoT solutions. This requires, in particular, novel sustainability metrics in addition to current Quality of Service and Quality of Experience metrics in order to provide a high-performing, user-satisfying, and sustainable network.
How to Model and Predict the Scalability of a Hardware-In-The-Loop Test Bench for Data Re-Injection?
(2023)
This paper describes a novel application of an empirical network calculus model based on measurements of a hardware-in-the-loop (HIL) test system. The aim is to predict the performance of a HIL test bench for open-loop re-injection in the context of scalability. HIL test benches are distributed computer systems comprising software, hardware, and networking devices. They are used to validate complex technical systems but have not yet been the system under study themselves. Our approach is to use measurements from the HIL system to create an empirical model for arrival and service curves. We predict the performance and design the previously unknown parameters of the HIL simulator with network calculus (NC), namely the buffer sizes and the minimum pre-buffer time needed for the playback buffer. We furthermore show that it is possible to estimate the CPU load from arrival and service curves based on the utilization theorem, and hence to estimate the scalability of the HIL system with respect to the number of sensor streams.
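For the common textbook case of a token-bucket arrival curve a(t) = b + r * t served by a rate-latency service curve s(t) = R * max(t - T, 0), network calculus gives closed-form delay and backlog bounds. A generic sketch with assumed parameters, not the paper's empirical model:

```python
def nc_bounds(r, b, R, T):
    """Token-bucket arrival (rate r, burst b) through a rate-latency server
    (rate R, latency T). Returns (delay_bound, backlog_bound)."""
    assert r <= R, "stability requires the service rate to cover the arrival rate"
    delay = T + b / R        # max horizontal deviation between the curves
    backlog = b + r * T      # max vertical deviation between the curves
    return delay, backlog
```

In the setting of the abstract, the backlog bound sizes the buffers and the delay bound lower-bounds the pre-buffer time for the playback buffer.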
In recent years, satellite communication has been expanding its field of application in the world of computer networks. This paper aims to provide an overview of how a typical scenario involving 5G Non-Terrestrial Networks (NTNs) for vehicle-to-everything (V2X) applications is characterized. In particular, a first implementation of a system that integrates the two will be described. Such a framework will later be used to evaluate the performance of applications such as Vehicle Monitoring (VM), Remote Driving (RD), Voice over IP (VoIP), and others. Different configuration scenarios, such as Low Earth Orbit and Geostationary Orbit, will be considered.
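The main difference between the Low Earth Orbit and Geostationary Orbit scenarios is propagation delay, which follows directly from orbital altitude. A quick back-of-the-envelope calculation (illustrative altitudes assumed: roughly 550 km for LEO, 35,786 km for GEO):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def one_way_delay_ms(altitude_km):
    """Minimum one-way ground-to-satellite propagation delay in milliseconds
    (satellite at zenith; real paths and processing add more latency)."""
    return altitude_km * 1e3 / C * 1e3

leo = one_way_delay_ms(550)      # on the order of 2 ms
geo = one_way_delay_ms(35_786)   # on the order of 120 ms
```

This two-orders-of-magnitude gap is why delay-sensitive applications such as Remote Driving behave very differently in the two configuration scenarios.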