004 Data processing; Computer science
Has Fulltext
- yes (38)
Year of publication
- 2023 (38)
Document Type
- Working Paper (19)
- Journal article (12)
- Doctoral Thesis (5)
- Conference Proceeding (1)
- Preprint (1)
Language
- English (38)
Keywords
- Deep learning (3)
- P4 (3)
- 5G (2)
- SDN (2)
- connected mobility applications (2)
- multipath scheduling (2)
- network calculus (2)
- 3D Reconstruction (1)
- 3D reconstruction (1)
- 4D-GIS (1)
- 5G core network (1)
- 6G (1)
- ATSSS (1)
- Accessibility (1)
- Add-on-Miss (1)
- BPM (1)
- BPMN (1)
- User experience (1)
- User research (1)
- Image processing (1)
- CHI Conference (1)
- Computer Vision (1)
- Containerization (1)
- Deep Learning (1)
- Dijkstra’s algorithm (1)
- Domain-specific language (1)
- Three-dimensional reconstruction (1)
- Edge-MEC-Cloud (1)
- Emotion inference (1)
- Emotion recognition (1)
- Emotion interpretation (1)
- FIFO caching strategies (1)
- Gastroenterological endoscopy (1)
- Feeling (1)
- Human-centered computing / Access (1)
- Human-centered computing / Human computer interaction (HCI) / Interaction paradigms / Mixed / augmented reality (1)
- Human-centered computing / Human computer interaction (HCI) / Interaction paradigms / Virtual reality (1)
- Human-centered computing / Human computer interaction (HCI) / Interaction devices (1)
- Human-centered computing / Human computer interaction (HCI) / Interaction techniques (1)
- IT security (1)
- Internet of Things (1)
- IoT (1)
- IoT-driven processes (1)
- JCAS (1)
- Kathará (1)
- Climate (1)
- Cryo-electron microscopy (1)
- LFU (1)
- LRU (1)
- Linux (1)
- MP-DCCP (1)
- Machine Learning (1)
- Machine learning (1)
- Computer vision (1)
- Medical Image Analysis (1)
- Metaverse (1)
- Method (1)
- Model (1)
- Mycoplasma pneumoniae (1)
- Network Emulator (1)
- Neural network (1)
- Object Detection (1)
- P4-INT (1)
- PROLOG <programming language> (1)
- Polypectomy (1)
- Point cloud (1)
- Self-calibration (1)
- Self-calibration (1)
- Sensing-aaS (1)
- Structure-from-Motion (1)
- TTL validation of data consistency (1)
- Tomography (1)
- Underwater Mapping (1)
- Underwater Scanning (1)
- Visualized Kathará (1)
- WhatsApp (1)
- Scientific observation (1)
- anthropomorphism (1)
- availability (1)
- background knowledge (1)
- baseline detection (1)
- bit (1)
- camera orientation (1)
- climate (1)
- cognitive impairment (1)
- communication models (1)
- communication networks (1)
- computer performance evaluation (1)
- content-based image retrieval (1)
- cosmology (1)
- cryo-EM (1)
- cryo-ET (1)
- data warehouse (1)
- decision support system (1)
- deep learning (1)
- definite clause grammars (1)
- delay constrained (1)
- dementia (1)
- digital twin (1)
- disjoint multi-paths (1)
- eHealth (1)
- electronic health records (1)
- emergent time (1)
- emulation (1)
- energy efficiency (1)
- extended reality (1)
- feature matching (1)
- federated learning (1)
- fog computing (1)
- fully convolutional neural networks (1)
- future energy grid exploration (1)
- global IPX network (1)
- group-based communication (1)
- hardware-in-the-loop simulation (1)
- hardware-in-the-loop streaming system (1)
- historical document analysis (1)
- historical images (1)
- hit ratio analysis and simulation (1)
- hospital data (1)
- human–computer interaction (1)
- informal education (1)
- information extraction (1)
- intelligent voice assistant (1)
- key-insight extraction (1)
- knowledge representation (1)
- layout recognition (1)
- least cost (1)
- local energy system (1)
- logic programming (1)
- long-term analysis (1)
- media analysis (1)
- medical records (1)
- membrane protein (1)
- misconceptions (1)
- mobile instant messaging (1)
- mobile messaging application (1)
- model output statistics (1)
- multipath (1)
- multipath packet scheduling (1)
- multiscale encoder (1)
- mycoplasma (1)
- neural networks (1)
- non-terrestrial networks (1)
- ontology (1)
- orchestration (1)
- packet reception method (1)
- particle picking (1)
- performance (1)
- performance monitoring (1)
- phase space (1)
- phase transition (1)
- pneumoniae (1)
- private chat groups (1)
- qubit (1)
- radiology (1)
- ransomware (1)
- satellite communication (1)
- scalability (1)
- scalability evaluation (1)
- sentinel (1)
- service-curve estimation (1)
- shortest path routing (1)
- signaling traffic (1)
- simulation (1)
- smart meter data utilization (1)
- smart speaker (1)
- social interaction (1)
- social relationship (1)
- social role (1)
- state management (1)
- statistics and numerical data (1)
- surface model (1)
- sustainability (1)
- table extraction (1)
- table understanding (1)
- text line detection (1)
- timestamping method (1)
- tomography (1)
- visual proteomics (1)
Institute
Other participating institutions
EU-Project number / Contract (GA) number
- 101069547 (1)
This paper discusses the problem of finding multiple shortest disjoint paths in modern communication networks, which is essential for ultra-reliable and time-sensitive applications. Dijkstra’s algorithm has been a popular solution to the shortest path problem, but applying it repeatedly to find multiple paths does not scale. The Multiple Disjoint Path Algorithm (MDPAlg), published in 2021, proposes the use of a single full graph to construct multiple disjoint paths. This paper proposes modifications to the algorithm that add a delay constraint, which is important in time-sensitive applications. Different delay-constrained least-cost routing algorithms are compared comprehensively to evaluate the benefits of the adapted MDPAlg. Fault tolerance, and thereby reliability, is ensured by generating multiple link-disjoint paths from source to destination.
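The repetitive-Dijkstra baseline that the abstract contrasts with MDPAlg can be sketched as follows: run Dijkstra, remove the links of each found path, and repeat until the desired number of link-disjoint paths is found, accepting only paths whose end-to-end delay stays within the constraint. The graph encoding with a (cost, delay) pair per link and all names below are illustrative assumptions, not details of MDPAlg itself.

```python
import heapq

def dijkstra(graph, src, dst):
    # graph: {node: {neighbor: (cost, delay)}}; returns (cost, delay, path)
    pq = [(0, 0, src, [src])]
    seen = set()
    while pq:
        cost, delay, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, delay, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, (c, d) in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(pq, (cost + c, delay + d, nbr, path + [nbr]))
    return None

def k_disjoint_paths(graph, src, dst, k, max_delay):
    g = {u: dict(nbrs) for u, nbrs in graph.items()}  # work on a copy
    paths = []
    for _ in range(k):
        found = dijkstra(g, src, dst)
        if found is None or found[1] > max_delay:
            break  # no further delay-feasible disjoint path exists
        paths.append(found)
        for u, v in zip(found[2], found[2][1:]):  # drop used links (both directions)
            g.get(u, {}).pop(v, None)
            g.get(v, {}).pop(u, None)
    return paths

# Toy undirected network: link -> (cost, delay)
net = {
    "A": {"B": (1, 2), "C": (2, 1)},
    "B": {"A": (1, 2), "D": (1, 2)},
    "C": {"A": (2, 1), "D": (2, 1)},
    "D": {"B": (1, 2), "C": (2, 1)},
}
k_disjoint_paths(net, "A", "D", k=2, max_delay=10)
```

The non-scalability the abstract mentions is visible here: each additional disjoint path costs a full Dijkstra run on the pruned graph.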
In recent years, normalized digital surface models (nDSMs) have been steadily gaining importance as a means to solve large-scale geographic problems. High-resolution surface models are valuable, as they provide detailed information for a specific area. However, high-resolution measurements are time-consuming and costly, and only a few approaches exist to create high-resolution nDSMs for extensive areas. This article explores approaches to extract high-resolution nDSMs from low-resolution Sentinel-2 data, allowing us to derive large-scale models. We thereby exploit the advantages of Sentinel-2: open access, global coverage, and steady updates through a high repetition rate. Several deep learning models are trained to close the gap between low-resolution input data and high-resolution surface maps. With U-Net as a base architecture, we extend the capabilities of our model by integrating tailored multiscale encoders with differently sized convolution kernels as well as conformed self-attention inside the skip connection gates. Using pixelwise regression, our U-Net base models achieve a mean height error of approximately 2 m. Moreover, our enhancements to the model architecture reduce the model error by more than 7%.
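The pixelwise evaluation metric mentioned above (mean height error) can be illustrated with a minimal sketch; the 2x2 height grids (in metres) are invented values, not model output.

```python
def mean_height_error(predicted, reference):
    # Mean absolute height difference over all pixels of two height grids.
    total, n = 0.0, 0
    for row_p, row_r in zip(predicted, reference):
        for p, r in zip(row_p, row_r):
            total += abs(p - r)
            n += 1
    return total / n

pred = [[10.0, 12.5], [0.0, 3.0]]
ref = [[8.0, 12.0], [1.0, 3.0]]
mean_height_error(pred, ref)  # (2.0 + 0.5 + 1.0 + 0.0) / 4 = 0.875
```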
This paper presents a prototypical implementation of the In-band Network Telemetry (INT) specification in P4 and demonstrates a use case in which a Tofino switch measures device and network performance in a lab setting. This work is based on research activities in the area of P4 data plane programming conducted at the network lab of HTW Berlin.
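On the collector side of such a setup, per-hop INT metadata has to be unpacked from packet payloads. The sketch below assumes a simplified 16-byte per-hop record (switch ID, ingress timestamp, egress timestamp, queue depth); this is an illustration only, not the exact INT wire format nor necessarily the prototype's encoding.

```python
import struct

HOP_FMT = "!IIII"  # network byte order: four unsigned 32-bit fields
HOP_SIZE = struct.calcsize(HOP_FMT)

def parse_hops(payload):
    # Split the payload into fixed-size per-hop records and decode each.
    hops = []
    for off in range(0, len(payload), HOP_SIZE):
        sw, t_in, t_out, qdepth = struct.unpack_from(HOP_FMT, payload, off)
        hops.append({"switch": sw, "latency": t_out - t_in, "queue_depth": qdepth})
    return hops

# Two hops: switch 1 (latency 40, queue depth 7), switch 2 (latency 10, empty queue).
raw = struct.pack(HOP_FMT, 1, 100, 140, 7) + struct.pack(HOP_FMT, 2, 150, 160, 0)
parse_hops(raw)
```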
In recent years, satellite communication has been expanding its field of application in the world of computer networks. This paper aims to provide an overview of how a typical scenario involving 5G Non-Terrestrial Networks (NTNs) for vehicle-to-everything (V2X) applications is characterized. In particular, a first implementation of a system that integrates the two will be described. This framework will later be used to evaluate the performance of applications such as Vehicle Monitoring (VM), Remote Driving (RD), Voice over IP (VoIP), and others. Different configuration scenarios, such as Low Earth Orbit and Geostationary Orbit, will be considered.
The ongoing digitization of historical photographs in archives allows investigating the quality, quantity, and distribution of these images. However, the exact interior and exterior camera orientations of these photographs are usually lost during the digitization process. The proposed method uses content-based image retrieval (CBIR) in combination with metadata information to filter exterior images of single buildings. The retrieved photographs are automatically processed in an adapted structure-from-motion (SfM) pipeline to determine the camera parameters. In an interactive georeferencing process, the calculated camera positions are transferred into a global coordinate system. As all image and camera data are efficiently stored in the proposed 4D database, they can be conveniently accessed afterward to georeference newly digitized images by using photogrammetric triangulation and spatial resection. The results show that CBIR and the subsequent SfM are robust methods for various kinds of buildings and different quantities of data. The absolute accuracy of the camera positions after georeferencing lies in the range of a few meters, likely introduced by the inaccurate LOD2 models used for the transformation. The proposed photogrammetric method, the database structure, and the 4D visualization interface enable adding historical urban photographs and 3D models from other locations.
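The georeferencing step, transferring calculated camera positions into a global coordinate system, can be sketched in 2D as estimating a similarity (Helmert) transform (scale, rotation, translation) from point correspondences. The coordinates below are invented, and the real pipeline works in 3D; this is a sketch of the principle only.

```python
import math

def helmert_2d(local, world):
    # Least-squares 2D similarity transform from local -> world coordinates.
    n = len(local)
    cxl = sum(x for x, _ in local) / n
    cyl = sum(y for _, y in local) / n
    cxw = sum(x for x, _ in world) / n
    cyw = sum(y for _, y in world) / n
    a = b = s = 0.0
    for (xl, yl), (xw, yw) in zip(local, world):
        dxl, dyl = xl - cxl, yl - cyl
        dxw, dyw = xw - cxw, yw - cyw
        a += dxl * dxw + dyl * dyw  # dot part -> cosine component
        b += dxl * dyw - dyl * dxw  # cross part -> sine component
        s += dxl * dxl + dyl * dyl
    scale = math.hypot(a, b) / s
    theta = math.atan2(b, a)
    tx = cxw - scale * (math.cos(theta) * cxl - math.sin(theta) * cyl)
    ty = cyw - scale * (math.sin(theta) * cxl + math.cos(theta) * cyl)
    return scale, theta, (tx, ty)

# The world points were generated with scale 2, a 90-degree rotation, and
# translation (5, 1), so the estimate recovers exactly those parameters.
scale, theta, (tx, ty) = helmert_2d(
    [(0, 0), (1, 0), (0, 1)], [(5, 1), (5, 3), (3, 1)]
)
```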
In this work, we describe the network from data collection to data processing and storage as a system based on different layers. We outline the different layers and highlight major tasks and dependencies with regard to energy consumption and energy efficiency. With this view, we work out the challenges and questions a future system architect must answer to provide a more sustainable, green, resource-friendly, and energy-efficient application or system. For future IoT solutions, all system layers must therefore be considered individually but also as a whole. This requires, in particular, novel sustainability metrics in addition to current Quality of Service and Quality of Experience metrics to provide a high-performing, user-satisfying, and sustainable network.
Background: Due to the importance of radiologic examinations, such as X-rays or computed tomography scans, for many clinical diagnoses, the optimal use of the radiology department is one of the primary goals of many hospitals.
Objective: This study aims to calculate the key metrics of this use by creating a radiology data warehouse solution, where data from radiology information systems (RISs) can be imported and then queried using a query language as well as a graphical user interface (GUI).
Methods: Using a simple configuration file, the developed system allowed for the processing of radiology data exported from any kind of RIS into a Microsoft Excel, comma-separated values (CSV), or JavaScript Object Notation (JSON) file. These data were then imported into a clinical data warehouse. Additional values based on the radiology data were calculated during this import process by implementing one of several provided interfaces. Afterward, the query language and GUI of the data warehouse were used to configure and calculate reports on these data. For the most common types of requested reports, a web interface was created to view their numbers as graphics.
Results: The tool was successfully tested with the data of 4 different German hospitals from 2018 to 2021, with a total of 1,436,111 examinations. The user feedback was good, since all their queries could be answered if the available data were sufficient. The initial processing of the radiology data for using them with the clinical data warehouse took (depending on the amount of data provided by each hospital) between 7 minutes and 1 hour 11 minutes. Calculating 3 reports of different complexities on the data of each hospital was possible in 1-3 seconds for reports with up to 200 individual calculations and in up to 1.5 minutes for reports with up to 8200 individual calculations.
Conclusions: A system was developed with the main advantage of being generic concerning the export of different RISs as well as concerning the configuration of queries for various reports. The queries could be configured easily using the GUI of the data warehouse, and their results could be exported into the standard formats Excel and CSV for further processing.
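As an illustration of the kind of report described above, here is a minimal sketch that counts examinations per modality and year from a CSV export. The column names and rows are invented, not taken from any actual RIS export or from the system's configuration format.

```python
import csv
import io
from collections import Counter

def exams_per_modality_year(csv_text):
    # Tally examinations by (modality, year) from a CSV export.
    counts = Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        counts[(row["modality"], row["date"][:4])] += 1
    return counts

export = """modality,date
CT,2018-03-01
CT,2018-07-12
XR,2019-01-30
"""
exams_per_modality_year(export)  # CT/2018: 2, XR/2019: 1
```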
Deep learning enables enormous progress in many computer vision-related tasks. Artificial Intelligence (AI) steadily yields new state-of-the-art results in the field of detection and classification, where AI performance equals or exceeds human performance. Those achievements have impacted many domains, including medical applications.
One particular field of medical applications is gastroenterology. In gastroenterology, machine learning algorithms are used to assist examiners during interventions. One of the most critical concerns for gastroenterologists is the development of Colorectal Cancer (CRC), which is one of the leading causes of cancer-related deaths worldwide. Detecting polyps in screening colonoscopies is the essential procedure to prevent CRC. During a colonoscopy, the gastroenterologist uses an endoscope to screen the whole colon for polyps. Polyps are mucosal growths that can vary in severity.
This thesis supports gastroenterologists in their examinations with automated detection and classification systems for polyps. The main contribution is a real-time polyp detection system. This system is ready to be installed in any gastroenterology practice worldwide using open-source software. The system achieves state-of-the-art detection results and is currently evaluated in a clinical trial in four different centers in Germany.
The thesis presents two additional key contributions: One is a polyp detection system with extended vision tested in an animal trial. Polyps often hide behind folds or in uninvestigated areas. Therefore, the polyp detection system with extended vision uses an endoscope assisted by two additional cameras to see behind those folds. If a polyp is detected, the endoscopist receives a visual signal. While the detection system handles the two additional camera inputs, the endoscopist focuses on the main camera as usual.
The second comprises two polyp classification models, one for classification based on shape (Paris classification) and the other based on surface and texture (NBI International Colorectal Endoscopic (NICE) classification). Both classifications help the endoscopist with the treatment of, and the decisions about, the detected polyp.
The key algorithms of the thesis achieve state-of-the-art performance. Notably, the polyp detection system, tested on a highly demanding video data set, achieves an F1 score of 90.25% while working in real time. These results exceed those of all real-time systems in the literature. Furthermore, the first preliminary results of the clinical trial of the polyp detection system suggest a high Adenoma Detection Rate (ADR). In the preliminary study, all polyps were detected by the polyp detection system, and the system achieved a high usability score of 96.3 (max 100). The Paris classification model achieved a state-of-the-art F1 score of 89.35%. The NICE classification model achieved an F1 score of 81.13%.
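The F1 scores quoted above combine precision and recall in the usual way. The detection counts in this sketch are invented for illustration, not data from the trial.

```python
def f1_score(tp, fp, fn):
    # Harmonic mean of precision and recall from raw detection counts.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

round(f1_score(tp=185, fp=15, fn=25) * 100, 2)  # 90.24
```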
Furthermore, a large data set for polyp detection and classification was created during this thesis. For this purpose, a fast and robust annotation system called Fast Colonoscopy Annotation Tool (FastCAT) was developed. The system simplifies the annotation process for gastroenterologists: the gastroenterologists only annotate key parts of the endoscopic video. Afterward, those video parts are pre-labeled by a polyp detection AI to speed up the process. After the AI has pre-labeled the frames, non-experts correct and finish the annotation. This annotation process is fast and ensures high quality. FastCAT reduces the overall workload of the gastroenterologist on average by a factor of 20 compared to an open-source state-of-the-art annotation tool.
The Internet of Things (IoT) enables a variety of smart applications, including smart home, smart manufacturing, and smart city. By enhancing Business Process Management Systems with IoT capabilities, the execution and monitoring of business processes can be significantly improved. Providing holistic support for modeling, executing, and monitoring IoT-driven processes, however, constitutes a challenge. Existing process modeling and process execution languages, such as BPMN 2.0, are unable to fully capture the IoT characteristics (e.g., asynchronicity and parallelism) of IoT-driven processes. In this article, we present BPMNE4IoT, a holistic framework for modeling, executing, and monitoring IoT-driven processes. We introduce various artifacts and events based on the BPMN 2.0 metamodel that allow realizing the desired IoT awareness of business processes. The framework is evaluated along two real-world scenarios from two different domains. Moreover, we present a user study comparing BPMNE4IoT and BPMN 2.0. In particular, this study confirmed that the BPMNE4IoT framework facilitates the support of IoT-driven processes.
An important but very time-consuming part of the research process is literature review. An already large and still growing body of publications, together with a steadily increasing publication rate, continues to worsen the situation. Consequently, automating this task as far as possible is desirable. Experimental results of systems are key insights of high importance during literature review and are usually represented in the form of tables. Our pipeline KIETA exploits these tables to contribute to the endeavor of automation by extracting them and their contained knowledge from scientific publications. The pipeline is split into multiple steps to guarantee modularity and analyzability, as well as agnosticism regarding the specific scientific domain up until the knowledge extraction step, which is based upon an ontology. Additionally, a dataset of corresponding articles has been manually annotated with information regarding table and knowledge extraction. Experiments show promising results that signal the possibility of an automated system, while also indicating the limits of extracting knowledge from tables without any context.
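The final, ontology-based knowledge-extraction step can be sketched as mapping an extracted results table to (system, metric, value) triples. In this sketch a plain set of metric names stands in for the ontology lookup, and the table contents and system names are invented examples, not KIETA's actual vocabulary or results.

```python
METRICS = {"F1", "Accuracy"}  # stand-in for an ontology of known metrics

def table_to_triples(header, rows):
    # Emit one (system, metric, value) triple per recognized metric column.
    triples = []
    for row in rows:
        system = row[0]  # assume the first column names the system
        for col, cell in zip(header[1:], row[1:]):
            if col in METRICS:
                triples.append((system, col, float(cell)))
    return triples

header = ["System", "F1", "Accuracy"]
rows = [["SystemA", "0.81", "0.85"], ["SystemB", "0.74", "0.79"]]
table_to_triples(header, rows)
```

Columns whose headers the vocabulary does not recognize are simply skipped, mirroring the paper's observation that knowledge extraction from tables has limits without surrounding context.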