TY - JOUR A1 - Schlör, Daniel A1 - Ring, Markus A1 - Hotho, Andreas T1 - iNALU: Improved Neural Arithmetic Logic Unit JF - Frontiers in Artificial Intelligence N2 - Neural networks have to capture mathematical relationships in order to learn various tasks. They approximate these relations implicitly and therefore often do not generalize well. The recently proposed Neural Arithmetic Logic Unit (NALU) is a novel neural architecture which is able to explicitly represent the mathematical relationships by the units of the network to learn operations such as summation, subtraction or multiplication. Although NALUs have been shown to perform well on various downstream tasks, an in-depth analysis reveals practical shortcomings by design, such as the inability to multiply or divide negative input values or training stability issues for deeper networks. We address these issues and propose an improved model architecture. We evaluate our model empirically in various settings from learning basic arithmetic operations to more complex functions. Our experiments indicate that our model solves stability issues and outperforms the original NALU model in terms of arithmetic precision and convergence. KW - neural networks KW - machine learning KW - arithmetic calculations KW - neural architecture KW - experimental evaluation Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-212301 SN - 2624-8212 VL - 3 ER - TY - RPRT A1 - Vomhoff, Viktoria A1 - Geissler, Stefan A1 - Gebert, Steffen A1 - Hossfeld, Tobias T1 - Towards Understanding the Global IPX Network from an MVNO Perspective T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - In this paper, we work to understand the global IPX network from the perspective of an MVNO. In order to do this, we provide a brief description of the global architecture of mobile carriers. We provide initial results with respect to mapping the vast and complex interconnection network enabling global roaming from the point of view of a single MVNO. Finally, we provide preliminary results regarding the quality of service observed under global roaming conditions. KW - global IPX network Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322121 ER - TY - JOUR A1 - Loh, Frank A1 - Poignée, Fabian A1 - Wamser, Florian A1 - Leidinger, Ferdinand A1 - Hoßfeld, Tobias T1 - Uplink vs. Downlink: Machine Learning-Based Quality Prediction for HTTP Adaptive Video Streaming JF - Sensors N2 - Streaming video is responsible for the bulk of Internet traffic these days. For this reason, Internet providers and network operators try to make predictions and assessments about the streaming quality for an end user. Current monitoring solutions are based on a variety of different machine learning approaches. The challenge for providers and operators nowadays is that existing approaches require large amounts of data. In this work, the most relevant quality of experience metrics, i.e., the initial playback delay, the video streaming quality, video quality changes, and video rebuffering events, are examined using a voluminous data set of more than 13,000 YouTube video streaming runs that were collected with the native YouTube mobile app. Three Machine Learning models are developed and compared to estimate playback behavior based on uplink request information.
The main focus has been on developing a lightweight approach using as few features and as little data as possible, while maintaining state-of-the-art performance. KW - HTTP adaptive video streaming KW - quality of experience prediction KW - machine learning Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-241121 SN - 1424-8220 VL - 21 IS - 12 ER - TY - CHAP A1 - Truman, Samuel A1 - von Mammen, Sebastian T1 - Interactive Self-Assembling Agent Ensembles T2 - Proceedings of the 1st Games Technology Summit N2 - In this paper, we bridge the gap between procedural content generation (PCG) and user-generated content (UGC) by proposing and demonstrating an interactive agent-based model of self-assembling ensembles that can be directed through user input. We motivate these efforts by considering the opportunities technology provides to pursue game designs based on corresponding game design frameworks. We present three different use cases of the proposed model that emphasize its potential to (1) self-assemble into predefined 3D graphical assets, (2) define new structures in the context of virtual environments by self-assembling layers on the surfaces of arbitrary 3D objects, and (3) allow novel structures to self-assemble only considering the model’s configuration and no external dependencies. To address the performance restrictions in computer games, we realized the prototypical model implementation by means of an efficient entity component system (ECS). We conclude the paper with an outlook on future steps to further explore novel interactive, dynamic PCG mechanics and to ensure their efficiency. KW - procedural content generation KW - user-generated content KW - game mechanics KW - agent-based models KW - self-assembly Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-246032 ER - TY - RPRT A1 - Navade, Piyush A1 - Maile, Lisa A1 - German, Reinhard T1 - Multiple DCLC Routing Algorithms for Ultra-Reliable and Time-Sensitive Applications T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - This paper discusses the problem of finding multiple shortest disjoint paths in modern communication networks, which is essential for ultra-reliable and time-sensitive applications. Dijkstra’s algorithm has been a popular solution for the shortest path problem, but repetitive use of it to find multiple paths is not scalable. The Multiple Disjoint Path Algorithm (MDPAlg), published in 2021, proposes the use of a single full graph to construct multiple disjoint paths. This paper proposes modifications to the algorithm to include a delay constraint, which is important in time-sensitive applications. Different delay-constrained least-cost routing algorithms are compared in a comprehensive manner to evaluate the benefits of the adapted MDPAlg algorithm. Fault tolerance, and thereby reliability, is ensured by generating multiple link-disjoint paths from source to destination.
KW - Dijkstra’s algorithm KW - shortest path routing KW - disjoint multi-paths KW - delay constrained KW - least cost Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322177 ER - TY - JOUR A1 - Halbig, Andreas A1 - Latoschik, Marc Erich T1 - A systematic review of physiological measurements, factors, methods, and applications in virtual reality JF - Frontiers in Virtual Reality N2 - Measurements of physiological parameters provide an objective, often non-intrusive, and (at least semi-)automatic evaluation and utilization of user behavior. In addition, specific hardware devices of Virtual Reality (VR) often ship with built-in sensors, i.e. eye-tracking and movement sensors. Hence, the combination of physiological measurements and VR applications seems promising. Several approaches have investigated the applicability and benefits of this combination for various fields of applications. However, the range of possible application fields, coupled with potentially useful and beneficial physiological parameters, types of sensor, target variables and factors, and analysis approaches and techniques is manifold. This article provides a systematic overview and an extensive state-of-the-art review of the usage of physiological measurements in VR. We identified 1,119 works that make use of physiological measurements in VR. Within these, we identified 32 approaches that focus on the classification of characteristics of experience, common in VR applications. The first part of this review categorizes the 1,119 works by field of application, i.e. therapy, training, entertainment, and communication and interaction, as well as by the specific target factors and variables measured by the physiological parameters. An additional category summarizes general VR approaches applicable to all specific fields of application since they target typical VR qualities. In the second part of this review, we analyze the target factors and variables regarding the respective methods used for an automatic analysis and, potentially, classification. For example, we highlight which measurement setups have been proven to be sensitive enough to distinguish different levels of arousal, valence, anxiety, stress, or cognitive workload in the virtual realm. This work may prove useful for all researchers wanting to use physiological data in VR and who want to have a good overview of prior approaches taken, their benefits and potential drawbacks. KW - virtual reality KW - use cases KW - sensors KW - tools KW - biosignals KW - psychophysiology KW - HMD (Head-Mounted Display) KW - systematic review Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-260503 VL - 2 ER - TY - JOUR A1 - Li, Ningbo A1 - Guan, Lianwu A1 - Gao, Yanbin A1 - Du, Shitong A1 - Wu, Menghao A1 - Guang, Xingxing A1 - Cong, Xiaodan T1 - Indoor and outdoor low-cost seamless integrated navigation system based on the integration of INS/GNSS/LIDAR system JF - Remote Sensing N2 - Global Navigation Satellite System (GNSS) provides accurate positioning data for vehicular navigation in open outdoor environment. In an indoor environment, Light Detection and Ranging (LIDAR) Simultaneous Localization and Mapping (SLAM) establishes a two-dimensional map and provides positioning data. However, LIDAR can only provide relative positioning data and it cannot directly provide the latitude and longitude of the current position.
As a consequence, GNSS/Inertial Navigation System (INS) integrated navigation could be employed outdoors, while the indoor part makes use of INS/LIDAR integrated navigation and the corresponding switching navigation will make the indoor and outdoor positioning consistent. In addition, when the vehicle enters the garage, the GNSS signal will be blurred for a while and then disappear. Ambiguous GNSS satellite signals will lead to the continuous distortion or overall drift of the positioning trajectory in the indoor condition. Therefore, an INS/LIDAR seamless integrated navigation algorithm and a switching algorithm based on the vehicle navigation system are designed. According to the experimental data, the positioning accuracy of the INS/LIDAR navigation algorithm in the simulated environmental experiment is 50% higher than that of the Dead Reckoning (DR) algorithm. Besides, the switching algorithm developed based on the INS/LIDAR integrated navigation algorithm can achieve an 80% success rate in navigation mode switching. KW - vehicular navigation KW - GNSS/INS integrated navigation KW - INS/LIDAR integrated navigation KW - switching navigation Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-216229 SN - 2072-4292 VL - 12 IS - 19 ER - TY - JOUR A1 - Lesch, Veronika A1 - König, Maximilian A1 - Kounev, Samuel A1 - Stein, Anthony A1 - Krupitzer, Christian T1 - Tackling the rich vehicle routing problem with nature-inspired algorithms JF - Applied Intelligence N2 - In the last decades, the classical Vehicle Routing Problem (VRP), i.e., assigning a set of orders to vehicles and planning their routes, has been intensively researched. As the mere assignment of orders to vehicles and the planning of their routes is already an NP-complete problem, applications of these algorithms in practice often fail to take into account the constraints and restrictions that apply in real-world applications, the so-called rich VRP (rVRP), and are limited to single aspects. In this work, we incorporate the main relevant real-world constraints and requirements. We propose a two-stage strategy and a Timeline algorithm for time windows and pause times, and apply a Genetic Algorithm (GA) and Ant Colony Optimization (ACO) individually to the problem to find optimal solutions. Our evaluation of eight different problem instances against four state-of-the-art algorithms shows that our approach handles all given constraints in a reasonable time. KW - logistics KW - rich vehicle routing problem KW - ant-colony optimization KW - genetic algorithm KW - real-world application Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-268942 SN - 1573-7497 VL - 52 ER - TY - RPRT A1 - Deutschmann, Jörg A1 - Hielscher, Kai-Steffen A1 - German, Reinhard T1 - Next-Generation Satellite Communication Networks T2 - Würzburg Workshop on Next-Generation Communication Networks (WueWoWas'22) N2 - This paper gives an overview of our recent activities in the field of satellite communication networks, including an introduction to geostationary satellite systems and Low Earth Orbit megaconstellations. To mitigate the high latencies of geostationary satellite networks, TCP-splitting Performance Enhancing Proxies are deployed. However, these cannot be applied in the case of encrypted transport headers, as is the case for VPNs or QUIC. We summarize performance evaluation results from multiple measurement campaigns.
In a recently concluded project, multipath communication was used to combine the advantages of very heterogeneous communication paths: low data rate, low latency (e.g., DSL light) and high data rate, high latency (e.g., geostationary satellite). KW - Datennetz KW - satellite communication KW - Performance Enhancing Proxies KW - transport protocols KW - VPN KW - QUIC KW - multipath communication KW - hybrid access Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-280732 ER - TY - JOUR A1 - Carolus, Astrid A1 - Wienrich, Carolin A1 - Törke, Anna A1 - Friedel, Tobias A1 - Schwietering, Christian A1 - Sperzel, Mareike T1 - ‘Alexa, I feel for you!’ Observers’ empathetic reactions towards a conversational agent JF - Frontiers in Computer Science N2 - Conversational agents and smart speakers have grown in popularity offering a variety of options for use, which are available through intuitive speech operation. In contrast to the standard dyad of a single user and a device, voice-controlled operations can be observed by further attendees resulting in new, more social usage scenarios. Referring to the concept of ‘media equation’ and to research on the idea of ‘computers as social actors,’ which describes the potential of technology to trigger emotional reactions in users, this paper asks for the capacity of smart speakers to elicit empathy in observers of interactions. In a 2 × 2 online experiment, 140 participants watched a video of a man talking to an Amazon Echo either rudely or neutrally (factor 1), addressing it as ‘Alexa’ or ‘Computer’ (factor 2). Controlling for participants’ trait empathy, the rude treatment results in participants’ significantly higher ratings of empathy with the device, compared to the neutral treatment. The form of address had no significant effect. Results were independent of the participants’ gender and usage experience indicating a rather universal effect, which confirms the basic idea of the media equation. Implications for users, developers and researchers were discussed in the light of (future) omnipresent voice-based technology interaction scenarios. KW - conversational agent KW - empathy KW - smart speaker KW - media equation KW - computers as social actors KW - human-computer interaction Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-258807 VL - 3 ER - TY - RPRT A1 - Riegler, Clemens A1 - Werner, Lennart A1 - Kayal, Hakan T1 - MAPLE: Marsian Autorotation Probe Lander Experiment N2 - The first step towards aerial planetary exploration has been made. Ingenuity shows extremely promising results, and new missions are already underway. Rotorcraft are capable of flight. This capability could be utilized to support the last stages of Entry, Descent, and Landing. Thus, mass and complexity could be scaled down. Autorotation is one method of descent. It describes unpowered descent and landing, typically performed by helicopters in case of an engine failure. MAPLE is suggested to test these procedures and understand autorotation on other planets. In this series of experiments, the Ingenuity helicopter is utilized. Ingenuity would autorotate a ”mid-air-landing” before continuing with normal flight. Ultimately, the collected data shall help to understand autorotation on Mars and its utilization for interplanetary exploration. 
T3 - Raumfahrttechnik und Extraterrestrik - 2 KW - autorotation KW - descent KW - Mars KW - rotorcraft KW - landing KW - aerospace Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-282390 ER - TY - JOUR A1 - Obremski, David A1 - Lugrin, Jean-Luc A1 - Schaper, Philipp A1 - Lugrin, Birgit T1 - Non-native speaker perception of Intelligent Virtual Agents in two languages: the impact of amount and type of grammatical mistakes JF - Journal on Multimodal User Interfaces N2 - Having a mixed-cultural membership becomes increasingly common in our modern society. It is thus beneficial in several ways to create Intelligent Virtual Agents (IVAs) that reflect a mixed-cultural background as well, e.g., for educational settings. For research with such IVAs, it is essential that they are classified as non-native by members of a target culture. In this paper, we focus on variations of IVAs’ speech to create the impression of non-native speakers that are identified as such by speakers of two different mother tongues. In particular, we investigate grammatical mistakes and identify thresholds beyond which the agent is clearly categorised as a non-native speaker. Therefore, we conducted two experiments: one for native speakers of German, and one for native speakers of English. Results of the German study indicate that beyond 10% of word order mistakes and 25% of infinitive mistakes German-speaking IVAs are perceived as non-native speakers. Results of the English study indicate that beyond 50% of omission mistakes and 50% of infinitive mistakes English-speaking IVAs are perceived as non-native speakers. We believe these thresholds constitute helpful guidelines for computational approaches of non-native speaker generation, simplifying research with IVAs in mixed-cultural settings. KW - mixed-cultural settings KW - Intelligent Virtual Agents KW - verbal behaviour Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-269984 SN - 1783-8738 VL - 15 IS - 2 ER - TY - JOUR A1 - Reinhard, Sebastian A1 - Helmerich, Dominic A. A1 - Boras, Dominik A1 - Sauer, Markus A1 - Kollmannsberger, Philip T1 - ReCSAI: recursive compressed sensing artificial intelligence for confocal lifetime localization microscopy JF - BMC Bioinformatics N2 - Background Localization-based super-resolution microscopy resolves macromolecular structures down to a few nanometers by computationally reconstructing fluorescent emitter coordinates from diffraction-limited spots. The most commonly used algorithms are based on fitting parametric models of the point spread function (PSF) to a measured photon distribution. These algorithms make assumptions about the symmetry of the PSF and thus do not work well with irregular, non-linear PSFs that occur for example in confocal lifetime imaging, where a laser is scanned across the sample. An alternative method for reconstructing sparse emitter sets from noisy, diffraction-limited images is compressed sensing, but due to its high computational cost it has not yet been widely adopted. Deep neural network fitters have recently emerged as a new competitive method for localization microscopy. They can learn to fit arbitrary PSFs, but require extensive simulated training data and do not generalize well. A method to efficiently fit the irregular PSFs from confocal lifetime localization microscopy combining the advantages of deep learning and compressed sensing would greatly improve the acquisition speed and throughput of this method.
Results Here we introduce ReCSAI, a compressed sensing neural network to reconstruct localizations for confocal dSTORM, together with a simulation tool to generate training data. We implemented and compared different artificial network architectures, aiming to combine the advantages of compressed sensing and deep learning. We found that a U-Net with a recursive structure inspired by iterative compressed sensing showed the best results on realistic simulated datasets with noise, as well as on real experimentally measured confocal lifetime scanning data. Adding a trainable wavelet denoising layer as prior step further improved the reconstruction quality. Conclusions Our deep learning approach can reach a similar reconstruction accuracy for confocal dSTORM as frame binning with traditional fitting without requiring the acquisition of multiple frames. In addition, our work offers generic insights on the reconstruction of sparse measurements from noisy experimental data by combining compressed sensing and deep learning. We provide the trained networks, the code for network training and inference as well as the simulation tool as python code and Jupyter notebooks for easy reproducibility. KW - compressed sensing KW - AI KW - SMLM KW - FLIMbee KW - dSTORM Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-299768 VL - 23 IS - 1 ER - TY - RPRT A1 - Simon, Manuel A1 - Gallenmüller, Sebastian A1 - Carle, Georg T1 - Never Miss Twice - Add-On-Miss Table Updates in Software Data Planes T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - State Management at line rate is crucial for critical applications in next-generation networks. P4 is a language used in software-defined networking to program the data plane. The data plane can profit in many circumstances when it is allowed to manage its state without any detour over a controller. This work is based on a previous study by investigating the potential and performance of add-on-miss insertions of state by the data plane. The state keeping capabilities of P4 are limited regarding the amount of data and the update frequency. We follow the tentative specification of an upcoming portable-NIC-architecture and implement these changes into the software P4 target T4P4S. We show that insertions are possible with only a slight overhead compared to lookups and evaluate the influence of the rate of insertions on their latency. KW - SDN KW - state management KW - P4 KW - Add-on-Miss Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322071 ER - TY - RPRT A1 - Brisch, Fabian A1 - Kassler, Andreas A1 - Vestin, Jonathan A1 - Pieska, Marcus A1 - Amend, Markus T1 - Accelerating Transport Layer Multipath Packet Scheduling for 5G-ATSSS T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - Utilizing multiple access networks such as 5G, 4G, and Wi-Fi simultaneously can lead to increased robustness, resiliency, and capacity for mobile users. However, transparently implementing packet distribution over multiple paths within the core of the network faces multiple challenges including scalability to a large number of customers, low latency, and high-capacity packet processing requirements. In this paper, we offload congestion-aware multipath packet scheduling to a smartNIC. 
However, such hardware acceleration faces multiple challenges due to programming language and platform limitations. We implement different multipath schedulers in P4 with different complexity in order to cope with dynamically changing path capacities. Using testbed measurements, we show that our CMon scheduler, which monitors path congestion in the data plane and dynamically adjusts scheduling weights for the different paths based on path state information, can process more than 3.5 Mpps with 25 μs latency. KW - multipath packet scheduling KW - P4 KW - MP-DCCP KW - 5G KW - ATSSS Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322052 ER - TY - RPRT A1 - Hasslinger, Gerhard A1 - Ntougias, Konstantinos A1 - Hasslinger, Frank A1 - Hohlfeld, Oliver T1 - Performance Analysis of Basic Web Caching Strategies (LFU, LRU, FIFO, ...) with Time-To-Live Data Validation T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - Web caches often use a Time-to-live (TTL) limit to validate data consistency with web servers. We study the impact of TTL constraints on the hit ratio of basic strategies in caches of fixed size. We derive analytical results and confirm their accuracy in comparison to simulations. We propose a score-based caching method with awareness of the current TTL per data for improving the hit ratio close to the upper bound. KW - LRU KW - LFU KW - FIFO caching strategies KW - hit ratio analysis and simulation KW - TTL validation of data consistency Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322048 ER - TY - RPRT A1 - Funda, Christoph A1 - Marín García, Pablo A1 - German, Reinhard A1 - Hielscher, Kai-Steffen T1 - Online Algorithm for Arrival & Service Curve Estimation T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - This paper presents a novel concept to extend state-of-the-art buffer monitoring with additional measures to estimate service-curves. The online algorithm for service-curve estimation replaces the state-of-the-art timestamp logging, as we expect it to overcome the main disadvantages of generating a huge amount of data and using a lot of CPU resources to store the data to a file during operation. We prove the accuracy of the online-algorithm offline with timestamp data and compare the derived bounds to the measured delay and backlog. We also do a proof-of-concept of the online-algorithm, implement it in LabVIEW and compare its performance to the timestamp logging by CPU load and data-size of the log-file. However, the implementation is still work-in-progress. KW - hardware-in-the-loop streaming system KW - network calculus KW - service-curve estimation KW - performance monitoring Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322112 ER - TY - RPRT A1 - Mazigh, Sadok Mehdi A1 - Beausencourt, Marcel A1 - Bode, Max Julius A1 - Scheffler, Thomas T1 - Using P4-INT on Tofino for Measuring Device Performance Characteristics in a Network Lab T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - This paper presents a prototypical implementation of the In-band Network Telemetry (INT) specification in P4 and demonstrates a use case, where a Tofino Switch is used to measure device and network performance in a lab setting.
This work is based on research activities in the area of P4 data plane programming conducted at the network lab of HTW Berlin. KW - P4-INT Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322084 ER - TY - RPRT A1 - Nguyen, Kien A1 - Loh, Frank A1 - Hoßfeld, Tobias T1 - Challenges of Serverless Deployment in Edge-MEC-Cloud T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - The emerging serverless computing may meet Edge Cloud in a beneficial manner as the two offer flexibility and dynamicity in optimizing finite hardware resources. However, the lack of proper study of a joint platform leaves a gap in literature about consumption and performance of such integration. To this end, this paper identifies the key questions and proposes a methodology to answer them. KW - Edge-MEC-Cloud Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322025 ER - TY - RPRT A1 - Raffeck, Simon A1 - Geißler, Stefan A1 - Hoßfeld, Tobias T1 - Towards Understanding the Signaling Traffic in 5G Core Networks T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - The Fifth Generation (5G) communication technology, its infrastructure and architecture, though already deployed in campus and small scale networks, is still undergoing continuous changes and research. Especially, in the light of future large scale deployments and industrial use cases, a detailed analysis of the performance and utilization with regard to latency and service times constraints is crucial. To this end, a fine granular investigation of the Network Function (NF) based core system and the duration for all the tasks performed by these services is necessary. This work presents the first steps towards analyzing the signaling traffic in 5G core networks, and introduces a tool to automatically extract sequence diagrams and service times for NF tasks from traffic traces. KW - signaling traffic KW - 5G core network Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322106 ER - TY - RPRT A1 - Großmann, Marcel A1 - Homeyer, Tobias T1 - Emulation of Multipath Transmissions in P4 Networks with Kathará T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - Packets sent over a network can either get lost or reach their destination. Protocols like TCP try to solve this problem by resending the lost packets. However, retransmissions consume a lot of time and are cumbersome for the transmission of critical data. Multipath solutions are quite common to address this reliability issue and are available on almost every layer of the ISO/OSI model. We propose a solution based on a P4 network to duplicate packets in order to send them to their destination via multiple routes. The last network hop ensures that only a single copy of the traffic is further forwarded to its destination by adopting a concept similar to Bloom filters. Besides, if fast delivery is requested we provide a P4 prototype, which randomly forwards the packets over different transmission paths. For reproducibility, we implement our approach in a container-based network emulation system called Kathará. 
KW - P4 KW - multipath KW - emulation KW - Kathará Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322095 ER - TY - RPRT A1 - Riegler, Clemens A1 - Kayal, Hakan T1 - VELEX: Venus Lightning Experiment N2 - Lightning has fascinated humanity since the beginning of our existence. Different types of lightning like sprites and blue jets were discovered, and many more are theorized. However, it is very likely that these phenomena are not exclusive to our home planet. Venus's dense and active atmosphere is a place where lightning is to be expected. Missions like Venera, Pioneer, and Galileo have carried instruments to measure electromagnetic activity. These measurements have indeed delivered results. However, these results are not clear. They could be explained by other effects like cosmic rays, plasma noise, or spacecraft noise. Furthermore, these lightning events seem different from those we know from our home planet. In order to tackle these issues, a different approach to measurement is proposed. When multiple devices in different spacecraft or locations can measure the same atmospheric discharge, most other explanations become increasingly less likely. Thus, the suggested instrument and method of VELEX incorporates multiple spacecraft. With this approach, the question about the existence of lightning on Venus could be settled. T3 - Raumfahrttechnik und Extraterrestrik - 3 KW - Venus KW - Lightning KW - CubeSat KW - Balloon KW - Autorotation Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-282481 ER - TY - JOUR A1 - Wamser, Florian A1 - Seufert, Anika A1 - Hall, Andrew A1 - Wunderer, Stefan A1 - Hoßfeld, Tobias T1 - Valid statements by the crowd: statistical measures for precision in crowdsourced mobile measurements JF - Network N2 - Crowdsourced network measurements (CNMs) are becoming increasingly popular as they assess the performance of a mobile network from the end user's perspective on a large scale. Here, network measurements are performed directly on the end-users' devices, thus taking advantage of the real-world conditions end-users encounter. However, this type of uncontrolled measurement raises questions about its validity and reliability. The problem lies in the nature of this type of data collection. In CNMs, mobile network subscribers are involved to a large extent in the measurement process, and collect data themselves for the operator. The collection of data on user devices in arbitrary locations and at uncontrolled times requires means to ensure validity and reliability. To address this issue, our paper defines concepts and guidelines for analyzing the precision of CNMs; specifically, the number of measurements required to make valid statements. In addition to the formal definition of the aspect, we illustrate the problem and use an extensive sample data set to show possible assessment approaches. This data set consists of more than 20.4 million crowdsourced mobile measurements from across France, measured by a commercial data provider.
KW - mobile networks KW - crowdsourced measurements KW - statistical validity Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-284154 SN - 2673-8732 VL - 1 IS - 2 SP - 215 EP - 232 ER - TY - RPRT A1 - Grigorjew, Alexej A1 - Schumann, Lukas Kilian A1 - Diederich, Philip A1 - Hoßfeld, Tobias A1 - Kellerer, Wolfgang T1 - Understanding the Performance of Different Packet Reception and Timestamping Methods in Linux T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - This document briefly presents some renowned packet reception techniques for network packets in Linux systems. Further, it compares their performance when measuring packet timestamps with respect to throughput and accuracy. Both software and hardware timestamps are compared, and various parameters are examined, including frame size, link speed, network interface card, and CPU load. The results indicate that hardware timestamping offers significantly better accuracy with no downsides, and that packet reception techniques that avoid system calls offer superior measurement throughput. KW - packet reception method KW - timestamping method KW - Linux Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322064 ER - TY - THES A1 - Somody, Joseph Christian Campbell T1 - Leveraging deep learning for identification and structural determination of novel protein complexes from \(in\) \(situ\) electron cryotomography of \(Mycoplasma\) \(pneumoniae\) T1 - Tiefenlernen als Werkzeug zur Identifizierung und Strukturbestimmung neuer Proteinkomplexe aus der \(in\)-\(situ\)-Elektronenkryotomographie von \(Mycoplasma\) \(pneumoniae\) N2 - The holy grail of structural biology is to study a protein in situ, and this goal has been fast approaching since the resolution revolution and the achievement of atomic resolution. A cell's interior is not a dilute environment, and proteins have evolved to fold and function as needed in that environment; as such, an investigation of a cellular component should ideally include the full complexity of the cellular environment. Imaging whole cells in three dimensions using electron cryotomography is the best method to accomplish this goal, but it comes with a limitation on sample thickness and produces noisy data unamenable to direct analysis. This thesis establishes a novel workflow to systematically analyse whole-cell electron cryotomography data in three dimensions and to find and identify instances of protein complexes in the data to set up a determination of their structure and identity for success. Mycoplasma pneumoniae is a very small parasitic bacterium with fewer than 700 protein-coding genes, is thin enough and small enough to be imaged in large quantities by electron cryotomography, and can grow directly on the grids used for imaging, making it ideal for exploratory studies in structural proteomics. As part of the workflow, a methodology for training deep-learning-based particle-picking models is established. As a proof of principle, a dataset of whole-cell Mycoplasma pneumoniae tomograms is used with this workflow to characterize a novel membrane-associated complex observed in the data. Ultimately, 25431 such particles are picked from 353 tomograms and refined to a density map with a resolution of 11 Å. 
Making good use of orthogonal datasets to filter search space and verify results, structures were predicted for candidate proteins and checked for suitable fit in the density map. In the end, with this approach, nine proteins were found to be part of the complex, which appears to be associated with chaperone activity and interact with translocon machinery. Visual proteomics refers to the ultimate potential of in situ electron cryotomography: the comprehensive interpretation of tomograms. The workflow presented here is demonstrated to help in reaching that potential. N2 - Der heilige Gral der Strukturbiologie ist die Untersuchung eines Proteins in situ, und dieses Ziel ist seit der Auflösungsrevolution und dem Erreichen der atomaren Auflösung in greifbare Nähe gerückt. Das Innere einer Zelle ist keine verdünnte Umgebung, und Proteine haben sich so entwickelt, dass sie sich falten und so funktionieren, wie es in dieser Umgebung erforderlich ist; daher sollte die Untersuchung einer zellulären Komponente idealerweise die gesamte Komplexität der zellulären Umgebung umfassen. Die Abbildung ganzer Zellen in drei Dimensionen mit Hilfe der Elektronenkryotomographie ist die beste Methode, um dieses Ziel zu erreichen, aber sie ist mit einer Beschränkung der Probendicke verbunden und erzeugt verrauschte Daten, die sich nicht für eine direkte Analyse eignen. In dieser Dissertation wird ein neuartiger Workflow zur systematischen dreidimensionalen Analyse von Ganzzell-Elektronenkryotomographiedaten und zur Auffindung und Identifizierung von Proteinkomplexen in diesen Daten entwickelt, um eine erfolgreiche Bestimmung ihrer Struktur und Identität zu ermöglichen. Mycoplasma pneumoniae ist ein sehr kleines parasitäres Bakterium mit weniger als 700 proteinkodierenden Genen. Es ist dünn und klein genug, um in grossen Mengen durch Elektronenkryotomographie abgebildet zu werden, und kann direkt auf den für die Abbildung verwendeten Gittern wachsen, was es ideal für Sondierungsstudien in der strukturellen Proteomik macht. Als Teil des Workflows wird eine Methodik für das Training von Deep-Learning-basierten Partikelpicken-Modellen entwickelt. Als Proof-of-Principle wird ein Dataset von Ganzzell-Tomogrammen von Mycoplasma pneumoniae mit diesem Workflow verwendet, um einen neuartigen membranassoziierten Komplex zu charakterisieren, der in den Daten beobachtet wurde. Insgesamt wurden 25431 solcher Partikel aus 353 Tomogrammen gepickt und zu einer Dichtekarte mit einer Auflösung von 11 Å verfeinert. Unter Verwendung orthogonaler Datensätze zur Filterung des Suchraums und zur Überprüfung der Ergebnisse wurden Strukturen für Protein-Kandidaten vorhergesagt und auf ihre Eignung für die Dichtekarte überprüft. Letztendlich wurden mit diesem Ansatz neun Proteine als Bestandteile des Komplexes gefunden, der offenbar mit der Chaperonaktivität in Verbindung steht und mit der Translocon-Maschinerie interagiert. Das ultimative Potenzial der In-situ-Elektronenkryotomographie – die umfassende Interpretation von Tomogrammen – wird als visuelle Proteomik bezeichnet. Der hier vorgestellte Workflow soll dabei helfen, dieses Potenzial auszuschöpfen. 
KW - Kryoelektronenmikroskopie KW - Tomografie KW - Mycoplasma pneumoniae KW - Deep learning KW - cryo-EM KW - cryo-ET KW - tomography KW - mycoplasma KW - pneumoniae KW - deep learning KW - particle picking KW - membrane protein KW - visual proteomics Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-313447 ER - TY - RPRT A1 - Elsayed, Karim A1 - Rizk, Amr T1 - Response Times in Time-to-Live Caching Hierarchies under Random Network Delays T2 - Würzburg Workshop on Next-Generation Communication Networks (WueWoWas'22) N2 - Time-to-Live (TTL) caches decouple the occupancy of objects in cache through object-specific validity timers. State-of-the-art techniques provide exact methods for the calculation of object-specific hit probabilities given entire cache hierarchies with random inter-cache network delays. The system hit probability is a provider-centric metric as it relates to the origin offload, i.e., the decrease in the number of requests that are served by the content origin server. In this paper we consider a user-centric metric, i.e., the response time, which is shown to be structurally different from the system hit probability. Equipped with the state-of-the-art exact modeling technique using Markov-arrival processes we derive expressions for the expected object response time and pave a way for its optimization under network delays. KW - Datennetz KW - TTL Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-280843 ER - TY - RPRT A1 - Alfredsson, Rebecka A1 - Kassler, Andreas A1 - Vestin, Jonathan A1 - Pieska, Marcus A1 - Amend, Markus T1 - Accelerating a Transport Layer based 5G Multi-Access Proxy on SmartNIC T2 - Würzburg Workshop on Next-Generation Communication Networks (WueWoWas'22) N2 - Utilizing multiple access technologies such as 5G, 4G, and Wi-Fi within a coherent framework is currently standardized by 3GPP within 5G ATSSS. Indeed, distributing packets over multiple networks can lead to increased robustness, resiliency and capacity. A key part of such a framework is the multi-access proxy, which transparently distributes packets over multiple paths. As the proxy needs to serve thousands of customers, scalability and performance are crucial for operator deployments. In this paper, we leverage recent advancements in data plane programming, implement a multi-access proxy based on the MP-DCCP tunneling approach in P4 and hardware accelerate it by deploying the pipeline on a smartNIC. This is challenging due to the complex scheduling and congestion control operations involved. We present our pipeline and data structures design for congestion control and packet scheduling state management. Initial measurements in our testbed show that packet latency is in the range of 25 μs demonstrating the feasibility of our approach.
KW - Datennetz KW - multipath KW - MP-DCCP KW - 5G-ATSSS KW - networking KW - dataplane programming KW - P4 Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-280798 ER - TY - JOUR A1 - Bencurova, Elena A1 - Shityakov, Sergey A1 - Schaack, Dominik A1 - Kaltdorf, Martin A1 - Sarukhanyan, Edita A1 - Hilgarth, Alexander A1 - Rath, Christin A1 - Montenegro, Sergio A1 - Roth, Günter A1 - Lopez, Daniel A1 - Dandekar, Thomas T1 - Nanocellulose composites as smart devices with chassis, light-directed DNA Storage, engineered electronic properties, and chip integration JF - Frontiers in Bioengineering and Biotechnology N2 - The rapid development of green and sustainable materials opens up new possibilities in the field of applied research. Such materials include nanocellulose composites that can integrate many components into composites and provide a good chassis for smart devices. In our study, we evaluate four approaches for turning a nanocellulose composite into an information storage or processing device: 1) nanocellulose can be a suitable carrier material and protect information stored in DNA. 2) Nucleotide-processing enzymes (polymerase and exonuclease) can be controlled by light after fusing them with light-gating domains; nucleotide substrate specificity can be changed by mutation or pH change (read-in and read-out of the information). 3) Semiconductors and electronic capabilities can be achieved: we show that nanocellulose is rendered electronic by iodine treatment replacing silicon including microstructures. Nanocellulose semiconductor properties are measured, and the resulting potential including single-electron transistors (SET) and their properties are modeled. Electric current can also be transported by DNA through G-quadruplex DNA molecules; these as well as classical silicon semiconductors can easily be integrated into the nanocellulose composite. 4) To elaborate upon miniaturization and integration for a smart nanocellulose chip device, we demonstrate pH-sensitive dyes in nanocellulose, nanopore creation, and kinase micropatterning on bacterial membranes as well as digital PCR micro-wells. Future application potential includes nano-3D printing and fast molecular processors (e.g., SETs) integrated with DNA storage and conventional electronics. This would also lead to environment-friendly nanocellulose chips for information processing as well as smart nanocellulose composites for biomedical applications and nano-factories. KW - nanocellulose KW - DNA storage KW - light-gated proteins KW - single-electron transistors KW - protein chip Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-283033 SN - 2296-4185 VL - 10 ER - TY - JOUR A1 - Krenzer, Adrian A1 - Makowski, Kevin A1 - Hekalo, Amar A1 - Fitting, Daniel A1 - Troya, Joel A1 - Zoller, Wolfram G. A1 - Hann, Alexander A1 - Puppe, Frank T1 - Fast machine learning annotation in the medical domain: a semi-automated video annotation tool for gastroenterologists JF - BioMedical Engineering OnLine N2 - Background Machine learning, especially deep learning, is becoming more and more relevant in research and development in the medical domain. For all the supervised deep learning applications, data is the most critical factor in securing successful implementation and sustaining the progress of the machine learning model. Especially gastroenterological data, which often involves endoscopic videos, are cumbersome to annotate. Domain experts are needed to interpret and annotate the videos. 
To support those domain experts, we generated a framework. With this framework, instead of annotating every frame in the video sequence, experts are just performing key annotations at the beginning and the end of sequences with pathologies, e.g., visible polyps. Subsequently, non-expert annotators supported by machine learning add the missing annotations for the frames in-between. Methods In our framework, an expert reviews the video and annotates a few video frames to verify the object’s annotations for the non-expert. In a second step, a non-expert has visual confirmation of the given object and can annotate all following and preceding frames with AI assistance. After the expert has finished, relevant frames will be selected and passed on to an AI model. This information allows the AI model to detect and mark the desired object on all following and preceding frames with an annotation. Therefore, the non-expert can adjust and modify the AI predictions and export the results, which can then be used to train the AI model. Results Using this framework, we were able to reduce the workload of domain experts on average by a factor of 20 on our data. This is primarily due to the structure of the framework, which is designed to minimize the workload of the domain expert. Pairing this framework with a state-of-the-art semi-automated AI model enhances the annotation speed further. Through a prospective study with 10 participants, we show that semi-automated annotation using our tool doubles the annotation speed of non-expert annotators compared to a well-known state-of-the-art annotation tool. Conclusion In summary, we introduce a framework for fast expert annotation for gastroenterologists, which reduces the workload of the domain expert considerably while maintaining a very high annotation quality. The framework incorporates a semi-automated annotation system utilizing trained object detection models. The software and framework are open-source. KW - object detection KW - machine learning KW - deep learning KW - annotation KW - endoscopy KW - gastroenterology KW - automation Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-300231 VL - 21 IS - 1 ER - TY - JOUR A1 - Kaltdorf, Kristin Verena A1 - Schulze, Katja A1 - Helmprobst, Frederik A1 - Kollmannsberger, Philip A1 - Dandekar, Thomas A1 - Stigloher, Christian T1 - Fiji macro 3D ART VeSElecT: 3D automated reconstruction tool for vesicle structures of electron tomograms JF - PLoS Computational Biology N2 - Automatic image reconstruction is critical to cope with steadily increasing data from advanced microscopy. We describe here the Fiji macro 3D ART VeSElecT which we developed to study synaptic vesicles in electron tomograms. We apply this tool to quantify vesicle properties (i) in embryonic Danio rerio 4 and 8 days post fertilization (dpf) and (ii) to compare Caenorhabditis elegans N2 neuromuscular junctions (NMJ) wild-type and its septin mutant (unc-59(e261)). We demonstrate development-specific and mutant-specific changes in synaptic vesicle pools in both models. We confirm the functionality of our macro by applying our 3D ART VeSElecT on zebrafish NMJ showing smaller vesicles in 8 dpf embryos than in 4 dpf embryos, which was validated by manual reconstruction of the vesicle pool. Furthermore, we analyze the impact of C. elegans septin mutant unc-59(e261) on vesicle pool formation and vesicle size. Automated vesicle registration and characterization was implemented in Fiji as two macros (registration and measurement).
This flexible arrangement allows in particular reducing false positives by an optional manual revision step. Preprocessing and contrast enhancement work on image-stacks of 1nm/pixel in x and y direction. Semi-automated cell selection was integrated. 3D ART VeSElecT removes interfering components, detects vesicles by 3D segmentation and calculates vesicle volume and diameter (spherical approximation, inner/outer diameter). Results are collected in color using the RoiManager plugin including the possibility of manual removal of non-matching confounder vesicles. Detailed evaluation considered performance (detected vesicles) and specificity (true vesicles) as well as precision and recall. We furthermore show gain in segmentation and morphological filtering compared to learning based methods and a large time gain compared to manual segmentation. 3D ART VeSElecT shows small error rates and its speed gain can be up to 68 times faster in comparison to manual annotation. Both automatic and semi-automatic modes are explained including a tutorial. KW - Biology KW - Vesicles KW - Caenorhabditis elegans KW - Zebrafish KW - Septins KW - Synaptic vesicles KW - Neuromuscular junctions KW - Computer software KW - Synapses Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-172112 VL - 13 IS - 1 ER - TY - JOUR A1 - Döllinger, Nina A1 - Wienrich, Carolin A1 - Latoschik, Marc Erich T1 - Challenges and opportunities of immersive technologies for mindfulness meditation: a systematic review JF - Frontiers in Virtual Reality N2 - Mindfulness is considered an important factor of an individual's subjective well-being. Consequently, Human-Computer Interaction (HCI) has investigated approaches that strengthen mindfulness, i.e., by inventing multimedia technologies to support mindfulness meditation. These approaches often use smartphones, tablets, or consumer-grade desktop systems to allow everyday usage in users' private lives or in the scope of organized therapies. Virtual, Augmented, and Mixed Reality (VR, AR, MR; in short: XR) significantly extend the design space for such approaches. XR covers a wide range of potential sensory stimulation, perceptive and cognitive manipulations, content presentation, interaction, and agency. These facilities are linked to typical XR-specific perceptions that are conceptually closely related to mindfulness research, such as (virtual) presence and (virtual) embodiment. However, a successful exploitation of XR that strengthens mindfulness requires a systematic analysis of the potential interrelation and influencing mechanisms between XR technology, its properties, factors, and phenomena and existing models and theories of the construct of mindfulness. This article reports such a systematic analysis of XR-related research from HCI and life sciences to determine the extent to which existing research frameworks on HCI and mindfulness can be applied to XR technologies, the potential of XR technologies to support mindfulness, and open research gaps. Fifty papers of ACM Digital Library and National Institutes of Health's National Library of Medicine (PubMed) with and without empirical efficacy evaluation were included in our analysis. The results reveal that at the current time, empirical research on XR-based mindfulness support mainly focuses on therapy and therapeutic outcomes. Furthermore, most of the currently investigated XR-supported mindfulness interactions are limited to vocally guided meditations within nature-inspired virtual environments. 
While an analysis of empirical research on those systems did not reveal differences in mindfulness compared to non-mediated mindfulness practices, various design proposals illustrate that XR has the potential to provide interactive and body-based innovations for mindfulness practice. We propose a structured approach for future work to specify and further explore the potential of XR as mindfulness-support. The resulting framework provides design guidelines for XR-based mindfulness support based on the elements and psychological mechanisms of XR interactions. KW - virtual reality KW - augmented reality KW - mindfulness KW - XR KW - meditation Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-259047 VL - 2 ER - TY - JOUR A1 - Prakash, Subash A1 - Unnikrishnan, Vishnu A1 - Pryss, Rüdiger A1 - Kraft, Robin A1 - Schobel, Johannes A1 - Hannemann, Ronny A1 - Langguth, Berthold A1 - Schlee, Winfried A1 - Spiliopoulou, Myra T1 - Interactive system for similarity-based inspection and assessment of the well-being of mHealth users JF - Entropy N2 - Recent digitization technologies empower mHealth users to conveniently record their Ecological Momentary Assessments (EMA) through web applications, smartphones, and wearable devices. These recordings can help clinicians understand how the users' condition changes, but appropriate learning and visualization mechanisms are required for this purpose. We propose a web-based visual analytics tool, which processes clinical data as well as EMAs that were recorded through a mHealth application. The goals we pursue are (1) to predict the condition of the user in the near and the far future, while also identifying the clinical data that mostly contribute to EMA predictions, (2) to identify users with outlier EMA, and (3) to show to what extent the EMAs of a user are in line with or diverge from those users similar to him/her. We report our findings based on a pilot study on patient empowerment, involving tinnitus patients who recorded EMAs with the mHealth app TinnitusTips. To validate our method, we also derived synthetic data from the same pilot study. Based on this setting, results for different use cases are reported. KW - medical analytics KW - condition prediction KW - ecological momentary assessment KW - visual analytics KW - time series Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-252333 SN - 1099-4300 VL - 23 IS - 12 ER - TY - JOUR A1 - Kraft, Robin A1 - Birk, Ferdinand A1 - Reichert, Manfred A1 - Deshpande, Aniruddha A1 - Schlee, Winfried A1 - Langguth, Berthold A1 - Baumeister, Harald A1 - Probst, Thomas A1 - Spiliopoulou, Myra A1 - Pryss, Rüdiger T1 - Efficient processing of geospatial mHealth data using a scalable crowdsensing platform JF - Sensors N2 - Smart sensors and smartphones are becoming increasingly prevalent. Both can be used to gather environmental data (e.g., noise). Importantly, these devices can be connected to each other as well as to the Internet to collect large amounts of sensor data, which leads to many new opportunities. In particular, mobile crowdsensing techniques can be used to capture phenomena of common interest. Especially valuable insights can be gained if the collected data are additionally related to the time and place of the measurements. However, many technical solutions still use monolithic backends that are not capable of processing crowdsensing data in a flexible, efficient, and scalable manner. 
In this work, an architectural design was conceived with the goal of managing geospatial data in challenging crowdsensing healthcare scenarios. It will be shown how the proposed approach can be used to provide users with an interactive map of environmental noise, allowing tinnitus patients and other health-conscious people to avoid locations with harmful sound levels. Technically, the presented approach combines cloud-native applications with Big Data and stream processing concepts. In general, the presented architectural design shall serve as a foundation to implement practical and scalable crowdsensing platforms for various healthcare scenarios beyond the addressed use case. KW - mHealth KW - crowdsensing KW - tinnitus KW - geospatial data KW - cloud-native KW - stream processing KW - scalability KW - architectural design Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-207826 SN - 1424-8220 VL - 20 IS - 12 ER - TY - JOUR A1 - Klemz, Boris A1 - Rote, Günter T1 - Linear-Time Algorithms for Maximum-Weight Induced Matchings and Minimum Chain Covers in Convex Bipartite Graphs JF - Algorithmica N2 - A bipartite graph G=(U,V,E) is convex if the vertices in V can be linearly ordered such that for each vertex u∈U, the neighbors of u are consecutive in the ordering of V. An induced matching H of G is a matching for which no edge of E connects endpoints of two different edges of H. We show that in a convex bipartite graph with n vertices and m weighted edges, an induced matching of maximum total weight can be computed in O(n+m) time. An unweighted convex bipartite graph has a representation of size O(n) that records for each vertex u∈U the first and last neighbor in the ordering of V. Given such a compact representation, we compute an induced matching of maximum cardinality in O(n) time. In convex bipartite graphs, maximum-cardinality induced matchings are dual to minimum chain covers. A chain cover is a covering of the edge set by chain subgraphs, that is, subgraphs that do not contain induced matchings of more than one edge. Given a compact representation, we compute a representation of a minimum chain cover in O(n) time. If no compact representation is given, the cover can be computed in O(n+m) time. All of our algorithms achieve optimal linear running time for the respective problem and model, and they improve and generalize the previous results in several ways: The best algorithms for the unweighted problem versions had a running time of O(n\(^{2}\)) (Brandstädt et al. in Theor. Comput. Sci. 381(1–3):260–265, 2007. https://doi.org/10.1016/j.tcs.2007.04.006). The weighted case has not been considered before. KW - dynamic programming KW - graph algorithm KW - induced matching KW - chain cover KW - convex bipartite graph KW - certifying algorithm Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-267876 SN - 1432-0541 VL - 84 IS - 4 ER - TY - JOUR A1 - Davidson, Padraig A1 - Düking, Peter A1 - Zinner, Christoph A1 - Sperlich, Billy A1 - Hotho, Andreas T1 - Smartwatch-Derived Data and Machine Learning Algorithms Estimate Classes of Ratings of Perceived Exertion in Runners: A Pilot Study JF - Sensors N2 - The rating of perceived exertion (RPE) is a subjective load marker and may assist in individualizing training prescription, particularly by adjusting running intensity. Unfortunately, RPE has shortcomings (e.g., underreporting) and cannot be monitored continuously and automatically throughout a training session.
In this pilot study, we aimed to predict two classes of RPE (≤15 “Somewhat hard to hard” on Borg’s 6–20 scale vs. RPE >15) in runners by analyzing data recorded by a commercially available smartwatch with machine learning algorithms. Twelve trained and untrained runners performed long-continuous runs at a constant self-selected pace to volitional exhaustion. Untrained runners reported their RPE each kilometer, whereas trained runners reported every five kilometers. The kinetics of heart rate, step cadence, and running velocity were recorded continuously (1 Hz) with a commercially available smartwatch (Polar V800). We trained different machine learning algorithms to estimate the two classes of RPE based on the time series sensor data derived from the smartwatch. Predictions were analyzed in different settings: accuracy overall and per runner type; i.e., accuracy for trained and untrained runners independently. We achieved top accuracies of 84.8% for the whole dataset, 81.8% for the trained runners, and 86.1% for the untrained runners. We predict two classes of RPE with high accuracy using machine learning and smartwatch data. This approach might aid in individualizing training prescriptions. KW - artificial intelligence KW - endurance KW - exercise intensity KW - precision training KW - prediction KW - wearable Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-205686 SN - 1424-8220 VL - 20 IS - 9 ER - TY - RPRT ED - Hoßfeld, Tobias ED - Wunderer, Stefan T1 - White Paper on Crowdsourced Network and QoE Measurements – Definitions, Use Cases and Challenges N2 - The goal of the white paper at hand is as follows. The definitions of the terms build a framework for discussions around the hype topic ‘crowdsourcing’. This serves as a basis for differentiation and a consistent view from different perspectives on crowdsourced network measurements, with the goal of providing a commonly accepted definition in the community. The focus is on the context of mobile and fixed network operators, but also on measurements of different layers (network, application, user layer). In addition, the white paper shows the value of crowdsourcing for selected use cases, e.g., to improve QoE or regulatory issues. Finally, the major challenges and issues for researchers and practitioners are highlighted. This white paper is the outcome of the Würzburg seminar on “Crowdsourced Network and QoE Measurements”, which took place from 25-26 September 2019 in Würzburg, Germany. International experts were invited from industry and academia. They are well known in their communities, having different backgrounds in crowdsourcing, mobile networks, network measurements, network performance, Quality of Service (QoS), and Quality of Experience (QoE). The discussions in the seminar focused on how crowdsourcing will support vendors, operators, and regulators to determine the Quality of Experience in new 5G networks that enable various new applications and network architectures. As a result of the discussions, the need for a white paper manifested, with the goal of providing a scientific discussion of the terms “crowdsourced network measurements” and “crowdsourced QoE measurements”, describing relevant use cases for such crowdsourced data, and its underlying challenges. During the seminar, those main topics were identified, intensively discussed in break-out groups, and brought back into the plenum several times.
The outcome of the seminar is this white paper at hand, which is – to our knowledge – the first one covering the topic of crowdsourced network and QoE measurements. KW - Crowdsourcing KW - Network Measurements KW - Quality of Service (QoS) KW - Quality of Experience (QoE) KW - crowdsourced network measurements KW - crowdsourced QoE measurements Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-202327 ER - TY - JOUR A1 - Pawellek, Ruben A1 - Krmar, Jovana A1 - Leistner, Adrian A1 - Djajić, Nevena A1 - Otašević, Biljana A1 - Protić, Ana A1 - Holzgrabe, Ulrike T1 - Charged aerosol detector response modeling for fatty acids based on experimental settings and molecular features: a machine learning approach JF - Journal of Cheminformatics N2 - The charged aerosol detector (CAD) is the latest representative of aerosol-based detectors that generate a response independent of the analytes' chemical structure. This study was aimed at accurately predicting the CAD response of homologous fatty acids under varying experimental conditions. Fatty acids from C12 to C18 were used as model substances due to semivolatile characteristics that caused non-uniform CAD behaviour. Considering both experimental conditions and molecular descriptors, a mixed quantitative structure-property relationship (QSPR) modeling was performed using Gradient Boosted Trees (GBT). The ensemble of 10 decision trees (learning rate set at 0.55, the maximal depth set at 5, and the sample rate set at 1.0) was able to explain approximately 99% (Q\(^2\): 0.987, RMSE: 0.051) of the observed variance in CAD responses. Validation using an external test compound confirmed the high predictive ability of the model established (R\(^2\): 0.990, RMSEP: 0.050). With respect to the intrinsic attribute selection strategy, GBT used almost all independent variables during model building. Finally, it attributed the highest importance to the power function value, the flow rate of the mobile phase, evaporation temperature, the content of the organic solvent in the mobile phase and the molecular descriptors such as molecular weight (MW), Radial Distribution Function-080/weighted by mass (RDF080m) and average coefficient of the last eigenvector from distance/detour matrix (Ve2_D/Dt). The identification of the factors most relevant to the CAD responsiveness has contributed to a better understanding of the underlying mechanisms of signal generation. An increased CAD response that was obtained for acetone as organic modifier demonstrated its potential to replace the more expensive and environmentally harmful acetonitrile. KW - High-performance liquid chromatography (HPLC) KW - Charged aerosol detector (CAD) KW - Gradient boosted trees (GBT) KW - Quantitative structure-property relationship modeling (QSPR) KW - Fatty acids Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-261618 VL - 13 IS - 1 ER - TY - JOUR A1 - Unruh, Fabian A1 - Landeck, Maximilian A1 - Oberdörfer, Sebastian A1 - Lugrin, Jean-Luc A1 - Latoschik, Marc Erich T1 - The Influence of Avatar Embodiment on Time Perception - Towards VR for Time-Based Therapy JF - Frontiers in Virtual Reality N2 - Psycho-pathological conditions, such as depression or schizophrenia, are often accompanied by a distorted perception of time. People suffering from these conditions often report that the passage of time slows down considerably and that they are “stuck in time.” Virtual Reality (VR) could potentially help to diagnose and maybe treat such mental conditions.
However, the conditions in which a VR simulation could correctly diagnose a time perception deviation are still unknown. In this paper, we present an experiment investigating the difference in time experience with and without a virtual body in VR, also known as an avatar. The process of substituting a person’s body with a virtual body is called avatar embodiment. Numerous studies demonstrated interesting perceptual, emotional, behavioral, and psychological effects caused by avatar embodiment. However, the relations between time perception and avatar embodiment are still unclear. Whether or not the presence or absence of an avatar is already influencing time perception is still open to question. Therefore, we conducted a between-subjects experiment with and without avatar embodiment as well as a real condition (avatar vs. no-avatar vs. real). A group of 105 healthy subjects had to wait for seven and a half minutes in a room without any distractors (e.g., no window, magazine, people, decoration) or time indicators (e.g., clocks, sunlight). The virtual environment replicates the real physical environment. Participants were unaware that they would be asked to estimate their waiting time duration as well as to describe their experience of the passage of time at a later stage. Our main finding shows that the presence of an avatar leads to a significantly faster perceived passage of time. It seems promising to integrate avatar embodiment in future VR time-based therapy applications, as they could potentially modulate a user’s perception of the passage of time. We also found no significant difference in time perception between the real and the VR conditions (avatar, no-avatar), but further research is needed to better understand this outcome. KW - virtual reality KW - time perception KW - avatar embodiment KW - immersion KW - human computer interaction (HCI) Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-259076 VL - 2 ER - TY - THES A1 - Huber, Stephan T1 - Proxemo: Documenting Observed Emotions in HCI T1 - Proxemo: Die Dokumentation Beobachteter Emotionen in der Mensch-Computer-Interaktion N2 - For formative evaluations of user experience (UX), a variety of methods have been developed over the years. However, most techniques require the users to interact with the study as a secondary task. This active involvement in the evaluation is not inclusive of all users and potentially biases the experience currently being studied. Yet there is a lack of methods for situations in which the user has no spare cognitive resources. This condition occurs when 1) users' cognitive abilities are impaired (e.g., people with dementia) or 2) users are confronted with very demanding tasks (e.g., air traffic controllers). In this work we focus on emotions as a key component of UX and propose the new structured observation method Proxemo for formative UX evaluations. Proxemo allows qualified observers to document users' emotions by proxy in real time and then directly link them to triggers. Technically this is achieved by synchronising the timestamps of emotions documented by observers with a video recording of the interaction. In order to facilitate the documentation of observed emotions in highly diverse contexts we conceptualise and implement two separate versions of a documentation aid named Proxemo App.
For formative UX evaluations of technology-supported reminiscence sessions with people with dementia, we create a smartwatch app to discreetly document emotions from the categories anger, general alertness, pleasure, wistfulness and pride. For formative UX evaluations of prototypical user interfaces with air traffic controllers, we create a smartphone app to efficiently document emotions from the categories anger, boredom, surprise, stress and pride. Descriptive case studies in both application domains indicate the feasibility and utility of the method Proxemo and the appropriateness of the respectively adapted design of the Proxemo App. The third part of this work is a series of meta-evaluation studies to determine quality criteria of Proxemo. We evaluate Proxemo regarding its reliability, validity, thoroughness and effectiveness, and compare Proxemo's efficiency and the observers' experience to documentation with pen and paper. Proxemo is reliable, as well as more efficient, thorough and effective than handwritten notes and provides a better UX to observers. Proxemo compares well with existing methods where benchmarks are available. With Proxemo we contribute a validated structured observation method that has been shown to meet the requirements of formative UX evaluations in the extreme contexts of users with cognitive impairments or high task demands. Proxemo is agnostic regarding researchers' theoretical approaches and unites reductionist and holistic perspectives within one method. Future work should explore the applicability of Proxemo for further domains and extend the list of audited quality criteria to include, for instance, downstream utility. With respect to basic research we strive to better understand the sources leading observers to empathic judgments and propose reminiscence and older adults as a model environment for investigating mixed emotions. N2 - Für formative Evaluationen der User Experience (UX) wurden im Laufe der Jahre zahlreiche Methoden entwickelt. Die meisten Methoden erfordern jedoch, dass die Benutzer als Nebenaufgabe mit der Studie interagieren. Diese aktive Beteiligung an der Evaluation kann das untersuchte Erlebnis verfälschen und schließt Benutzer komplett aus, die keine kognitiven Ressourcen zur Verfügung haben. Dies ist der Fall, wenn 1) die kognitiven Fähigkeiten der Benutzer beeinträchtigt sind (z. B. Menschen mit Demenz) oder 2) Benutzer mit sehr anspruchsvollen Aufgaben konfrontiert sind (z. B. Fluglotsen). In dieser Arbeit konzentrieren wir uns auf Emotionen als eine Schlüsselkomponente von UX und schlagen die neue strukturierte Beobachtungsmethode Proxemo für formative UX-Evaluationen vor. Proxemo ermöglicht es qualifizierten Beobachtern, die Emotionen der Nutzer in Echtzeit zu dokumentieren und sie direkt mit Auslösern zu verknüpfen. Technisch wird dies erreicht, indem die Zeitstempel der von den Beobachtern dokumentierten Emotionen mit einer Videoaufzeichnung der Interaktion synchronisiert werden. Um die Dokumentation von beobachteten Emotionen in sehr unterschiedlichen Kontexten zu erleichtern, konzipieren und implementieren wir zwei verschiedene Versionen einer Dokumentationshilfe namens Proxemo App. Für formative UX-Evaluationen von technologiegestützten Erinnerungssitzungen mit Menschen mit Demenz erstellen wir eine Smartwatch-App zur unauffälligen Dokumentation von Emotionen aus den Kategorien Ärger, allgemeine Wachsamkeit, Freude, Wehmut und Stolz.
Für formative UX-Evaluationen prototypischer Nutzerschnittstellen mit Fluglotsen erstellen wir eine Smartphone-App zur effizienten Dokumentation von Emotionen aus den Kategorien Ärger, Langeweile, Überraschung, Stress und Stolz. Deskriptive Fallstudien in beiden Anwendungsfeldern zeigen die Machbarkeit und den Nutzen der Methode Proxemo und die Angemessenheit des jeweiligen Designs der Proxemo App. Der dritte Teil dieser Arbeit besteht aus einer Reihe von Meta-Evaluationsstudien zu den Gütekriterien von Proxemo. Wir evaluieren Proxemo hinsichtlich der Reliabilität, Validität, Gründlichkeit und Effektivität, und vergleichen die Effizienz von Proxemo und die UX der Beobachter mit der Dokumentation mit Stift und Papier. Proxemo ist reliabel, sowie effizienter, gründlicher und effektiver als handschriftliche Notizen und bietet den Beobachtern eine bessere UX. Proxemo schneidet gut ab im Vergleich zu bestehenden Methoden, für die Benchmarks verfügbar sind. Mit Proxemo stellen wir eine validierte, strukturierte Beobachtungsmethode vor, die nachweislich den Anforderungen formativer UX Evaluationen in den extremen Kontexten von Benutzern mit kognitiven Beeinträchtigungen oder hohen Aufgabenanforderungen gerecht wird. Proxemo ist agnostisch bezüglich der theoretischen Ansätze von Forschenden und vereint reduktionistische und ganzheitliche Perspektiven in einer Methode. Zukünftige Arbeiten sollten die Anwendbarkeit von Proxemo für weitere Domänen erkunden und die Liste der geprüften Gütekriterien erweitern, zum Beispiel um das Kriterium Downstream Utility. In Bezug auf die Grundlagenforschung werden wir versuchen, die Quellen besser zu verstehen, auf denen die empathischen Urteile der Beobachter fußen, und schlagen Erinnerungen und ältere Erwachsene als Modellumgebung für die künftige Erforschung gemischter Emotionen vor. KW - Gefühl KW - Wissenschaftliche Beobachtung KW - Methode KW - Benutzererlebnis KW - Benutzerforschung KW - Emotionserkennung KW - Emotion inference KW - Emotionsinterpretation Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-305730 ER - TY - THES A1 - Geißler, Stefan T1 - Performance Evaluation of Next-Generation Data Plane Architectures and their Components T1 - Leistungsbewertung von Data Plane Architekturen der Nächsten Generation sowie ihrer Einzelkomponenten N2 - In this doctoral thesis we cover the performance evaluation of next generation data plane architectures, comprised of complex software as well as programmable hardware components that allow fine-granular configuration. In the scope of the thesis we propose mechanisms to monitor the performance of singular components and model key performance indicators of software-based packet processing solutions. We present novel approaches towards network abstraction that allow the integration of heterogeneous data plane technologies into a singular network while maintaining total transparency between control and data plane. Finally, we investigate a full, complex system consisting of multiple software-based solutions and perform a detailed performance analysis. We employ simulative approaches to investigate overload control mechanisms that allow efficient operation under adverse conditions. The contributions of this work build the foundation for future research in the areas of network softwarization and network function virtualization.
N2 - Diese Doktorarbeit behandelt die Leistungsbewertung von Data Plane Architekturen der nächsten Generation, die aus komplexen Softwarelösungen sowie programmierbaren Hardwarekomponenten bestehen. Hierbei werden Mechanismen entwickelt, die es ermöglichen, die Leistungsfähigkeit einzelner Komponenten zu messen und zentrale Leistungsindikatoren softwarebasierter Systeme zur Verarbeitung von Datenpaketen zu modellieren. Es werden neuartige Ansätze zur Netzabstraktion entworfen, die eine vollständig transparente Integration heterogener Technologien im selben Netz ermöglichen. Schließlich wird eine umfassende Leistungsbewertung eines komplexen Systems, das aus einer Vielzahl softwarebasierter Netzfunktionen besteht, durchgeführt. Anhand simulativer Modelle werden Überlastkontrollmechanismen entwickelt, die es dem System erlauben, auch unter Überlast effizient zu arbeiten. Die Beiträge dieser Arbeit bilden die Grundlage weiterer Forschungen im Bereich der Softwarisierung von Netzen sowie der Virtualisierung von Netzfunktionen. T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 02/21 KW - Leistungsbewertung KW - Simulation KW - Zeitdiskretes System KW - Implementierung KW - performance evaluation KW - simulation KW - discrete-time analysis KW - network softwarization KW - mobile networks Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-260157 SN - 1432-8801 ER - TY - JOUR A1 - Kammerer, Klaus A1 - Pryss, Rüdiger A1 - Hoppenstedt, Burkhard A1 - Sommer, Kevin A1 - Reichert, Manfred T1 - Process-driven and flow-based processing of industrial sensor data JF - Sensors N2 - For machine manufacturing companies, besides the production of high quality and reliable machines, requirements have emerged to maintain machine-related aspects through digital services. The development of such services in the field of the Industrial Internet of Things (IIoT) is dealing with solutions such as effective condition monitoring and predictive maintenance. However, appropriate data sources are needed on which digital services can be technically based. As many powerful and cheap sensors have been introduced over the last years, their integration into complex machines is promising for developing digital services for various scenarios. It is apparent that components handling the recorded data of these sensors must usually deal with large amounts of data. In particular, the labeling of raw sensor data must be furthered by a technical solution. To deal with these data handling challenges in a generic way, a sensor processing pipeline (SPP) was developed, which provides effective methods to capture, process, store, and visualize raw sensor data based on a processing chain. Based on the example of a machine manufacturing company, the SPP approach is presented in this work. For the company involved, the approach has revealed promising results.
KW - data stream processing KW - cyber-physical systems KW - processing pipeline KW - sensor networks Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-213089 SN - 1424-8220 VL - 20 IS - 18 ER - TY - JOUR A1 - Wick, Christoph A1 - Hartelt, Alexander A1 - Puppe, Frank T1 - Staff, symbol and melody detection of Medieval manuscripts written in square notation using deep Fully Convolutional Networks JF - Applied Sciences N2 - Even today, the automatic digitisation of scanned documents in general, but especially the automatic optical music recognition (OMR) of historical manuscripts, still remains an enormous challenge, since both handwritten musical symbols and text have to be identified. This paper focuses on the Medieval so-called square notation developed in the 11th–12th century, which is already composed of staff lines, staves, clefs, accidentals, and neumes that are, roughly speaking, connected single notes. The aim is to develop an algorithm that captures both the neumes and, in particular, the melody, which can be used to reconstruct the original writing. Our pipeline is similar to the standard OMR approach and comprises a novel staff line and symbol detection algorithm based on deep Fully Convolutional Networks (FCN), which perform pixel-based predictions for either staff lines or symbols and their respective types. Then, the staff line detection combines the extracted lines to staves and yields an F\(_1\)-score of over 99% for both detecting lines and complete staves. For the music symbol detection, we choose a novel approach that skips the step of identifying neumes and instead directly predicts note components (NCs) and their respective affiliation to a neume. Furthermore, the algorithm detects clefs and accidentals. Our algorithm predicts the symbol sequence of a staff with a diplomatic symbol accuracy rate (dSAR) of about 87%, which includes symbol type and location. If only the NCs without their respective connection to a neume, as well as all clefs and accidentals, are of interest, the algorithm reaches a harmonic symbol accuracy rate (hSAR) of approximately 90%. In general, the algorithm recognises a symbol in the manuscript with an F\(_1\)-score of over 96%. KW - optical music recognition KW - historical document analysis KW - medieval manuscripts KW - neume notation KW - fully convolutional neural networks Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-197248 SN - 2076-3417 VL - 9 IS - 13 ER - TY - RPRT A1 - Metzger, Florian T1 - Crowdsensed QoE for the community - a concept to make QoE assessment accessible N2 - In recent years several community testbeds as well as participatory sensing platforms have successfully established themselves to provide open data to everyone interested. Each of them has a specific goal in mind, ranging from collecting radio coverage data up to environmental and radiation data. Such data can be used by the community in their decision making, whether to subscribe to a specific mobile phone service that provides good coverage in an area or in finding a sunny and warm region for the summer holidays. However, the existing platforms are usually limiting themselves to directly measurable network QoS. If such a crowdsourced data set provides more in-depth derived measures, this would enable even better decision making.
A community-driven crowdsensing platform that derives spatial application-layer user experience from resource-friendly bandwidth estimates would be such a case; video streaming services come to mind as a prime example. In this paper, we present a concept for such a system based on an initial prototype that eases the collection of data necessary to determine mobile-specific QoE at large scale. In addition, we reason why the simple quality metric proposed here can hold its own. KW - Quality of Experience KW - Crowdsourcing KW - Crowdsensing KW - QoE Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-203748 N1 - Originally written in 2017, but never published. ER - TY - JOUR A1 - Seufert, Anika A1 - Schröder, Svenja A1 - Seufert, Michael T1 - Delivering User Experience over Networks: Towards a Quality of Experience Centered Design Cycle for Improved Design of Networked Applications JF - SN Computer Science N2 - To deliver the best user experience (UX), the human-centered design cycle (HCDC) serves as a well-established guideline to application developers. However, it does not yet cover network-specific requirements, which become increasingly crucial, as most applications deliver experience over the Internet. The missing network-centric view is provided by Quality of Experience (QoE), which could team up with UX towards an improved overall experience. By considering QoE aspects during the development process, applications can become network-aware by design. In this paper, the Quality of Experience Centered Design Cycle (QoE-CDC) is proposed, which provides guidelines on how to design applications with respect to network-specific requirements and QoE. Its practical value is showcased for popular application types and validated by outlining the design of a new smartphone application. We show that combining HCDC and QoE-CDC will result in an application design that reaches a high UX and avoids QoE degradation. KW - user experience KW - human-centered design KW - design cycle KW - application design KW - quality of experience Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-271762 SN - 2661-8907 VL - 2 IS - 6 ER - TY - JOUR A1 - Krupitzer, Christian A1 - Eberhardinger, Benedikt A1 - Gerostathopoulos, Ilias A1 - Raibulet, Claudia T1 - Introduction to the special issue “Applications in Self-Aware Computing Systems and their Evaluation” JF - Computers N2 - The joint 1st Workshop on Evaluations and Measurements in Self-Aware Computing Systems (EMSAC 2019) and Workshop on Self-Aware Computing (SeAC) was held as part of the FAS* conference alliance in conjunction with the 16th IEEE International Conference on Autonomic Computing (ICAC) and the 13th IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO) in Umeå, Sweden on 20 June 2019. The goal of this one-day workshop was to bring together researchers and practitioners from academic environments and from the industry to share their solutions, ideas, visions, and doubts in self-aware computing systems in general and in the evaluation and measurements of such systems in particular. The workshop aimed to enable discussions, partnerships, and collaborations among the participants. This special issue follows the theme of the workshop. It contains extended versions of workshop presentations as well as additional contributions.
KW - self-aware computing systems KW - quality evaluation KW - measurements KW - quality assurance KW - autonomous KW - self-adaptive KW - self-managing systems Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-203439 SN - 2073-431X VL - 9 IS - 1 ER - TY - THES A1 - Peng, Dongliang T1 - An Optimization-Based Approach for Continuous Map Generalization T1 - Optimierung für die kontinuierliche Generalisierung von Landkarten N2 - Maps are the main tool to represent geographical information. Geographical information is usually scale-dependent, so users need to have access to maps at different scales. In our digital age, the access is realized by zooming. As discrete changes during the zooming tend to distract users, smooth changes are preferred. This is why some digital maps are trying to make the zooming as continuous as they can. The process of producing maps at different scales with smooth changes is called continuous map generalization. In order to produce maps of high quality, cartographers often take into account additional requirements. These requirements are transferred to models in map generalization. Optimization for map generalization is important not only because it finds optimal solutions in the sense of the models, but also because it helps us to evaluate the quality of the models. Optimization, however, becomes more delicate when we deal with continuous map generalization. In this area, there are requirements not only for a specific map but also for relations between maps at different scales. This thesis is about continuous map generalization based on optimization. First, we show the background of our research topics. Second, we find optimal sequences for aggregating land-cover areas. We compare the A$^{\!\star}$ algorithm and integer linear programming in completing this task. Third, we continuously generalize county boundaries to provincial boundaries based on compatible triangulations. We morph between the two sets of boundaries, using dynamic programming to compute the correspondence. Fourth, we continuously generalize buildings to built-up areas by aggregating and growing. In this work, we group buildings with the help of a minimum spanning tree. Fifth, we define vertex trajectories that allow us to morph between polylines. We require that both the angles and the edge lengths change linearly over time. As it is impossible to fulfill all of these requirements simultaneously, we mediate between them using least-squares adjustment. Sixth, we discuss the performance of some commonly used data structures for a specific spatial problem. Seventh, we conclude this thesis and present open problems. N2 - Maps are the main tool to represent geographical information. Users often zoom in and out to access maps at different scales. Continuous map generalization tries to make the changes between different scales smooth, which is essential to provide users with a comfortable zooming experience. In order to achieve continuous map generalization with high quality, we optimize some important aspects of maps. In this book, we have used optimization in the generalization of land-cover areas, administrative boundaries, buildings, and coastlines. According to our experiments, continuous map generalization indeed benefits from optimization. N2 - Landkarten sind das wichtigste Werkzeug zur Repräsentation geografischer Information.
Unter der Generalisierung von Landkarten versteht man die Aufbereitung von geografischen Informationen aus detaillierten Daten zur Generierung von kleinmaßstäbigen Karten. Nutzer von Online-Karten zoomen oft in eine Karte hinein oder aus einer Karte heraus, um mehr Details bzw. mehr Überblick zu bekommen. Die kontinuierliche Generalisierung von Landkarten versucht die Änderungen zwischen verschiedenen Maßstäben stetig zu machen. Dies ist wichtig, um Nutzern eine angenehme Zoom-Erfahrung zu bieten. Um eine qualitativ hochwertige kontinuierliche Generalisierung zu erreichen, kann man wichtige Aspekte bei der Generierung von Online-Karten optimieren. In diesem Buch haben wir Optimierung bei der Generalisierung von Landnutzungskarten, von administrativen Grenzen, Gebäuden und Küstenlinien eingesetzt. Unsere Experimente zeigen, dass die kontinuierliche Generalisierung von Landkarten in der Tat von Optimierung profitiert. KW - land-cover area KW - administrative boundary KW - building KW - morphing KW - data structure KW - zooming KW - Generalisierung KW - Landnutzungskartierung KW - Optimierung Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-174427 SN - 978-3-95826-104-4 SN - 978-3-95826-105-1 N1 - Parallel erschienen als Druckausgabe in Würzburg University Press, 978-3-95826-104-4, 24,90 EUR. PB - Würzburg University Press CY - Würzburg ET - 1. Auflage ER - TY - JOUR A1 - Pfitzner, Christian A1 - May, Stefan A1 - Nüchter, Andreas T1 - Body weight estimation for dose-finding and health monitoring of lying, standing and walking patients based on RGB-D data JF - Sensors N2 - This paper describes the estimation of the body weight of a person in front of an RGB-D camera. A survey of different methods for body weight estimation based on depth sensors is given. First, an estimation of people standing in front of a camera is presented. Second, an approach based on a stream of depth images is used to obtain the body weight of a person walking towards a sensor. The algorithm first extracts features from a point cloud and forwards them to an artificial neural network (ANN) to obtain an estimation of body weight. Besides the algorithm for the estimation, this paper further presents an open-access dataset based on measurements from a trauma room in a hospital as well as data from visitors of a public event. In total, the dataset contains 439 measurements. The article illustrates the efficiency of the approach with experiments with persons lying down in a hospital, standing persons, and walking persons. Applicable scenarios for the presented algorithm are body weight-related dosing of emergency patients. KW - RGB-D KW - human body weight KW - image processing KW - kinect KW - machine learning KW - perception KW - segmentation KW - sensor fusion KW - stroke KW - thermal camera Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-176642 VL - 18 IS - 5 ER - TY - JOUR A1 - Sîrbu, Alina A1 - Becker, Martin A1 - Caminiti, Saverio A1 - De Baets, Bernard A1 - Elen, Bart A1 - Francis, Louise A1 - Gravino, Pietro A1 - Hotho, Andreas A1 - Ingarra, Stefano A1 - Loreto, Vittorio A1 - Molino, Andrea A1 - Mueller, Juergen A1 - Peters, Jan A1 - Ricchiuti, Ferdinando A1 - Saracino, Fabio A1 - Servedio, Vito D.P. 
A1 - Stumme, Gerd A1 - Theunis, Jan A1 - Tria, Francesca A1 - Van den Bossche, Joris T1 - Participatory Patterns in an International Air Quality Monitoring Initiative JF - PLoS ONE N2 - The issue of sustainability is at the top of the political and societal agenda, being considered of extreme importance and urgency. Human individual action impacts the environment both locally (e.g., local air/water quality, noise disturbance) and globally (e.g., climate change, resource use). Urban environments represent a crucial example, with an increasing realization that the most effective way of producing a change is involving the citizens themselves in monitoring campaigns (a citizen science bottom-up approach). This is possible by developing novel technologies and IT infrastructures enabling large citizen participation. Here, in the wider framework of one of the first such projects, we show results from an international competition where citizens were involved in mobile air pollution monitoring using low cost sensing devices, combined with a web-based game to monitor perceived levels of pollution. Measures of shift in perceptions over the course of the campaign are provided, together with insights into participatory patterns emerging from this study. Interesting effects related to inertia and to direct involvement in measurement activities rather than indirect information exposure are also highlighted, indicating that direct involvement can enhance learning and environmental awareness. In the future, this could result in better adoption of policies towards decreasing pollution. KW - transport microenvironments KW - exposure KW - pollution KW - carbon Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-151379 VL - 10 IS - 8 ER - TY - THES A1 - Fleszar, Krzysztof T1 - Network-Design Problems in Graphs and on the Plane T1 - Netzwerk-Design-Probleme in Graphen und auf der Ebene N2 - A network design problem defines an infinite set whose elements, called instances, describe relationships and network constraints. It asks for an algorithm that, given an instance of this set, designs a network that respects the given constraints and at the same time optimizes some given criterion. In my thesis, I develop algorithms whose solutions are optimum or close to an optimum value within some guaranteed bound. I also examine the computational complexity of these problems. Problems from two vast areas are considered: graphs and the Euclidean plane. In the Maximum Edge Disjoint Paths problem, we are given a graph and a subset of vertex pairs that are called terminal pairs. We are asked for a set of paths where the endpoints of each path form a terminal pair. The constraint is that any two paths share at most one inner vertex. The optimization criterion is to maximize the cardinality of the set. In the hard-capacitated k-Facility Location problem, we are given an integer k and a complete graph where the distances obey a given metric and where each node has two numerical values: a capacity and an opening cost. We are asked for a subset of k nodes, called facilities, and an assignment of all the nodes, called clients, to the facilities. The constraint is that the number of clients assigned to a facility cannot exceed the facility's capacity value. The optimization criterion is to minimize the total cost which consists of the total opening cost of the facilities and the total distance between the clients and the facilities they are assigned to. 
In the Stabbing problem, we are given a set of axis-aligned rectangles in the plane. We are asked for a set of horizontal line segments such that, for every rectangle, there is a line segment crossing its left and right edge. The optimization criterion is to minimize the total length of the line segments. In the k-Colored Non-Crossing Euclidean Steiner Forest problem, we are given an integer k and a finite set of points in the plane where each point has one of k colors. For every color, we are asked for a drawing that connects all the points of the same color. The constraint is that drawings of different colors are not allowed to cross each other. The optimization criterion is to minimize the total length of the drawings. In the Minimum Rectilinear Polygon for Given Angle Sequence problem, we are given an angle sequence of left (+90°) turns and right (-90°) turns. We are asked for an axis-parallel simple polygon where the angles of the vertices yield the given sequence when walking around the polygon in counter-clockwise manner. The optimization criteria considered are to minimize the perimeter, the area, and the size of the axis-parallel bounding box of the polygon. N2 - Ein Netzwerk-Design-Problem definiert eine unendliche Menge, deren Elemente, als Instanzen bezeichnet, Beziehungen und Beschränkungen in einem Netzwerk beschreiben. Die Lösung eines solchen Problems besteht aus einem Algorithmus, der auf die Eingabe einer beliebigen Instanz dieser Menge ein Netzwerk entwirft, welches die gegebenen Beschränkungen einhält und gleichzeitig ein gegebenes Kriterium optimiert. In meiner Dissertation habe ich Algorithmen entwickelt, deren Netzwerke stets optimal sind oder nachweisbar nahe am Optimum liegen. Zusätzlich habe ich die Berechnungskomplexität dieser Probleme untersucht. Dabei wurden Probleme aus zwei weiten Gebieten betrachtet: Graphen und der Euklidische Ebene. Im Maximum-Edge-Disjoint-Paths-Problem besteht die Eingabe aus einem Graphen und einer Teilmenge von Knotenpaaren, die wir mit Terminalpaare bezeichnen. Gesucht ist eine Menge von Pfaden, die Terminalpaare verbinden. Die Beschränkung ist, dass keine zwei Pfade einen gleichen inneren Knoten haben dürfen. Das Optimierungskriterium ist die Maximierung der Kardinalität dieser Menge. Im Hard-Capacitated-k-Facility-Location-Problem besteht die Eingabe aus einer Ganzzahl k und einem vollständigen Graphen, in welchem die Distanzen einer gegebenen Metrik unterliegen und in welchem jedem Knoten sowohl eine numerische Kapazität als auch ein Eröffnungskostenwert zugeschrieben ist. Gesucht ist eine Teilmenge von k Knoten, Facilities genannt, und eine Zuweisung aller Knoten, Clients genannt, zu den Facilities. Die Beschränkung ist, dass die Anzahl der Clients, die einer Facility zugewiesen sind, nicht deren Kapazität überschreiten darf. Das Optimierungskriterium ist die Minimierung der Gesamtkosten bestehend aus den Gesamteröffnungskosten der Facilities sowie der Gesamtdistanz zwischen den Clients und den ihnen zugewiesenen Facilities. Im Stabbing-Problem besteht die Eingabe aus einer Menge von achsenparallelen Rechtecken in der Ebene. Gesucht ist eine Menge von horizontalen Geradenstücken mit der Randbedingung, dass die linke und rechte Seite eines jeden Rechtecks von einem Geradenstück verbunden ist. Das Optimierungskriterium ist die Minimierung der Gesamtlänge aller Geradenstücke. 
Im k-Colored-Non-Crossing-Euclidean-Steiner-Forest-Problem besteht die Eingabe aus einer Ganzzahl k und einer endlichen Menge von Punkten in der Ebene, wobei jeder Punkt in einer von k Farben gefärbt ist. Gesucht ist für jede Farbe eine Zeichnung, in welcher alle Punkte der Farbe verbunden sind. Die Beschränkung ist, dass Zeichnungen verschiedener Farben sich nicht kreuzen dürfen. Das Optimierungskriterium ist die Minimierung des Gesamttintenverbrauchs, das heißt, der Gesamtlänge der Zeichnungen. Im Minimum-Rectilinear-Polygon-for-Given-Angle-Sequence-Problem besteht die Eingabe aus einer Folge von Links- (+90°) und Rechtsabbiegungen (-90°). Gesucht ist ein achsenparalleles Polygon, dessen Eckpunkte die gegebene Folge ergeben, wenn man das Polygon gegen den Uhrzeigersinn entlangläuft. Die Optimierungskriterien sind die Minimierung des Umfangs und der inneren Fläche des Polygons sowie der Größe des notwendigen Zeichenblattes, d.h., des kleinsten Rechteckes, das das Polygon einschließt. N2 - Given points in the plane, connect them using minimum ink. Though the task seems simple, it turns out to be very time-consuming. In fact, scientists believe that computers cannot efficiently solve it. So, do we have to resign? This book examines such NP-hard network-design problems, from connectivity problems in graphs to polygonal drawing problems on the plane. First, we observe why it is so hard to optimally solve these problems. Then, we go on to attack them anyway. We develop fast algorithms that find approximate solutions that are very close to the optimal ones. Hence, connecting points with slightly more ink is not hard. KW - Euklidische Ebene KW - Algorithmus KW - Komplexität KW - NP-schweres Problem KW - Graph KW - approximation algorithm KW - hardness KW - optimization KW - graphs KW - network KW - Optimierungsproblem KW - Approximationsalgorithmus KW - complexity KW - Euclidean plane Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-154904 SN - 978-3-95826-076-4 (Print) SN - 978-3-95826-077-1 (Online) N1 - Parallel erschienen als Druckausgabe in Würzburg University Press, ISBN 978-3-95826-076-4, 28,90 EUR. PB - Würzburg University Press CY - Würzburg ET - 1. Auflage ER - TY - THES A1 - Wojtkowiak, Harald T1 - Planungssystem zur Steigerung der Autonomie von Kleinstsatelliten T1 - Planning system to increase the autonomy of small satellites N2 - Der Betrieb von Satelliten wird sich in Zukunft gravierend ändern. Die bisher ausgeübte konventionelle Vorgehensweise, bei der die Planung der vom Satelliten auszuführenden Aktivitäten sowie die Kontrolle hierüber ausschließlich vom Boden aus erfolgen, stößt bei heutigen Anwendungen an ihre Grenzen. Im schlimmsten Fall verhindert dieser Umstand sogar die Erschließung bisher ungenutzter Möglichkeiten. Der Gewinn eines Satelliten, sei es in Form wissenschaftlicher Daten oder der Vermarktung satellitengestützter Dienste, wird daher nicht optimal ausgeschöpft. Die Ursache für dieses Problem lässt sich im Grunde auf eine ausschlaggebende Tatsache zurückführen: Konventionelle Satelliten können ihr Verhalten, d.h. die Folge ihrer Tätigkeiten, nicht eigenständig anpassen. Stattdessen erstellt das Bedienpersonal am Boden - vor allem die Operatoren - mit Hilfe von Planungssoftware feste Ablaufpläne, die dann in Form von Kommandosequenzen von den Bodenstationen aus an die jeweiligen Satelliten hochgeladen werden. Dort werden die Befehle lediglich überprüft, interpretiert und strikt ausgeführt. Die Abarbeitung erfolgt linear.
Situationsbedingte Änderungen, wie sie vergleichsweise bei der Codeausführung von Softwareprogrammen durch Kontrollkonstrukte, zum Beispiel Schleifen und Verzweigungen, üblich sind, sind typischerweise nicht vorgesehen. Der Operator ist daher die einzige Instanz, die das Verhalten des Satelliten mittels Kommandierung, per Upload, beeinflussen kann, und auch nur dann, wenn ein direkter Funkkontakt zwischen Satellit und Bodenstation besteht. Die dadurch möglichen Reaktionszeiten des Satelliten liegen bestenfalls bei einigen Sekunden, falls er sich im Wirkungsbereich der Bodenstation befindet. Außerhalb des Kontaktfensters kann sich die Zeitschranke, gegeben durch den Orbit und die aktuelle Position des Satelliten, von einigen Minuten bis hin zu einigen Stunden erstrecken. Die Signallaufzeiten der Funkübertragung verlängern die Reaktionszeiten um weitere Sekunden im erdnahen Bereich. Im interplanetaren Raum erstrecken sich die Zeitspannen aufgrund der immensen Entfernungen sogar auf mehrere Minuten. Dadurch bedingt liegt die derzeit technologisch mögliche, bodengestützte, Reaktionszeit von Satelliten bestenfalls im Bereich von einigen Sekunden. Diese Einschränkung stellt ein schweres Hindernis für neuartige Satellitenmissionen, bei denen insbesondere nichtdeterministische und kurzzeitige Phänomene (z.B. Blitze und Meteoreintritte in die Erdatmosphäre) Gegenstand der Beobachtungen sind, dar. Die langen Reaktionszeiten des konventionellen Satellitenbetriebs verhindern die Realisierung solcher Missionen, da die verzögerte Reaktion erst erfolgt, nachdem das zu beobachtende Ereignis bereits abgeschlossen ist. Die vorliegende Dissertation zeigt eine Möglichkeit, das durch die langen Reaktionszeiten entstandene Problem zu lösen, auf. Im Zentrum des Lösungsansatzes steht dabei die Autonomie. Im Wesentlichen geht es dabei darum, den Satelliten mit der Fähigkeit auszustatten, sein Verhalten, d.h. die Folge seiner Tätigkeiten, eigenständig zu bestimmen bzw. zu ändern. Dadurch wird die direkte Abhängigkeit des Satelliten vom Operator bei Reaktionen aufgehoben. Im Grunde wird der Satellit in die Lage versetzt, sich selbst zu kommandieren. Die Idee der Autonomie wurde im Rahmen der zugrunde liegenden Forschungsarbeiten umgesetzt. Das Ergebnis ist ein autonomes Planungssystem. Dabei handelt es sich um ein Softwaresystem, mit dem sich autonomes Verhalten im Satelliten realisieren lässt. Es kann an unterschiedliche Satellitenmissionen angepasst werden. Ferner deckt es verschiedene Aspekte des autonomen Satellitenbetriebs, angefangen bei der generellen Entscheidungsfindung der Tätigkeiten, über die zeitliche Ablaufplanung unter Einbeziehung von Randbedingungen (z.B. Ressourcen) bis hin zur eigentlichen Ausführung, d.h. Kommandierung, ab. Das Planungssystem kommt als Anwendung in ASAP, einer autonomen Sensorplattform, zum Einsatz. Es ist ein optisches System und dient der Detektion von kurzzeitigen Phänomenen und Ereignissen in der Erdatmosphäre. Die Forschungsarbeiten an dem autonomen Planungssystem, an ASAP sowie an anderen zu diesen in Bezug stehenden Systemen wurden an der Professur für Raumfahrttechnik des Lehrstuhls Informatik VIII der Julius-Maximilians-Universität Würzburg durchgeführt. N2 - Satellite operation will change thoroughly in future. So far the currently performed conventional approach of controlling satellites is hitting its limitation by todays application. This is due to the fact that activities of the satellite are planned and controlled exclusively by ground infrastructure. 
In the worst case, this circumstance prevents the exploitation of potential but so far unused opportunities. Thus the yield of satellites, be it in the form of scientific research data or the commercialization of satellite services, is not optimized. Ultimately, the cause of this problem can be traced back to one crucial matter: Conventional satellites are not able to alter their behaviour, i.e. the order of their actions, themselves. Instead, schedules are created by ground staff – mainly operators – utilizing specialized planning software. The output is then transformed into command sequences and uploaded to the dedicated satellite via ground stations. On board, the commands are merely checked, interpreted and strictly executed. The flow is linear. Situational changes, like in the code execution of software programs via control constructs, e.g. loops and branches, are typically not present. Thus, the operator is the only instance able to change the behaviour of the satellite via command upload. Therefore a direct radio link between satellite and ground station is required. Reaction times are thereby restricted: in the best case, i.e. when the satellite is within range of the ground station, they are limited to a few seconds. Outside the contact window, the time bound may increase from minutes to hours, depending on the orbit of the satellite and its current position on it. The signal propagation of the radio links adds further reaction time, from a few seconds in near-Earth space up to several minutes in interplanetary space due to the vast distances. In sum, the best achievable ground-based reaction time lies in the range of a few seconds. This restriction is a severe handicap for novel satellite missions that focus on non-deterministic and short-lived phenomena, e.g. lightning and meteor entries into the Earth's atmosphere. The long reaction times of conventional satellite operations prevent the realization of such missions, because the delayed reaction takes place only after the event to be observed has already finished. This dissertation shows a possibility to solve the problem caused by long reaction times. Autonomy lies at the centre of the approach. The key is to augment the satellite with the ability to alter its behaviour, i.e. the sequence of its actions, and to deliberate about it. Thus, the satellite's direct dependency on operators for reactions is lifted. In principle, the satellite will be able to command itself. The idea of autonomy presented herein is based on research work, which provides the context for design and implementation. The output is an autonomous planning system. It is a software system that enables a satellite to behave autonomously and can be adapted to different types of satellite missions. Additionally, it covers different aspects of autonomous satellite operation, starting with the general decision making about activities, continuing with time scheduling under constraints, e.g. resources, and finishing with the actual execution, i.e. commanding. The autonomous planning system runs as one application of ASAP, an autonomous sensor platform. It is an optical system whose purpose is to detect short-lived phenomena and events in the Earth's atmosphere. The research work for the autonomous planning system, for ASAP and for other related systems was carried out at the professorship for space technology, which is part of the department for computer science VIII at the Julius-Maximilians-Universität Würzburg.
KW - Planungssystem KW - Autonomie KW - Satellit KW - Entscheidungsfindung KW - Ablaufplanung KW - Planausführung KW - System KW - Missionsbetrieb KW - decision finding KW - scheduling KW - plan execution KW - system KW - mission operation Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-163569 ER -