TY - JOUR A1 - Li, Ningbo A1 - Guan, Lianwu A1 - Gao, Yanbin A1 - Du, Shitong A1 - Wu, Menghao A1 - Guang, Xingxing A1 - Cong, Xiaodan T1 - Indoor and outdoor low-cost seamless integrated navigation system based on the integration of INS/GNSS/LIDAR system JF - Remote Sensing N2 - The Global Navigation Satellite System (GNSS) provides accurate positioning data for vehicular navigation in open outdoor environments. In an indoor environment, Light Detection and Ranging (LIDAR) Simultaneous Localization and Mapping (SLAM) establishes a two-dimensional map and provides positioning data. However, LIDAR can only provide relative positioning data and cannot directly provide the latitude and longitude of the current position. Consequently, GNSS/Inertial Navigation System (INS) integrated navigation can be employed outdoors, while INS/LIDAR integrated navigation is used indoors, and the corresponding switching navigation keeps indoor and outdoor positioning consistent. In addition, when the vehicle enters the garage, the GNSS signal is blurred for a while and then disappears. Ambiguous GNSS satellite signals lead to continuous distortion or overall drift of the positioning trajectory in the indoor condition. Therefore, an INS/LIDAR seamless integrated navigation algorithm and a switching algorithm based on the vehicle navigation system are designed. According to the experimental data, the positioning accuracy of the INS/LIDAR navigation algorithm in the simulated environmental experiment is 50% higher than that of the Dead Reckoning (DR) algorithm. Moreover, the switching algorithm developed on the basis of the INS/LIDAR integrated navigation algorithm achieves an 80% success rate in navigation mode switching. 
KW - vehicular navigation KW - GNSS/INS integrated navigation KW - INS/LIDAR integrated navigation KW - switching navigation Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-216229 SN - 2072-4292 VL - 12 IS - 19 ER - TY - JOUR A1 - Krupitzer, Christian A1 - Eberhardinger, Benedikt A1 - Gerostathopoulos, Ilias A1 - Raibulet, Claudia T1 - Introduction to the special issue “Applications in Self-Aware Computing Systems and their Evaluation” JF - Computers N2 - The joint 1st Workshop on Evaluations and Measurements in Self-Aware Computing Systems (EMSAC 2019) and Workshop on Self-Aware Computing (SeAC) was held as part of the FAS* conference alliance in conjunction with the 16th IEEE International Conference on Autonomic Computing (ICAC) and the 13th IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO) in Umeå, Sweden on 20 June 2019. The goal of this one-day workshop was to bring together researchers and practitioners from academic environments and from the industry to share their solutions, ideas, visions, and doubts in self-aware computing systems in general and in the evaluation and measurements of such systems in particular. The workshop aimed to enable discussions, partnerships, and collaborations among the participants. This special issue follows the theme of the workshop. It contains extended versions of workshop presentations as well as additional contributions. KW - self-aware computing systems KW - quality evaluation KW - measurements KW - quality assurance KW - autonomous KW - self-adaptive KW - self-managing systems Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-203439 SN - 2073-431X VL - 9 IS - 1 ER - TY - JOUR A1 - Hoßfeld, Tobias A1 - Heegaard, Poul E. 
A1 - Skrorin-Kapov, Lea A1 - Varela, Martín T1 - Deriving QoE in systems: from fundamental relationships to a QoE-based Service-level Quality Index JF - Quality and User Experience N2 - With Quality of Experience (QoE) research having made significant advances over the years, service and network providers aim at user-centric evaluation of the services provided in their system. The question arises how to derive QoE in systems. In the context of subjective user studies conducted to derive relationships between influence factors and QoE, user diversity leads to varying distributions of user rating scores for different test conditions. Such models are commonly exploited by providers to derive various QoE metrics in their system, such as expected QoE, or the percentage of users rating above a certain threshold. The question then becomes how to combine (a) user rating distributions obtained from subjective studies, and (b) system parameter distributions, so as to obtain the actual observed QoE distribution in the system? Moreover, how can various QoE metrics of interest in the system be derived? We prove fundamental relationships for the derivation of QoE in systems, thus providing an important link between the QoE community and the systems community. In our numerical examples, we focus mainly on QoE metrics. We furthermore provide a more generalized view on quantifying the quality of systems by defining a QoE-based Service-level Quality Index. This index exploits the fact that quality can be seen as a proxy measure for utility. Following the assumption that not all user sessions should be weighted equally, we aim to provide a generic framework that can be utilized to quantify the overall utility of a service delivered by a system. 
KW - QoE fundamentals KW - Expected QoE KW - Expected MOS KW - Good-or-Better (GoB) KW - QoS-QoE mapping functions KW - Service-level Quality Index (SQI) Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-235597 SN - 2366-0139 VL - 5 ER - TY - JOUR A1 - Borchert, Kathrin A1 - Seufert, Anika A1 - Gamboa, Edwin A1 - Hirth, Matthias A1 - Hoßfeld, Tobias T1 - In Vitro vs In Vivo: Does the Study's Interface Design Influence Crowdsourced Video QoE? JF - Quality and User Experience N2 - Evaluating the Quality of Experience (QoE) of video streaming and its influence factors has become paramount for streaming providers, as they want to maintain high satisfaction for their customers. In this context, crowdsourced user studies have become a valuable tool to evaluate different factors which can affect the perceived user experience on a large scale. In general, most of these crowdsourcing studies use either what we refer to as an in vivo or an in vitro interface design. In vivo design means that the study participant has to rate the QoE of a video that is embedded in an application similar to a real streaming service, e.g., YouTube or Netflix. In vitro design refers to a setting in which the video stream is separated from a specific service and thus the video plays on a plain background. Although these interface designs vary widely, the results are often compared and generalized. In this work, we use a crowdsourcing study to investigate the influence of three interface design alternatives, an in vitro and two in vivo designs with different levels of interactiveness, on the perceived video QoE. Contrary to our expectations, the results indicate that there is no significant influence of the study’s interface design in general on the video experience. Furthermore, we found that the in vivo design does not reduce the test takers’ attentiveness. 
However, we observed that participants who interacted with the test interface reported a higher video QoE than the other groups. KW - video QoE KW - crowdsourcing KW - study design KW - user study KW - distraction Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-235586 SN - 2366-0139 VL - 6 ER - TY - JOUR A1 - Düking, Peter A1 - Holmberg, Hans‑Christer A1 - Kunz, Philipp A1 - Leppich, Robert A1 - Sperlich, Billy T1 - Intra-individual physiological response of recreational runners to different training mesocycles: a randomized cross-over study JF - European Journal of Applied Physiology N2 - Purpose Pronounced differences in individual physiological adaptation may occur following various training mesocycles in runners. Here we aimed to assess the individual changes in performance and physiological adaptation of recreational runners performing mesocycles with different intensity, duration and frequency. Methods Employing a randomized cross-over design, the intra-individual physiological responses [i.e., peak (\(\dot{VO}_{2peak}\)) and submaximal (\(\dot{VO}_{2submax}\)) oxygen uptake, velocity at lactate thresholds (V\(_2\), V\(_4\))] and performance (time-to-exhaustion (TTE)) of 13 recreational runners who performed three 3-week mesocycles of high-intensity interval training (HIIT), high-volume low-intensity training (HVLIT) or more but shorter sessions of HVLIT (high-frequency training; HFT) were assessed. Results \(\dot{VO}_{2submax}\), V\(_2\), V\(_4\) and TTE were not altered by HIIT, HVLIT or HFT (p > 0.05). \(\dot{VO}_{2peak}\) improved to the same extent following HVLIT (p = 0.045) and HFT (p = 0.02). The number of moderately negative responders was higher following HIIT (15.4%) and HFT (15.4%) than HVLIT (7.6%). The number of very positive responders was higher following HVLIT (38.5%) than HFT (23%) or HIIT (7.7%). 46% of the runners responded positively to two mesocycles, while 23% did not respond to any. 
Conclusion On a group level, none of the interventions altered \(\dot{VO}_{2submax}\), V\(_2\), V\(_4\) or TTE, while HVLIT and HFT improved \(\dot{VO}_{2peak}\). The mean adaptation index indicated similar numbers of positive, negative and non-responders to HIIT, HVLIT and HFT, but more very positive responders to HVLIT than HFT or HIIT. 46% responded positively to two mesocycles, while 23% did not respond to any. These findings indicate that the magnitude of responses to HIIT, HVLIT and HFT is highly individual and no pattern was apparent. KW - cardiorespiratory fitness KW - endurance KW - personalized training Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-235022 SN - 1439-6319 VL - 120 ER - TY - JOUR A1 - Stauffert, Jan-Philipp A1 - Niebling, Florian A1 - Latoschik, Marc Erich T1 - Latency and Cybersickness: Impact, Causes, and Measures. A Review JF - Frontiers in Virtual Reality N2 - Latency is a key characteristic inherent to any computer system. Motion-to-Photon (MTP) latency describes the time between the movement of a tracked object and its corresponding movement rendered and depicted by computer-generated images on a graphical output screen. High MTP latency can cause a loss of performance in interactive graphics applications and, even worse, can provoke cybersickness in Virtual Reality (VR) applications. Here, cybersickness can degrade VR experiences or may render the experiences completely unusable. It can confound research findings of an otherwise sound experiment. Latency as a contributing factor to cybersickness needs to be properly understood. Its effects need to be analyzed, its sources need to be identified, good measurement methods need to be developed, and proper countermeasures need to be devised in order to reduce the potentially harmful impacts of latency on the usability and safety of VR systems. Research shows that latency can exhibit intricate timing patterns with various spiking and periodic behavior. 
These timing behaviors may vary, yet most are found to provoke cybersickness. Overall, latency can differ drastically between different systems, interfering with the generalization of measurement results. This review article describes the causes and effects of latency with regard to cybersickness. We report on different existing approaches to measure and report latency. Hence, the article provides readers with the knowledge to understand and report latency for their own applications, evaluations, and experiments. It should also help to measure, identify, and finally control and counteract latency and hence gain confidence in the soundness of empirical data collected by VR exposures. Low latency increases the usability and safety of VR systems. KW - virtual reality KW - latency KW - cybersickness KW - jitter KW - simulator sickness Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-236133 VL - 1 ER - TY - JOUR A1 - Kramer, Alexander A1 - Bangert, Philip A1 - Schilling, Klaus T1 - UWE-4: First Electric Propulsion on a 1U CubeSat — In-Orbit Experiments and Characterization JF - Aerospace N2 - The electric propulsion system NanoFEEP was integrated and tested in orbit on the UWE-4 satellite, which marks the first successful demonstration of an electric propulsion system on board a 1U CubeSat. In-orbit characterization measurements of the heating process of the propellant and the power consumption of the propulsion system at different thrust levels are presented. Furthermore, an analysis of the thrust vector direction based on its effect on the attitude of the spacecraft is described. The employed heater liquefies the propellant for a duration of 30 min per orbit and consumes 103 ± 4 mW. During this time, the respective thruster can be activated. 
The propulsion system, including one thruster head, its corresponding heater, the neutralizer and the digital components of the power processing unit, consumes 8.5 ± 0.1 mW·μA\(^{−1}\) + 184 ± 8.5 mW and scales with the emitter current. The estimated thrust directions of two thruster heads are at angles of 15.7 ± 7.6° and 13.2 ± 5.5° relative to their mounting direction in the CubeSat structure. In light of the very limited power on a 1U CubeSat, the NanoFEEP propulsion system is a very viable option. The heater of subsequent NanoFEEP thrusters has already been improved, such that the system can be activated during the whole orbit period. KW - CubeSat KW - UWE-4 KW - electric propulsion KW - NanoFEEP KW - power consumption KW - thrust direction KW - characterization KW - in-orbit experiments Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-236124 VL - 7 IS - 7 ER - TY - JOUR A1 - Frey, Anna A1 - Gassenmaier, Tobias A1 - Hofmann, Ulrich A1 - Schmitt, Dominik A1 - Fette, Georg A1 - Marx, Almuth A1 - Heterich, Sabine A1 - Boivin-Jahns, Valérie A1 - Ertl, Georg A1 - Bley, Thorsten A1 - Frantz, Stefan A1 - Jahns, Roland A1 - Störk, Stefan T1 - Coagulation factor XIII activity predicts left ventricular remodelling after acute myocardial infarction JF - ESC Heart Failure N2 - Aims Acute myocardial infarction (MI) is the major cause of chronic heart failure. The activity of blood coagulation factor XIII (FXIIIa) plays an important role in rodents as a healing factor after MI, whereas its role in healing and remodelling processes in humans remains unclear. We prospectively evaluated the relevance of FXIIIa after acute MI as a potential early prognostic marker for adequate healing. Methods and results This monocentric prospective cohort study investigated cardiac remodelling in patients with ST-elevation MI and followed them up for 1 year. Serum FXIIIa was serially assessed during the first 9 days after MI and after 2, 6, and 12 months. 
Cardiac magnetic resonance imaging was performed within 4 days after MI (Scan 1), after 7 to 9 days (Scan 2), and after 12 months (Scan 3). The FXIII valine-to-leucine (V34L) single-nucleotide polymorphism rs5985 was genotyped. One hundred forty-six patients were investigated (mean age 58 ± 11 years, 13% women). Median FXIIIa was 118% (quartiles, 102–132%) and dropped to a trough on the second day after MI: 109% (98–109%; P < 0.001). FXIIIa recovered slowly over time, reaching the baseline level after 2 to 6 months, and surpassed baseline levels only after 12 months: 124% (110–142%). The development of FXIIIa after MI was independent of the genotype. FXIIIa on Day 2 was strongly and inversely associated with the relative size of MI in Scan 1 (Spearman’s ρ = –0.31; P = 0.01) and Scan 3 (ρ = –0.39; P < 0.01) and positively associated with left ventricular ejection fraction: ρ = 0.32 (P < 0.01) and ρ = 0.24 (P = 0.04), respectively. Conclusions FXIII activity after MI is highly dynamic, exhibiting a significant decline in the early healing period, with reconstitution 6 months later. Depressed FXIIIa early after MI predicted a greater size of MI and lower left ventricular ejection fraction after 1 year. The clinical relevance of these findings remains to be tested in a randomized trial. KW - blood coagulation factor XIII KW - ST-elevation myocardial infarction KW - healing and remodelling processes KW - cardiac magnetic resonance imaging Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-236013 VL - 7 IS - 5 ER - TY - JOUR A1 - Lopez-Arreguin, A. J. R. A1 - Montenegro, S. T1 - Towards bio-inspired robots for underground and surface exploration in planetary environments: An overview and novel developments inspired in sand-swimmers JF - Heliyon N2 - Desert organisms such as sandfish lizards (SLs) bend and generate thrust in granular media to escape heat and hunt for prey [1]. 
Further, SLs seem to have striking capabilities to swim in an undulatory fashion, keeping the same wavelength even in terrains with different volumetric densities, hence behaving as rigid bodies. This paper recommends new research directions for planetary robotics, adapting principles of sand swimmers to improve the robustness of surface exploration robots. First, we summarize previous efforts on bio-inspired hardware developed for granular terrains and for accessing complex geological features. Then, a rigid wheel design is proposed to imitate the locomotion capabilities of SLs. In order to derive the force models that predict the performance of such a bio-inspired mobility system, different approaches such as RFT (Resistive Force Theory) and analytical terramechanics are introduced. Although in typical wheeled robots the slip and sinkage increase with time, the new design is intended to imitate the traversability capabilities of SLs, which seem to keep the same slip while displacing at subsurface levels. KW - aerospace engineering KW - mechanical engineering KW - biomimetics KW - biomechanic KW - biomechanical engineering KW - mechanics KW - sandfish KW - granular KW - locomotion KW - slip Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-230309 VL - 6 ER - TY - JOUR A1 - Krupitzer, Christian A1 - Temizer, Timur A1 - Prantl, Thomas A1 - Raibulet, Claudia T1 - An Overview of Design Patterns for Self-Adaptive Systems in the Context of the Internet of Things JF - IEEE Access N2 - The Internet of Things (IoT) requires the integration of all available, highly specialized, and heterogeneous devices, ranging from embedded sensor nodes to servers in the cloud. The self-adaptive research domain provides adaptive capabilities that can support the integration in IoT systems. However, developing such systems is a challenging, error-prone, and time-consuming task. 
In this context, design patterns offer proven and optimized solutions to specific problems in various contexts. Applying design patterns might help to reuse existing knowledge about similar development issues. However, so far, there is a lack of taxonomies on design patterns for self-adaptive systems. To tackle this issue, in this paper, we provide a taxonomy on design patterns for self-adaptive systems that can be transferred to support adaptivity in IoT systems. Besides describing the taxonomy and the design patterns, we discuss their applicability in an Industrial IoT case study. KW - Design patterns KW - Internet of Things KW - IoT KW - self-adaptive systems KW - software engineering Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-229984 VL - 8 ER - TY - JOUR A1 - Du, Shitong A1 - Lauterbach, Helge A. A1 - Li, Xuyou A1 - Demisse, Girum G. A1 - Borrmann, Dorit A1 - Nüchter, Andreas T1 - Curvefusion — A Method for Combining Estimated Trajectories with Applications to SLAM and Time-Calibration JF - Sensors N2 - Mapping and localization of mobile robots in an unknown environment are essential for most high-level operations like autonomous navigation or exploration. This paper presents a novel approach for combining estimated trajectories, namely curvefusion. The robot used in the experiments is equipped with a horizontally mounted 2D profiler, a constantly spinning 3D laser scanner and a GPS module. The proposed algorithm first combines trajectories from different sensors to optimize poses of the planar three degrees of freedom (DoF) trajectory, which is then fed into continuous-time simultaneous localization and mapping (SLAM) to further improve the trajectory. While state-of-the-art multi-sensor fusion methods mainly focus on probabilistic methods, our approach instead adopts a deformation-based method to optimize poses. 
To this end, a similarity metric for curved shapes is introduced into the robotics community to fuse the estimated trajectories. Additionally, a shape-based point correspondence estimation method is applied to the multi-sensor time calibration. Experiments show that the proposed fusion method achieves better accuracy even when the error of the trajectory before fusion is large, which demonstrates that our method can still maintain a certain degree of accuracy in environments where typical pose estimation methods perform poorly. In addition, the proposed time-calibration method also achieves high accuracy in estimating point correspondences. KW - mapping KW - continuous-time SLAM KW - deformation-based method KW - time calibration Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-219988 SN - 1424-8220 VL - 20 IS - 23 ER - TY - JOUR A1 - Schlör, Daniel A1 - Ring, Markus A1 - Hotho, Andreas T1 - iNALU: Improved Neural Arithmetic Logic Unit JF - Frontiers in Artificial Intelligence N2 - Neural networks have to capture mathematical relationships in order to learn various tasks. They approximate these relations implicitly and therefore often do not generalize well. The recently proposed Neural Arithmetic Logic Unit (NALU) is a novel neural architecture which is able to explicitly represent the mathematical relationships by the units of the network to learn operations such as summation, subtraction or multiplication. Although NALUs have been shown to perform well on various downstream tasks, an in-depth analysis reveals practical shortcomings by design, such as the inability to multiply or divide negative input values or training stability issues for deeper networks. We address these issues and propose an improved model architecture. We evaluate our model empirically in various settings from learning basic arithmetic operations to more complex functions. 
Our experiments indicate that our model solves stability issues and outperforms the original NALU model in terms of arithmetic precision and convergence. KW - neural networks KW - machine learning KW - arithmetic calculations KW - neural architecture KW - experimental evaluation Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-212301 SN - 2624-8212 VL - 3 ER - TY - JOUR A1 - Davidson, Padraig A1 - Düking, Peter A1 - Zinner, Christoph A1 - Sperlich, Billy A1 - Hotho, Andreas T1 - Smartwatch-Derived Data and Machine Learning Algorithms Estimate Classes of Ratings of Perceived Exertion in Runners: A Pilot Study JF - Sensors N2 - The rating of perceived exertion (RPE) is a subjective load marker and may assist in individualizing training prescription, particularly by adjusting running intensity. Unfortunately, RPE has shortcomings (e.g., underreporting) and cannot be monitored continuously and automatically throughout a training session. In this pilot study, we aimed to predict two classes of RPE (≤15 “Somewhat hard to hard” on Borg’s 6–20 scale vs. RPE >15) in runners by analyzing data recorded by a commercially-available smartwatch with machine learning algorithms. Twelve trained and untrained runners performed long-continuous runs at a constant self-selected pace to volitional exhaustion. Untrained runners reported their RPE each kilometer, whereas trained runners reported every five kilometers. The kinetics of heart rate, step cadence, and running velocity were recorded continuously (1 Hz) with a commercially-available smartwatch (Polar V800). We trained different machine learning algorithms to estimate the two classes of RPE based on the time series sensor data derived from the smartwatch. Predictions were analyzed in different settings: accuracy overall and per runner type; i.e., accuracy for trained and untrained runners independently. 
We achieved top accuracies of 84.8% for the whole dataset, 81.8% for the trained runners, and 86.1% for the untrained runners. We predict two classes of RPE with high accuracy using machine learning and smartwatch data. This approach might aid in individualizing training prescriptions. KW - artificial intelligence KW - endurance KW - exercise intensity KW - precision training KW - prediction KW - wearable Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-205686 SN - 1424-8220 VL - 20 IS - 9 ER - TY - THES A1 - Schauer Marin Rodrigues, Johannes T1 - Detecting Changes and Finding Collisions in 3D Point Clouds : Data Structures and Algorithms for Post-Processing Large Datasets T1 - Erkennen von Änderungen und Finden von Kollisionen in 3D Punktwolken N2 - Affordable prices for 3D laser range finders and mature software solutions for registering multiple point clouds in a common coordinate system paved the way for new areas of application for 3D point clouds. Nowadays we see 3D laser scanners being used not only by digital surveying experts but also by law enforcement officials, construction workers or archaeologists. Whether the purpose is digitizing factory production lines, preserving historic sites as digital heritage or recording environments for gaming or virtual reality applications -- it is hard to imagine a scenario in which the final point cloud must also contain the points of "moving" objects like factory workers, pedestrians, cars or flocks of birds. For most post-processing tasks, moving objects are undesirable, not least because moving objects will appear in scans multiple times or are distorted due to their motion relative to the scanner rotation. The main contributions of this work are two postprocessing steps for already registered 3D point clouds. 
The first method is a new change detection approach based on a voxel grid which allows partitioning the input points into static and dynamic points using explicit change detection and subsequently removing the latter to obtain a "cleaned" point cloud. The second method uses this cleaned point cloud as input for detecting collisions between points of the environment point cloud and a point cloud of a model that is moved through the scene. Our approach to explicit change detection is compared to the state of the art using multiple datasets including the popular KITTI dataset. We show how our solution achieves similar or better F1-scores than an existing solution while at the same time being faster. To detect collisions we do not produce a mesh but approximate the raw point cloud data by spheres or cylindrical volumes. We show how our data structures allow efficient nearest neighbor queries that make our CPU-only approach comparable to a massively-parallel algorithm running on a GPU. The utilized algorithms and data structures are discussed in detail. All our software is freely available for download under the terms of the GNU General Public license. Most of the datasets used in this thesis are freely available as well. We provide shell scripts that allow one to directly reproduce the quantitative results shown in this thesis for easy verification of our findings. N2 - Kostengünstige Laserscanner und ausgereifte Softwarelösungen, um mehrere Punktwolken in einem gemeinsamen Koordinatensystem zu registrieren, ermöglichen neue Einsatzzwecke für 3D-Punktwolken. Heutzutage werden 3D-Laserscanner nicht nur von Expert*innen auf dem Gebiet der Vermessung genutzt, sondern auch von Polizist*innen, Bauarbeiter*innen oder Archäolog*innen. 
Unabhängig davon, ob der Einsatzzweck die Digitalisierung von Fabrikanlagen, der Erhalt von historischen Stätten als digitaler Nachlass oder die Erfassung einer Umgebung für Virtual-Reality-Anwendungen ist - es ist schwer, ein Szenario zu finden, in welchem die finale Punktwolke auch Punkte von sich bewegenden Objekten enthalten soll, wie zum Beispiel Fabrikarbeiter*innen, Passant*innen, Autos oder einen Schwarm Vögel. In den meisten Bearbeitungsschritten sind bewegte Objekte unerwünscht, und das nicht nur, weil sie mehrmals im gleichen Scan vorkommen oder aufgrund ihrer Bewegung relativ zur Scannerrotation verzerrt gemessen werden. Der Hauptbeitrag dieser Arbeit sind zwei Nachverarbeitungsschritte für registrierte 3D-Punktwolken. Die erste Methode ist ein neuer Ansatz zur Änderungserkennung basierend auf einem Voxelgitter, welcher es erlaubt, die Eingabepunktwolke in statische und dynamische Punkte zu segmentieren. Die zweite Methode nutzt die gesäuberte Punktwolke als Eingabe, um Kollisionen zwischen Punkten der Umgebung und der Punktwolke eines Modells, welches durch die Szene bewegt wird, zu erkennen. Unser Vorgehen zur expliziten Änderungserkennung wird mit dem aktuellen Stand der Technik unter Verwendung verschiedener Datensätze verglichen, inklusive des populären KITTI-Datensatzes. Wir zeigen, dass unsere Lösung ähnliche oder bessere F1-Werte als existierende Lösungen erreicht und gleichzeitig schneller ist. Um Kollisionen zu finden, erstellen wir kein Polygonnetz, sondern approximieren die Punkte mit Kugeln oder zylindrischen Volumen. Wir zeigen, wie unsere Datenstrukturen effiziente Nächste-Nachbarn-Suchen erlauben, die unsere CPU-Lösung mit einer massiv-parallelen Lösung für die GPU vergleichbar machen. Die benutzten Algorithmen und Datenstrukturen werden im Detail diskutiert. Die komplette Software ist frei verfügbar unter den Bedingungen der GNU General Public License. 
Die meisten unserer Datensätze, die in dieser Arbeit verwendet wurden, stehen ebenfalls zum freien Download zur Verfügung. Wir publizieren ebenfalls alle unsere Shell-Skripte, mit denen die quantitativen Ergebnisse, die in dieser Arbeit gezeigt werden, reproduziert und verifiziert werden können. T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 20 KW - Punktwolke KW - Änderungserkennung KW - 3d point clouds KW - collision detection KW - change detection KW - k-d tree KW - Dreidimensionale Bildverarbeitung Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-214285 SN - 978-3-945459-32-4 ER - TY - THES A1 - Borchert, Kathrin Johanna T1 - Estimating Quality of Experience of Enterprise Applications - A Crowdsourcing-based Approach T1 - Abschätzung der Quality of Experience von Geschäftsanwendungen - Ein crowdsourcing-basierter Ansatz N2 - Nowadays, employees have to work with applications, technical services, and systems every day for hours. Hence, performance degradation of such systems might be perceived negatively by the employees, increase frustration, and might also have a negative effect on their productivity. Assessing the application's performance in order to ensure its smooth operation is part of the application management. Within this process it is not sufficient to assess the system performance solely on the basis of technical performance parameters, e.g., response or loading times. These values have to be set in relation to the perceived performance quality on the user's side - the quality of experience (QoE). This dissertation focuses on the monitoring and estimation of the QoE of enterprise applications. As building models to estimate the QoE requires quality ratings from the users as ground truth, one part of this work addresses methods to collect such ratings. 
Besides the evaluation of approaches to improve the quality of the results of tasks and studies completed on crowdsourcing platforms, a general concept for monitoring and estimating QoE in enterprise environments is presented. Here, relevant design dimensions of subjective studies are identified and their impact on the QoE is evaluated and discussed. By considering the findings, a methodology for collecting quality ratings from employees during their regular work is developed. The method is realized by implementing a tool to conduct short surveys, which is deployed in a cooperating company. As a foundation for learning QoE estimation models, this work investigates the relationship between user-provided ratings and technical performance parameters. This analysis is based on a data set collected in a user study in a cooperating company during a time span of 1.5 years. Finally, two QoE estimation models are introduced and their performance is evaluated. N2 - Heutzutage sind Geschäftsanwendungen und technische Systeme aus dem Arbeitsalltag vieler Menschen nicht mehr wegzudenken. Kommt es bei diesen zu Performanzproblemen, wie etwa Verzögerungen im Netzwerk oder Überlast im Datenzentrum, kann sich dies negativ auf die Effizienz und Produktivität der Mitarbeiter auswirken. Daher ist es wichtig, aus Sicht der Betreiber die Performanz der Anwendungen und Systeme zu überwachen. Hierbei ist es allerdings nicht ausreichend, die Qualität lediglich anhand von technischen Performanzparametern wie Antwortzeiten zu beurteilen. Stattdessen sollten diese Werte in Relation zu der von den Mitarbeitern wahrgenommenen Performanz oder Quality of Experience (QoE) gesetzt werden. Diese Dissertation beschäftigt sich mit dem Monitoring und der Abschätzung der QoE von Geschäftsanwendungen. Neben der Präsentation eines generellen Konzepts zum Monitoring und der Abschätzung der QoE im Geschäftsumfeld befasst sich die Arbeit mit Aspekten der Erfassung von Qualitätsbewertungen durch die Nutzer. 
On the one hand, this comprises the evaluation of approaches for improving the quality of task and study results on crowdsourcing platforms. On the other hand, relevant dimensions of the design of studies for investigating the QoE of enterprise applications are identified, and their influence on the QoE is discussed and evaluated. Finally, a methodology for collecting quality ratings from employees during their regular work is presented, which was implemented and rolled out in a cooperating company. As a foundation for developing a QoE estimation model, this thesis investigates the relationship between user-provided ratings and technical performance parameters. The analysis is based on a data set that was collected in a study over 1.5 years in a cooperating company. Furthermore, two methods for estimating the QoE are presented and their performance is evaluated. T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 02/20 KW - Quality of Experience KW - Crowdsourcing KW - Monitoring KW - Enterprise-Resource-Planning KW - Quality of Experience KW - Crowdsourcing KW - User studies KW - Enterprise application KW - QoE estimation KW - Quality of Experience KW - Crowdsourcing KW - Nutzerstudien KW - Geschäftsanwendung KW - QoE-Abschätzung KW - Monitoring Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-216978 SN - 1432-8801 ER - TY - THES A1 - Borchers, Kai T1 - Decentralized and Pulse-based Clock Synchronization in SpaceWire Networks for Time-triggered Data Transfers T1 - Dezentralisierte und Puls-basierte Uhrensynchronisation in SpaceWire Netzwerken für zeitgesteuerten Datentransfer N2 - Time-triggered communication is widely used throughout several industry domains, primarily for reliable and real-time capable data transfers.
However, existing time-triggered technologies are designed for terrestrial usage and are not directly applicable to space applications due to the harsh environment. Instead, specific hardware must be developed to deal with thermal, mechanical, and especially radiation effects. SpaceWire, as an event-triggered communication technology, has been used for years in a large number of space missions. Its moderate complexity, heritage, and transmission rates of up to 400 Mbit/s are among its main advantages, and it is often without alternative for the on-board computing systems of spacecraft. At present, real-time data transfers are achieved either by prioritization inside SpaceWire routers or by applying a simplified time-triggered approach. These solutions imply problems either when they are used inside distributed on-board computing systems or when networks with more than a single router are required. This work provides a solution for the real-time problem by developing a novel clock synchronization approach. This approach focuses on being compatible with distributed system structures and allows time-triggered data transfers. A significant difference to existing technologies is the estimation of remote clocks by the use of pulses. They are transferred over the network and remove the need for latency accumulation, which allows the incorporation of standardized SpaceWire equipment. Additionally, local clocks are controlled in a decentralized manner and provide different correction capabilities in order to handle oscillator-induced uncertainties. All these functionalities are provided by a newly developed Network Controller (NC), which is able to isolate the attached network and to control accesses. N2 - Time-triggered data transmission is widespread in many industrial sectors, primarily for reliable and real-time capable communication. However, existing technologies are designed for terrestrial use and, due to the harsh environment, not directly applicable to space applications.
Instead, special hardware is developed to withstand radiation effects as well as thermal and mechanical stress. SpaceWire was developed as an event-triggered communication technology and has been used for years in a large number of space missions. Its successful use, manageable complexity, and transmission rates of up to 400 Mbit/s are some of its main advantages. Currently, real-time data transfers are achieved either by prioritization inside SpaceWire routers or by applying simplified time-triggered approaches. These solutions imply problems either in distributed system architectures or in SpaceWire networks with multiple routers. This thesis describes a clock synchronization approach that exploits specific properties of SpaceWire in order to solve the real-time problem. The approach is compatible with distributed system structures and enables time-triggered data transfers. KW - Datenübertragung KW - Field programmable gate array KW - FPGA KW - Formal verification KW - SpaceWire KW - Communication KW - Raumfahrttechnik KW - Verifikation KW - Hardware Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-215606 ER - TY - THES A1 - Hofmann, Jan T1 - Deep Reinforcement Learning for Configuration of Time-Sensitive-Networking T1 - Deep Reinforcement Learning zur Konfiguration von Time-Sensitive-Networking N2 - Reliable, deterministic real-time communication is fundamental to most industrial systems today. In many other domains, Ethernet has become the most common platform for communication networks, but for a long time it was unsuitable for satisfying the requirements of industrial networks. This has changed with the introduction of Time-Sensitive-Networking (TSN), a set of standards utilizing Ethernet to implement deterministic real-time networks.
This makes Ethernet a viable alternative to the expensive fieldbus systems commonly used in industrial environments. However, TSN is not a silver bullet. Industrial networks are a complex and highly dynamic environment, and the configuration of TSN, especially with respect to latency, is a challenging but crucial task. Various approaches have been pursued for the configuration of TSN in dynamic industrial environments. Optimization techniques like Linear Programming (LP) are able to determine an optimal configuration for a given network, but their time consumption increases exponentially with the complexity of the environment. Machine Learning (ML) has become widely popular in recent years and is able to approximate a near-optimal TSN configuration for networks of different complexity. Yet, ML models are usually trained in a supervised manner, which requires large amounts of data that have to be generated for the specific environment. Therefore, supervised methods are not scalable and do not adapt to changing dynamics of the network environment. To address these issues, this work proposes a Deep Reinforcement Learning (DRL) approach to the configuration of TSN in industrial networks. DRL combines two different disciplines, Deep Learning (DL) and Reinforcement Learning (RL), and has gained considerable traction in recent years due to breakthroughs in various domains. RL is supposed to autonomously learn a challenging task like the configuration of TSN without requiring any training data. The addition of DL makes it possible to apply well-studied RL methods to a complex environment such as dynamic industrial networks. There are two major contributions made in this work. In the first step, an interactive environment is proposed which allows for the simulation and configuration of industrial networks using basic TSN mechanisms. The environment provides an interface that allows applying various DRL methods to the problem of TSN configuration.
The second contribution of this work is an in-depth study on the application of two fundamentally different DRL methods to the proposed environment. Both methods are evaluated on networks of different complexity, and the results are compared to the ground truth and to the results of two supervised ML approaches. Ultimately, this work investigates whether DRL can adapt to changing dynamics of the environment in a more scalable manner than supervised methods. N2 - Reliable real-time networks play a central role in today's industrial environment. While Ethernet has established itself as the technology for communication networks in other application areas, industrial communication is to this day often still based on expensive fieldbus systems. With the introduction of Time-Sensitive-Networking (TSN), Ethernet was finally extended by a set of standards that satisfy the high requirements of real-time communication and are intended to establish Ethernet in industrial environments as well. However, for reliable communication, especially with regard to the transmission delay of data packets (latency), the correct configuration of TSN is crucial. Configuring dynamic networks is an optimization problem that poses various challenges. Methods such as linear programming deliver optimal results, but their time consumption grows exponentially with the size of the networks. Modern approaches such as Machine Learning (ML) can approximate an optimal solution, but usually require large amounts of data on which they are trained (supervised learning). This thesis investigates the application of Deep Reinforcement Learning (DRL) to the configuration of TSN. DRL combines Reinforcement Learning (RL), i.e., autonomous learning solely through interaction, with Deep Learning (DL), i.e., learning by means of deep neural networks.
The thesis describes how an environment for DRL for the simulation and configuration of industrial networks can be implemented, and investigates the application of two different DRL approaches to the problem of TSN configuration. Both methods were evaluated on two data sets of different complexity, and the results were compared both with the laboriously generated optimal solutions and with the results of two supervised learning approaches. It was shown that DRL can achieve optimal results on small networks and is overall able to outperform supervised learning in the configuration of TSN. Furthermore, the thesis demonstrated that DRL can quickly adapt to fundamental changes of the environment, which is only possible with supervised learning at considerable extra cost. KW - Reinforcement Learning KW - Time-Sensitive Networking KW - Deep Reinforcement Learning KW - Time-Sensitive-Networking KW - Real-Time-Networks KW - Bestärkendes Lernen KW - Echtzeit-Netzwerke Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-215953 ER - TY - THES A1 - Wick, Christoph T1 - Optical Medieval Music Recognition T1 - Optical Medieval Music Recognition N2 - In recent years, great progress has been made in the area of Artificial Intelligence (AI) due to the possibilities of Deep Learning, which has steadily yielded new state-of-the-art results, especially in many image recognition tasks. Currently, in some areas, human performance is achieved or already exceeded. This great development has already had an impact on the area of Optical Music Recognition (OMR), as several novel methods relying on Deep Learning succeeded in specific tasks. Musicologists are interested in large-scale musical analysis and in publishing digital transcriptions in a collection, enabling the development of tools for searching and data retrieval.
The application of OMR promises to simplify and thus speed up the transcription process by providing either fully automatic or semi-automatic approaches. This thesis focuses on the automatic transcription of Medieval music with a focus on square notation, which poses a challenging task due to complex layouts, highly varying handwritten notations, and degradation. However, since handwritten music notations are quite complex to read, even for an experienced musicologist, it is to be expected that even with new OMR techniques manual corrections are required to obtain the transcriptions. This thesis presents several new approaches and open-source software solutions for layout analysis and Automatic Text Recognition (ATR) of early documents and for OMR of Medieval manuscripts, providing state-of-the-art technology. Fully Convolutional Networks (FCN) are applied for the segmentation of historical manuscripts and early printed books, to detect staff lines, and to recognize neume notations. The ATR engine Calamari is presented, which allows for ATR of early prints and also the recognition of lyrics. Configurable CNN/LSTM network architectures, which are trained with the segmentation-free CTC loss, are applied to the sequential recognition of text, but also of monophonic music. Finally, a syllable-to-neume assignment algorithm is presented, which represents the final step towards obtaining a complete transcription of the music. The evaluations show that the performance of any algorithm depends highly on the material at hand and the number of training instances. The presented staff line detection correctly identifies staff lines and staves with an $F_1$-score of above $99.5\%$. The symbol recognition yields a diplomatic Symbol Accuracy Rate (dSAR) of above $90\%$, obtained by counting the number of correct predictions in the symbol sequence normalized by its length.
The ATR of lyrics achieved a Character Accuracy Rate (CAR) (equivalently, the number of correct predictions normalized by the sentence length) of above $93\%$ when trained on 771 lyric lines of Medieval manuscripts, and of $99.89\%$ when training on around 3.5 million lines of contemporary printed fonts. The assignment of syllables and their corresponding neumes reached $F_1$-scores of up to $99.2\%$. A direct comparison to previously published performances is difficult due to different materials and metrics. However, estimations show that the reported values of this thesis exceed the state of the art in the area of square notation. A further goal of this thesis is to enable musicologists without technical background to apply the developed algorithms in a complete workflow by providing a user-friendly and comfortable Graphical User Interface (GUI) encapsulating the technical details. For this purpose, this thesis presents the web application OMMR4all. Its fully functional workflow includes the proposed state-of-the-art machine learning algorithms and optionally allows for manual intervention at any stage to correct the output, preventing error propagation. To simplify the manual (post-)correction, OMMR4all provides an overlay editor that superimposes the annotations on a scan of the original manuscripts so that errors can easily be spotted. The workflow is designed to be iteratively improvable by training better models as soon as new Ground Truth (GT) is available. N2 - In recent years, great progress has been made in the area of Artificial Intelligence (AI) due to the possibilities of Deep Learning, which steadily achieved new best results, especially in many image processing tasks. Currently, human performance is reached or even exceeded in many areas.
These great developments have had an impact on the research area of Optical Music Recognition (OMR), as a wide variety of methods based on Deep Learning have been successful in specific tasks. Musicologists are interested in large-scale music analysis and in publishing digital transcriptions as collections, which enables the development of tools for searching and data acquisition. The application of OMR promises to simplify and speed up this transcription process by providing fully automatic or semi-automatic approaches. This thesis focuses on the automatic transcription of Medieval music with an emphasis on square notation, which poses a complex task due to the complex layouts, the highly varying notations, and the aging processes of the original manuscripts. However, since the handwritten music notations are hard to read even for experienced musicologists due to their complexity, it must be assumed that even with the latest OMR techniques manual corrections are required to obtain the transcription. This thesis presents several new approaches and open-source software solutions for layout analysis and Automatic Text Recognition (ATR) of early documents and for OMR of Medieval manuscripts, which reflect the current state of the art. Fully Convolutional Networks (FCN) are used for the segmentation of historical manuscripts and early printed books, for the detection of staff lines, and for the recognition of neume notations. The ATR engine Calamari, which enables ATR of early prints as well as the recognition of lyrics, is presented. Configurable CNN/LSTM network architectures, which are trained with the segmentation-free CTC loss, are used for the sequential recognition of text, but also of monophonic music.
Finally, a syllable-to-neume assignment algorithm is presented, which corresponds to the final step of obtaining a complete transcription of the music. The evaluations show that the performance of any algorithm depends highly on the material at hand and the number of training examples. The presented staff line detection recognizes staff lines and staves with an $F_1$-score of above 99.5%. The symbol recognition achieved a diplomatic Symbol Accuracy Rate (dSAR), which counts the number of correct predictions in the symbol sequence and normalizes it by the length, of above 90%. The ATR of lyrics achieved a Character Accuracy Rate (CAR) (equivalent to the number of correct predictions normalized by the sequence length) of above 93% when training on 771 lyric lines of Medieval manuscripts, and of 99.89% when training on 3.5 million lines of modern printed fonts. The assignment of syllables and their corresponding neumes reaches $F_1$-scores of above 99.2%. A direct comparison to previously published performances is difficult, however, since different materials and metrics were used for evaluation. Nevertheless, estimations show that the values of this thesis represent the current state of the art. A further goal of this thesis was to enable musicologists without technical background to apply the developed algorithms in a complete workflow by providing a user-friendly and comfortable Graphical User Interface (GUI) that encapsulates the technical details. For this purpose, this thesis presents the web application OMMR4all. Its fully functional workflow includes the presented state-of-the-art algorithms and optionally allows manual intervention at any step to correct the output in order to avoid consequential errors.
To simplify the manual (post-)correction, OMMR4all provides an overlay editor that superimposes the annotations on the scan of the original manuscript, whereby errors can easily be spotted. The design of the workflow allows iterative improvements, as new, better-performing models can be trained as soon as new Ground Truth (GT) is available. KW - Neumenschrift KW - Optische Zeichenerkennung (OCR) KW - Deep Learning KW - Optical Music Recognition KW - Neume Notation KW - Automatic Text Recognition KW - Optical Character Recognition KW - Deep Learning KW - Optische Musikerkennung (OMR) KW - Neumennotation KW - Automatische Texterkennung (ATR) Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-214348 ER - TY - JOUR A1 - Grohmann, Johannes A1 - Herbst, Nikolas A1 - Chalbani, Avi A1 - Arian, Yair A1 - Peretz, Noam A1 - Kounev, Samuel T1 - A Taxonomy of Techniques for SLO Failure Prediction in Software Systems JF - Computers N2 - Failure prediction is an important aspect of self-aware computing systems. Therefore, a multitude of different approaches has been proposed in the literature over the past few years. In this work, we propose a taxonomy for organizing works focusing on the prediction of Service Level Objective (SLO) failures. Our taxonomy classifies related work along the dimensions of the prediction target (e.g., anomaly detection, performance prediction, or failure prediction), the time horizon (e.g., detection or prediction, online or offline application), and the applied modeling type (e.g., time series forecasting, machine learning, or queueing theory). The classification is derived based on a systematic mapping of relevant papers in the area. Additionally, we give an overview of different techniques in each sub-group and address remaining challenges in order to guide future research.
KW - taxonomy KW - survey KW - failure prediction KW - anomaly prediction KW - anomaly detection KW - self-aware computing KW - self-adaptive systems KW - performance prediction Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-200594 SN - 2073-431X VL - 9 IS - 1 ER - TY - JOUR A1 - Kaiser, Dennis A1 - Lesch, Veronika A1 - Rothe, Julian A1 - Strohmeier, Michael A1 - Spieß, Florian A1 - Krupitzer, Christian A1 - Montenegro, Sergio A1 - Kounev, Samuel T1 - Towards Self-Aware Multirotor Formations JF - Computers N2 - In the present day, unmanned aerial vehicles are becoming more popular every year, but, without regulation of the increasing number of these vehicles, the air space could become chaotic and uncontrollable. In this work, a framework is proposed that combines self-aware computing with multirotor formations to address this problem. The self-awareness is envisioned to improve the dynamic behavior of multirotors. The implemented formation scheme is called platooning; it arranges vehicles in a string behind the lead vehicle and is proposed to bring order into chaotic air space. Since multirotors define a general category of unmanned aerial vehicles, the focus of this work is on quadcopters, platforms with four rotors. A modification of the LRA-M self-awareness loop is proposed and named Platooning Awareness. The implemented framework offers two flight modes that enable waypoint following and allow the self-awareness module to find a path to a goal position through scenarios where obstacles are present along the way. The evaluation of this work shows that the proposed framework is able to use self-awareness to learn about its environment, avoid obstacles, and successfully move a platoon of drones through multiple scenarios.
KW - self-aware computing KW - unmanned aerial vehicles KW - multirotors KW - quadcopters KW - intelligent transportation systems Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-200572 SN - 2073-431X VL - 9 IS - 1 ER - TY - THES A1 - Reul, Christian T1 - An Intelligent Semi-Automatic Workflow for Optical Character Recognition of Historical Printings T1 - Ein intelligenter semi-automatischer Workflow für die OCR historischer Drucke N2 - Optical Character Recognition (OCR) on historical printings is a challenging task, mainly due to the complexity of the layout and the highly variant typography. Nevertheless, in the last few years great progress has been made in the area of historical OCR, resulting in several powerful open-source tools for preprocessing, layout analysis and segmentation, Automatic Text Recognition (ATR), and postcorrection. Their major drawback is that they only offer limited applicability to non-technical users like humanist scholars, in particular when it comes to the combined use of several tools in a workflow. Furthermore, depending on the material, these tools are usually not able to fully automatically achieve sufficiently low error rates, let alone perfect results, creating a demand for an interactive postcorrection functionality which, however, is generally not incorporated. This thesis addresses these issues by presenting an open-source OCR software called OCR4all, which combines state-of-the-art OCR components and continuous model training into a comprehensive workflow. While a variety of materials can already be processed fully automatically, books with more complex layouts require manual intervention by the users. This is mostly due to the fact that the required Ground Truth (GT) for training stronger mixed models (for segmentation as well as text recognition) is not yet available, neither in the desired quantity nor quality.
To deal with this issue in the short run, OCR4all offers better recognition capabilities in combination with a very comfortable Graphical User Interface (GUI) that allows error corrections not only in the final output, but already in early stages, in order to minimize error propagation. In the long run, this constant manual correction produces large quantities of valuable, high-quality training material, which can be used to improve fully automatic approaches. Furthermore, extensive configuration capabilities are provided to set the degree of automation of the workflow and to adapt the carefully selected default parameters to specific printings, if necessary. The architecture of OCR4all allows for an easy integration (or substitution) of newly developed tools for its main components by supporting standardized interfaces like PageXML, thus aiming at continually higher automation for historical printings. In addition to OCR4all, several methodical extensions in the form of accuracy-improving techniques for training and recognition are presented. Most notably, an effective, sophisticated, and adaptable voting methodology using a single ATR engine, a pretraining procedure, and an Active Learning (AL) component are proposed. Experiments showed that combining pretraining and voting significantly improves the effectiveness of book-specific training, reducing the obtained Character Error Rates (CERs) by more than 50%. The proposed extensions were further evaluated during two real-world case studies: First, the voting and pretraining techniques were transferred to the task of constructing so-called mixed models, which are trained on a variety of different fonts. This was done using 19th-century Fraktur script as an example, resulting in a considerable improvement over a variety of existing open-source and commercial engines and models.
Second, the extension from ATR on raw text to the adjacent topic of typography recognition was successfully addressed by thoroughly indexing a historical lexicon that heavily relies on different font types in order to encode its complex semantic structure. During the main experiments on very complex early printed books, even users with minimal or no experience were able not only to comfortably deal with the challenges presented by the complex layout, but also to recognize the text with manageable effort and great quality, achieving excellent CERs below 0.5%. Furthermore, the fully automated application to 19th-century novels showed that OCR4all (average CER of 0.85%) can considerably outperform the commercial state-of-the-art tool ABBYY Finereader (5.3%) on moderate layouts if suitably pretrained mixed ATR models are available. N2 - Optical Character Recognition (OCR) on historical printings still poses a great challenge, mainly due to the frequently complex layouts and the highly variant typography. In recent years there has been great progress in the area of historical OCR, which is often also made freely available to interested users in the form of open-source tools. The drawback of these tools is that they can usually only be operated via the command line and thus quickly overwhelm non-technical users. Moreover, the tools are often not coordinated with each other and accordingly do not share common interfaces. This thesis addresses this problem by means of the open-source tool OCR4all, which combines various state-of-the-art OCR solutions into a coherent workflow and encapsulates them in a single application. Particular emphasis is placed on enabling even non-technical users to capture even the oldest and most demanding printings independently and with the highest quality.
OCR4all can be operated entirely via a comfortable graphical user interface and offers extensive capabilities regarding configuration and interactive postcorrection. In addition to OCR4all, several methodical extensions are presented in order to optimize the effectiveness and efficiency of the training and recognition processes for text recognition. Extensive evaluations showed that even users without any notable prior experience were able to apply OCR4all independently to complex historical printings and to achieve excellent character error rates of below 0.5% on average. The methodical improvements with regard to text recognition reduced the error rate by more than 50% compared to the established standard approach. KW - Optische Zeichenerkennung KW - Optical Character Recognition KW - Document Analysis KW - Historical Printings KW - Alter Druck Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-209239 ER - TY - THES A1 - Krug, Markus T1 - Techniques for the Automatic Extraction of Character Networks in German Historic Novels T1 - Techniken zur automatischen Extraktion von Figurennetzwerken aus deutschen Romanen N2 - Recent advances in Natural Language Processing (NLP) allow for a fully automatic extraction of character networks for an incoming text. These networks serve as a compact and easy-to-grasp representation of literary fiction. They offer an aggregated view of the text, which can be used during distant reading approaches for the analysis of literary hypotheses. At their core, the networks consist of nodes, which represent literary characters, and edges, which represent relations between characters. For the automatic extraction of such a network, the first step is the detection of the references of all fictional entities that are of importance for a text.
References to the fictional entities appear in the form of names, noun phrases, and pronouns, and prior to this work no components capable of automatically detecting character references were available. Existing tools are only capable of detecting proper nouns, a subset of all character references. When evaluated on the task of detecting proper nouns in the domain of literary fiction, they still underperform, with an F1-score of just about 50%. This thesis uses techniques from the field of semi-supervised learning, such as Distant Supervision and Generalized Expectations, and improves the results of an existing tool to about 82% when evaluated on all three categories in literary fiction, but without the need for annotated data in the target domain. However, since this quality is still not sufficient, the decision was made to annotate DROC, a corpus comprising 90 fragments of German novels. This resulted in a new general-purpose annotation environment titled ATHEN, as well as annotated data that spans about 500,000 tokens in total. Using this data, the combination of supervised algorithms and a tailored rule-based algorithm, which together are able to exploit both local and global consistencies, yields an algorithm with an F1-score of about 93%. This component is referred to as the Kallimachos tagger. A character network cannot directly display references, however; instead, they need to be clustered so that all references that belong to a real-world or fictional entity are grouped together. This process, widely known as coreference resolution, has been a hard problem in the focus of research for more than half a century. This work experimented with adaptations of classical feature-based machine learning, with a dedicated rule-based algorithm, and with modern techniques of Deep Learning, but no approach can surpass 55% B-Cubed F1 when evaluated on DROC.
Due to this barrier, many researchers do not use a fully-fledged coreference resolution when they extract character networks, but focus only on a more forgiving subset: the names. For novels such as Alice's Adventures in Wonderland by Lewis Carroll, this would, however, result in a network in which many important characters are missing. In order to integrate important characters into the network that are not named by the author, this work makes use of the automatic detection of speakers and addressees for direct speech utterances (all entities involved in a dialog are considered to be of importance). This is by itself not an easy task; however, the most successful system analysed in this thesis is able to correctly determine the speaker for about 85% of the utterances and the addressee for about 65%. This speaker information can not only help to identify the most dominant characters, but also serves as a way to model the relations between entities. During the span of this work, components have been developed to model relations between characters using speaker attribution, using co-occurrences, as well as using true interactions, for which yet again a dataset was annotated using ATHEN. Furthermore, since relations between characters are usually typed, a component for the extraction of typed relations was developed. Similar to the experiments for the character reference detection, a combination of a rule-based and a Maximum Entropy classifier yielded the best overall results, with the extraction of family relations reaching a score of about 80% and love relations a score of about 50%. For family relations, a kernel for a Support Vector Machine was developed that even exceeded the scores of the combined approach but falls behind on the other labels. In addition, this work presents new ways to evaluate automatically extracted networks without the need for domain experts; instead, it relies on the usage of expert summaries. 
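Of the relation-modelling strategies mentioned above, co-occurrence is the simplest. A minimal sketch of how weighted network edges could be derived from resolved references follows; the function, its input format and the window size are illustrative assumptions, not the Kallimachos implementation:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(entity_mentions, window=25):
    """Build weighted character-network edges from resolved references.

    `entity_mentions` maps an entity name to the (sorted) token positions of
    its references. Two entities receive an edge whose weight counts how often
    a pair of their references lies within `window` tokens of each other.
    """
    edges = Counter()
    for a, b in combinations(sorted(entity_mentions), 2):
        weight = sum(
            1
            for pa in entity_mentions[a]
            for pb in entity_mentions[b]
            if abs(pa - pb) <= window
        )
        if weight:
            edges[(a, b)] = weight
    return edges
```

With coreference-resolved mentions as input, pronoun and noun-phrase references contribute to the edge weights just like names, which is exactly what a names-only network misses.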
It also refrains from using social network analysis for the evaluation and instead presents ranked evaluations using Precision@k and the Spearman rank correlation coefficient for the evaluation of the nodes and edges of the network. An analysis using these metrics showed that the central characters of a novel are contained with high probability, but the quality drops rather quickly if more than five entities are analyzed. The quality of the edges is mainly dominated by the quality of the coreference resolution, and the correlation coefficient between gold edges and system edges therefore varies between 30 and 60%. All developed components are aggregated alongside a large set of other preprocessing modules in the Kallimachos pipeline and can be reused without any restrictions. N2 - Techniken zur automatischen Extraktion von Figurennetzwerken aus deutschen Romanen KW - Textanalyse KW - Character Networks KW - Coreference KW - Character Reference Detection KW - Relation Detection KW - Quotation Attribution KW - Netzwerkanalyse KW - Digital Humanities KW - Netzwerk Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-209186 ER - TY - RPRT A1 - Blenk, Andreas A1 - Kellerer, Wolfgang A1 - Hoßfeld, Tobias T1 - Technical Report on DFG Project SDN-App: SDN-enabled Application-aware Network Control Architectures and their Performance Assessment N2 - The DFG project “SDN-enabled Application-aware Network Control Architectures and their Performance Assessment” (DFG SDN-App) focused in phase 1 (Jan 2017 – Dec 2019) on software defined networking (SDN). Being a fundamental paradigm shift, SDN enables remote control of networking devices made by different vendors from a logically centralized controller. In principle, this enables a more dynamic and flexible management of network resources compared to traditional legacy networks. Phase 1 focused on multimedia applications and their users’ Quality of Experience (QoE). 
This document reports the achievements of the first phase (Jan 2017 – Dec 2019), which was jointly carried out by the Technical University of Munich, Technical University of Berlin, and University of Würzburg. The project started at the institutions in Munich and Würzburg in January 2017 and lasted until December 2019. In Phase 1, the project targeted the development of fundamental control mechanisms for network-aware application control and application-aware network control in Software Defined Networks (SDN) so as to enhance the user-perceived quality (QoE). The idea is to leverage the QoE from multiple applications as a control input parameter for application- and network-control mechanisms. These mechanisms are implemented by an Application Control Plane (ACP) and a Network Control Plane (NCP). In order to obtain a global view of the current system state, application and network parameters are monitored and communicated to the respective control plane interface. Network and application information and their demands are exchanged between the control planes so as to derive appropriate control actions. To this end, a methodology is developed to assess the application performance and in particular the QoE. This requires an appropriate QoE modeling of the applications considered in the project, as well as metrics like QoE fairness to be utilized within QoE management. In summary, the application-network interaction can improve the QoE for multi-application scenarios. This is ensured by utilizing information from the application layer, which is mapped by appropriate QoS-QoE models to QoE within the network control plane. On the other hand, network information is monitored and communicated to the application control plane. 
KW - Software-defined networking KW - Quality of Experience KW - SDN KW - QoE Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-207558 ER - TY - THES A1 - Djebko, Kirill T1 - Quantitative modellbasierte Diagnose am Beispiel der Energieversorgung des SONATE-Nanosatelliten mit automatisch kalibrierten Modellkomponenten T1 - Quantitative model-based diagnosis using the example of the power supply of the SONATE nanosatellite with automatically calibrated model components N2 - Von technischen Systemen wird in der heutigen Zeit erwartet, dass diese stets fehlerfrei funktionieren, um einen reibungslosen Ablauf des Alltags zu gewährleisten. Technische Systeme jedoch können Defekte aufweisen, die deren Funktionsweise einschränken oder zu deren Totalausfall führen können. Grundsätzlich zeigen sich Defekte durch eine Veränderung im Verhalten von einzelnen Komponenten. Diese Abweichungen vom Nominalverhalten nehmen dabei an Intensität zu, je näher die entsprechende Komponente an einem Totalausfall ist. Aus diesem Grund sollte das Fehlverhalten von Komponenten rechtzeitig erkannt werden, um permanenten Schaden zu verhindern. Von besonderer Bedeutung ist dies für die Luft- und Raumfahrt. Bei einem Satelliten kann keine Wartung seiner Komponenten durchgeführt werden, wenn er sich bereits im Orbit befindet. Der Defekt einer einzelnen Komponente, wie der Batterie der Energieversorgung, kann hierbei den Verlust der gesamten Mission bedeuten. Grundsätzlich lässt sich Fehlererkennung manuell durchführen, wie es im Satellitenbetrieb oft üblich ist. Hierfür muss ein menschlicher Experte, ein sogenannter Operator, das System überwachen. Diese Form der Überwachung ist allerdings stark von der Zeit, Verfügbarkeit und Expertise des Operators, der die Überwachung durchführt, abhängig. Ein anderer Ansatz ist die Verwendung eines dedizierten Diagnosesystems. Dieses kann das technische System permanent überwachen und selbstständig Diagnosen berechnen. 
Die Diagnosen können dann durch einen Experten eingesehen werden, der auf ihrer Basis Aktionen durchführen kann. Das in dieser Arbeit vorgestellte modellbasierte Diagnosesystem verwendet ein quantitatives Modell eines technischen Systems, das dessen Nominalverhalten beschreibt. Das beobachtete Verhalten des technischen Systems, gegeben durch Messwerte, wird mit seinem erwarteten Verhalten, gegeben durch simulierte Werte des Modells, verglichen und Diskrepanzen bestimmt. Jede Diskrepanz ist dabei ein Symptom. Diagnosen werden dadurch berechnet, dass zunächst zu jedem Symptom eine sogenannte Konfliktmenge berechnet wird. Dies ist eine Menge von Komponenten, sodass der Defekt einer dieser Komponenten das entsprechende Symptom erklären könnte. Mithilfe dieser Konfliktmengen werden sogenannte Treffermengen berechnet. Eine Treffermenge ist eine Menge von Komponenten, sodass der gleichzeitige Defekt aller Komponenten dieser Menge alle beobachteten Symptome erklären könnte. Jede minimale Treffermenge entspricht dabei einer Diagnose. Zur Berechnung dieser Mengen nutzt das Diagnosesystem ein Verfahren, bei dem zunächst abhängige Komponenten bestimmt werden und diese von symptombehafteten Komponenten belastet und von korrekt funktionierenden Komponenten entlastet werden. Für die einzelnen Komponenten werden Bewertungen auf Basis dieser Be- und Entlastungen berechnet und mit ihnen Diagnosen gestellt. Da das Diagnosesystem auf ausreichend genaue Modelle angewiesen ist und die manuelle Kalibrierung dieser Modelle mit erheblichem Aufwand verbunden ist, wurde ein Verfahren zur automatischen Kalibrierung entwickelt. Dieses verwendet einen Zyklischen Genetischen Algorithmus, um mithilfe von aufgezeichneten Werten der realen Komponenten Modellparameter zu bestimmen, sodass die Modelle die aufgezeichneten Daten möglichst gut reproduzieren können. 
Zur Evaluation der automatischen Kalibrierung wurden ein Testaufbau und verschiedene dynamische und manuell schwierig zu kalibrierende Komponenten des Qualifikationsmodells eines realen Nanosatelliten, des SONATE-Nanosatelliten, modelliert und kalibriert. Der Testaufbau bestand dabei aus einem Batteriepack, einem Laderegler, einem Tiefentladeschutz, einem Entladeregler, einem Stepper Motor HAT und einem Motor. Er wurde zusätzlich zur automatischen Kalibrierung unabhängig manuell kalibriert. Die automatisch kalibrierten Satellitenkomponenten waren ein Reaktionsrad, ein Entladeregler, Magnetspulen, bestehend aus einer Ferritkernspule und zwei Luftspulen, eine Abschlussleiterplatine und eine Batterie. Zur Evaluation des Diagnosesystems wurde die Energieversorgung des Qualifikationsmodells des SONATE-Nanosatelliten modelliert. Für die Batterien, die Entladeregler, die Magnetspulen und die Reaktionsräder wurden die vorher automatisch kalibrierten Modelle genutzt. Für das Modell der Energieversorgung wurden Fehler simuliert und diese diagnostiziert. Die Ergebnisse der Evaluation der automatischen Kalibrierung waren, dass die automatische Kalibrierung eine mit der manuellen Kalibrierung vergleichbare Genauigkeit für den Testaufbau lieferte und diese sogar leicht übertraf und dass die automatisch kalibrierten Satellitenkomponenten eine durchweg hohe Genauigkeit aufwiesen und damit für den Einsatz im Diagnosesystem geeignet waren. Die Ergebnisse der Evaluation des Diagnosesystems waren, dass die simulierten Fehler zuverlässig gefunden wurden und dass das Diagnosesystem in der Lage war, die plausiblen Ursachen dieser Fehler zu diagnostizieren. N2 - In today's world, technical systems are expected to always work faultlessly to ensure that everyday life runs smoothly. However, technical systems can have defects that limit their functionality or that can lead to their complete failure. In general, defects become apparent through a change in the behavior of individual components. 
These deviations from the nominal behavior increase in intensity the closer the corresponding component is to a complete failure. For this reason, the malfunction of components should be recognized in time to prevent permanent damage. This is of particular importance for the field of aerospace. When a satellite is already in orbit, no physical maintenance of its components can be performed. The failure of a single component, such as the battery of the power supply, can cause the loss of the entire mission. In principle, fault detection can be carried out manually, as is often the case in satellite operation. For this, a human expert, a so-called operator, has to monitor the system. However, this form of monitoring is heavily dependent on the time, availability and expertise of the operator who performs the monitoring. A different approach is to use a dedicated diagnostic system. Such a diagnostic system can continuously monitor the technical system and calculate diagnoses autonomously. These diagnoses can then be viewed by an expert who can perform actions based on them. The model-based diagnostic system presented in this work uses a quantitative model of a technical system that describes its nominal behavior. The observed behavior of the technical system, given by measured values, is compared with its expected behavior, given by simulated values of the model, and discrepancies are determined. Every discrepancy is a symptom. Diagnoses are calculated by first computing a so-called conflict set for each symptom. This is a set of components such that the failure of any single one of these components could explain the corresponding symptom. Using these conflict sets, so-called hitting sets are computed. A hitting set is a set of components such that the simultaneous defect of all components in this set could explain all the observed symptoms. Each minimal hitting set corresponds to a single diagnosis. 
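The conflict-to-diagnosis step described above can be illustrated with a naive enumeration of minimal hitting sets. This is a generic sketch, not the scoring-based method the thesis actually uses (described next); `minimal_hitting_sets` is a hypothetical helper:

```python
from itertools import combinations

def minimal_hitting_sets(conflict_sets):
    """Enumerate all minimal hitting sets (candidate diagnoses) by brute force.

    A hitting set intersects every conflict set; each minimal one is a
    diagnosis. Exponential in the number of components, so only suitable
    for small systems. `conflict_sets` must be non-empty.
    """
    components = sorted(set().union(*conflict_sets))
    hitting = []
    for size in range(1, len(components) + 1):
        for candidate in combinations(components, size):
            cand = set(candidate)
            # the candidate must intersect every conflict set ...
            if not all(cand & c for c in conflict_sets):
                continue
            # ... and must not contain a smaller hitting set already found,
            # which guarantees minimality since sizes grow monotonically
            if any(h <= cand for h in hitting):
                continue
            hitting.append(cand)
    return hitting
```

For example, with conflict sets {A, B} and {B, C}, the minimal hitting sets are {B} (one double-hitting defect) and {A, C} (two simultaneous defects).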
To compute these sets, the diagnostic system uses a method in which dependent components are determined first. These components are then suspected by symptomatic components and relieved by correctly functioning components. For the individual components, scores are calculated on the basis of these suspicions and reliefs, and diagnoses are derived from them. Since the diagnostic system relies on sufficiently accurate models and the manual calibration of these models involves considerable effort, a procedure for automatic calibration was developed. This procedure uses a cyclic genetic algorithm to determine model parameters from recorded values of the real components, so that the models can reproduce the recorded data as well as possible. To evaluate the automatic calibration, a test setup as well as various dynamic components of the qualification model of a real nanosatellite, the SONATE nanosatellite, that are difficult to calibrate manually were modeled and calibrated. The test setup consisted of a battery pack, a charge controller, a deep discharge protection unit, a discharge controller, a stepper motor HAT and a motor. In addition to the automatic calibration, it was independently calibrated manually. The automatically calibrated satellite components were a reaction wheel, a discharge controller, magnetorquers, consisting of a ferrite core coil and two air core coils, a termination board and a battery. To evaluate the diagnostic system, the power supply of the qualification model of the SONATE nanosatellite was modeled. For the batteries, the discharge controllers, the magnetorquers and the reaction wheels, the previously automatically calibrated models were used. For the model of the power supply, faults were simulated and diagnosed. 
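The calibration step can be pictured with a plain genetic algorithm fitting model parameters to recorded samples. This is a simplified sketch, not the cyclic variant developed in the thesis; `calibrate`, its operators and its rates are illustrative assumptions:

```python
import random

def calibrate(model, recorded, bounds, pop_size=40, generations=60, seed=0):
    """Fit model parameters to recorded (input, output) samples with a plain
    genetic algorithm: tournament selection, blend crossover, Gaussian mutation.
    `model(params, x)` predicts an output; `bounds` gives (lo, hi) per parameter.
    """
    rng = random.Random(seed)

    def error(params):
        # fitness: sum of squared deviations from the recorded data
        return sum((model(params, x) - y) ** 2 for x, y in recorded)

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            # two 3-way tournaments pick the parents
            a = min(rng.sample(pop, 3), key=error)
            b = min(rng.sample(pop, 3), key=error)
            # blend crossover: each gene drawn between the parents' values
            child = [rng.uniform(min(x, y), max(x, y)) for x, y in zip(a, b)]
            if rng.random() < 0.3:
                # Gaussian mutation of one gene, clamped to its bounds
                i = rng.randrange(len(bounds))
                lo, hi = bounds[i]
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, (hi - lo) / 10)))
            nxt.append(child)
        pop = nxt
    return min(pop, key=error)
```

The appeal of such black-box calibration is that only recorded input/output traces and parameter bounds are needed, no gradients of the (possibly discontinuous) component model.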
The results of the evaluation of the automatic calibration were that the automatic calibration provided a level of accuracy that was comparable to, and even slightly exceeded, that of manual calibration for the test setup, and that the automatically calibrated satellite components were consistently of high accuracy and were therefore suitable for use in the diagnostic system. The results of the evaluation of the diagnostic system were that the simulated faults were found reliably and that the diagnostic system was able to diagnose the plausible causes of these faults. KW - Satellit KW - Energieversorgung KW - Modellbasierte Diagnose KW - Diagnosesystem KW - Automatisches Kalibrieren Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-206628 ER - TY - RPRT A1 - Grigorjew, Alexej A1 - Metzger, Florian A1 - Hoßfeld, Tobias A1 - Specht, Johannes A1 - Götz, Franz-Josef A1 - Chen, Feng A1 - Schmitt, Jürgen T1 - Asynchronous Traffic Shaping with Jitter Control N2 - Asynchronous Traffic Shaping (ATS) enables bounded latency with low complexity for time-sensitive networking without the need for time synchronization. However, its main focus is the guaranteed maximum delay. Jitter-sensitive applications may still be forced towards synchronization. This work proposes traffic damping to reduce end-to-end delay jitter. It discusses its application and shows that both the prerequisites and the guaranteed delay of traffic damping and ATS are very similar. Finally, it presents a brief evaluation of delay jitter in an example topology by means of a simulation and a worst-case estimation. 
KW - Echtzeit KW - Rechnernetz KW - Latenz KW - Ethernet KW - TSN KW - jitter KW - traffic damping Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-205824 ER - TY - RPRT A1 - Metzger, Florian T1 - Crowdsensed QoE for the community - a concept to make QoE assessment accessible N2 - In recent years several community testbeds as well as participatory sensing platforms have successfully established themselves to provide open data to everyone interested, each with a specific goal in mind, ranging from collecting radio coverage data up to environmental and radiation data. Such data can be used by the community in their decision making, whether in subscribing to a specific mobile phone service that provides good coverage in an area or in finding a sunny and warm region for the summer holidays. However, the existing platforms usually limit themselves to directly measurable network QoS. If such a crowdsourced data set provided more in-depth derived measures, this would enable even better decision making. A community-driven crowdsensing platform that derives spatial application-layer user experience from resource-friendly bandwidth estimates would be such a case; video streaming services come to mind as a prime example. In this paper we present a concept for such a system based on an initial prototype that eases the collection of data necessary to determine mobile-specific QoE at large scale. In addition, we reason why the simple quality metric proposed here can hold its own. KW - Quality of Experience KW - Crowdsourcing KW - Crowdsensing KW - QoE Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-203748 N1 - Originally written in 2017, but never published. ER - TY - RPRT ED - Hoßfeld, Tobias ED - Wunderer, Stefan T1 - White Paper on Crowdsourced Network and QoE Measurements – Definitions, Use Cases and Challenges N2 - The goal of the white paper at hand is as follows. 
The definitions of the terms build a framework for discussions around the hype topic ‘crowdsourcing’. This serves as a basis for differentiation and a consistent view from different perspectives on crowdsourced network measurements, with the goal of providing a commonly accepted definition in the community. The focus is on the context of mobile and fixed network operators, but also on measurements of different layers (network, application, user layer). In addition, the white paper shows the value of crowdsourcing for selected use cases, e.g., to improve QoE or to address regulatory issues. Finally, the major challenges and issues for researchers and practitioners are highlighted. This white paper is the outcome of the Würzburg seminar on “Crowdsourced Network and QoE Measurements”, which took place on 25–26 September 2019 in Würzburg, Germany. International experts were invited from industry and academia. They are well known in their communities, having different backgrounds in crowdsourcing, mobile networks, network measurements, network performance, Quality of Service (QoS), and Quality of Experience (QoE). The discussions in the seminar focused on how crowdsourcing will support vendors, operators, and regulators in determining the Quality of Experience in new 5G networks that enable various new applications and network architectures. As a result of the discussions, the need for a white paper became apparent, with the goal of providing a scientific discussion of the terms “crowdsourced network measurements” and “crowdsourced QoE measurements”, describing relevant use cases for such crowdsourced data, and outlining the underlying challenges. During the seminar, these main topics were identified, intensively discussed in break-out groups, and brought back into the plenum several times. The outcome of the seminar is the white paper at hand, which is – to our knowledge – the first one covering the topic of crowdsourced network and QoE measurements. 
KW - Crowdsourcing KW - Network Measurements KW - Quality of Service (QoS) KW - Quality of Experience (QoE) KW - crowdsourced network measurements KW - crowdsourced QoE measurements Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-202327 ER - TY - THES A1 - Azar, Isabel T1 - Konzeption und Evaluation eines webbasierten Patienteninformationsprogrammes zur Überprüfung internistischer Verdachtsdiagnosen T1 - Conception and evaluation of a web-based patient information program for verification of internal suspected diagnoses N2 - Das Thema dieser Dissertation lautet „Konzeption und Evaluation eines webbasierten Patienteninformationsprogrammes zur Überprüfung internistischer Verdachtsdiagnosen“. Zusammen mit dem Institut für Informatik wurde das wissensbasierte Second-Opinion-System SymptomCheck entwickelt. Das Programm dient zur Überprüfung von Verdachtsdiagnosen. Es wurden Wissensbasen erstellt, in denen Symptome, Befunde und Untersuchungen nach einem Bewertungsschema beurteilt werden. Anschließend wurde eine online erreichbare Startseite erstellt, auf der Nutzer vornehmlich internistische Verdachtsdiagnosen überprüfen können. Das Programm wurde in zwei Studien bezüglich seiner Sensitivität und Spezifität sowie der Benutzerfreundlichkeit getestet. In der ersten Studie wurden die Verdachtsdiagnosen ambulanter Patienten mit den ärztlich gestellten Diagnosen verglichen; eine zweite, an die Allgemeinbevölkerung gerichtete Onlinestudie galt vor allem der Bewertung der Benutzerfreundlichkeit. Soweit bekannt, ist dies die erste Studie, in der ein selbst entwickeltes Programm selbstständig an echten Patienten getestet wurde. N2 - The topic of this dissertation is "Conception and evaluation of a web-based patient information program for verification of internal suspected diagnoses". The second-opinion system SymptomCheck was developed in cooperation with the Institute of Computer Science. The program is used to verify suspected diagnoses. 
Knowledge bases were created in which symptoms, medical findings and examinations are evaluated based on an evaluation scheme. Following this, an online homepage was built which allows users to verify suspected diagnoses, primarily from internal medicine. The program was tested in two studies regarding its sensitivity and specificity as well as its usability. In the first study, the suspected diagnoses were compared to the medical diagnoses of ambulatory patients; a second study was directed at the general population to evaluate its usability. As far as is known, this is the first study to independently evaluate a self-developed program on real patients. KW - Entscheidungsunterstützungssystem KW - Verdachtsüberprüfung KW - verification of suspected diagnoses Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-199641 ER - TY - THES A1 - Roth, Daniel T1 - Intrapersonal, Interpersonal, and Hybrid Interactions in Virtual Reality T1 - Intrapersonelle, Interpersonelle und Hybride Interaktionen in Virtual Reality N2 - Virtual reality and related media and communication technologies have a growing impact on professional application fields and our daily life. Virtual environments have the potential to change the way we perceive ourselves and how we interact with others. In comparison to other technologies, virtual reality allows for the convincing display of a virtual self-representation, an avatar, to oneself and also to others. This is referred to as user embodiment. Avatars can be of varying realism and abstraction in their appearance and in the behaviors they convey. Such user-embodying interfaces, in turn, can impact the perception of the self as well as the perception of interactions. For researchers, designers, and developers it is of particular interest to understand these perceptual impacts, to apply them to therapy, assistive applications, social platforms, or games, for example. 
The present thesis investigates and relates these impacts with regard to three areas: intrapersonal effects, interpersonal effects, and effects of social augmentations provided by the simulation. With regard to intrapersonal effects, we specifically explore which simulation properties impact the illusion of owning and controlling a virtual body, as well as a perceived change in body schema. Our studies led to the construction of an instrument to measure these dimensions, and our results indicate that these dimensions are especially affected by the level of immersion, the simulation latency, as well as the level of personalization of the avatar. With regard to interpersonal effects, we compare physical and user-embodied social interactions, as well as different degrees of freedom in the replication of nonverbal behavior. Our results suggest that functional levels of interaction are maintained, whereas aspects of presence can be affected by avatar-mediated interactions, and collaborative motor coordination can be disturbed by immersive simulations. Social interaction is composed of many unknown symbols and harmonic patterns that define our understanding and interpersonal rapport. For successful virtual social interactions, a mere replication of physical-world behaviors in virtual environments may seem feasible. However, the potential of mediated social interactions goes beyond this mere replication. In a third vein of research, we propose and evaluate alternative concepts on how computers can be used to actively engage in mediating social interactions, namely hybrid avatar-agent technologies. Specifically, we investigated the possibilities of augmenting social behaviors by modifying and transforming user input according to social phenomena and behavior, such as nonverbal mimicry, directed gaze, joint attention, and grouping. 
Based on our results we argue that such technologies could be beneficial for computer-mediated social interactions, for example to compensate for lacking sensory input and disturbances in data transmission, or to increase aspects of social presence by visual substitution or amplification of social behaviors. Based on related work and the presented findings, the present thesis proposes the perspective of considering computers as social mediators. Concluding from prototypes and empirical studies, the potential of technology to be an active mediator of social perception with regard to the perception of the self, as well as the perception of social interactions, may benefit our society by enabling further methods for diagnosis, treatment, and training, as well as the inclusion of individuals with social disorders. In this regard, we discuss implications for our society and ethical aspects. This thesis extends previous empirical work and further presents novel instruments, concepts, and implications to open up new perspectives for the development of virtual reality, mixed reality, and augmented reality applications. N2 - Virtual Reality und weitere Medien- und Kommunikationstechnologien haben einen wachsenden Einfluss auf professionelle Anwendungsbereiche und unseren Alltag. Virtuelle Umgebungen haben das Potenzial, Einfluss darauf zu nehmen, wie Menschen sich selbst wahrnehmen und wie sie mit anderen umgehen. Im Vergleich zu anderen Technologien ermöglicht Virtual Reality die überzeugende Visualisierung einer virtuellen Selbstdarstellung, eines Avatars, sichtbar für den Nutzer/die Nutzerin selbst, aber auch für andere. Dies bezeichnet man als Nutzerverkörperung. Avatare können von unterschiedlichem Realismus und Abstraktion in Bezug auf ihr Aussehen sowie die Darstellung von Verhaltensweisen geprägt sein. Solche nutzerverkörpernden Schnittstellen wiederum können die Wahrnehmung des Selbst sowie die Wahrnehmung von Interaktionen beeinflussen. 
Für Forscher/-innen, Designer/-innen und Entwickler/-innen ist es von besonderem Interesse, diese Wahrnehmungseffekte zu verstehen, um sie beispielsweise auf Therapie, assistive Anwendungen, soziale Plattformen oder Spiele anzuwenden. Die vorliegende Arbeit untersucht und bezieht sich auf diese Auswirkungen in drei Bereichen: intrapersonelle Effekte, zwischenmenschliche Effekte sowie Effekte durch soziale Augmentierungen, die durch die Simulation bereitgestellt werden. Im Hinblick auf intrapersonelle Effekte widmet sich die vorliegende Arbeit insbesondere der Frage, welche Simulationseigenschaften die Illusion des Besitzens/Innehabens und der Kontrolle eines virtuellen Körpers sowie eine wahrgenommene Veränderung des Körperschemas beeinflussen. Die vorgestellten Studien führen zur Konstruktion eines Instruments zur Erfassung dieser Dimensionen und die Ergebnisse zeigen, dass die empfundene Verkörperung besonders von dem Grad der Immersion, der Simulationslatenz sowie dem Grad der Personalisierung des Avatars abhängt. Im Hinblick auf zwischenmenschliche Effekte vergleicht diese Dissertation physische (realweltliche) und virtuelle soziale Interaktionen sowie unterschiedliche Freiheitsgrade in der Replikation nonverbalen Verhaltens. Die Ergebnisse deuten darauf hin, dass die funktionalen Ebenen der Interaktion aufrechterhalten werden, während Aspekte der Präsenz durch avatarvermittelte Interaktionen beeinflusst werden und die kollaborative motorische Koordination durch immersive Simulationen gestört werden kann. Die soziale Interaktion besteht aus vielen unbekannten Symbolen und harmonischen Mustern, die das menschliche Verständnis und zwischenmenschliche Beziehungen definieren. Für erfolgreiche virtuelle soziale Interaktionen mag eine bloße Replikation von physikalischen Weltverhaltensweisen auf virtuelle Umgebungen möglich erscheinen. Das Potenzial computervermittelter sozialer Interaktionen geht jedoch über diese bloße Replikation hinaus. 
Im dritten Bereich dieser Arbeit werden alternative Konzepte vorgeschlagen und evaluiert, wie Computer genutzt werden können, um eine aktive Rolle in sozialen Interaktionen einzunehmen. Diese Technologien werden als hybride Avatar-Agenten-Technologien definiert. Insbesondere wird untersucht, welche Möglichkeiten entstehen, das soziale Verhalten zu erweitern, indem die Verhaltensweisen der Benutzer/-innen entsprechend sozialer Phänomene und Verhaltensweisen modifiziert und transformiert werden. Beispiele sind die nonverbale Spiegelung, der Fokus des Blicks, eine gemeinsame Aufmerksamkeit und die Gruppenbildung. Basierend auf den Ergebnissen argumentiert diese Arbeit, dass solche Technologien für computervermittelte soziale Interaktionen von Vorteil sein könnten, beispielsweise zum Ausgleich fehlender Sensorik, bei Störungen der Datenübertragung oder zur Verbesserung sozialer Präsenz durch visuelle Substitution oder Verstärkung des sozialen Verhaltens. Basierend auf verwandten Arbeiten und den präsentierten Ergebnissen wird abgeleitet, dass Computer als soziale Mediatoren fungieren können. Ausgehend von Prototypen und empirischen Studien kann das Potenzial der Technologie, ein aktiver Vermittler in Bezug auf die Wahrnehmung des Selbst sowie die Wahrnehmung sozialer Interaktionen zu sein, unserer Gesellschaft zugutekommen. Dadurch können beispielsweise weitere Methoden zur Diagnose, Behandlung und Ausbildung sowie zur Inklusion von Menschen mit sozialen Störungen ermöglicht werden. In diesem Zusammenhang werden die Auswirkungen auf unsere Gesellschaft und ethische Aspekte diskutiert. Diese Arbeit erweitert frühere empirische Arbeiten und präsentiert darüber hinaus neue Instrumente, Konzepte und Implikationen, um neue Perspektiven für die Entwicklung von Virtual-Reality-, Mixed-Reality- und Augmented-Reality-Anwendungen zu beleuchten. 
KW - Virtuelle Realität KW - Mensch-Maschine-Kommunikation KW - virtual embodiment KW - virtual social interaction KW - hybrid avatar-agent systems KW - collaborative interaction KW - avatars KW - virtual reality KW - augmented reality KW - social artificial intelligence KW - Avatar KW - Künstliche Intelligenz Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-188627 ER - TY - RPRT A1 - Grigorjew, Alexej A1 - Metzger, Florian A1 - Hoßfeld, Tobias A1 - Specht, Johannes A1 - Götz, Franz-Josef A1 - Schmitt, Jürgen A1 - Chen, Feng T1 - Technical Report on Bridge-Local Guaranteed Latency with Strict Priority Scheduling N2 - Bridge-local latency computation is often regarded with caution, as historic efforts with the Credit-Based Shaper (CBS) showed that CBS requires network-wide information for tight bounds. Recently, new shaping mechanisms and timed gates were applied to achieve such guarantees nonetheless, but they require support for these new mechanisms in the forwarding devices. This document presents a per-hop latency bound for individual streams in a class-based network that applies the IEEE 802.1Q strict priority transmission selection algorithm. It is based on self-pacing talkers and uses the accumulated latency fields during the reservation process to provide upper bounds with bridge-local information. The presented delay bound is proven mathematically and then evaluated with respect to its accuracy. It indicates the required information that must be provided for admission control, e.g., implemented by a resource reservation protocol such as IEEE 802.1Qdd. Further, it hints at potential improvements regarding new mechanisms and higher accuracy given more information. KW - Echtzeit KW - Rechnernetz KW - Latenz KW - Ethernet KW - Latency Bound KW - Formal analysis Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-198310 ER -