TY - CHAP ED - von Mammen, Sebastian ED - Klemke, Roland ED - Lorber, Martin T1 - Proceedings of the 1st Games Technology Summit BT - part of Clash of Realities 11th International Conference on the Technology and Theory of Digital Games N2 - As part of the Clash of Realities International Conference on the Technology and Theory of Digital Games, the Games Technology Summit is a premium venue to bring together experts from academia and industry to disseminate state-of-the-art research on trending technology topics in digital games. In this first iteration of the Games Technology Summit, we specifically paid attention to how the successes of AI in Natural User Interfaces have been impacting the games industry (industry track) and which scientific, state-of-the-art ideas and approaches are currently pursued (scientific track). KW - Veranstaltung KW - Künstliche Intelligenz KW - Mensch-Maschine-Kommunikation KW - Computerspiel KW - natural user interfaces KW - artificial intelligence Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-245776 SN - 978-3-945459-36-2 ER - TY - THES A1 - Löffler, Andre T1 - Constrained Graph Layouts: Vertices on the Outer Face and on the Integer Grid T1 - Graphzeichnen unter Nebenbedingungen: Knoten auf der Außenfacette und mit ganzzahligen Koordinaten N2 - Constraining graph layouts - that is, restricting the placement of vertices and the routing of edges to obey certain constraints - is common practice in graph drawing. In this book, we discuss algorithmic results on two different restriction types: placing vertices on the outer face and on the integer grid. For the first type, we look into the outer k-planar and outer k-quasi-planar graphs, as well as giving a linear-time algorithm based on Monadic Second-order Logic to recognize full and closed outer k-planar graphs. For the second type, we consider the problem of transferring a given planar drawing onto the integer grid while preserving the original drawing's topology; we also generalize a variant of Cauchy's rigidity theorem for orthogonal polyhedra of genus 0 to those of arbitrary genus. N2 - Das Einschränken von Zeichnungen von Graphen, sodass diese bestimmte Nebenbedingungen erfüllen - etwa solche, die das Platzieren von Knoten oder den Verlauf von Kanten beeinflussen - sind im Graphzeichnen allgegenwärtig. In dieser Arbeit befassen wir uns mit algorithmischen Resultaten zu zwei speziellen Einschränkungen, nämlich dem Platzieren von Knoten entweder auf der Außenfacette oder auf ganzzahligen Koordinaten. Für die erste Einschränkung untersuchen wir die außen k-planaren und außen k-quasi-planaren Graphen und geben einen auf monadische Prädikatenlogik zweiter Stufe basierenden Algorithmus an, der überprüft, ob ein Graph voll außen k-planar ist. Für die zweite Einschränkung untersuchen wir das Problem, eine gegebene planare Zeichnung eines Graphen auf das ganzzahlige Koordinatengitter zu transportieren, ohne dabei die Topologie der Zeichnung zu verändern; außerdem generalisieren wir eine Variante von Cauchys Starrheitssatz für orthogonale Polyeder von Geschlecht 0 auf solche von beliebigem Geschlecht.
KW - Graphenzeichnen KW - Komplexität KW - Algorithmus KW - Algorithmische Geometrie KW - Kombinatorik KW - Planare Graphen KW - Polyeder KW - Konvexe Zeichnungen Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-215746 SN - 978-3-95826-146-4 SN - 978-3-95826-147-1 N1 - Parallel erschienen als Druckausgabe in Würzburg University Press, ISBN 978-3-95826-146-4, 32,90 EUR PB - Würzburg University Press CY - Würzburg ET - 1. Auflage ER - TY - JOUR A1 - Linsenmann, Thomas A1 - März, Alexander A1 - Dufner, Vera A1 - Stetter, Christian A1 - Weiland, Judith A1 - Westermaier, Thomas T1 - Optimization of radiation settings for angiography using 3D fluoroscopy for imaging of intracranial aneurysms JF - Computer Assisted Surgery N2 - Mobile 3D fluoroscopes have become increasingly available in neurosurgical operating rooms. We recently reported their use for imaging cerebral vascular malformations and aneurysms. This study was conducted to evaluate various radiation settings for the imaging of cerebral aneurysms before and after surgical occlusion. Eighteen patients with cerebral aneurysms with the indication for surgical clipping were included in this prospective analysis. Before surgery, the patients were randomized into one of three different scan protocols (according to the default settings of the 3D fluoroscope): group 1: 110 kV, 80 mA (enhanced cranial mode); group 2: 120 kV, 64 mA (lumbar spine mode); group 3: 120 kV, 25 mA (head/neck settings). Prior to surgery, a rotational fluoroscopy scan (duration 24 s) was performed without contrast agent followed by another scan with 50 ml of intravenous iodine contrast agent. The image files of both scans were transferred to an Apple PowerMac(R) workstation, subtracted and reconstructed using OsiriX(R) MD 10.0 software. The procedure was repeated after clip placement. The image quality regarding preoperative aneurysm configuration and postoperative assessment of aneurysm occlusion and vessel patency was analyzed by two independent reviewers using a six-grade scale. This technique quickly supplies images of adequate quality to depict intracranial aneurysms and distal vessel patency after aneurysm clipping. Regarding these features, a further optimization of our previous protocol seems possible by lowering the voltage and increasing the tube current. For quick intraoperative assessment, image subtraction does not seem necessary; thus, a native scan without contrast agent can be omitted. Further optimization may be possible using a different contrast injection protocol.
KW - 3D fluoroscopy KW - aneurysm KW - fluoroscopy KW - intraoperative imaging Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-259251 VL - 26 IS - 1 ER - TY - RPRT A1 - Rossi, Angelo Pio A1 - Maurelli, Francesco A1 - Unnithan, Vikram A1 - Dreger, Hendrik A1 - Mathewos, Kedus A1 - Pradhan, Nayan A1 - Corbeanu, Dan-Andrei A1 - Pozzobon, Riccardo A1 - Massironi, Matteo A1 - Ferrari, Sabrina A1 - Pernechele, Claudia A1 - Paoletti, Lorenzo A1 - Simioni, Emanuele A1 - Maurizio, Pajola A1 - Santagata, Tommaso A1 - Borrmann, Dorit A1 - Nüchter, Andreas A1 - Bredenbeck, Anton A1 - Zevering, Jasper A1 - Arzberger, Fabian A1 - Reyes Mantilla, Camilo Andrés T1 - DAEDALUS - Descent And Exploration in Deep Autonomy of Lava Underground Structures BT - Open Space Innovation Platform (OSIP) Lunar Caves-System Study N2 - The DAEDALUS mission concept aims at exploring and characterising the entrance and initial part of Lunar lava tubes with a compact, tightly integrated spherical robotic device, with a complementary payload set and autonomous capabilities. The mission concept addresses specifically the identification and characterisation of potential resources for future ESA exploration, the local environment of the subsurface and its geologic and compositional structure. A sphere is ideally suited to protect sensors and scientific equipment in rough, uneven environments. It will house laser scanners, cameras and ancillary payloads. The sphere will be lowered into the skylight and will explore the entrance shaft, associated caverns and conduits. Lidar (light detection and ranging) systems produce 3D models with high spatial accuracy independent of lighting conditions and visible features. Hence, this will be the primary exploration toolset within the sphere. The additional payload that can be accommodated in the robotic sphere consists of camera systems with panoramic lenses and scanners such as multi-wavelength or single-photon scanners. A moving mass inside the sphere will trigger its movements. The tether for lowering the sphere will be used for data communication and powering the equipment during the descent phase. Furthermore, the tether-sphere connector will host a WiFi access point, such that data from the conduit can be transferred to the surface relay station. During the exploration phase, the robot will be disconnected from the cable and will use wireless communication. Emergency autonomy software will ensure that, in case of loss of communication, the robot will continue the nominal mission. T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 21 KW - Lunar Caves KW - Spherical Robot KW - Lunar Exploration KW - Mapping KW - 3D Laser Scanning KW - Mond KW - Daedalus-Projekt KW - Lava Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-227911 SN - 978-3-945459-33-1 SN - 1868-7466 ER - TY - JOUR A1 - Rodrigues, Johannes A1 - Weiß, Martin A1 - Hewig, Johannes A1 - Allen, John J. B. T1 - EPOS: EEG Processing Open-Source Scripts JF - Frontiers in Neuroscience N2 - Background: Since the replication crisis, standardization has become even more important in psychological science and neuroscience. As a result, many methods are being reconsidered, and researchers' degrees of freedom in these methods are being discussed as a potential source of inconsistencies across studies.
New Method: With the aim of addressing these subjectivity issues, we have been working on a tutorial-like EEG (pre-)processing pipeline to achieve an automated method based on the semi-automated analysis proposed by Delorme and Makeig. Results: Two scripts are presented and explained step-by-step to perform basic, informed ERP and frequency-domain analyses, including data export to statistical programs and visual representations of the data. The open-source software EEGLAB in MATLAB is used as the data handling platform, but scripts based on code provided by Mike Cohen (2014) are also included. Comparison with existing methods: This accompanying tutorial-like article explains and shows how the processing steps of our automated pipeline affect the data. It is addressed especially to beginners in EEG analysis, as other (pre-)processing pipelines mostly target rather informed users in specialized areas or cover only parts of a complete procedure. In this context, we compared our pipeline with a selection of existing approaches. Conclusion: The need for standardization and replication is evident, yet it is equally important to control the plausibility of the suggested solution by data exploration. Here, we provide the community with a tool to enhance the understanding and capability of EEG analysis. We aim to contribute to comprehensive and reliable analyses for neuroscientific research. KW - EEG KW - electroencephalography KW - event-related potentials-ERP KW - EEG processing KW - EEG preprocessing KW - EEG frequency band analysis Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-240221 SN - 1662-453X VL - 15 ER - TY - JOUR A1 - Hirth, Matthias A1 - Seufert, Michael A1 - Lange, Stanislav A1 - Meixner, Markus A1 - Tran-Gia, Phuoc T1 - Performance evaluation of hybrid crowdsensing and fixed sensor systems for event detection in urban environments JF - Sensors N2 - Crowdsensing offers a cost-effective way to collect large amounts of environmental sensor data; however, the spatial distribution of crowdsensing sensors can hardly be influenced, as the participants carry the sensors, and, additionally, the quality of the crowdsensed data can vary significantly. Hybrid systems that use mobile users in conjunction with fixed sensors might help to overcome these limitations, as such systems allow assessing the quality of the submitted crowdsensed data and provide sensor values where no crowdsensing data are typically available. In this work, we first used a simulation study to analyze a simple crowdsensing system concerning the detection performance of spatial events to highlight the potential and limitations of a pure crowdsensing system. The results indicate that even if only a small share of inhabitants participate in crowdsensing, events that have locations correlated with the population density can be easily and quickly detected using such a system. In contrast, events with uniformly randomly distributed locations are much harder to detect using a simple crowdsensing-based approach. A second evaluation shows that hybrid systems improve the detection probability and time. Finally, we illustrate how to compute the minimum number of fixed sensors for the given detection time thresholds in our exemplary scenario. KW - crowdsensing KW - event detection KW - detection time simulation KW - performance analysis Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-245245 SN - 1424-8220 VL - 21 IS - 17 ER - TY - JOUR A1 - Scherer, Marc A1 - Fleishman, Sarel J.
A1 - Jones, Patrik R. A1 - Dandekar, Thomas A1 - Bencurova, Elena T1 - Computational Enzyme Engineering Pipelines for Optimized Production of Renewable Chemicals JF - Frontiers in Bioengineering and Biotechnology N2 - To enable a sustainable supply of chemicals, novel biotechnological solutions are required that replace the reliance on fossil resources. One potential solution is to utilize tailored biosynthetic modules for the metabolic conversion of CO2 or organic waste to chemicals and fuel by microorganisms. Currently, it is challenging to commercialize biotechnological processes for renewable chemical biomanufacturing because of a lack of highly active and specific biocatalysts. As experimental methods to engineer biocatalysts are time- and cost-intensive, it is important to establish efficient and reliable computational tools that can speed up the identification or optimization of selective, highly active, and stable enzyme variants for utilization in the biotechnological industry. Here, we review and suggest combinations of effective state-of-the-art software and online tools available for computational enzyme engineering pipelines to optimize metabolic pathways for the biosynthesis of renewable chemicals. Using examples relevant to biotechnology, we explain the underlying principles of enzyme engineering and design and illuminate future directions for automated optimization of biocatalysts for the assembly of synthetic metabolic pathways. KW - computational KW - enzyme KW - engineering KW - design KW - biomanufacturing KW - biofuel KW - microbes KW - metabolism Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-240598 SN - 2296-4185 VL - 9 ER - TY - JOUR A1 - Kammerer, Klaus A1 - Göster, Manuel A1 - Reichert, Manfred A1 - Pryss, Rüdiger T1 - Ambalytics: a scalable and distributed system architecture concept for bibliometric network analyses JF - Future Internet N2 - A deep understanding of a field of research is valuable for academic researchers. In addition to technical knowledge, this includes knowledge about subareas, open research questions, and social communities (networks) of individuals and organizations within a given field. With bibliometric analyses, researchers can acquire quantitatively valuable knowledge about a research area by using bibliographic information on academic publications provided by bibliographic data providers. Bibliometric analyses include the calculation of bibliometric networks to describe affiliations or similarities of bibliometric entities (e.g., authors) and group them into clusters representing subareas or communities. Calculating and visualizing bibliometric networks is a nontrivial and time-consuming data science task that requires highly skilled individuals. In addition to domain knowledge, researchers must often provide statistical knowledge and programming skills or use software tools with limited functionality and usability. In this paper, we present the ambalytics bibliometric platform, which reduces the complexity of bibliometric network analysis and the visualization of results. It accompanies users through the process of bibliometric analysis and eliminates the need for individuals to have programming skills and statistical knowledge, while preserving advanced functionality, such as algorithm parameterization, for experts. As a proof-of-concept, and as an example of bibliometric analysis outcomes, the calculation of research fronts networks based on a hybrid similarity approach is shown.
Being designed to scale, ambalytics makes use of distributed systems concepts and technologies. It is based on the microservice architecture concept and uses the Kubernetes framework for orchestration. This paper presents the initial building block of a comprehensive bibliometric analysis platform called ambalytics, which aims at high usability as well as scalability. KW - system architecture design KW - bibliometric analysis KW - community detection Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-244916 SN - 1999-5903 VL - 13 IS - 8 ER - TY - JOUR A1 - Oberdörfer, Sebastian A1 - Birnstiel, Sandra A1 - Latoschik, Marc Erich A1 - Grafe, Silke T1 - Mutual Benefits: Interdisciplinary Education of Pre-Service Teachers and HCI Students in VR/AR Learning Environment Design JF - Frontiers in Education N2 - The successful development and classroom integration of Virtual (VR) and Augmented Reality (AR) learning environments requires competencies and content knowledge with respect to media didactics and the respective technologies. The paper discusses a pedagogical concept specifically aiming at the interdisciplinary education of pre-service teachers in collaboration with human-computer interaction students. The students' overarching goal is the interdisciplinary realization and integration of VR/AR learning environments in teaching and learning concepts. To assist this approach, we developed a specific tutorial guiding the developmental process. We evaluate and validate the effectiveness of the overall pedagogical concept by analyzing the change in 1) attitudes regarding the use of VR/AR for educational purposes and in competencies and content knowledge regarding 2) media didactics and 3) technology. Our results indicate a significant improvement in the knowledge of media didactics and technology. We further report on four STEM learning environments that have been developed during the seminar. KW - interdisciplinary education KW - virtual reality KW - augmented reality KW - serious games KW - learning environments KW - teacher education Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-241612 SN - 2504-284X VL - 6 ER - TY - JOUR A1 - Osmanoglu, Özge A1 - Khaled AlSeiari, Mariam A1 - AlKhoori, Hasa Abduljaleel A1 - Shams, Shabana A1 - Bencurova, Elena A1 - Dandekar, Thomas A1 - Naseem, Muhammad T1 - Topological Analysis of the Carbon-Concentrating CETCH Cycle and a Photorespiratory Bypass Reveals Boosted CO\(_2\)-Sequestration by Plants JF - Frontiers in Bioengineering and Biotechnology N2 - Synthetically designed alternative photorespiratory pathways increase the biomass of tobacco and rice plants. Likewise, some in planta–tested synthetic carbon-concentrating cycles (CCCs) hold promise to increase plant biomass while diminishing atmospheric carbon dioxide burden. Taking these individual contributions into account, we hypothesize that the integration of bypasses and CCCs will further increase plant productivity. To test this in silico, we reconstructed a metabolic model by integrating photorespiration and photosynthesis with the synthetically designed alternative pathway 3 (AP3) enzymes and transporters. We calculated fluxes of the native plant system and those of AP3 combined with the inhibition of the glycolate/glycerate transporter by using the YANAsquare package. The activity values corresponding to each enzyme in photosynthesis, photorespiration, and the synthetically designed alternative pathways were estimated.
Next, we modeled the effect of the crotonyl-CoA/ethylmalonyl-CoA/hydroxybutyryl-CoA cycle (CETCH), which is a set of natural and synthetically designed enzymes that fix CO₂ manyfold more efficiently than the native Calvin–Benson–Bassham (CBB) cycle. We compared estimated fluxes across various pathways in the native model and under an introduced CETCH cycle. Moreover, we combined CETCH and AP3-w/plgg1RNAi, and calculated the fluxes. We anticipate higher carbon dioxide–harvesting potential in plants with an AP3 bypass and CETCH–AP3 combination. We discuss the in vivo implementation of these strategies for the improvement of C3 plants and in natural high carbon harvesters. KW - CO2-sequestration KW - photorespiration KW - elementary modes KW - synthetic pathways KW - carboxylation KW - metabolic modeling KW - CETCH cycle Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-249260 SN - 2296-4185 VL - 9 ER - TY - CHAP A1 - Davies, Richard A1 - Dewell, Nathan A1 - Harvey, Carlo T1 - A framework for interactive, autonomous and semantic dialogue generation in games T2 - Proceedings of the 1st Games Technology Summit N2 - Immersive virtual environments provide users with the opportunity to escape from the real world, but scripted dialogues can disrupt presence within the world into which the user is trying to escape. Both Non-Playable Character (NPC) to Player and NPC to NPC dialogue can be non-natural, and the reliance on responding with pre-defined dialogue does not always meet the player's emotional expectations or provide responses appropriate to the given context or world states. This paper investigates the application of Artificial Intelligence (AI) and Natural Language Processing to generate dynamic human-like responses within a themed virtual world. Each theme has been analysed against human-generated responses for the same seed and demonstrates invariance of rating across a range of model sizes, but shows an effect of theme and of the size of the corpus used for fine-tuning the context for the game world. KW - natural language processing KW - interactive authoring system KW - semantic understanding KW - artificial intelligence Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-246023 ER - TY - CHAP A1 - Sanusi, Khaleel Asyraaf Mat A1 - Klemke, Roland T1 - Immersive Multimodal Environments for Psychomotor Skills Training T2 - Proceedings of the 1st Games Technology Summit N2 - Modern immersive multimodal technologies enable learners to become fully immersed in various learning situations in a way that feels like experiencing an authentic learning environment. These environments also allow the collection of multimodal data, which can be used with artificial intelligence to further improve the immersion and learning outcomes. The use of artificial intelligence has been widely explored for the interpretation of multimodal data collected from multiple sensors, thus giving insights to support learners' performance by providing personalised feedback. In this paper, we present a conceptual approach for creating immersive learning environments, integrated with a multi-sensor setup, to help learners improve their psychomotor skills in a remote setting.
KW - immersive learning technologies KW - multimodal learning KW - sensor devices KW - artificial intelligence KW - psychomotor training Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-246016 ER - TY - JOUR A1 - Kern, Florian A1 - Kullmann, Peter A1 - Ganal, Elisabeth A1 - Korwisi, Kristof A1 - Stingl, René A1 - Niebling, Florian A1 - Latoschik, Marc Erich T1 - Off-The-Shelf Stylus: Using XR Devices for Handwriting and Sketching on Physically Aligned Virtual Surfaces JF - Frontiers in Virtual Reality N2 - This article introduces the Off-The-Shelf Stylus (OTSS), a framework for 2D interaction (in 3D) as well as for handwriting and sketching with digital pen, ink, and paper on physically aligned virtual surfaces in Virtual, Augmented, and Mixed Reality (VR, AR, MR: XR for short). OTSS supports self-made XR styluses based on consumer-grade six-degrees-of-freedom XR controllers and commercially available styluses. The framework provides separate modules for three basic but vital features: 1) The stylus module provides stylus construction and calibration features. 2) The surface module provides surface calibration and visual feedback features for virtual-physical 2D surface alignment using our so-called 3ViSuAl procedure, and surface interaction features. 3) The evaluation suite provides a comprehensive test bed combining technical measurements for precision, accuracy, and latency with extensive usability evaluations including handwriting and sketching tasks based on established visuomotor, graphomotor, and handwriting research. The framework's development is accompanied by an extensive open-source reference implementation targeting the Unity game engine using an Oculus Rift S headset and Oculus Touch controllers. The development compares three low-cost and low-tech options to equip controllers with a tip and includes a web browser-based surface providing support for interacting, handwriting, and sketching. The evaluation of the reference implementation based on the OTSS framework identified an average stylus precision of 0.98 mm (SD = 0.54 mm) and an average surface accuracy of 0.60 mm (SD = 0.32 mm) in a seated VR environment. The time for displaying the stylus movement as digital ink on the web browser surface in VR was 79.40 ms on average (SD = 23.26 ms), including the physical controller's motion-to-photon latency visualized by its virtual representation (M = 42.57 ms, SD = 15.70 ms). The usability evaluation (N = 10) revealed a low task load, high usability, and high user experience. Participants successfully reproduced given shapes and created legible handwriting, indicating that OTSS and its reference implementation are ready for everyday use. We provide source code access to our implementation, including stylus and surface calibration and surface interaction features, making it easy to reuse, extend, adapt and/or replicate previous results (https://go.uniwue.de/hci-otss).
KW - virtual reality KW - augmented reality KW - handwriting KW - sketching KW - stylus KW - user interaction KW - usability evaluation KW - passive haptic feedback Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-260219 VL - 2 ER - TY - JOUR A1 - Bartl, Andrea A1 - Wenninger, Stephan A1 - Wolf, Erik A1 - Botsch, Mario A1 - Latoschik, Marc Erich T1 - Affordable but not cheap: a case study of the effects of two 3D-reconstruction methods of virtual humans JF - Frontiers in Virtual Reality N2 - Realistic and lifelike 3D-reconstruction of virtual humans has various exciting and important use cases. Our and others' appearances have notable effects on ourselves and our interaction partners in virtual environments, e.g., on acceptance, preference, trust, believability, behavior (the Proteus effect), and more. Today, multiple approaches for the 3D-reconstruction of virtual humans exist. They vary significantly in terms of the degree of achievable realism, the technical complexities, and finally, the overall reconstruction costs involved. This article compares two 3D-reconstruction approaches with very different hardware requirements. The high-cost solution uses a typical, complex and elaborate camera rig consisting of 94 digital single-lens reflex (DSLR) cameras. The recently developed low-cost solution uses a smartphone camera to create videos that capture multiple views of a person. Both methods use photogrammetric reconstruction and template fitting with the same template model and differ in their adaptation to the method-specific input material. Each method generates high-quality virtual humans ready to be processed, animated, and rendered by standard XR simulation and game engines such as Unreal or Unity. We compare the results of the two 3D-reconstruction methods in an immersive virtual environment against each other in a user study. Our results indicate that the virtual humans from the low-cost approach are perceived similarly to those from the high-cost approach regarding the perceived similarity to the original, human-likeness, beauty, and uncanniness, despite significant differences in the objectively measured quality. The perceived feeling of change of one's own body was higher for the low-cost virtual humans. Quality differences were perceived more strongly for one's own body than for other virtual humans. KW - virtual humans KW - 3D-reconstruction methods KW - avatars KW - agents KW - user study Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-260492 VL - 2 ER - TY - JOUR A1 - Wienrich, Carolin A1 - Latoschik, Marc Erich T1 - eXtended Artificial Intelligence: New Prospects of Human-AI Interaction Research JF - Frontiers in Virtual Reality N2 - Artificial Intelligence (AI) covers a broad spectrum of computational problems and use cases. Many of those raise profound and sometimes intricate questions of how humans interact or should interact with AIs. Moreover, many users or future users have abstract ideas of what AI is, significantly depending on the specific embodiment of AI applications. Human-centered design approaches would suggest evaluating the impact of different embodiments on human perception of and interaction with AI, an approach that is difficult to realize due to the sheer complexity of application fields and embodiments in reality. However, here XR opens new possibilities to research human-AI interactions.
The article's contribution is twofold: First, it provides a theoretical treatment and model of human-AI interaction based on an XR-AI continuum as a framework for, and a perspective on, different approaches of XR-AI combinations. It motivates XR-AI combinations as a method to learn about the effects of prospective human-AI interfaces and shows why the combination of XR and AI fruitfully contributes to a valid and systematic investigation of human-AI interactions and interfaces. Second, the article provides two exemplary experiments investigating the aforementioned approach for two distinct AI systems. The first experiment reveals an interesting gender effect in human-robot interaction, while the second experiment reveals an Eliza effect of a recommender system. Here the article introduces two paradigmatic implementations of the proposed XR testbed for human-AI interactions and interfaces and shows how a valid and systematic investigation can be conducted. In sum, the article opens new perspectives on how XR benefits human-centered AI design and development. KW - human-artificial intelligence interface KW - human-artificial intelligence interaction KW - XR-artificial intelligence continuum KW - XR-artificial intelligence combination KW - research methods KW - human-centered KW - human-robot KW - recommender system Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-260296 VL - 2 ER - TY - JOUR A1 - Wienrich, Carolin A1 - Carolus, Astrid T1 - Development of an Instrument to Measure Conceptualizations and Competencies About Conversational Agents on the Example of Smart Speakers JF - Frontiers in Computer Science N2 - The concept of digital literacy has been introduced as a new cultural technique, which is regarded as essential for successful participation in a (future) digitized world. Regarding the increasing importance of AI, literacy concepts need to be extended to account for AI-related specifics. The easy handling of these systems results in increased usage, which contrasts with limited conceptualizations (e.g., imagination of future importance) and competencies (e.g., knowledge about functional principles). In reference to voice-based conversational agents as a concrete application of AI, the present paper aims at the development of an instrument to assess conceptualizations of and competencies about conversational agents. In a first step, a theoretical framework of “AI literacy” is transferred to the context of conversational agent literacy. Second, the “conversational agent literacy scale” (short CALS) is developed, constituting the first attempt to measure interindividual differences in the “(il)literate” usage of conversational agents. 29 items were derived, which were then answered by 170 participants. An exploratory factor analysis identified five factors leading to five subscales to assess CAL: storage and transfer of the smart speaker’s data input; smart speaker’s functional principles; smart speaker’s intelligent functions, learning abilities; smart speaker’s reach and potential; smart speaker’s technological (surrounding) infrastructure. Preliminary insights into construct validity and reliability of CALS showed satisfactory results. Third, using the newly developed instrument, a student sample’s CAL was assessed, revealing intermediate values. Remarkably, owning a smart speaker did not lead to higher CAL scores, confirming our basic assumption that usage of systems does not guarantee enlightened conceptualizations and competencies.
In sum, the paper contributes first insights into the operationalization and understanding of CAL as a specific subdomain of AI-related competencies. KW - artificial intelligence literacy KW - artificial intelligence education KW - voice-based artificial intelligence KW - conversational agents KW - measurement Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-260198 VL - 3 ER - TY - JOUR A1 - Wienrich, Carolin A1 - Reitelbach, Clemens A1 - Carolus, Astrid T1 - The Trustworthiness of Voice Assistants in the Context of Healthcare: Investigating the Effect of Perceived Expertise on the Trustworthiness of Voice Assistants, Providers, Data Receivers, and Automatic Speech Recognition JF - Frontiers in Computer Science N2 - As an emerging market for voice assistants (VA), the healthcare sector imposes increasing requirements on the users’ trust in the technological system. Encouraging patients to reveal sensitive data requires them to trust the technological counterpart. In an experimental laboratory study, participants were presented with a VA, which was introduced as either a “specialist” or a “generalist” tool for sexual health. In both conditions, the VA asked the exact same health-related questions. Afterwards, participants assessed the trustworthiness of the tool and further source layers (provider, platform provider, automatic speech recognition in general, data receiver) and reported individual characteristics (disposition to trust and disclose sexual information). Results revealed that perceiving the VA as a specialist resulted in higher perceived trustworthiness of the VA as well as of the provider, the platform provider, and automatic speech recognition in general. Furthermore, the provider’s trustworthiness affected the perceived trustworthiness of the VA. Presenting both a theoretical line of reasoning and empirical data, the study points out the importance of the users’ perspective on the assistant. In sum, this paper argues for further analyses of trustworthiness in voice-based systems and their effects on usage behavior, as well as the implications for the responsible design of future technology. KW - voice assistant KW - trustworthiness KW - trust KW - anamnesis tool KW - expertise framing Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-260209 VL - 3 ER - TY - JOUR A1 - Wienrich, Carolin A1 - Komma, Philipp A1 - Vogt, Stephanie A1 - Latoschik, Marc E. T1 - Spatial Presence in Mixed Realities – Considerations About the Concept, Measures, Design, and Experiments JF - Frontiers in Virtual Reality N2 - Plenty of theories, models, measures, and investigations target the understanding of virtual presence, i.e., the sense of presence in immersive Virtual Reality (VR). Other varieties of the so-called eXtended Realities (XR), e.g., Augmented and Mixed Reality (AR and MR), incorporate immersive features to a lesser degree and continuously combine spatial cues from the real physical space and the simulated virtual space. This blurred separation calls into question whether the accumulated knowledge about virtual presence also applies to presence occurring in other varieties of XR, and to the corresponding outcomes. The present work bridges this gap by analyzing the construct of presence in mixed realities (MR). To achieve this, the following presents (1) a short review of definitions, dimensions, and measurements of presence in VR, and (2) the state-of-the-art views on MR.
Additionally, we (3) derived a working definition of MR, extending the Milgram continuum. This definition is based on entities reaching from real to virtual manifestations at one time point. Entities possess different degrees of referential power, determining the selection of the frame of reference. Furthermore, we (4) identified three research desiderata, including research questions about the frame of reference, the corresponding dimension of transportation, and the dimension of realism in MR. In particular, the relationships between the main aspects of virtual presence in immersive VR, i.e., the place-illusion and the plausibility-illusion, and the referential power of MR entities are discussed regarding the concept, measures, and design of presence in MR. Finally, (5) we suggested an experimental setup to reveal the research heuristic behind experiments investigating presence in MR. The present work contributes to the theories, the meaning of, and the approaches to simulate and measure presence in MR. We hypothesize that research about essential underlying factors determining user experience (UX) in MR simulations and experiences is still in its infancy and hope this article provides an encouraging starting point to tackle related questions. KW - mixed reality KW - virtual-reality-continuum KW - spatial presence KW - place-illusion KW - plausibility-illusion KW - transportation KW - realism Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-260328 VL - 2 ER - TY - JOUR A1 - Glémarec, Yann A1 - Lugrin, Jean-Luc A1 - Bosser, Anne-Gwenn A1 - Collins Jackson, Aryana A1 - Buche, Cédric A1 - Latoschik, Marc Erich T1 - Indifferent or Enthusiastic? Virtual Audiences Animation and Perception in Virtual Reality JF - Frontiers in Virtual Reality N2 - In this paper, we present a virtual audience simulation system for Virtual Reality (VR). The system implements an audience perception model controlling the nonverbal behaviors of virtual spectators, such as facial expressions or postures. Groups of virtual spectators are animated by a set of nonverbal behavior rules representing a particular audience attitude (e.g., indifferent or enthusiastic). Each rule specifies a nonverbal behavior category: posture, head movement, facial expression and gaze direction, as well as three parameters: type, frequency and proportion. In a first user-study, we asked participants to pretend to be a speaker in VR and then create sets of nonverbal behaviour parameters to simulate different attitudes. Participants manipulated the nonverbal behaviours of a single virtual spectator to match specific levels of engagement and opinion toward them. In a second user-study, we used these parameters to design different types of virtual audiences with our nonverbal behavior rules and evaluated their perceptions. Our results demonstrate our system’s ability to create virtual audiences with three different perceived attitudes: indifferent, critical, and enthusiastic. The analysis of the results also led to a set of recommendations and guidelines regarding attitudes and expressions for future design of audiences for VR therapy and training applications. KW - virtual reality KW - perception KW - nonverbal behavior KW - interaction KW - virtual agent KW - virtual audience Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-259328 VL - 2 ER - TY - JOUR A1 - Hein, Rebecca M. A1 - Wienrich, Carolin A1 - Latoschik, Marc E.
T1 - A systematic review of foreign language learning with immersive technologies (2001-2020) JF - AIMS Electronics and Electrical Engineering N2 - This study provides a systematic literature review of research (2001–2020) in the field of teaching and learning a foreign language and intercultural learning using immersive technologies. Based on 2507 sources, 54 articles were selected according to predefined selection criteria. The review is aimed at providing information about which immersive interventions are being used for foreign language learning and teaching and where potential research gaps exist. The papers were analyzed and coded according to the following categories: (1) investigation form and education level, (2) degree of immersion and technology used, (3) predictors, and (4) criteria. The review identified key research findings relating to the use of immersive technologies for learning and teaching a foreign language and intercultural learning at cognitive, affective, and conative levels. The findings revealed research gaps regarding teachers as a target group and virtual reality (VR) as a fully immersive intervention form. Furthermore, the studies reviewed rarely examined behavior or implicit measurements related to inter- and transcultural learning and teaching; intercultural and transcultural learning and teaching in particular are underrepresented subjects of investigation. Finally, concrete suggestions for future research are given. The systematic review contributes to the challenge of interdisciplinary cooperation between pedagogy, foreign language didactics, and Human-Computer Interaction to achieve innovative teaching-learning formats and a successful digital transformation. KW - foreign language learning and teaching KW - intercultural learning and teaching KW - immersive technologies KW - education KW - human-computer interaction KW - systematic literature review Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-268811 VL - 5 IS - 2 ER - TY - JOUR A1 - Dumic, Emil A1 - Bjelopera, Anamaria A1 - Nüchter, Andreas T1 - Dynamic point cloud compression based on projections, surface reconstruction and video compression JF - Sensors N2 - In this paper, we present a new dynamic point cloud compression method based on different projection types and bit depths, combined with a surface reconstruction algorithm and video compression of the obtained geometry and texture maps. Texture maps are compressed after creating Voronoi diagrams. The video codecs used are specific to geometry (FFV1) and texture (H.265/HEVC). Decompressed point clouds are reconstructed using a Poisson surface reconstruction algorithm. Comparison with the original point clouds was performed using point-to-point and point-to-plane measures. Comprehensive experiments show better performance for some projection maps: cylindrical, Miller and Mercator projections.
KW - 3DTK toolkit KW - map projections KW - point cloud compression KW - point-to-point measure KW - point-to-plane measure KW - Poisson surface reconstruction KW - octree Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-252231 SN - 1424-8220 VL - 22 IS - 1 ER - TY - JOUR A1 - Madeira, Octavia A1 - Gromer, Daniel A1 - Latoschik, Marc Erich A1 - Pauli, Paul T1 - Effects of Acrophobic Fear and Trait Anxiety on Human Behavior in a Virtual Elevated Plus-Maze JF - Frontiers in Virtual Reality N2 - The Elevated Plus-Maze (EPM) is a well-established apparatus to measure anxiety in rodents, i.e., animals exhibiting an increased relative time spent in the closed vs. the open arms are considered anxious. To examine whether such anxiety-modulated behaviors are conserved in humans, we re-translated this paradigm to a human setting using virtual reality in a Cave Automatic Virtual Environment (CAVE) system. In two studies, we examined whether the EPM exploration behavior of humans is modulated by their trait anxiety and also assessed the individuals’ levels of acrophobia (fear of heights), claustrophobia (fear of confined spaces), sensation seeking, and the reported anxiety when on the maze. First, we constructed an exact virtual copy of the animal EPM adjusted to human proportions. In analogy to animal EPM studies, participants (N = 30) freely explored the EPM for 5 min. In the second study (N = 61), we redesigned the EPM to make it more human-adapted and to differentiate influences of trait anxiety and acrophobia by introducing various floor textures and lowering the walls of the closed arms to the height of standard handrails. In the first experiment, hierarchical regression analyses of exploration behavior revealed the expected association between open arm avoidance and trait anxiety, and an even stronger association with acrophobic fear. In the second study, results revealed that acrophobia was associated with avoidance of open arms with mesh-floor texture, whereas for trait anxiety, claustrophobia, and sensation seeking, no effect was detected. Also, subjects’ fear ratings were moderated by all psychometric measures but trait anxiety. In sum, both studies consistently indicate that humans show no general open arm avoidance analogous to rodents and that human EPM behavior is modulated most strongly by acrophobic fear, whereas trait anxiety plays a subordinate role. Thus, we conclude that the criteria for cross-species validity are met insufficiently in this case. Despite the exploratory nature, our studies provide in-depth insights into human exploration behavior on the virtual EPM. KW - elevated plus-maze KW - EPM KW - anxiety KW - virtual reality KW - translational neuroscience KW - acrophobia KW - trait anxiety Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-258709 VL - 2 ER - TY - JOUR A1 - Wienrich, Carolin A1 - Döllinger, Nina A1 - Hein, Rebecca T1 - Behavioral Framework of Immersive Technologies (BehaveFIT): How and why virtual reality can support behavioral change processes JF - Frontiers in Virtual Reality N2 - The design and evaluation of assisting technologies to support behavior change processes have become an essential topic within the field of human-computer interaction research in general and the field of immersive intervention technologies in particular. The mechanisms and success of behavior change techniques and interventions are broadly investigated in the field of psychology.
However, it is not always easy to adapt these psychological findings to the context of immersive technologies. The lack of theoretical foundation also leads to a lack of explanation as to why and how immersive interventions support behavior change processes. The Behavioral Framework for Immersive Technologies (BehaveFIT) addresses this lack by 1) presenting an intelligible categorization and condensation of psychological barriers and immersive features, by 2) suggesting a mapping that shows why and how immersive technologies can help to overcome barriers, and finally by 3) proposing a generic prediction path that enables a structured, theory-based approach to the development and evaluation of immersive interventions. These three steps explain how BehaveFIT can be used, and include guiding questions for each step. Further, two use cases illustrate the usage of BehaveFIT. Thus, the present paper contributes to guidance for immersive intervention design and evaluation, showing how immersive interventions can support behavior change processes and explaining and predicting 'why' and 'how' immersive interventions can bridge the intention-behavior gap. KW - immersive technologies KW - behavior change KW - intervention design KW - intervention evaluation KW - framework KW - virtual reality KW - intention-behavior-gap KW - human-computer interaction Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-258796 VL - 2 ER - TY - JOUR A1 - Kraft, Robin A1 - Reichert, Manfred A1 - Pryss, Rüdiger T1 - Towards the interpretation of sound measurements from smartphones collected with mobile crowdsensing in the healthcare domain: an experiment with Android devices JF - Sensors N2 - The ubiquity of mobile devices fosters the combined use of ecological momentary assessments (EMA) and mobile crowdsensing (MCS) in the field of healthcare. This combination not only allows researchers to collect ecologically valid data, but also to use smartphone sensors to capture the context in which these data are collected. The TrackYourTinnitus (TYT) platform uses EMA to track users' individual subjective tinnitus perception and MCS to capture an objective environmental sound level while the EMA questionnaire is filled in. However, the sound level data cannot be compared directly across the different smartphones used by TYT users, since uncalibrated raw values are stored. This work describes an approach towards making these values comparable. In the described setting, the evaluation of sensor measurements from different smartphone users becomes increasingly prevalent. Therefore, the shown approach can also be considered a more general solution, as it not only shows how it helped to interpret TYT sound level data, but may also stimulate other researchers, especially those who need to interpret sensor data in a similar setting. Altogether, the approach will show that measuring sound levels with mobile devices is possible in healthcare scenarios, but there are many challenges to ensuring that the measured values are interpretable. KW - mHealth KW - crowdsensing KW - tinnitus KW - noise measurement KW - environmental sound Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-252246 SN - 1424-8220 VL - 22 IS - 1 ER - TY - JOUR A1 - Ankenbrand, Markus J. A1 - Shainberg, Liliia A1 - Hock, Michael A1 - Lohr, David A1 - Schreiber, Laura M.
T1 - Sensitivity analysis for interpretation of machine learning based segmentation models in cardiac MRI JF - BMC Medical Imaging N2 - Background: Image segmentation is a common task in medical imaging, e.g., for volumetry analysis in cardiac MRI. Artificial neural networks are used to automate this task with performance similar to manual operators. However, this performance is only achieved in the narrow tasks networks are trained on. Performance drops dramatically when data characteristics differ from the training set properties. Moreover, neural networks are commonly considered black boxes, because it is hard to understand how they make decisions and why they fail. Therefore, it is also hard to predict whether they will generalize and work well with new data. Here, we present a generic method for segmentation model interpretation. Sensitivity analysis is an approach where model input is modified in a controlled manner and the effect of these modifications on the model output is evaluated. This method yields insights into the sensitivity of the model to these alterations and therefore to the importance of certain features for segmentation performance. Results: We present an open-source Python library (misas) that facilitates the use of sensitivity analysis with arbitrary data and models. We show that this method is a suitable approach to answer practical questions regarding the use and functionality of segmentation models. We demonstrate this in two case studies on cardiac magnetic resonance imaging. The first case study explores the suitability of a published network for use on a public dataset the network has not been trained on. The second case study demonstrates how sensitivity analysis can be used to evaluate the robustness of a newly trained model. Conclusions: Sensitivity analysis is a useful tool for deep learning developers as well as users such as clinicians. It extends their toolbox, enabling and improving interpretability of segmentation models. Enhancing our understanding of neural networks through sensitivity analysis also assists in decision making. Although demonstrated only on cardiac magnetic resonance images, this approach and software are much more broadly applicable. KW - deep learning KW - neural networks KW - cardiac magnetic resonance KW - sensitivity analysis KW - transformations KW - augmentation KW - segmentation Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-259169 VL - 21 IS - 1 ER - TY - JOUR A1 - Oberdörfer, Sebastian A1 - Heidrich, David A1 - Birnstiel, Sandra A1 - Latoschik, Marc Erich T1 - Enchanted by Your Surrounding? Measuring the Effects of Immersion and Design of Virtual Environments on Decision-Making JF - Frontiers in Virtual Reality N2 - Impaired decision-making leads to the inability to distinguish between advantageous and disadvantageous choices. The impairment of a person’s decision-making is a common goal of gambling games. Given the recent trend of gambling using immersive Virtual Reality, it is crucial to investigate the effects of both immersion and the virtual environment (VE) on decision-making. In a novel user study, we measured decision-making using three virtual versions of the Iowa Gambling Task (IGT). The versions differed with regard to the degree of immersion and design of the virtual environment. Since emotions affect decision-making, we further measured participants’ positive and negative affect. A higher visual angle on a stimulus leads to an increased emotional response.
Thus, we kept the visual angle on the Iowa Gambling Task constant across our conditions. Our results revealed no significant impact of immersion or the VE on the IGT. We further found no significant difference between the conditions with regard to positive and negative affect. This suggests that neither the medium used nor the design of the VE causes an impairment of decision-making. However, in combination with a recent study, we provide first evidence that a higher visual angle on the IGT leads to an effect of impairment. KW - virtual reality KW - virtual environments KW - immersion KW - decision-making KW - Iowa Gambling Task Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-260101 VL - 2 ER - TY - JOUR A1 - Breves, Priska A1 - Dodel, Nicola T1 - The influence of cybersickness and the media devices’ mobility on the persuasive effects of 360° commercials JF - Multimedia Tools and Applications N2 - With the rise of immersive media, advertisers have started to use 360° commercials to engage and persuade consumers. Two experiments were conducted to address research gaps and to validate the positive impact of 360° commercials in realistic settings. The first study (N = 62) compared the effects of 360° commercials using either a mobile cardboard head-mounted display (HMD) or a laptop. This experiment was conducted in the participants’ living rooms and incorporated individual feelings of cybersickness as a moderator. The participants who experienced the 360° commercial with the HMD reported higher spatial presence and more positive product evaluations, but their purchase intentions were only increased when their reported cybersickness was low. The second experiment (N = 197) was conducted online and analyzed the impact of 360° commercials that were experienced with mobile (smartphone/tablet) or static (laptop/desktop) devices instead of HMDs. The positive effects of omnidirectional videos were stronger when participants used mobile devices. KW - virtual reality KW - immersive advertising KW - spatial presence KW - cybersickness KW - advertising effectiveness Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-269194 SN - 1573-7721 VL - 80 IS - 18 ER - TY - JOUR A1 - Steininger, Michael A1 - Kobs, Konstantin A1 - Davidson, Padraig A1 - Krause, Anna A1 - Hotho, Andreas T1 - Density-based weighting for imbalanced regression JF - Machine Learning N2 - In many real-world settings, imbalanced data impedes the model performance of learning algorithms, such as neural networks, mostly for rare cases. This is especially problematic for tasks focusing on these rare occurrences. For example, when estimating precipitation, extreme rainfall events are scarce but important considering their potential consequences. While there are numerous well-studied solutions for classification settings, most of them cannot be applied to regression easily. Of the few solutions for regression tasks, barely any have explored cost-sensitive learning, which is known to have advantages compared to sampling-based methods in classification tasks. In this work, we propose a sample weighting approach for imbalanced regression datasets called DenseWeight and a cost-sensitive learning approach for neural network regression with imbalanced data called DenseLoss based on our weighting scheme. DenseWeight weights data points according to the rarity of their target values, determined through kernel density estimation (KDE).
DenseLoss adjusts each data point’s influence on the loss according to DenseWeight, giving rare data points more influence on model training compared to common data points. We show on multiple differently distributed datasets that DenseLoss significantly improves model performance for rare data points through its density-based weighting scheme. Additionally, we compare DenseLoss to the state-of-the-art method SMOGN, finding that our method mostly yields better performance. Our approach provides more control over model training as it enables us to actively decide on the trade-off between focusing on common or rare cases through a single hyperparameter, allowing the training of better models for rare data points. KW - supervised learning KW - imbalanced regression KW - cost-sensitive learning KW - sample weighting KW - kernel density estimation Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-269177 SN - 1573-0565 VL - 110 IS - 8 ER - TY - JOUR A1 - Holfelder, Marc A1 - Mulansky, Lena A1 - Schlee, Winfried A1 - Baumeister, Harald A1 - Schobel, Johannes A1 - Greger, Helmut A1 - Hoff, Andreas A1 - Pryss, Rüdiger T1 - Medical device regulation efforts for mHealth apps during the COVID-19 pandemic — an experience report of Corona Check and Corona Health JF - J — Multidisciplinary Scientific Journal N2 - Within the healthcare environment, mobile health (mHealth) applications (apps) are becoming more and more important. The number of new mHealth apps has risen steadily in recent years. The COVID-19 pandemic in particular has led to an enormous number of app releases. In most countries, mHealth applications have to be compliant with several regulatory aspects to be declared a “medical app”. However, the latest applicable medical device regulation (MDR) does not provide more details on the requirements for mHealth applications. When developing a medical app, it is essential that all contributors in an interdisciplinary team — especially software engineers — are aware of the specific regulatory requirements beforehand. The development process, however, should not be stalled due to integration of the MDR. Therefore, a development framework that includes these aspects is required to facilitate a reliable and quick development process. The paper at hand introduces the creation of such a framework on the basis of the Corona Health and Corona Check apps. The relevant regulatory guidelines are listed and summarized as guidance for medical app development during the pandemic and beyond. In particular, the important stages and the challenges that emerged during the entire development process are highlighted. KW - mHealth KW - mobile application KW - MDR KW - medical device regulation KW - medical device software Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-285434 SN - 2571-8800 VL - 4 IS - 2 SP - 206 EP - 222 ER - TY - JOUR A1 - Koopmann, Tobias A1 - Stubbemann, Maximilian A1 - Kapa, Matthias A1 - Paris, Michael A1 - Buenstorf, Guido A1 - Hanika, Tom A1 - Hotho, Andreas A1 - Jäschke, Robert A1 - Stumme, Gerd T1 - Proximity dimensions and the emergence of collaboration: a HypTrails study on German AI research JF - Scientometrics N2 - Creation and exchange of knowledge depend on collaboration. Recent work has suggested that the emergence of collaboration frequently relies on geographic proximity. However, being co-located tends to be associated with other dimensions of proximity, such as social ties or a shared organizational environment.
To account for such factors, multiple dimensions of proximity have been proposed, including cognitive, institutional, organizational, social and geographical proximity. Since they strongly interrelate, disentangling these dimensions and their respective impact on collaboration is challenging. To address this issue, we propose various methods for measuring different dimensions of proximity. We then present an approach to compare and rank them with respect to the extent to which they indicate co-publications and co-inventions. We adapt the HypTrails approach, which was originally developed to explain human navigation, to co-author and co-inventor graphs. We evaluate this approach on a subset of the German research community, specifically academic authors and inventors active in research on artificial intelligence (AI). We find that social proximity and cognitive proximity are more important for the emergence of collaboration than geographic proximity. KW - collaboration KW - dimensions of proximity KW - co-authorships KW - co-inventorships KW - embedding techniques Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-269831 SN - 1588-2861 VL - 126 IS - 12 ER - TY - JOUR A1 - Schlör, Daniel A1 - Ring, Markus A1 - Hotho, Andreas T1 - iNALU: Improved Neural Arithmetic Logic Unit JF - Frontiers in Artificial Intelligence N2 - Neural networks have to capture mathematical relationships in order to learn various tasks. They approximate these relations implicitly and therefore often do not generalize well. The recently proposed Neural Arithmetic Logic Unit (NALU) is a novel neural architecture which is able to explicitly represent mathematical relationships within the units of the network in order to learn operations such as summation, subtraction or multiplication. Although NALUs have been shown to perform well on various downstream tasks, an in-depth analysis reveals practical shortcomings by design, such as the inability to multiply or divide negative input values or training stability issues for deeper networks. We address these issues and propose an improved model architecture. We evaluate our model empirically in various settings, from learning basic arithmetic operations to more complex functions. Our experiments indicate that our model solves the stability issues and outperforms the original NALU model in terms of arithmetic precision and convergence. KW - neural networks KW - machine learning KW - arithmetic calculations KW - neural architecture KW - experimental evaluation Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-212301 SN - 2624-8212 VL - 3 ER - TY - JOUR A1 - Li, Ningbo A1 - Guan, Lianwu A1 - Gao, Yanbin A1 - Du, Shitong A1 - Wu, Menghao A1 - Guang, Xingxing A1 - Cong, Xiaodan T1 - Indoor and outdoor low-cost seamless integrated navigation system based on the integration of INS/GNSS/LIDAR system JF - Remote Sensing N2 - The Global Navigation Satellite System (GNSS) provides accurate positioning data for vehicular navigation in open outdoor environments. In an indoor environment, Light Detection and Ranging (LIDAR) Simultaneous Localization and Mapping (SLAM) establishes a two-dimensional map and provides positioning data. However, LIDAR can only provide relative positioning data, and it cannot directly provide the latitude and longitude of the current position.
As a consequence, GNSS/Inertial Navigation System (INS) integrated navigation could be employed outdoors, while INS/LIDAR integrated navigation is used indoors; the corresponding switching navigation keeps indoor and outdoor positioning consistent. In addition, when the vehicle enters the garage, the GNSS signal is blurred for a while and then disappears. Ambiguous GNSS satellite signals will lead to continuous distortion or an overall drift of the positioning trajectory in the indoor condition. Therefore, an INS/LIDAR seamless integrated navigation algorithm and a switching algorithm based on the vehicle navigation system are designed. According to the experimental data, the positioning accuracy of the INS/LIDAR navigation algorithm in the simulated environmental experiment is 50% higher than that of the Dead Reckoning (DR) algorithm. Besides, the switching algorithm developed on the basis of the INS/LIDAR integrated navigation algorithm achieves an 80% success rate in navigation mode switching. KW - vehicular navigation KW - GNSS/INS integrated navigation KW - INS/LIDAR integrated navigation KW - switching navigation Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-216229 SN - 2072-4292 VL - 12 IS - 19 ER - TY - JOUR A1 - Kraft, Robin A1 - Birk, Ferdinand A1 - Reichert, Manfred A1 - Deshpande, Aniruddha A1 - Schlee, Winfried A1 - Langguth, Berthold A1 - Baumeister, Harald A1 - Probst, Thomas A1 - Spiliopoulou, Myra A1 - Pryss, Rüdiger T1 - Efficient processing of geospatial mHealth data using a scalable crowdsensing platform JF - Sensors N2 - Smart sensors and smartphones are becoming increasingly prevalent. Both can be used to gather environmental data (e.g., noise). Importantly, these devices can be connected to each other as well as to the Internet to collect large amounts of sensor data, which leads to many new opportunities. In particular, mobile crowdsensing techniques can be used to capture phenomena of common interest. Especially valuable insights can be gained if the collected data are additionally related to the time and place of the measurements. However, many technical solutions still use monolithic backends that are not capable of processing crowdsensing data in a flexible, efficient, and scalable manner. In this work, an architectural design was conceived with the goal of managing geospatial data in challenging crowdsensing healthcare scenarios. It will be shown how the proposed approach can be used to provide users with an interactive map of environmental noise, allowing tinnitus patients and other health-conscious people to avoid locations with harmful sound levels. Technically, the presented approach combines cloud-native applications with Big Data and stream processing concepts. In general, the presented architectural design shall serve as a foundation to implement practical and scalable crowdsensing platforms for various healthcare scenarios beyond the addressed use case.
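The core aggregation step behind such a noise map can be pictured with a small sketch (our own toy illustration, not the platform's actual stream-processing architecture): incoming (lat, lon, dB) samples are folded into grid tiles, averaging sound energy rather than raw decibel values.

import math
from collections import defaultdict

TILE = 0.01  # tile edge length in degrees; an illustrative choice

def tile_key(lat, lon, size=TILE):
    """Map a coordinate onto a fixed grid tile."""
    return (int(lat // size), int(lon // size))

def aggregate_noise(samples):
    """Fold a stream of (lat, lon, dB) samples into per-tile mean levels."""
    acc = defaultdict(lambda: [0.0, 0])            # tile -> [energy sum, count]
    for lat, lon, db in samples:
        cell = acc[tile_key(lat, lon)]
        cell[0] += 10 ** (db / 10)                 # average sound energy, not raw dB
        cell[1] += 1
    return {k: 10 * math.log10(v[0] / v[1]) for k, v in acc.items()}

tiles = aggregate_noise([(49.79, 9.95, 62.0), (49.79, 9.95, 70.0), (49.80, 9.97, 55.0)])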
KW - mHealth KW - crowdsensing KW - tinnitus KW - geospatial data KW - cloud-native KW - stream processing KW - scalability KW - architectural design Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-207826 SN - 1424-8220 VL - 20 IS - 12 ER - TY - JOUR A1 - Davidson, Padraig A1 - Düking, Peter A1 - Zinner, Christoph A1 - Sperlich, Billy A1 - Hotho, Andreas T1 - Smartwatch-Derived Data and Machine Learning Algorithms Estimate Classes of Ratings of Perceived Exertion in Runners: A Pilot Study JF - Sensors N2 - The rating of perceived exertion (RPE) is a subjective load marker and may assist in individualizing training prescription, particularly by adjusting running intensity. Unfortunately, RPE has shortcomings (e.g., underreporting) and cannot be monitored continuously and automatically throughout a training session. In this pilot study, we aimed to predict two classes of RPE (RPE ≤ 15, “Somewhat hard to hard”, on Borg’s 6–20 scale vs. RPE > 15) in runners by analyzing data recorded by a commercially available smartwatch with machine learning algorithms. Twelve trained and untrained runners performed long continuous runs at a constant self-selected pace to volitional exhaustion. Untrained runners reported their RPE each kilometer, whereas trained runners reported every five kilometers. The kinetics of heart rate, step cadence, and running velocity were recorded continuously (1 Hz) with a commercially available smartwatch (Polar V800). We trained different machine learning algorithms to estimate the two classes of RPE based on the time series sensor data derived from the smartwatch. Predictions were analyzed in different settings: accuracy overall and per runner type, i.e., accuracy for trained and untrained runners independently. We achieved top accuracies of 84.8% for the whole dataset, 81.8% for the trained runners, and 86.1% for the untrained runners. We predict two classes of RPE with high accuracy using machine learning and smartwatch data. This approach might aid in individualizing training prescriptions. KW - artificial intelligence KW - endurance KW - exercise intensity KW - precision training KW - prediction KW - wearable Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-205686 SN - 1424-8220 VL - 20 IS - 9 ER - TY - RPRT ED - Hoßfeld, Tobias ED - Wunderer, Stefan T1 - White Paper on Crowdsourced Network and QoE Measurements – Definitions, Use Cases and Challenges N2 - The goal of the white paper at hand is as follows. The definitions of the terms build a framework for discussions around the hype topic ‘crowdsourcing’. This serves as a basis for differentiation and a consistent view from different perspectives on crowdsourced network measurements, with the goal of providing a commonly accepted definition in the community. The focus is on the context of mobile and fixed network operators, but also on measurements of different layers (network, application, user layer). In addition, the white paper shows the value of crowdsourcing for selected use cases, e.g., to improve QoE or to address regulatory issues. Finally, the major challenges and issues for researchers and practitioners are highlighted. This white paper is the outcome of the Würzburg seminar on “Crowdsourced Network and QoE Measurements”, which took place on 25–26 September 2019 in Würzburg, Germany. International experts were invited from industry and academia.
They are well known in their communities, having different backgrounds in crowdsourcing, mobile networks, network measurements, network performance, Quality of Service (QoS), and Quality of Experience (QoE). The discussions in the seminar focused on how crowdsourcing will support vendors, operators, and regulators to determine the Quality of Experience in new 5G networks that enable various new applications and network architectures. As a result of the discussions, the need for a white paper emerged, with the goal of providing a scientific discussion of the terms “crowdsourced network measurements” and “crowdsourced QoE measurements”, describing relevant use cases for such crowdsourced data, and outlining the underlying challenges. During the seminar, these main topics were identified, intensively discussed in break-out groups, and brought back into the plenum several times. The outcome of the seminar is the white paper at hand, which is – to our knowledge – the first one covering the topic of crowdsourced network and QoE measurements. KW - Crowdsourcing KW - Network Measurements KW - Quality of Service (QoS) KW - Quality of Experience (QoE) KW - crowdsourced network measurements KW - crowdsourced QoE measurements Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-202327 ER - TY - JOUR A1 - Kammerer, Klaus A1 - Pryss, Rüdiger A1 - Hoppenstedt, Burkhard A1 - Sommer, Kevin A1 - Reichert, Manfred T1 - Process-driven and flow-based processing of industrial sensor data JF - Sensors N2 - Machine manufacturing companies face requirements not only to produce high-quality and reliable machines but also to maintain machine-related aspects through digital services. The development of such services in the field of the Industrial Internet of Things (IIoT) deals with solutions such as effective condition monitoring and predictive maintenance. However, appropriate data sources are needed on which digital services can be technically based. As many powerful and cheap sensors have been introduced over the last years, their integration into complex machines is promising for developing digital services for various scenarios. Components handling the recorded data of these sensors must usually deal with large amounts of data. In particular, the labeling of raw sensor data must be supported by a technical solution. To deal with these data handling challenges in a generic way, a sensor processing pipeline (SPP) was developed, which provides effective methods to capture, process, store, and visualize raw sensor data based on a processing chain. The SPP approach is presented in this work based on the example of a machine manufacturing company. For the company involved, the approach has revealed promising results. KW - data stream processing KW - cyber-physical systems KW - processing pipeline KW - sensor networks Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-213089 SN - 1424-8220 VL - 20 IS - 18 ER - TY - RPRT A1 - Metzger, Florian T1 - Crowdsensed QoE for the community - a concept to make QoE assessment accessible N2 - In recent years several community testbeds as well as participatory sensing platforms have successfully established themselves to provide open data to everyone interested, each with a specific goal in mind, ranging from collecting radio coverage data up to environmental and radiation data.
Such data can be used by the community in their decision making, whether to subscribe to a specific mobile phone service that provides good coverage in an area or to find a sunny and warm region for the summer holidays. However, the existing platforms usually limit themselves to directly measurable network QoS. If such a crowdsourced data set provided more in-depth derived measures, this would enable even better decision making. A community-driven crowdsensing platform that derives spatial application-layer user experience from resource-friendly bandwidth estimates would be such a case; video streaming services come to mind as a prime example. In this paper we present a concept for such a system based on an initial prototype that eases the collection of data necessary to determine mobile-specific QoE at large scale. In addition we reason why the simple quality metric proposed here can hold its own. KW - Quality of Experience KW - Crowdsourcing KW - Crowdsensing KW - QoE Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-203748 N1 - Originally written in 2017, but never published. ER - TY - JOUR A1 - Krupitzer, Christian A1 - Eberhardinger, Benedikt A1 - Gerostathopoulos, Ilias A1 - Raibulet, Claudia T1 - Introduction to the special issue “Applications in Self-Aware Computing Systems and their Evaluation” JF - Computers N2 - The joint 1st Workshop on Evaluations and Measurements in Self-Aware Computing Systems (EMSAC 2019) and Workshop on Self-Aware Computing (SeAC) was held as part of the FAS* conference alliance in conjunction with the 16th IEEE International Conference on Autonomic Computing (ICAC) and the 13th IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO) in Umeå, Sweden on 20 June 2019. The goal of this one-day workshop was to bring together researchers and practitioners from academic environments and from industry to share their solutions, ideas, visions, and doubts in self-aware computing systems in general and in the evaluation and measurements of such systems in particular. The workshop aimed to enable discussions, partnerships, and collaborations among the participants. This special issue follows the theme of the workshop. It contains extended versions of workshop presentations as well as additional contributions. KW - self-aware computing systems KW - quality evaluation KW - measurements KW - quality assurance KW - autonomous KW - self-adaptive KW - self-managing systems Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-203439 SN - 2073-431X VL - 9 IS - 1 ER - TY - JOUR A1 - Kaiser, Dennis A1 - Lesch, Veronika A1 - Rothe, Julian A1 - Strohmeier, Michael A1 - Spieß, Florian A1 - Krupitzer, Christian A1 - Montenegro, Sergio A1 - Kounev, Samuel T1 - Towards Self-Aware Multirotor Formations JF - Computers N2 - In the present day, unmanned aerial vehicles are seemingly becoming more popular every year, but, without regulation of their increasing numbers, the air space could become chaotic and uncontrollable. In this work, a framework is proposed that combines self-aware computing with multirotor formations to address this problem. The self-awareness is envisioned to improve the dynamic behavior of multirotors. The formation scheme that is implemented is called platooning, which arranges vehicles in a string behind the lead vehicle and is proposed to bring order into chaotic air space.
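To make the platooning idea just described concrete, here is a minimal sketch (our own illustration; the paper's framework with its Platooning Awareness loop is far richer): each follower is assigned a target slot at a fixed gap behind its predecessor along the leader's heading.

import numpy as np

def platoon_targets(leader_pos, heading, gap, n_followers):
    """String formation: follower i holds position (i+1)*gap behind the leader."""
    direction = np.array([np.cos(heading), np.sin(heading), 0.0])  # level flight
    leader = np.asarray(leader_pos, dtype=float)
    return [leader - (i + 1) * gap * direction for i in range(n_followers)]

# Three followers queue up 3 m apart behind a leader heading north.
targets = platoon_targets((10.0, 5.0, 20.0), heading=np.pi / 2, gap=3.0, n_followers=3)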
Since multirotors define a general category of unmanned aerial vehicles, the focus of this work is on quadcopters, i.e., platforms with four rotors. A modification of the LRA-M self-awareness loop is proposed and named Platooning Awareness. The implemented framework offers two flight modes that enable waypoint following, and its self-awareness module finds a path to a goal position through scenarios in which obstacles block the way. The evaluation of this work shows that the proposed framework is able to use self-awareness to learn about its environment and avoid obstacles, and that it can successfully move a platoon of drones through multiple scenarios. KW - self-aware computing KW - unmanned aerial vehicles KW - multirotors KW - quadcopters KW - intelligent transportation systems Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-200572 SN - 2073-431X VL - 9 IS - 1 ER - TY - JOUR A1 - Grohmann, Johannes A1 - Herbst, Nikolas A1 - Chalbani, Avi A1 - Arian, Yair A1 - Peretz, Noam A1 - Kounev, Samuel T1 - A Taxonomy of Techniques for SLO Failure Prediction in Software Systems JF - Computers N2 - Failure prediction is an important aspect of self-aware computing systems. Therefore, a multitude of different approaches has been proposed in the literature over the past few years. In this work, we propose a taxonomy for organizing works focusing on the prediction of Service Level Objective (SLO) failures. Our taxonomy classifies related work along the dimensions of the prediction target (e.g., anomaly detection, performance prediction, or failure prediction), the time horizon (e.g., detection or prediction, online or offline application), and the applied modeling type (e.g., time series forecasting, machine learning, or queueing theory). The classification is derived based on a systematic mapping of relevant papers in the area. Additionally, we give an overview of different techniques in each sub-group and address remaining challenges in order to guide future research. KW - taxonomy KW - survey KW - failure prediction KW - anomaly prediction KW - anomaly detection KW - self-aware computing KW - self-adaptive systems KW - performance prediction Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-200594 SN - 2073-431X VL - 9 IS - 1 ER - TY - JOUR A1 - Du, Shitong A1 - Lauterbach, Helge A. A1 - Li, Xuyou A1 - Demisse, Girum G. A1 - Borrmann, Dorit A1 - Nüchter, Andreas T1 - Curvefusion — A Method for Combining Estimated Trajectories with Applications to SLAM and Time-Calibration JF - Sensors N2 - Mapping and localization of mobile robots in an unknown environment are essential for most high-level operations like autonomous navigation or exploration. This paper presents a novel approach for combining estimated trajectories, namely curvefusion. The robot used in the experiments is equipped with a horizontally mounted 2D profiler, a constantly spinning 3D laser scanner and a GPS module. The proposed algorithm first combines trajectories from different sensors to optimize poses of the planar three degrees of freedom (DoF) trajectory, which is then fed into continuous-time simultaneous localization and mapping (SLAM) to further improve the trajectory. While state-of-the-art multi-sensor fusion methods mainly focus on probabilistic methods, our approach instead adopts a deformation-based method to optimize poses. To this end, a similarity metric for curved shapes is introduced into the robotics community to fuse the estimated trajectories.
Additionally, a shape-based point correspondence estimation method is applied to the multi-sensor time calibration. Experiments show that the proposed fusion method achieves better accuracy even if the error of the trajectory before fusion is large, which demonstrates that our method can still maintain a certain degree of accuracy in an environment where typical pose estimation methods have poor performance. In addition, the proposed time-calibration method also achieves high accuracy in estimating point correspondences. KW - mapping KW - continuous-time SLAM KW - deformation-based method KW - time calibration Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-219988 SN - 1424-8220 VL - 20 IS - 23 ER - TY - INPR A1 - Dandekar, Thomas T1 - Biological heuristics applied to cosmology suggests a condensation nucleus as start of our universe and inflation cosmology replaced by a period of rapid Weiss domain-like crystal growth N2 - Cosmology often uses intricate formulas and mathematics to derive new theories and concepts. We do something different in this paper: we look at biological processes and derive heuristics from them, so that the revised cosmology agrees with astronomical observations and also with standard biological observations. We show that we then have to replace any type of singularity at the start of the universe by a condensation nucleus and that the very early period of the universe, usually assumed to be inflation, has to be replaced by a period of rapid crystal growth as in Weiss magnetization domains. Impressively, these minor modifications agree well with astronomical observations, including the removal of the strong inflation perturbations that were never observed in the recent BICEP2 experiments. Furthermore, looking at biological principles suggests that such a new theory, with a condensation nucleus at the start and a first rapid phase of magnetization-like growth of the ordered, physical-law-obeying lattice we live in, is in fact the only convincing theory of the early phases of our universe that is also compatible with current observations. We show in detail in the following that such a process of crystal creation, breaking of new crystal seeds and ultimate evaporation of the present crystal readily leads over several generations to an evolution and selection of better, more stable and more self-organizing crystals. Moreover, this answers the “fine-tuning” question of why our universe is fine-tuned to favor life: our universe is self-organizing enough to have sufficient offspring, and the detailed physics involved is at the same time highly favorable for all self-organizing processes, including life. This biological theory contrasts with current standard inflation cosmologies. The latter do not perform well in explaining any phenomena of sophisticated structure creation or self-organization. As proteins can thermodynamically fold only by increasing the entropy of the solution around them, we suggest for cosmology that a condensation nucleus for a universe can form only in a “chaotic ocean” of string soup or quantum foam if the entropy outside of the nucleus rapidly increases. We derive an interaction potential for 1- to n-dimensional strings or quantum foams and show that they allow only 1D, 2D, 4D or octonion interactions. The latter is the richest structure, agrees with the E8 symmetry fundamental to particle physics, and is also compatible with the ten-dimensional E8 string theory, which is part of M-theory.
Interestingly, any interactions of other dimensionality can be ruled out using Hurwitz's composition theorem. Crystallization also explains extremely well why we have only one macroscopic reality and where the worldlines of alternative trajectories exist: they are in other planes of the crystal, and for energy reasons they crystallize mostly at the same time, yielding a beautiful and stable crystal. This explains decoherence and allows one to determine the size of Planck's quantum h (very small, as the separation of crystal layers by energy is extremely strong). The ultimate dissolution of real crystals suggests an explanation for dark energy that agrees with estimates for the “big rip”. The halo distribution of dark matter favoring galaxy formation is readily explained by a crystal seed starting with unit cells made of normal and dark matter. That we have only matter and not antimatter can be explained as there may be right-handed matter crystals and left-handed antimatter crystals. Similarly, real crystals are never perfect, and we argue that exactly such irregularities allow the formation of galaxies, clusters and superclusters. Finally, heuristics from genetics suggest taking a systems perspective to derive correct vacuum and Higgs boson energies. KW - heuristics KW - inflation KW - cosmology KW - crystallization KW - crystal growth KW - E8 symmetry KW - Hurwitz theorem KW - evolution KW - Lee Smolin Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-183945 ER - TY - JOUR A1 - Gehrke, Alexander A1 - Balbach, Nico A1 - Rauch, Yong-Mi A1 - Degkwitz, Andreas A1 - Puppe, Frank T1 - Erkennung von handschriftlichen Unterstreichungen in Alten Drucken JF - Bibliothek Forschung und Praxis N2 - Die Erkennung handschriftlicher Artefakte wie Unterstreichungen in Buchdrucken ermöglicht Rückschlüsse auf das Rezeptionsverhalten und die Provenienzgeschichte und wird auch für eine OCR benötigt. Dabei soll zwischen handschriftlichen Unterstreichungen und waagerechten Linien im Druck (z. B. Trennlinien usw.) unterschieden werden, da letztere nicht ausgezeichnet werden sollen. Im Beitrag wird ein Ansatz basierend auf einem auf Unterstreichungen trainierten Neuronalen Netz gemäß der U-Net Architektur vorgestellt, dessen Ergebnisse in einem zweiten Schritt mit heuristischen Regeln nachbearbeitet werden. Die Evaluationen zeigen, dass Unterstreichungen sehr gut erkannt werden, wenn bei der Binarisierung der Scans nicht zu viele Pixel der Unterstreichung wegen geringem Kontrast verloren gehen. Zukünftig sollen die Worte oberhalb der Unterstreichung mit OCR transkribiert werden und auch andere Artefakte wie handschriftliche Notizen in alten Drucken erkannt werden. N2 - The recognition of handwritten artefacts like underlines in historical printings allows inference on the reception and provenance history and is necessary for OCR (optical character recognition). In this context it is important to differentiate between handwritten and printed lines, since the latter are common in printings, but should be ignored. We present an approach based on neural nets with the U-Net architecture, whose segmentation results are post-processed with heuristic rules. The evaluations show that handwritten underlines are very well recognized if the binarisation of the scans is adequate. Future work includes transcription of the underlined words with OCR and recognition of other artefacts like handwritten notes in historical printings.
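The heuristic post-processing stage described above might be sketched roughly as follows (our own illustration, assuming the network yields a per-pixel underline probability map; the thresholds are made up): connected components that are flat and wide are kept as underline candidates, all other shapes are discarded.

import numpy as np
from scipy import ndimage

def filter_underlines(prob_map, thresh=0.5, max_height=12, min_aspect=5.0):
    """Keep only connected components shaped like underlines:
    limited height and a large width-to-height ratio."""
    labels, n = ndimage.label(prob_map > thresh)
    keep = np.zeros(prob_map.shape, dtype=bool)
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        h = ys.max() - ys.min() + 1
        w = xs.max() - xs.min() + 1
        if h <= max_height and w / h >= min_aspect:
            keep[ys, xs] = True
    return keep

underline_mask = filter_underlines(np.random.rand(64, 256))  # dummy probability map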
T2 - Recognition of handwritten underlines in historical printings KW - Brüder Grimm Privatbibliothek KW - Erkennung handschriftlicher Artefakte KW - Convolutional Neural Network KW - regelbasierte Nachbearbeitung KW - Grimm brothers personal library KW - handwritten artefact recognition KW - convolutional neural network KW - rule based post processing Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-193377 SN - 1865-7648 SN - 0341-4183 N1 - Dieser Beitrag ist mit Zustimmung des Rechteinhabers aufgrund einer (DFG-geförderten) Allianz- bzw. Nationallizenz frei zugänglich. VL - 43 IS - 3 SP - 447 EP - 452 ER - TY - JOUR A1 - Wick, Christoph A1 - Hartelt, Alexander A1 - Puppe, Frank T1 - Staff, symbol and melody detection of Medieval manuscripts written in square notation using deep Fully Convolutional Networks JF - Applied Sciences N2 - Even today, the automatic digitisation of scanned documents in general, but especially the automatic optical music recognition (OMR) of historical manuscripts, still remains an enormous challenge, since both handwritten musical symbols and text have to be identified. This paper focuses on the Medieval so-called square notation developed in the 11th–12th century, which is already composed of staff lines, staves, clefs, accidentals, and neumes, which are, roughly speaking, connected single notes. The aim is to develop an algorithm that captures both the neumes and, in particular, the melody, which can be used to reconstruct the original writing. Our pipeline is similar to the standard OMR approach and comprises a novel staff line and symbol detection algorithm based on deep Fully Convolutional Networks (FCN), which perform pixel-based predictions for either staff lines or symbols and their respective types. Then, the staff line detection combines the extracted lines into staves and yields an F\(_1\)-score of over 99% for both detecting lines and complete staves. For the music symbol detection, we choose a novel approach that skips the step of identifying neumes and instead directly predicts note components (NCs) and their respective affiliation to a neume. Furthermore, the algorithm detects clefs and accidentals. Our algorithm predicts the symbol sequence of a staff with a diplomatic symbol accuracy rate (dSAR) of about 87%, which includes symbol type and location. If only the NCs (without their respective connection to a neume), clefs and accidentals are of interest, the algorithm reaches a harmonic symbol accuracy rate (hSAR) of approximately 90%. In general, the algorithm recognises a symbol in the manuscript with an F\(_1\)-score of over 96%. KW - optical music recognition KW - historical document analysis KW - medieval manuscripts KW - neume notation KW - fully convolutional neural networks Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-197248 SN - 2076-3417 VL - 9 IS - 13 ER - TY - THES A1 - Peng, Dongliang T1 - An Optimization-Based Approach for Continuous Map Generalization T1 - Optimierung für die kontinuierliche Generalisierung von Landkarten N2 - Maps are the main tool to represent geographical information. Geographical information is usually scale-dependent, so users need to have access to maps at different scales. In our digital age, this access is realized by zooming. As discrete changes during the zooming tend to distract users, smooth changes are preferred. This is why some digital maps try to make the zooming as continuous as they can.
The process of producing maps at different scales with smooth changes is called continuous map generalization. In order to produce maps of high quality, cartographers often take into account additional requirements. These requirements are transferred to models in map generalization. Optimization for map generalization is important not only because it finds optimal solutions in the sense of the models, but also because it helps us to evaluate the quality of the models. Optimization, however, becomes more delicate when we deal with continuous map generalization. In this area, there are requirements not only for a specific map but also for relations between maps at different scales. This thesis is about continuous map generalization based on optimization. First, we show the background of our research topics. Second, we find optimal sequences for aggregating land-cover areas. We compare the A\(^\star\) algorithm and integer linear programming in completing this task. Third, we continuously generalize county boundaries to provincial boundaries based on compatible triangulations. We morph between the two sets of boundaries, using dynamic programming to compute the correspondence. Fourth, we continuously generalize buildings to built-up areas by aggregating and growing. In this work, we group buildings with the help of a minimum spanning tree. Fifth, we define vertex trajectories that allow us to morph between polylines. We require that both the angles and the edge lengths change linearly over time. As it is impossible to fulfill all of these requirements simultaneously, we mediate between them using least-squares adjustment. Sixth, we discuss the performance of some commonly used data structures for a specific spatial problem. Seventh, we conclude this thesis and present open problems. N2 - Maps are the main tool to represent geographical information. Users often zoom in and out to access maps at different scales. Continuous map generalization tries to make the changes between different scales smooth, which is essential to provide users with a comfortable zooming experience. In order to achieve continuous map generalization with high quality, we optimize some important aspects of maps. In this book, we have used optimization in the generalization of land-cover areas, administrative boundaries, buildings, and coastlines. According to our experiments, continuous map generalization indeed benefits from optimization. N2 - Landkarten sind das wichtigste Werkzeug zur Repräsentation geografischer Information. Unter der Generalisierung von Landkarten versteht man die Aufbereitung von geografischen Informationen aus detaillierten Daten zur Generierung von kleinmaßstäbigen Karten. Nutzer von Online-Karten zoomen oft in eine Karte hinein oder aus einer Karte heraus, um mehr Details bzw. mehr Überblick zu bekommen. Die kontinuierliche Generalisierung von Landkarten versucht die Änderungen zwischen verschiedenen Maßstäben stetig zu machen. Dies ist wichtig, um Nutzern eine angenehme Zoom-Erfahrung zu bieten. Um eine qualitativ hochwertige kontinuierliche Generalisierung zu erreichen, kann man wichtige Aspekte bei der Generierung von Online-Karten optimieren. In diesem Buch haben wir Optimierung bei der Generalisierung von Landnutzungskarten, von administrativen Grenzen, Gebäuden und Küstenlinien eingesetzt. Unsere Experimente zeigen, dass die kontinuierliche Generalisierung von Landkarten in der Tat von Optimierung profitiert.
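The continuous transition between two boundary sets can be pictured with a much-simplified sketch (ours, under the strong assumption of an already-computed 1:1 vertex correspondence; the thesis computes such correspondences with dynamic programming and mediates angle and edge-length constraints by least-squares adjustment).

import numpy as np

def morph(coarse, detailed, t):
    """Linearly interpolate between two corresponding polylines for zoom level t."""
    a = np.asarray(coarse, dtype=float)
    b = np.asarray(detailed, dtype=float)
    assert a.shape == b.shape, "sketch assumes equal vertex counts"
    return (1.0 - t) * a + t * b

county = [(0, 0), (2, 1), (4, 0)]
province = [(0, 0), (2, 3), (4, 0)]
intermediate = morph(county, province, t=0.25)  # boundary shown mid-zoom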
KW - land-cover area KW - administrative boundary KW - building KW - morphing KW - data structure KW - zooming KW - Generalisierung KW - Landnutzungskartierung KW - Optimierung Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-174427 SN - 978-3-95826-104-4 SN - 978-3-95826-105-1 N1 - Parallel erschienen als Druckausgabe in Würzburg University Press, 978-3-95826-104-4, 24,90 EUR. PB - Würzburg University Press CY - Würzburg ET - 1. Auflage ER - TY - JOUR A1 - Oberdörfer, Sebastian A1 - Latoschik, Marc Erich T1 - Knowledge encoding in game mechanics: transfer-oriented knowledge learning in desktop-3D and VR JF - International Journal of Computer Games Technology N2 - Affine Transformations (ATs) are complex and abstract learning content. Encoding the AT knowledge in Game Mechanics (GMs) enables repetitive knowledge application and audiovisual demonstration. Playing a serious game providing these GMs leads to motivating and effective knowledge learning. Using immersive Virtual Reality (VR) has the potential to further increase the serious game’s learning outcome and learning quality. This paper compares the effectiveness and efficiency of desktop-3D and VR with respect to the achieved learning outcome. Also, the present study analyzes the effectiveness of an enhanced audiovisual knowledge encoding and the provision of a debriefing system. The results validate the effectiveness of the knowledge encoding in GMs to achieve knowledge learning. The study also indicates that VR is beneficial for the overall learning quality and that an enhanced audiovisual encoding has only a limited effect on the learning outcome. KW - games Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-201159 VL - 2019 ER - TY - THES A1 - Niebler, Thomas T1 - Extracting and Learning Semantics from Social Web Data T1 - Extraktion und Lernen von Semantik aus Social Web-Daten N2 - Making machines understand natural language is a dream of mankind that has existed for a very long time. Early attempts at programming machines to converse with humans in a supposedly intelligent way relied on phrase lists and simple keyword matching. However, such approaches cannot provide semantically adequate answers, as they do not consider the specific meaning of the conversation. Thus, if we want to enable machines to actually understand language, we need to be able to access semantically relevant background knowledge. For this, it is possible to query so-called ontologies, which are large networks containing knowledge about real-world entities and their semantic relations. However, creating such ontologies is a tedious task, as often extensive expert knowledge is required. Thus, we need to find ways to automatically construct and update ontologies that fit human intuition of semantics and semantic relations. More specifically, we need to determine semantic entities and find relations between them. While this is usually done on large corpora of unstructured text, previous work has shown that we can at least facilitate the first issue of extracting entities by considering special data such as tagging data or human navigational paths. Here, we do not need to detect the actual semantic entities, as they are already provided because of the way those data are collected. Thus we can mainly focus on the problem of assessing the degree of semantic relatedness between tags or web pages.
However, there exist several issues which need to be overcome if we want to approximate human intuition of semantic relatedness. For this, it is necessary to represent words and concepts in a way that allows easy and highly precise semantic characterization. This also largely depends on the quality of data from which these representations are constructed. In this thesis, we extract semantic information both from tagging data created by users of social tagging systems and from human navigation data in different semantic-driven social web systems. Our main goal is to construct high-quality and robust vector representations of words which can then be used to measure the relatedness of semantic concepts. First, we show that navigation in the social media systems Wikipedia and BibSonomy is driven by a semantic component. After this, we discuss and extend methods to model the semantic information in tagging data as low-dimensional vectors. Furthermore, we show that tagging pragmatics influences different facets of tagging semantics. We then investigate the usefulness of human navigational paths in several different settings on Wikipedia and BibSonomy for measuring semantic relatedness. Finally, we propose a metric-learning-based algorithm to adapt pre-trained word embeddings to datasets containing human judgment of semantic relatedness. This work contributes to the field of studying semantic relatedness between words by proposing methods to extract semantic relatedness from web navigation, to learn high-quality and low-dimensional word representations from tagging data, and to learn semantic relatedness from any kind of vector representation by exploiting human feedback. Applications first and foremost lie in ontology learning for the Semantic Web, but also in semantic search or query expansion. N2 - Einer der großen Träume der Menschheit ist es, Maschinen dazu zu bringen, natürliche Sprache zu verstehen. Frühe Versuche, Computer dahingehend zu programmieren, dass sie mit Menschen vermeintlich intelligente Konversationen führen können, basierten hauptsächlich auf Phrasensammlungen und einfachen Stichwortabgleichen. Solche Ansätze sind allerdings nicht in der Lage, inhaltlich adäquate Antworten zu liefern, da der tatsächliche Inhalt der Konversation nicht erfasst werden kann. Folgerichtig ist es notwendig, dass Maschinen auf semantisch relevantes Hintergrundwissen zugreifen können, um diesen Inhalt zu verstehen. Solches Wissen ist beispielsweise in Ontologien vorhanden. Ontologien sind große Datenbanken von vernetztem Wissen über Objekte und Gegenstände der echten Welt sowie über deren semantische Beziehungen. Das Erstellen solcher Ontologien ist eine sehr kostspielige und aufwändige Aufgabe, da oft tiefgreifendes Expertenwissen benötigt wird. Wir müssen also Wege finden, um Ontologien automatisch zu erstellen und aktuell zu halten, und zwar in einer Art und Weise, dass dies auch menschlichem Empfinden von Semantik und semantischer Ähnlichkeit entspricht. Genauer gesagt ist es notwendig, semantische Entitäten und deren Beziehungen zu bestimmen. Während solches Wissen üblicherweise aus Textkorpora extrahiert wird, ist es möglich, zumindest das erste Problem - semantische Entitäten zu bestimmen - durch Benutzung spezieller Datensätze zu umgehen, wie zum Beispiel Tagging- oder Navigationsdaten. In diesen Arten von Datensätzen ist es nicht notwendig, Entitäten zu extrahieren, da sie bereits aufgrund inhärenter Eigenschaften bei der Datenakquise vorhanden sind.
Wir können uns also hauptsächlich auf die Bestimmung von semantischen Relationen und deren Intensität fokussieren. Trotzdem müssen hier noch einige Hindernisse überwunden werden. Beispielsweise ist es notwendig, Repräsentationen für semantische Entitäten zu finden, so dass es möglich ist, sie einfach und semantisch hochpräzise zu charakterisieren. Dies hängt allerdings auch erheblich von der Qualität der Daten ab, aus denen diese Repräsentationen konstruiert werden. In der vorliegenden Arbeit extrahieren wir semantische Informationen sowohl aus Taggingdaten, von Benutzern sozialer Taggingsysteme erzeugt, als auch aus Navigationsdaten von Benutzern semantikgetriebener Social Media-Systeme. Das Hauptziel dieser Arbeit ist es, hochqualitative und robuste Vektordarstellungen von Worten zu konstruieren, die dann dazu benutzt werden können, die semantische Ähnlichkeit von Konzepten zu bestimmen. Als erstes zeigen wir, dass Navigation in Social Media Systemen unter anderem durch eine semantische Komponente getrieben wird. Danach diskutieren und erweitern wir Methoden, um die semantische Information in Taggingdaten als niedrigdimensionale sogenannte “Embeddings” darzustellen. Darüber hinaus demonstrieren wir, dass die Taggingpragmatik verschiedene Facetten der Taggingsemantik beeinflusst. Anschließend untersuchen wir, inwieweit wir menschliche Navigationspfade zur Bestimmung semantischer Ähnlichkeit benutzen können. Hierzu betrachten wir mehrere Datensätze, die Navigationsdaten in verschiedenen Rahmenbedingungen beinhalten. Als letztes stellen wir einen neuartigen Algorithmus vor, um bereits trainierte Word Embeddings im Nachhinein an menschliche Intuition von Semantik anzupassen. Diese Arbeit steuert wertvolle Beiträge zum Gebiet der Bestimmung von semantischer Ähnlichkeit bei: Es werden Methoden vorgestellt, um hochqualitative semantische Information aus Web-Navigation und Taggingdaten zu extrahieren, diese mittels niedrigdimensionaler Vektordarstellungen zu modellieren und selbige schließlich besser an menschliches Empfinden von semantischer Ähnlichkeit anzupassen, indem aus genau diesem Empfinden gelernt wird. Anwendungen liegen in erster Linie darin, Ontologien für das Semantic Web zu lernen, allerdings auch in allen Bereichen, die Vektordarstellungen von semantischen Entitäten benutzen. KW - Semantik KW - Maschinelles Lernen KW - Soziale Software KW - Semantics KW - User Behavior KW - Social Web KW - Machine Learning Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-178666 ER - TY - JOUR A1 - Djebko, Kirill A1 - Puppe, Frank A1 - Kayal, Hakan T1 - Model-based fault detection and diagnosis for spacecraft with an application for the SONATE triple cube nano-satellite JF - Aerospace N2 - The correct behavior of spacecraft components is the foundation of unhindered mission operation. However, no technical system is free of wear and degradation. A malfunction of one single component might significantly alter the behavior of the whole spacecraft and may even lead to a complete mission failure. Therefore, abnormal component behavior must be detected early in order to be able to perform countermeasures. A dedicated fault detection system can be employed, as opposed to classical health monitoring performed by human operators, to decrease the response time to a malfunction. In this paper, we present a generic model-based diagnosis system, which detects faults by analyzing the spacecraft’s housekeeping data.
The observed behavior of the spacecraft components, given by the housekeeping data, is compared to their expected behavior, obtained through simulation. Each discrepancy between the observed and the expected behavior of a component generates a so-called symptom. Given the symptoms, the diagnoses are derived by computing sets of components whose malfunction might cause the observed discrepancies. We demonstrate the applicability of the diagnosis system by using modified housekeeping data of the qualification model of an actual spacecraft and outline the advantages and drawbacks of our approach. KW - fault detection KW - model-based diagnosis KW - nano-satellite Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-198836 SN - 2226-4310 VL - 6 IS - 10 ER - TY - JOUR A1 - Loda, Sophia A1 - Krebs, Jonathan A1 - Danhof, Sophia A1 - Schreder, Martin A1 - Solimando, Antonio G. A1 - Strifler, Susanne A1 - Rasche, Leo A1 - Kortüm, Martin A1 - Kerscher, Alexander A1 - Knop, Stefan A1 - Puppe, Frank A1 - Einsele, Hermann A1 - Bittrich, Max T1 - Exploration of artificial intelligence use with ARIES in multiple myeloma research JF - Journal of Clinical Medicine N2 - Background: Natural language processing (NLP) is a powerful tool supporting the generation of Real-World Evidence (RWE). There is no NLP system that enables the extensive querying of parameters specific to multiple myeloma (MM) from unstructured medical reports. We therefore created an MM-specific ontology to accelerate the information extraction (IE) from unstructured text. Methods: Our MM ontology consists of extensive MM-specific and hierarchically structured attributes and values. We implemented “A Rule-based Information Extraction System” (ARIES) that uses this ontology. We evaluated ARIES on 200 randomly selected medical reports of patients diagnosed with MM. Results: Our system achieved a high F1-Score of 0.92 on the evaluation dataset with a precision of 0.87 and recall of 0.98. Conclusions: Our rule-based IE system enables the comprehensive querying of medical reports. The IE accelerates the extraction of data and enables clinicians to generate RWE on hematological issues faster. RWE helps clinicians to make decisions in an evidence-based manner. Our tool easily accelerates the integration of research evidence into everyday clinical practice. KW - natural language processing KW - ontology KW - artificial intelligence KW - multiple myeloma KW - real world evidence Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-197231 SN - 2077-0383 VL - 8 IS - 7 ER - TY - JOUR A1 - Lopez-Arreguin, A. J. R. A1 - Montenegro, S. T1 - Improving engineering models of terramechanics for planetary exploration JF - Results in Engineering N2 - This short letter proposes more consolidated explicit solutions for the forces and torques acting on typical rover wheels, which can be used as a method to determine their average mobility characteristics in planetary soils. The closed-form solutions are based on one of the verified methods, but in contrast to the previous ones, the observables are decoupled, requiring a smaller number of physical parameters to be measured. As a result, we show that with knowledge of terrain properties, wheel driving performance relies on a single observable only. Because of their generality, the equations established here can have further implications for the autonomy and control of rovers and for planetary soil characterization.
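Returning to the model-based diagnosis record above (Djebko et al.): the comparison of observed against simulated housekeeping values can be caricatured in a few lines. This is our own toy sketch with made-up channel names and tolerances, not the paper's system.

def symptoms(observed, expected, tolerance):
    """Flag every telemetry channel whose observed value deviates from the
    simulated expectation by more than its tolerance -> one symptom each."""
    return {ch: (observed[ch], expected[ch])
            for ch in observed
            if abs(observed[ch] - expected[ch]) > tolerance.get(ch, 0.0)}

obs = {"bus_voltage": 7.2, "panel_current": 0.90}   # hypothetical housekeeping data
exp = {"bus_voltage": 8.1, "panel_current": 0.92}   # values predicted by simulation
print(symptoms(obs, exp, tolerance={"bus_voltage": 0.3, "panel_current": 0.1}))
# {'bus_voltage': (7.2, 8.1)} -> diagnoses are then component sets explaining this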
KW - Wheel KW - Terramechanics KW - Forces KW - Torque KW - Robotics Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-202490 VL - 3 ER - TY - GEN A1 - Funken, Matthias A1 - Tscherner, Michael T1 - Jahresbericht 2018 des Rechenzentrums der Universität Würzburg T1 - Annual Report 2018 of the Computer Center, University of Wuerzburg N2 - Eine Übersicht über die Aktivitäten des Rechenzentrums im Jahr 2018. T3 - Jahresbericht des Rechenzentrums der Universität Würzburg - 2018 KW - Julius-Maximilians-Universität Würzburg KW - Jahresbericht KW - Rechenzentrum KW - RZUW KW - annual report KW - Computer Center University of Wuerzburg Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-188265 UR - https://www.rz.uni-wuerzburg.de/wir/publikationen/ ET - 1. Auflage ER - TY - JOUR A1 - Petschke, Danny A1 - Staab, Torsten E.M. T1 - DDRS4PALS: a software for the acquisition and simulation of lifetime spectra using the DRS4 evaluation board JF - SoftwareX N2 - Lifetime techniques are applied to diverse fields of study including materials sciences, semiconductor physics, biology, molecular biophysics and photochemistry. Here we present DDRS4PALS, a software for the acquisition and simulation of lifetime spectra using the DRS4 evaluation board (Paul Scherrer Institute, Switzerland) for time-resolved measurements and digitization of detector output pulses. Artifact-afflicted pulses can be corrected or rejected prior to the lifetime calculation to enable the generation of high-quality lifetime spectra, which are crucial for a profound analysis, i.e. the decomposition of the true information. Moreover, the pulses can be streamed to an (external) hard drive during the measurement and subsequently reloaded in offline mode without being connected to the hardware. This allows the generation of various lifetime spectra at different configurations from one single measurement and, hence, a meaningful comparison in terms of analyzability and quality. Parallel processing and an integrated JavaScript-based language provide convenient options to accelerate and automate time-consuming processes such as lifetime spectra simulations. KW - Lifetime spectroscopy KW - Positron annihilation spectroscopy KW - Simulation KW - Time resolved measurements Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-202276 VL - 10 ER - TY - THES A1 - Borrmann, Dorit T1 - Multi-modal 3D mapping - Combining 3D point clouds with thermal and color information T1 - Multi-modale 3D-Kartierung - Kombination von 3D-Punktwolken mit Thermo- und Farbinformation N2 - Imagine a technology that automatically creates a full 3D thermal model of an environment and detects temperature peaks in it. For better orientation in the model it is enhanced with color information. The current state of the art for analyzing temperature-related issues is thermal imaging. It is relevant for energy efficiency but also for securing important infrastructure such as power supplies and temperature regulation systems. Monitoring and analysis of the data for a large building is tedious, as stable conditions need to be guaranteed for several hours and detailed notes about the pose and the environment conditions for each image must be taken. For some applications repeated measurements are necessary to monitor changes over time. The analysis of the scene is only possible through expertise and experience.
This thesis proposes a robotic system that creates a full 3D model of the environment with color and thermal information by combining thermal imaging with the technology of terrestrial laser scanning. The addition of a color camera facilitates the interpretation of the data and allows for other application areas. The data from all sensors collected at different positions is joined in one common reference frame using calibration and scan matching. The first part of the thesis deals with 3D point cloud processing with the emphasis on accessing point cloud data efficiently, detecting planar structures in the data and registering multiple point clouds into one common coordinate system. The second part covers the autonomous exploration and data acquisition with a mobile robot with the objective of minimizing the unseen area in 3D space. Furthermore, the combination of different modalities (color images, thermal images and point cloud data) through calibration is elaborated. The last part presents applications for the collected data. Among these are methods to detect the structure of building interiors for reconstruction purposes and subsequent detection and classification of windows. A system to project the gathered thermal information back into the scene is presented as well as methods to improve the color information and to join separately acquired point clouds and photo series. A full multi-modal 3D model contains all the relevant geometric information about the recorded scene and enables an expert to fully analyze it off-site. The technology clears the path for automatically detecting points of interest, thereby helping the expert to analyze the heat flow as well as localize and identify heat leaks. The concept is modular and neither limited to achieving energy efficiency nor restricted to the use in combination with a mobile platform. It also finds its application in fields such as archaeology and geology and can be extended by further sensors. N2 - Man stelle sich eine Technologie vor, die automatisch ein vollständiges 3D-Thermographiemodell einer Umgebung generiert und Temperaturspitzen darin erkennt. Zur besseren Orientierung innerhalb des Modells ist dieses mit Farbinformationen erweitert. In der Analyse temperaturrelevanter Fragestellungen sind Thermalbilder der Stand der Technik. Darunter fallen Energieeffizienz und die Sicherung wichtiger Infrastruktur, wie Energieversorgung und Systeme zur Temperaturregulierung. Die Überwachung und anschließende Analyse der Daten eines großen Gebäudes ist aufwändig, da über mehrere Stunden stabile Bedingungen garantiert und detaillierte Aufzeichnungen über die Aufnahmeposen und die Umgebungsverhältnisse für jedes Wärmebild erstellt werden müssen. Einige Anwendungen erfordern wiederholte Messungen, um Veränderungen über die Zeit zu beobachten. Eine Analyse der Szene ist nur mit Erfahrung und Expertise möglich. Diese Arbeit stellt ein Robotersystem vor, das durch Kombination von Thermographie mit terrestrischem Laserscanning ein vollständiges 3D Modell der Umgebung mit Farb- und Temperaturinformationen erstellt. Die ergänzende Farbkamera vereinfacht die Interpretation der Daten und eröffnet weitere Anwendungsfelder. Die an unterschiedlichen Positionen aufgenommenen Daten aller Sensoren werden durch Kalibrierung und Scanmatching in einem gemeinsamen Bezugssystem zusammengefügt.
Der erste Teil der Arbeit behandelt 3D-Punktwolkenverarbeitung mit Schwerpunkt auf effizientem Punktzugriff, Erkennung planarer Strukturen und Registrierung mehrerer Punktwolken in einem gemeinsamen Koordinatensystem. Der zweite Teil beschreibt die autonome Erkundung und Datenakquise mit einem mobilen Roboter, mit dem Ziel, die bisher nicht erfassten Bereiche im 3D-Raum zu minimieren. Des Weiteren wird die Kombination verschiedener Modalitäten, Farbbilder, Thermalbilder und Punktwolken durch Kalibrierung ausgearbeitet. Den abschließenden Teil stellen Anwendungsszenarien für die gesammelten Daten dar, darunter Methoden zur Erkennung der Innenraumstruktur für die Rekonstruktion von Gebäuden und der anschließenden Erkennung und Klassifizierung von Fenstern. Ein System zur Rückprojektion der gesammelten Thermalinformation in die Umgebung wird ebenso vorgestellt wie Methoden zur Verbesserung der Farbinformationen und zum Zusammenfügen separat aufgenommener Punktwolken und Fotoreihen. Ein vollständiges multi-modales 3D Modell enthält alle relevanten geometrischen Informationen der aufgenommenen Szene und ermöglicht einem Experten, diese standortunabhängig zu analysieren. Diese Technologie ebnet den Weg für die automatische Erkennung relevanter Bereiche und für die Analyse des Wärmeflusses und vereinfacht somit die Lokalisierung und Identifikation von Wärmelecks für den Experten. Das vorgestellte modulare Konzept ist weder auf den Anwendungsfall Energieeffizienz beschränkt noch auf die Verwendung einer mobilen Plattform angewiesen. Es ist beispielsweise auch in Feldern wie der Archäologie und Geologie einsetzbar und kann durch zusätzliche Sensoren erweitert werden. T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 14 KW - Punktwolke KW - Lidar KW - Thermografie KW - Robotik KW - 3D point cloud KW - Laser scanning KW - Robotics KW - 3D thermal mapping KW - Registration Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-157085 SN - 978-3-945459-20-1 SN - 1868-7474 SN - 1868-7466 ER - TY - JOUR A1 - Pfitzner, Christian A1 - May, Stefan A1 - Nüchter, Andreas T1 - Body weight estimation for dose-finding and health monitoring of lying, standing and walking patients based on RGB-D data JF - Sensors N2 - This paper describes the estimation of the body weight of a person in front of an RGB-D camera. A survey of different methods for body weight estimation based on depth sensors is given. First, an estimation of people standing in front of a camera is presented. Second, an approach based on a stream of depth images is used to obtain the body weight of a person walking towards a sensor. The algorithm first extracts features from a point cloud and forwards them to an artificial neural network (ANN) to obtain an estimation of body weight. Besides the algorithm for the estimation, this paper further presents an open-access dataset based on measurements from a trauma room in a hospital as well as data from visitors of a public event. In total, the dataset contains 439 measurements. The article illustrates the efficiency of the approach with experiments on persons lying down in a hospital, standing persons, and walking persons. Applicable scenarios for the presented algorithm include body weight-related dosing of emergency patients.
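The feature-plus-ANN pipeline described above can be outlined roughly as follows (our own sketch with placeholder features and data; the actual features and network of the paper differ).

import numpy as np
from sklearn.neural_network import MLPRegressor

def cloud_features(points):
    """Crude geometric features from a segmented person point cloud (metres):
    bounding-box extents, a volume proxy, and the point count."""
    p = np.asarray(points, dtype=float)
    extents = p.max(axis=0) - p.min(axis=0)
    return np.hstack([extents, np.prod(extents), len(p)])

# Placeholder training data standing in for the open-access dataset.
rng = np.random.default_rng(0)
X = rng.random((100, 5))
y = 60.0 + 40.0 * rng.random(100)                 # body weights in kg
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
estimate = model.predict(cloud_features(rng.random((500, 3))).reshape(1, -1))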
KW - RGB-D KW - human body weight KW - image processing KW - kinect KW - machine learning KW - perception KW - segmentation KW - sensor fusion KW - stroke KW - thermal camera Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-176642 VL - 18 IS - 5 ER - TY - THES A1 - Fleszar, Krzysztof T1 - Network-Design Problems in Graphs and on the Plane T1 - Netzwerk-Design-Probleme in Graphen und auf der Ebene N2 - A network design problem defines an infinite set whose elements, called instances, describe relationships and network constraints. It asks for an algorithm that, given an instance of this set, designs a network that respects the given constraints and at the same time optimizes some given criterion. In my thesis, I develop algorithms whose solutions are optimum or close to an optimum value within some guaranteed bound. I also examine the computational complexity of these problems. Problems from two vast areas are considered: graphs and the Euclidean plane. In the Maximum Edge Disjoint Paths problem, we are given a graph and a subset of vertex pairs that are called terminal pairs. We are asked for a set of paths where the endpoints of each path form a terminal pair. The constraint is that any two paths share at most one inner vertex. The optimization criterion is to maximize the cardinality of the set. In the hard-capacitated k-Facility Location problem, we are given an integer k and a complete graph where the distances obey a given metric and where each node has two numerical values: a capacity and an opening cost. We are asked for a subset of k nodes, called facilities, and an assignment of all the nodes, called clients, to the facilities. The constraint is that the number of clients assigned to a facility cannot exceed the facility's capacity value. The optimization criterion is to minimize the total cost, which consists of the total opening cost of the facilities and the total distance between the clients and the facilities they are assigned to. In the Stabbing problem, we are given a set of axis-aligned rectangles in the plane. We are asked for a set of horizontal line segments such that, for every rectangle, there is a line segment crossing its left and right edge. The optimization criterion is to minimize the total length of the line segments. In the k-Colored Non-Crossing Euclidean Steiner Forest problem, we are given an integer k and a finite set of points in the plane where each point has one of k colors. For every color, we are asked for a drawing that connects all the points of the same color. The constraint is that drawings of different colors are not allowed to cross each other. The optimization criterion is to minimize the total length of the drawings. In the Minimum Rectilinear Polygon for Given Angle Sequence problem, we are given an angle sequence of left (+90°) turns and right (-90°) turns. We are asked for an axis-parallel simple polygon where the angles of the vertices yield the given sequence when walking around the polygon in a counter-clockwise manner. The optimization criteria considered are to minimize the perimeter, the area, and the size of the axis-parallel bounding box of the polygon. N2 - Ein Netzwerk-Design-Problem definiert eine unendliche Menge, deren Elemente, als Instanzen bezeichnet, Beziehungen und Beschränkungen in einem Netzwerk beschreiben.
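To make the last of these problem statements concrete, the toy sketch below computes the ±90° turn sequence of a given axis-parallel polygon, i.e. the inverse direction of the reconstruction task; it merely illustrates the problem and is not an algorithm from the thesis.

```python
def turn_sequence(polygon):
    """Turn at each vertex of an axis-parallel simple polygon given in
    counter-clockwise order: '+90' for a left turn, '-90' for a right turn."""
    n = len(polygon)
    turns = []
    for i in range(n):
        ax, ay = polygon[i - 1]
        bx, by = polygon[i]
        cx, cy = polygon[(i + 1) % n]
        # cross product of the incoming and outgoing edge vectors
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        turns.append('+90' if cross > 0 else '-90')
    return turns

# A unit square walked counter-clockwise is four left turns.
print(turn_sequence([(0, 0), (1, 0), (1, 1), (0, 1)]))
```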
Die Lösung eines solchen Problems besteht aus einem Algorithmus, der auf die Eingabe einer beliebigen Instanz dieser Menge ein Netzwerk entwirft, welches die gegebenen Beschränkungen einhält und gleichzeitig ein gegebenes Kriterium optimiert. In meiner Dissertation habe ich Algorithmen entwickelt, deren Netzwerke stets optimal sind oder nachweisbar nahe am Optimum liegen. Zusätzlich habe ich die Berechnungskomplexität dieser Probleme untersucht. Dabei wurden Probleme aus zwei weiten Gebieten betrachtet: Graphen und der Euklidische Ebene. Im Maximum-Edge-Disjoint-Paths-Problem besteht die Eingabe aus einem Graphen und einer Teilmenge von Knotenpaaren, die wir als Terminalpaare bezeichnen. Gesucht ist eine Menge von Pfaden, die Terminalpaare verbinden. Die Beschränkung ist, dass keine zwei Pfade einen gleichen inneren Knoten haben dürfen. Das Optimierungskriterium ist die Maximierung der Kardinalität dieser Menge. Im Hard-Capacitated-k-Facility-Location-Problem besteht die Eingabe aus einer Ganzzahl k und einem vollständigen Graphen, in welchem die Distanzen einer gegebenen Metrik unterliegen und in welchem jedem Knoten sowohl eine numerische Kapazität als auch ein Eröffnungskostenwert zugeschrieben sind. Gesucht ist eine Teilmenge von k Knoten, Facilities genannt, und eine Zuweisung aller Knoten, Clients genannt, zu den Facilities. Die Beschränkung ist, dass die Anzahl der Clients, die einer Facility zugewiesen sind, nicht deren Kapazität überschreiten darf. Das Optimierungskriterium ist die Minimierung der Gesamtkosten bestehend aus den Gesamteröffnungskosten der Facilities sowie der Gesamtdistanz zwischen den Clients und den ihnen zugewiesenen Facilities. Im Stabbing-Problem besteht die Eingabe aus einer Menge von achsenparallelen Rechtecken in der Ebene. Gesucht ist eine Menge von horizontalen Geradenstücken mit der Randbedingung, dass die linke und rechte Seite eines jeden Rechtecks von einem Geradenstück verbunden sind. Das Optimierungskriterium ist die Minimierung der Gesamtlänge aller Geradenstücke. Im k-Colored-Non-Crossing-Euclidean-Steiner-Forest-Problem besteht die Eingabe aus einer Ganzzahl k und einer endlichen Menge von Punkten in der Ebene, wobei jeder Punkt in einer von k Farben gefärbt ist. Gesucht ist für jede Farbe eine Zeichnung, in welcher alle Punkte der Farbe verbunden sind. Die Beschränkung ist, dass Zeichnungen verschiedener Farben sich nicht kreuzen dürfen. Das Optimierungskriterium ist die Minimierung des Gesamttintenverbrauchs, das heißt, der Gesamtlänge der Zeichnungen. Im Minimum-Rectilinear-Polygon-for-Given-Angle-Sequence-Problem besteht die Eingabe aus einer Folge von Links- (+90°) und Rechtsabbiegungen (-90°). Gesucht ist ein achsenparalleles Polygon, dessen Eckpunkte die gegebene Folge ergeben, wenn man das Polygon gegen den Uhrzeigersinn entlangläuft. Die Optimierungskriterien sind die Minimierung des Umfangs und der inneren Fläche des Polygons sowie der Größe des notwendigen Zeichenblattes, d.h., des kleinsten Rechteckes, das das Polygon einschließt. N2 - Given points in the plane, connect them using minimum ink. Though the task seems simple, it turns out to be very time-consuming. In fact, scientists believe that computers cannot efficiently solve it. So, do we have to give up? This book examines such NP-hard network-design problems, from connectivity problems in graphs to polygonal drawing problems on the plane. First, we observe why it is so hard to optimally solve these problems. Then, we move on to attack them anyway.
We develop fast algorithms that find approximate solutions that are very close to the optimal ones. Hence, connecting points with slightly more ink is not hard. KW - Euklidische Ebene KW - Algorithmus KW - Komplexität KW - NP-schweres Problem KW - Graph KW - approximation algorithm KW - hardness KW - optimization KW - graphs KW - network KW - Optimierungsproblem KW - Approximationsalgorithmus KW - complexity KW - Euclidean plane Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-154904 SN - 978-3-95826-076-4 (Print) SN - 978-3-95826-077-1 (Online) N1 - Parallel erschienen als Druckausgabe in Würzburg University Press, ISBN 978-3-95826-076-4, 28,90 EUR. PB - Würzburg University Press CY - Würzburg ET - 1. Auflage ER - TY - THES A1 - Wojtkowiak, Harald T1 - Planungssystem zur Steigerung der Autonomie von Kleinstsatelliten T1 - Planning system to increase the autonomy of small satellites N2 - Der Betrieb von Satelliten wird sich in Zukunft gravierend ändern. Die bisher ausgeübte konventionelle Vorgehensweise, bei der die Planung der vom Satelliten auszuführenden Aktivitäten sowie die Kontrolle hierüber ausschließlich vom Boden aus erfolgen, stößt bei heutigen Anwendungen an ihre Grenzen. Im schlimmsten Fall verhindert dieser Umstand sogar die Erschließung bisher ungenutzter Möglichkeiten. Der Gewinn eines Satelliten, sei es in Form wissenschaftlicher Daten oder der Vermarktung satellitengestützter Dienste, wird daher nicht optimal ausgeschöpft. Die Ursache für dieses Problem lässt sich im Grunde auf eine ausschlaggebende Tatsache zurückführen: Konventionelle Satelliten können ihr Verhalten, d.h. die Folge ihrer Tätigkeiten, nicht eigenständig anpassen. Stattdessen erstellt das Bedienpersonal am Boden - vor allem die Operatoren - mit Hilfe von Planungssoftware feste Ablaufpläne, die dann in Form von Kommandosequenzen von den Bodenstationen aus an die jeweiligen Satelliten hochgeladen werden. Dort werden die Befehle lediglich überprüft, interpretiert und strikt ausgeführt. Die Abarbeitung erfolgt linear. Situationsbedingte Änderungen, wie sie vergleichsweise bei der Codeausführung von Softwareprogrammen durch Kontrollkonstrukte, zum Beispiel Schleifen und Verzweigungen, üblich sind, sind typischerweise nicht vorgesehen. Der Operator ist daher die einzige Instanz, die das Verhalten des Satelliten mittels Kommandierung, per Upload, beeinflussen kann, und auch nur dann, wenn ein direkter Funkkontakt zwischen Satellit und Bodenstation besteht. Die dadurch möglichen Reaktionszeiten des Satelliten liegen bestenfalls bei einigen Sekunden, falls er sich im Wirkungsbereich der Bodenstation befindet. Außerhalb des Kontaktfensters kann sich die Zeitschranke, gegeben durch den Orbit und die aktuelle Position des Satelliten, von einigen Minuten bis hin zu einigen Stunden erstrecken. Die Signallaufzeiten der Funkübertragung verlängern die Reaktionszeiten um weitere Sekunden im erdnahen Bereich. Im interplanetaren Raum erstrecken sich die Zeitspannen aufgrund der immensen Entfernungen sogar auf mehrere Minuten. Dadurch bedingt liegt die derzeit technologisch mögliche bodengestützte Reaktionszeit von Satelliten bestenfalls im Bereich von einigen Sekunden. Diese Einschränkung stellt ein schweres Hindernis für neuartige Satellitenmissionen, bei denen insbesondere nichtdeterministische und kurzzeitige Phänomene (z.B. Blitze und Meteoreintritte in die Erdatmosphäre) Gegenstand der Beobachtungen sind, dar.
Die langen Reaktionszeiten des konventionellen Satellitenbetriebs verhindern die Realisierung solcher Missionen, da die verzögerte Reaktion erst erfolgt, nachdem das zu beobachtende Ereignis bereits abgeschlossen ist. Die vorliegende Dissertation zeigt eine Möglichkeit, das durch die langen Reaktionszeiten entstandene Problem zu lösen, auf. Im Zentrum des Lösungsansatzes steht dabei die Autonomie. Im Wesentlichen geht es dabei darum, den Satelliten mit der Fähigkeit auszustatten, sein Verhalten, d.h. die Folge seiner Tätigkeiten, eigenständig zu bestimmen bzw. zu ändern. Dadurch wird die direkte Abhängigkeit des Satelliten vom Operator bei Reaktionen aufgehoben. Im Grunde wird der Satellit in die Lage versetzt, sich selbst zu kommandieren. Die Idee der Autonomie wurde im Rahmen der zugrunde liegenden Forschungsarbeiten umgesetzt. Das Ergebnis ist ein autonomes Planungssystem. Dabei handelt es sich um ein Softwaresystem, mit dem sich autonomes Verhalten im Satelliten realisieren lässt. Es kann an unterschiedliche Satellitenmissionen angepasst werden. Ferner deckt es verschiedene Aspekte des autonomen Satellitenbetriebs, angefangen bei der generellen Entscheidungsfindung der Tätigkeiten, über die zeitliche Ablaufplanung unter Einbeziehung von Randbedingungen (z.B. Ressourcen) bis hin zur eigentlichen Ausführung, d.h. Kommandierung, ab. Das Planungssystem kommt als Anwendung in ASAP, einer autonomen Sensorplattform, zum Einsatz. Es ist ein optisches System und dient der Detektion von kurzzeitigen Phänomenen und Ereignissen in der Erdatmosphäre. Die Forschungsarbeiten an dem autonomen Planungssystem, an ASAP sowie an anderen zu diesen in Bezug stehenden Systemen wurden an der Professur für Raumfahrttechnik des Lehrstuhls Informatik VIII der Julius-Maximilians-Universität Würzburg durchgeführt. N2 - Satellite operation will change thoroughly in the future. The conventional approach performed so far, in which satellites are controlled exclusively from the ground, is reaching its limits in today's applications. This is due to the fact that the activities of the satellite are planned and controlled entirely by ground infrastructure. In the worst case, this circumstance prevents the exploitation of potential but so far unused opportunities. Thus the yield of satellites, be it in the form of scientific research data or the commercialization of satellite services, is not optimized. The cause of this problem can ultimately be traced back to one crucial matter: conventional satellites are not able to alter their behaviour, i.e. the order of their actions, themselves. Instead, schedules are created by ground staff - mainly operators - utilizing specialized planning software. The output is then transformed into command sequences and uploaded to the dedicated satellite via ground stations. On board, the commands are merely checked, interpreted and strictly executed. The flow is linear. Situational changes, as they are common in the code execution of software programs via control constructs such as loops and branches, are typically not possible. Thus the operator is the only instance able to change the behaviour of the satellite, via command upload, and only while a direct radio link between satellite and ground station exists. Reaction times are restricted accordingly. In the best case, i.e. when the satellite is within range of a ground station, they are limited to a few seconds. Outside the contact window, the time bound may extend from minutes to hours, depending on the orbit of the satellite and its current position on it.
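These bounds are compounded by the signal propagation delay itself, which the next sentences quantify; a quick back-of-the-envelope check with illustrative distances (not taken from the thesis):

```python
C = 299_792.458  # speed of light in km/s

def round_trip_s(distance_km):
    """Two-way signal travel time for a given link distance."""
    return 2 * distance_km / C

# A LEO ground-station pass (~2000 km slant range) vs. Mars at closest approach.
print(f"LEO:  {round_trip_s(2_000):.3f} s")               # ~0.013 s
print(f"Mars: {round_trip_s(54_600_000) / 60:.1f} min")   # ~6 min
```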
The signal propagation time of the radio link adds further reaction time, from a few seconds near Earth up to several minutes in interplanetary space due to the vast distances. In sum, the best achievable ground-based reaction time currently lies in the range of a few seconds. This restriction is a severe handicap for novel satellite missions that focus on non-deterministic and short-lived phenomena, e.g. lightning and meteor entries into the Earth's atmosphere. The long reaction times of conventional satellite operations prevent the realization of such missions, since the delayed reaction takes place only after the event to be observed has already ended. This dissertation shows a way to solve the problem caused by long reaction times. Autonomy lies at the centre of the approach. The key is to augment the satellite with the ability to alter its behaviour, i.e. the sequence of its actions, on its own and to deliberate about it. Thus, the satellite's direct dependency on the operator for reactions is lifted. In principle, the satellite becomes able to command itself. This idea of autonomy was implemented in the underlying research work. The output is an autonomous planning system: a software system which enables a satellite to behave autonomously and can be adapted to different types of satellite missions. Additionally, it covers different aspects of autonomous satellite operation, starting with the general decision making about activities, continuing with time scheduling under constraints (e.g. resources), and finishing with the actual execution, i.e. commanding. The autonomous planning system runs as one application of ASAP, an autonomous sensor platform. It is an optical system whose purpose is to detect short-lived phenomena and events in the Earth's atmosphere. The research work on the autonomous planning system, on ASAP and on other related systems has been carried out at the professorship for space technology, which is part of the Department for Computer Science VIII at the Julius-Maximilians-Universität Würzburg. KW - Planungssystem KW - Autonomie KW - Satellit KW - Entscheidungsfindung KW - Ablaufplanung KW - Planausführung KW - System KW - Missionsbetrieb KW - decision finding KW - scheduling KW - plan execution KW - system KW - mission operation Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-163569 ER - TY - THES A1 - Baier, Pablo A. T1 - Simulator for Minimally Invasive Vascular Interventions: Hardware and Software T1 - VR-Simulation für das Training von Herzkathetereingriffen: Hard- und Softwarelösung N2 - A complete simulation system is proposed that can be used as an educational tool by physicians for training basic skills of Minimally Invasive Vascular Interventions. In the first part, a surface model is developed to assemble arteries having a planar segmentation. It is based on Sweep Surfaces and can be extended to T- and Y-like bifurcations. A continuous force vector field is described, representing the interaction between the catheter and the surface. The computation time of the force field is almost unaffected when the resolution of the artery is increased. The mechanical properties of arteries play an essential role in the study of the circulatory system dynamics, which has been becoming increasingly important in the treatment of cardiovascular diseases. In Virtual Reality Simulators, it is crucial to have a tissue model that responds in real time.
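A minimal sketch of such a real-time tissue response, assuming unit masses, a single Hookean spring type, and explicit Euler integration (the thesis itself uses three spring types and three tissue layers, as the next sentences describe):

```python
import numpy as np

def step(pos, vel, rest, springs, k=50.0, damping=0.98, dt=1e-3):
    """One explicit-Euler update of a 2D mass-spring mesh.
    springs: list of node-index pairs (i, j); rest: their rest lengths."""
    force = np.zeros_like(pos)
    for (i, j), r0 in zip(springs, rest):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - r0) * d / length   # linear (Hookean) spring force
        force[i] += f
        force[j] -= f
    vel = (vel + dt * force) * damping       # unit masses, crude damping
    return pos + dt * vel, vel

# Two nodes stretched 10% beyond rest length relax back toward it.
pos = np.array([[0.0, 0.0], [1.1, 0.0]]); vel = np.zeros_like(pos)
for _ in range(2000):
    pos, vel = step(pos, vel, rest=[1.0], springs=[(0, 1)])
print(np.linalg.norm(pos[1] - pos[0]))       # ~1.0
```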
In this work, the arteries are discretized by a two-dimensional mesh and the nodes are connected by three kinds of linear springs. Three tissue layers (Intima, Media, Adventitia) are considered and, starting from the stretch-energy density, some of the elasticity tensor components are calculated. The physical model linearizes and homogenizes the material response, but it still captures the geometric nonlinearity. In general, if the arterial stretch varies by 1% or less, then the agreement between the linear and nonlinear models is trustworthy. In the last part, the physical model of the wire proposed by Konings is improved. As a result, a simpler and more stable method is obtained to calculate the equilibrium configuration of the wire. In addition, a geometrical method is developed to perform relaxations. It is particularly useful when the wire is hindered in the physical method because of the boundary conditions. The physical and the geometrical methods are merged, resulting in efficient relaxations. Tests show that the shape of the virtual wire agrees with the experiment. The proposed algorithm allows real-time execution, and the hardware required to assemble the simulator is low-cost. N2 - Es wird ein vollständiges Simulationssystem entwickelt, das von Ärzten als Lehrmittel zur Ausbildung grundlegender Fertigkeiten bei Herzkathetereingriffen eingesetzt werden kann. Im ersten Teil wird ein Oberflächenmodell zur Erstellung von Arterien mit planarer Segmentierung entwickelt. Im zweiten Teil werden die Arterien durch ein zweidimensionales Netz diskretisiert, die Knoten werden durch drei Arten linearer Federn verbunden und ausgehend von einer Dehnungsenergie-Dichte-Funktion werden einige Komponenten des Elastizitätstensors berechnet. Im letzten Teil wird das von anderen Autoren vorgeschlagene physikalische Modell des Drahtes verbessert und eine neue geometrische Methode entwickelt. Der vorgeschlagene Algorithmus ermöglicht Echtzeit-Ausführungen. Die Hardware des Simulators hat geringe Herstellungskosten. T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 15 KW - Computersimulation KW - Simulator KW - Arterie KW - Elastizitätstensor KW - Herzkatheter KW - Minimally invasive vascular intervention KW - Wire relaxation KW - Artery KW - Elasticity tensor KW - Stiffness KW - educational tool KW - Elastizitätstensor KW - Herzkathetereingriff KW - Software KW - Hardware Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-161190 SN - 978-3-945459-22-5 ER - TY - JOUR A1 - Zimmerer, Chris A1 - Fischbach, Martin A1 - Latoschik, Marc Erich T1 - Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks JF - Multimodal Technologies and Interaction N2 - Semantic fusion is a central requirement of many multimodal interfaces. Procedural methods like finite-state transducers and augmented transition networks have proven to be beneficial to implement semantic fusion. They are compliant with rapid development cycles that are common for the development of user interfaces, in contrast to machine-learning approaches that require time-costly training and optimization. We identify seven fundamental requirements for the implementation of semantic fusion: Action derivation, continuous feedback, context-sensitivity, temporal relation support, access to the interaction context, as well as the support of chronologically unsorted and probabilistic input.
A subsequent analysis reveals, however, that there is currently no solution for fulfilling the latter two requirements. As the main contribution of this article, we thus present the Concurrent Cursor concept to compensate for these shortcomings. In addition, we showcase a reference implementation, the Concurrent Augmented Transition Network (cATN), that validates the concept’s feasibility in a series of proof-of-concept demonstrations as well as through a comparative benchmark. The cATN fulfills all identified requirements and fills this gap among previous solutions. It supports the rapid prototyping of multimodal interfaces by means of five concrete traits: Its declarative nature, the recursiveness of the underlying transition network, the network abstraction constructs of its description language, the utilized semantic queries, and an abstraction layer for lexical information. Our reference implementation was and is used in various student projects, theses, as well as master-level courses. It is openly available and showcases that non-experts can effectively implement multimodal interfaces, even for non-trivial applications in mixed and virtual reality. KW - multimodal fusion KW - multimodal interface KW - semantic fusion KW - procedural fusion methods KW - natural interfaces KW - human-computer interaction Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-197573 SN - 2414-4088 VL - 2 IS - 4 ER - TY - THES A1 - Budig, Benedikt T1 - Extracting Spatial Information from Historical Maps: Algorithms and Interaction T1 - Extraktion räumlicher Informationen aus historischen Landkarten: Algorithmen und Interaktion N2 - Historical maps are fascinating documents and a valuable source of information for scientists of various disciplines. Many of these maps are available as scanned bitmap images, but in order to make them searchable in useful ways, a structured representation of the contained information is desirable. This book deals with the extraction of spatial information from historical maps. This cannot be expected to be solved fully automatically (since it involves difficult semantics), but is also too tedious to be done manually at scale. The methodology used in this book combines the strengths of both computers and humans: it describes efficient algorithms to largely automate information extraction tasks and pairs these algorithms with smart user interactions to handle what is not understood by the algorithm. The effectiveness of this approach is shown for various kinds of spatial documents from the 16th to the early 20th century. N2 - Historische Landkarten sind faszinierende Dokumente und eine wertvolle Informationsquelle für Wissenschaftler verschiedener Fächer. Viele dieser Karten liegen als gescannte Bitmap-Bilder vor, aber um sie auf nützliche Art durchsuchbar zu machen, ist eine strukturierte Repräsentation der enthaltenen Informationen wünschenswert. Dieses Buch beschäftigt sich mit der Extraktion räumlicher Informationen aus historischen Landkarten. Man kann nicht erwarten, dass dies vollautomatisch geschieht (da komplizierte Semantik involviert ist), aber es ist auch zu aufwändig, um im großen Stil manuell durchgeführt zu werden. Die Methodik, die in diesem Buch verwendet wird, kombiniert die Stärken von Computern und Menschen: Es werden effiziente Algorithmen beschrieben, die Extraktionsaufgaben weitgehend automatisieren, und dazu passende Nutzerinteraktionen entworfen, mit denen Fälle gelöst werden, die die Algorithmen nicht verstehen.
Die Effektivität dieses Ansatzes wird anhand verschiedener räumlicher Dokumente aus dem 16. bis frühen 20. Jahrhundert gezeigt. KW - Karte KW - Effizienter Algorithmus KW - Interaktion KW - Information Extraction KW - Smart User Interaction KW - Historical Maps KW - Itineraries KW - Deep Georeferencing KW - Benutzerinteraktion KW - Historische Landkarten KW - Itinerare KW - Georeferenzierung KW - Historische Karte KW - Raumdaten Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-160955 SN - 978-3-95826-092-4 SN - 978-3-95826-093-1 N1 - Parallel erschienen als Druckausgabe in Würzburg University Press, ISBN 978-3-95826-092-4, 32,90 Euro. PB - Würzburg University Press CY - Würzburg ET - 1. Auflage ER - TY - GEN A1 - Funken, Matthias A1 - Tscherner, Michael T1 - Jahresbericht 2017 des Rechenzentrums der Universität Würzburg T1 - Annual Report 2017 of the Computer Center, University of Wuerzburg N2 - Eine Übersicht über die Aktivitäten des Rechenzentrums im Jahr 2017. T3 - Jahresbericht des Rechenzentrums der Universität Würzburg - 2017 KW - Julius-Maximilians-Universität Würzburg KW - RZUW KW - Jahresbericht KW - Rechenzentrum KW - Computer Center University of Wuerzburg KW - annual report Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-168537 UR - https://www.rz.uni-wuerzburg.de/wir/publikationen/ ET - 1. Auflage ER - TY - JOUR A1 - Nagler, Matthias A1 - Nägele, Thomas A1 - Gilli, Christian A1 - Fragner, Lena A1 - Korte, Arthur A1 - Platzer, Alexander A1 - Farlow, Ashley A1 - Nordborg, Magnus A1 - Weckwerth, Wolfram T1 - Eco-Metabolomics and Metabolic Modeling: Making the Leap From Model Systems in the Lab to Native Populations in the Field JF - Frontiers in Plant Science N2 - Experimental high-throughput analysis of molecular networks is a central approach to characterize the adaptation of plant metabolism to the environment. However, recent studies have demonstrated that it is hardly possible to predict in situ metabolic phenotypes from experiments under controlled conditions, such as growth chambers or greenhouses. This is particularly due to the high molecular variance of in situ samples induced by environmental fluctuations. An approach of functional metabolome interpretation of field samples would be desirable in order to be able to identify and trace back the impact of environmental changes on plant metabolism. To test the applicability of metabolomics studies for a characterization of plant populations in the field, we have identified and analyzed in situ samples of nearby grown natural populations of Arabidopsis thaliana in Austria. A. thaliana is the primary molecular biological model system in plant biology with one of the best functionally annotated genomes representing a reference system for all other plant genome projects. The genomes of these novel natural populations were sequenced and phylogenetically compared to a comprehensive genome database of A. thaliana ecotypes. Experimental results on primary and secondary metabolite profiling and genotypic variation were functionally integrated by a data mining strategy, which combines statistical output of metabolomics data with genome-derived biochemical pathway reconstruction and metabolic modeling.
Correlations of biochemical model predictions and population-specific genetic variation indicated varying strategies of metabolic regulation on a population level which enabled the direct comparison, differentiation, and prediction of metabolic adaptation of the same species to different habitats. These differences were most pronounced in organic and amino acid metabolism as well as at the interface of primary and secondary metabolism and allowed for the direct classification of population-specific metabolic phenotypes within geographically contiguous sampling sites. KW - eco-metabolomics KW - in situ analysis KW - metabolomics KW - metabolic modeling KW - SNP KW - natural variation KW - Jacobian matrix KW - green systems biology Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-189560 SN - 1664-462X VL - 9 IS - 1556 ER - TY - JOUR A1 - Petschke, Danny A1 - Staab, Torsten E.M. T1 - DLTPulseGenerator: a library for the simulation of lifetime spectra based on detector-output pulses JF - SoftwareX N2 - The quantitative analysis of lifetime spectra relevant in both life and materials sciences presents one of the ill-posed inverse problems and, hence, leads to the most stringent requirements on the hardware specifications and the analysis algorithms. Here we present DLTPulseGenerator, a library written in native C++ 11, which provides a simulation of lifetime spectra according to the measurement setup. The simulation is based on pairs of non-TTL detector output-pulses. Those pulses require the constant fraction discrimination (CFD) principle for the determination of the exact timing signal and, thus, for the calculation of the time difference, i.e. the lifetime. To verify the functionality, simulation results were compared to experimentally obtained data using Positron Annihilation Lifetime Spectroscopy (PALS) on pure tin. KW - lifetime spectroscopy KW - signal processing KW - pulse simulation Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-176883 VL - 7 ER - TY - THES A1 - Ostermayer, Ludwig T1 - Integration of Prolog and Java with the Connector Architecture CAPJa T1 - Integration von Prolog und Java mit Hilfe der Connector Architecture CAPJa N2 - Modern software is often realized as a modular combination of subsystems for, e.g., knowledge management, visualization, verification, or the interaction with users. As a result, software libraries from possibly different programming languages have to work together. The case is even more complex if different programming paradigms have to be combined. This type of diversification of programming languages and paradigms in just one software application can only be mastered by mechanisms for a seamless integration of the involved programming languages. However, the integration of the common logic programming language Prolog and the popular object-oriented programming language Java is complicated by various interoperability problems which stem on the one hand from the paradigmatic gap between the programming languages, and on the other hand, from the diversity of the available Prolog systems. The subject of the thesis is the investigation of novel mechanisms for the integration of logic programming in Prolog and object-oriented programming in Java. We are particularly interested in an object-oriented, uniform approach which is not specific to just one Prolog system. Therefore, we have first identified several important criteria for the seamless integration of Prolog and Java from the object-oriented perspective.
The main contribution of the thesis is a novel integration framework called the Connector Architecture for Prolog and Java (CAPJa). The framework is completely implemented in Java and imposes no modifications to the Java Virtual Machine or Prolog. CAPJa provides a semi-automated mechanism for the integration of Prolog predicates into Java. For compact, readable, and object-oriented queries to Prolog, CAPJa exploits lambda expressions with conditional and relational operators in Java. The communication between Java and Prolog is based on a fully automated mapping of Java objects to Prolog terms, and vice versa. In Java, an extensible system of gateways provides connectivity with various Prolog systems and, moreover, makes any connected Prolog system easily interchangeable, without major adaptation in Java. N2 - Moderne Software ist oft modular zusammengesetzt aus Subsystemen zur Wissensverwaltung, Visualisierung, Verifikation oder Benutzerinteraktion. Dabei müssen Programmbibliotheken aus möglicherweise verschiedenen Programmiersprachen miteinander zusammenarbeiten. Noch komplizierter ist der Fall, wenn auch noch verschiedene Programmierparadigmen miteinander kombiniert werden. Diese Art der Diversifikation an Programmiersprachen und -paradigmen in nur einer Software kann nur von nahtlosen Integrationsmechanismen für die beteiligten Programmiersprachen gemeistert werden. Gerade die Einbindung der gängigen Logikprogrammiersprache Prolog und der populären objektorientierten Programmiersprache Java wird durch zahlreiche Kompatibilitätsprobleme erschwert, welche auf der einen Seite von paradigmatischen Unterschieden der beiden Programmiersprachen herrühren und auf der anderen Seite von der Vielfalt der erhältlichen Prologimplementierungen. Gegenstand dieser Arbeit ist die Untersuchung von neuartigen Mechanismen für die Zusammenführung von Logikprogrammierung in Prolog und objektorientierter Programmierung in Java. Besonders interessiert uns dabei ein objektorientierter, einheitlicher Ansatz, der nicht auf eine konkrete Prologimplementierung festgelegt ist. Aus diesem Grund haben wir zunächst wichtige Kriterien für die nahtlose Integration von Prolog und Java aus der objektorientierten Sicht identifiziert. Der Hauptbeitrag dieser Arbeit ist ein neuartiges Integrationssystem, welches Connector Architecture for Prolog and Java (CAPJa) heißt. Das System ist komplett in Java implementiert und benötigt keine Anpassungen der Java Virtual Machine oder Prolog. CAPJa stellt einen halbautomatischen Mechanismus zur Vernetzung von Prolog Prädikaten mit Java zur Verfügung. Für kompakte, lesbare und objektorientierte Anfragen an Prolog nutzt CAPJa Lambdaausdrücke mit logischen und relationalen Operatoren in Java. Die Kommunikation zwischen Java und Prolog basiert auf einer automatisierten Abbildung von Java Objekten auf Prolog Terme, und umgekehrt. In Java bietet ein erweiterbares System von Schnittstellen Konnektivität zu einer Vielzahl an Prologimplementierungen und macht darüber hinaus jede verbundene Prologimplementierung einfach austauschbar, und zwar ohne größere Anpassung in Java.
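CAPJa itself is implemented in Java; as a language-neutral illustration of the object-to-term mapping it automates, the following Python toy recursively maps plain objects to Prolog-style compound terms (this is our own sketch, not the CAPJa API):

```python
def to_term(obj):
    """Map a plain object to a Prolog-style compound term string, recursively:
    Point(x=1, y=2) -> point(1, 2)."""
    if isinstance(obj, (int, float)):
        return str(obj)
    if isinstance(obj, str):
        return f"'{obj}'"
    name = type(obj).__name__.lower()            # class name becomes the functor
    args = ", ".join(to_term(v) for v in vars(obj).values())
    return f"{name}({args})"

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

class Edge:
    def __init__(self, start, end):
        self.start, self.end = start, end

print(to_term(Edge(Point(1, 2), Point(3, 4))))   # edge(point(1, 2), point(3, 4))
```

As the abstract notes, the framework generates the inverse direction, from Prolog terms back to Java objects, automatically as well.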
KW - Logische Programmierung KW - Objektorientierte Programmierung KW - PROLOG KW - Java KW - Multi-Paradigm Programming KW - Logic Programming KW - Object-Oriented Programming KW - Multi-Paradigm Programming Framework Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-150713 ER - TY - THES A1 - Aschenbrenner, Doris T1 - Human Robot Interaction Concepts for Human Supervisory Control and Telemaintenance Applications in an Industry 4.0 Environment T1 - Mensch-Roboter-Interaktionskonzepte für Fernsteuerungs- und Fernwartungsanwendungen in einer Industrie 4.0 Umgebung N2 - While the teleoperation of technically highly sophisticated systems has long been a wide field of research, especially for space and robotics applications, the automation industry has not yet benefited from its results. Beyond the established fields of application, production lines with industrial robots and the surrounding plant components also need to be remotely accessible. This is especially critical for maintenance or if an unexpected problem cannot be solved by the local specialists. Special machine manufacturers, especially robotics companies, sell their technology worldwide. Some factories, for example in emerging economies, lack qualified personnel for repair and maintenance tasks. When a severe failure occurs, an expert of the manufacturer needs to fly there, which leads to long downtimes of the machine or even the whole production line. With the development of data networks, a large part of this travel can be omitted, if appropriate teleoperation equipment is provided. This thesis describes the development of a telemaintenance system, which was established in an active production line for research purposes. The customer production site of Braun in Marktheidenfeld, a factory which belongs to Procter & Gamble, consists of a six-axis Cartesian industrial robot by KUKA Industries, a two-component injection molding system and an assembly unit. The plant produces plastic parts for electric toothbrushes. In the research projects "MainTelRob" and "Bayern.digital", during which this plant was utilised, the Zentrum für Telematik e.V. (ZfT) and its project partners develop novel technical approaches and procedures for modern telemaintenance. The term "telemaintenance" hereby refers to the integration of computer science and communication technologies into the maintenance strategy. It is particularly interesting for high-grade capital-intensive goods like industrial robots. Typical telemaintenance tasks are for example the analysis of a robot failure or difficult repair operations. The service department of KUKA Industries is responsible for customers distributed worldwide, some of whom own more than one robot. Currently such tasks are handled via phone support and service staff who travel abroad. The company wants to expand its service activities to telemaintenance but struggles with the high demands of teleoperation, especially regarding security infrastructure. In addition, the facility in Marktheidenfeld has to keep up with the high international standards of Procter & Gamble and wants to minimize machine downtimes. Like 71.6 % of all German companies, P&G sees a huge potential in early information on its production system, but complains about the insufficient quality and the lack of timeliness of the data. The main research focus of this work lies on the human-machine interface for all human tasks in a telemaintenance setup.
This thesis presents the author's own work on the use of mobile devices in the context of maintenance, describes new tools for asynchronous remote analysis and puts all parts together in an integrated telemaintenance infrastructure. With the help of Augmented Reality, the user performance and satisfaction could be raised. Special regard is given to the situation awareness of the remote expert, realized by different camera viewpoints. In detail the work consists of: - Support of maintenance tasks with a mobile device - Development and evaluation of a context-aware inspection tool - Comparison of a new touch-based mobile robot programming device to the former teach pendant - Study on Augmented Reality support for repair tasks with a mobile device - Condition monitoring for a specific plant with an industrial robot - Human-computer interaction for remote analysis of a single plant cycle - A big data analysis tool for a multitude of cycles and similar plants - 3D process visualization for a specific plant cycle with additional virtual information - Network architecture in hardware, software and network infrastructure - Mobile device computer-supported collaborative work for telemaintenance - Motor exchange telemaintenance example in a running production environment - Augmented reality supported remote plant visualization for better situation awareness N2 - Die Fernsteuerung technisch hochentwickelter Systeme ist seit vielen Jahren ein breites Forschungsfeld, vor allem im Bereich von Weltraum- und Robotikanwendungen. Allerdings hat die Automatisierungsindustrie bislang zu wenig von den Ergebnissen dieses Forschungsgebiets profitiert. Auch Fertigungslinien mit Industrierobotern und weitere Anlagenkomponenten müssen über die Ferne zugänglich sein, besonders bei Wartungsfällen oder wenn unvorhergesehene Probleme nicht von den lokalen Spezialisten gelöst werden können. Hersteller von Sondermaschinen wie Robotikfirmen verkaufen ihre Technologie weltweit. Kunden dieser Firmen besitzen beispielsweise Fabriken in Schwellenländern, wo es an qualifiziertem Personal für Reparatur und Wartung mangelt. Wenn ein ernster Fehler auftaucht, muss daher ein Experte des Sondermaschinenherstellers zum Kunden fliegen. Das führt zu langen Stillstandzeiten der Maschine. Durch die Weiterentwicklung der Datennetze könnte ein großer Teil dieser Reisen unterbleiben, wenn eine passende Fernwartungsinfrastruktur vorliegen würde. Diese Arbeit beschreibt die Entwicklung eines Fernwartungssystems, welches in einer aktiven Produktionsumgebung für Forschungszwecke eingerichtet wurde. Die Fertigungsanlage des Kunden wurde von Procter & Gamble in Marktheidenfeld zur Verfügung gestellt und besteht aus einem sechsachsigen, kartesischen Industrieroboter von KUKA Industries, einer Zweikomponentenspritzgussanlage und einer Montageeinheit. Die Anlage produziert Plastikteile für elektrische Zahnbürsten. Diese Anlage wurde im Rahmen der Forschungsprojekte "MainTelRob" und "Bayern.digital" verwendet, in denen das Zentrum für Telematik e.V. (ZfT) und seine Projektpartner neue Ansätze und Prozeduren für moderne Fernwartungs-Technologien entwickeln. Fernwartung bedeutet für uns die umfassende Integration von Informatik und Kommunikationstechnologien in der Wartungsstrategie. Das ist vor allem für hochentwickelte, kapitalintensive Güter wie Industrieroboter interessant. Typische Fernwartungsaufgaben sind beispielsweise die Analyse von Roboterfehlermeldungen oder schwierige Reparaturmaßnahmen.
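As a hedged illustration of the kind of cycle analysis listed above (condition monitoring and remote analysis of plant cycles), the sketch below flags production cycles whose duration deviates strongly from a rolling baseline; the data, window size, and threshold are invented, not taken from the thesis.

```python
import numpy as np

def flag_anomalies(cycle_times, window=50, z=3.0):
    """Flag cycles whose duration deviates more than z standard deviations
    from the rolling mean of the preceding `window` cycles."""
    flags = []
    for i in range(window, len(cycle_times)):
        ref = cycle_times[i - window:i]
        flags.append(abs(cycle_times[i] - ref.mean()) > z * ref.std())
    return np.array(flags)

times = np.random.default_rng(2).normal(12.0, 0.1, 500)  # seconds per cycle
times[400] = 14.5                                        # injected fault
print(np.flatnonzero(flag_anomalies(times)) + 50)        # -> [400]
```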
Die Service-Abteilung von KUKA Industries ist für die weltweit verteilten Kunden zuständig, die teilweise auch mehr als einen Roboter besitzen. Aktuell werden derartige Aufgaben per Telefonauskunft oder durch mobile Servicekräfte, die zum Kunden reisen, erledigt. Will man diese komplizierten Aufgaben durch Fernwartung ersetzen, um die Serviceaktivitäten auszuweiten, muss man mit den hohen Anforderungen von Fernsteuerung zurechtkommen, besonders in Bezug auf Security Infrastruktur. Eine derartige umfassende Herangehensweise an Fernwartung bietet aber auch einen lokalen Mehrwert beim Kunden: Die Fabrik in Marktheidenfeld muss den hohen internationalen Standards von Procter & Gamble folgen und will daher die Stillstandzeiten weiter verringern. Wie 71,6 Prozent aller deutschen Unternehmen sieht auch P&G Marktheidenfeld ein großes Potential für frühe Informationen aus ihrem Produktionssystem, hat aber aktuell noch Probleme mit der Aktualität und Qualität dieser Daten. Der Hauptfokus der hier vorgestellten Forschung liegt auf der Mensch-Maschine-Schnittstelle für alle Aufgaben eines umfassenden Fernwartungskontextes. Diese Arbeit stellt die eigenen Arbeiten zur Verwendung mobiler Endgeräte im Kontext der Wartung und neue Softwarewerkzeuge für die asynchrone Fernanalyse vor und integriert diese Aspekte in eine Fernwartungsinfrastruktur. In diesem Kontext kann gezeigt werden, dass der Einsatz von Augmented Reality die Nutzerleistung und gleichzeitig die Zufriedenheit steigern kann. Dabei wird auf das sogenannte "situative Bewusstsein" des entfernten Experten besonders Wert gelegt. Im Detail besteht die Arbeit aus: - Unterstützung von Wartungsaufgaben mit mobilen Endgeräten - Entwicklung und Evaluation kontextsensitiver Inspektionssoftware - Vergleich der touch-basierten Roboterprogrammierung mit der Vorgängerversion des Programmierhandgeräts - Studien über die Unterstützung von Reparaturaufgaben durch Augmented Reality - Zustandsüberwachung für eine spezielle Anlage mit Industrieroboter - Mensch-Maschine Interaktion für die Teleanalyse eines Produktionszyklus - Grafische Big Data Analyse einer Vielzahl von Produktionszyklen - 3D Prozess Visualisierung und Anreicherung mit virtuellen Informationen - Hardware, Software und Netzwerkarchitektur für die Fernwartung - Computerunterstützte Zusammenarbeit mit Verwendung mobiler Endgeräte für die Fernwartung - Fernwartungsbeispiel: Durchführung eines Motortauschs in der laufenden Produktion - Augmented-Reality-unterstützte Visualisierung des Anlagenkontextes für die Steigerung des situativen Bewusstseins T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 13 KW - Fernwartung KW - Robotik KW - Mensch-Maschine-Schnittstelle KW - Erweiterte Realität KW - Situation Awareness KW - Industrie 4.0 KW - Industrial internet Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-150520 SN - 978-3-945459-18-8 ER - TY - THES A1 - Houshiar, Hamidreza T1 - Documentation and mapping with 3D point cloud processing T1 - Dokumentation und Kartierung mittels 3D-Punktwolkenverarbeitung N2 - 3D point clouds are a de facto standard for 3D documentation and modelling. The advances in laser scanning technology broaden the usability and access to 3D measurement systems. 3D point clouds are used in many disciplines such as robotics, 3D modelling, archeology and surveying. Scanners are able to acquire up to a million points per second to represent the environment with a dense point cloud.
This represents the captured environment with a very high degree of detail. The combination of laser scanning technology with photography adds color information to the point clouds. Thus the environment is represented more realistically. Full 3D models of environments, without any occlusion, require multiple scans. Merging point clouds is a challenging process. This thesis presents methods for point cloud registration based on the panorama images generated from the scans. Image representation of point clouds introduces 2D image processing methods to 3D point clouds. Several projection methods for the generation of panorama maps of point clouds are presented in this thesis. Additionally, methods for point cloud reduction and compression based on the panorama maps are proposed. Due to the large amounts of data generated by 3D measurement systems, these methods are necessary to improve the point cloud processing, transmission and archiving. This thesis introduces point cloud processing methods as a novel framework for the digitisation of archeological excavations. The framework replaces the conventional documentation methods for excavation sites. It employs point clouds for the generation of the digital documentation of an excavation with the help of an archeologist on-site. The 3D point cloud is used not only for data representation but also for analysis and knowledge generation. Finally, this thesis presents an autonomous indoor mobile mapping system. The mapping system focuses on the sensor placement planning method. Capturing a complete environment requires several scans. The sensor placement planning method determines the minimum number of scans required to digitise large environments. Combining this method with a navigation system on a mobile robot platform enables it to acquire data fully autonomously. This thesis introduces a novel hole detection method for point clouds to detect obscured parts of a captured environment. The sensor placement planning method selects the next scan position with the most coverage of the obscured environment. This reduces the required number of scans. The navigation system on the robot platform consists of path planning, path following and obstacle avoidance. This guarantees the safe navigation of the mobile robot platform between the scan positions. The sensor placement planning method is designed as a stand-alone process that could be used with a mobile robot platform for autonomous mapping of an environment or as an assistant tool for the surveyor on scanning projects. N2 - 3D-Punktwolken sind der de facto Standard bei der Dokumentation und Modellierung in 3D. Die Fortschritte in der Laserscanningtechnologie erweitern die Verwendbarkeit und die Verfügbarkeit von 3D-Messsystemen. 3D-Punktwolken werden in vielen Disziplinen verwendet, wie z.B. in der Robotik, 3D-Modellierung, Archäologie und Vermessung. Scanner sind in der Lage, bis zu einer Million Punkte pro Sekunde zu erfassen, um die Umgebung mit einer dichten Punktwolke abzubilden und mit einem hohen Detaillierungsgrad darzustellen. Die Kombination der Laserscanningtechnologie mit Methoden der Photogrammetrie fügt den Punktwolken Farbinformationen hinzu. Somit wird die Umgebung realistischer dargestellt. Vollständige 3D-Modelle der Umgebung ohne Verschattungen benötigen mehrere Scans. Punktwolken zusammenzufügen ist eine anspruchsvolle Aufgabe. Diese Arbeit stellt Methoden zur Punktwolkenregistrierung basierend auf aus den Scans erzeugten Panoramabildern vor.
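One common projection of this kind maps each point's azimuth and elevation to pixel coordinates, yielding an equirectangular range image; a minimal sketch follows (resolution and details are our own choices, the thesis itself presents several projection methods):

```python
import numpy as np

def equirectangular(points, width=720, height=360):
    """Project scanner-centred 3D points to an equirectangular range image."""
    x, y, z = points.T
    r = np.linalg.norm(points, axis=1)
    theta = np.arctan2(y, x)                  # azimuth   in (-pi, pi]
    phi = np.arcsin(z / r)                    # elevation in [-pi/2, pi/2]
    u = ((theta + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    v = ((np.pi / 2 - phi) / np.pi * (height - 1)).astype(int)
    pano = np.zeros((height, width))
    pano[v, u] = r      # last point written wins; a real map would keep the nearest
    return pano

cloud = np.random.default_rng(3).normal(size=(10000, 3))
print(equirectangular(cloud).shape)           # (360, 720)
```

Once a scan is in this 2D form, standard image processing, compression, and feature matching for registration become directly applicable.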
Die Darstellung einer Punktwolke als Bild bringt Methoden der 2D-Bildverarbeitung an 3D-Punktwolken heran. Der Autor stellt mehrere Projektionsmethoden zur Erstellung von Panoramabildern aus 3D-Punktwolken vor. Außerdem werden Methoden zur Punktwolkenreduzierung und -kompression basierend auf diesen Panoramabildern vorgeschlagen. Aufgrund der großen Datenmenge, die von 3D-Messsystemen erzeugt wird, sind diese Methoden notwendig, um die Punktwolkenverarbeitung, -übertragung und -archivierung zu verbessern. Diese Arbeit präsentiert Methoden der Punktwolkenverarbeitung als neuartige Ablaufstruktur für die Digitalisierung von archäologischen Ausgrabungen. Durch diesen Ablauf werden konventionelle Methoden auf Ausgrabungsstätten ersetzt. Er verwendet Punktwolken für die Erzeugung der digitalen Dokumentation einer Ausgrabung mithilfe eines Archäologen vor Ort. Die 3D-Punktwolke kommt nicht nur für die Anzeige der Daten, sondern auch für die Analyse und Wissensgenerierung zum Einsatz. Schließlich stellt diese Arbeit ein autonomes Indoor-Mobile-Mapping-System mit Fokus auf der Positionsplanung des Messgeräts vor. Die Positionsplanung bestimmt die minimal benötigte Anzahl an Scans, um großflächige Umgebungen zu digitalisieren. Kombiniert mit einem Navigationssystem auf einer mobilen Roboterplattform ermöglicht diese Methode die vollautonome Datenerfassung. Diese Arbeit stellt eine neuartige Erkennungsmethode für Lücken in Punktwolken vor, um verdeckte Bereiche der erfassten Umgebung zu bestimmen. Die Positionsplanung bestimmt als nächste Scanposition diejenige mit der größten Abdeckung der verdeckten Umgebung. Das Navigationssystem des Roboters besteht aus der Pfadplanung, der Pfadverfolgung und einer Hindernisvermeidung, um eine sichere Fortbewegung der mobilen Roboterplattform zwischen den Scanpositionen zu garantieren. Die Positionsplanungsmethode wurde als eigenständiges Verfahren entworfen, das auf einer mobilen Roboterplattform zur autonomen Kartierung einer Umgebung zum Einsatz kommen oder einem Vermesser bei einem Scanprojekt als Unterstützung dienen kann. T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 12 KW - 3D Punktwolke KW - Robotik KW - Registrierung KW - 3D Pointcloud KW - Feature Based Registration KW - Compression KW - Computer Vision KW - Robotics KW - Panorama Images Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-144493 SN - 978-3-945459-14-0 ER - TY - JOUR A1 - Kaltdorf, Kristin Verena A1 - Schulze, Katja A1 - Helmprobst, Frederik A1 - Kollmannsberger, Philip A1 - Dandekar, Thomas A1 - Stigloher, Christian T1 - Fiji macro 3D ART VeSElecT: 3D automated reconstruction tool for vesicle structures of electron tomograms JF - PLoS Computational Biology N2 - Automatic image reconstruction is critical to cope with steadily increasing data from advanced microscopy. We describe here the Fiji macro 3D ART VeSElecT which we developed to study synaptic vesicles in electron tomograms. We apply this tool to quantify vesicle properties (i) in embryonic Danio rerio 4 and 8 days post fertilization (dpf) and (ii) to compare Caenorhabditis elegans N2 neuromuscular junctions (NMJ) wild-type and its septin mutant (unc-59(e261)). We demonstrate development-specific and mutant-specific changes in synaptic vesicle pools in both models. We confirm the functionality of our macro by applying our 3D ART VeSElecT on zebrafish NMJ showing smaller vesicles in 8 dpf embryos than in 4 dpf embryos, which was validated by manual reconstruction of the vesicle pool.
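The measurement core of such a tool can be sketched in a few lines: label connected voxel regions above a threshold and report their voxel volume plus the diameter of the volume-equivalent sphere. This is a hedged, synthetic-data illustration; the macro's actual pipeline, with preprocessing and optional manual revision, is richer.

```python
import numpy as np
from scipy import ndimage

def measure_vesicles(volume, threshold):
    """Label connected voxel regions above threshold and report each region's
    volume (in voxels) and the diameter of a sphere of equal volume."""
    labels, n = ndimage.label(volume > threshold)
    sizes = np.bincount(labels.ravel())[1:]          # skip background label 0
    diameters = 2 * (3 * sizes / (4 * np.pi)) ** (1 / 3)
    return sizes, diameters

img = np.zeros((40, 40, 40))
img[10:14, 10:14, 10:14] = 1.0                       # one synthetic "vesicle"
sizes, diam = measure_vesicles(img, 0.5)
print(sizes, diam)                                   # [64] and its equivalent diameter
```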
Furthermore, we analyze the impact of the C. elegans septin mutant unc-59(e261) on vesicle pool formation and vesicle size. Automated vesicle registration and characterization were implemented in Fiji as two macros (registration and measurement). In particular, this flexible arrangement allows reducing false positives through an optional manual revision step. Preprocessing and contrast enhancement work on image-stacks of 1 nm/pixel in x and y direction. Semi-automated cell selection was integrated. 3D ART VeSElecT removes interfering components, detects vesicles by 3D segmentation and calculates vesicle volume and diameter (spherical approximation, inner/outer diameter). Results are collected in color using the RoiManager plugin, including the possibility of manual removal of non-matching confounder vesicles. Detailed evaluation considered performance (detected vesicles) and specificity (true vesicles) as well as precision and recall. We furthermore show gain in segmentation and morphological filtering compared to learning-based methods and a large time gain compared to manual segmentation. 3D ART VeSElecT shows small error rates and can be up to 68 times faster than manual annotation. Both automatic and semi-automatic modes are explained, including a tutorial. KW - Biology KW - Vesicles KW - Caenorhabditis elegans KW - Zebrafish KW - Septins KW - Synaptic vesicles KW - Neuromuscular junctions KW - Computer software KW - Synapses Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-172112 VL - 13 IS - 1 ER - TY - GEN T1 - Jahresbericht 2016 des Rechenzentrums der Universität Würzburg T1 - Annual Report 2016 of the Computer Center, University of Wuerzburg N2 - Das Dokument umfasst eine jährliche Zusammenfassung der Aktivitäten des Rechenzentrums als zentraler IT-Dienstleister der Universität Würzburg. T3 - Jahresbericht des Rechenzentrums der Universität Würzburg - 2016 KW - Jahresbericht KW - Julius-Maximilians-Universität Würzburg KW - Rechenzentrum KW - annual report KW - Computer Center University of Wuerzburg KW - RZUW Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-153558 UR - https://www.rz.uni-wuerzburg.de/wir/publikationen/ ET - 1. Auflage ER - TY - JOUR A1 - von Mammen, Sebastian Albrecht A1 - Wagner, Daniel A1 - Knote, Andreas A1 - Taskin, Umut T1 - Interactive simulations of biohybrid systems JF - Frontiers in Robotics and AI N2 - In this article, we present approaches to interactive simulations of biohybrid systems. These simulations comprise two major computational components: (1) agent-based developmental models that retrace organismal growth and unfolding of technical scaffoldings and (2) interfaces to explore these models interactively. Simulations of biohybrid systems allow us to fast forward and experience their evolution over time based on our design decisions involving the choice, configuration and initial states of the deployed biological and robotic actors as well as their interplay with the environment. We briefly introduce the concept of swarm grammars, an agent-based extension of L-systems for retracing growth processes and structural artifacts. Next, we review an early augmented reality prototype for designing and projecting biohybrid system simulations into real space. In addition to models that retrace plant behaviors, we specify swarm grammar agents to braid structures in a self-organizing manner.
Based on this model, both robotic and plant-driven braiding processes can be experienced and explored in virtual worlds. We present a corresponding user interface for use in virtual reality. As we present interactive models concerning rather diverse description levels, we only ensured their principal capacity for interaction but did not consider efficiency analyses beyond prototypic operation. We conclude this article with an outlook on future work on melding reality and virtuality to drive the design and deployment of biohybrid systems. KW - biohybrid systems KW - augmented reality KW - virtual reality KW - user interfaces KW - biological development KW - generative systems Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-195755 SN - 2296-9144 VL - 4 ER - TY - JOUR A1 - Fisseler, Denis A1 - Müller, Gerfrid G. W. A1 - Weichert, Frank T1 - Web-Based scientific exploration and analysis of 3D scanned cuneiform datasets for collaborative research JF - Informatics N2 - The three-dimensional cuneiform script is one of the oldest known writing systems and a central object of research in Ancient Near Eastern Studies and Hittitology. An important step towards the understanding of the cuneiform script is the provision of opportunities and tools for joint analysis. This paper presents an approach that contributes to this challenge: a collaboration-compatible, web-based scientific exploration and analysis of 3D scanned cuneiform fragments. The WebGL-based concept incorporates methods for compressed web-based content delivery of large 3D datasets and high-quality visualization. To maximize accessibility and to promote acceptance of 3D techniques in the field of Hittitology, the introduced concept is integrated into the Hethitologie-Portal Mainz, an established leading online research resource in the field of Hittitology, which until now exclusively included 2D content. The paper shows that increasing the availability of 3D scanned archaeological data through a web-based interface can provide significant scientific value while at the same time finding a trade-off between copyright-induced restrictions and scientific usability. KW - cuneiform KW - 3D viewer KW - WebGL KW - Hittitology KW - 3D collation Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-197958 SN - 2227-9709 VL - 4 IS - 4 ER - TY - JOUR A1 - Kunz, Meik A1 - Liang, Chunguang A1 - Nilla, Santosh A1 - Cecil, Alexander A1 - Dandekar, Thomas T1 - The drug-minded protein interaction database (DrumPID) for efficient target analysis and drug development JF - Database N2 - The drug-minded protein interaction database (DrumPID) has been designed to provide fast, tailored information on drugs and their protein networks including indications, protein targets and side-targets. Starting queries include compound, target and protein interactions and organism-specific protein families. Furthermore, drug name, chemical structures and their SMILES notation, affected proteins (potential drug targets), organisms as well as diseases can be queried, including various combinations and refinements of searches. Drugs and protein interactions are analyzed in detail with reference to protein structures and catalytic domains, related compound structures as well as potential targets in other organisms.
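The compound-similarity notion mentioned here is commonly computed from molecular fingerprints; purely as an illustration (an RDKit-based sketch of the general idea, not DrumPID's internal method), aspirin versus its metabolite salicylic acid:

```python
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

def tanimoto(smiles_a, smiles_b):
    """Morgan-fingerprint Tanimoto similarity of two compounds given as SMILES."""
    fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
           for s in (smiles_a, smiles_b)]
    return DataStructs.TanimotoSimilarity(*fps)

# Aspirin vs. salicylic acid -- structurally close compounds score high.
print(tanimoto("CC(=O)Oc1ccccc1C(=O)O", "O=C(O)c1ccccc1O"))
```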
DrumPID considers drug functionality, compound similarity, target structure, interactome analysis and organismic range for a compound, useful for drug development, predicting drug side-effects and structure–activity relationships. KW - drug-minded protein KW - database Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-147369 VL - 2016 ER - TY - JOUR A1 - Ali, Qasim A1 - Montenegro, Sergio T1 - Decentralized control for scalable quadcopter formations JF - International Journal of Aerospace Engineering N2 - An innovative framework has been developed for the teamwork of two quadcopter formations, each having its specified formation geometry, assigned task, and matching control scheme. Position control for quadcopters in one of the formations has been implemented through a Linear Quadratic Regulator Proportional Integral (LQR PI) control scheme based on an explicit model-following scheme. Quadcopters in the other formation are controlled through an LQR PI servomechanism control scheme. These two control schemes are compared in terms of their performance and control effort. Both formations are commanded by their respective ground stations through virtual leaders. Quadcopters in the formations are able to track desired trajectories as well as hover at desired points for a selected time duration. In case of communication loss between the ground station and any of the quadcopters, the neighboring quadcopter provides the command data, received from the ground station, to the affected unit. The proposed control schemes have been validated through extensive simulations using MATLAB®/Simulink® that provided favorable results. KW - scalable quadcopter Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-146704 VL - 2016 ER - TY - JOUR A1 - Ali, Qasim A1 - Montenegro, Sergio T1 - Explicit Model Following Distributed Control Scheme for Formation Flying of Mini UAVs JF - IEEE Access N2 - A centralized heterogeneous formation flight position control scheme has been formulated using an explicit model-following design, based on a Linear Quadratic Regulator Proportional Integral (LQR PI) controller. The leader quadcopter is a stable reference model with desired dynamics whose output is perfectly tracked by the two wingmen quadcopters. The leader itself is controlled through the pole placement control method with desired stability characteristics, while the two followers are controlled through a robust and adaptive LQR PI control method. The selected 3-D formation geometry and static stability are maintained under a number of possible perturbations. With this control scheme, the formation geometry may also be switched to any arbitrary shape during flight, provided a suitable collision avoidance mechanism is incorporated. In case of communication loss between the leader and any of the followers, the other follower provides the data, received from the leader, to the affected follower. The stability of the closed-loop system has been analyzed using singular values. The proposed approach for the tightly coupled formation flight of mini unmanned aerial vehicles has been validated with the help of extensive simulations using MATLAB/Simulink, which provided promising results.
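As an illustrative aside to the two LQR PI formation-control records above: the following is a minimal Python sketch of how an LQR state-feedback gain for a single translational quadcopter axis could be computed with SciPy. The double-integrator model and the weights Q and R are assumptions chosen for illustration; the cited papers additionally use an integral (PI) term and a model-following structure, which this sketch omits.

import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative double-integrator model of one translational axis:
# state x = [position, velocity], input u = commanded acceleration.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

Q = np.diag([10.0, 1.0])  # assumed state weights: position error costs more
R = np.array([[0.1]])     # assumed weight on control effort

P = solve_continuous_are(A, B, Q, R)  # solve the algebraic Riccati equation
K = np.linalg.inv(R) @ B.T @ P        # optimal state-feedback gain, u = -K @ x
print("LQR gain K =", K)

In a formation setting, each follower would apply such a gain to its own state error relative to the (virtual) leader; the papers' contribution lies in the distributed model-following architecture around this core computation.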
KW - quadcopter KW - robustness KW - intelligent vehicles KW - rotors KW - mathematical model KW - aerodynamics KW - adaptation models KW - vehicle dynamics KW - unmanned aerial vehicle KW - distributed control KW - formation flight KW - model following Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-146061 N1 - (c) 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works VL - 4 IS - 397-406 ER - TY - JOUR A1 - Lugrin, Jean-Luc A1 - Latoschik, Marc Erich A1 - Habel, Michael A1 - Roth, Daniel A1 - Seufert, Christian A1 - Grafe, Silke T1 - Breaking Bad Behaviors: A New Tool for Learning Classroom Management Using Virtual Reality JF - Frontiers in ICT N2 - This article presents an immersive virtual reality (VR) system for training classroom management skills, with a specific focus on learning to manage disruptive student behavior in face-to-face, one-to-many teaching scenarios. The core of the system is a real-time 3D virtual simulation of a classroom populated by twenty-four semi-autonomous virtual students. The system has been designed as a companion tool for classroom management seminars in a syllabus for primary and secondary school teachers. This will allow lecturers to link theory with practice using the medium of VR. The system is therefore designed for two users: a trainee teacher and an instructor supervising the training session. The teacher is immersed in a real-time 3D simulation of a classroom by means of a head-mounted display and headphones. The instructor operates a graphical desktop console, which renders a view of the class and the teacher, whose avatar movements are captured by a markerless tracking system. This console includes a 2D graphics menu with convenient behavior and feedback control mechanisms to provide human-guided training sessions. The system is built using low-cost consumer hardware and software. Its architecture and technical design are described in detail. A first evaluation confirms its conformance to critical usability requirements (i.e., safety and comfort, believability, simplicity, acceptability, extensibility, affordability, and mobility). Our initial results are promising and constitute the necessary first step toward a possible investigation of the efficiency and effectiveness of such a system in terms of learning outcomes and experience. KW - virtual reality training KW - immersive classroom management KW - immersive classroom KW - virtual agent interaction KW - student simulation Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-147945 VL - 3 IS - 26 ER - TY - JOUR A1 - Ankenbrand, Markus J. A1 - Weber, Lorenz A1 - Becker, Dirk A1 - Förster, Frank A1 - Bemm, Felix T1 - TBro: visualization and management of de novo transcriptomes JF - Database N2 - RNA sequencing (RNA-seq) has become a powerful tool to understand molecular mechanisms and/or developmental programs. It provides a fast, reliable and cost-effective method to access sets of expressed elements in a qualitative and quantitative manner. Especially for non-model organisms and in the absence of a reference genome, RNA-seq data is used to reconstruct and quantify transcriptomes at the same time.
Even SNPs, InDels, and alternative splicing events are predicted directly from the data without having a reference genome at hand. A key challenge, especially for non-computational personnel, is the management of the resulting datasets, which consist of different data types and formats. Here, we present TBro, a flexible de novo transcriptome browser that tackles this challenge. TBro aggregates sequences, their annotation, expression levels as well as differential testing results. It provides an easy-to-use interface to mine the aggregated data and generate publication-ready visualizations. Additionally, it supports users with an intuitive cart system that helps collect and analyse biologically meaningful sets of transcripts. TBro's modular architecture allows easy extension of its functionalities in the future. In particular, the integration of new data types such as proteomic quantifications or array-based gene expression data is straightforward. Thus, TBro is a fully featured yet flexible transcriptome browser that supports approaching complex biological questions and enhances collaboration of numerous researchers. KW - database Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-147954 VL - 2016 ER - TY - JOUR A1 - Baier, Pablo A. A1 - Baier-Saip, Jürgen A. A1 - Schilling, Klaus A1 - Oliveira, Jauvane C. T1 - Simulator for Minimally Invasive Vascular Interventions: Hardware and Software JF - Presence N2 - In the present work, a simulation system is proposed that can be used as an educational tool by physicians in training the basic skills of minimally invasive vascular interventions. In order to accomplish this objective, initially the physical model of the wire proposed by Konings has been improved. As a result, a simpler and more stable method was obtained to calculate the equilibrium configuration of the wire. In addition, a geometrical method is developed to perform relaxations. It is particularly useful when the wire is hindered in the physical method because of the boundary conditions. Then a recipe is given to merge the physical and the geometrical methods, resulting in efficient relaxations. Moreover, tests have shown that the shape of the virtual wire agrees with the experiment. The proposed algorithm allows real-time executions, and furthermore, the hardware to assemble the simulator has a low cost. KW - simulation system KW - educational tool KW - invasive vascular interventions Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-140580 SN - 1531-3263 VL - 25 IS - 2 ER - TY - THES A1 - Kindermann, Philipp T1 - Angular Schematization in Graph Drawing N2 - Graphs are a frequently used tool to model relationships among entities. A graph is a binary relation between objects, that is, it consists of a set of objects (vertices) and a set of pairs of objects (edges). Networks are common examples of modeling data as a graph. For example, relationships between persons in a social network, or network links between computers in a telecommunication network can be represented by a graph. The clearest way to illustrate the modeled data is to visualize the graphs. The field of Graph Drawing deals with the problem of finding algorithms to automatically generate graph visualizations. The task is to find a "good" drawing, which can be measured by different criteria such as the number of crossings between edges or the used area. In this thesis, we study Angular Schematization in Graph Drawing.
By this, we mean drawings with large angles (for example, between the edges at common vertices or at crossing points). The thesis consists of three parts. First, we deal with the placement of boxes. Boxes are axis-parallel rectangles that can, for example, contain text. They can be placed on a map to label important sites, or can be used to describe semantic relationships between words in a word network. In the second part of the thesis, we consider graph drawings that visually guide the viewer. These drawings generally induce large angles between edges that meet at a vertex. Furthermore, the edges are drawn crossing-free and in a way that makes them easy to follow for the human eye. The third and final part is devoted to crossings with large angles. In drawings with crossings, it is important to have large angles between edges at their crossing point, preferably right angles. N2 - Graphen sind häufig verwendete Werkzeuge zur Modellierung von Zusammenhängen zwischen Daten. Ein Graph ist eine binäre Relation zwischen Objekten, das heißt, er besteht aus einer Menge von Objekten (Knoten) und einer Menge von Paaren von Objekten (Kanten). Netzwerke sind übliche Beispiele für das Modellieren von Daten als ein Graph. Beispielsweise lassen sich Beziehungen zwischen Personen in einem sozialen Netzwerk oder Netzanbindungen zwischen Computern in einem Telekommunikationsnetz als Graph darstellen. Die modellierten Daten können am anschaulichsten dargestellt werden, indem man die Graphen visualisiert. Der Bereich des Graphenzeichnens behandelt das Problem, Algorithmen zum automatischen Erzeugen von Graphenvisualisierungen zu finden. Das Ziel ist es, eine "gute" Zeichnung zu finden, was durch unterschiedliche Kriterien gemessen werden kann; zum Beispiel durch die Anzahl der Kreuzungen zwischen Kanten oder durch den Platzverbrauch. In dieser Arbeit beschäftigen wir uns mit Winkelschematisierung im Graphenzeichnen. Darunter verstehen wir Zeichnungen, in denen die Winkel (zum Beispiel zwischen Kanten an einem gemeinsamen Knoten oder einem Kreuzungspunkt) möglichst groß gestaltet sind. Die Arbeit besteht aus drei Teilen. Im ersten Teil betrachten wir die Platzierung von Boxen. Boxen sind achsenparallele Rechtecke, die zum Beispiel Text enthalten. Sie können beispielsweise auf einer Karte platziert werden, um wichtige Standorte zu beschriften, oder benutzt werden, um semantische Beziehungen zwischen Wörtern in einem Wortnetzwerk darzustellen. Im zweiten Teil der Arbeit untersuchen wir Graphenzeichnungen, die den Betrachter visuell führen. Im Allgemeinen haben diese Zeichnungen große Winkel zwischen Kanten, die sich in einem Knoten treffen. Außerdem werden die Verbindungen kreuzungsfrei und so gezeichnet, dass es dem menschlichen Auge leicht fällt, ihnen zu folgen. Im dritten und letzten Teil geht es um Kreuzungen mit großen Winkeln. In Zeichnungen mit Kreuzungen ist es wichtig, dass die Winkel zwischen Kanten an Kreuzungspunkten groß sind, vorzugsweise rechtwinklig.
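An illustrative footnote to the thesis record above: the central quantity of its third part is the acute angle at which two edges cross, which right-angle-crossing (RAC) drawings require to be 90 degrees. The following Python sketch computes this angle for two straight-line segments; everything here is an illustration, not code from the thesis.

import math

def crossing_angle(p1, p2, q1, q2):
    """Acute angle (degrees) between segments p1-p2 and q1-q2."""
    ux, uy = p2[0] - p1[0], p2[1] - p1[1]
    vx, vy = q2[0] - q1[0], q2[1] - q1[1]
    dot = ux * vx + uy * vy
    norm = math.hypot(ux, uy) * math.hypot(vx, vy)
    # abs() folds the angle into [0, 90]; clamping guards against rounding.
    return math.degrees(math.acos(max(-1.0, min(1.0, abs(dot) / norm))))

# The two diagonals of a unit square cross at a right angle:
print(crossing_angle((0, 0), (1, 1), (0, 1), (1, 0)))  # 90.0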
KW - graph drawing KW - angular schematization KW - boundary labeling KW - contact representation KW - word clouds KW - monotone drawing KW - smooth orthogonal drawing KW - simultaneous embedding KW - right angle crossing KW - independent crossing KW - Graphenzeichnen KW - Winkel KW - Kreuzung KW - v Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-112549 SN - 978-3-95826-020-7 (print) SN - 978-3-95826-021-4 (online) PB - Würzburg University Press CY - Würzburg ER - TY - GEN T1 - Jahresbericht 2014 T1 - Annual Report 2014 N2 - Jahresbericht 2014 des Rechenzentrums der Universität Würzburg N2 - Annual Report 2014 of the Computer Center, University of Wuerzburg T3 - Jahresbericht des Rechenzentrums der Universität Würzburg - 2014 KW - Rechenzentrum Universität Würzburg KW - annual report KW - Computer Center University of Wuerzburg KW - Jahresbericht Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-124432 UR - https://www.rz.uni-wuerzburg.de/infos/publikationen/ ER - TY - THES A1 - Busch, Stephan T1 - Robust, Flexible and Efficient Design for Miniature Satellite Systems T1 - Moderne Kleinstsatelliten - Zwischen Robustheit, Flexibilität und Effizienz N2 - Small satellites contribute significantly to the rapidly evolving innovation in space engineering, in particular in distributed space systems for global Earth observation and communication services. Significant mass reduction by miniaturization, increased utilization of commercial high-tech components, and in particular standardization are the key drivers for modern miniature space technology. This thesis addresses key fields in research and development on miniature satellite technology regarding efficiency, flexibility, and robustness. These challenges are tackled by the University of Wuerzburg's advanced pico-satellite bus, which realizes a generic modular satellite architecture and standardized interfaces for all subsystems. The modular platform ensures reusability, scalability, and increased testability due to its flexible subsystem interface, which allows efficient and compact integration of the entire satellite in a plug-and-play manner. Beside systematic design for testability, a high degree of operational robustness is achieved by the consequent implementation of redundancy for crucial subsystems. This is combined with efficient fault detection, isolation, and recovery mechanisms. Thus, the UWE-3 platform, and in particular the on-board data handling system and the electrical power system, offers one of the most efficient pico-satellite architectures launched in recent years and provides a solid basis for future extensions. The in-orbit performance results of the pico-satellite UWE-3 are presented and summarize successful operations since its launch in 2013. Several software extensions and adaptations have been uploaded to UWE-3, increasing its capabilities. Thus, a very flexible platform for in-orbit software experiments and for evaluations of innovative concepts was provided and tested. N2 - Miniaturisierte Satellitensysteme übernehmen zunehmend eine entscheidende Rolle in der fortschreitenden Globalisierung und Demokratisierung der Raumfahrt. Großes Innovationspotential und neue Kommerzialisierungschancen verspricht der effektive Einsatz von Kleinstsatelliten in zukünftigen fraktionierten Missionen für Erdbeobachtungs- und Kommunikationsanwendungen.
Basierend auf vielen kleinen kooperierenden Systemen können diese Systeme zukünftig große multifunktionale Satelliten ergänzen oder ersetzen. Die Herausforderung bei der Entwicklung miniaturisierter Satellitensysteme ist die Gratwanderung zwischen der im Rahmen der Miniaturisierung notwendigen Effizienz, der für die Erfüllung der Mission geforderten Zuverlässigkeit und der wünschenswerten Wiederverwendbarkeit und Erweiterbarkeit zur Realisierung agiler Kleinstsatellitenserien. Diese Arbeit adressiert verschiedene Aspekte für den optimalen Entwurf robuster, flexibler und effizienter Kleinstsatelliten am Beispiel des UWE-Pico-Satelliten-Busses der Universität Würzburg. Mit dem Ziel der Entwicklung einer soliden Basisplattform für zukünftige Kleinstsatelliten-Formationen wurden entsprechende Designansätze im Rahmen des UWE-3-Projektes in einem integralen Designansatz konsistent umgesetzt. Neben der Entwicklung von effizienten Redundanzkonzepten mit minimalem Overhead für den optimalen Betrieb auf Ressourcen-limitierten Kleinstsatelliten wurde ein modularer Satellitenbus entworfen, der als eine robuste und erweiterbare Basis für zukünftige Missionen dienen soll. Damit realisiert UWE-3 eine der effizientesten Kleinstsatelliten-Architekturen, die in den letzten Jahren in den Orbit gebracht wurden. Dargestellte Missionsergebnisse fassen den erfolgreichen Betrieb des Satelliten seit seinem Launch im Jahr 2013 zusammen. T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 11 KW - Kleinsatellit KW - Fehlertoleranz KW - Miniaturisierung KW - Modularität KW - Picosatellite KW - Satellit Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-136523 SN - 978-3-945459-10-2 ER - TY - GEN T1 - Jahresbericht 2015 T1 - Annual Report 2015 N2 - Jahresbericht 2015 des Rechenzentrums der Universität Würzburg N2 - Annual Report 2015 of the Computer Center, University of Wuerzburg T3 - Jahresbericht des Rechenzentrums der Universität Würzburg - 2015 KW - Julius-Maximilians-Universität Würzburg. Rechenzentrum KW - annual report KW - Computer Center University of Wuerzburg KW - Jahresbericht KW - RZUW Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-136599 UR - https://www.rz.uni-wuerzburg.de/wir/publikationen/ ER - TY - JOUR A1 - Andronic, Joseph A1 - Shirakashi, Ryo A1 - Pickel, Simone U. A1 - Westerling, Katherine M. A1 - Klein, Teresa A1 - Holm, Thorge A1 - Sauer, Markus A1 - Sukhorukov, Vladimir L. T1 - Hypotonic Activation of the Myo-Inositol Transporter SLC5A3 in HEK293 Cells Probed by Cell Volumetry, Confocal and Super-Resolution Microscopy JF - PLoS One N2 - Swelling-activated pathways for myo-inositol, one of the most abundant organic osmolytes in mammalian cells, have not yet been identified. The present study explores the SLC5A3 protein as a possible transporter of myo-inositol in hypotonically swollen HEK293 cells. To address this issue, we examined the relationship between the hypotonicity-induced changes in plasma membrane permeability to myo-inositol Pino [m/s] and the expression/localization of SLC5A3. Pino values were determined by cell volumetry over a wide tonicity range (100–275 mOsm) in myo-inositol-substituted solutions. While being negligible under mild hypotonicity (200–275 mOsm), Pino grew rapidly at osmolalities below 200 mOsm to reach a maximum of ∼3 nm/s at 100–125 mOsm, as indicated by fast cell swelling due to myo-inositol influx.
The increase in Pino resulted most likely from the hypotonicity-mediated incorporation of cytosolic SLC5A3 into the plasma membrane, as revealed by confocal fluorescence microscopy of cells expressing EGFP-tagged SLC5A3 and super-resolution imaging of immunostained SLC5A3 by direct stochastic optical reconstruction microscopy (dSTORM). dSTORM in hypotonic cells revealed a surface density of membrane-associated SLC5A3 proteins of 200–2000 localizations/μm². Assuming SLC5A3 to be the major path for myo-inositol, a turnover rate of 80–800 myo-inositol molecules per second for a single transporter protein was estimated from the combined volumetric and dSTORM data. Hypotonic stress also caused a significant upregulation of SLC5A3 gene expression, as detected by semiquantitative RT-PCR and Western blot analysis. In summary, our data provide first evidence for the swelling-mediated activation of SLC5A3, thus suggesting a functional role of this transporter in the hypotonic volume regulation of mammalian cells. KW - electrolytes KW - isotonic KW - membrane proteins KW - cell membranes KW - hypotonic KW - hypotonic solutions KW - tonicity KW - permeability Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-126408 VL - 10 IS - 3 ER - TY - JOUR A1 - Weiß, Clemens Leonard A1 - Schultz, Jörg T1 - Identification of divergent WH2 motifs by HMM-HMM alignments JF - BMC Research Notes N2 - Background The actin cytoskeleton is a hallmark of eukaryotic cells. Its regulation as well as its interaction with other proteins is carefully orchestrated by actin interaction domains. One of the key players is the WH2 motif, which enables binding to actin monomers and filaments and is involved in the regulation of actin nucleation. In contrast to conserved domains, the identification of this motif in protein sequences is challenging, as it is short and poorly conserved. Findings To identify divergent members, we combined Hidden Markov Model (HMM) to HMM alignments with orthology predictions. Thereby, we identified nearly 500 proteins containing so far unannotated WH2 motifs. This included shootin-1, an actin-binding protein involved in neuron polarization. Among others, WH2 motifs of 'proximal to raf' (ptr) orthologs were identified, which are described in the literature but not annotated in genome databases. Conclusion In summary, we substantially increased the number of WH2-motif-containing proteins. This identification of candidate regions for actin interaction could steer their experimental characterization. Furthermore, the approach outlined here can easily be adapted to the identification of divergent members of further domain families. KW - WH2 domain KW - spire KW - shootin-1 KW - actin nucleation KW - HHblits Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-126413 VL - 8 IS - 18 ER - TY - JOUR A1 - Toepfer, Martin A1 - Corovic, Hamo A1 - Fette, Georg A1 - Klügl, Peter A1 - Störk, Stefan A1 - Puppe, Frank T1 - Fine-grained information extraction from German transthoracic echocardiography reports JF - BMC Medical Informatics and Decision Making N2 - Background Information extraction techniques that get structured representations out of unstructured data make a large amount of clinically relevant information about patients accessible for semantic applications. These methods typically rely on standardized terminologies that guide this process.
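A brief worked aside on the SLC5A3 volumetry record above: its 80–800 molecules/s turnover estimate can be reproduced with back-of-the-envelope arithmetic, dividing the myo-inositol flux per membrane area by the dSTORM transporter surface density. The concentration value below is an assumption chosen to match the reported osmolality range, not a number from the paper.

# Rough re-derivation of the quoted turnover estimate (illustrative only).
N_A = 6.022e23          # Avogadro's number [1/mol]
P_ino = 3e-9            # peak membrane permeability to myo-inositol [m/s]
c = 100.0               # assumed extracellular myo-inositol [mol/m^3] (= 100 mM)

flux = P_ino * c * N_A  # molecules per m^2 per second
flux_um2 = flux * 1e-12 # per um^2 per second (~1.8e5)

for density in (2000, 200):           # dSTORM localizations per um^2
    print(round(flux_um2 / density))  # ~90 and ~900 molecules/s per transporter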
Many languages and clinical domains, however, lack appropriate resources and tools, as well as evaluations of their applications, especially if detailed conceptualizations of the domain are required. For instance, German transthoracic echocardiography reports have not been targeted sufficiently before, despite their importance for clinical trials. This work therefore aimed at the development and evaluation of an information extraction component with a fine-grained terminology that enables the recognition of almost all relevant information stated in German transthoracic echocardiography reports at the University Hospital of Würzburg. Methods A domain expert validated and iteratively refined an automatically inferred base terminology. The terminology was used by an ontology-driven information extraction system that outputs attribute-value pairs. The final component has been mapped to the central elements of a standardized terminology, and it has been evaluated on documents with different layouts. Results The final system achieved state-of-the-art precision (micro average .996) and recall (micro average .961) on 100 test documents that represent more than 90 % of all reports. In particular, principal aspects as defined in a standardized external terminology were recognized with F1 = .989 (micro average) and F1 = .963 (macro average). As a result of keyword matching and restrained concept extraction, the system obtained high precision also on unstructured or exceptionally short documents, and on documents with uncommon layout. Conclusions The developed terminology and the proposed information extraction system allow the extraction of fine-grained information from German semi-structured transthoracic echocardiography reports with very high precision and high recall on the majority of documents at the University Hospital of Würzburg. Extracted results populate a clinical data warehouse which supports clinical research. Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-125509 VL - 15 IS - 91 ER - TY - JOUR A1 - Kuhn, Joachim A1 - Gripp, Tatjana A1 - Flieder, Tobias A1 - Dittrich, Marcus A1 - Hendig, Doris A1 - Busse, Jessica A1 - Knabbe, Cornelius A1 - Birschmann, Ingvild T1 - UPLC-MRM Mass Spectrometry Method for Measurement of the Coagulation Inhibitors Dabigatran and Rivaroxaban in Human Plasma and Its Comparison with Functional Assays JF - PLOS ONE N2 - Introduction The fast, precise, and accurate measurement of the new generation of oral anticoagulants such as dabigatran and rivaroxaban in patients' plasma may provide important information in different clinical circumstances, such as in the case of suspected overdose, when patients switch from an existing oral anticoagulant, in patients with hepatic or renal impairment, in the case of concomitant use of interacting drugs, or to assess the anticoagulant concentration in patients' blood before major surgery. Methods Here, we describe a quick and precise method to measure the coagulation inhibitors dabigatran and rivaroxaban using ultra-performance liquid chromatography electrospray ionization-tandem mass spectrometry in multiple reaction monitoring (MRM) mode (UPLC-MRM MS). Internal standards (ISs) were added to the sample and, after protein precipitation, the sample was separated on a reverse-phase column. After ionization of the analytes, the ions were detected using electrospray ionization-tandem mass spectrometry. The run time was 2.5 minutes per injection. Ion suppression was characterized by means of post-column infusion.
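Before the results quoted below, an illustrative sketch of the calibration arithmetic behind such an assay validation: a linear fit of response against spiked concentration, plus detection limits derived from the blank noise. The 3.3·σ/slope and 10·σ/slope conventions follow common validation guidelines (e.g. ICH) and are assumptions here, not necessarily the exact procedure used in the paper; all numbers are made up.

import numpy as np

# Illustrative calibration data: spiked concentration [ug/L] vs. peak-area ratio.
conc = np.array([0.8, 8.0, 80.0, 400.0, 800.0])
ratio = np.array([0.010, 0.101, 0.988, 5.02, 9.95])

slope, intercept = np.polyfit(conc, ratio, 1)     # linear calibration curve
r = np.corrcoef(conc, ratio)[0, 1]
print(f"slope={slope:.4f}, intercept={intercept:.4f}, r={r:.4f}")  # r > 0.99

sigma_blank = 0.0008                              # assumed SD of blank response
print("LOD  ~", 3.3 * sigma_blank / slope, "ug/L")
print("LLOQ ~", 10.0 * sigma_blank / slope, "ug/L")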
Results The calibration curves of dabigatran and rivaroxaban were linear over the working range between 0.8 and 800 µg/L (r > 0.99). Limits of detection (LOD) in the plasma matrix were 0.21 µg/L for dabigatran and 0.34 µg/L for rivaroxaban, and lower limits of quantification (LLOQ) in the plasma matrix were 0.46 µg/L for dabigatran and 0.54 µg/L for rivaroxaban. The intraassay coefficients of variation (CVs) for dabigatran and rivaroxaban were < 4% and < 6%, respectively; the interassay CVs were < 6% for dabigatran and < 9% for rivaroxaban. Inaccuracy was < 5% for both substances. The mean recovery was 104.5% (range 83.8–113.0%) for dabigatran and 87.0% (range 73.6–105.4%) for rivaroxaban. No significant ion suppression was detected at the elution times of dabigatran or rivaroxaban. Both coagulation inhibitors were stable in citrate plasma at −20 °C, 4 °C, and even at room temperature for at least one week. A method comparison between our UPLC-MRM MS method, the commercially available automated Direct Thrombin Inhibitor assay (DTI assay) for dabigatran measurement from CoaChrom Diagnostica, and the automated anti-Xa assay for rivaroxaban measurement from Chromogenix, both performed on an ACL-TOP, showed a high degree of correlation. However, UPLC-MRM MS measurement of dabigatran and rivaroxaban has a much better selectivity than classical functional assays, which measure the activities of various coagulation factors and are susceptible to interference by other anticoagulant drugs. Conclusions Overall, we developed and validated a sensitive and specific UPLC-MRM MS assay for the quick and specific measurement of dabigatran and rivaroxaban in human plasma. KW - LC-MS/MS KW - validation KW - serum KW - quantification KW - apixaban KW - diagnostic accuracy KW - performance liquid chromatography KW - factor XA inhibitor KW - direct oral anticoagulants KW - direct thrombin inhibitor Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-136023 VL - 10 IS - 12 ER - TY - JOUR A1 - Singh, Amit K. A1 - Kingston, Joseph J. A1 - Gupta, Shishir K. A1 - Batra, Harsh V. T1 - Recombinant Bivalent Fusion Protein rVE Induces CD4+ and CD8+ T-Cell Mediated Memory Immune Response for Protection Against Yersinia enterocolitica Infection JF - Frontiers in Microbiology N2 - Studies investigating the correlates of immune protection against Yersinia infection have established that both humoral and cell-mediated immune responses are required for comprehensive protection. In our previous study, we established that the bivalent fusion protein (rVE) comprising immunologically active regions of the Y. pestis LcrV (100-270 aa) and YopE (50-213 aa) proteins conferred complete passive and active protection against lethal Y. enterocolitica 8081 challenge. In the present study, a cohort of BALB/c mice immunized with rVE or its component proteins rV and rE was assessed for cell-mediated immune responses and memory immune protection against Y. enterocolitica 8081. rVE immunization resulted in extensive proliferation of both CD4 and CD8 T cell subsets; a significantly high antibody titer with balanced IgG1:IgG2a/IgG2b isotypes (1:1 ratio); and upregulation of both Th1 (IFN-α, IFN-γ, IL-2, and IL-12) and Th2 (IL-4) cytokines. On the other hand, rV immunization resulted in a Th2-biased IgG response (11:1 ratio) and proliferation of CD4+ T cells; the rE group of mice exhibited a considerably lower serum antibody titer with a predominant Th1 response (1:3 ratio) and CD8+ T-cell proliferation.
Comprehensive protection with superior survival (100%) was observed among rVE-immunized mice, compared to the significantly lower survival rates in the rE (37.5%) and rV (25%) groups, when challenged intraperitoneally (IP) with Y. enterocolitica 8081 120 days after immunization. The findings of this and our earlier studies define the bivalent fusion protein rVE as a potent candidate vaccine molecule with the capability to concurrently stimulate humoral and cell-mediated immune responses, and as a proof of concept for developing efficient subunit vaccines against Gram-negative facultative intracellular bacterial pathogens. KW - I-tasser KW - Yersinia enterocolitica KW - memory immune responses KW - cytokine profiling KW - CD8+T cells KW - CD4+T cells KW - recombinant protein rVE KW - resistance KW - pneumonic plague KW - pestis infection KW - nonhuman-primates KW - III secretion KW - V-antigen KW - mice KW - vaccine Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-136114 VL - 6 IS - 1407 ER - TY - THES A1 - Ullmann, Tobias T1 - Characterization of Arctic Environment by Means of Polarimetric Synthetic Aperture Radar (PolSAR) Data and Digital Elevation Models (DEM) T1 - Charakterisierung der arktischen Landoberfläche mittels polarimetrischer Radardaten (PolSAR) und digitalen Höhenmodellen (DEM) N2 - The ecosystem of the high northern latitudes is affected by the recently changing environmental conditions. The Arctic has undergone a significant climatic change over the last decades. The land coverage is changing and a phenological response to the warming is apparent. Remotely sensed data can assist the monitoring and quantification of these changes. Remote sensing of the Arctic has predominantly been carried out using optical sensors, but these encounter problems in the Arctic environment, e.g. the frequent cloud cover or the solar geometry. In contrast, imaging with Synthetic Aperture Radar is not affected by cloud cover, and the acquisition of radar imagery is independent of solar illumination. The objective of this work was to explore how polarimetric Synthetic Aperture Radar (PolSAR) data of TerraSAR-X, TanDEM-X, Radarsat-2 and ALOS PALSAR and interferometrically derived digital elevation model data of the TanDEM-X Mission can contribute to collecting meaningful information on the actual state of the Arctic environment. The study was conducted for Canadian sites of the Mackenzie Delta Region and Banks Island, and in situ reference data were available for the assessment. The state-of-the-art analysis of the PolSAR data made the application of Non-Local Means filtering and of a decomposition of co-polarized data necessary. The Non-Local Means filter showed a high capability to preserve the image values, to keep the edges, and to reduce the speckle. This supported the suitability of the data not only for interpretation but also for classification. The classification accuracies of Non-Local Means filtered data were on average 10% higher compared to unfiltered images. The correlation of the co- and quad-polarized decomposition features was high for classes with distinct surface or double-bounce scattering, and a usage of the co-polarized data is beneficial for regions of natural land coverage and for low vegetation formations with little volume scattering. The evaluation further revealed that the X- and C-Band were most sensitive to the generalized land cover classes.
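A minimal sketch of the Non-Local Means filtering step described in the record above, here using scikit-image on a synthetic intensity image with multiplicative (speckle-like) noise. The thesis develops its own NLM variant for PolSAR data; this generic stand-in and all parameter values are assumptions for illustration.

import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

rng = np.random.default_rng(0)
clean = np.zeros((128, 128))
clean[32:96, 32:96] = 1.0                         # a bright homogeneous patch
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)

sigma = estimate_sigma(speckled)                  # rough noise estimate
filtered = denoise_nl_means(speckled, h=1.15 * sigma,
                            patch_size=5, patch_distance=6, fast_mode=True)
# NLM averages patches that look alike, which smooths speckle inside
# homogeneous regions while keeping the edges of the patch largely intact.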
It was found that the X-Band data were sensitive to low vegetation formations with low shrub density, while the C-Band data were sensitive to the shrub density and to shrub-dominated tundra. In contrast, the L-Band data were less sensitive to the land cover. Among the different dual-polarized data, the HH/VV-polarized data were identified as the most meaningful for characterization and classification, followed by the HH/HV-polarized and the VV/VH-polarized data. The quad-polarized data showed the highest sensitivity to the land cover, but the differences to the co-polarized data were small. The accuracy assessment showed that spectral information was required for accurate land cover classification. The best results were obtained when spectral and radar information were combined. The benefit of including radar data in the classification was an accuracy gain of up to 15% and was most significant for the classes wetland and sparsely vegetated tundra. The best classifications were realized with quad-polarized C-Band and multispectral data and with co-polarized X-Band and multispectral data. The overall accuracy was up to 80% for unsupervised and up to 90% for supervised classifications. The results indicated that the shortwave co-polarized data show promise for the classification of tundra land cover, since the polarimetric information is sensitive to low vegetation and the wetlands. Furthermore, co-polarized data provide a higher spatial resolution than quad-polarized data. The analysis of the intermediate digital elevation model data of the TanDEM-X Mission showed a high potential for the characterization of the surface morphology. The basic and relative topographic features were shown to be of high relevance for the quantification of the surface morphology, and an area-wide application is feasible. In addition, these data were of value for the classification and delineation of landforms. Such classifications will assist the delineation of geomorphological units and have the potential to identify locations of current and future morphologic activity. N2 - Die polaren Regionen der Erde zeigen eine hohe Sensitivität gegenüber dem aktuell stattfindenden klimatischen Wandel. Für den Raum der Arktis wurde eine signifikante Erwärmung der Landoberfläche beobachtet und zukünftige Prognosen zeigen einen positiven Trend der Temperaturentwicklung. Die Folgen für das System sind tiefgehend, zahlreich und zeigen sich bereits heute - beispielsweise in einer Zunahme der photosynthetischen Aktivität und einer Verstärkung der geomorphologischen Dynamik. Durch satellitengestützte Fernerkundungssysteme steht ein Instrumentarium bereit, welches in der Lage ist, solch großflächige und aktuelle Änderungen der Landoberfläche nachzuzeichnen und zu quantifizieren. Insbesondere optische Systeme haben in den vergangenen Jahren ihre hohe Anwendbarkeit für die kontinuierliche Beobachtung und Quantifizierung von Änderungen bewiesen, bzw. durch sie ist ein Erkennen der Änderungen erst ermöglicht worden. Der Nutzen von optischen Systemen für die Beobachtung der arktischen Landoberfläche wird dabei aber durch die häufige Beschattung durch Wolken und die Beleuchtungsgeometrie erschwert bzw. unmöglich gemacht. Demgegenüber eröffnen bildgebende Radarsysteme durch die aktive Sendung von elektromagnetischen Signalen die Möglichkeit, kontinuierlich Daten über den Zustand der Oberfläche aufzuzeichnen, ohne von den atmosphärischen oder orbitalen Bedingungen abhängig zu sein.
Das Ziel der vorliegenden Arbeit war es, den Nutzen und Mehrwert von polarimetrischen Synthetic-Aperture-Radar-Daten (PolSAR) der Satelliten TerraSAR-X, TanDEM-X, Radarsat-2 und ALOS PALSAR für die Charakterisierung und Klassifikation der arktischen Landoberfläche zu identifizieren. Darüber hinaus war es ein Ziel, das vorläufige interferometrische digitale Höhenmodell der TanDEM-X-Mission für die Charakterisierung der Landoberflächen-Morphologie zu verwenden. Die Arbeiten erfolgten hauptsächlich an ausgewählten Testgebieten im Bereich des kanadischen Mackenzie-Deltas und im Norden von Banks Island. Für diese Regionen standen in situ erhobene Referenzdaten zur Landbedeckung zur Verfügung. Mit Blick auf den aktuellen Stand der Forschung wurden die Radardaten mit einem entwickelten Non-Local-Means-Verfahren gefiltert. Die co-polarisierten Daten wurden zudem mit einer neu entwickelten Zwei-Komponenten-Dekomposition verarbeitet. Das entwickelte Filterverfahren zeigte eine hohe Anwendbarkeit für alle Radardaten. Der Ansatz war in der Lage, die Kanten und Grauwerte im Bild zu erhalten, bei einer gleichzeitigen Reduktion der Varianz und des Speckle-Effekts. Dies verbesserte nicht nur die Bildinterpretation, sondern auch die Bildklassifikation, und eine Erhöhung der Klassifikationsgüte von ca. +10% konnte durch die Filterung erreicht werden. Die Merkmale der Dekomposition von co-polarisierten Daten zeigten eine hohe Korrelation zu den entsprechenden Merkmalen der Dekomposition von voll-polarisierten Daten. Die Korrelation war besonders hoch für Landbedeckungstypen, welche eine double- oder single-bounce-Rückstreuung hervorrufen. Eine Anwendung von co-polarisierten Daten ist somit besonders sinnvoll und aussagekräftig für Landbedeckungstypen, welche nur einen geringen Teil an Volumenstreuung bedingen. Die vergleichende Auswertung der PolSAR-Daten zeigte, dass sowohl X- als auch C-Band-Daten besonders sensitiv für die untersuchten Landbedeckungsklassen waren. Die X-Band-Daten zeigten die höchste Sensitivität für niedrige Tundrengesellschaften. Die C-Band-Daten zeigten eine höhere Sensitivität für mittelhohe Tundrengesellschaften und Gebüsch (shrub). Die L-Band-Daten wiesen im Vergleich dazu die geringste Sensitivität für die Oberflächenbedeckung auf. Ein Vergleich von verschiedenen dual-polarisierten Daten zeigte, dass die Kanalkombination HH/VV die beste Differenzierung der Landbedeckungsklassen lieferte. Weniger deutlich war die Differenzierung mit den Kombinationen HH/HV und VV/VH. Insgesamt am besten waren jedoch die voll-polarisierten Daten geeignet, auch wenn die Verbesserung im Vergleich zu den co-polarisierten Daten nur gering war. Die Analyse der Klassifikationsgenauigkeiten bestätigte dieses Bild, machte jedoch deutlich, dass zu einer genauen Landbedeckungsklassifikation die Einbeziehung von multispektraler Information notwendig ist. Eine Nutzung von voll-polarisierten C-Band- und multispektralen Daten erbrachte so eine mittlere Güte von ca. 80% für unüberwachte und von ca. 90% für überwachte Klassifikationsverfahren. Ähnlich hohe Werte wurden für die Kombination von co-polarisierten X-Band- und multispektralen Daten erreicht. Im Vergleich zu Klassifikationen, die nur auf Grundlage von multispektralen Daten durchgeführt wurden, erbrachte die Einbeziehung der polarisierten Radardaten eine zusätzliche durchschnittliche Klassifikationsgüte von ca. +15%.
Der Zugewinn und die Möglichkeit zur Differenzierung waren vor allem für die Bedeckungstypen der Feuchtgebiete (wetlands) und der niedrigen Tundrengesellschaften festzustellen. Die Analyse der digitalen Höhenmodelle zeigte ein hohes Potential der TanDEM-X-Daten für die Charakterisierung der topographischen Gegebenheiten. Die aus den Daten abgeleiteten absoluten und relativen topographischen Merkmale waren für eine morphometrische Quantifizierung der Landoberflächen-Morphologie geeignet. Zudem konnten diese Merkmale auch für eine initiale Klassifikation der Landformen genutzt werden. Die Daten zeigten somit ein hohes Potential für die Unterstützung der geomorphologischen Kartierung und für die Identifizierung der aktuellen und zukünftigen Dynamik der Landoberfläche. KW - Mackenzie-River-Delta KW - Banks Islands KW - Radarfernerkundung KW - Topografie KW - Formmessung KW - Klassifikation KW - Relief KW - PolSAR KW - Synthetic Aperture Radar KW - Land Cover Classification KW - Digital Elevation Model KW - Arctic Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-115719 ER - TY - THES A1 - Winkler, Marco T1 - On the Role of Triadic Substructures in Complex Networks T1 - Über die Bedeutung von Dreiecksstrukturen in komplexen Netzwerken N2 - In the course of the growth of the Internet and due to the increasing availability of data, over the last two decades the field of network science has established itself as a research area in its own right. With quantitative scientists from computer science, mathematics, and physics working on datasets from biology, economics, sociology, political science, and many other fields, network science serves as a paradigm for interdisciplinary research. One of the major goals in network science is to unravel the relationship between topological graph structure and a network's function. As evidence suggests, systems from the same fields, i.e. with similar function, tend to exhibit similar structure. However, it is still unclear whether a similar graph structure automatically implies similar function. This dissertation aims at helping to bridge this gap, with a particular focus on the role of triadic structures. After a general introduction to the main concepts of network science, existing work devoted to the relevance of triadic substructures is reviewed. A major challenge in modeling triadic structure is the fact that not all three-node subgraphs can be specified independently of each other, as pairs of nodes may participate in multiple of those triadic subgraphs. In order to overcome this obstacle, we suggest a novel class of generative network models based on so-called Steiner triple systems. The latter are partitions of a graph's vertices into pair-disjoint triples (Steiner triples). Thus, the configurations on Steiner triples can be specified independently of each other without overdetermining the network's link structure. Subsequently, we investigate the most basic realization of this new class of models. We call it the triadic random graph model (TRGM). The TRGM is parametrized by a probability distribution over all possible triadic subgraph patterns. In order to generate a network instantiation of the model, for all Steiner triples in the system, a pattern is drawn from the distribution and adjusted randomly on the Steiner triple. We calculate the degree distribution of the TRGM analytically and find it to be similar to a Poissonian distribution. Furthermore, it is shown that TRGMs possess non-trivial triadic structure.
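An illustrative sketch of the generative step just described for the TRGM: vertices are grouped into disjoint triples, and for each triple an undirected triad pattern is drawn from a given distribution and placed in random orientation. The simplistic consecutive grouping below is only a toy stand-in for the thesis' pair-disjoint Steiner triples, and all names and weights are assumptions.

import random
import networkx as nx

# Undirected triad patterns on a triple (a, b, c): which pairs receive an edge.
PATTERNS = {"empty": [], "one_edge": [(0, 1)], "path": [(0, 1), (1, 2)],
            "triangle": [(0, 1), (1, 2), (0, 2)]}

def trgm_like(n, weights, seed=0):
    """Draw one pattern per vertex triple (toy stand-in for Steiner triples)."""
    rng = random.Random(seed)
    g = nx.empty_graph(n)
    names = list(PATTERNS)
    for i in range(0, n - n % 3, 3):
        triple = [i, i + 1, i + 2]
        rng.shuffle(triple)  # random orientation of the pattern on the triple
        for a, b in PATTERNS[rng.choices(names, weights=weights)[0]]:
            g.add_edge(triple[a], triple[b])
    return g

g = trgm_like(300, weights=[0.2, 0.3, 0.3, 0.2])
print(g.number_of_edges(), nx.transitivity(g))  # more triangle weight -> higher clustering

Shifting probability mass toward the "triangle" pattern raises the network-wide abundance of closed triads, which is the interdependence the abstract goes on to discuss.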
We discover inevitable correlations in the abundance of certain triadic subgraph patterns which should be taken into account when attributing functional relevance to particular motifs – patterns which occur significantly more frequently than expected at random. Beyond that, the strong impact of the probability distributions on the Steiner triples on the occurrence of triadic subgraphs over the whole network is demonstrated. This interdependence allows us to design ensembles of networks with predefined triadic substructure. Hence, TRGMs help to overcome the lack of generative models needed for assessing the relevance of triadic structure. We further investigate whether motifs occur homogeneously or heterogeneously distributed over a graph. Therefore, we study triadic subgraph structures in each node's neighborhood individually. In order to quantitatively measure structure from an individual node's perspective, we introduce an algorithm for node-specific pattern mining for both directed unsigned and undirected signed networks. Analyzing real-world datasets, we find that there are networks in which motifs are distributed highly heterogeneously, bound to the proximity of only very few nodes. Moreover, we observe indications of a potential sensitivity of biological systems to a targeted removal of these critical vertices. In addition, we study whole graphs with respect to the homogeneity and homophily of their node-specific triadic structure. The former describes the similarity of subgraph distributions in the neighborhoods of individual vertices. The latter quantifies whether connected vertices are structurally more similar than non-connected ones. We discover these features to be characteristic for the networks' origins. Moreover, clustering the vertices of graphs regarding their triadic structure, we investigate structural groups in the neural network of C. elegans, the international airport-connection network, and the global network of diplomatic sentiments between countries. For the latter we find evidence for the instability of triangles considered socially unbalanced according to sociological theories. Finally, we utilize our TRGM to explore ensembles of networks with similar triadic substructure in terms of the evolution of dynamical processes acting on their nodes. Focusing on oscillators coupled along the graphs' edges, we observe that certain triad motifs impose a clear signature on the systems' dynamics, even when embedded in a larger network structure. N2 - Im Zuge des Wachstums des Internets und der Verfügbarkeit nie da gewesener Datenmengen hat sich während der letzten beiden Jahrzehnte die Netzwerkwissenschaft zu einer eigenständigen Forschungsrichtung entwickelt. Mit Wissenschaftlern aus quantitativen Feldern wie der Informatik, Mathematik und Physik, die Datensätze aus Biologie, den Wirtschaftswissenschaften, Soziologie, Politikwissenschaft und vielen weiteren Anwendungsgebieten untersuchen, stellt die Netzwerkwissenschaft ein Paradebeispiel interdisziplinärer Forschung dar. Eines der grundlegenden Ziele der Netzwerkwissenschaft ist es, den Zusammenhang zwischen der topologischen Struktur und der Funktion von Netzwerken herauszufinden. Es gibt zahlreiche Hinweise, dass Netzwerke aus den gleichen Bereichen, d.h. Systeme mit ähnlicher Funktion, auch ähnliche Strukturen aufweisen. Es ist allerdings nach wie vor unklar, ob eine ähnliche Graphstruktur generell zu gleicher Funktionsweise führt.
Es ist das Ziel der vorliegenden Dissertation, zur Klärung dieser Frage beizutragen. Das Hauptaugenmerk wird hierbei auf der Rolle von Dreiecksstrukturen liegen. Nach einer allgemeinen Einführung der wichtigsten Grundlagen der Theorie komplexer Netzwerke wird eine Übersicht über existierende Arbeiten zur Bedeutung von Dreiecksstrukturen gegeben. Eine der größten Herausforderungen bei der Modellierung triadischer Strukturen ist die Tatsache, dass nicht alle Dreiecksbeziehungen in einem Graphen unabhängig voneinander bestimmt werden können, da zwei Knoten an mehreren solcher Dreiecksbeziehungen beteiligt sein können. Um dieses Problem zu lösen, führen wir, basierend auf sogenannten Steiner-Tripel-Systemen, eine neue Klasse generativer Netzwerkmodelle ein. Steiner-Tripel-Systeme sind Zerlegungen der Knoten eines Graphen in paarfremde Tripel (Steiner-Tripel). Daher können die Konfigurationen auf Steiner-Tripeln unabhängig voneinander gewählt werden, ohne dass dies zu einer Überbestimmung der Netzwerkstruktur führen würde. Anschließend untersuchen wir die grundlegendste Realisierung dieser neuen Klasse von Netzwerkmodellen, die wir das triadische Zufallsgraph-Modell (engl. triadic random graph model, TRGM) nennen. TRGMs werden durch eine Wahrscheinlichkeitsverteilung über alle möglichen Dreiecksstrukturen parametrisiert. Um ein konkretes Netzwerk zu erzeugen, wird für jedes Steiner-Tripel eine Dreiecksstruktur gemäß der Wahrscheinlichkeitsverteilung gezogen und zufällig auf dem Tripel orientiert. Wir berechnen die Knotengradverteilung des TRGM analytisch und finden heraus, dass diese einer Poissonverteilung ähnelt. Des Weiteren wird gezeigt, dass TRGMs nichttriviale Dreiecksstrukturen aufweisen. Außerdem finden wir unvermeidliche Korrelationen im Auftreten bestimmter Subgraphen, derer man sich bewusst sein sollte, insbesondere wenn es darum geht, die Bedeutung sogenannter Motive (Strukturen, die signifikant häufiger als zufällig erwartet auftreten) zu beurteilen. Darüber hinaus wird der starke Einfluss der Wahrscheinlichkeitsverteilung auf den Steiner-Tripeln auf die generelle Dreiecksstruktur der erzeugten Netzwerke gezeigt. Diese Abhängigkeit ermöglicht es, Netzwerkensembles mit vorgegebener Dreiecksstruktur zu konzipieren. Daher helfen TRGMs dabei, den bestehenden Mangel an generativen Netzwerkmodellen zur Beurteilung der Bedeutung triadischer Strukturen in Graphen zu beheben. Es wird ferner untersucht, wie homogen Motive räumlich über Graphstrukturen verteilt sind. Zu diesem Zweck untersuchen wir das Auftreten von Dreiecksstrukturen in der Umgebung jedes Knotens separat. Um die Struktur individueller Knoten quantitativ erfassen zu können, führen wir einen Algorithmus zur knotenspezifischen Musterauswertung (node-specific pattern mining) ein, der sowohl auf gerichtete als auch auf Graphen mit positiven und negativen Kanten angewendet werden kann. Bei der Analyse realer Datensätze beobachten wir, dass Motive in einigen Netzen hochgradig heterogen verteilt und auf die Umgebung einiger weniger Knoten beschränkt sind. Darüber hinaus finden wir Hinweise auf die mögliche Fehleranfälligkeit biologischer Systeme gegenüber einem gezielten Entfernen ebendieser Knoten. Des Weiteren studieren wir ganze Graphen bezüglich der Homogenität und Homophilie ihrer knotenspezifischen Dreiecksmuster. Erstere beschreibt die Ähnlichkeit der lokalen Dreiecksstrukturen zwischen verschiedenen Knoten. Letztere gibt an, ob sich verbundene Knoten bezüglich ihrer Dreiecksstruktur ähnlicher sind als nicht verbundene Knoten.
Wir stellen fest, dass diese Eigenschaften charakteristisch für die Herkunft der jeweiligen Netzwerke sind. Darüber hinaus gruppieren wir die Knoten verschiedener Systeme bezüglich der Ähnlichkeit ihrer lokalen Dreiecksstrukturen. Hierzu untersuchen wir das neuronale Netz von C. elegans, das internationale Flugverbindungsnetzwerk sowie das Netzwerk internationaler Beziehungen zwischen Staaten. In Letzterem finden wir Hinweise darauf, dass Dreieckskonfigurationen, die nach soziologischen Theorien als unbalanciert gelten, besonders instabil sind. Schließlich verwenden wir unser TRGM, um Netzwerkensembles mit ähnlicher Dreiecksstruktur bezüglich der Eigenschaften dynamischer Prozesse, die auf ihren Knoten ablaufen, zu untersuchen. Wir konzentrieren uns auf Oszillatoren, die entlang der Kanten der Graphen miteinander gekoppelt sind. Hierbei beobachten wir, dass bestimmte Dreiecksmotive charakteristische Merkmale im dynamischen Verhalten der Systeme hinterlassen. Dies ist auch der Fall, wenn die Motive in eine größere Netzwerkstruktur eingebettet sind. KW - Netzwerk KW - Komplexes System KW - Substruktur KW - Dreieck KW - Networks KW - Complex Systems KW - Statistics KW - Machine Learning KW - Biological Networks KW - Statistische Physik KW - Statistische Mechanik KW - Data Mining KW - Maschinelles Lernen KW - Graphentheorie Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-116022 SN - 978-3-7375-5654-5 PB - epubli GmbH CY - Berlin ER - TY - CHAP A1 - Ali, Qasim A1 - Montenegro, Sergio T1 - A Simple Approach to Quadrocopter Formation Flying Test Setup for Education and Development T2 - INTED2015 Proceedings N2 - A simple test setup has been developed at the Institute of Aerospace Information Technology, University of Würzburg, Germany, to realize basic functionalities for formation flight of quadrocopters. The test environment is planned to be utilized for developing and validating algorithms for formation flying capability in a real environment as well as for educational purposes. An already existing test bed for a single quadrocopter was extended with the necessary inter-communication and distributed control mechanisms to test algorithms for formation flights in 2 degrees of freedom (roll/pitch). This study encompasses the domains of communication, control engineering and embedded systems programming. The Bluetooth protocol has been used for inter-communication between the two quadrocopters. A simple approach of PID control in combination with a Kalman filter has been exploited. The MATLAB Instrument Control Toolbox has been used for data display, plotting and analysis. Plots can be drawn in real time, and received information can also be stored in the form of files for later use and analysis. The test setup has been developed indigenously and at considerably low cost. Emphasis has been placed on simplicity to facilitate the students' learning process. Several lessons have been learnt during the course of development of this setup. The proposed setup is quite flexible and can be modified as requirements change.
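A minimal Python sketch of the PID-plus-Kalman loop described in the record above, reduced to one attitude axis. All gains, noise figures, and the crude plant model are assumptions for illustration; the actual setup runs on embedded hardware with MATLAB-side monitoring.

import random

class Pid:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0
    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def kalman_1d(est, var, meas, meas_var=0.05, process_var=0.01):
    """One predict/update step for a scalar angle estimate."""
    var += process_var                 # predict: uncertainty grows
    gain = var / (var + meas_var)      # update: blend in the measurement
    return est + gain * (meas - est), (1 - gain) * var

pid, angle, est, var = Pid(2.0, 0.5, 0.1, dt=0.01), 0.2, 0.0, 1.0
for _ in range(500):                   # toy roll-stabilization loop
    meas = angle + random.gauss(0, 0.05)   # noisy attitude sensor reading
    est, var = kalman_1d(est, var, meas)   # filtered angle estimate
    u = pid.update(0.0, est)               # drive the roll angle to zero
    angle += 0.01 * u                      # crude first-order plant response
print(round(angle, 3))                 # settles near 0.0

In the formation case, the same loop runs on each quadrocopter, with setpoints exchanged over the Bluetooth link between the two vehicles.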
KW - Flugkörper KW - Design and Development KW - Formation Flight KW - Instrument Control Toolbox KW - Quadrocopter KW - Unmanned Aerial Vehicle Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-114495 SN - 978-84-606-5763-7 SN - 2340-1079 SP - 2776 EP - 2784 PB - International Academy of Technology, Education and Development (IATED) ER - TY - THES A1 - Hartmann, Matthias T1 - Optimization and Design of Network Architectures for Future Internet Routing T1 - Optimierung und Design von Netzwerkarchitekturen für zukünftiges Internet Routing N2 - At the center of the Internet's protocol stack stands the Internet Protocol (IP) as a common denominator that enables all communication. To make routing efficient, resilient, and scalable, several aspects must be considered. Care must be taken that traffic is well balanced to make efficient use of the existing network resources, both in failure-free operation and in failure scenarios. Finding the optimal routing in a network is an NP-complete problem. Therefore, routing optimization is usually performed using heuristics. This dissertation shows that routing optimized with one objective function is often not good when looking at other objective functions. It can even be worse than unoptimized routing with respect to that objective function. After looking at failure-free routing and traffic distribution in different failure scenarios, the analysis is extended to include the loop-free alternate (LFA) IP fast-reroute mechanism. Different application scenarios of LFAs are examined, and a special focus is set on the fact that LFAs usually cannot protect all traffic in a network even against single link failures. Thus, the routing optimization for LFAs targets both link utilization and failure coverage. Finally, the pre-congestion notification mechanism PCN for network admission control and overload protection is analyzed and optimized. Different design options for implementing the protocol are compared before algorithms are developed for the calculation and optimization of protocol parameters and PCN-based routing. The second part of the thesis tackles a routing problem that can only be resolved on a global scale. The scalability of the Internet is at risk, since a major and intensifying growth of the interdomain routing tables has been observed. Several protocols and architectures are analyzed that can be used to make interdomain routing more scalable. The most promising approach is the locator/identifier (Loc/ID) split architecture, which separates routing from host identification. This way, changes in connectivity, mobility of end hosts, or traffic-engineering activities are hidden from the routing in the core of the Internet, and the routing tables can be kept much smaller. All of the currently proposed Loc/ID split approaches have their downsides. In particular, the fact that most architectures use the ID for routing outside the Internet's core is a poor design, which inhibits many of the possible features of a new routing architecture. To better understand the problems and to provide a solution for a scalable routing design that implements a true Loc/ID split, the new GLI-Split protocol is developed in this thesis, which provides separation of global and local routing and uses an ID that is independent from any routing decisions. Besides GLI-Split, several other new routing architectures implementing Loc/ID split have been proposed for the Internet.
Most of them assume that a mapping system is queried for EID-to-RLOC mappings by an intermediate node at the border of an edge network. When the mapping system is queried by an intermediate node, packets are already on their way towards their destination; therefore, the mapping system must be fast, scalable, secure, and resilient, and it should be able to relay packets without locators to nodes that can forward them to the correct destination. The dissertation develops a classification for all proposed mapping system architectures and shows their similarities and differences. Finally, the fast two-level mapping system FIRMS is developed. It includes security and resilience features as well as a relay service for the initial packets of a flow when intermediate nodes encounter a cache miss for the EID-to-RLOC mapping. N2 - Daten werden heutzutage mit dem paketbasierten Internet Protokoll (IP) durch das Internet übertragen. Dezentralisierte Routingprotokolle innerhalb der einzelnen Netze sorgen für eine zielgerichtete Weiterleitung der einzelnen Pakete. Diese verteilten Protokolle können auch im Fehlerfall weiterarbeiten, benötigen aber mitunter sehr lange, bis Daten wieder zuverlässig am Ziel ankommen. Um im Betrieb des Internets eine hohe Leistungsfähigkeit auch bei auftretenden Problemfällen zu gewährleisten, müssen die eingesetzten Protokolle optimal eingestellt werden. Zielfunktionen zur Optimierung paketbasierter Link-State Intradomain-Routingprotokolle: Ein wichtiger Faktor für die Performanz eines Netzes ist die Auswahl der administrativen Linkkosten, anhand derer die Weiterleitungsentscheidungen im Netz getroffen werden. Mit Hilfe von Modellen für Verkehrsaufkommen und für die darunterliegende Netzarchitektur kann mit geeigneten Optimierungsmethoden das Netz für verschiedene Szenarien bestmöglich eingestellt werden. Von besonderer Wichtigkeit ist hierbei die Auswahl der betrachteten Szenarien und Zielfunktionen für die Optimierung. Eine Routingkonfiguration, die optimal für ein bestimmtes Ziel ist, kann beliebig schlecht für ein anderes Ziel sein. Zum Beispiel kann eine Konfiguration, die eine besonders hohe Fehlerabdeckung erreicht, zu einer sehr schlechten Verkehrsverteilung führen. Im Rahmen der Dissertation werden heuristische Optimierungen des Routings für verschiedene Protokolle und Anwendungsszenarien durchgeführt. Darüber hinaus wird eine Pareto-Optimierung implementiert, die gleichzeitig mehrere Ziele optimieren kann. Die Analysen werden zuerst für normales Routing im fehlerfreien Fall und für Fehlerszenarien durchgeführt. Daraufhin werden verschiedenste Anwendungsfälle des IP-Fast-Reroute-Mechanismus Loop-Free Alternates (LFA) betrachtet. Hier wird insbesondere auf die Problematik eingegangen, dass LFAs in Abhängigkeit vom eingestellten Routing in bestimmten Fehlerfällen nicht angewendet werden können. Beim Optimieren des Routings muss hier zusätzlich zur Lastverteilung auch noch die Maximierung der Fehlerabdeckung berücksichtigt werden. Schließlich folgt eine Untersuchung und Optimierung des Pre-Congestion-Notification-Verfahrens (PCN) zur Netzzugangskontrolle und Überlaststeuerung. Hier werden verschiedene Architekturvarianten des Protokolls miteinander verglichen und Algorithmen zur Berechnung und Optimierung wichtiger Parameter des Protokolls entwickelt. Das Wachstum der Routingtabellen im Kern des Internets droht zu einem Skalierbarkeitsproblem zu werden. Ein Grund für diese Problematik ist die duale Funktion der IP-Adresse.
Sie wird einerseits zur Identifikation eines Geräts benutzt und andererseits zur Weiterleitung der Daten zu diesem Gerät. Neue Mechanismen und Protokolle, die eine Trennung zwischen den beiden Funktionalitäten der IP-Adresse ermöglichen, sind potentielle Kandidaten für eine bessere Skalierbarkeit des Internetroutings und damit für die Erhaltung der Funktionalität des Internets. Design eines neuen Namens- und Routingprotokolls für skalierbares Interdomain-Routing: In der Dissertation werden grundlegende Eigenschaften, die zu diesem Problem führen, erörtert. Daraufhin werden vorhandene Ansätze zur Verbesserung der Skalierbarkeit des Internetroutings analysiert, und es werden Gemeinsamkeiten wie auch Schwachstellen identifiziert. Auf dieser Basis wird dann ein Protokoll entwickelt, das eine strikte Trennung zwischen Identifikationsadressen (IDs) und routebaren Locator-Adressen einhält. Das GLI-Split genannte Protokoll geht dabei über den einfachen Split von vorhandenen Architekturvorschlägen hinaus und führt eine weitere Adresse ein, die nur für das lokale Routing innerhalb eines Endkunden-Netzes benutzt wird. Hierdurch wird die ID eines Endgeräts vollständig unabhängig vom Routing. Durch das GLI-Split-Protokoll kann das globale Routing wieder skalierbar gemacht werden. Zusätzlich bietet es viele Vorteile für Netze, die das Protokoll einführen, was als Anreiz nötig ist, um den Einsatz eines neuen Protokolls zu motivieren. Solch ein Identifier/Locator-Split-Protokoll benötigt ein Mappingsystem, um die Identifier der Endgeräte in routebare Locator-Adressen zu übersetzen. Im letzten Teil der Dissertation wird eine mehrstufige Mapping-Architektur namens FIRMS entwickelt. Über ein hierarchisches Verteilungssystem, das die Adressvergabestruktur der fünf Regionalen Internet Registrare (RIRs) und der darunterliegenden Lokalen Internet Registrare (LIRs) abbildet, werden die erforderlichen Zuordnungstabellen so verteilt, dass jederzeit schnell auf die benötigten Informationen zugegriffen werden kann. Hierbei wird auch besonders auf Sicherheitsaspekte geachtet. T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 02/15 KW - Netzwerk KW - Routing KW - Optimierung KW - Netzwerkmanagement KW - Optimization KW - Future Internet Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-114165 SN - 1432-8801 ER - TY - THES A1 - Freiberg, Martina T1 - UI-, User-, & Usability-Oriented Engineering of Participative Knowledge-Based Systems T1 - UI-, Benutzer-, & Usability-orientierte Entwicklung partizipativer Wissensbasierter Systeme N2 - Knowledge-based systems (KBS) attract an ever-increasing interest in various disciplines and contexts. Yet, the former aim of constructing the ’perfect intelligent software’ continuously shifts towards user-centered, participative solutions. Such systems enable users to contribute their personal knowledge to the problem-solving process for increased efficiency and an improved user experience. More precisely, we define the non-functional key requirements of participative KBS as: transparency (encompassing KBS status mediation), configurability (user adaptability, degree of user control/exploration), quality of the KB and UI, and evolvability (enabling the KBS to grow mature with their users). Many of those requirements depend on the respective target users, thus calling for a more user-centered development. Often, highly specialized expert domains are targeted — inducing highly complex KBs — which requires a more careful and considerate UI/interaction design.
Still, current KBS engineering (KBSE) approaches mostly focus on knowledge acquisition (KA). This often leads to non-optimal, hardly reusable, and little or not at all evaluated KBS front-end solutions. In this thesis we propose a more encompassing KBSE approach. Due to the strong mutual influences between KB and UI, we suggest a novel form of intertwined UI and KB development. We base the approach on three core components for encompassing KBSE: (1) Extensible prototyping, a tailored form of evolutionary prototyping; this builds on mature UI prototypes and offers two extension steps for the anytime creation of core KBS prototypes (KB + core UI) and fully productive KBS (core KBS prototype + common framing functionality). (2) KBS UI patterns, which define reusable solutions for the core KBS UI/interaction; we provide a basic collection of such patterns in this work. (3) Suitable usability instruments for the assessment of the KBS artifacts. Therewith, we do not strive for ’yet another’ self-contained KBS engineering methodology. Rather, we motivate extending existing approaches with the proposed key components. We demonstrate this based on an agile KBSE model. For practical support, we introduce the tailored KBSE tool ProKEt. ProKEt offers a basic selection of KBS core UI patterns and corresponding configuration options out of the box; their further adaptation/extension is possible on various levels of expertise. For practical usability support, ProKEt offers facilities for quantitative and qualitative data collection. ProKEt explicitly fosters the suggested, intertwined development of UI and KB. For seamlessly integrating KA activities, it provides extension points for two selected external KA tools: KnowOF, a standard office-based KA environment, and KnowWE, a semantic wiki for collaborative KA. Therewith, ProKEt offers powerful support for encompassing, user-centered KBSE. Finally, based on the approach and the tool, we also developed a novel KBS type: Clarification KBS, a mashup of consultation and justification KBS modules. These denote a particularly suitable realization of participative KBS in highly specialized contexts and consequently require a specific design. In this thesis, apart from more common UI solutions, we also introduce KBS UI patterns especially tailored towards Clarification KBS. N2 - Das Interesse an wissensbasierten Systemen (WBS) in verschiedensten Fachdisziplinen und Anwendungskontexten wächst nach wie vor stetig. Das frühere Ziel — intelligente Software als Expertenersatz — verschiebt sich dabei allerdings kontinuierlich in Richtung partizipativer, nutzerzentrierter Anwendungen. Solche Systeme erlauben dem Benutzer, das vorhandene persönliche Hintergrundwissen in den Problemlösungsprozess mit einzubringen, um die Effizienz des Systems zu steigern und die User Experience zu verbessern. Konkret definieren wir für partizipative WBS die folgenden nichtfunktionalen Anforderungen: Transparenz (umfassende Vermittlung des Systemstatus), Konfigurierbarkeit (Anpassbarkeit an verschiedene Benutzer und Grad der Benutzerkontrolle und Explorationsmöglichkeit), Qualität sowohl der Wissensbasis als auch der Benutzeroberfläche und Fortentwickelbarkeit (Fähigkeit des WBS, analog zu den Kenntnissen seiner Nutzer zu reifen). Viele dieser Anforderungen hängen stark von den jeweiligen Nutzern ab. Im Umkehrschluss erfordert dies eine nutzerzentriertere Entwicklung solcher Systeme.
Die häufig sehr fachspezifischen Zieldomänen haben meist entsprechend komplexe Wissensbasen zur Folge. Dies verlangt erst recht nach einem wohlüberlegten, durchdachten UI- und Interaktionsdesign. Dem zum Trotz fokussieren aktuelle WBS Entwicklungsansätze jedoch nach wie vor auf die Wissensformalisierung, mit der Folge, dass oft nicht optimale, schlecht wiederverwendbare und nur teilweise (oder gar nicht) evaluierte WBS UI- und Interaktionslösungen entstehen. Die vorliegende Dissertation schlägt einen allumfassenderen WBS Entwicklungsansatz vor. Unter Berücksichtigung der starken, wechselseitigen Beeinflussung von Wissensbasis und UI ist dieser geprägt durch eine starke Verzahnung von Wissensbasis- und UI-Entwicklung. Der Ansatz stützt sich auf drei Kernkomponenten für die allumfassende Entwicklung wissensbasierter Systeme: (1) Extensible Prototyping, eine adaptierte Form des evolutionären Prototyping. Extensible Prototyping basiert auf hochentwickelten UI Prototypen und definiert zwei Erweiterungsschritte, um jederzeit WBS-Kernprototypen (Wissensbasis und Kern-UI) beziehungsweise voll funktionale wissensbasierte Anwendungen (WBS Kernprototyp und Rahmenfunktionalität) zu erstellen. (2) WBS UI Patterns. Diese Patterns, oder Muster, definieren wiederverwendbare Lösungen für die Kern-UI und -Interaktion. In dieser Arbeit stellen wir eine Sammlung grundlegender WBS UI Patterns vor. (3) Passende Usability-Techniken, die sich speziell für die Evaluation von WBS Software eignen. Insgesamt streben wir keine weitere, in sich geschlossene WBS Entwicklungsmethodologie an. Vielmehr motivieren wir, existierende, Wissensformalisierungs-lastige Ansätze um die vorgeschlagenen Kernkomponenten zu erweitern. Wir demonstrieren das am agilen Prozessmodell für wissensbasierte Systeme. Für die praktische Umsetzung des vorgestellten Ansatzes stellen wir außerdem das spezialisierte WBS Prototyping- und Softwareentwicklungswerkzeug ProKEt vor. ProKEt unterstützt eine Auswahl der interessantesten WBS UI Patterns sowie zugehöriger Konfigurationsoptionen. Deren weitere Anpassung beziehungsweise Erweiterung ist auf verschiedenen Expertise-Leveln möglich. Um ebenso die Anwendung von Usability-Techniken zu unterstützen, bietet ProKEt weiterhin Funktionalität für die quantitative und qualitative Datensammlung. Auch setzt ProKEt bewusst auf die strikt verzahnte Entwicklung von UI und Wissensbasis. Für die einfache und nahtlose Integration von Wissensformalisierung in den Gesamtprozess unterstützt ProKEt zwei externe Wissensformalisierungs-Werkzeuge: KnowOF, eine Wissensformalisierungsumgebung, welche standardisierte Office-Dokumente nutzt, und KnowWE, ein semantisches Wiki für die kollaborative Wissensformalisierung im Web. Damit ist ProKEt ein mächtiges Werkzeug für umfassende, nutzerzentrierte WBS Entwicklung. Mithilfe des Entwicklungsansatzes und des Werkzeugs ProKEt haben wir weiterhin einen neuartigen WBS Typ entwickelt: Wissensbasierte Klärungssysteme, im Wesentlichen ein Mashup aus Beratungs-, Erklärungs- und Rechtfertigungskomponente. Diese Systeme stellen eine angepasste Realisierung von partizipativen WBS für höchst fachliche Kontexte dar und verlangen entsprechend nach einem sehr speziellen Design. In der vorliegenden Arbeit stellen wir daher neben allgemeinen WBS UI Patterns auch einige spezialisierte Varianten für wissensbasierte Klärungssysteme vor.
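To make the notion of a consultation-style knowledge-based system more concrete, the following Python fragment sketches the smallest possible core: a declarative rule base interpreted by a generic question/answer UI loop, so that knowledge base and UI can evolve side by side. This is a minimal illustration only; the rule format and all names (RULES, QUESTIONS, consult) are hypothetical and are not taken from the ProKEt tool described above.

# Minimal sketch of a consultation-style KBS core: a small declarative rule
# base plus a generic UI loop. Everything here is a hypothetical illustration.

RULES = [
    # (required answers, derived solution)
    ({"fever": "yes", "cough": "yes"}, "possible flu"),
    ({"fever": "no", "cough": "yes"}, "possible cold"),
]
QUESTIONS = {
    "fever": "Do you have fever? (yes/no): ",
    "cough": "Do you have a cough? (yes/no): ",
}

def consult(ask):
    """UI loop: pose each defined question, then match answers against the rules."""
    answers = {q: ask(prompt).strip().lower() for q, prompt in QUESTIONS.items()}
    for condition, solution in RULES:
        if all(answers.get(k) == v for k, v in condition.items()):
            return solution
    return "no solution derived"

# Non-interactive demo: scripted answers stand in for real user input.
scripted = iter(["yes", "yes"])
print(consult(ask=lambda prompt: next(scripted)))

The point of the sketch is the separation: the rule base can grow without touching the UI loop, mirroring the intertwined yet decoupled KB/UI development advocated in the entry above.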
KW - Wissensbasiertes System KW - Knowledge-based Systems Engineering KW - Expert System KW - Usability KW - UI and Interaction Design KW - User Participation KW - Expertensystem Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-106072 SN - 978-3-95826-012-2 (print) SN - 978-3-95826-013-9 (online) PB - Würzburg University Press ER - TY - JOUR A1 - Sîrbu, Alina A1 - Becker, Martin A1 - Caminiti, Saverio A1 - De Baets, Bernard A1 - Elen, Bart A1 - Francis, Louise A1 - Gravino, Pietro A1 - Hotho, Andreas A1 - Ingarra, Stefano A1 - Loreto, Vittorio A1 - Molino, Andrea A1 - Mueller, Juergen A1 - Peters, Jan A1 - Ricchiuti, Ferdinando A1 - Saracino, Fabio A1 - Servedio, Vito D.P. A1 - Stumme, Gerd A1 - Theunis, Jan A1 - Tria, Francesca A1 - Van den Bossche, Joris T1 - Participatory Patterns in an International Air Quality Monitoring Initiative JF - PLoS ONE N2 - The issue of sustainability is at the top of the political and societal agenda, being considered of extreme importance and urgency. Human individual action impacts the environment both locally (e.g., local air/water quality, noise disturbance) and globally (e.g., climate change, resource use). Urban environments represent a crucial example, with an increasing realization that the most effective way of producing a change is involving the citizens themselves in monitoring campaigns (a citizen science bottom-up approach). This is possible by developing novel technologies and IT infrastructures enabling large citizen participation. Here, in the wider framework of one of the first such projects, we show results from an international competition where citizens were involved in mobile air pollution monitoring using low-cost sensing devices, combined with a web-based game to monitor perceived levels of pollution. Measures of shift in perceptions over the course of the campaign are provided, together with insights into participatory patterns emerging from this study. Interesting effects related to inertia and to direct involvement in measurement activities rather than indirect information exposure are also highlighted, indicating that direct involvement can enhance learning and environmental awareness. In the future, this could result in better adoption of policies towards decreasing pollution. KW - transport microenvironments KW - exposure KW - pollution KW - carbon Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-151379 VL - 10 IS - 8 ER - TY - JOUR A1 - Appel, Mirjam A1 - Scholz, Claus-Jürgen A1 - Müller, Tobias A1 - Dittrich, Marcus A1 - König, Christian A1 - Bockstaller, Marie A1 - Oguz, Tuba A1 - Khalili, Afshin A1 - Antwi-Adjei, Emmanuel A1 - Schauer, Tamas A1 - Margulies, Carla A1 - Tanimoto, Hiromu A1 - Yarali, Ayse T1 - Genome-Wide Association Analyses Point to Candidate Genes for Electric Shock Avoidance in Drosophila melanogaster JF - PLoS ONE N2 - Electric shock is a common stimulus for nociception research and the most widely used reinforcement in aversive associative learning experiments. Yet, nothing is known about the mechanisms it recruits at the periphery. To help fill this gap, we undertook a genome-wide association analysis using 38 inbred Drosophila melanogaster strains, which avoided shock to varying extents. We identified 514 genes whose expression levels and/or sequences covaried with shock avoidance scores. We independently scrutinized 14 of these genes using mutants, validating the effect of 7 of them on shock avoidance.
This emphasizes the value of our candidate gene list as a guide for follow-up research. In addition, by integrating our association results with external protein-protein interaction data we obtained a shock avoidance-associated network of 38 genes. Both this network and the original candidate list contained a substantial number of genes that affect mechanosensory bristles, which are hairlike organs distributed across the fly's body. These results may point to a potential role for mechanosensory bristles in shock sensation. Thus, we not only provide a first list of candidate genes for shock avoidance, but also point to an interesting new hypothesis on nociceptive mechanisms. KW - functional analysis KW - disruption project KW - natural variation KW - complex traits KW - networks KW - behavior KW - flies KW - temperature KW - genetics KW - painful Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-152006 VL - 10 IS - 5 ER - TY - THES A1 - Sun, Kaipeng T1 - Six Degrees of Freedom Object Pose Estimation with Fusion Data from a Time-of-flight Camera and a Color Camera T1 - 6DOF Posenschätzung durch Datenfusion einer Time-of-Flight-Kamera und einer Farbkamera N2 - Object six Degrees of Freedom (6DOF) pose estimation is a fundamental problem in many practical robotic applications, where the target or an obstacle with a simple or complex shape can move fast in cluttered environments. In this thesis, a 6DOF pose estimation algorithm is developed based on the fused data from a time-of-flight camera and a color camera. The algorithm is divided into two stages: an annealed particle filter based coarse pose estimation stage and a gradient descent based accurate pose optimization stage. In the first stage, each particle is evaluated with sparse representation. In this stage, the large inter-frame motion of the target can be well handled. In the second stage, the conventional range-data-based Iterative Closest Point algorithm is extended by incorporating the target appearance information and used for calculating the accurate pose by refining the coarse estimate from the first stage. For dealing with significant illumination variations during tracking, spherical harmonic illumination modeling is investigated and integrated into both stages. The robustness and accuracy of the proposed algorithm are demonstrated through experiments on various objects in both indoor and outdoor environments. Moreover, real-time performance can be achieved with graphics processing unit acceleration. N2 - Die 6DOF Posenschätzung von Objekten ist ein fundamentales Problem in vielen praktischen Robotikanwendungen, bei denen sich ein Ziel- oder Hindernisobjekt, einfacher oder komplexer Form, schnell in einer unstrukturierten schwierigen Umgebung bewegt. In dieser Forschungsarbeit wird zur Lösung des Problems ein 6DOF Posenschätzer entwickelt, der auf der Fusion von Daten einer Time-of-Flight-Kamera und einer Farbkamera beruht. Der Algorithmus ist in zwei Phasen unterteilt: Ein Annealed Partikel-Filter bestimmt eine grobe Posenschätzung, welche mittels eines Gradientenverfahrens in einer zweiten Phase optimiert wird. In der ersten Phase wird jeder Partikel mittels sparse representation ausgewertet; auf diese Weise kann eine große Inter-Frame-Bewegung des Zielobjektes gut behandelt werden. In der zweiten Phase wird die genaue Pose des Zielobjektes mittels des konventionellen, auf Entfernungsdaten beruhenden Iterative-Closest-Point-Algorithmus aus der groben Schätzung der ersten Stufe berechnet.
Der Algorithmus wurde dabei erweitert, so dass auch Informationen über das äußere Erscheinungsbild des Zielobjektes verwendet werden. Zur Kompensation von signifikanten Beleuchtungsschwankungen während des Trackings wurde eine Modellierung der Ausleuchtung mittels Kugelflächenfunktionen erforscht und in beide Stufen der Posenschätzung integriert. Die Leistungsfähigkeit, Robustheit und Genauigkeit des entwickelten Algorithmus wurden in Experimenten im Innen- und Außenbereich mit verschiedenen Zielobjekten gezeigt. Zudem konnte gezeigt werden, dass die Schätzung mit Hilfe von Grafikprozessoren in Echtzeit möglich ist. T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 10 KW - Mustererkennung KW - Maschinelles Sehen KW - Sensor KW - 3D Vision KW - 6DOF Pose Estimation KW - Visual Tracking KW - Pattern Recognition KW - Computer Vision KW - 3D Sensor Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-105089 SN - 978-3-923959-97-6 ER - TY - RPRT A1 - Kounev, Samuel A1 - Brosig, Fabian A1 - Huber, Nikolaus T1 - The Descartes Modeling Language N2 - This technical report introduces the Descartes Modeling Language (DML), a new architecture-level modeling language for modeling Quality-of-Service (QoS) and resource management related aspects of modern dynamic IT systems, infrastructures and services. DML is designed to serve as a basis for self-aware resource management during operation, ensuring that system QoS requirements are continuously satisfied while infrastructure resources are utilized as efficiently as possible. KW - Ressourcenmanagement KW - Software Engineering KW - Resource and Performance Management KW - Software Performance Engineering KW - Software Performance Modeling KW - Performance Management KW - Quality-of-Service Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-104887 ER - TY - THES A1 - Klein, Dominik Werner T1 - Design and Evaluation of Components for Future Internet Architectures T1 - Entwurf und Bewertung von Komponenten für zukünftige Internet Architekturen N2 - Die derzeitige Internetarchitektur wurde nicht in einem geplanten Prozess konzipiert und entwickelt, sondern hat vielmehr eine evolutionsartige Entwicklung hinter sich. Auslöser für die jeweiligen Evolutionsschritte waren dabei meist aufstrebende Anwendungen, welche neue Anforderungen an die zugrundeliegende Netzarchitektur gestellt haben. Um diese Anforderungen zu erfüllen, wurden häufig neuartige Dienste oder Protokolle spezifiziert und in die bestehende Architektur integriert. Dieser Prozess ist jedoch meist mit hohem Aufwand verbunden und daher sehr träge, was die Entwicklung und Verbreitung innovativer Dienste beeinträchtigt. Derzeit diskutierte Konzepte wie Software-Defined Networking (SDN) oder Netzvirtualisierung (NV) werden als eine Möglichkeit angesehen, die Altlasten der bestehenden Internetarchitektur zu lösen. Beiden Konzepten gemein ist die Idee, logische Netze über dem physikalischen Substrat zu betreiben. Diese logischen Netze sind hochdynamisch und können so flexibel an die Anforderungen der jeweiligen Anwendungen angepasst werden. Insbesondere erlaubt das Konzept der Virtualisierung intelligentere Netzknoten, was innovative neue Anwendungsfälle ermöglicht. Ein häufig in diesem Zusammenhang diskutierter Anwendungsfall ist die Mobilität sowohl von Endgeräten als auch von Diensten an sich.
Die Mobilität der Dienste wird hierbei ausgenutzt, um die Zugriffsverzögerung oder die belegten Ressourcen im Netz zu reduzieren, indem die Dienste zum Beispiel in für den Nutzer geographisch nahe Datenzentren migriert werden. Neben den reinen Mechanismen bezüglich Dienst- und Endgerätemobilität sind in diesem Zusammenhang auch geeignete Überwachungslösungen relevant, welche die vom Nutzer wahrgenommene Dienstgüte bewerten können. Diese Lösungen liefern wichtige Entscheidungshilfen für die Migration oder überwachen mögliche Effekte der Migration auf die vom Nutzer erfahrene Dienstgüte. Im Falle von Video-Streaming ermöglicht ein solcher Anwendungsfall die flexible Anpassung der Streaming-Topologie für mobile Nutzer, um so die Videoqualität unabhängig vom Zugangsnetz aufrechterhalten zu können. Im Rahmen dieser Doktorarbeit wird der beschriebene Anwendungsfall am Beispiel einer Video-Streaming-Anwendung näher analysiert und auftretende Herausforderungen werden diskutiert. Des Weiteren werden Lösungsansätze vorgestellt und bezüglich ihrer Effizienz ausgewertet. Im Detail beschäftigt sich die Arbeit mit der Leistungsanalyse von Mechanismen für die Dienstmobilität und entwickelt eine Architektur zur Optimierung der Dienstmobilität. Im Bereich Endgerätemobilität werden Verbesserungen entwickelt, welche die Latenz zwischen Endgerät und Dienst reduzieren oder die Konnektivität unabhängig vom Zugangsnetz gewährleisten. Im letzten Teilbereich wird eine Lösung zur Überwachung der Videoqualität im Netz entwickelt und bezüglich ihrer Genauigkeit analysiert. N2 - Today’s Internet architecture was not designed from scratch but was driven by new services that emerged during its development. Hence, it is often described as a patchwork where additional patches are applied in case new services require modifications to the existing architecture. This process, however, is rather slow and hinders the development of innovative network services with certain architecture or network requirements. Currently discussed technologies like Software-Defined Networking (SDN) or Network Virtualization (NV) are seen as key enabling technologies to overcome this rigid best-effort legacy of the Internet. Both technologies offer the possibility to create virtual networks that accommodate the specific needs of certain services. These logical networks are operated on top of a physical substrate and facilitate flexible network resource allocation, as physical resources can be added and removed depending on the current network and load situation. In addition, the clear separation and isolation of networks foster the development of application-aware networks that fulfill the special requirements of emerging applications. A prominent use case that benefits from these extended capabilities of the network is service component mobility. Services hosted on Virtual Machines (VMs) follow their consuming mobile endpoints, so that access latency as well as consumed network resources are reduced. Especially for applications like video streaming, which consume a large fraction of the available resources, this is an important means to relieve resource constraints and eventually provide better service quality. Service and endpoint mobility both allow an adaptation of the used paths between an offered service, i.e., video streaming, and the consuming users in case the service quality drops due to network problems.
To make evidence-based adaptations in case of quality drops, a scalable monitoring component is required that is able to monitor the service quality for video streaming applications with reliable accuracy. This monograph details challenges that arise when deploying a certain service, i.e., video streaming, in a future virtualized network architecture and discusses possible solutions. In particular, this work evaluates the performance of mechanisms enabling service mobility and presents an optimized architecture for service mobility. Concerning endpoint mobility, improvements are developed that reduce the latency between endpoints and consumed services and ensure connectivity regardless of the used mobile access network. In the last part, a network-based video quality monitoring solution is developed and its accuracy is evaluated. T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 01/14 KW - Leistungsbewertung KW - Netzwerkmanagement KW - Virtuelles Netzwerk KW - Mobiles Internet KW - Service Mobility KW - Endpoint Mobility KW - Video Quality Monitoring Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-93134 SN - 1432-8801 ER - TY - THES A1 - Fink, Martin T1 - Crossings, Curves, and Constraints in Graph Drawing T1 - Kreuzungen, Kurven und Constraints beim Zeichnen von Graphen N2 - In many cases, problems, data, or information can be modeled as graphs. Graphs can be used as a modeling tool wherever connections between distinguishable objects occur. Any graph consists of a set of objects, called vertices, and a set of connections, called edges, such that any edge connects a pair of vertices. For example, a social network can be modeled by a graph by transforming the users of the network into vertices and friendship relations between users into edges. Also physical networks like computer networks or transportation networks, for example, the metro network of a city, can be seen as graphs. To make graphs, and thereby the modeled data, well understandable for users, we need a visualization. Graph drawing deals with algorithms for visualizing graphs. In this thesis, the use of crossings and curves in particular is investigated for graph drawing problems under additional constraints. The constraints that occur in the problems investigated in this thesis especially restrict the positions of (a part of) the vertices; this is done either as a hard constraint or as an optimization criterion. N2 - Viele Probleme, Informationen oder Daten lassen sich mit Hilfe von Graphen modellieren. Graphen können überall dort eingesetzt werden, wo Verbindungen zwischen unterscheidbaren Objekten auftreten. Ein Graph besteht aus einer Menge von Objekten, genannt Knoten, und einer Menge von Verbindungen, genannt Kanten, zwischen je einem Paar von Knoten. Ein soziales Netzwerk lässt sich etwa als Graph modellieren, indem die teilnehmenden Personen als Knoten und Freundschaftsbeziehungen als Kanten dargestellt werden. Physikalische Netzwerke wie etwa Computernetze oder Transportnetze - wie beispielsweise das U-Bahnliniennetz einer Stadt - lassen sich ebenfalls als Graph auffassen. Um Graphen und die damit modellierten Daten gut erfassen zu können, benötigen wir eine Visualisierung. Das Graphenzeichnen befasst sich mit dem Entwickeln von Algorithmen zur Visualisierung von Graphen. Diese Dissertation beschäftigt sich insbesondere mit dem Einsatz von Kreuzungen und Kurven beim Zeichnen von Graphen unter Nebenbedingungen (Constraints).
Die in den untersuchten Problemen auftretenden Nebenbedingungen sorgen unter anderem dafür, dass die Lage eines Teils der Knoten - als feste Anforderung oder als Optimierungskriterium - vorgegeben ist. KW - Graphenzeichnen KW - Kreuzung KW - Kurve KW - Graph KW - graph drawing KW - crossing minimization KW - curves KW - labeling KW - metro map KW - Kreuzungsminimierung KW - Landkartenbeschriftung KW - U-Bahnlinienplan Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-98235 SN - 978-3-95826-002-3 (print) SN - 978-3-95826-003-0 (online) PB - Würzburg University Press ER - TY - THES A1 - Jarschel, Michael T1 - An Assessment of Applications and Performance Analysis of Software Defined Networking T1 - Eine Untersuchung von Anwendungen und Leistungsbewertung von Software Defined Networking N2 - With the introduction of OpenFlow by Stanford University in 2008, a process began in the area of network research that questions the predominant approach of fully distributed network control. OpenFlow is a communication protocol that allows the externalization of the network control plane from the network devices, such as routers, and its realization as a logically centralized software entity. For this concept, the term "Software Defined Networking" (SDN) was coined during scientific discourse. For network operators, this concept has several advantages. The two most important ones can be summarized as cost savings and flexibility. Firstly, the uniform interface to the network hardware ("Southbound API"), as implemented by OpenFlow, makes it possible to combine devices and software from different manufacturers, which increases the innovation and price pressure on them. Secondly, the realization of the network control plane as freely programmable software with open interfaces ("Northbound API") provides the opportunity to adapt it to the individual circumstances of the operator's network and to exchange information with the applications it serves. This allows the network to be more flexible, to react more quickly to changing circumstances, and to transport traffic more effectively, tailored to the user’s "Quality of Experience" (QoE). The approach of a separate network control layer for packet-based networks is not new and has already been proposed several times in the past. Therefore, the SDN approach has raised many questions about its feasibility in terms of efficiency and applicability. These questions are caused to some extent by the fact that there is no generally accepted definition of the SDN concept to date. It is therefore a part of this thesis to derive such a definition. In addition, several of the open issues are investigated. These investigations follow three aspects: performance evaluation of Software Defined Networking, applications on the SDN control layer, and the usability of the SDN Northbound API for creating application-awareness in network operation. Performance evaluation of Software Defined Networking: The question of the efficiency of an SDN-based system was one of the most important from the beginning. In this thesis, experimental measurements of the performance of OpenFlow-enabled switch hardware and control software were conducted for the purpose of answering this question. The results of these measurements were used as input parameters for establishing an analytical model of the reactive SDN approach.
Through the model it could be determined that the performance of the software control layer, often called "Controller", is crucial for the overall performance of the system, but that the approach is generally viable. Based on this finding, a software tool for analyzing the performance of SDN controllers was developed. This software allows the emulation of the forwarding layer of an SDN network towards the control software and can thus determine its performance in different situations and configurations. The measurements with this software showed that there are quite significant differences in the behavior of different control software implementations. Among other things, it has been shown that some implementations behave differently with different switches, in particular in terms of message processing speed. Under certain circumstances, this can lead to network failures. Applications on the SDN control layer: The core of software defined networking is formed by the intelligent network applications that operate on the control layer. However, their development is still in its infancy, and little is known about the technical possibilities and their limitations. Therefore, the relationship between an SDN-based and a classical implementation of a network function is investigated in this thesis. This function is the monitoring of network links and the traffic they carry. A typical approach for this task was built based on wiretapping and specialized measurement hardware and compared with an implementation based on OpenFlow switches and a special SDN control application. The results of the comparison show that the SDN version can compete with the traditional measurement set-up in terms of measurement accuracy for bandwidth and delay estimation. However, a compromise has to be found for measurements below the millisecond range. Another question regarding the SDN control applications is whether and how well they can solve existing problems in networks. Two programs have been developed based on SDN in this thesis to solve two typical network issues. The first is the tool "IPOM", which gives a researcher who is confined to a fixed physical test network topology considerably more flexibility in studying the effects of different network structures. The second software provides an interface between the Cloud Orchestration Software "OpenNebula" and an OpenFlow controller. The purpose of this software was to investigate experimentally whether a pre-notification of the network of an impending relocation of a virtual service in a data center is sufficient to ensure the continuous operation of that service. This was demonstrated using the example of a video service. Usability of the SDN Northbound API for creating application-awareness in network operation: Currently, the fact that the network and the applications that run on it are developed and operated separately leads to problems in network operation. With the Northbound API, SDN offers an open interface that enables the exchange of information between both worlds during operation. One aim of this thesis was to investigate whether this interface can be exploited so that the QoE experienced by the user can be maintained at a high level. For this purpose, the QoE influence factors of a challenging application were determined by means of a subjective survey study. The application is cloud gaming, in which the calculation of video game environments takes place in the cloud and the result is transported via video over the network to the user.
It was shown that apart from the most important QoS-influencing factor, i.e., packet loss on the downlink, the game type and its speed also play a role. This demonstrates that in addition to QoS the application state is important and should be communicated to the network. Since an implementation of such a state-conscious SDN was not possible for the example of cloud gaming due to its proprietary implementation, the application "YouTube video streaming" was chosen as an alternative in this thesis. For this application, status information is retrievable via the "YoMo" tool and can be used for network control. It was shown that an SDN-based implementation of an application-aware network has distinct advantages over traditional network management methods and that the user quality can be maintained in spite of disturbances. N2 - Mit der Vorstellung von OpenFlow durch die Stanford Universität im Jahre 2008 begann ein Prozess im Bereich der Netzwerkforschung, der den vorherrschenden Ansatz der völlig verteilten Netzsteuerung in Frage stellt. Bei OpenFlow handelt es sich um ein Kommunikationsprotokoll, das es ermöglicht, die Netzsteuerung aus den Netzwerkgeräten, z. B. einem Router, herauszulösen und als logisch-zentralisierte Einheit als reine Software zu realisieren. Für dieses Konzept wurde im Verlauf des wissenschaftlichen Diskurses der Begriff „Software Defined Networking“ (SDN) geprägt. Für die Betreiber von Netzwerken hat dieses Konzept verschiedene Vorteile. Die beiden wichtigsten lassen sich unter den Punkten Kostenersparnis und Flexibilität zusammenfassen. Zum einen ist es durch die einheitliche Schnittstelle zur Netzwerkhardware („Southbound-API“), wie sie von OpenFlow realisiert wird, möglich, Geräte und Software verschiedener Hersteller miteinander zu kombinieren, was den Innovations- und Preisdruck auf diese erhöht. Zum anderen besteht durch die Realisierung der Netzwerksteuerung als frei programmierbare Software mit offenen Schnittstellen („Northbound-API“) die Möglichkeit, diese an die individuellen Gegebenheiten des Betreibernetzes anzupassen sowie Informationen mit den Applikationen auszutauschen, die es bedient. Dadurch kann das Netz viel flexibler und schneller auf sich ändernde Gegebenheiten reagieren und den Verkehr effektiver und auf die vom Nutzer erfahrene „Quality of Experience“ (QoE) abgestimmt transportieren. Der Ansatz einer von der Hardware getrennten Netzwerkkontrollschicht für paketbasierte Netze ist nicht neu und wurde in der Vergangenheit bereits mehrfach vorgeschlagen. Daher sah und sieht sich der Ansatz von SDN vielen Fragen nach seiner Realisierbarkeit in Bezug auf Leistungsfähigkeit und Anwendbarkeit gegenüber. Diese Fragen rühren zum Teil auch daher, dass es bis dato keine allgemein anerkannte Definition des SDN-Begriffes gibt. Daher ist es ein Teil dieser Doktorarbeit, eine solche Definition herzuleiten. Darüber hinaus werden verschiedene der offenen Fragen auf ihre Lösung hin untersucht. Diese Untersuchungen folgen drei Teilaspekten: Leistungsbewertung von Software Defined Networking, Anwendungen auf der SDN Kontrollschicht und die Nutzbarkeit der SDN Northbound-API zum Herstellen eines Applikationsbewusstseins im Netzbetrieb. Leistungsbewertung von Software Defined Networking: Die Frage nach der Leistungsfähigkeit eines SDN-basierten Systems war von Beginn an eine der wichtigsten. In dieser Doktorarbeit wurden zu diesem Zweck experimentelle Messungen zur Leistung OpenFlow-fähiger Switch Hardware und Kontrollsoftware durchgeführt.
Die Ergebnisse dieser Messungen dienten als Eingabeparameter zur Aufstellung eines analytischen Modells des reaktiven SDN Ansatzes. Durch das Modell ließ sich bestimmen, dass die Leistungsfähigkeit der Software Kontrollschicht, oft „Controller“ genannt, von entscheidender Bedeutung für die Gesamtleistung des Systems ist, der Ansatz insgesamt aber tragfähig ist. Basierend auf dieser Erkenntnis wurde eine Software zur Analyse der Leistungsfähigkeit von SDN Controllern entwickelt. Diese Software ermöglicht die Emulation der Weiterleitungsschicht eines SDN Netzes gegenüber der Kontrollsoftware und kann so deren Leistungsdaten in verschiedenen Situationen und Konfigurationen bestimmen. Die Messungen mit dieser Software zeigten, dass es durchaus gravierende Unterschiede im Verhalten verschiedener Kontrollsoftware-Implementierungen gibt. Unter anderem konnte gezeigt werden, dass einige gegenüber verschiedenen Switches ein unterschiedliches Verhalten aufweisen, insbesondere in der Abarbeitungsgeschwindigkeit von Nachrichten. Dieser Umstand kann in bestimmten Fällen zu Netzausfällen führen. Anwendungen auf der SDN Kontrollschicht: Das Kernstück von Software Defined Networking sind die intelligenten Netzwerkapplikationen, die auf der Kontrollschicht betrieben werden. Allerdings steht deren Entwicklung noch am Anfang und es ist wenig über die technischen Möglichkeiten und deren Grenzen bekannt. Daher wurde in dieser Doktorarbeit der Frage nachgegangen, in welchem Verhältnis sich die Realisierung einer Netzwerkaufgabe mit SDN zu deren klassischer Umsetzung bewegt. Bei dieser Aufgabe handelt es sich im Speziellen um das Monitoring von Links und dem darauf befindlichen Netzwerkverkehr. Hier wurde ein typischer Ansatz für diese Aufgabe basierend auf Wiretapping und spezieller Messhardware aufgebaut und mit einer Implementierung durch OpenFlow Switches und einer speziellen SDN Kontrollapplikation verglichen. Die Ergebnisse des Vergleiches zeigen, dass die SDN Variante in Bezug auf die Messgenauigkeit von Bandbreiten- und Verzögerungsabschätzung durchaus mit dem traditionellen Messaufbau mithalten kann. Jedoch müssen Abstriche bezüglich der Verzögerungsmessung unterhalb des Millisekundenbereiches gemacht werden. Eine weitere Frage bezüglich der SDN Kontrollapplikationen besteht darin, ob und wie gut sich bestehende Probleme in Netzwerken jetzt mit SDN lösen lassen. Diesbezüglich wurden in dieser Doktorarbeit zwei Programme auf SDN Basis entwickelt, die zwei typische Netzwerkprobleme lösen. Zum einen das Tool „IPOM“, das es ermöglicht, via SDN auf Basis einer festen physikalischen Testnetz-Topologie eine andere zu emulieren und so einem Forscher auf dem Testnetz deutlich mehr Flexibilität bei der Untersuchung von Auswirkungen anderer Netzstrukturen zu ermöglichen. Die zweite Software stellt eine Schnittstelle zwischen der Cloud Orchestration Software „OpenNebula“ und einem OpenFlow Controller dar. Der Zweck dieser Software war es, experimentell zu untersuchen, ob eine Vorwarnung des Netzes vor einem bevorstehenden Umzug eines virtuellen Dienstes in einem Datenzentrum hinreichend ist, um den kontinuierlichen Betrieb dieses Dienstes zu gewährleisten. Dies konnte am Beispiel eines Videodienstes demonstriert werden. Nutzbarkeit der SDN Northbound-API zum Herstellen eines Applikationsbewusstseins im Netzbetrieb: Aktuell führt die Tatsache, dass Netz und Applikationen, die darauf laufen, separat entwickelt und betrieben werden, zu Problemen im Netzbetrieb.
SDN bietet mit der Northbound-API eine Schnittstelle an, die im Betrieb den Austausch von Informationen zwischen beiden Welten ermöglicht. Ein Ziel dieser Doktorarbeit war es zu untersuchen, ob sich diese Schnittstelle so ausnutzen lässt, dass die vom Nutzer erfahrene QoE hoch gehalten werden kann. Zu diesem Zwecke wurden zunächst mittels einer subjektiven Umfragestudie die Einflussfaktoren auf eine anspruchsvolle Applikation bestimmt. Bei der Applikation handelt es sich um Cloud Gaming, bei dem die Berechnung der Videospielumgebung in der Cloud stattfindet und das Ergebnis per Video über das Netz transportiert wird. Hier konnte gezeigt werden, dass neben dem wichtigsten QoS-Einflussfaktor, dem Paketverlust auf dem Downlink, auch der Spieltyp und dessen Geschwindigkeit eine Rolle spielen. Dies belegt, dass neben QoS auch der Applikationszustand wichtig ist und dem Netz mitgeteilt werden sollte. Da eine Umsetzung eines solchen zustandsbewussten SDNs für das Beispiel Cloud Gaming auf Grund dessen proprietärer Implementierung nicht möglich war, wurde in dieser Doktorarbeit auf die Applikation YouTube Video Streaming ausgewichen. Für diese sind mit Hilfe des Tools „YoMo“ Statusinformationen abfragbar und können zur Netzsteuerung genutzt werden. Es konnte gezeigt werden, dass eine SDN-basierte Realisierung eines applikationsbewussten Netzes deutliche Vorteile gegenüber klassischen Netzwerk-Management-Methoden aufweist und die Nutzerqualität trotz Störfaktoren erhalten werden kann. T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 03/14 KW - Leistungsbewertung KW - Netzwerk KW - Software Defined Networking KW - Quality of Experience KW - Cloud Gaming KW - Performance Evaluation Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-100795 SN - 1432-8801 ER - TY - THES A1 - Hock, David Rogér T1 - Analysis and Optimization of Resilient Routing in Core Communication Networks T1 - Analyse und Optimierung von ausfallsicherem Routing in Kernkommunikationsnetzen N2 - Routing is one of the most important issues in any communication network. It defines on which path packets are transmitted from the source of a connection to the destination. It allows controlling the distribution of flows between different locations in the network and is thereby a means to influence the load distribution or to meet certain constraints imposed by particular applications. As failures in communication networks occur regularly and cannot be completely avoided, routing is required to be resilient against such outages, i.e., routing still has to be able to forward packets on backup paths even if primary paths are not working any more. Throughout the years, various routing technologies have been introduced that are very different in their control structure, in their way of working, and in their ability to handle certain failure cases. Each of the different routing approaches opens up its own specific questions regarding configuration, optimization, and inclusion of resilience issues. This monograph investigates, with the example of three particular routing technologies, some concrete issues regarding the analysis and optimization of resilience. It thereby contributes to a better general, technology-independent understanding of these approaches and of their diverse potential for use in future network architectures. The first considered routing type is decentralized intra-domain routing based on administrative IP link costs and the shortest path principle.
Typical examples are today's common intra-domain routing protocols OSPF and IS-IS. This type of routing includes automatic restoration abilities in case of failures, which makes it in general very robust even in the case of severe network outages including several failed components. Furthermore, special IP Fast Reroute mechanisms allow for a faster reaction to outages. For routing based on link costs, traffic engineering, e.g. the optimization of the maximum relative link load in the network, can be done indirectly by changing the administrative link costs to adequate values. The second considered routing type, MPLS-based routing, is based on the a priori configuration of primary and backup paths, so-called Label Switched Paths. The routing layout of MPLS paths offers more freedom compared to IP-based routing as it is not restricted by any shortest path constraints; arbitrary paths can be set up. However, this in general involves a higher configuration effort. Finally, in the third considered routing type, typically centralized routing using a Software Defined Networking (SDN) architecture, simple switches only forward packets according to routing decisions made by centralized controller units. SDN-based routing layouts offer the same freedom as explicit paths configured using MPLS. In case of a failure, new rules can be set up by the controllers to continue the routing in the reduced topology. However, new resilience issues arise, caused by the centralized architecture. If controllers are not reachable anymore, the forwarding rules in the single nodes cannot be adapted. This might render rerouting infeasible in case of connection problems in severe failure scenarios. N2 - Routing stellt eine der zentralen Aufgaben in Kommunikationsnetzen dar. Es bestimmt darüber, auf welchem Weg Verkehr von der Quelle zum Ziel transportiert wird. Durch geschicktes Routing kann eine Verteilung der Verkehrsflüsse zum Beispiel zur Lastverteilung erreicht werden. Da Fehler in Kommunikationsnetzen nicht vollständig verhindert werden können, muss Routing insbesondere ausfallsicher sein, d.h., im Falle von Fehlern im Netz muss das Routing weiterhin in der Lage sein, Pakete auf alternativen Pfaden zum Ziel zu transportieren. Es existieren verschiedene gängige Routingverfahren und Technologien, die sich hinsichtlich ihrer Arbeitsweise, ihrer Kontrollstrukturen und ihrer Funktionalität in bestimmten Fehlerszenarien unterscheiden. Für diese verschiedenen Ansätze ergeben sich jeweils eigene Fragestellungen hinsichtlich der Konfiguration, der Optimierung und der Berücksichtigung von Ausfallsicherheit. Diese Doktorarbeit behandelt am Beispiel bestimmter Technologien einige konkrete Fragestellungen zur Analyse und Optimierung der Ausfallsicherheit. Sie liefert damit einen Beitrag zum besseren generellen Verständnis verschiedenartiger Routingansätze und deren unterschiedlichen Potentials für den Einsatz in zukünftigen Netzarchitekturen. Zuerst wird dezentrales Routing behandelt, basierend auf administrativen Linkgewichten und dem Prinzip der kürzesten Pfade, wie es beispielsweise in den Protokollen IS-IS und OSPF genutzt wird. Diese Routingverfahren beinhalten automatische Rekonvergenz-Mechanismen, um im Falle von Fehlern auf der verbleibenden Netzstruktur weiterhin einen Transport von Verkehr zu ermöglichen. Spezielle IP-Fast-Reroute-Mechanismen ermöglichen zudem eine schnelle Reaktion im Falle von Fehlern.
Routing basierend auf Linkgewichten lässt sich nur indirekt durch die Wahl geeigneter Gewichte beeinflussen und optimieren. Die zweite in der Doktorarbeit behandelte Routingart ist MPLS-basiertes Routing, bei dem Labels für Pakete verwendet werden und Pakete anhand sogenannter vorkonfigurierter Label Switched Paths weitergeleitet werden. Diese Technologie bietet mehr Freiheiten bei der Wahl des Pfadlayouts, was aber wiederum im Allgemeinen einen erhöhten Konfigurationsaufwand mit sich bringt. Schließlich greift die Doktorarbeit auch das Routing in SDN-Netzen auf. Dort erfolgt eine Trennung von Control Plane und Data Plane, so dass einzelne dedizierte Controller die Routingentscheidungen festlegen und ansonsten einfache Switches mit reduzierter Komplexität den Verkehr lediglich entsprechend der festgelegten Regeln weiterleiten. Dies ermöglicht die größte Freiheit bei der Konfiguration des Routings, bringt aber wiederum neue Fragestellungen, bedingt durch die zentralen Kontrolleinheiten, mit sich. T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 02/14 KW - Leistungsbewertung KW - Verteiltes System KW - Routing KW - Netzwerk KW - Optimization KW - Software Defined Networking KW - Optimierung Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-101681 SN - 1432-8801 ER - TY - JOUR A1 - Hurtienne, Jörn T1 - Inter-coder reliability of categorising force-dynamic events in human-technology interaction N2 - Two studies are reported that investigate how readily accessible and applicable ten force-dynamic categories are to novices in describing short episodes of human-technology interaction (Study 1) and that establish a measure of inter-coder reliability when re-classifying these episodes into force-dynamic categories (Study 2). The results of the first study show that people can easily and confidently relate their experiences with technology to the definitions of force-dynamic events (e.g. “The driver released the handbrake” as an example of restraint removal). The results of the second study show moderate agreement between four expert coders across all ten force-dynamic categories (Cohen’s kappa = .59) when re-classifying these episodes. Agreement values for single force-dynamic categories ranged between ‘fair’ and ‘almost perfect’, i.e. between kappa = .30 and .95. Agreement with the originally intended classifications of study 1 was higher than the pure inter-coder reliabilities. Single coders achieved an average kappa of .71, indicating substantial agreement. Using more than one coder increased kappas to almost perfect: up to .87 for four coders. A qualitative analysis of the predicted versus the observed number of category confusions revealed that about half of the category disagreement could be predicted from strong overlaps in the definitions of force-dynamic categories. From the quantitative and qualitative results, guidelines are derived to aid the better training of coders in order to increase inter-coder reliability. KW - inter-coder reliability KW - force dynamics KW - image schemas KW - human-technology interaction Y1 - 2013 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-194127 SN - 2197-2796 SN - 2197-2788 N1 - This publication is with permission of the rights owner freely accessible due to an Alliance licence and a national licence (funded by the DFG, German Research Foundation) respectively.
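Cohen's kappa, as reported in the entry above, corrects the observed agreement between two coders for the agreement expected by chance. The following Python sketch shows the standard two-coder computation; the category labels and ratings are invented for illustration and are not the study's data.

# Illustrative computation of Cohen's kappa for two coders. The labels below
# are made-up force-dynamic-style categories, not data from the study above.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement under independence, from the marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1.0 - p_e)

a = ["blockage", "removal", "blockage", "attraction", "removal"]
b = ["blockage", "removal", "attraction", "attraction", "removal"]
print(f"kappa = {cohens_kappa(a, b):.2f}")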
VL - 1 IS - 1 ER - TY - THES A1 - Lehrieder, Frank T1 - Performance Evaluation and Optimization of Content Distribution using Overlay Networks T1 - Leistungsbewertung und Optimierung von Overlay Netzwerken zum Verteilen großer Datenmengen N2 - The work presents a performance evaluation and optimization of so-called overlay networks for content distribution in the Internet. Chapter 1 describes the importance that such networks have in today's Internet, for example, for the transmission of video content. The focus of this work is on overlay networks based on the peer-to-peer principle. These are characterized by the fact that users who download content also contribute to the distribution process by sharing parts of the data with other users. This enables efficient content distribution because each user not only consumes resources in the system, but also provides its own resources. Chapter 2 of the monograph contains a detailed description of the functionality of today's most popular overlay network, BitTorrent. It explains the various components and their interaction. This is followed by an illustration of why such overlay networks are problematic for Internet service providers (ISPs). The reason lies in the large amount of inter-ISP traffic that is produced by these overlay networks. Since this inter-ISP traffic leads to high costs for ISPs, they try to reduce it by improved mechanisms for overlay networks. One optimization approach is the use of topology awareness within the overlay networks. It provides users of the overlay networks with information about the underlying physical network topology. This allows them to avoid inter-ISP traffic by preferentially exchanging data with other users that are connected to the same ISP. Another approach to save inter-ISP traffic is caching. In this case, the ISP provides additional computers in its network, called caches, which store copies of popular content. The users of this ISP can then obtain such content from the cache. This avoids retrieving the content from locations outside of the ISP's network and thus saves costly inter-ISP traffic. In the third chapter of the thesis, the results of a comprehensive measurement study of overlay networks, as they can be found in today's Internet, are presented. After a short description of the measurement methodology, the results of the measurements are described. These results contain data on a variety of characteristics of current P2P overlay networks in the Internet. These include the popularity of content, i.e., how many users are interested in specific content, the evolution of the popularity, and the size of the files. The distribution of users within the Internet is investigated in detail. Special attention is given to the number of users that exchange a particular file within the same ISP. On the basis of these measurement results, an estimation of the traffic savings that can be achieved by topology awareness is derived. This new estimation is of scientific and practical importance, since it is not limited to individual ISPs and files, but considers the whole Internet and the total amount of data exchanged in overlay networks. Finally, the characteristics of regional content are considered, whose popularity is limited to certain parts of the Internet. This is, for example, the case for videos in German, Italian, or French. Chapter 4 of the thesis is devoted to the optimization of overlay networks for content distribution through caching.
TY - THES A1 - Lehrieder, Frank T1 - Performance Evaluation and Optimization of Content Distribution using Overlay Networks T1 - Leistungsbewertung und Optimierung von Overlay Netzwerken zum Verteilen großer Datenmengen N2 - The work presents a performance evaluation and optimization of so-called overlay networks for content distribution in the Internet. Chapter 1 describes the importance that such networks have in today's Internet, for example for the transmission of video content. The focus of this work is on overlay networks based on the peer-to-peer principle. These are characterized by the fact that users who download content also contribute to the distribution process by sharing parts of the data with other users. This enables efficient content distribution because each user not only consumes resources in the system but also provides their own resources. Chapter 2 of the monograph contains a detailed description of the functionality of BitTorrent, today's most popular overlay network. It explains the various components and their interaction. This is followed by an illustration of why such overlay networks are problematic for Internet service providers (ISPs). The reason lies in the large amount of inter-ISP traffic that these overlay networks produce. Since this inter-ISP traffic leads to high costs for ISPs, they try to reduce it through improved overlay mechanisms. One optimization approach is the use of topology awareness within the overlay networks. It provides users of the overlay networks with information about the underlying physical network topology, which allows them to avoid inter-ISP traffic by exchanging data preferentially with other users connected to the same ISP. Another approach to saving inter-ISP traffic is caching. In this case the ISP provides additional computers in its network, called caches, which store copies of popular content. The users of this ISP can then obtain such content from the cache. This avoids the content having to be retrieved repeatedly from locations outside of the ISP's network and thus saves costly inter-ISP traffic. The third chapter of the thesis presents the results of a comprehensive measurement study of the overlay networks found in today's Internet. After a short description of the measurement methodology, the results of the measurements are described. They cover a variety of characteristics of current P2P overlay networks, including the popularity of content, i.e., how many users are interested in specific content, the evolution of this popularity over time, and the size of the files. The distribution of users within the Internet is investigated in detail, with special attention to the number of users that exchange a particular file within the same ISP. On the basis of these measurement results, an estimate of the traffic savings that can be achieved by topology awareness is derived. This new estimate is of scientific and practical importance, since it is not limited to individual ISPs and files but considers the whole Internet and the total amount of data exchanged in overlay networks. Finally, the characteristics of regional content are considered, whose popularity is limited to certain parts of the Internet. This is the case, for example, for videos in German, Italian, or French. Chapter 4 of the thesis is devoted to the optimization of overlay networks for content distribution through caching. It presents a deterministic flow model that describes the influence of caches. On the basis of this model, it derives an estimate of the inter-ISP traffic that an overlay network generates and of the share that caches can save. The results show that the influence of a cache depends on the structure of the overlay network, and that caches can, under certain circumstances, even increase inter-ISP traffic. The model is thus an important tool for ISPs when deciding for which overlay networks caches are useful and how to dimension them. Chapter 5 summarizes the work and emphasizes the importance of the findings. In addition, it explains how the findings can be applied to the optimization of future overlay networks, with special attention to the growing importance of video-on-demand and real-time video transmission. N2 - Die Arbeit beschäftigt sich mit der Leistungsbewertung und Optimierung von sogenannten Overlay-Netzwerken zum Verteilen von großen Datenmengen im Internet. In Kapitel 1 der Arbeit wird die große Bedeutung erläutert, die solche Netzwerke im heutigen Internet haben, beispielsweise für die Übertragung von Video-Inhalten. Im Fokus der Arbeit liegen Overlay-Netzwerke, die auf dem Peer-to-Peer-Prinzip basieren. Diese zeichnen sich dadurch aus, dass Nutzer, die Inhalte herunterladen, auch gleichzeitig am Verteilprozess teilnehmen, indem sie Teile der Daten an andere Nutzer weitergeben. Dies ermöglicht eine effiziente Verteilung der Daten, weil jeder Nutzer nicht nur Ressourcen im System belegt, sondern auch eigene Ressourcen einbringt. Kapitel 2 der Arbeit enthält eine detaillierte Beschreibung der Funktionsweise des heute populärsten Overlay-Netzwerks BitTorrent. Es werden die einzelnen Komponenten erläutert und deren Zusammenspiel erklärt. Darauf folgt eine Darstellung, warum solche Overlay-Netzwerke für Internet-Anbieter (Internet Service Provider, ISP) problematisch sind. Der Grund dafür liegt in der großen Menge an Inter-ISP-Verkehr, den diese Overlays erzeugen. Da solcher Inter-ISP-Verkehr zu hohen Kosten für ISPs führt, versuchen diese, den Inter-ISP-Verkehr zu reduzieren, indem sie die Mechanismen der Overlay-Netzwerke optimieren. Ein Ansatz zur Optimierung ist die Verwendung von Topologiebewusstsein innerhalb der Overlay-Netzwerke. Dabei erhalten die Nutzer der Overlay-Netzwerke Informationen über die zugrunde liegende physikalische Netzwerktopologie. Diese ermöglichen es ihnen, Inter-ISP-Verkehr zu vermeiden, indem sie Daten bevorzugt mit anderen Nutzern austauschen, die mit dem gleichen ISP verbunden sind. Ein weiterer Ansatz, um Inter-ISP-Verkehr einzusparen, ist Caching. Dabei stellt der ISP zusätzliche Rechner, sogenannte Caches, in seinem Netzwerk zur Verfügung, die Kopien populärer Inhalte zwischenspeichern. Die Nutzer dieses ISP können solche Inhalte dann von den Caches beziehen. Dies verhindert, dass populäre Inhalte mehrfach von außerhalb des betrachteten ISP bezogen werden müssen, und spart so kostenintensiven Inter-ISP-Verkehr ein. Im dritten Kapitel der Arbeit werden Ergebnisse einer umfassenden Messung von Overlay-Netzwerken vorgestellt, wie sie heute im Internet anzutreffen sind. Nach einer kurzen Darstellung der bei der Messung verwendeten Methodik werden die Resultate der Messungen beschrieben. Diese Ergebnisse enthalten Daten über eine Vielzahl von Eigenschaften heutiger P2P-basierter Overlay-Netzwerke im Internet. Dazu zählen die Popularität von Inhalten, d.h., wie viele Nutzer an bestimmten Inhalten interessiert sind, die zeitliche Entwicklung der Popularität und die Größe der Dateien. Im Detail wird auch die Verteilung der Nutzer über das Internet analysiert. Ein besonderes Augenmerk liegt dabei auf der Anzahl der Nutzer, die gleichzeitig und im Netz desselben ISP eine bestimmte Datei tauschen. Auf der Basis dieser Messergebnisse wird abgeschätzt, welches Einsparpotential die Optimierung von Overlay-Netzwerken durch Topologiebewusstsein bietet. Diese neuartige Abschätzung ist von wissenschaftlicher und praktischer Bedeutung, da sie sich nicht auf einzelne ISPs und Dateien beschränkt, sondern das gesamte Internet und die Menge aller in Overlay-Netzwerken verfügbaren Dateien umfasst. Schließlich werden die Besonderheiten von regionalen Inhalten betrachtet, bei denen sich die Popularität auf bestimmte Teile des Internets beschränkt. Dies ist beispielsweise bei Videos in deutscher, italienischer oder französischer Sprache der Fall. Kapitel 4 der Arbeit widmet sich der Optimierung von Overlay-Netzwerken zum Verteilen großer Datenmengen durch Caching. Es wird ein deterministisches Flussmodell entwickelt, das den Einfluss von Caches beschreibt. Auf der Basis dieses Modells wird abgeschätzt, wie viel Inter-ISP-Verkehr ein Overlay-Netzwerk erzeugt und welcher Teil davon durch Caches eingespart werden kann. Die Ergebnisse zeigen, dass der Einfluss von Caches von der Struktur der Overlay-Netzwerke abhängt und dass Caches unter bestimmten Umständen auch zu einem erhöhten Inter-ISP-Verkehr führen können. Das beschriebene Modell ist somit ein wichtiges Hilfsmittel für ISPs, um zu entscheiden, für welche Overlay-Netzwerke Caches sinnvoll sind, und um diese anschließend richtig zu dimensionieren. Kapitel 5 fasst den Inhalt der Arbeit zusammen und stellt die Bedeutung der gewonnenen Erkenntnisse heraus. Abschließend wird erläutert, in welcher Weise die in der Arbeit beschriebenen Ergebnisse wichtige Grundlagen für die Optimierung zukünftiger Overlay-Netzwerke darstellen. Dabei wird besonders auf die wachsende Bedeutung von Video-on-Demand und Echtzeit-Videoübertragungen eingegangen. T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 01/13 KW - Leistungsbewertung KW - Verteiltes System KW - Overlay-Netz KW - Overlay Netzwerke KW - Overlay networks Y1 - 2013 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-76018 ER -
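The abstract above only states that Chapter 4 builds a deterministic flow model of cache influence; the model itself is not reproduced in the record. The following Python sketch is a deliberately simplified toy, not the thesis's actual model: the function, its parameters, and all numbers are hypothetical, and it assumes locality-aware peers that serve local demand first. It still illustrates the qualitative finding that a cache may either save or increase inter-ISP traffic.

def inter_isp_traffic(demand_local, upload_local, cache_upload,
                      cache_external_share=0.0):
    # Toy estimate for one swarm inside one ISP (volumes, e.g. in GB).
    # demand_local:  download volume requested by peers inside the ISP
    # upload_local:  upload volume contributed by those peers
    # cache_upload:  upload volume contributed by the ISP's cache
    # cache_external_share: fraction of the cache's upload that serves
    #   peers in *other* ISPs; this is the mechanism by which a cache
    #   can even increase inter-ISP traffic.
    served = min(demand_local,
                 upload_local + cache_upload * (1 - cache_external_share))
    inbound = demand_local - served          # fetched from other ISPs
    outbound = cache_upload * cache_external_share  # sent to other ISPs
    return inbound + outbound

print(inter_isp_traffic(100, 60, 0))          # 40.0  baseline, no cache
print(inter_isp_traffic(100, 60, 50, 0.0))    # 0.0   cache serves only local peers
print(inter_isp_traffic(100, 60, 50, 0.75))   # 65.0  cache mostly feeds other ISPs

The third call shows the counterintuitive case from the abstract: a cache that spends most of its upload capacity on peers in other ISPs adds outbound traffic faster than it saves inbound traffic.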
TY - JOUR A1 - Becker, Martin A1 - Caminiti, Saverio A1 - Fiorella, Donato A1 - Francis, Louise A1 - Gravino, Pietro A1 - Haklay, Mordechai (Muki) A1 - Hotho, Andreas A1 - Loreto, Vittorio A1 - Mueller, Juergen A1 - Ricchiuti, Ferdinando A1 - Servedio, Vito D. P. A1 - Sirbu, Alina A1 - Tria, Francesca T1 - Awareness and Learning in Participatory Noise Sensing JF - PLOS ONE N2 - The development of ICT infrastructures has facilitated the emergence of new paradigms for looking at society and the environment over the last few years. Participatory environmental sensing, i.e. directly involving citizens in environmental monitoring, is one example; it is hoped to encourage learning and to enhance awareness of environmental issues. In this paper, an analysis of the behaviour of individuals involved in noise sensing is presented. Citizens have been involved in noise measuring activities through the WideNoise smartphone application.
This application has been designed to record both objective (noise samples) and subjective (opinions, feelings) data. The application is free to use by anyone and has been employed widely around the world. In addition, several test cases have been organised in European countries. Based on the information submitted by users, an analysis of emerging awareness and learning is performed. The data show that the way the environment is perceived does change after repeated usage of the application; specifically, users learn how to recognise the different noise levels they are exposed to. Additionally, the subjective data collected indicate increasing user involvement over time and a categorisation effect distinguishing pleasant from less pleasant environments. KW - exposure Y1 - 2013 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-127675 SN - 1932-6203 VL - 8 IS - 12 ER -
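The learning effect described in the record above can be made concrete with a small data sketch. The Python snippet below is purely illustrative: the record layout and all numbers are hypothetical, since the paper's actual WideNoise data format is not given in the abstract. It treats a shrinking gap between perceived and measured loudness as one simple indicator that users learn to recognise noise levels.

from dataclasses import dataclass

@dataclass
class NoiseSample:
    db_measured: float   # objective reading from the microphone
    db_perceived: float  # user's subjective loudness guess
    tags: tuple          # subjective feelings, e.g. ("hectic",)

def perception_errors(samples):
    # Absolute gap between guessed and measured level, in dB;
    # a shrinking gap over successive samples suggests learning.
    return [abs(s.db_perceived - s.db_measured) for s in samples]

# Hypothetical session of one user (not data from the study):
session = [
    NoiseSample(72.0, 60.0, ("calm",)),
    NoiseSample(68.0, 61.0, ("hectic",)),
    NoiseSample(70.5, 67.0, ("hectic",)),
    NoiseSample(74.0, 73.0, ("hectic", "social")),
]
print(perception_errors(session))   # [12.0, 7.0, 3.5, 1.0]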