TY - JOUR A1 - Greubel, André A1 - Andres, Daniela A1 - Hennecke, Martin T1 - Analyzing reporting on ransomware incidents: a case study JF - Social Sciences N2 - Knowledge about ransomware is important for protecting sensitive data and for participating in public debates about suitable regulation regarding its security. However, as of now, this topic has received little to no attention in most school curricula. As such, it is desirable to analyze what citizens can learn about this topic outside of formal education, e.g., from news articles. This analysis is relevant both for analyzing the public discourse about ransomware and for identifying what aspects of this topic should be included in the limited time available for it in formal education. Thus, this paper was motivated by both educational and media research. The central goal is to explore how the media reports on this topic and, additionally, to identify potential misconceptions that could stem from this reporting. To do so, we conducted an exploratory case study into the reporting of 109 media articles regarding a high-impact ransomware event: the shutdown of the Colonial Pipeline (located in the east of the USA). We analyzed how the articles introduced central terminology, what details were provided, what details were not, and what (mis-)conceptions readers might receive from them. Our results show that the articles' introduction of the terminology and technical concepts of security is insufficient for a complete understanding of the incident. Most importantly, the articles may foster four misconceptions about ransomware that are likely to lead to misleading conclusions about the responsibility for the incident and about possible political and technical options to prevent such attacks in the future. KW - media analysis KW - informal education KW - IT security KW - ransomware KW - misconceptions Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-313746 SN - 2076-0760 VL - 12 IS - 5 ER - TY - JOUR A1 - Zirkel, J. A1 - Cecil, A. A1 - Schäfer, F. A1 - Rahlfs, S. A1 - Ouedraogo, A. A1 - Xiao, K. A1 - Sawadogo, S. A1 - Coulibaly, B. A1 - Becker, K. A1 - Dandekar, T. T1 - Analyzing Thiol-Dependent Redox Networks in the Presence of Methylene Blue and Other Antimalarial Agents with RT-PCR-Supported in silico Modeling JF - Bioinformatics and Biology Insights N2 - BACKGROUND: In the face of growing resistance of malaria parasites to drugs, pharmacological combination therapies are important. There is accumulating evidence that methylene blue (MB) is an effective drug against malaria. Here we explore the biological effects of MB both alone and in combination therapy, using modeling and experimental data. RESULTS: We built a model of the central metabolic pathways in P. falciparum. Metabolic flux modes and their changes under MB were calculated by integrating experimental data (RT-PCR data on mRNAs for redox enzymes) as constraints with results from the YANA software package for metabolic pathway calculations. Several different lines of MB attack on Plasmodium redox defense were identified by analysis of the network effects. Next, chloroquine resistance based on pfmdr1 and pfcrt transporters, as well as pyrimethamine/sulfadoxine resistance (by mutations in DHF/DHPS), were modeled in silico. Further modeling shows that MB has a favorable synergism on antimalarial network effects with these commonly used antimalarial drugs.
CONCLUSIONS: Theoretical and experimental results support that methylene blue should, because of its resistance-breaking potential, be further tested as a key component in drug combination therapy efforts in holoendemic areas. KW - methylene blue KW - malaria KW - elementary mode analysis KW - drug KW - resistance KW - combination therapy KW - pathway KW - metabolic flux Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-123751 N1 - This is an open access article. Unrestricted non-commercial use is permitted provided the original work is properly cited. VL - 6 ER - TY - THES A1 - Kindermann, Philipp T1 - Angular Schematization in Graph Drawing N2 - Graphs are a frequently used tool to model relationships among entities. A graph is a binary relation between objects, that is, it consists of a set of objects (vertices) and a set of pairs of objects (edges). Networks are common examples of modeling data as a graph. For example, relationships between persons in a social network, or network links between computers in a telecommunication network can be represented by a graph. The clearest way to illustrate the modeled data is to visualize the graphs. The field of Graph Drawing deals with the problem of finding algorithms to automatically generate graph visualizations. The task is to find a "good" drawing, which can be measured by different criteria such as the number of crossings between edges or the area used. In this thesis, we study Angular Schematization in Graph Drawing. By this, we mean drawings with large angles (for example, between the edges at common vertices or at crossing points). The thesis consists of three parts. First, we deal with the placement of boxes. Boxes are axis-parallel rectangles that can, for example, contain text. They can be placed on a map to label important sites, or can be used to describe semantic relationships between words in a word network. In the second part of the thesis, we consider graph drawings that visually guide the viewer. These drawings generally induce large angles between edges that meet at a vertex. Furthermore, the edges are drawn crossing-free and in a way that makes them easy to follow for the human eye. The third and final part is devoted to crossings with large angles. In drawings with crossings, it is important to have large angles between edges at their crossing point, preferably right angles. N2 - Graphen sind häufig verwendete Werkzeuge zur Modellierung von Zusammenhängen zwischen Daten. Ein Graph ist eine binäre Relation zwischen Objekten, das heißt er besteht aus einer Menge von Objekten (Knoten) und einer Menge von Paaren von Objekten (Kanten). Netzwerke sind übliche Beispiele für das Modellieren von Daten als ein Graph. Beispielsweise lassen sich Beziehungen zwischen Personen in einem sozialen Netzwerk oder Netzanbindungen zwischen Computern in einem Telekommunikationsnetz als Graph darstellen. Die modellierten Daten können am anschaulichsten dargestellt werden, indem man die Graphen visualisiert. Der Bereich des Graphenzeichnens behandelt das Problem, Algorithmen zum automatischen Erzeugen von Graphenvisualisierungen zu finden. Das Ziel ist es, eine "gute" Zeichnung zu finden, was durch unterschiedliche Kriterien gemessen werden kann; zum Beispiel durch die Anzahl der Kreuzungen zwischen Kanten oder durch den Platzverbrauch. In dieser Arbeit beschäftigen wir uns mit Winkelschematisierung im Graphenzeichnen.
Darunter verstehen wir Zeichnungen, in denen die Winkel (zum Beispiel zwischen Kanten an einem gemeinsamen Knoten oder einem Kreuzungspunkt) möglichst groß gestaltet sind. Die Arbeit besteht aus drei Teilen. Im ersten Teil betrachten wir die Platzierung von Boxen. Boxen sind achsenparallele Rechtecke, die zum Beispiel Text enthalten. Sie können beispielsweise auf einer Karte platziert werden, um wichtige Standorte zu beschriften, oder benutzt werden, um semantische Beziehungen zwischen Wörtern in einem Wortnetzwerk darzustellen. Im zweiten Teil der Arbeit untersuchen wir Graphenzeichnungen, die den Betrachter visuell führen. Im Allgemeinen haben diese Zeichnungen große Winkel zwischen Kanten, die sich in einem Knoten treffen. Außerdem werden die Verbindungen kreuzungsfrei und so gezeichnet, dass es dem menschlichen Auge leicht fällt, ihnen zu folgen. Im dritten und letzten Teil geht es um Kreuzungen mit großen Winkeln. In Zeichnungen mit Kreuzungen ist es wichtig, dass die Winkel zwischen Kanten an Kreuzungspunkten groß sind, vorzugsweise rechtwinklig. KW - graph drawing KW - angular schematization KW - boundary labeling KW - contact representation KW - word clouds KW - monotone drawing KW - smooth orthogonal drawing KW - simultaneous embedding KW - right angle crossing KW - independent crossing KW - Graphenzeichnen KW - Winkel KW - Kreuzung Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-112549 SN - 978-3-95826-020-7 (print) SN - 978-3-95826-021-4 (online) PB - Würzburg University Press CY - Würzburg ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Assisting Analysis and Understanding of Quran Search Results with Interactive Scatter Plots and Tables N2 - The Quran is the holy book of Islam consisting of 6236 verses divided into 114 chapters called suras. Many verses are similar and even identical. Searching for similar texts (e.g., verses) could return thousands of verses which, when displayed completely or partly as a textual list, would make analysis and understanding difficult and confusing. Moreover, it would be visually impossible to instantly figure out the overall distribution of the retrieved verses in the Quran. As a consequence, reading and analyzing the verses would be tedious and unintuitive. In this study, a combination of interactive scatter plots and tables has been developed to assist the analysis and understanding of the search result. Retrieved verses are clustered by chapters, and a weight is assigned to each cluster according to the number of verses it contains, so that users can visually identify the most relevant areas and figure out the places of revelation of the verses. Users visualize the complete result and can select a region of the plot to zoom in, and click on a marker to display a table containing verses with the English translation side by side. KW - Text Mining KW - Visualisierung KW - Koran KW - Information Visualization KW - Visual Text Mining KW - Scatter Plot KW - Quran Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-55840 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Assisting Understanding, Retention, and Dissemination of Religious Texts Knowledge with Modeling, and Visualization Techniques: The Case of The Quran N2 - Learning a book in general involves reading it, underlining important words, adding comments, summarizing some passages, and marking up some text or concepts.
Once deeper understanding is achieved, one would like to organize and manage her/his knowledge in such a way that it can be easily remembered and efficiently transmitted to others. In this paper, books organized in terms of chapters consisting of verses are considered as the source of knowledge to be modeled. The knowledge model consists of verses with their metadata and semantic annotations. The metadata represent the multiple perspectives of knowledge modeling. Verses with their metadata and annotations form a meta-model, which will be published on a web Mashup. The meta-model with the linking between its elements constitutes a knowledge base. An XML-based annotation system that breaks down the learning process into specific tasks helps construct the desired meta-model. The system is made up of user interfaces for creating metadata, annotating chapters' contents according to user-selected semantics, and templates for publishing the generated knowledge on the Internet. The proposed software system improves comprehension and retention of knowledge contained in religious texts through modeling and visualization. The system has been applied to the Quran, and the result obtained shows that multiple perspectives of information modeling can be successfully applied to religious texts. It is expected that this short ongoing study will motivate others to engage in devising and offering software systems for cross-religion learning. KW - Wissensmanagement KW - Koran KW - Knowledge Modeling KW - Meta-model KW - Knowledge Management KW - Content Management KW - Quran Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-55927 ER - TY - JOUR A1 - Wolff, Alexander A1 - Rutter, Ignaz T1 - Augmenting the Connectivity of Planar and Geometric Graphs JF - Journal of Graph Algorithms and Applications N2 - In this paper we study connectivity augmentation problems. Given a connected graph G with some desirable property, we want to make G 2-vertex connected (or 2-edge connected) by adding edges such that the resulting graph keeps the property. The aim is to add as few edges as possible. The property that we consider is planarity, both in an abstract graph-theoretic and in a geometric setting, where vertices correspond to points in the plane and edges to straight-line segments. We show that it is NP-hard to find a minimum-cardinality augmentation that makes a planar graph 2-edge connected. For making a planar graph 2-vertex connected this was known. We further show that both problems are hard in the geometric setting, even when restricted to trees. The problems remain hard for higher degrees of connectivity. On the other hand we give polynomial-time algorithms for the special case of convex geometric graphs. We also study the following related problem. Given a planar (plane geometric) graph G, two vertices s and t of G, and an integer c, how many edges have to be added to G such that G is still planar (plane geometric) and contains c edge- (or vertex-) disjoint s-t paths? For the planar case we give a linear-time algorithm for c = 2. For the plane geometric case we give optimal worst-case bounds for c = 2; for c = 3 we characterize the cases that have a solution. Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-97587 ER - TY - JOUR A1 - Krenzer, Adrian A1 - Heil, Stefan A1 - Fitting, Daniel A1 - Matti, Safa A1 - Zoller, Wolfram G.
A1 - Hann, Alexander A1 - Puppe, Frank T1 - Automated classification of polyps using deep learning architectures and few-shot learning JF - BMC Medical Imaging N2 - Background Colorectal cancer is a leading cause of cancer-related deaths worldwide. The best method to prevent CRC is a colonoscopy. However, not all colon polyps have the risk of becoming cancerous. Therefore, polyps are classified using different classification systems. After the classification, further treatment and procedures are based on the classification of the polyp. Nevertheless, classification is not easy. Therefore, we suggest two novel automated classification systems assisting gastroenterologists in classifying polyps based on the NICE and Paris classifications. Methods We build two classification systems. One classifies polyps based on their shape (Paris). The other classifies polyps based on their texture and surface patterns (NICE). A two-step process for the Paris classification is introduced: first, detecting and cropping the polyp in the image, and second, classifying the polyp based on the cropped area with a transformer network. For the NICE classification, we design a few-shot learning algorithm based on the Deep Metric Learning approach. The algorithm creates an embedding space for polyps, which allows classification from a few examples to account for the data scarcity of NICE-annotated images in our database. Results For the Paris classification, we achieve an accuracy of 89.35 %, surpassing all papers in the literature and establishing a new state-of-the-art and baseline accuracy for other publications on a public data set. For the NICE classification, we achieve a competitive accuracy of 81.13 % and thereby demonstrate the viability of the few-shot learning paradigm in polyp classification in data-scarce environments. Additionally, we show different ablations of the algorithms. Finally, we further elaborate on the explainability of the system by showing heat maps of the neural network explaining neural activations. Conclusion Overall, we introduce two polyp classification systems to assist gastroenterologists. We achieve state-of-the-art performance in the Paris classification and demonstrate the viability of the few-shot learning paradigm in the NICE classification, addressing the prevalent data scarcity issues faced in medical machine learning. KW - machine learning KW - deep learning KW - endoscopy KW - gastroenterology KW - automation KW - image classification KW - transformer KW - deep metric learning KW - few-shot learning Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-357465 VL - 23 ER - TY - JOUR A1 - Becker, Martin A1 - Caminiti, Saverio A1 - Fiorella, Donato A1 - Francis, Louise A1 - Gravino, Pietro A1 - Haklay, Mordechai (Muki) A1 - Hotho, Andreas A1 - Loreto, Vittorio A1 - Mueller, Juergen A1 - Ricchiuti, Ferdinando A1 - Servedio, Vito D. P. A1 - Sirbu, Alina A1 - Tria, Francesca T1 - Awareness and Learning in Participatory Noise Sensing JF - PLOS ONE N2 - The development of ICT infrastructures has facilitated the emergence of new paradigms for looking at society and the environment over the last few years. Participatory environmental sensing, i.e. directly involving citizens in environmental monitoring, is one example, which is hoped to encourage learning and enhance awareness of environmental issues. In this paper, an analysis of the behaviour of individuals involved in noise sensing is presented.
Citizens have been involved in noise measuring activities through the WideNoise smartphone application. This application has been designed to record both objective (noise samples) and subjective (opinions, feelings) data. The application has been freely available to anyone and has been widely employed worldwide. In addition, several test cases have been organised in European countries. Based on the information submitted by users, an analysis of emerging awareness and learning is performed. The data show that changes in the way the environment is perceived do appear after repeated usage of the application. Specifically, users learn how to recognise the different noise levels they are exposed to. Additionally, the subjective data collected indicate increased user involvement over time and a categorisation effect between pleasant and less pleasant environments. KW - exposure Y1 - 2013 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-127675 SN - 1932-6203 VL - 8 IS - 12 ER - TY - JOUR A1 - Wienrich, Carolin A1 - Döllinger, Nina A1 - Hein, Rebecca T1 - Behavioral Framework of Immersive Technologies (BehaveFIT): How and why virtual reality can support behavioral change processes JF - Frontiers in Virtual Reality N2 - The design and evaluation of assisting technologies to support behavior change processes have become an essential topic within the field of human-computer interaction research in general and the field of immersive intervention technologies in particular. The mechanisms and success of behavior change techniques and interventions are broadly investigated in the field of psychology. However, it is not always easy to adapt these psychological findings to the context of immersive technologies. The lack of theoretical foundation also leads to a lack of explanation as to why and how immersive interventions support behavior change processes. The Behavioral Framework for Immersive Technologies (BehaveFIT) addresses this lack by 1) presenting an intelligible categorization and condensation of psychological barriers and immersive features, by 2) suggesting a mapping that shows why and how immersive technologies can help to overcome barriers, and finally by 3) proposing a generic prediction path that enables a structured, theory-based approach to the development and evaluation of immersive interventions. These three steps explain how BehaveFIT can be used and include guiding questions for each step. Further, two use cases illustrate the usage of BehaveFIT. Thus, the present paper contributes to guidance for immersive intervention design and evaluation, showing that immersive interventions support behavior change processes, and explains and predicts 'why' and 'how' immersive interventions can bridge the intention-behavior gap. KW - immersive technologies KW - behavior change KW - intervention design KW - intervention evaluation KW - framework KW - virtual reality KW - intention-behavior-gap KW - human-computer interaction Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-258796 VL - 2 ER - TY - INPR A1 - Dandekar, Thomas T1 - Biological heuristics applied to cosmology suggests a condensation nucleus as start of our universe and inflation cosmology replaced by a period of rapid Weiss domain-like crystal growth N2 - Cosmology often uses intricate formulas and mathematics to derive new theories and concepts.
We do something different in this paper: we look at biological processes and derive heuristics from them, such that the revised cosmology agrees with astronomical observations and also with standard biological observations. We show that we then have to replace any type of singularity at the start of the universe by a condensation nucleus, and that the very early period of the universe, usually assumed to be inflation, has to be replaced by a period of rapid crystal growth as in Weiss magnetization domains. Impressively, these minor modifications agree well with astronomical observations, including removing the strong inflation perturbations which were never observed in the recent BICEP2 experiments. Furthermore, looking at biological principles suggests that such a new theory, with a condensation nucleus at the start and a first rapid phase of magnetization-like growth of the ordered, physical-laws-obeying lattice we live in, is in fact the only convincing theory of the early phases of our universe that is also compatible with current observations. We show in detail in the following that such a process of crystal creation, breaking of new crystal seeds and ultimate evaporation of the present crystal readily leads over several generations to an evolution and selection of better, more stable and more self-organizing crystals. Moreover, this explains the “fine-tuning” question of why our universe is fine-tuned to favor life: our universe is self-organizing so as to have enough offspring, and the detailed physics involved is at the same time highly favorable for all self-organizing processes, including life. This biological theory contrasts with current standard inflation cosmologies. The latter do not perform well in explaining any phenomena of sophisticated structure creation or self-organization. As proteins can only thermodynamically fold by increasing the entropy in the solution around them, we suggest for cosmology that a condensation nucleus for a universe can form only in a “chaotic ocean” of string-soup or quantum foam if the entropy outside of the nucleus rapidly increases. We derive an interaction potential for 1- to n-dimensional strings or quantum foams and show that they allow only 1D, 2D, 4D or octonion interactions. The latter is the richest structure and agrees with the E8 symmetry fundamental to particle physics; it is also compatible with the ten-dimensional string theory E8, which is part of M-theory. Interestingly, any interactions of other dimensionality can be ruled out using Hurwitz's composition theorem. Crystallization also explains extremely well why we have only one macroscopic reality and where the worldlines of alternative trajectories exist: they are in other planes of the crystal, and for energy reasons they crystallize mostly at the same time, yielding a beautiful and stable crystal. This explains decoherence and allows one to determine the size of Planck's quantum h (very small, as the separation of crystal layers by energy is extremely strong). Ultimate dissolution of real crystals suggests an explanation for dark energy agreeing with estimates for the “big rip”. The halo distribution of dark matter favoring galaxy formation is readily explained by a crystal seed starting with unit cells made of normal and dark matter. That we have only matter and not antimatter can be explained as there may be right-handed matter crystals and left-handed antimatter crystals.
Similarly, real crystals are never perfect, and we argue that exactly such irregularities allow the formation of galaxies, clusters and superclusters. Finally, heuristics from genetics suggest looking for a systems perspective to derive correct vacuum and Higgs boson energies. KW - heuristics KW - inflation KW - cosmology KW - crystallization KW - crystal growth KW - E8 symmetry KW - Hurwitz theorem KW - evolution KW - Lee Smolin Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-183945 ER - TY - JOUR A1 - Pfitzner, Christian A1 - May, Stefan A1 - Nüchter, Andreas T1 - Body weight estimation for dose-finding and health monitoring of lying, standing and walking patients based on RGB-D data JF - Sensors N2 - This paper describes the estimation of the body weight of a person in front of an RGB-D camera. A survey of different methods for body weight estimation based on depth sensors is given. First, an estimation of people standing in front of a camera is presented. Second, an approach based on a stream of depth images is used to obtain the body weight of a person walking towards a sensor. The algorithm first extracts features from a point cloud and forwards them to an artificial neural network (ANN) to obtain an estimation of body weight. Besides the algorithm for the estimation, this paper further presents an open-access dataset based on measurements from a trauma room in a hospital as well as data from visitors of a public event. In total, the dataset contains 439 measurements. The article illustrates the efficiency of the approach with experiments with persons lying down in a hospital, standing persons, and walking persons. An applicable scenario for the presented algorithm is body weight-related dosing for emergency patients. KW - RGB-D KW - human body weight KW - image processing KW - kinect KW - machine learning KW - perception KW - segmentation KW - sensor fusion KW - stroke KW - thermal camera Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-176642 VL - 18 IS - 5 ER - TY - JOUR A1 - Kirikkayis, Yusuf A1 - Gallik, Florian A1 - Winter, Michael A1 - Reichert, Manfred T1 - BPMNE4IoT: a framework for modeling, executing and monitoring IoT-driven processes JF - Future Internet N2 - The Internet of Things (IoT) enables a variety of smart applications, including smart home, smart manufacturing, and smart city. By enhancing Business Process Management Systems with IoT capabilities, the execution and monitoring of business processes can be significantly improved. Providing holistic support for modeling, executing and monitoring IoT-driven processes, however, constitutes a challenge. Existing process modeling and process execution languages, such as BPMN 2.0, are unable to fully meet the IoT characteristics (e.g., asynchronicity and parallelism) of IoT-driven processes. In this article, we present BPMNE4IoT, a holistic framework for modeling, executing and monitoring IoT-driven processes. We introduce various artifacts and events based on the BPMN 2.0 metamodel that allow realizing the desired IoT awareness of business processes. The framework is evaluated along two real-world scenarios from two different domains. Moreover, we present a user study comparing BPMNE4IoT and BPMN 2.0. In particular, this study has confirmed that the BPMNE4IoT framework facilitates the support of IoT-driven processes.
KW - IoT KW - BPM KW - BPMN KW - IoT-driven processes Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-304097 SN - 1999-5903 VL - 15 IS - 3 ER - TY - JOUR A1 - Lugrin, Jean-Luc A1 - Latoschik, Marc Erich A1 - Habel, Michael A1 - Roth, Daniel A1 - Seufert, Christian A1 - Grafe, Silke T1 - Breaking Bad Behaviors: A New Tool for Learning Classroom Management Using Virtual Reality JF - Frontiers in ICT N2 - This article presents an immersive virtual reality (VR) system for training classroom management skills, with a specific focus on learning to manage disruptive student behavior in face-to-face, one-to-many teaching scenarios. The core of the system is a real-time 3D virtual simulation of a classroom populated by twenty-four semi-autonomous virtual students. The system has been designed as a companion tool for classroom management seminars in a syllabus for primary and secondary school teachers. This will allow lecturers to link theory with practice using the medium of VR. The system is therefore designed for two users: a trainee teacher and an instructor supervising the training session. The teacher is immersed in a real-time 3D simulation of a classroom by means of a head-mounted display and headphones. The instructor operates a graphical desktop console, which renders a view of the class and the teacher, whose avatar movements are captured by a markerless tracking system. This console includes a 2D graphics menu with convenient behavior and feedback control mechanisms to provide human-guided training sessions. The system is built using low-cost consumer hardware and software. Its architecture and technical design are described in detail. A first evaluation confirms its conformance to critical usability requirements (i.e., safety and comfort, believability, simplicity, acceptability, extensibility, affordability, and mobility). Our initial results are promising and constitute the necessary first step toward a possible investigation of the efficiency and effectiveness of such a system in terms of learning outcomes and experience. KW - virtual reality training KW - immersive classroom management KW - immersive classroom KW - virtual agent interaction KW - student simulation Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-147945 VL - 3 IS - 26 ER - TY - JOUR A1 - Döllinger, Nina A1 - Wienrich, Carolin A1 - Latoschik, Marc Erich T1 - Challenges and opportunities of immersive technologies for mindfulness meditation: a systematic review JF - Frontiers in Virtual Reality N2 - Mindfulness is considered an important factor in an individual's subjective well-being. Consequently, Human-Computer Interaction (HCI) has investigated approaches that strengthen mindfulness, i.e., by inventing multimedia technologies to support mindfulness meditation. These approaches often use smartphones, tablets, or consumer-grade desktop systems to allow everyday usage in users' private lives or in the scope of organized therapies. Virtual, Augmented, and Mixed Reality (VR, AR, MR; in short: XR) significantly extend the design space for such approaches. XR covers a wide range of potential sensory stimulation, perceptive and cognitive manipulations, content presentation, interaction, and agency. These facilities are linked to typical XR-specific perceptions that are conceptually closely related to mindfulness research, such as (virtual) presence and (virtual) embodiment.
However, a successful exploitation of XR that strengthens mindfulness requires a systematic analysis of the potential interrelations and influencing mechanisms between XR technology, its properties, factors, and phenomena and existing models and theories of the construct of mindfulness. This article reports such a systematic analysis of XR-related research from HCI and the life sciences to determine the extent to which existing research frameworks on HCI and mindfulness can be applied to XR technologies, the potential of XR technologies to support mindfulness, and open research gaps. Fifty papers from the ACM Digital Library and the National Institutes of Health's National Library of Medicine (PubMed), with and without empirical efficacy evaluation, were included in our analysis. The results reveal that, at the current time, empirical research on XR-based mindfulness support mainly focuses on therapy and therapeutic outcomes. Furthermore, most of the currently investigated XR-supported mindfulness interactions are limited to vocally guided meditations within nature-inspired virtual environments. While an analysis of empirical research on those systems did not reveal differences in mindfulness compared to non-mediated mindfulness practices, various design proposals illustrate that XR has the potential to provide interactive and body-based innovations for mindfulness practice. We propose a structured approach for future work to specify and further explore the potential of XR as mindfulness support. The resulting framework provides design guidelines for XR-based mindfulness support based on the elements and psychological mechanisms of XR interactions. KW - virtual reality KW - augmented reality KW - mindfulness KW - XR KW - meditation Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-259047 VL - 2 ER - TY - RPRT A1 - Nguyen, Kien A1 - Loh, Frank A1 - Hoßfeld, Tobias T1 - Challenges of Serverless Deployment in Edge-MEC-Cloud T2 - KuVS Fachgespräch - Würzburg Workshop on Modeling, Analysis and Simulation of Next-Generation Communication Networks 2023 (WueWoWAS’23) N2 - Emerging serverless computing may meet the Edge Cloud in a beneficial manner, as the two offer flexibility and dynamicity in optimizing finite hardware resources. However, the lack of a proper study of such a joint platform leaves a gap in the literature about the consumption and performance of such an integration. To this end, this paper identifies the key questions and proposes a methodology to answer them. KW - Edge-MEC-Cloud Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-322025 ER - TY - THES A1 - Ullmann, Tobias T1 - Characterization of Arctic Environment by Means of Polarimetric Synthetic Aperture Radar (PolSAR) Data and Digital Elevation Models (DEM) T1 - Charakterisierung der arktischen Landoberfläche mittels polarimetrischer Radardaten (PolSAR) und digitalen Höhenmodellen (DEM) N2 - The ecosystem of the high northern latitudes is affected by the recently changing environmental conditions. The Arctic has undergone significant climatic change over the last decades. The land coverage is changing, and a phenological response to the warming is apparent. Remotely sensed data can assist the monitoring and quantification of these changes. Remote sensing of the Arctic has predominantly been carried out using optical sensors, but these encounter problems in the Arctic environment, e.g. the frequent cloud cover or the solar geometry.
In contrast, the imaging of Synthetic Aperture Radar is not affected by cloud cover, and the acquisition of radar imagery is independent of the solar illumination. The objective of this work was to explore how polarimetric Synthetic Aperture Radar (PolSAR) data of TerraSAR-X, TanDEM-X, Radarsat-2 and ALOS PALSAR and interferometrically derived digital elevation model data of the TanDEM-X Mission can contribute to collecting meaningful information on the actual state of the Arctic environment. The study was conducted for Canadian sites of the Mackenzie Delta Region and Banks Island, and in situ reference data were available for the assessment. The up-to-date analysis of the PolSAR data made the application of the Non-Local Means filtering and of the decomposition of co-polarized data necessary. The Non-Local Means filter showed a high capability to preserve the image values, to keep the edges and to reduce the speckle. This supported the suitability not only for interpretation but also for classification. The classification accuracies of Non-Local Means filtered data were on average +10% higher compared to unfiltered images. The correlation of the co- and quad-polarized decomposition features was high for classes with distinct surface or double bounce scattering, and a usage of the co-polarized data is beneficial for regions of natural land coverage and for low vegetation formations with little volume scattering. The evaluation further revealed that the X- and C-Band were most sensitive to the generalized land cover classes. It was found that the X-Band data were sensitive to low vegetation formations with low shrub density, while the C-Band data were sensitive to the shrub density and the shrub-dominated tundra. In contrast, the L-Band data were less sensitive to the land cover. Among the different dual-polarized data, the HH/VV-polarized data were identified to be most meaningful for the characterization and classification, followed by the HH/HV-polarized and the VV/VH-polarized data. The quad-polarized data showed the highest sensitivity to the land cover, but differences to the co-polarized data were small. The accuracy assessment showed that spectral information was required for accurate land cover classification. The best results were obtained when spectral and radar information was combined. The benefit of including radar data in the classification was up to +15% accuracy and was most significant for the classes wetland and sparsely vegetated tundra. The best classifications were realized with quad-polarized C-Band and multispectral data and with co-polarized X-Band and multispectral data. The overall accuracy was up to 80% for unsupervised and up to 90% for supervised classifications. The results indicated that the shortwave co-polarized data show promise for the classification of tundra land cover, since the polarimetric information is sensitive to low vegetation and the wetlands. Furthermore, co-polarized data provide a higher spatial resolution than the quad-polarized data. The analysis of the intermediate digital elevation model data of the TanDEM-X showed a high potential for the characterization of the surface morphology. The basic and relative topographic features were shown to be of high relevance for the quantification of the surface morphology, and an area-wide application is feasible. In addition, these data were of value for the classification and delineation of landforms.
Such classifications will assist the delineation of geomorphological units and have the potential to identify locations of current and future morphological activity. N2 - Die polaren Regionen der Erde zeigen eine hohe Sensitivität gegenüber dem aktuell stattfindenden klimatischen Wandel. Für den Raum der Arktis wurde eine signifikante Erwärmung der Landoberfläche beobachtet und zukünftige Prognosen zeigen einen positiven Trend der Temperaturentwicklung. Die Folgen für das System sind tiefgehend, zahlreich und zeigen sich bereits heute - beispielsweise in einer Zunahme der photosynthetischen Aktivität und einer Verstärkung der geomorphologischen Dynamik. Durch satellitengestützte Fernerkundungssysteme steht ein Instrumentarium bereit, welches in der Lage ist, solch großflächige und aktuelle Änderungen der Landoberfläche nachzuzeichnen und zu quantifizieren. Insbesondere optische Systeme haben in den vergangenen Jahren ihre hohe Anwendbarkeit für die kontinuierliche Beobachtung und Quantifizierung von Änderungen bewiesen, bzw. durch sie ist ein Erkennen der Änderungen erst ermöglicht worden. Der Nutzen von optischen Systemen für die Beobachtung der arktischen Landoberfläche wird dabei aber durch die häufige Beschattung durch Wolken und die Beleuchtungsgeometrie erschwert, bzw. unmöglich gemacht. Demgegenüber eröffnen bildgebende Radarsysteme durch die aktive Sendung von elektromagnetischen Signalen die Möglichkeit, kontinuierlich Daten über den Zustand der Oberfläche aufzuzeichnen, ohne von den atmosphärischen oder orbitalen Bedingungen abhängig zu sein. Das Ziel der vorliegenden Arbeit war es, den Nutzen und Mehrwert von polarimetrischen Synthetic Aperture Radar (PolSAR) Daten der Satelliten TerraSAR-X, TanDEM-X, Radarsat-2 und ALOS PALSAR für die Charakterisierung und Klassifikation der arktischen Landoberfläche zu identifizieren. Darüber hinaus war es ein Ziel, das vorläufige interferometrische digitale Höhenmodell der TanDEM-X Mission für die Charakterisierung der Landoberflächen-Morphologie zu verwenden. Die Arbeiten erfolgten hauptsächlich an ausgewählten Testgebieten im Bereich des kanadischen Mackenzie Deltas und im Norden von Banks Island. Für diese Regionen standen in situ erhobene Referenzdaten zur Landbedeckung zur Verfügung. Mit Blick auf den aktuellen Stand der Forschung wurden die Radardaten mit einem entwickelten Non-Local-Means Verfahren gefiltert. Die co-polarisierten Daten wurden zudem mit einer neu entwickelten zwei Komponenten Dekomposition verarbeitet. Das entwickelte Filterverfahren zeigt eine hohe Anwendbarkeit für alle Radardaten. Der Ansatz war in der Lage, die Kanten und Grauwerte im Bild zu erhalten, bei einer gleichzeitigen Reduktion der Varianz und des Speckle-Effekts. Dies verbesserte nicht nur die Bildinterpretation, sondern auch die Bildklassifikation, und eine Erhöhung der Klassifikationsgüte von ca. +10% konnte durch die Filterung erreicht werden. Die Merkmale der Dekomposition von co-polarisierten Daten zeigten eine hohe Korrelation zu den entsprechenden Merkmalen der Dekomposition von voll-polarisierten Daten. Die Korrelation war besonders hoch für Landbedeckungstypen, welche eine double oder single bounce Rückstreuung hervorrufen. Eine Anwendung von co-polarisierten Daten ist somit besonders sinnvoll und aussagekräftig für Landbedeckungstypen, welche nur einen geringen Teil an Volumenstreuung bedingen. Die vergleichende Auswertung der PolSAR Daten zeigte, dass sowohl X- als auch C-Band Daten besonders sensitiv für die untersuchten Landbedeckungsklassen waren.
Die X-Band Daten zeigten die höchste Sensitivität für niedrige Tundrengesellschaften. Die C-Band Daten zeigten eine höhere Sensitivität für mittelhohe Tundrengesellschaften und Gebüsch (shrub). Die L-Band Daten wiesen im Vergleich dazu die geringste Sensitivität für die Oberflächenbedeckung auf. Ein Vergleich von verschiedenen dual-polarisierten Daten zeigte, dass die Kanalkombination HH/VV die beste Differenzierung der Landbedeckungsklassen lieferte. Weniger deutlich war die Differenzierung mit den Kombinationen HH/HV und VV/VH. Insgesamt am besten waren jedoch die voll-polarisierten Daten geeignet, auch wenn die Verbesserung im Vergleich zu den co-polarisierten Daten nur gering war. Die Analyse der Klassifikationsgenauigkeiten bestätigte dieses Bild, machte jedoch deutlich, dass zu einer genauen Landbedeckungsklassifikation die Einbeziehung von multispektraler Information notwendig ist. Eine Nutzung von voll-polarisierten C-Band und multispektralen Daten erbrachte so eine mittlere Güte von ca. 80% für unüberwachte und von ca. 90% für überwachte Klassifikationsverfahren. Ähnlich hohe Werte wurden für die Kombination von co-polarisierten X-Band und multispektralen Daten erreicht. Im Vergleich zu Klassifikationen, die nur auf Grundlage von multispektralen Daten durchgeführt wurden, erbrachte die Einbeziehung der polarisierten Radardaten eine zusätzliche durchschnittliche Klassifikationsgüte von ca. +15%. Der Zugewinn und die Möglichkeit zur Differenzierung waren vor allem für die Bedeckungstypen der Feuchtgebiete (wetlands) und der niedrigen Tundrengesellschaften festzustellen. Die Analyse der digitalen Höhenmodelle zeigte ein hohes Potential der TanDEM-X Daten für die Charakterisierung der topographischen Gegebenheiten. Die aus den Daten abgeleiteten absoluten und relativen topographischen Merkmale waren für eine morphometrische Quantifizierung der Landoberflächen-Morphologie geeignet. Zudem konnten diese Merkmale auch für eine initiale Klassifikation der Landformen genutzt werden. Die Daten zeigten somit ein hohes Potential für die Unterstützung der geomorphologischen Kartierung und für die Identifizierung der aktuellen und zukünftigen Dynamik der Landoberfläche. KW - Mackenzie-River-Delta KW - Banks Islands KW - Radarfernerkundung KW - Topografie KW - Formmessung KW - Klassifikation KW - Relief KW - PolSAR KW - Synthetic Aperture Radar KW - Land Cover Classification KW - Digital Elevation Model KW - Arctic Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-115719 ER - TY - JOUR A1 - Pawellek, Ruben A1 - Krmar, Jovana A1 - Leistner, Adrian A1 - Djajić, Nevena A1 - Otašević, Biljana A1 - Protić, Ana A1 - Holzgrabe, Ulrike T1 - Charged aerosol detector response modeling for fatty acids based on experimental settings and molecular features: a machine learning approach JF - Journal of Cheminformatics N2 - The charged aerosol detector (CAD) is the latest representative of aerosol-based detectors that generate a response independent of the analytes' chemical structure. This study was aimed at accurately predicting the CAD response of homologous fatty acids under varying experimental conditions. Fatty acids from C12 to C18 were used as model substances due to their semivolatile characteristics, which caused non-uniform CAD behaviour. Considering both experimental conditions and molecular descriptors, a mixed quantitative structure-property relationship (QSPR) modeling was performed using Gradient Boosted Trees (GBT).
The ensemble of 10 decision trees (learning rate set at 0.55, maximal depth set at 5, and sample rate set at 1.0) was able to explain approximately 99% (Q²: 0.987, RMSE: 0.051) of the observed variance in CAD responses. Validation using an external test compound confirmed the high predictive ability of the established model (R²: 0.990, RMSEP: 0.050). With respect to the intrinsic attribute selection strategy, GBT used almost all independent variables during model building. Finally, it attributed the highest importance to the power function value, the flow rate of the mobile phase, the evaporation temperature, the content of the organic solvent in the mobile phase, and molecular descriptors such as molecular weight (MW), Radial Distribution Function-080/weighted by mass (RDF080m) and the average coefficient of the last eigenvector from the distance/detour matrix (Ve2_D/Dt). The identification of the factors most relevant to CAD responsiveness has contributed to a better understanding of the underlying mechanisms of signal generation. The increased CAD response obtained for acetone as organic modifier demonstrated its potential to replace the more expensive and environmentally harmful acetonitrile. KW - High-performance liquid chromatography (HPLC) KW - Charged aerosol detector (CAD) KW - Gradient boosted trees (GBT) KW - Quantitative structure-property relationship modeling (QSPR) KW - Fatty acids Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-261618 VL - 13 IS - 1 ER - TY - JOUR A1 - Hentschel, Simon A1 - Kobs, Konstantin A1 - Hotho, Andreas T1 - CLIP knows image aesthetics JF - Frontiers in Artificial Intelligence N2 - Most Image Aesthetic Assessment (IAA) methods use a pretrained ImageNet classification model as a base to fine-tune. We hypothesize that content classification is not an optimal pretraining task for IAA, since the task discourages the extraction of features that are useful for IAA, e.g., composition, lighting, or style. On the other hand, we argue that the Contrastive Language-Image Pretraining (CLIP) model is a better base for IAA models, since it has been trained using natural language supervision. Due to the rich nature of language, CLIP needs to learn a broad range of image features that correlate with sentences describing the image content, composition, environments, and even subjective feelings about the image. While it has been shown that CLIP extracts features useful for content classification tasks, its suitability for tasks that require the extraction of style-based features like IAA has not yet been shown. We test our hypothesis by conducting a three-step study, investigating the usefulness of features extracted by CLIP compared to features obtained from the last layer of a comparable ImageNet classification model. In each step, we get more computationally expensive. First, we engineer natural language prompts that let CLIP assess an image's aesthetic without adjusting any weights in the model. To overcome the challenge that CLIP's prompting is only applicable to classification tasks, we propose a simple but effective strategy to convert multiple prompts to a continuous scalar, as required when predicting an image's mean aesthetic score. Second, we train a linear regression on the AVA dataset using image features obtained by CLIP's image encoder. The resulting model outperforms a linear regression trained on features from an ImageNet classification model.
It also shows competitive performance with fully fine-tuned networks based on ImageNet, while only training a single layer. Finally, by fine-tuning CLIP's image encoder on the AVA dataset, we show that CLIP only needs a fraction of the training epochs to converge, while also performing better than a fine-tuned ImageNet model. Overall, our experiments suggest that CLIP is better suited as a base model for IAA methods than ImageNet pretrained networks. KW - Image Aesthetic Assessment KW - CLIP KW - language-image pre-training KW - text supervision KW - prompt engineering KW - AVA Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-297150 SN - 2624-8212 VL - 5 ER - TY - RPRT A1 - Le, Duy Thanh A1 - Großmann, Marcel A1 - Krieger, Udo R. T1 - Cloudless Resource Monitoring in a Fog Computing System Enabled by an SDN/NFV Infrastructure T2 - Würzburg Workshop on Next-Generation Communication Networks (WueWoWas'22) N2 - Today's advanced Internet-of-Things applications raise technical challenges for cloud, edge, and fog computing. The design of an efficient, virtualized, context-aware, self-configuring orchestration system for a fog computing system constitutes a major development effort within this very innovative area of research. In this paper we describe the architecture and relevant implementation aspects of a cloudless resource monitoring system interworking with an SDN/NFV infrastructure. It realizes the basic monitoring component of the fundamental MAPE-K principles employed in autonomic computing. Here we present the hierarchical layering and functionality within the underlying fog nodes to generate a working prototype of an intelligent, self-managed orchestrator for advanced IoT applications and services. The latter system has the capability to automatically monitor various performance aspects of the resource allocation among multiple hosts of a fog computing system interconnected by SDN. KW - Datennetz KW - fog computing KW - SDN/NFV KW - container virtualization KW - autonomic orchestration KW - docker Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-280723 ER - TY - JOUR A1 - Schokraie, Elham A1 - Warnken, Uwe A1 - Hotz-Wagenblatt, Agnes A1 - Grohme, Markus A. A1 - Hengherr, Steffen A1 - Förster, Frank A1 - Schill, Ralph O. A1 - Frohme, Marcus A1 - Dandekar, Thomas A1 - Schnölzer, Martina T1 - Comparative proteome analysis of Milnesium tardigradum in early embryonic state versus adults in active and anhydrobiotic state JF - PLoS One N2 - Tardigrades have fascinated researchers for more than 300 years because of their extraordinary capability to undergo cryptobiosis and survive extreme environmental conditions. However, the survival mechanisms of tardigrades are still poorly understood, mainly due to the absence of detailed knowledge about the proteome and genome of these organisms. Our study was intended to provide a basis for the functional characterization of expressed proteins in different states of tardigrades. High-throughput, high-accuracy proteomics in combination with a newly developed tardigrade-specific protein database resulted in the identification of more than 3000 proteins in three different states: the early embryonic state and adult animals in the active and anhydrobiotic states. This comprehensive proteome resource includes protein families such as chaperones, antioxidants, ribosomal proteins, cytoskeletal proteins, transporters, protein channels, nutrient reservoirs, and developmental proteins.
A comparative analysis of protein families in the different states was performed by calculating the exponentially modified protein abundance index, which classifies proteins into major and minor components. This is a first step toward analyzing the proteins involved in early embryonic development, and furthermore the proteins which might play an important role in the transition into the anhydrobiotic state. KW - life-span regulation KW - genes KW - Yolk protein KW - water stress KW - expression KW - tolerance KW - richtersius coronifer KW - superoxide-dismutase KW - caenorhabditis elegans KW - arabidopsis thaliana KW - vitellogenin Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-134447 VL - 7 IS - 9 ER - TY - JOUR A1 - Hossfeld, Tobias A1 - Heegaard, Poul E. A1 - Kellerer, Wolfgang T1 - Comparing the scalability of communication networks and systems JF - IEEE Access N2 - Scalability is often mentioned in the literature, but a stringent definition is missing. In particular, there is no general scalability assessment which clearly indicates whether a system scales or not, or whether a system scales better than another. The key contribution of this article is the definition of a scalability index (SI) which quantifies whether a system scales in comparison to another system, a hypothetical system, e.g., a linear system, or the theoretically optimal system. The suggested SI generalizes different metrics from the literature, which are special cases of our SI. The primary target of our scalability framework is, however, the benchmarking of two systems, which does not require any reference system. The SI is demonstrated and evaluated for different use cases: (1) the performance of an IoT load balancer depending on the system load; (2) the availability of a communication system depending on the size and structure of the network; (3) the scalability comparison of different location selection mechanisms in fog computing with respect to delays and energy consumption; and (4) the comparison of time-sensitive networking (TSN) mechanisms in terms of efficiency and utilization. Finally, we discuss how to use and how not to use the SI and give recommendations and guidelines in practice. To the best of our knowledge, this is the first work which provides a general SI for the comparison and benchmarking of systems, which is the primary target of our scalability analysis. KW - communication networks KW - performance KW - availability KW - scalability Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-349403 VL - 11 ER - TY - THES A1 - Spoerhase, Joachim T1 - Competitive and Voting Location T1 - Kompetitive und präferenzbasierte Standortprobleme N2 - We consider competitive location problems where two competing providers place their facilities sequentially and users can decide between the competitors. We assume that both competitors act non-cooperatively and aim at maximizing their own benefits. We investigate the complexity and approximability of such problems on graphs, in particular on simple graph classes such as trees and paths. We also develop fast algorithms for single competitive location problems where each provider places a single facility. Voting location, in contrast, aims at identifying locations that meet social criteria. The provider wants to satisfy the users (customers) of the facility to be opened. In general, there is no location that is favored by all users. Therefore, a satisfactory compromise has to be found.
To this end, criteria arising from voting theory are considered. The solution of the location problem is understood as the winner of a virtual election among the users of the facilities, in which the potential locations play the role of the candidates and the users represent the voters. Competitive and voting location problems turn out to be closely related. N2 - Wir betrachten kompetitive Standortprobleme, bei denen zwei konkurrierende Anbieter ihre Versorger sequenziell platzieren und die Kunden sich zwischen den Konkurrenten entscheiden können. Wir nehmen an, dass beide Konkurrenten nicht-kooperativ agieren und auf die Maximierung ihres eigenen Vorteils abzielen. Wir untersuchen die Komplexität und Approximierbarkeit solcher Probleme auf Graphen, insbesondere auf einfachen Graphklassen wie Bäumen und Pfaden. Ferner entwickeln wir schnelle Algorithmen für kompetitive Einzelstandortprobleme, bei denen jeder Anbieter genau einen Versorger errichtet. Im Gegensatz dazu geht es bei Voting-Standortproblemen um die Bestimmung eines Standorts, der die Benutzer oder Kunden soweit wie möglich zufrieden stellt. Solche Fragestellungen sind beispielsweise bei der Planung öffentlicher Einrichtungen relevant. In den meisten Fällen gibt es keinen Standort, der von allen Benutzern favorisiert wird. Daher muss ein Kompromiss gefunden werden. Hierzu werden Kriterien betrachtet, die auch in Wahlsystemen eingesetzt werden: Ein geeigneter Standort wird als Sieger einer gedachten Wahl verstanden, bei der die möglichen Standorte die zur Wahl stehenden Kandidaten und die Kunden die Wähler darstellen. Kompetitive Standortprobleme und Voting-Standortprobleme erweisen sich als eng miteinander verwandt. KW - Standortproblem KW - NP-hartes Problem KW - Approximationsalgorithmus KW - Graph KW - Effizienter Algorithmus KW - competitive location KW - voting location KW - NP-hardness KW - approximation algorithm KW - efficient algorithm KW - graph KW - tree KW - graph decomposition Y1 - 2009 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-52978 ER - TY - THES A1 - Kosub, Sven T1 - Complexity and Partitions T1 - Komplexität von Partitionen N2 - Computational complexity theory usually investigates the complexity of sets, i.e., the complexity of partitions into two parts. But often it is more appropriate to represent natural problems by partitions into more than two parts. A particularly interesting class of such problems consists of classification problems for relations. For instance, a binary relation R typically defines a partitioning of the set of all pairs (x,y) into four parts, classifiable according to the cases where R(x,y) and R(y,x) hold, only R(x,y) or only R(y,x) holds, or even neither R(x,y) nor R(y,x) is true. By means of concrete classification problems such as Graph Embedding or Entailment (for propositional logic), this thesis systematically develops tools, in the shape of the Boolean hierarchy of NP-partitions and its refinements, for the qualitative analysis of the complexity of partitions generated by NP-relations. The Boolean hierarchy of NP-partitions is introduced as a generalization of the well-known and well-studied Boolean hierarchy (of sets) over NP. Whereas the latter hierarchy has a very simple structure, the situation is much more complicated for the case of partitions into at least three parts. To get an idea of this hierarchy, alternative descriptions of the partition classes are given in terms of finite, labeled lattices.
Based on these characterizations, the Embedding Conjecture is posed, which provides complete information on the structure of the hierarchy. This conjecture is supported by several results. A natural extension of the Boolean hierarchy of NP-partitions emerges from the lattice-characterization of its classes by considering partition classes generated by finite, labeled posets. It turns out that all significant ideas translate from the case of lattices. The induced refined Boolean hierarchy of NP-partitions enables us to capture the complexity of certain relations (such as Graph Embedding) more accurately and to describe projectively closed partition classes. N2 - Die klassische Komplexitätstheorie untersucht in erster Linie die Komplexität von Mengen, d.h. von Zerlegungen (Partitionen) einer Grundmenge in zwei Teile. Häufig werden aber natürliche Fragestellungen viel angemessener durch Zerlegungen in mehr als zwei Teile abgebildet. Eine besonders interessante Klasse solcher Fragestellungen sind Klassifikationsprobleme für Relationen. Zum Beispiel definiert eine Binärrelation R typischerweise eine Zerlegung der Menge aller Paare (x,y) in vier Teile, klassifizierbar danach, ob R(x,y) und R(y,x), R(x,y) aber nicht R(y,x), nicht R(x,y) aber dafür R(y,x) oder weder R(x,y) noch R(y,x) gilt. Anhand konkreter Klassifikationsprobleme, wie zum Beispiel der Einbettbarkeit von Graphen und der Folgerbarkeit für aussagenlogische Formeln, werden in der Dissertation Instrumente für eine qualitative Analyse der Komplexität von Partitionen, die von NP-Relationen erzeugt werden, in Form der Booleschen Hierarchie der NP-Partitionen und ihrer Erweiterungen systematisch entwickelt. Die Boolesche Hierarchie der NP-Partitionen wird als Verallgemeinerung der bereits bekannten und wohluntersuchten Booleschen Hierarchie über NP eingeführt. Während die letztere Hierarchie eine sehr einfache Struktur aufweist, stellt sich die Boolesche Hierarchie der NP-Partitionen im Falle von Zerlegungen in mindestens 3 Teile als sehr viel komplizierter heraus. Um einen Überblick über diese Hierarchien zu erlangen, werden alternative Beschreibungen der Klassen der Hierarchien mittels endlicher, bewerteter Verbände angegeben. Darauf aufbauend wird die Einbettungsvermutung aufgestellt, die uns die vollständige Information über die Struktur der Hierarchie liefert. Diese Vermutung wird mit verschiedenen Resultaten untermauert. Eine Erweiterung der Booleschen Hierarchie der NP-Partitionen ergibt sich auf natürliche Weise aus der Charakterisierung ihrer Klassen durch Verbände. Dazu werden Klassen betrachtet, die von endlichen, bewerteten Halbordnungen erzeugt werden. Es zeigt sich, dass die wesentlichen Konzepte vom Verbandsfall übertragen werden können. Die entstehende Verfeinerung der Booleschen Hierarchie der NP-Partitionen ermöglicht die exaktere Analyse der Komplexität bestimmter Relationen (wie zum Beispiel der Einbettbarkeit von Graphen) und die Beschreibung projektiv abgeschlossener Partitionenklassen.
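To make the four-part classification from the record above concrete, a minimal sketch in Python; the divisibility relation is only an assumed stand-in for an arbitrary relation:

```python
# Minimal sketch: partition all pairs (x, y) into the four parts induced by a
# binary relation R. The divisibility relation below is only an assumed
# stand-in for an arbitrary relation.

def classify(pairs, R):
    """Assign each pair (x, y) to one of four parts based on R(x,y) and R(y,x)."""
    parts = {"both": [], "only_xy": [], "only_yx": [], "neither": []}
    for x, y in pairs:
        fwd, bwd = R(x, y), R(y, x)
        key = ("both" if fwd and bwd else
               "only_xy" if fwd else
               "only_yx" if bwd else "neither")
        parts[key].append((x, y))
    return parts

divides = lambda x, y: y % x == 0  # example relation R(x, y): "x divides y"
pairs = [(x, y) for x in range(1, 5) for y in range(1, 5)]
for part, members in classify(pairs, divides).items():
    print(part, members)
```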
KW - Partition KW - Boolesche Hierarchie KW - Komplexitätsklasse NP KW - Theoretische Informatik KW - Komplexitätstheorie KW - NP KW - Boolesche Hierarchie KW - Partitionen KW - Verbände KW - Halbordnungen KW - Theoretical computer science KW - computational complexity KW - NP KW - Boolean hierarchy KW - partitions KW - lattices KW - posets Y1 - 2001 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-2808 ER - TY - JOUR A1 - Böhler, Elmar A1 - Creignou, Nadia A1 - Galota, Matthias A1 - Reith, Steffen A1 - Schnoor, Henning A1 - Vollmer, Heribert T1 - Complexity Classifications for Different Equivalence and Audit Problems for Boolean Circuits JF - Logical Methods in Computer Science N2 - We study Boolean circuits as a representation of Boolean functions and consider different equivalence, audit, and enumeration problems. For a number of restricted sets of gate types (bases) we obtain efficient algorithms, while for all other gate types we show that these problems are at least NP-hard. KW - hierarchy KW - satisfiability problems Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-131121 VL - 8 IS - 3:27 SP - 1 EP - 25 ER - TY - JOUR A1 - Scherer, Marc A1 - Fleishman, Sarel J. A1 - Jones, Patrik R. A1 - Dandekar, Thomas A1 - Bencurova, Elena T1 - Computational Enzyme Engineering Pipelines for Optimized Production of Renewable Chemicals JF - Frontiers in Bioengineering and Biotechnology N2 - To enable a sustainable supply of chemicals, novel biotechnological solutions are required that replace the reliance on fossil resources. One potential solution is to utilize tailored biosynthetic modules for the metabolic conversion of CO2 or organic waste to chemicals and fuel by microorganisms. Currently, it is challenging to commercialize biotechnological processes for renewable chemical biomanufacturing because of a lack of highly active and specific biocatalysts. As experimental methods to engineer biocatalysts are time- and cost-intensive, it is important to establish efficient and reliable computational tools that can speed up the identification or optimization of selective, highly active, and stable enzyme variants for utilization in the biotechnological industry. Here, we review and suggest combinations of effective state-of-the-art software and online tools available for computational enzyme engineering pipelines to optimize metabolic pathways for the biosynthesis of renewable chemicals. Using examples relevant for biotechnology, we explain the underlying principles of enzyme engineering and design and illuminate future directions for automated optimization of biocatalysts for the assembly of synthetic metabolic pathways. KW - computational KW - enzyme KW - engineering KW - design KW - biomanufacturing KW - biofuel KW - microbes KW - metabolism Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-240598 SN - 2296-4185 VL - 9 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Computer-based Textual Documents Collation System for Reconstructing the Original Text from Automatically Identified Base Text and Ranked Witnesses N2 - Given a collection of diverging documents about some lost original text, any person interested in the text would try reconstructing it from the diverging documents.
Whether it is eclecticism, stemmatics, or copy-text, one is expected to explicitly or indirectly select one of the documents as a starting point or as a base text, which could be emended through comparison with the remaining documents, so that a text that could be designated as the original document is generated. Unfortunately, the process of giving priority to one of the documents, also known as witnesses, is a subjective approach. In fact, even Cladistics, which could be considered a computer-based implementation of stemmatics, does not suggest or recommend selecting a certain witness as a starting point for the process of reconstructing the original document. In this study, a computational method using a rule-based Bayesian classifier is used to assist text scholars in their attempts at reconstructing a non-existing document from some available witnesses. The method developed in this study consists of selecting a base text successively and collating it with the remaining documents. Each completed collation cycle stores the selected base text and its closest witness, along with a weighted score of their similarities and differences. At the end of the collation process, the witness selected most often by the majority of base texts is considered the probable base text of the collection. Witnesses' scores are weighted using a weighting system based on the effects of the types of textual modifications on the process of reconstructing original documents. Users have the possibility to choose between baseless and base-text collation. If a base text is selected, the task is reduced to ranking the witnesses with respect to the base text; otherwise, a base text as well as a ranking of the witnesses with respect to the base text are computed and displayed on a bar diagram. Additionally, this study includes a recursive algorithm for automatically reconstructing the original text from the identified base text and ranked witnesses. KW - Textvergleich KW - Text Mining KW - Textual document collation KW - Base text KW - Reconstruction of original text KW - Gothenburg model KW - Bayesian classifier KW - Textual alterations weighting system Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-65749 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Computing Generic Causes of Revelation of the Quranic Verses Using Machine Learning Techniques N2 - Because many verses of the holy Quran are similar, there is a high probability that similar verses addressing the same issues share the same generic causes of revelation. In this study, machine learning techniques have been employed in order to automatically derive the causes of revelation of Quranic verses. The derivation of the causes of revelation is viewed as a classification problem. Initially, the categories are based on the verses with known causes of revelation, and the testing set consists of the remaining verses. Based on a computed threshold value, a naïve Bayesian classifier is used to categorize some verses. After that, using a decision tree classifier, the remaining uncategorized verses are separated into verses that contain indicators (resultative connectors, causative expressions…) and those that do not. As for those verses having indicators, each one is segmented into its constituent clauses by identification of the linking indicators. Then a dominant clause is extracted and considered either as the cause of revelation, or post-processed by adding or subtracting some terms to form a causal clause that constitutes the cause of revelation.
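A minimal sketch of the thresholded naïve Bayesian assignment described in the record above; the tokenization, the add-one smoothing, and the concrete threshold value are assumptions, not the study's actual configuration:

```python
# Minimal sketch: assign a verse to a known category only if the naive
# Bayesian posterior clears a threshold; otherwise defer it to the next
# pipeline stage. Smoothing and threshold are assumed values.
import math
from collections import Counter

def train(labeled_texts):
    """labeled_texts: list of (tokens, label). Returns token counts per class."""
    counts, doc_totals = {}, Counter()
    for tokens, label in labeled_texts:
        counts.setdefault(label, Counter()).update(tokens)
        doc_totals[label] += 1
    return counts, doc_totals

def posterior(tokens, counts, doc_totals):
    """Log-space naive Bayes with add-one smoothing; returns (best_label, prob)."""
    vocab = {t for c in counts.values() for t in c}
    scores = {}
    for label, c in counts.items():
        score = math.log(doc_totals[label] / sum(doc_totals.values()))
        for t in tokens:
            score += math.log((c[t] + 1) / (sum(c.values()) + len(vocab)))
        scores[label] = score
    m = max(scores.values())
    z = sum(math.exp(s - m) for s in scores.values())  # normalize to probabilities
    best = max(scores, key=scores.get)
    return best, math.exp(scores[best] - m) / z

THRESHOLD = 0.8  # assumed; the study computes its own threshold value
def classify_or_defer(tokens, counts, doc_totals):
    label, p = posterior(tokens, counts, doc_totals)
    return label if p >= THRESHOLD else None  # None: defer to later stages

docs = [(["rain", "flood"], "nature"), (["war", "battle"], "conflict")]
print(classify_or_defer(["flood", "rain"], *train(docs)))  # -> nature
```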
Concerning the remaining unclassified verses without indicators, a naive Bayesian classifier is again used to assign each one of them to one of the existing classes based on feature and topic similarity. As for verses that could not be classified so far, manual classification was performed by considering each verse as a category on its own. The result obtained in this study is encouraging and shows that automatic derivation of the generic causes of revelation of Quranic verses is achievable, and reasonably reliable for understanding and implementing the teachings of the Quran. KW - Text Mining KW - Koran KW - Text mining KW - Statistical classifiers KW - Text segmentation KW - Causes of revelation KW - Quran Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-66083 ER - TY - JOUR A1 - Latoschik, Marc Erich A1 - Wienrich, Carolin T1 - Congruence and plausibility, not presence: pivotal conditions for XR experiences and effects, a novel approach JF - Frontiers in Virtual Reality N2 - Presence is often considered the most important quale describing the subjective feeling of being in a computer-generated and/or computer-mediated virtual environment. The identification and separation of orthogonal presence components, i.e., the place illusion and the plausibility illusion, has been an accepted theoretical model describing Virtual Reality (VR) experiences for some time. This perspective article challenges this presence-oriented VR theory. First, we argue that a place illusion cannot be the major construct to describe the much wider scope of virtual, augmented, and mixed reality (VR, AR, MR; or XR for short). Second, we argue that there is no plausibility illusion but merely plausibility, and we derive the place illusion caused by the congruent and plausible generation of spatial cues, and similarly for all the current model's so-defined illusions. Finally, we propose congruence and plausibility to become the central essential conditions in a novel theoretical model describing XR experiences and effects. KW - XR KW - experience KW - presence KW - congruence KW - plausibility KW - coherence KW - theory KW - prediction Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-284787 SN - 2673-4192 VL - 3 ER - TY - THES A1 - Löffler, Andre T1 - Constrained Graph Layouts: Vertices on the Outer Face and on the Integer Grid T1 - Graphzeichnen unter Nebenbedingungen: Knoten auf der Außenfacette und mit ganzzahligen Koordinaten N2 - Constraining graph layouts - that is, restricting the placement of vertices and the routing of edges to obey certain constraints - is common practice in graph drawing. In this book, we discuss algorithmic results on two different restriction types: placing vertices on the outer face and on the integer grid. For the first type, we look into the outer k-planar and outer k-quasi-planar graphs, and give a linear-time algorithm based on Monadic Second-order Logic to recognize full and closed outer k-planar graphs. For the second type, we consider the problem of transferring a given planar drawing onto the integer grid while preserving the original drawing's topology; we also generalize a variant of Cauchy's rigidity theorem for orthogonal polyhedra of genus 0 to those of arbitrary genus. N2 - Das Einschränken von Zeichnungen von Graphen, sodass diese bestimmte Nebenbedingungen erfüllen - etwa solche, die das Platzieren von Knoten oder den Verlauf von Kanten beeinflussen - ist im Graphzeichnen allgegenwärtig.
In dieser Arbeit befassen wir uns mit algorithmischen Resultaten zu zwei speziellen Einschränkungen, nämlich dem Platzieren von Knoten entweder auf der Außenfacette oder auf ganzzahligen Koordinaten. Für die erste Einschränkung untersuchen wir die außen k-planaren und außen k-quasi-planaren Graphen und geben einen auf monadische Prädikatenlogik zweiter Stufe basierenden Algorithmus an, der überprüft, ob ein Graph voll außen k-planar ist. Für die zweite Einschränkung untersuchen wir das Problem, eine gegebene planare Zeichnung eines Graphen auf das ganzzahlige Koordinatengitter zu transportieren, ohne dabei die Topologie der Zeichnung zu verändern; außerdem generalisieren wir eine Variante von Cauchys Starrheitssatz für orthogonale Polyeder von Geschlecht 0 auf solche von beliebigem Geschlecht. KW - Graphenzeichnen KW - Komplexität KW - Algorithmus KW - Algorithmische Geometrie KW - Kombinatorik KW - Planare Graphen KW - Polyeder KW - Konvexe Zeichnungen Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-215746 SN - 978-3-95826-146-4 SN - 978-3-95826-147-1 N1 - Parallel erschienen als Druckausgabe in Würzburg University Press, ISBN 978-3-95826-146-4, 32,90 EUR PB - Würzburg University Press CY - Würzburg ET - 1. Auflage ER - TY - JOUR A1 - Glémarec, Yann A1 - Lugrin, Jean-Luc A1 - Bosser, Anne-Gwenn A1 - Buche, Cédric A1 - Latoschik, Marc Erich T1 - Controlling the stage: a high-level control system for virtual audiences in Virtual Reality JF - Frontiers in Virtual Reality N2 - This article presents a novel method for controlling a virtual audience system (VAS) in a Virtual Reality (VR) application, called STAGE, which was originally designed for supervised public speaking training in university seminars dedicated to the preparation and delivery of scientific talks. We are interested in creating pedagogical narratives: narratives encompass affective phenomena, and rather than organizing events changing the course of a training scenario, pedagogical plans using our system focus on organizing the affects it arouses for the trainees. Efficiently controlling a virtual audience towards a specific training objective while evaluating the speaker's performance presents a challenge for a seminar instructor: controlling the virtual audience places high cognitive and physical demands on the instructor, who must evaluate the speaker's performance while adjusting the audience and allowing it to quickly react to the user's behaviors and interactions. It is indeed a critical limitation of a number of existing systems that they rely on a Wizard of Oz approach, where the tutor drives the audience in reaction to the user's performance. We address this problem by integrating with a VAS a high-level control component for tutors, which allows using predefined audience behavior rules, defining custom ones, as well as intervening during run-time for finer control of the unfolding of the pedagogical plan. At its core, this component offers a tool to program, select, modify and monitor interactive training narratives using a high-level representation.
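Purely as an illustration of what such a high-level, rule-based audience representation could look like; every name and field below is hypothetical and not part of the actual STAGE API:

```python
# Hypothetical sketch of a declarative audience-behavior rule; none of these
# names come from STAGE itself.
from dataclasses import dataclass

@dataclass
class AudienceRule:
    trigger: str      # observed speaker metric, e.g. seconds of silence
    threshold: float  # value above which the rule fires
    reaction: str     # audience behavior to play back
    intensity: float  # fraction of virtual spectators that react

rules = [
    AudienceRule("pause_seconds", 3.0, "lose_attention", 0.4),
    AudienceRule("eye_contact_ratio", 0.7, "nod", 0.2),
]

def react(metrics, rules):
    """Return the reactions whose trigger metric meets its threshold."""
    return [(r.reaction, r.intensity)
            for r in rules if metrics.get(r.trigger, 0.0) >= r.threshold]

print(react({"pause_seconds": 4.2, "eye_contact_ratio": 0.5}, rules))
```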
The STAGE offers the following features: i) a high-level API to program pedagogical narratives focusing on a specific public speaking situation and training objectives, ii) an interactive visualization interface, iii) computation and visualization of user metrics, iv) a semi-autonomous virtual audience composed of virtual spectators with automatic reactions to the speaker and surrounding spectators while following the pedagogical plan, and v) the possibility for the instructor to embody a virtual spectator to ask questions or guide the speaker from within the Virtual Environment. We present here the design and implementation of the tutoring system and its integration in STAGE, and discuss its reception by end-users. KW - virtual reality KW - virtual agent KW - behavior perception KW - public speaking KW - education Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-284601 SN - 2673-4192 VL - 3 ER - TY - JOUR A1 - Steininger, Michael A1 - Abel, Daniel A1 - Ziegler, Katrin A1 - Krause, Anna A1 - Paeth, Heiko A1 - Hotho, Andreas T1 - ConvMOS: climate model output statistics with deep learning JF - Data Mining and Knowledge Discovery N2 - Climate models are the tool of choice for scientists researching climate change. Like all models, they suffer from errors, particularly systematic and location-specific representation errors. One way to reduce these errors is model output statistics (MOS), where the model output is fitted to observational data with machine learning. In this work, we assess the use of convolutional Deep Learning climate MOS approaches and present the ConvMOS architecture, which is specifically designed based on the observation that there are systematic and location-specific errors in the precipitation estimates of climate models. We apply ConvMOS models to the simulated precipitation of the regional climate model REMO, showing that a combination of per-location model parameters for reducing location-specific errors and global model parameters for reducing systematic errors is indeed beneficial for MOS performance. We find that ConvMOS models can reduce errors considerably and perform significantly better than three commonly used MOS approaches and plain ResNet and U-Net models in most cases. Our results show that non-linear MOS models underestimate the number of extreme precipitation events, which we alleviate by training models specialized towards extreme precipitation events with the imbalanced regression method DenseLoss. While we consider climate MOS, we argue that aspects of ConvMOS may also be beneficial in other domains with geospatial data, such as air pollution modeling or weather forecasts. KW - Klima KW - Modell KW - Deep learning KW - Neuronales Netz KW - climate KW - neural networks KW - model output statistics Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-324213 SN - 1384-5810 VL - 37 IS - 1 ER - TY - THES A1 - Fink, Martin T1 - Crossings, Curves, and Constraints in Graph Drawing T1 - Kreuzungen, Kurven und Constraints beim Zeichnen von Graphen N2 - In many cases, problems, data, or information can be modeled as graphs. Graphs can be used as a tool for modeling in any case where connections between distinguishable objects occur. Any graph consists of a set of objects, called vertices, and a set of connections, called edges, such that any edge connects a pair of vertices.
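A minimal sketch of this vertex-and-edge representation as a data structure:

```python
# Minimal sketch: a graph as a set of vertices and a set of edges,
# stored as an adjacency list.
from collections import defaultdict

class Graph:
    def __init__(self):
        self.adj = defaultdict(set)  # vertex -> set of neighboring vertices

    def add_edge(self, u, v):
        """An edge connects the pair of vertices (u, v)."""
        self.adj[u].add(v)
        self.adj[v].add(u)

g = Graph()
g.add_edge(1, 2)
g.add_edge(2, 3)
print(dict(g.adj))  # {1: {2}, 2: {1, 3}, 3: {2}}
```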
For example, a social network can be modeled by a graph by transforming the users of the network into vertices and friendship relations between users into edges. Physical networks like computer networks or transportation networks, for example the metro network of a city, can also be seen as graphs. To make graphs, and thereby the data that is modeled, well understandable for users, we need a visualization. Graph drawing deals with algorithms for visualizing graphs. In this thesis, especially the use of crossings and curves is investigated for graph drawing problems under additional constraints. The constraints that occur in the problems investigated in this thesis especially restrict the positions of (a part of) the vertices; this is done either as a hard constraint or as an optimization criterion. N2 - Viele Probleme, Informationen oder Daten lassen sich mit Hilfe von Graphen modellieren. Graphen können überall dort eingesetzt werden, wo Verbindungen zwischen unterscheidbaren Objekten auftreten. Ein Graph besteht aus einer Menge von Objekten, genannt Knoten, und einer Menge von Verbindungen, genannt Kanten, zwischen je einem Paar von Knoten. Ein soziales Netzwerk lässt sich etwa als Graph modellieren, indem die teilnehmenden Personen als Knoten und Freundschaftsbeziehungen als Kanten dargestellt werden. Physikalische Netzwerke wie etwa Computernetze oder Transportnetze - wie beispielsweise das U-Bahnliniennetz einer Stadt - lassen sich ebenfalls als Graph auffassen. Um Graphen und die damit modellierten Daten gut erfassen zu können, benötigen wir eine Visualisierung. Das Graphenzeichnen befasst sich mit dem Entwickeln von Algorithmen zur Visualisierung von Graphen. Diese Dissertation beschäftigt sich insbesondere mit dem Einsatz von Kreuzungen und Kurven beim Zeichnen von Graphen unter Nebenbedingungen (Constraints). Die in den untersuchten Problemen auftretenden Nebenbedingungen sorgen unter anderem dafür, dass die Lage eines Teils der Knoten - als feste Anforderung oder als Optimierungskriterium - vorgegeben ist. KW - Graphenzeichnen KW - Kreuzung KW - Kurve KW - Graph KW - graph drawing KW - crossing minimization KW - curves KW - labeling KW - metro map KW - Kreuzungsminimierung KW - Landkartenbeschriftung KW - U-Bahnlinienplan Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-98235 SN - 978-3-95826-002-3 (print) SN - 978-3-95826-003-0 (online) PB - Würzburg University Press ER - TY - RPRT A1 - Metzger, Florian T1 - Crowdsensed QoE for the community - a concept to make QoE assessment accessible N2 - In recent years several community testbeds as well as participatory sensing platforms have successfully established themselves to provide open data to everyone interested. Each of them has a specific goal in mind, ranging from collecting radio coverage data up to environmental and radiation data. Such data can be used by the community in their decision making, whether to subscribe to a specific mobile phone service that provides good coverage in an area or to find a sunny and warm region for the summer holidays. However, the existing platforms usually limit themselves to directly measurable network QoS. If such a crowdsourced data set provided more in-depth derived measures, it would enable even better decision making. A community-driven crowdsensing platform that derives spatial application-layer user experience from resource-friendly bandwidth estimates would be such a case; video streaming services come to mind as a prime example.
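One resource-friendly way to derive an application-layer quality estimate from a bandwidth measurement is a logarithmic mapping, in the spirit of common QoE models; the coefficients below are illustrative assumptions, not values from this report:

```python
# Illustrative sketch: map a crowdsensed bandwidth estimate (Mbit/s) to a
# mean-opinion-score-like value on a 1-5 scale. The coefficients are assumed
# for illustration; they are not fitted values from this report.
import math

def bandwidth_to_mos(mbits, a=1.0, b=1.3, lo=1.0, hi=5.0):
    mos = a + b * math.log(1.0 + mbits)  # diminishing returns at high rates
    return max(lo, min(hi, mos))

for rate in (0.5, 2, 8, 32):
    print(f"{rate:5.1f} Mbit/s -> MOS {bandwidth_to_mos(rate):.2f}")
```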
In this paper we present a concept for such a system based on an initial prototype that eases the collection of data necessary to determine mobile-specific QoE at large scale. In addition we reason why the simple quality metric proposed here can hold its own. KW - Quality of Experience KW - Crowdsourcing KW - Crowdsensing KW - QoE Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-203748 N1 - Originally written in 2017, but never published. ER - TY - JOUR A1 - Du, Shitong A1 - Lauterbach, Helge A. A1 - Li, Xuyou A1 - Demisse, Girum G. A1 - Borrmann, Dorit A1 - Nüchter, Andreas T1 - Curvefusion — A Method for Combining Estimated Trajectories with Applications to SLAM and Time-Calibration JF - Sensors N2 - Mapping and localization of mobile robots in an unknown environment are essential for most high-level operations like autonomous navigation or exploration. This paper presents a novel approach for combining estimated trajectories, namely curvefusion. The robot used in the experiments is equipped with a horizontally mounted 2D profiler, a constantly spinning 3D laser scanner and a GPS module. The proposed algorithm first combines trajectories from different sensors to optimize poses of the planar three degrees of freedom (DoF) trajectory, which is then fed into continuous-time simultaneous localization and mapping (SLAM) to further improve the trajectory. While state-of-the-art multi-sensor fusion methods mainly focus on probabilistic methods, our approach instead adopts a deformation-based method to optimize poses. To this end, a similarity metric for curved shapes is introduced into the robotics community to fuse the estimated trajectories. Additionally, a shape-based point correspondence estimation method is applied to the multi-sensor time calibration. Experiments show that the proposed fusion method can achieve relatively better accuracy, even if the error of the trajectory before fusion is large, which demonstrates that our method can still maintain a certain degree of accuracy in an environment where typical pose estimation methods have poor performance. In addition, the proposed time-calibration method also achieves high accuracy in estimating point correspondences. KW - mapping KW - continuous-time SLAM KW - deformation-based method KW - time calibration Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-219988 SN - 1424-8220 VL - 20 IS - 23 ER - TY - RPRT A1 - Rossi, Angelo Pio A1 - Maurelli, Francesco A1 - Unnithan, Vikram A1 - Dreger, Hendrik A1 - Mathewos, Kedus A1 - Pradhan, Nayan A1 - Corbeanu, Dan-Andrei A1 - Pozzobon, Riccardo A1 - Massironi, Matteo A1 - Ferrari, Sabrina A1 - Pernechele, Claudia A1 - Paoletti, Lorenzo A1 - Simioni, Emanuele A1 - Maurizio, Pajola A1 - Santagata, Tommaso A1 - Borrmann, Dorit A1 - Nüchter, Andreas A1 - Bredenbeck, Anton A1 - Zevering, Jasper A1 - Arzberger, Fabian A1 - Reyes Mantilla, Camilo Andrés T1 - DAEDALUS - Descent And Exploration in Deep Autonomy of Lava Underground Structures BT - Open Space Innovation Platform (OSIP) Lunar Caves-System Study N2 - The DAEDALUS mission concept aims at exploring and characterising the entrance and initial part of Lunar lava tubes within a compact, tightly integrated spherical robotic device, with a complementary payload set and autonomous capabilities. 
The mission concept addresses specifically the identification and characterisation of potential resources for future ESA exploration, the local environment of the subsurface and its geologic and compositional structure. A sphere is ideally suited to protect sensors and scientific equipment in rough, uneven environments. It will house laser scanners, cameras and ancillary payloads. The sphere will be lowered into the skylight and will explore the entrance shaft, associated caverns and conduits. Lidar (light detection and ranging) systems produce 3D models with high spatial accuracy independent of lighting conditions and visible features. Hence, this will be the primary exploration toolset within the sphere. The additional payload that can be accommodated in the robotic sphere consists of camera systems with panoramic lenses and scanners such as multi-wavelength or single-photon scanners. A moving mass will trigger movements. The tether for lowering the sphere will be used for data communication and powering the equipment during the descending phase. Furthermore, the tether-sphere connector will host a WIFI access point, such that data of the conduit can be transferred to the surface relay station. During the exploration phase, the robot will be disconnected from the cable and will use wireless communication. Emergency autonomy software will ensure that, in case of loss of communication, the robot will continue the nominal mission. T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 21 KW - Lunar Caves KW - Spherical Robot KW - Lunar Exploration KW - Mapping KW - 3D Laser Scanning KW - Mond KW - Daedalus-Projekt KW - Lava Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-227911 SN - 978-3-945459-33-1 SN - 1868-7466 ER - TY - RPRT A1 - Raffeck, Simon A1 - Geißler, Stefan A1 - Hoßfeld, Tobias T1 - DBM: Decentralized Burst Mitigation for Self-Organizing LoRa Deployments T2 - Würzburg Workshop on Next-Generation Communication Networks (WueWoWas'22) N2 - This work proposes a novel approach to disperse dense transmission intervals and reduce bursty traffic patterns without the need for centralized control. Furthermore, by keeping the mechanism as close to the Long Range Wide Area Network (LoRaWAN) standard as possible, the suggested mechanism can be deployed within existing networks and can even be co-deployed with other devices. KW - Datennetz KW - LoRa Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-280809 ER - TY - JOUR A1 - Petschke, Danny A1 - Staab, Torsten E.M. T1 - DDRS4PALS: a software for the acquisition and simulation of lifetime spectra using the DRS4 evaluation board JF - SoftwareX N2 - Lifetime techniques are applied to diverse fields of study including materials sciences, semiconductor physics, biology, molecular biophysics and photochemistry. Here we present DDRS4PALS, a software for the acquisition and simulation of lifetime spectra using the DRS4 evaluation board (Paul Scherrer Institute, Switzerland) for time-resolved measurements and the digitization of detector output pulses. Artifact-afflicted pulses can be corrected or rejected prior to the lifetime calculation to enable the generation of high-quality lifetime spectra, which are crucial for a profound analysis, i.e. the decomposition of the true information. Moreover, the pulses can be streamed to an (external) hard drive during the measurement and subsequently downloaded in the offline mode without being connected to the hardware.
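A minimal sketch of the kind of timing step such digitized, offline-processable pulses enable: the timestamp of each pulse is taken where it crosses a fixed fraction of its peak amplitude, and the lifetime is the difference of the two timestamps. The fraction value and the linear interpolation are assumptions for illustration:

```python
# Minimal sketch of constant-fraction timing on two digitized detector pulses;
# the fraction value and the linear interpolation are assumptions.
import numpy as np

def cf_timing(t, v, fraction=0.25):
    """Interpolated time where pulse v(t) first crosses fraction * peak."""
    level = fraction * v.max()
    i = int(np.argmax(v >= level))      # first sample at or above the level
    t0, t1, v0, v1 = t[i - 1], t[i], v[i - 1], v[i]
    return t0 + (level - v0) * (t1 - t0) / (v1 - v0)

t = np.linspace(0, 50, 500)                 # ns
start = np.exp(-((t - 20.0) / 2.0) ** 2)    # synthetic start-detector pulse
stop = np.exp(-((t - 22.3) / 2.0) ** 2)     # synthetic stop-detector pulse
print(f"lifetime = {cf_timing(t, stop) - cf_timing(t, start):.2f} ns")
```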
This allows the generation of various lifetime spectra at different configurations from one single measurement and, hence, a meaningful comparison in terms of analyzability and quality. Parallel processing and an integrated JavaScript-based language provide convenient options to accelerate and automate time-consuming processes such as lifetime spectra simulations. KW - Lifetime spectroscopy KW - Positron annihilation spectroscopy KW - Simulation KW - Time resolved measurements Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-202276 VL - 10 ER - TY - JOUR A1 - Ali, Qasim A1 - Montenegro, Sergio T1 - Decentralized control for scalable quadcopter formations JF - International Journal of Aerospace Engineering N2 - An innovative framework has been developed for the teamwork of two quadcopter formations, each having its specified formation geometry, assigned task, and matching control scheme. Position control for quadcopters in one of the formations has been implemented through a Linear Quadratic Regulator Proportional Integral (LQR PI) control scheme based on an explicit model following scheme. Quadcopters in the other formation are controlled through an LQR PI servomechanism control scheme. These two control schemes are compared in terms of their performance and control effort. Both formations are commanded by respective ground stations through virtual leaders. Quadcopters in formations are able to track desired trajectories as well as hover at desired points for a selected time duration. In case of communication loss between the ground station and any of the quadcopters, the neighboring quadcopter provides the command data, received from the ground station, to the affected unit. The proposed control schemes have been validated through extensive simulations using MATLAB®/Simulink® that provided favorable results. KW - scalable quadcopter Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-146704 VL - 2016 ER - TY - JOUR A1 - Müller, Konstantin A1 - Leppich, Robert A1 - Geiß, Christian A1 - Borst, Vanessa A1 - Pelizari, Patrick Aravena A1 - Kounev, Samuel A1 - Taubenböck, Hannes T1 - Deep neural network regression for normalized digital surface model generation with Sentinel-2 imagery JF - IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing N2 - In recent history, normalized digital surface models (nDSMs) have been constantly gaining importance as a means to solve large-scale geographic problems. High-resolution surface models are precious, as they can provide detailed information for a specific area. However, measurements with a high resolution are time-consuming and costly. Only a few approaches exist to create high-resolution nDSMs for extensive areas. This article explores approaches to extract high-resolution nDSMs from low-resolution Sentinel-2 data, allowing us to derive large-scale models. We thereby utilize the advantages of Sentinel-2 being open access, having global coverage, and providing steady updates through a high repetition rate. Several deep learning models are trained to overcome the gap in producing high-resolution surface maps from low-resolution input data. With U-Net as a base architecture, we extend the capabilities of our model by integrating tailored multiscale encoders with differently sized kernels in the convolution as well as conformed self-attention inside the skip connection gates. Using pixelwise regression, our U-Net base models can achieve a mean height error of approximately 2 m.
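A minimal sketch of the kind of pixelwise evaluation behind such a mean height error figure; the exact metric definition used in the article may differ:

```python
# Minimal sketch: mean absolute per-pixel height difference between a
# predicted and a reference nDSM, in meters.
import numpy as np

def mean_height_error(predicted, reference):
    return np.abs(predicted - reference).mean()

rng = np.random.default_rng(0)
reference = rng.uniform(0, 30, size=(256, 256))          # synthetic heights (m)
predicted = reference + rng.normal(0, 2.5, (256, 256))   # model output (m)
print(f"mean height error: {mean_height_error(predicted, reference):.2f} m")
```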
Moreover, through our enhancements to the model architecture, we reduce the model error by more than 7%. KW - Deep learning KW - multiscale encoder KW - sentinel KW - surface model Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-349424 SN - 1939-1404 VL - 16 ER - TY - THES A1 - Nogatz, Falco T1 - Defining and Implementing Domain-Specific Languages with Prolog T1 - Definition und Implementierung domänenspezifischer Sprachen mit Prolog N2 - The landscape of today's programming languages is manifold. With the diversity of applications, the difficulty of adequately addressing and specifying the used programs increases. This often leads to newly designed and implemented domain-specific languages. They enable domain experts to express knowledge in their preferred format, resulting in more readable and concise programs. Due to its flexible and declarative syntax without reserved keywords, the logic programming language Prolog is particularly suitable for defining and embedding domain-specific languages. This thesis addresses the questions and challenges that arise when integrating domain-specific languages into Prolog. We compare the two approaches to define them either externally or internally, and provide assisting tools for each. The grammar of a formal language is usually defined in the extended Backus–Naur form. In this work, we handle this formalism as a domain-specific language in Prolog, and define term expansions that allow translating it into equivalent definite clause grammars. We present the package library(dcg4pt) for SWI-Prolog, which enriches them by an additional argument to automatically process the term's corresponding parse tree. To simplify the work with definite clause grammars, we visualise their application by a web-based tracer. The external integration of domain-specific languages requires the programmer to keep the grammar, parser, and interpreter in sync. In many cases, domain-specific languages can instead be directly embedded into Prolog by providing appropriate operator definitions. In addition, we propose syntactic extensions for Prolog to expand its expressiveness, for instance to state logic formulas with their connectives verbatim. This allows the use of all tools that were originally written for Prolog, for instance code linters and editors with syntax highlighting. We present the package library(plammar), a standard-compliant parser for Prolog source code, written in Prolog. It is able to automatically infer from example sentences the required operator definitions with their classes and precedences as well as the required Prolog language extensions. As a result, we can automatically answer the question: Is it possible to model these example sentences as valid Prolog clauses, and how? We discuss and apply the two approaches to internal and external integrations for several domain-specific languages, namely the extended Backus–Naur form, GraphQL, XPath, and a controlled natural language to represent expert rules in if-then form. The created toolchain with library(dcg4pt) and library(plammar) yields new application opportunities for static Prolog source code analysis, which we also present. N2 - Die Landschaft der heutigen Programmiersprachen ist vielfältig. Mit ihren unterschiedlichen Anwendungsbereichen steigt zugleich die Schwierigkeit, die eingesetzten Programme adäquat anzusprechen und zu spezifizieren. Immer häufiger werden hierfür domänenspezifische Sprachen entworfen und implementiert.
Sie ermöglichen Domänenexperten, Wissen in ihrem bevorzugten Format auszudrücken, was zu lesbareren Programmen führt. Durch ihre flexible und deklarative Syntax ohne vorbelegte Schlüsselwörter ist die logische Programmsprache Prolog besonders geeignet, um domänenspezifische Sprachen zu definieren und einzubetten. Diese Arbeit befasst sich mit den Fragen und Herausforderungen, die sich bei der Integration von domänenspezifischen Sprachen in Prolog ergeben. Wir vergleichen die zwei Ansätze, sie entweder extern oder intern zu definieren, und stellen jeweils Hilfsmittel zur Verfügung. Die Grammatik einer formalen Sprache wird häufig in der erweiterten Backus–Naur–Form definiert. Diesen Formalismus behandeln wir in dieser Arbeit als eine domänenspezifische Sprache in Prolog und definieren Termexpansionen, die es erlauben, ihn in äquivalente Definite Clause Grammars für Prolog zu übersetzen. Durch das Modul library(dcg4pt) werden sie um ein zusätzliches Argument erweitert, das den Syntaxbaum eines Terms automatisch erzeugt. Um die Arbeit mit Definite Clause Grammars zu erleichtern, visualisieren wir ihre Anwendung in einem webbasierten Tracer. Meist können domänenspezifische Sprachen jedoch auch mittels passender Operatordefinitionen direkt in Prolog eingebettet werden. Dies ermöglicht die Verwendung aller Werkzeuge, die ursprünglich für Prolog geschrieben wurden, z.B. zum Code-Linting und Syntax-Highlighting. In dieser Arbeit stellen wir den standardkonformen Prolog-Parser library(plammar) vor. Er ist in Prolog geschrieben und in der Lage, aus Beispielsätzen automatisch die erforderlichen Operatoren mit ihren Klassen und Präzedenzen abzuleiten. Um die Ausdruckskraft von Prolog noch zu erweitern, schlagen wir Ergänzungen zum ISO Standard vor. Sie erlauben es, weitere Sprachen direkt einzubinden, und werden ebenfalls von library(plammar) identifiziert. So ist es bspw. möglich, logische Formeln direkt mit den bekannten Symbolen für Konjunktion, Disjunktion, usw. als Prolog-Programme anzugeben. Beide Ansätze der internen und externen Integration werden für mehrere domänenspezifische Sprachen diskutiert und beispielhaft für GraphQL, XPath, die erweiterte Backus–Naur–Form sowie Expertenregeln in Wenn–Dann–Form umgesetzt. Die vorgestellten Werkzeuge um library(dcg4pt) und library(plammar) ergeben zudem neue Anwendungsmöglichkeiten auch für die statische Quellcodeanalyse von Prolog-Programmen. KW - PROLOG KW - Domänenspezifische Sprache KW - logic programming KW - knowledge representation KW - definite clause grammars Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-301872 ER - TY - JOUR A1 - Seufert, Anika A1 - Schröder, Svenja A1 - Seufert, Michael T1 - Delivering User Experience over Networks: Towards a Quality of Experience Centered Design Cycle for Improved Design of Networked Applications JF - SN Computer Science N2 - To deliver the best user experience (UX), the human-centered design cycle (HCDC) serves as a well-established guideline to application developers. However, it does not yet cover network-specific requirements, which become increasingly crucial, as most applications deliver experience over the Internet. The missing network-centric view is provided by Quality of Experience (QoE), which could team up with UX towards an improved overall experience. By considering QoE aspects during the development process, applications can become network-aware by design.
In this paper, the Quality of Experience Centered Design Cycle (QoE-CDC) is proposed, which provides guidelines on how to design applications with respect to network-specific requirements and QoE. Its practical value is showcased for popular application types and validated by outlining the design of a new smartphone application. We show that combining HCDC and QoE-CDC will result in an application design that reaches a high UX and avoids QoE degradation. KW - user experience KW - human-centered design KW - design cycle KW - application design KW - quality of experience Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-271762 SN - 2661-8907 VL - 2 IS - 6 ER - TY - JOUR A1 - Steininger, Michael A1 - Kobs, Konstantin A1 - Davidson, Padraig A1 - Krause, Anna A1 - Hotho, Andreas T1 - Density-based weighting for imbalanced regression JF - Machine Learning N2 - In many real-world settings, imbalanced data impedes the model performance of learning algorithms, like neural networks, mostly for rare cases. This is especially problematic for tasks focusing on these rare occurrences. For example, when estimating precipitation, extreme rainfall events are scarce but important considering their potential consequences. While there are numerous well-studied solutions for classification settings, most of them cannot be applied to regression easily. Of the few solutions for regression tasks, barely any have explored cost-sensitive learning, which is known to have advantages compared to sampling-based methods in classification tasks. In this work, we propose a sample weighting approach for imbalanced regression datasets called DenseWeight and a cost-sensitive learning approach for neural network regression with imbalanced data called DenseLoss based on our weighting scheme. DenseWeight weights data points according to their target value rarities through kernel density estimation (KDE). DenseLoss adjusts each data point's influence on the loss according to DenseWeight, giving rare data points more influence on model training compared to common data points. We show on multiple differently distributed datasets that DenseLoss significantly improves model performance for rare data points through its density-based weighting scheme. Additionally, we compare DenseLoss to the state-of-the-art method SMOGN, finding that our method mostly yields better performance. Our approach provides more control over model training as it enables us to actively decide on the trade-off between focusing on common or rare cases through a single hyperparameter, allowing the training of better models for rare data points. KW - supervised learning KW - imbalanced regression KW - cost-sensitive learning KW - sample weighting KW - kernel density estimation Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-269177 SN - 1573-0565 VL - 110 IS - 8 ER - TY - THES A1 - Klein, Dominik Werner T1 - Design and Evaluation of Components for Future Internet Architectures T1 - Entwurf und Bewertung von Komponenten für zukünftige Internet Architekturen N2 - Die derzeitige Internetarchitektur wurde nicht in einem geplanten Prozess konzipiert und entwickelt, sondern hat vielmehr eine evolutionsartige Entwicklung hinter sich. Auslöser für die jeweiligen Evolutionsschritte waren dabei meist aufstrebende Anwendungen, welche neue Anforderungen an die zugrundeliegende Netzarchitektur gestellt haben.
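A minimal sketch of the density-based weighting idea from the DenseWeight record above; the Gaussian KDE, the min-max normalization, and the hyperparameter values are assumptions for illustration. In a cost-sensitive setup like DenseLoss, such weights would scale the per-sample loss terms during training:

```python
# Minimal sketch of density-based sample weighting for imbalanced regression:
# rare target values (low estimated density) receive larger training weights.
# KDE bandwidth, alpha, and the epsilon floor are assumed values.
import numpy as np
from scipy.stats import gaussian_kde

def density_based_weights(y, alpha=1.0, eps=1e-6):
    dens = gaussian_kde(y)(y)                               # estimate p(y)
    dens = (dens - dens.min()) / (dens.max() - dens.min())  # normalize to [0, 1]
    return np.maximum(1.0 - alpha * dens, eps)              # rare -> large weight

y = np.concatenate([np.random.normal(5, 1, 1000),   # common target values
                    np.random.normal(40, 1, 20)])    # rare extremes
w = density_based_weights(y)
print("mean weight, common:", w[:1000].mean(), "rare:", w[1000:].mean())
```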
Um diese Anforderungen zu erfüllen, wurden häufig neuartige Dienste oder Protokolle spezifiziert und in die bestehende Architektur integriert. Dieser Prozess ist jedoch meist mit hohem Aufwand verbunden und daher sehr träge, was die Entwicklung und Verbreitung innovativer Dienste beeinträchtigt. Derzeitig diskutierte Konzepte wie Software-Defined Networking (SDN) oder Netzvirtualisierung (NV) werden als eine Möglichkeit angesehen, die Altlasten der bestehenden Internetarchitektur zu lösen. Beiden Konzepten gemein ist die Idee, logische Netze über dem physikalischen Substrat zu betreiben. Diese logischen Netze sind hochdynamisch und können so flexibel an die Anforderungen der jeweiligen Anwendungen angepasst werden. Insbesondere erlaubt das Konzept der Virtualisierung intelligentere Netzknoten, was innovative neue Anwendungsfälle ermöglicht. Ein häufig in diesem Zusammenhang diskutierter Anwendungsfall ist die Mobilität sowohl von Endgeräten als auch von Diensten an sich. Die Mobilität der Dienste wird hierbei ausgenutzt, um die Zugriffsverzögerung oder die belegten Ressourcen im Netz zu reduzieren, indem die Dienste zum Beispiel in für den Nutzer geographisch nahe Datenzentren migriert werden. Neben den reinen Mechanismen bezüglich Dienst- und Endgerätemobilität sind in diesem Zusammenhang auch geeignete Überwachungslösungen relevant, welche die vom Nutzer wahrgenommene Dienstgüte bewerten können. Diese Lösungen liefern wichtige Entscheidungshilfen für die Migration oder überwachen mögliche Effekte der Migration auf die erfahrene Dienstgüte beim Nutzer. Im Falle von Video Streaming ermöglicht ein solcher Anwendungsfall die flexible Anpassung der Streaming Topologie für mobile Nutzer, um so die Videoqualität unabhängig vom Zugangsnetz aufrechterhalten zu können. Im Rahmen dieser Doktorarbeit wird der beschriebene Anwendungsfall am Beispiel einer Video Streaming Anwendung näher analysiert und auftretende Herausforderungen werden diskutiert. Des Weiteren werden Lösungsansätze vorgestellt und bezüglich ihrer Effizienz ausgewertet. Im Detail beschäftigt sich die Arbeit mit der Leistungsanalyse von Mechanismen für die Dienstmobilität und entwickelt eine Architektur zur Optimierung der Dienstmobilität. Im Bereich Endgerätemobilität werden Verbesserungen entwickelt, welche die Latenz zwischen Endgerät und Dienst reduzieren oder die Konnektivität unabhängig vom Zugangsnetz gewährleisten. Im letzten Teilbereich wird eine Lösung zur Überwachung der Videoqualität im Netz entwickelt und bezüglich ihrer Genauigkeit analysiert. N2 - Today’s Internet architecture was not designed from scratch but was driven by new services that emerged during its development. Hence, it is often described as patchwork where additional patches are applied in case new services require modifications to the existing architecture. This process however is rather slow and hinders the development of innovative network services with certain architecture or network requirements. Currently discussed technologies like Software-Defined Networking (SDN) or Network Virtualization (NV) are seen as key enabling technologies to overcome this rigid best effort legacy of the Internet. Both technologies offer the possibility to create virtual networks that accommodate the specific needs of certain services. These logical networks are operated on top of a physical substrate and facilitate flexible network resource allocation as physical resources can be added and removed depending on the current network and load situation. 
In addition, the clear separation and isolation of networks foster the development of application-aware networks that fulfill the special requirements of emerging applications. A prominent use case that benefits from these extended capabilities of the network is denoted as service component mobility. Services hosted on Virtual Machines (VMs) follow their consuming mobile endpoints, so that access latency as well as consumed network resources are reduced. Especially for applications like video streaming, which consume a large fraction of the available resources, this is an important means to relieve the resource constraints and eventually provide better service quality. Service and endpoint mobility both allow an adaptation of the used paths between an offered service, i.e., video streaming, and the consuming users in case the service quality drops due to network problems. To make evidence-based adaptations in case of quality drops, a scalable monitoring component is required that is able to monitor the service quality for video streaming applications with reliable accuracy. This monograph details challenges that arise when deploying a certain service, i.e., video streaming, in a future virtualized network architecture and discusses possible solutions. In particular, this work evaluates the performance of mechanisms enabling service mobility and presents an optimized architecture for service mobility. Concerning endpoint mobility, improvements are developed that reduce the latency between endpoints and consumed services and ensure connectivity regardless of the used mobile access network. In the last part, a network-based video quality monitoring solution is developed and its accuracy is evaluated. T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 01/14 KW - Leistungsbewertung KW - Netzwerkmanagement KW - Virtuelles Netzwerk KW - Mobiles Internet KW - Service Mobility KW - Endpoint Mobility KW - Video Quality Monitoring Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-93134 SN - 1432-8801 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Design and Implementation of a Model-driven XML-based Integrated System Architecture for Assisting Analysis, Understanding, and Retention of Religious Texts: The Case of The Quran N2 - Learning a book in general involves reading it, underlining important words, adding comments, summarizing some passages, and marking up some text or concepts. Once deeper understanding is achieved, one would like to organize and manage her/his knowledge in such a way that it could be easily remembered and efficiently transmitted to others. This paper discusses modeling religious texts using semantic XML markup based on frame-based knowledge representation, with the purpose of assisting understanding, retention, and sharing of the knowledge they contain. In this study, books organized in terms of chapters made up of verses are considered as the source of knowledge to model. Some metadata representing the multiple perspectives of knowledge modeling are assigned to each chapter and verse. Chapters and verses with their metadata form a meta-model, which is represented using frames, and published on a web mashup. An XML-based annotation and visualization system equipped with user interfaces for creating static and dynamic metadata, annotating chapters' contents according to user-selected semantics, and templates for publishing generated knowledge on the Internet, has been developed.
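A minimal sketch of the kind of chapter/verse markup with metadata that the record above describes; all element and attribute names are hypothetical, not the system's actual schema:

```python
# Minimal sketch: a chapter of verses with descriptive metadata as semantic
# XML. All element and attribute names are hypothetical.
import xml.etree.ElementTree as ET

chapter = ET.Element("chapter", number="1", theme="opening")
verse = ET.SubElement(chapter, "verse", number="2")
verse.text = "..."                                     # verse text goes here
meta = ET.SubElement(verse, "metadata")
ET.SubElement(meta, "perspective", type="linguistic").text = "praise formula"
ET.SubElement(meta, "annotation", author="scholar1").text = "user comment"

print(ET.tostring(chapter, encoding="unicode"))
```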
The system has been applied to the Quran, and the result obtained shows that multiple perspectives of information modeling can be successfully applied to religious texts, in order to support analysis, understanding, and retention of the texts. KW - Wissensrepräsentation KW - Wissensmanagement KW - Content Management KW - XML KW - Koran KW - Knowledge representation KW - Meta-model KW - Frames KW - XML model KW - Knowledge Management KW - Content Management KW - Quran Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-65737 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Design and Implementation of Architectures for Interactive Textual Documents Collation Systems N2 - One of the main purposes of textual documents collation is to identify a base text or the closest witness to the base text, by analyzing and interpreting differences, also known as types of changes, that might exist between those documents. Based on this fact, it is reasonable to argue that explicit identification of types of changes such as deletions, additions, transpositions, and mutations should be part of the collation process. The identification could be carried out by an interpretation module after alignment has taken place. Unfortunately, existing collation software such as CollateX and Juxta's collation engine do not have interpretation modules. In fact, they implement the Gothenburg model [1] for the collation process, which does not include an interpretation unit. Currently, both CollateX and Juxta's collation engine do not distinguish in their critical apparatus between the types of changes, and do not offer statistics about those changes. This paper presents a model for both integrated and distributed collation processes that improves the Gothenburg model. The model introduces an interpretation component for computing and distinguishing between the types of changes that documents could have undergone. Moreover, two architectures implementing the model in order to solve the problem of interactive collation are discussed as well. Each architecture uses the CollateX library, and provides on the one hand preprocessing functions for transforming input documents into the CollateX input format, and on the other hand a post-processing module for enabling interactive collation. Finally, simple algorithms for distinguishing between types of changes, and for linking collated source documents with the collation results, are also introduced. KW - Softwarearchitektur KW - Textvergleich KW - service based software architecture KW - service brokerage KW - interactive collation of textual variants KW - Gothenburg model of collation process Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-56601 ER - TY - JOUR A1 - Wienrich, Carolin A1 - Carolus, Astrid T1 - Development of an Instrument to Measure Conceptualizations and Competencies About Conversational Agents on the Example of Smart Speakers JF - Frontiers in Computer Science N2 - The concept of digital literacy has been introduced as a new cultural technique, which is regarded as essential for successful participation in a (future) digitized world. Regarding the increasing importance of AI, literacy concepts need to be extended to account for AI-related specifics. The easy handling of the systems results in increased usage, contrasting limited conceptualizations (e.g., imagination of future importance) and competencies (e.g., knowledge about functional principles).
In reference to voice-based conversational agents as a concrete application of AI, the present paper aims at the development of an instrument to assess the conceptualizations and competencies about conversational agents. In a first step, a theoretical framework of "AI literacy" is transferred to the context of conversational agent literacy. Second, the "conversational agent literacy scale" (short CALS) is developed, constituting the first attempt to measure interindividual differences in the "(il)literate" usage of conversational agents. 29 items were derived, which were answered by 170 participants. An exploratory factor analysis identified five factors leading to five subscales to assess CAL: storage and transfer of the smart speaker's data input; the smart speaker's functional principles; the smart speaker's intelligent functions and learning abilities; the smart speaker's reach and potential; and the smart speaker's technological (surrounding) infrastructure. Preliminary insights into the construct validity and reliability of CALS showed satisfying results. Third, using the newly developed instrument, a student sample's CAL was assessed, revealing intermediate values. Remarkably, owning a smart speaker did not lead to higher CAL scores, confirming our basic assumption that usage of systems does not guarantee enlightened conceptualizations and competencies. In sum, the paper contributes first insights into the operationalization and understanding of CAL as a specific subdomain of AI-related competencies. KW - artificial intelligence literacy KW - artificial intelligence education KW - voice-based artificial intelligence KW - conversational agents KW - measurement Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-260198 VL - 3 ER - TY - JOUR A1 - Krueger, Beate A1 - Friedrich, Torben A1 - Förster, Frank A1 - Bernhardt, Jörg A1 - Gross, Roy A1 - Dandekar, Thomas T1 - Different evolutionary modifications as a guide to rewire two-component systems JF - Bioinformatics and Biology Insights N2 - Two-component systems (TCS) are short signalling pathways generally occurring in prokaryotes. They frequently regulate prokaryotic stimulus responses and thus are also of interest for engineering in biotechnology and synthetic biology. The aim of this study is to better understand and describe the rewiring of TCS while investigating different evolutionary scenarios. Based on large-scale screens of TCS in different organisms, this study gives detailed data, concrete alignments, and structure analysis on three general modification scenarios, where TCS were rewired for new responses and functions: (i) exchanges in the sequence within single TCS domains; (ii) exchange of whole TCS domains; (iii) addition of new components modulating TCS function. As a result, the replacement of stimulus and promoter cassettes to rewire TCS is well defined by exploiting the alignments given here. The diverged TCS examples are non-trivial and the design is challenging. Designed connector proteins may also be useful to modify TCS in selected cases. KW - histidine kinase KW - connector KW - Mycoplasma KW - engineering KW - promoter KW - sensor KW - response regulator KW - synthetic biology KW - sequence alignment Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-123647 N1 - This is an open access article. Unrestricted non-commercial use is permitted provided the original work is properly cited. VL - 6 ER - TY - JOUR A1 - Petschke, Danny A1 - Staab, Torsten E.M.
TY - JOUR
A1 - Krueger, Beate
A1 - Friedrich, Torben
A1 - Förster, Frank
A1 - Bernhardt, Jörg
A1 - Gross, Roy
A1 - Dandekar, Thomas
T1 - Different evolutionary modifications as a guide to rewire two-component systems
JF - Bioinformatics and Biology Insights
N2 - Two-component systems (TCS) are short signalling pathways generally occurring in prokaryotes. They frequently regulate prokaryotic stimulus responses and are thus also of interest for engineering in biotechnology and synthetic biology. The aim of this study is to better understand and describe the rewiring of TCS by investigating different evolutionary scenarios. Based on large-scale screens of TCS in different organisms, this study provides detailed data, concrete alignments, and structure analyses for three general modification scenarios in which TCS were rewired for new responses and functions: (i) exchanges in the sequence within single TCS domains; (ii) exchange of whole TCS domains; (iii) addition of new components modulating TCS function. As a result, the replacement of stimulus and promoter cassettes to rewire TCS is well defined when exploiting the alignments given here. The diverged TCS examples are non-trivial, and the design is challenging. Designed connector proteins may also be useful for modifying TCS in selected cases.
KW - histidine kinase
KW - connector
KW - Mycoplasma
KW - engineering
KW - promoter
KW - sensor
KW - response regulator
KW - synthetic biology
KW - sequence alignment
Y1 - 2012
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-123647
N1 - This is an open access article. Unrestricted non-commercial use is permitted provided the original work is properly cited.
VL - 6
ER -
TY - JOUR
A1 - Petschke, Danny
A1 - Staab, Torsten E.M.
T1 - DLTPulseGenerator: a library for the simulation of lifetime spectra based on detector-output pulses
JF - SoftwareX
N2 - The quantitative analysis of lifetime spectra, which is relevant in both the life and materials sciences, is an ill-posed inverse problem and hence places the most stringent requirements on the hardware specifications and the analysis algorithms. Here we present DLTPulseGenerator, a library written in native C++11, which simulates lifetime spectra according to the measurement setup. The simulation is based on pairs of non-TTL detector output pulses. Those pulses require constant fraction discrimination (CFD) to determine the exact timing signal and, thus, to calculate the time difference, i.e., the lifetime (a simplified timing sketch follows this record). To verify the functionality, simulation results were compared to experimentally obtained data using Positron Annihilation Lifetime Spectroscopy (PALS) on pure tin.
KW - lifetime spectroscopy
KW - signal processing
KW - pulse simulation
Y1 - 2018
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-176883
VL - 7
ER -
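The constant fraction idea referenced in the preceding record can be demonstrated in a few lines. The sketch below is a simplification, assuming ideal Gaussian pulses and a plain fraction-of-amplitude level crossing rather than the full delay-and-attenuate discriminator circuit: because each pulse is timed at a fixed fraction of its own amplitude, the derived time difference is independent of pulse height.

```python
# Simplified constant-fraction timing on a simulated start/stop pulse pair;
# Gaussian shapes stand in for real (or DLTPulseGenerator-simulated) pulses.
import numpy as np

def cf_timing(t, pulse, fraction=0.25):
    """Time at which the rising edge crosses `fraction` of the pulse amplitude."""
    level = fraction * pulse.max()
    idx = np.argmax(pulse >= level)          # first sample at/above the level
    t0, t1 = t[idx - 1], t[idx]
    v0, v1 = pulse[idx - 1], pulse[idx]
    return t0 + (level - v0) * (t1 - t0) / (v1 - v0)  # linear interpolation

t = np.linspace(0.0, 20.0, 2000)                          # time axis in ns
start_pulse = np.exp(-0.5 * ((t - 5.0) / 0.8) ** 2)       # start detector
stop_pulse = 0.7 * np.exp(-0.5 * ((t - 5.4) / 0.8) ** 2)  # stop, delayed 0.4 ns

lifetime = cf_timing(t, stop_pulse) - cf_timing(t, start_pulse)
print(f"estimated lifetime: {lifetime:.3f} ns")  # ~0.400 despite unequal heights
```

Histogramming many such time differences yields the lifetime spectrum that the library simulates.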
TY - THES
A1 - Houshiar, Hamidreza
T1 - Documentation and mapping with 3D point cloud processing
T1 - Dokumentation und Kartierung mittels 3D-Punktwolkenverarbeitung
N2 - 3D point clouds are a de facto standard for 3D documentation and modelling. Advances in laser scanning technology broaden the usability of and access to 3D measurement systems. 3D point clouds are used in many disciplines such as robotics, 3D modelling, archeology and surveying. Scanners are able to acquire up to a million points per second, representing the captured environment as a dense point cloud with a very high degree of detail. The combination of laser scanning technology with photography adds color information to the point clouds, so that the environment is represented more realistically. Full 3D models of an environment, without any occlusions, require multiple scans, and merging the resulting point clouds is a challenging process. This thesis presents methods for point cloud registration based on panorama images generated from the scans. Representing point clouds as images makes 2D image processing methods applicable to 3D point clouds. Several projection methods for the generation of panorama maps of point clouds are presented in this thesis (the simplest one is sketched after this record). Additionally, methods for point cloud reduction and compression based on the panorama maps are proposed. Due to the large amounts of data generated by 3D measurement systems, these methods are necessary for improving point cloud processing, transmission, and archiving. This thesis also introduces point cloud processing methods as a novel framework for the digitisation of archeological excavations. The framework replaces the conventional documentation methods for excavation sites. It employs point clouds for generating the digital documentation of an excavation with the help of an archeologist on-site. The 3D point cloud is used not only for data representation but also for analysis and knowledge generation. Finally, this thesis presents an autonomous indoor mobile mapping system, focusing on the sensor placement planning method. Capturing a complete environment requires several scans. The sensor placement planning method determines the minimum number of scans required to digitise large environments. Combining this method with a navigation system on a mobile robot platform enables the platform to acquire data fully autonomously. This thesis introduces a novel hole detection method for point clouds to detect obscured parts of a captured environment. The sensor placement planning method selects as the next scan position the one with the largest coverage of the obscured environment, which reduces the required number of scans. The navigation system on the robot platform consists of path planning, path following, and obstacle avoidance. This guarantees the safe navigation of the mobile robot platform between the scan positions. The sensor placement planning method is designed as a stand-alone process that can be used with a mobile robot platform for the autonomous mapping of an environment or as an assistance tool for the surveyor on scanning projects.
N2 - 3D-Punktwolken sind der de facto Standard bei der Dokumentation und Modellierung in 3D. Die Fortschritte in der Laserscanningtechnologie erweitern die Verwendbarkeit und die Verfügbarkeit von 3D-Messsystemen. 3D-Punktwolken werden in vielen Disziplinen verwendet, wie z.B. in der Robotik, 3D-Modellierung, Archäologie und Vermessung. Scanner sind in der Lage, bis zu einer Million Punkte pro Sekunde zu erfassen, um die Umgebung mit einer dichten Punktwolke abzubilden und mit einem hohen Detaillierungsgrad darzustellen. Die Kombination der Laserscanningtechnologie mit Methoden der Photogrammetrie fügt den Punktwolken Farbinformationen hinzu. Somit wird die Umgebung realistischer dargestellt. Vollständige 3D-Modelle der Umgebung ohne Verschattungen benötigen mehrere Scans. Punktwolken zusammenzufügen ist eine anspruchsvolle Aufgabe. Diese Arbeit stellt Methoden zur Punktwolkenregistrierung basierend auf aus den Scans erzeugten Panoramabildern vor. Die Darstellung einer Punktwolke als Bild bringt Methoden der 2D-Bildverarbeitung an 3D-Punktwolken heran. Der Autor stellt mehrere Projektionsmethoden zur Erstellung von Panoramabildern aus 3D-Punktwolken vor. Außerdem werden Methoden zur Punktwolkenreduzierung und -kompression basierend auf diesen Panoramabildern vorgeschlagen. Aufgrund der großen Datenmenge, die von 3D-Messsystemen erzeugt wird, sind diese Methoden notwendig, um die Punktwolkenverarbeitung, -übertragung und -archivierung zu verbessern. Diese Arbeit präsentiert Methoden der Punktwolkenverarbeitung als neuartige Ablaufstruktur für die Digitalisierung von archäologischen Ausgrabungen. Durch diesen Ablauf werden konventionelle Methoden auf Ausgrabungsstätten ersetzt. Er verwendet Punktwolken für die Erzeugung der digitalen Dokumentation einer Ausgrabung mithilfe eines Archäologen vor Ort. Die 3D-Punktwolke kommt nicht nur für die Anzeige der Daten, sondern auch für die Analyse und Wissensgenerierung zum Einsatz. Schließlich stellt diese Arbeit ein autonomes Indoor-Mobile-Mapping-System mit Fokus auf der Positionsplanung des Messgeräts vor. Die Positionsplanung bestimmt die minimal benötigte Anzahl an Scans, um großflächige Umgebungen zu digitalisieren. Kombiniert mit einem Navigationssystem auf einer mobilen Roboterplattform ermöglicht diese Methode die vollautonome Datenerfassung. Diese Arbeit stellt eine neuartige Erkennungsmethode für Lücken in Punktwolken vor, um verdeckte Bereiche der erfassten Umgebung zu bestimmen. Die Positionsplanung bestimmt als nächste Scanposition diejenige mit der größten Abdeckung der verdeckten Umgebung. Das Navigationssystem des Roboters besteht aus der Pfadplanung, der Pfadverfolgung und einer Hindernisvermeidung, um eine sichere Fortbewegung der mobilen Roboterplattform zwischen den Scanpositionen zu garantieren. Die Positionsplanungsmethode wurde als eigenständiges Verfahren entworfen, das auf einer mobilen Roboterplattform zur autonomen Kartierung einer Umgebung zum Einsatz kommen oder einem Vermesser bei einem Scanprojekt als Unterstützung dienen kann.
T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 12
KW - 3D Punktwolke
KW - Robotik
KW - Registrierung
KW - 3D Pointcloud
KW - Feature Based Registration
KW - Compression
KW - Computer Vision
KW - Robotics
KW - Panorama Images
Y1 - 2017
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-144493
SN - 978-3-945459-14-0
ER -
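To make the panorama idea in the record above concrete, here is a minimal sketch of the simplest such mapping, an equirectangular projection from scanner-centered Cartesian coordinates to a 2D range image. The function name and image resolution are illustrative only; the thesis itself covers several further projection models.

```python
# Equirectangular projection of a 3D point cloud into a panorama range
# image; a minimal sketch, not the thesis's implementation.
import numpy as np

def equirectangular_project(points, width=3600, height=1800):
    """Map Cartesian points (N x 3, scanner at the origin) to pixel coordinates."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arctan2(y, x)   # azimuth in [-pi, pi]
    phi = np.arcsin(z / r)     # elevation in [-pi/2, pi/2]
    col = ((theta + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    row = ((np.pi / 2 - phi) / np.pi * (height - 1)).astype(int)
    return row, col, r         # the range r becomes the pixel value

# Placeholder cloud; a real scan would supply the points.
points = np.random.default_rng(1).normal(size=(1000, 3))
row, col, r = equirectangular_project(points)
panorama = np.zeros((1800, 3600), dtype=float)
panorama[row, col] = r  # nearest-sample range image; collisions keep the last point
```

Once a scan is an image like this, standard 2D feature matching can drive the registration, reduction, and compression steps the abstract describes.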
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Doing Webservices Composition by Content-based Mashup: Example of a Web-based Simulator for Itinerary Planning
N2 - Web services composition is traditionally carried out using composition technologies such as the Business Process Execution Language (BPEL) [1] and the Web Service Choreography Interface (WSCI) [2]. These technologies involve the process of web service discovery, invocation, and composition. However, they are neither easy nor flexible enough, because they are mainly developer-centric. Moreover, the majority of websites have not yet entered the world of web services, although they have very important and useful information to offer. Is it because they have not understood the usefulness of web services, or is it because of the costs? Whatever the answers to these questions might be, time and money are definitely required in order to create and offer web services. To avoid these expenditures, wrappers [7] that automatically generate web services from websites would be a cheaper and easier solution. Mashups offer a different way of doing web services composition. In a web environment, a Mashup is a web application that brings together data from several sources using web services, APIs, wrappers, and so on, in order to create an entirely new application that was not available before. This paper first presents an overview of Mashups and the process of web service invocation and composition based on Mashups, and then describes an example of a web-based simulator for a navigation system in Germany (a minimal sketch of the mashup idea follows this record).
KW - Mashup
KW - Wrapper
KW - Mashup
KW - Webservice Composition
KW - Wrappers
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-50036
ER -
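As a closing illustration of the content-based mashup idea from the last record: two independent web sources are queried and their results merged into an itinerary answer that neither source offers alone. The endpoints, parameter names, and response fields below are hypothetical placeholders, not the paper's simulator or any real API.

```python
# Mashup sketch: compose two (hypothetical) web sources into one new
# itinerary-planning result. Endpoints and fields are placeholders.
import requests

def plan_itinerary(origin, destination):
    # Source 1: a hypothetical route service wrapped as a web service.
    route = requests.get(
        "https://example.org/route-api",
        params={"from": origin, "to": destination},
        timeout=10,
    ).json()
    # Source 2: a hypothetical weather service for the destination.
    weather = requests.get(
        "https://example.org/weather-api",
        params={"city": destination},
        timeout=10,
    ).json()
    # The mashup step: merge both answers into a new, combined application.
    return {
        "route": route.get("summary"),
        "duration_min": route.get("duration_min"),
        "weather_at_arrival": weather.get("forecast"),
    }

print(plan_itinerary("Würzburg", "Berlin"))
```

The same composition could sit behind a wrapper, so that sites without web services can still feed the mashup, which is the cost argument the abstract makes.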