004 Data Processing; Computer Science
Neural networks have to capture mathematical relationships in order to learn various tasks. They approximate these relations only implicitly and therefore often do not generalize well. The recently proposed Neural Arithmetic Logic Unit (NALU) is a novel neural architecture that can represent mathematical relationships explicitly in the units of the network and thereby learn operations such as summation, subtraction, or multiplication. Although NALUs have been shown to perform well on various downstream tasks, an in-depth analysis reveals practical shortcomings by design, such as the inability to multiply or divide negative input values and training stability issues for deeper networks. We address these issues and propose an improved model architecture. We evaluate our model empirically in various settings, from learning basic arithmetic operations to more complex functions. Our experiments indicate that our model solves the stability issues and outperforms the original NALU model in terms of arithmetic precision and convergence.
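To make the architecture concrete, here is a minimal NumPy sketch of the forward pass of a single NALU layer as formulated in the original proposal; the weight matrices are random placeholders rather than trained parameters. Note the absolute value in the log-space path: it is precisely where multiplying or dividing negative inputs breaks down, as criticized above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nalu_forward(x, W_hat, M_hat, G, eps=1e-7):
    """One NALU layer: a gated mix of an additive and a multiplicative path."""
    W = np.tanh(W_hat) * sigmoid(M_hat)       # weights biased toward {-1, 0, 1}
    a = x @ W                                 # additive path: sums/subtractions
    # multiplicative path in log space; |x| is where negative inputs break down
    m = np.exp(np.log(np.abs(x) + eps) @ W)
    g = sigmoid(x @ G)                        # learned gate between the paths
    return g * a + (1.0 - g) * m

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                   # batch of 4 inputs, 3 features
params = [rng.normal(size=(3, 2)) for _ in range(3)]  # W_hat, M_hat, G
print(nalu_forward(x, *params))
```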
The rating of perceived exertion (RPE) is a subjective load marker and may assist in individualizing training prescription, particularly by adjusting running intensity. Unfortunately, RPE has shortcomings (e.g., underreporting) and cannot be monitored continuously and automatically throughout a training session. In this pilot study, we aimed to predict two classes of RPE (RPE ≤ 15, “somewhat hard to hard” on Borg’s 6–20 scale, vs. RPE > 15) in runners by analyzing data recorded by a commercially available smartwatch with machine learning algorithms. Twelve trained and untrained runners performed long continuous runs at a constant, self-selected pace to volitional exhaustion. Untrained runners reported their RPE every kilometer, whereas trained runners reported every five kilometers. The kinetics of heart rate, step cadence, and running velocity were recorded continuously (1 Hz) with a commercially available smartwatch (Polar V800). We trained different machine learning algorithms to estimate the two classes of RPE based on the time-series sensor data derived from the smartwatch. Predictions were analyzed in different settings: accuracy overall and per runner type, i.e., accuracy for trained and untrained runners independently. We achieved top accuracies of 84.8% for the whole dataset, 81.8% for the trained runners, and 86.1% for the untrained runners. We predict two classes of RPE with high accuracy using machine learning and smartwatch data. This approach might aid in individualizing training prescriptions.
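As an illustration of the kind of pipeline described above, here is a hedged sketch: windowed summary features from 1 Hz smartwatch series (heart rate, cadence, velocity) feed a binary classifier for RPE ≤ 15 vs. RPE > 15. The data is synthetic, and both the feature set and the random forest model are assumptions for illustration, not the study's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_windows, win = 300, 60                      # 300 one-minute windows at 1 Hz

def features(hr, cad, vel):
    # summary statistics per window: one plausible featurization
    return np.column_stack([hr.mean(1), hr.std(1), cad.mean(1), vel.mean(1)])

hr  = rng.normal(150, 15, (n_windows, win))   # heart rate [bpm]
cad = rng.normal(170, 10, (n_windows, win))   # cadence [steps/min]
vel = rng.normal(3.2, 0.4, (n_windows, win))  # velocity [m/s]
y = (hr.mean(1) + rng.normal(0, 5, n_windows) > 155).astype(int)  # toy labels

X = features(hr, cad, vel)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```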
The DAEDALUS mission concept aims at exploring and characterising the entrance and initial part of lunar lava tubes with a compact, tightly integrated spherical robotic device carrying a complementary payload set and autonomous capabilities.
The mission concept addresses specifically the identification and characterisation of potential resources for future ESA exploration, the local environment of the subsurface and its geologic and compositional structure.
A sphere is ideally suited to protect sensors and scientific equipment in rough, uneven environments.
It will house laser scanners, cameras and ancillary payloads.
The sphere will be lowered into the skylight and will explore the entrance shaft, associated caverns and conduits. Lidar (light detection and ranging) systems produce 3D models with high spatial accuracy independent of lighting conditions and visible features.
Hence this will be the primary exploration toolset within the sphere.
The additional payload that can be accommodated in the robotic sphere consists of camera systems with panoramic lenses and scanners such as multi-wavelength or single-photon scanners.
A moving mass will trigger the sphere's movements.
The tether for lowering the sphere will be used for data communication and for powering the equipment during the descent phase.
Furthermore, the tether–sphere connector will host a Wi-Fi access point, such that data from the conduit can be transferred to the surface relay station. During the exploration phase, the robot will be disconnected from the cable and will use wireless communication.
Emergency autonomy software will ensure that in case of loss of communication, the robot will continue the nominal mission.
Constraining graph layouts - that is, restricting the placement of vertices and the routing of edges to obey certain constraints - is common practice in graph drawing.
In this book, we discuss algorithmic results on two different restriction types:
placing vertices on the outer face and on the integer grid.
For the first type, we look into the outer k-planar and outer k-quasi-planar graphs and give a linear-time algorithm to recognize full and closed outer k-planar graphs via Monadic Second-order Logic.
For the second type, we consider the problem of transferring a given planar drawing onto the integer grid while preserving the original drawing's topology;
we also generalize a variant of Cauchy's rigidity theorem for orthogonal polyhedra of genus 0 to those of arbitrary genus.
In this article, we present approaches to interactive simulations of biohybrid systems. These simulations comprise two major computational components: (1) agent-based developmental models that retrace organismal growth and the unfolding of technical scaffoldings, and (2) interfaces to explore these models interactively. Simulations of biohybrid systems allow us to fast-forward and experience their evolution over time based on our design decisions involving the choice, configuration, and initial states of the deployed biological and robotic actors as well as their interplay with the environment. We briefly introduce the concept of swarm grammars, an agent-based extension of L-systems for retracing growth processes and structural artifacts. Next, we review an early augmented reality prototype for designing and projecting biohybrid system simulations into real space. In addition to models that retrace plant behaviors, we specify swarm grammar agents to braid structures in a self-organizing manner. Based on this model, both robotic and plant-driven braiding processes can be experienced and explored in virtual worlds. We present a corresponding user interface for use in virtual reality. As the presented interactive models concern rather diverse description levels, we only ensured their principal capacity for interaction and did not analyze their efficiency beyond prototypic operation. We conclude this article with an outlook on future work on melding reality and virtuality to drive the design and deployment of biohybrid systems.
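To make the L-system basis of swarm grammars tangible, here is a minimal string-rewriting sketch; the rules are toy placeholders, and an actual swarm grammar would additionally attach agent state such as position and heading to the symbols.

```python
# Minimal L-system rewriter: production rules repeatedly rewrite a string
# of structure symbols, modeling stepwise growth.
def rewrite(axiom: str, rules: dict, steps: int) -> str:
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# F: grow a segment, B: branch point (toy semantics).
rules = {"F": "FB", "B": "F"}
print(rewrite("F", rules, 5))   # -> growth pattern after 5 derivation steps
```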
Two studies are reported that investigate how readily accessible and applicable ten force-dynamic categories are to novices describing short episodes of human-technology interaction (Study 1), and that establish a measure of inter-coder reliability when re-classifying these episodes into force-dynamic categories (Study 2). The results of the first study show that people can easily and confidently relate their experiences with technology to the definitions of force-dynamic events (e.g. “The driver released the handbrake” as an example of restraint removal). The results of the second study show moderate agreement between four expert coders across all ten force-dynamic categories (Cohen’s kappa = .59) when re-classifying these episodes. Agreement values for single force-dynamic categories ranged between ‘fair’ and ‘almost perfect’, i.e. between kappa = .30 and .95. Agreement with the originally intended classifications of Study 1 was higher than the pure inter-coder reliabilities. Single coders achieved an average kappa of .71, indicating substantial agreement. Using more than one coder increased kappas to almost perfect: up to .87 for four coders. A qualitative analysis of the predicted versus the observed number of category confusions revealed that about half of the category disagreement could be predicted from strong overlaps in the definitions of force-dynamic categories. From the quantitative and qualitative results, guidelines are derived for training coders better in order to increase inter-coder reliability.
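As a worked example of the agreement statistic reported above, the following self-contained snippet computes Cohen's kappa, which corrects the observed agreement p_o by the chance agreement p_e via kappa = (p_o - p_e) / (1 - p_e); the two coders' labels are invented.

```python
from collections import Counter

def cohen_kappa(a, b):
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

coder1 = ["removal", "removal", "blockage", "support", "blockage"]
coder2 = ["removal", "blockage", "blockage", "support", "support"]
print(round(cohen_kappa(coder1, coder2), 2))   # -> 0.41
```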
Failure prediction is an important aspect of self-aware computing systems. Therefore, a multitude of different approaches has been proposed in the literature over the past few years. In this work, we propose a taxonomy for organizing works focusing on the prediction of Service Level Objective (SLO) failures. Our taxonomy classifies related work along the dimensions of the prediction target (e.g., anomaly detection, performance prediction, or failure prediction), the time horizon (e.g., detection or prediction, online or offline application), and the applied modeling type (e.g., time series forecasting, machine learning, or queueing theory). The classification is derived based on a systematic mapping of relevant papers in the area. Additionally, we give an overview of different techniques in each sub-group and address remaining challenges in order to guide future research.
Unmanned aerial vehicles are becoming more popular every year, but without regulation of their increasing numbers, the air space could become chaotic and uncontrollable. In this work, a framework is proposed that combines self-aware computing with multirotor formations to address this problem. The self-awareness is envisioned to improve the dynamic behavior of multirotors. The implemented formation scheme is called platooning; it arranges vehicles in a string behind the lead vehicle and is proposed to bring order into chaotic air space. Since multirotors define a general category of unmanned aerial vehicles, the focus of this thesis is on quadcopters, i.e., platforms with four rotors. A modification of the LRA-M self-awareness loop is proposed and named Platooning Awareness. The implemented framework offers two flight modes, enabling waypoint following and allowing the self-awareness module to find a path to a goal position through scenarios in which obstacles block the way. The evaluation of this work shows that the proposed framework is able to use self-awareness to learn about its environment, avoid obstacles, and successfully move a platoon of drones through multiple scenarios.
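The following toy sketch illustrates the platooning idea on its own: each vehicle tracks a point a fixed gap behind its predecessor, here with a simple proportional controller. This is an illustrative assumption; the thesis's actual control and LRA-M-based awareness loop are more involved.

```python
import numpy as np

def platoon_step(positions, leader_target, gap=2.0, k=0.5, dt=0.1):
    """positions: (n, 2) array; vehicle 0 is the leader."""
    new = positions.copy()
    new[0] += k * (leader_target - positions[0]) * dt
    for i in range(1, len(positions)):
        ahead = positions[i - 1]
        direction = ahead - positions[i]
        dist = np.linalg.norm(direction) or 1e-9
        target = ahead - gap * direction / dist   # point `gap` behind vehicle i-1
        new[i] += k * (target - positions[i]) * dt
    return new

pos = np.array([[0.0, 0.0], [-3.0, 0.5], [-6.0, -0.5]])
for _ in range(100):
    pos = platoon_step(pos, leader_target=np.array([10.0, 0.0]))
print(pos.round(2))   # vehicles line up in a string behind the leader
```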
Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks
(2018)
Semantic fusion is a central requirement of many multimodal interfaces. Procedural methods like finite-state transducers and augmented transition networks have proven to be beneficial for implementing semantic fusion. They are compliant with the rapid development cycles that are common in user interface development, in contrast to machine-learning approaches that require time-costly training and optimization. We identify seven fundamental requirements for the implementation of semantic fusion: action derivation, continuous feedback, context-sensitivity, temporal relation support, access to the interaction context, as well as support for chronologically unsorted and probabilistic input. A subsequent analysis reveals, however, that there is currently no solution fulfilling the latter two requirements. As the main contribution of this article, we thus present the Concurrent Cursor concept to compensate for these shortcomings. In addition, we showcase a reference implementation, the Concurrent Augmented Transition Network (cATN), that validates the concept’s feasibility in a series of proof-of-concept demonstrations as well as through a comparative benchmark. The cATN fulfills all identified requirements and fills the gap left by previous solutions. It supports the rapid prototyping of multimodal interfaces by means of five concrete traits: its declarative nature, the recursiveness of the underlying transition network, the network abstraction constructs of its description language, the utilized semantic queries, and an abstraction layer for lexical information. Our reference implementation was and is used in various student projects, theses, as well as master-level courses. It is openly available and showcases that non-experts can effectively implement multimodal interfaces, even for non-trivial applications in mixed and virtual reality.
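For readers unfamiliar with procedural fusion, here is a deliberately simplified, generic transition network (not the cATN or its API) that fuses a spoken command with pointing events into a single action; the event names and values are invented.

```python
def fuse(events):
    """Walk a tiny transition network over a chronologically sorted event stream."""
    state, obj, target = "wait_select", None, None
    for kind, value in events:
        if state == "wait_select" and kind == "speech" and value == "put that":
            state = "wait_point_obj"
        elif state == "wait_point_obj" and kind == "point":
            obj, state = value, "wait_dest"
        elif state == "wait_dest" and kind == "speech" and value == "there":
            state = "wait_point_dest"
        elif state == "wait_point_dest" and kind == "point":
            target = value
            return {"action": "move", "object": obj, "to": target}
    return None

events = [("speech", "put that"), ("point", "cube_7"),
          ("speech", "there"), ("point", (1.0, 0.0, 2.5))]
print(fuse(events))   # -> {'action': 'move', 'object': 'cube_7', 'to': (1.0, 0.0, 2.5)}
```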
This short letter proposes more consolidated explicit solutions for the forces and torques acting on typical rover wheels, which can be used as a method to determine their average mobility characteristics in planetary soils. The closed-loop solutions build on one of the verified methods but, in contrast to previous work, decouple the observables, requiring fewer physical parameters to be measured. As a result, we show that, with knowledge of the terrain properties, wheel driving performance relies on a single observable only. Because of their generality, the equations established here can have further implications for the autonomy and control of rovers and for planetary soil characterization.
Lifetime techniques are applied to diverse fields of study including materials sciences, semiconductor physics, biology, molecular biophysics and photochemistry.
Here we present DDRS4PALS, a software for the acquisition and simulation of lifetime spectra using the DRS4 evaluation board (Paul Scherrer Institute, Switzerland) for time-resolved measurements and the digitization of detector output pulses. Artifact-afflicted pulses can be corrected or rejected prior to the lifetime calculation to provide high-quality lifetime spectra, which are crucial for a profound analysis, i.e. the decomposition of the true information. Moreover, the pulses can be streamed to an (external) hard drive during the measurement and subsequently loaded in offline mode without being connected to the hardware. This allows the generation of various lifetime spectra at different configurations from one single measurement and, hence, a meaningful comparison in terms of analyzability and quality. Parallel processing and an integrated JavaScript-based language provide convenient options to accelerate and automate time-consuming processes such as lifetime spectrum simulations.
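As a sketch of what a simulated lifetime spectrum can look like, independent of DDRS4PALS's actual implementation: a sum of exponential decay components convolved with a Gaussian instrument response function, plus a flat background and Poisson counting noise. All lifetimes, intensities, and the IRF width are arbitrary example values.

```python
import numpy as np

bin_w = 0.005                                   # channel width in ns (5 ps)
t = np.arange(0.0, 20.0, bin_w)                 # time axis in ns
components = [(0.160, 0.85), (0.400, 0.10), (1.8, 0.05)]  # (tau [ns], intensity)

# ideal decay spectrum: intensity-weighted exponential components
ideal = sum(I / tau * np.exp(-t / tau) for tau, I in components)

# Gaussian instrument response function, 230 ps FWHM (assumed)
sigma = 0.230 / 2.355
kt = np.arange(-1.0, 1.0, bin_w)
irf = np.exp(-0.5 * (kt / sigma) ** 2)
irf /= irf.sum()

model = np.convolve(ideal, irf, mode="same") + 0.2   # flat background
counts = np.random.default_rng(1).poisson(model * 5e4 * bin_w)
print("total counts:", counts.sum())
```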
Knowledge encoding in game mechanics: transfer-oriented knowledge learning in desktop-3D and VR
(2019)
Affine Transformations (ATs) are a complex and abstract learning content. Encoding AT knowledge in Game Mechanics (GMs) enables repeated knowledge application and audiovisual demonstration. Playing a serious game that provides these GMs leads to motivating and effective knowledge learning. Using immersive Virtual Reality (VR) has the potential to further increase the serious game’s learning outcome and learning quality. This paper compares the effectiveness and efficiency of desktop-3D and VR with respect to the achieved learning outcome. The present study also analyzes the effectiveness of an enhanced audiovisual knowledge encoding and the provision of a debriefing system. The results validate the effectiveness of encoding knowledge in GMs to achieve knowledge learning. The study also indicates that VR is beneficial for the overall learning quality and that an enhanced audiovisual encoding has only a limited effect on the learning outcome.
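For reference, the learning content in question in executable form: an affine transformation composes a linear map A with a translation b, y = Ax + b. A game mechanic encoding this might let players choose A (rotate, scale) and b (move); the values below are arbitrary.

```python
import numpy as np

theta = np.pi / 4
A = np.array([[np.cos(theta), -np.sin(theta)],      # rotation by 45 degrees
              [np.sin(theta),  np.cos(theta)]]) * 2.0  # uniform scale by 2
b = np.array([1.0, -0.5])                           # translation

p = np.array([1.0, 0.0])
print(A @ p + b)                                    # transformed point y = Ap + b
```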
In recent years, several community testbeds as well as participatory sensing platforms have successfully established themselves to provide open data to everyone interested, each with a specific goal in mind, ranging from collecting radio coverage data to environmental and radiation data. Such data can support the community in its decision making, whether subscribing to a specific mobile phone service that provides good coverage in an area or finding a sunny and warm region for the summer holidays.
However, the existing platforms usually limit themselves to directly measurable network QoS. If such a crowdsourced data set provided more in-depth derived measures, it would enable even better decision making. A community-driven crowdsensing platform that derives spatial application-layer user experience from resource-friendly bandwidth estimates would be such a case; video streaming services come to mind as a prime example. In this paper, we present a concept for such a system, based on an initial prototype, that eases the collection of the data necessary to determine mobile-specific QoE at large scale. In addition, we reason why the simple quality metric proposed here can hold its own.
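One conceivable way to derive such an application-layer measure, sketched purely as an assumption: map an estimated throughput to the highest sustainable rung of a hypothetical video bitrate ladder and to a 1-5 opinion score via a saturating logarithmic curve. All thresholds and coefficients are illustrative, not the metric proposed in the paper.

```python
import math

LADDER = [(0.8, 360), (1.5, 480), (3.0, 720), (6.0, 1080)]  # (Mbit/s, resolution)

def playable_resolution(mbps: float) -> int:
    """Highest ladder rung whose bandwidth requirement is met."""
    res = 240
    for need, p in LADDER:
        if mbps >= need:
            res = p
    return res

def mos_estimate(mbps: float) -> float:
    """Saturating logarithmic mapping into the 1..5 MOS range (assumed curve)."""
    return max(1.0, min(5.0, 1.0 + 1.3 * math.log1p(mbps)))

for sample in (0.5, 2.0, 8.0):
    print(sample, "Mbit/s ->", playable_resolution(sample), "p,",
          round(mos_estimate(sample), 2), "MOS")
```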
The three-dimensional cuneiform script is one of the oldest known writing systems and a central object of research in Ancient Near Eastern Studies and Hittitology. An important step towards the understanding of the cuneiform script is the provision of opportunities and tools for joint analysis. This paper presents an approach that contributes to this challenge: a collaboration-compatible, web-based scientific exploration and analysis of 3D-scanned cuneiform fragments. The WebGL-based concept incorporates methods for compressed web-based content delivery of large 3D datasets and high-quality visualization. To maximize accessibility and to promote acceptance of 3D techniques in the field of Hittitology, the introduced concept is integrated into the Hethitologie-Portal Mainz, an established leading online research resource in the field of Hittitology, which until now exclusively included 2D content. The paper shows that increasing the availability of 3D-scanned archaeological data through a web-based interface can provide significant scientific value while at the same time finding a trade-off between copyright-induced restrictions and scientific usability.
White Paper on Crowdsourced Network and QoE Measurements – Definitions, Use Cases and Challenges
(2020)
The goal of the white paper at hand is as follows. The definitions of the terms build a framework for discussions around the hype topic ‘crowdsourcing’. This serves as a basis for differentiation and a consistent view from different perspectives on crowdsourced network measurements, with the goal of providing a commonly accepted definition in the community. The focus is on the context of mobile and fixed network operators, but also on measurements at different layers (network, application, user layer). In addition, the white paper shows the value of crowdsourcing for selected use cases, e.g., to improve QoE or to address regulatory issues. Finally, the major challenges and issues for researchers and practitioners are highlighted.
This white paper is the outcome of the Würzburg seminar on “Crowdsourced Network and QoE Measurements”, which took place from 25–26 September 2019 in Würzburg, Germany. International experts were invited from industry and academia. They are well known in their communities, having different backgrounds in crowdsourcing, mobile networks, network measurements, network performance, Quality of Service (QoS), and Quality of Experience (QoE). The discussions in the seminar focused on how crowdsourcing will support vendors, operators, and regulators in determining the Quality of Experience in new 5G networks that enable various new applications and network architectures. As a result of the discussions, the need for a white paper manifested itself, with the goal of providing a scientific discussion of the terms “crowdsourced network measurements” and “crowdsourced QoE measurements”, describing relevant use cases for such crowdsourced data, and outlining the underlying challenges. During the seminar, these main topics were identified, intensively discussed in break-out groups, and brought back into the plenum several times. The outcome of the seminar is the white paper at hand, which is – to our knowledge – the first one covering the topic of crowdsourced network and QoE measurements.
The correct behavior of spacecraft components is the foundation of unhindered mission operation. However, no technical system is free of wear and degradation. A malfunction of one single component might significantly alter the behavior of the whole spacecraft and may even lead to a complete mission failure. Therefore, abnormal component behavior must be detected early in order to be able to perform countermeasures. A dedicated fault detection system can be employed, as opposed to classical health monitoring performed by human operators, to decrease the response time to a malfunction. In this paper, we present a generic model-based diagnosis system, which detects faults by analyzing the spacecraft’s housekeeping data. The observed behavior of the spacecraft components, given by the housekeeping data, is compared to their expected behavior, obtained through simulation. Each discrepancy between the observed and the expected behavior of a component generates a so-called symptom. Given the symptoms, the diagnoses are derived by computing sets of components whose malfunction might cause the observed discrepancies. We demonstrate the applicability of the diagnosis system using modified housekeeping data of the qualification model of an actual spacecraft and outline the advantages and drawbacks of our approach.
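The description above matches the classical consistency-based diagnosis setting, so as a hedged sketch: symptoms are tolerance-exceeding discrepancies between observed and simulated housekeeping values, model knowledge maps symptoms to conflict sets of components, and candidate diagnoses are the minimal hitting sets of those conflicts (Reiter-style). All component names, values, and thresholds below are invented.

```python
from itertools import chain, combinations

def symptoms(observed, expected, tol=0.05):
    """Flag every channel deviating more than tol (relative) from simulation."""
    return {k for k in observed
            if abs(observed[k] - expected[k]) > tol * abs(expected[k])}

def minimal_hitting_sets(conflicts):
    """Enumerate minimal component sets that intersect every conflict set."""
    comps = sorted(set(chain.from_iterable(conflicts)))
    hits = []
    for r in range(1, len(comps) + 1):
        for cand in combinations(comps, r):
            s = set(cand)
            if all(s & c for c in conflicts) and not any(h <= s for h in hits):
                hits.append(s)
    return hits

observed = {"bus_voltage": 26.0, "panel_current": 0.4, "temp_obc": 41.0}
expected = {"bus_voltage": 28.0, "panel_current": 1.1, "temp_obc": 40.5}
print("symptoms:", symptoms(observed, expected))
# model knowledge (assumed): which components could explain each symptom
conflicts = [{"solar_panel", "power_conditioning"},    # explains panel_current
             {"power_conditioning", "battery"}]        # explains bus_voltage
print("diagnoses:", minimal_hitting_sets(conflicts))   # e.g. {power_conditioning}
```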
Background: Natural language processing (NLP) is a powerful tool supporting the generation of Real-World Evidence (RWE). There is no NLP system that enables the extensive querying of parameters specific to multiple myeloma (MM) from unstructured medical reports. We therefore created an MM-specific ontology to accelerate information extraction (IE) from unstructured text. Methods: Our MM ontology consists of extensive MM-specific, hierarchically structured attributes and values. We implemented “A Rule-based Information Extraction System” (ARIES) that uses this ontology. We evaluated ARIES on 200 randomly selected medical reports of patients diagnosed with MM. Results: Our system achieved a high F1-score of 0.92 on the evaluation dataset, with a precision of 0.87 and a recall of 0.98. Conclusions: Our rule-based IE system enables the comprehensive querying of medical reports. The IE accelerates the extraction of data and enables clinicians to generate RWE on hematological issues faster. RWE helps clinicians to make decisions in an evidence-based manner. Our tool readily accelerates the integration of research evidence into everyday clinical practice.
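Not ARIES itself, but a toy of the same shape: ontology attributes paired with value patterns drive rules that pull structured facts out of free text. The attribute names and regular expressions below are invented examples.

```python
import re

# toy "ontology": attribute -> value pattern (invented, not the ARIES schema)
ONTOLOGY = {
    "ISS_stage":   r"\bISS\s*(?:stage)?\s*(I{1,3})\b",
    "kappa_light": r"\bkappa\s+light\s+chain\b",
    "m_protein":   r"\bM[- ]protein\s*(?:of)?\s*([\d.]+)\s*g/d[lL]\b",
}

def extract(report: str) -> dict:
    """Apply each rule; capture a value if the pattern has a group."""
    facts = {}
    for attr, pattern in ONTOLOGY.items():
        m = re.search(pattern, report, flags=re.IGNORECASE)
        if m:
            facts[attr] = m.group(1) if m.groups() else True
    return facts

text = "Multiple myeloma ISS stage II with kappa light chain, M-protein 3.2 g/dl."
print(extract(text))  # -> {'ISS_stage': 'II', 'kappa_light': True, 'm_protein': '3.2'}
```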
Experimental high-throughput analysis of molecular networks is a central approach to characterizing the adaptation of plant metabolism to the environment. However, recent studies have demonstrated that it is hardly possible to predict in situ metabolic phenotypes from experiments under controlled conditions, such as growth chambers or greenhouses. This is particularly due to the high molecular variance of in situ samples induced by environmental fluctuations. An approach to the functional metabolome interpretation of field samples would be desirable in order to identify and trace back the impact of environmental changes on plant metabolism. To test the applicability of metabolomics studies for a characterization of plant populations in the field, we identified and analyzed in situ samples of nearby grown natural populations of Arabidopsis thaliana in Austria. A. thaliana is the primary molecular biological model system in plant biology, with one of the best functionally annotated genomes, representing a reference system for all other plant genome projects. The genomes of these novel natural populations were sequenced and phylogenetically compared to a comprehensive genome database of A. thaliana ecotypes. Experimental results on primary and secondary metabolite profiling and genotypic variation were functionally integrated by a data mining strategy which combines the statistical output of metabolomics data with genome-derived biochemical pathway reconstruction and metabolic modeling. Correlations of biochemical model predictions and population-specific genetic variation indicated varying strategies of metabolic regulation on a population level, which enabled the direct comparison, differentiation, and prediction of metabolic adaptation of the same species to different habitats. These differences were most pronounced in organic and amino acid metabolism as well as at the interface of primary and secondary metabolism, and allowed for the direct classification of population-specific metabolic phenotypes within geographically contiguous sampling sites.
Cosmology often uses intricate formulas and mathematics to derive new theories and concepts. We do something different in this paper: we look at biological processes and derive heuristics from them, so that the revised cosmology agrees with astronomical observations while also agreeing with standard biological observations. We show that we then have to replace any type of singularity at the start of the universe by a condensation nucleus, and that the very early period of the universe usually attributed to inflation has to be replaced by a period of rapid crystal growth, as in Weiss magnetization domains.
Impressively, these minor modifications agree well with astronomical observations, including removing the strong inflation perturbations that were never observed in the recent BICEP2 experiments. Furthermore, looking at biological principles suggests that such a new theory, with a condensation nucleus at the start and a first rapid phase of magnetization-like growth of the ordered, physical-law-obeying lattice we live in, is in fact the only convincing theory of the early phases of our universe that is also compatible with current observations.
We show in detail in the following that such a process of crystal creation, breaking off of new crystal seeds, and ultimate evaporation of the present crystal readily leads, over several generations, to an evolution and selection of better, more stable, and more self-organizing crystals. Moreover, this answers the “fine-tuning” question of why our universe is fine-tuned to favor life: our universe is so self-organizing as to have enough offspring, and the detailed physics involved is at the same time highly favorable for all self-organizing processes, including life.
This biological theory contrasts with current standard inflation cosmologies. The latter do not perform well in explaining any phenomena of sophisticated structure creation or self-organization. As proteins can only fold thermodynamically by increasing the entropy in the solution around them, we suggest for cosmology that a condensation nucleus for a universe can form only in a “chaotic ocean” of string soup or quantum foam if the entropy outside the nucleus rapidly increases. We derive an interaction potential for 1- to n-dimensional strings or quantum foams and show that they allow only 1D, 2D, 4D, or octonion interactions. The latter is the richest structure and agrees with the E8 symmetry fundamental to particle physics; it is also compatible with the ten-dimensional E8 string theory that is part of M-theory. Interestingly, interactions of any other dimensionality can be ruled out using Hurwitz's composition theorem. Crystallization also explains extremely well why we have only one macroscopic reality and where the worldlines of alternative trajectories exist: they lie in other planes of the crystal, and for energy reasons they crystallize mostly at the same time, yielding a beautiful and stable crystal. This explains decoherence and allows one to determine the size of Planck's quantum h (very small, as the energetic separation of crystal layers is extremely strong).
The ultimate dissolution of real crystals suggests an explanation for dark energy that agrees with estimates for the “big rip”. The halo distribution of dark matter favoring galaxy formation is readily explained by a crystal seed starting with unit cells made of normal and dark matter.
That we have only matter and not antimatter can be explained if there are right-handed matter crystals and left-handed antimatter crystals. Similarly, real crystals are never perfect, and we argue that exactly such irregularities allow the formation of galaxies, clusters, and superclusters. Finally, heuristics from genetics suggest looking for a systems perspective to derive correct vacuum and Higgs boson energies.
Maps are the main tool for representing geographical information. Users often zoom in and out to access maps at different scales. Continuous map generalization tries to make the changes between different scales smooth, which is essential to provide users with a comfortable zooming experience.
In order to achieve continuous map generalization with high quality, we optimize some important aspects of maps. In this book, we use optimization in the generalization of land-cover areas, administrative boundaries, buildings, and coastlines. According to our experiments, continuous map generalization indeed benefits from optimization.
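A tiny illustration of the “continuous” aspect, under the assumption that corresponding vertices between two scales are already matched (computing good correspondences and the simplified geometry itself is part of what the optimization addresses): intermediate zoom levels are produced by interpolating matched vertices of a building outline.

```python
import numpy as np

# detailed outline and its simplified counterpart; vertices are duplicated
# in the simplified version so both polylines have matching vertex counts
detailed   = np.array([[0, 0], [4, 0], [4, 2], [3, 2], [3, 3], [0, 3]], float)
simplified = np.array([[0, 0], [4, 0], [4, 3], [4, 3], [4, 3], [0, 3]], float)

def morph(a, b, t):
    """Linear vertex interpolation: t = 0 -> detailed, t = 1 -> simplified."""
    return (1 - t) * a + t * b

for t in (0.0, 0.5, 1.0):
    print(f"t={t}:", morph(detailed, simplified, t).tolist())
```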