Institut für Informatik
Within companies, ongoing digitization makes the protection of data from unauthorized access and manipulation increasingly relevant. Here, artificial intelligence offers means to automatically detect such anomalous events. However, as the capabilities of these automated anomaly detection systems grow, so does their complexity, making it challenging to understand their decisions. Consequently, many methods to explain these decisions have been proposed in recent research. The most popular techniques in this area are feature relevance explainers, which explain a decision made by an artificial intelligence system by distributing relevance scores across the inputs given to the system, thus highlighting which information had the most impact on the decision. These explainers, although widely used in anomaly detection, have not been systematically and quantitatively evaluated. This is especially problematic, as explainers are inherently approximations that simplify the underlying artificial intelligence and thus may not always provide high-quality explanations.
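To make the idea of a feature relevance explainer concrete, the following illustrative sketch (not the thesis's method; all names and the z-score-based detector are assumptions for the example) computes occlusion-based relevance scores: each feature is replaced by its training mean, and the resulting drop in the anomaly score is attributed to that feature as its relevance.

```python
import numpy as np

# Fit a toy anomaly detector: squared z-scores under a diagonal Gaussian.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)

def anomaly_score(x):
    """Higher = more anomalous: squared z-scores summed over features."""
    return float(np.sum(((x - mu) / sigma) ** 2))

def feature_relevance(x):
    """Occlusion-based relevance: score drop when a feature is neutralized."""
    base = anomaly_score(x)
    rel = np.empty(len(x))
    for j in range(len(x)):
        x_occ = x.copy()
        x_occ[j] = mu[j]                  # replace feature j by its mean
        rel[j] = base - anomaly_score(x_occ)
    return rel

anomaly = np.array([0.0, 6.0, 0.0, 0.0])  # feature 1 is the anomalous one
rel = feature_relevance(anomaly)
```

Here the relevance of feature 1 dominates, since occluding it removes almost the entire anomaly score; this is exactly the kind of attribution that the thesis's ground-truth-based evaluation would score for quality.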
This thesis makes a contribution towards the systematic evaluation of feature relevance explainers in anomaly detection on tabular data. We first review the existing literature for available feature relevance explainers and suitable evaluation schemes. We find that multiple feature relevance explainers with different internal functioning are employed in anomaly detection, but that many existing evaluation schemes are not applicable to this domain. As a result, we construct a novel evaluation setup based on ground truth explanations. Since these ground truth explanations are not commonly available for anomaly detection data, we also provide methods to obtain ground truth explanations across different scenarios of data availability, allowing us to generate multiple labeled data sets with ground truth explanations.
Multiple experiments across the aggregated data sets and explainers reveal that explanation quality varies strongly and that explainers can produce both very high-quality and near-random explanations. Furthermore, high explanation quality does not transfer across different data sets and anomaly detection models, so no single best feature relevance explainer exists that can be applied without performance evaluation.
As evaluation appears necessary to ensure high-quality explanations, we propose a framework that enables the optimization of explainers on unlabeled data through expert simulations. Further, to aid explainers in consistently achieving high-quality explanations in applications where expert simulations are not available, we provide two schemes for setting explainer hyperparameters specifically suitable for anomaly detection.
In geographic data analysis, one is often given point data of different categories (such as facilities of a university categorized by department). Drawing upon recent research on set visualization, we want to visualize category membership by connecting points of the same category with visual links. Existing approaches that follow this path usually insist on connecting all members of a category, which may lead to many crossings and visual clutter. We propose an approach that avoids crossings between connections of different categories completely. Instead of connecting all data points of the same category, we subdivide categories into smaller, local clusters where needed. We do a case study comparing the legibility of drawings produced by our approach and those by existing approaches.
In our problem formulation, we are additionally given a graph G on the data points whose edges express some sort of proximity. Our aim is to find a subgraph G′ of G with the following properties: (i) edges connect only data points of the same category, (ii) no two edges cross, and (iii) the number of connected components (clusters) is minimized. We then visualize the clusters in G′. For arbitrary graphs, the resulting optimization problem, Cluster Minimization, is NP-hard (even to approximate). Therefore, we introduce two heuristics. We perform an extensive benchmark on real-world data. Comparisons with exact solutions indicate that our heuristics perform astonishingly well for certain relative-neighborhood graphs.
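The properties (i)–(iii) above can be sketched with a simple greedy heuristic (one of many possibilities, not necessarily one of the thesis's two heuristics): scan candidate edges shortest-first, keep an edge only if it connects same-category points and crosses no previously kept edge, and count the remaining connected components with a union-find.

```python
import itertools
import math

def ccw(a, b, c):
    """Signed area test: >0 if a,b,c make a left turn."""
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def segments_cross(p1, p2, q1, q2):
    """Proper crossing test; segments sharing an endpoint do not cross."""
    if len({p1, p2, q1, q2}) < 4:
        return False
    d1, d2 = ccw(p1, p2, q1), ccw(p1, p2, q2)
    d3, d4 = ccw(q1, q2, p1), ccw(q1, q2, p2)
    return d1 * d2 < 0 and d3 * d4 < 0

def cluster_minimization(points, categories, edges):
    """Greedy heuristic: returns (#clusters, kept crossing-free edges)."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    kept = []
    # Shortest edges first: short links rarely block many others.
    for u, v in sorted(edges, key=lambda e: math.dist(points[e[0]], points[e[1]])):
        if categories[u] != categories[v]:
            continue                                      # property (i)
        if any(segments_cross(points[u], points[v], points[a], points[b])
               for a, b in kept):
            continue                                      # property (ii)
        kept.append((u, v))
        parent[find(u)] = find(v)
    return len({find(i) for i in range(len(points))}), kept

points = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.0), (1.0, -1.0)]
n_clusters, kept = cluster_minimization(points, ['a', 'a', 'b', 'b'],
                                        [(0, 1), (2, 3)])
# the two candidate edges cross, so one is dropped and three clusters remain
```

Restricting the candidate edges to a relative-neighborhood graph, as in the benchmark, keeps this quadratic crossing check cheap in practice.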
Mixed, augmented, and virtual reality, collectively known as extended reality (XR), allow users to immerse themselves in virtual environments and engage in experiences surpassing reality's boundaries. Virtual humans are ubiquitous in such virtual environments and can be utilized for myriad purposes, offering the potential to greatly impact daily life. Through the embodiment of virtual humans, XR offers the opportunity to influence how we see ourselves and others. In this function, virtual humans serve as a predefined stimulus whose perception is essential for researchers, application designers, and developers to understand. This dissertation aims to investigate the influence of individual-, system-, and application-related factors on the perception of virtual humans in virtual environments, focusing on their potential use as stimuli in the domain of body perception. Individual-related factors encompass influences based on the user's characteristics, such as appearance, attitudes, and concerns. System-related factors relate to the technical properties of the system that implements the virtual environment, such as the level of immersion. Application-related factors refer to design choices and specific implementations of virtual humans within virtual environments, such as their rendering or animation style. This dissertation provides a contextual framework and reviews the relevant literature on factors influencing the perception of virtual humans. To address identified research gaps, it reports on five empirical studies analyzing quantitative and qualitative data from a total of 165 participants. The studies utilized a custom-developed XR system, enabling users to embody rapidly generated, photorealistically personalized virtual humans that can be realistically altered in body weight and observed using different immersive XR displays.
The dissertation's findings showed, for example, that embodiment and personalization of virtual humans serve as self-related cues and moderate the perception of their body weight based on the user's body weight. They also revealed a display bias that significantly influences the perception of virtual humans, with disparities in body weight perception of up to nine percent between different immersive XR displays. Based on all findings, implications for application design were derived, including recommendations regarding reconstruction, animation, body weight modification, and body weight estimation methods for virtual humans, but also for the general user experience. By revealing influences on the perception of virtual humans, this dissertation contributes to understanding the intricate relationship between users and virtual humans. The findings and implications presented have the potential to enhance the design and development of virtual humans, leading to improved user experiences and broader applications beyond the domain of body perception.
There is a great deal of interest in efficient, accurate, and reliable high-quality scene modeling. In robotics, especially for autonomous robots and drones, high-quality scene modeling is essential for navigation and interaction within complex environments. In agriculture, farmers require it for precision monitoring and crop management. In architecture and construction, engineers use it to support Building Information Modeling (BIM) by creating detailed 3D representations of buildings and infrastructure. In the entertainment industry, high-quality scene modeling facilitates the creation of immersive experiences in films, video games, and virtual reality (VR). While object and small-scene modeling with implicit representations is well developed, precise incremental reconstruction of large scenes remains a complex and challenging task due to the high computational budget required for loop correction. Beyond geometric modeling, color modeling must accommodate more complex appearance patterns and has so far supported only inefficient post-training. Semantic modeling presents a further significant challenge: owing to its significantly higher dimensionality, semantic information is even more difficult to model in continuous space.
This thesis addresses dense SLAM and the challenges of the recent trend toward continuous mapping and its use at large scale. To support loop correction at large scale, IMT-Mapping introduces an SE(3)-transformable implicit map with remapping functions. IFR performs SDF-to-SDF registration without moving the field, thereby providing an efficient way to align two sub-maps. Turning from geometry to color, NSLF-OL introduces online learning of high-quality color alongside real-time reconstruction. Analyzing the limitations of previous research, Uni-Fusion proposes universal continuous mapping for all map properties, even high-dimensional CLIP features, without any training. SceneFactory designs a workflow-centric framework that uniformly supports the complete range of incremental scene modeling.

The combination of these contributions gives rise to a novel general concept in this thesis: Incremental Continuous Scene Modeling (ICSM). ICSM transcends the conventional limitations of dense SLAM, offering a versatile and comprehensive approach to high-quality incremental mapping across features (geometric, color, semantic, and more), sensor setups, and applications (dense RGB/RGB-D/RGB-L/depth-only SLAM, unposed and uncalibrated MVD, and more).
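The core trick behind an SE(3)-transformable implicit map can be illustrated in a few lines (a conceptual sketch under assumed names, not the IMT-Mapping implementation): rather than resampling a signed distance field (SDF) after a loop correction, the field is wrapped so that queries are pulled back through the inverse rigid transform.

```python
import numpy as np

def make_sphere_sdf(center, radius):
    """Toy implicit map: SDF of a sphere."""
    return lambda p: np.linalg.norm(p - center, axis=-1) - radius

def transform_sdf(sdf, R, t):
    """SDF of the map moved by the rigid transform x -> R @ x + t.

    No field values are touched; queries are evaluated at R^-1 (p - t).
    """
    R_inv = R.T  # rotation matrices are orthogonal
    return lambda p: sdf((p - t) @ R_inv.T)

sdf = make_sphere_sdf(np.array([0.0, 0.0, 0.0]), 1.0)
t = np.array([2.0, 0.0, 0.0])
moved = transform_sdf(sdf, np.eye(3), t)
# surface points of the translated sphere, e.g. (3, 0, 0), evaluate to ~0
```

Because the wrapper composes, a whole pose-graph correction can be applied to sub-maps by updating their transforms only, which is what makes loop closure affordable for implicit maps.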
This thesis first introduces the history of robotic mapping and our overall contribution. Then, in three separate chapters, we present the geometry, color, and semantics components of ICSM, integrating our papers as subsections of the corresponding chapters. Beyond the broad range of features supported in ICSM, the subsequent chapter introduces a unified framework for accommodating diverse sensor configurations and applications. We are confident about the contribution to the field; nevertheless, limitations remain. In the final chapter, together with the conclusion of the thesis, we therefore discuss these limitations, problems that remain to be explored, and an outlook on future directions.
Introduction:
Perception and memorization of melody and rhythm begin around the third trimester of gestation. Infants have astonishing musical predispositions, and melody contour is the most salient feature for them.
Objective:
To longitudinally analyse melody contour of spontaneous crying of healthy infants and to identify melodic intervals. The aim was 3-fold: (1) to answer the question whether spontaneous crying of healthy infants regularly exhibits melodic intervals across the observation period, (2) to investigate whether interval events become more complex with age and (3) to analyse interval size distribution.
Methods:
Weekly cry recordings of 12 healthy infants (6 females) over the first 4 months of life were analysed (6,130 cry utterances) using frequency spectrograms and pitch analyses (PRAAT). A preselection of utterances containing a well-identifiable, noise-free and undisturbed melodic contour was applied to identify and measure melodic intervals in the final subset of 3,114 utterances. Age-dependent frequency of occurrence of melodic intervals was statistically analysed using generalized estimating equations.
Results:
85.3% of all preselected melody contours (n = 3,114) contained either single rising or falling melodic intervals or complex events combining both. In total, 6,814 melodic intervals were measured. A significant increase in interval occurrence was found, characterized by a non-linear age effect (3 developmental phases). Complex events were found to increase significantly and linearly with age. In neither analysis was a sex effect found. The interval size distribution showed a maximum at the minor second, the prevailing musical interval in infants' crying over the first 4 months of life.
Conclusion:
Melodic intervals seem to be a regular phenomenon of spontaneous crying in healthy infants. They are suggested as a further candidate for developing an early risk marker of vocal control in infants. Subsequent studies are needed to compare healthy infants with infants at risk for respiratory-laryngeal dysfunction in order to investigate the diagnostic value of the occurrence of melodic intervals and their age-dependent increase in complexity.
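For readers unfamiliar with musical intervals, the measurement underlying the results can be sketched as follows (an illustrative computation, not the study's PRAAT pipeline): the interval between two fundamental-frequency values is the log-ratio expressed in equal-tempered semitones, and a minor second corresponds to roughly one semitone.

```python
import math

def interval_semitones(f1_hz, f2_hz):
    """Signed melodic interval in equal-tempered semitones; positive = rising."""
    return 12 * math.log2(f2_hz / f1_hz)

# A rise from 400 Hz to 400 * 2^(1/12) Hz (~423.8 Hz) is exactly one
# semitone, i.e. a minor second -- the prevailing interval reported above.
size = interval_semitones(400.0, 400.0 * 2 ** (1 / 12))
```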
In this thesis, a model of the dynamics during the landing phase of an interplanetary lander mission is developed in a 3-DOF approach, with the focus on landing by propulsive means. Based on this model, a MATLAB simulation was developed with the goal of estimating the performance, and especially the required fuel amount, of a propulsive landing system on Venus. This landing system is modeled to control its descent using thrusters and to perform a stable landing at a specified target location. Using this simulation, the planetary environments of Mars and Venus can be reproduced, and the impact of wind, atmospheric density, and gravity, as well as of different thrusters, on the fuel consumption and landing abilities of the simulated landing system can be investigated. The comparability of these results with the behavior of real landing systems is validated by simulating the powered descent phase of the Mars 2020 mission and comparing the results to the data the Mars 2020 descent stage collected during this phase of its landing. Further, based on the simulation, the minimal fuel amount necessary for a successful landing on Venus has been determined for different scenarios. The simulation, along with these results, is a contribution to the research of this thesis's supervisor, Clemens Riegler, M.Sc., who will use them for a comparison of different types of landing systems in the context of his doctoral thesis.
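The kind of fuel estimate such a simulation produces can be illustrated with a heavily simplified, vertical-only sketch (all parameter values and names are illustrative assumptions, not mission data or the thesis's MATLAB model): thrust is held constant and propellant flow follows from m_dot = F / (g0 * Isp), integrated until the descent velocity is nulled or the ground is reached.

```python
G0 = 9.80665  # m/s^2, standard gravity, used only for the Isp conversion

def descent_fuel(m0, isp, g_planet, v0, h0, thrust, dt=0.01):
    """Integrate a vertical powered descent with constant thrust.

    Returns (fuel used [kg], vertical speed [m/s]) when the downward
    velocity is nulled or the lander reaches the surface. v > 0 is downward.
    """
    m, v, h = m0, v0, h0
    while h > 0.0 and v > 0.0 and m > 0.0:
        a = g_planet - thrust / m            # net downward acceleration
        v += a * dt
        h -= v * dt
        m -= thrust / (G0 * isp) * dt        # propellant mass flow
    return m0 - m, v

# Illustrative numbers: 1000 kg lander, 300 s Isp, Venus surface gravity
# 8.87 m/s^2, entering the powered phase at 60 m/s downward, 500 m altitude.
fuel, v_final = descent_fuel(m0=1000.0, isp=300.0, g_planet=8.87,
                             v0=60.0, h0=500.0, thrust=15000.0)
```

A full 3-DOF model additionally tracks horizontal position and velocity and adds wind and atmospheric drag, but the fuel bookkeeping works the same way.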
This short report examines the potential uses of small satellites in extraterrestrial research and identifies the technological challenges that arise in their deployment. The results presented are part of the SATEX study (FKZ 50OO2222). The document first illustrates the general uses of small satellites beyond Earth orbit through selected example missions. It then discusses specific technical challenges and environmental conditions of cislunar and interplanetary small-satellite missions, followed by a brief presentation of user requirements from Germany for space exploration missions. Finally, ten concrete mission ideas identified in the study are presented and evaluated, and the key findings and recommendations are summarized.
Here, we performed a non-systematic analysis of the strengths, weaknesses, opportunities, and threats (SWOT) associated with the application of artificial intelligence (AI) to sports research, coaching, and the optimization of athletic performance. The strengths of AI with regard to applied sports research, coaching, and athletic performance involve the automation of time-consuming tasks, the processing and analysis of large amounts of data, and the recognition of complex patterns and relationships. However, it is also essential to be aware of the weaknesses associated with integrating AI into this field. For instance, it is imperative that the data employed to train an AI system be diverse, complete, and as unbiased as possible with respect to factors such as the gender, performance level, and experience of an athlete. Other challenges include limited adaptability to novel situations and the cost and other resources required. Opportunities include the possibility of monitoring athletes both long-term and in real time, the potential discovery of novel indicators of performance, and the prediction of future injury risk. Leveraging these opportunities can transform athletic development and the practice of sports science in general. Threats include over-dependence on technology, reduced involvement of human expertise, risks to data privacy, breaches of data integrity and manipulation of data, and resistance to adopting such new technology. Understanding and addressing these SWOT factors is essential for maximizing the benefits of AI while mitigating its risks, thereby paving the way for its successful integration into sport science research, coaching, and the optimization of athletic performance.
Autonomous mobile robots operating in unknown terrain have to guide their drive decisions through local perception. Local mapping and traversability analysis are essential for safe rover operation and low-level locomotion. This thesis deals with the challenge of building a local, robot-centric map from ultra-short-baseline stereo imagery for height and traversability estimation.

Several grid-based, incremental mapping algorithms are compared and evaluated in a multi-size, multi-resolution framework. A new, covariance-based mapping update is introduced, which is capable of detecting sub-cell-size obstacles and abstracts the terrain of a cell as a first-order surface.

The presented mapping setup is capable of producing reliable terrain and traversability estimates under the conditions expected for the Cooperative Autonomous Distributed Robotic Exploration (CADRE) mission. The algorithmic and software-architecture design targets high reliability and efficiency to meet the tight constraints imposed by CADRE's small on-board embedded CPU.

Extensive evaluations are conducted to find possible edge-case scenarios in the operating envelope of the map and to confirm performance parameters. The research in this thesis targets the CADRE mission but is applicable to any form of mobile robotics that requires height and traversability mapping.
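The idea of a covariance-based cell update that abstracts terrain as a first-order surface can be sketched as follows (an illustrative sketch under assumed names, not the thesis's implementation): each cell accumulates point statistics incrementally with a Welford-style update, the plane normal is the eigenvector of the smallest covariance eigenvalue, and the residual variance along that normal serves as a roughness cue for sub-cell-size obstacles.

```python
import numpy as np

class CellStats:
    """Incremental per-cell point statistics for a first-order surface fit."""

    def __init__(self):
        self.n = 0
        self.mean = np.zeros(3)
        self.M2 = np.zeros((3, 3))  # running sum of deviation outer products

    def add(self, p):
        """Welford-style incremental mean/covariance update with point p."""
        self.n += 1
        d = p - self.mean
        self.mean += d / self.n
        self.M2 += np.outer(d, p - self.mean)

    def plane(self):
        """Unit normal of the fitted plane and roughness (residual variance)."""
        cov = self.M2 / max(self.n - 1, 1)
        w, v = np.linalg.eigh(cov)  # eigenvalues in ascending order
        return v[:, 0], w[0]        # normal = direction of least spread

cell = CellStats()
rng = np.random.default_rng(1)
for _ in range(200):  # simulate a near-flat terrain patch: z ~ 0
    x, y = rng.uniform(-0.5, 0.5, 2)
    cell.add(np.array([x, y, 0.001 * rng.normal()]))
normal, roughness = cell.plane()
# normal is close to +/- z and roughness is tiny for flat terrain;
# a protruding rock inside the cell would inflate the roughness value.
```

Because the update is constant-time and constant-memory per point, it fits the kind of tight embedded-CPU budget described above.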