000 Computer science, information science, general works
In this thesis, a model of the dynamics during the landing phase of an interplanetary lander mission is developed in a 3-DOF approach, focusing on landing by propulsive means. Based on this model, a MATLAB simulation was developed to enable an estimation of the performance, and especially the required fuel amount, of a propulsive landing system on Venus. This landing system is modeled so that it can control its descent using thrusters and perform a stable landing at a specified target location. Using this simulation, the planetary environments of Mars and Venus can be reproduced, and the impact of wind, atmospheric density, and gravity, as well as of using different thrusters, on the fuel consumption and landing abilities of the simulated landing system can be investigated. The comparability of these results with the behavior of real landing systems is validated in this thesis by simulating the Powered Descent Phase of the Mars 2020 mission and comparing the results to the data the Mars 2020 descent stage collected during this phase of its landing. Further, based on the simulation, the minimal fuel amount necessary for a successful landing on Venus has been determined for different scenarios. The simulation and these results contribute to the research of this thesis's supervisor, Clemens Riegler, M.Sc., who will use them to compare different types of landing systems in the context of his doctoral thesis.
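The vertical channel of such a 3-DOF propulsive-descent model can be sketched as follows. This is a minimal illustration, not the thesis's MATLAB simulation: all constants (gravity, density, drag area, specific impulse) and the constant-thrust profile are assumptions chosen for the example.

```python
# Minimal 1-D sketch of a propulsive descent on Venus: the lander falls
# under gravity and drag and brakes with a constant upward thrust.
# All numbers are illustrative assumptions, not mission or thesis data.

G_VENUS = 8.87           # m/s^2, Venus surface gravity
RHO = 65.0               # kg/m^3, rough atmospheric density near the surface
CD_AREA = 3.0            # drag coefficient times reference area, m^2 (assumed)
ISP, G0 = 300.0, 9.81    # assumed specific impulse (s) and standard gravity

def simulate_descent(mass, altitude, velocity, thrust, dt=0.05, max_steps=100_000):
    """Integrate the vertical dynamics until touchdown (v < 0 means downward).

    Thrust acts upward, drag opposes the current motion, and propellant flow
    follows mdot = F / (Isp * g0). Returns (touchdown_speed, fuel_used).
    """
    m, h, v, fuel = mass, altitude, velocity, 0.0
    for _ in range(max_steps):
        if h <= 0.0:
            break
        drag = -0.5 * RHO * CD_AREA * v * abs(v)   # always opposes motion
        a = -G_VENUS + (thrust + drag) / m
        mdot = thrust / (ISP * G0)                 # propellant mass flow
        v += a * dt
        h += v * dt
        m -= mdot * dt
        fuel += mdot * dt
    return abs(v), fuel
```

With a dense Venus-like atmosphere the lander quickly settles near the terminal velocity set by the thrust-to-weight ratio, which is exactly the kind of fuel-versus-touchdown-speed trade-off such a simulation is built to explore.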
Here, we performed a non-systematic analysis of the strengths, weaknesses, opportunities, and threats (SWOT) associated with the application of artificial intelligence (AI) to sports research, coaching, and the optimization of athletic performance. The strengths of AI in this field include the automation of time-consuming tasks, the processing and analysis of large amounts of data, and the recognition of complex patterns and relationships. However, it is also essential to be aware of the weaknesses associated with integrating AI into this field. For instance, it is imperative that the data used to train an AI system be diverse, complete, and as unbiased as possible with respect to factors such as the gender, performance level, and experience of an athlete. Other challenges include limited adaptability to novel situations and the cost and other resources required. Opportunities include the possibility of monitoring athletes both long-term and in real time, the potential discovery of novel indicators of performance, and the prediction of risk for future injury. Leveraging these opportunities can transform athletic development and the practice of sports science in general. Threats include over-dependence on technology, reduced involvement of human expertise, data-privacy risks, breaches of data integrity, manipulation of data, and resistance to adopting such new technology. Understanding and addressing these SWOT factors is essential for maximizing the benefits of AI while mitigating its risks, thereby paving the way for its successful integration into sports science research, coaching, and the optimization of athletic performance.
Autonomous mobile robots operating in unknown terrain have to guide their drive decisions through local perception. Local mapping and traversability analysis are essential for safe rover operation and low-level locomotion. This thesis deals with the challenge of building a local, robot-centric map from ultra-short-baseline stereo imagery for height and traversability estimation.

Several grid-based, incremental mapping algorithms are compared and evaluated in a multi-size, multi-resolution framework. A new, covariance-based mapping update is introduced, which is capable of detecting sub-cell-size obstacles and abstracts the terrain of one cell as a first-order surface.

The presented mapping setup is capable of producing reliable terrain and traversability estimates under the conditions expected for the Cooperative Autonomous Distributed Robotic Exploration (CADRE) mission. The algorithmic and software architecture design targets high reliability and efficiency to meet the tight constraints implied by CADRE's small on-board embedded CPU.

Extensive evaluations are conducted to find possible edge-case scenarios in the operating envelope of the map and to confirm performance parameters. The research in this thesis targets the CADRE mission but is applicable to any form of mobile robotics that requires height and traversability mapping.
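An incremental, covariance-based cell update of the kind described above can be sketched as a per-cell 1-D Kalman fusion. This is an illustrative reconstruction under assumptions, not the thesis's algorithm: the class name, the 3-sigma gate, and the obstacle flag are all made up for the example.

```python
# Hedged sketch of a covariance-based height-map cell update: each grid
# cell fuses stereo height measurements like a 1-D Kalman filter and flags
# a possible sub-cell-size obstacle when a new measurement deviates
# strongly from the fused estimate. Names and thresholds are assumptions.

class MapCell:
    def __init__(self):
        self.height = 0.0      # fused height estimate (m)
        self.var = None        # variance of the estimate; None = empty cell
        self.obstacle = False  # set when an innovation exceeds the gate

    def update(self, z, var_z, gate=3.0):
        """Fuse measurement z with variance var_z into this cell."""
        if self.var is None:                 # first observation initializes
            self.height, self.var = z, var_z
            return
        innov = z - self.height
        s = self.var + var_z                 # innovation variance
        if innov * innov > gate * gate * s:  # chi-square-style 3-sigma gate
            self.obstacle = True             # sub-cell-size height jump
        k = self.var / s                     # Kalman gain
        self.height += k * innov
        self.var *= (1.0 - k)
```

The variance shrinks with every consistent observation, so the gate tightens as the cell becomes well observed, which is what makes small, sub-cell obstacles detectable at all.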
Wireless communication networks are already an integral part of both the private and industrial sectors and are successfully replacing existing wired networks. They enable the development of novel applications and offer greater flexibility and efficiency. Although some efforts are already underway in the aerospace sector to deploy wireless communication networks on board spacecraft, none of these projects has yet succeeded in replacing the hard-wired state-of-the-art architecture for intra-spacecraft communication. The advantages are evident: reducing the wiring harness saves time, mass, and costs, and makes the whole integration process more flexible. It also allows for easier scaling when interconnecting different systems.
This dissertation deals with the design and implementation of a wireless network architecture to enhance intra-spacecraft communications, breaking with the state-of-the-art standards that have existed in the space industry for decades. The potential and benefits of this novel wireless network architecture are evaluated, and an innovative design using ultra-wideband technology is presented. It is combined with a Medium Access Control (MAC) layer tailored for low-latency and deterministic networks that supports even mission-critical applications. As demonstrated by the Wireless Compose experiment on the International Space Station (ISS), this technology is not limited to communications but also enables novel positioning applications.
To address the technological challenges, extensive studies were carried out on electromagnetic compatibility, space radiation, and data robustness. The architecture was evaluated from various perspectives and successfully demonstrated in space.
Overall, this research highlights how a wireless network can improve and potentially replace existing state-of-the-art communication systems on board spacecraft in future missions, and it will help to adapt and ultimately accelerate the implementation of wireless networks in space systems.
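One way a MAC layer achieves determinism is a fixed TDMA superframe, where every node owns predetermined slots, so the worst-case channel-access latency is bounded by construction. The sketch below is a generic illustration of that idea, not the dissertation's actual protocol; the round-robin assignment and slot length are assumptions.

```python
# Illustrative TDMA sketch for a deterministic, low-latency MAC: each node
# is assigned fixed slots in a repeating superframe, and the worst-case
# wait for a transmit opportunity can be computed exactly.
# This is a generic example, not the protocol designed in the dissertation.

def build_superframe(nodes, slots_per_frame):
    """Round-robin slot assignment: slot i belongs to nodes[i % len(nodes)]."""
    return [nodes[i % len(nodes)] for i in range(slots_per_frame)]

def worst_case_latency(schedule, node, slot_ms):
    """Longest wait (ms) until `node`'s next owned slot, over all positions."""
    n = len(schedule)
    owned = [i for i, owner in enumerate(schedule) if owner == node]
    worst = 0
    for start in range(n):
        wait = min((s - start) % n for s in owned)
        worst = max(worst, wait)
    return worst * slot_ms
```

Because the schedule is static, this bound holds regardless of traffic load, which is the property that makes such a MAC suitable for mission-critical intra-spacecraft links.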
Graphs provide a key means to model relationships between entities. They consist of vertices representing the entities, and edges representing relationships between pairs of entities. To help people grasp the structure of a graph, it is almost inevitable to visualize it. We call such a visualization a graph drawing. Moreover, we have a straight-line graph drawing if each vertex is represented as a point (or a small geometric object, e.g., a rectangle) and each edge is represented as a line segment between its two vertices. A polyline is a very simple straight-line graph drawing, where the vertices form a sequence according to which they are connected by edges. An example of a polyline in practice is a GPS trajectory; the underlying road network, in turn, can be modeled as a graph.

This book addresses problems that arise when working with straight-line graph drawings and polylines. In particular, we study algorithms for recognizing certain graphs representable with line segments, for generating straight-line graph drawings, and for abstracting polylines.

In the first part, we first examine how, and how quickly, we can decide whether a given graph is a stick graph, that is, whether its vertices can be represented as vertical and horizontal line segments on a diagonal line that intersect if and only if there is an edge between them. We then consider the visual complexity of graphs. Specifically, we investigate, for certain classes of graphs, how many line segments are necessary for any straight-line graph drawing, and whether three (or more) different slopes of the line segments are sufficient to draw all edges. Last, we study how to assign (ordered) colors to the vertices of a graph with both directed and undirected edges such that no neighboring vertices get the same color and colors are ascending along directed edges. Here, the special property of the considered graphs is that the vertices can be represented as intervals that overlap if and only if there is an edge between them.

The latter problem is motivated by an application in the automated drawing of cable plans with vertical and horizontal line segments, which we cover in the second part. We describe an algorithm that gets the abstract description of a cable plan as input and generates a drawing that takes into account the special properties of these cable plans, such as plugs and groups of wires. We then experimentally evaluate the quality of the resulting drawings.

In the third part, we study the problem of abstracting (or simplifying) a single polyline and a bundle of polylines. Here, the objective is to remove as many vertices as possible from the given polyline(s) while keeping each resulting polyline sufficiently similar to its original course (according to a given similarity measure).
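As a concrete instance of this abstraction task, the classic Douglas-Peucker algorithm removes vertices while keeping the simplified polyline within a distance `eps` of the original course. This is one standard choice of similarity measure offered here for illustration; the book studies the problem more generally.

```python
# Douglas-Peucker polyline simplification: keep the farthest vertex from
# the chord if it deviates more than eps, and recurse on both halves.
import math

def _point_segment_dist(p, a, b):
    """Euclidean distance from point p to segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def simplify(points, eps):
    """Return a subsequence of `points` within distance eps of the original."""
    if len(points) < 3:
        return list(points)
    dists = [_point_segment_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1   # farthest vertex
    if dists[i - 1] <= eps:
        return [points[0], points[-1]]                      # chord is close enough
    left = simplify(points[: i + 1], eps)
    right = simplify(points[i:], eps)
    return left[:-1] + right                                # join, dropping duplicate
```

For a GPS trajectory, small `eps` removes measurement jitter while large `eps` collapses the track toward its endpoints, which is exactly the trade-off a similarity measure controls.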
Introduction.
Mobile health (mHealth) integrates mobile devices into healthcare, enabling remote monitoring, data collection, and personalized interventions. Machine Learning (ML), a subfield of Artificial Intelligence (AI), can use mHealth data to confirm or extend domain knowledge by finding associations within the data, with the goal of improving healthcare decisions. In this work, two data collection techniques were used to feed mHealth data into ML systems: Mobile Crowdsensing (MCS), a collaborative data-gathering approach, and Ecological Momentary Assessments (EMA), which capture real-time individual experiences within the individual's common environments using questionnaires and sensors. We collected EMA and MCS data on tinnitus and COVID-19; about 15 % of the world's population suffers from tinnitus.
Materials & Methods.
This thesis investigates the challenges of ML systems when using MCS and EMA data. It asks: How can ML confirm or broaden domain knowledge? Domain knowledge refers to expertise and understanding in a specific field, gained through experience and education. Are ML systems always superior to simple heuristics, and how can one achieve explainable AI (XAI) in the presence of mHealth data? An XAI method enables a human to understand why a model makes certain predictions. Finally, which guidelines are beneficial for the use of ML within the mHealth domain? In tinnitus research, ML discerns gender-, temperature-, and season-related variations among patients. In the realm of COVID-19, we collaboratively designed a COVID-19 check app for public education, incorporating EMA data to offer informative feedback on COVID-19-related matters. This thesis uses seven EMA datasets with more than 250,000 assessments. Our analyses revealed a set of challenges: app-user over-representation, time gaps, identity ambiguity, and operating-system-specific rounding errors, among others. Our systematic review of 450 medical studies assessed the prior utilization of XAI methods.
Results.
ML models predict gender and tinnitus perception, validating gender-linked tinnitus disparities. Using season and temperature to predict tinnitus shows the association of these variables with tinnitus. Multiple assessments of one app user can constitute a group; neglecting these groups in datasets leads to model overfitting. In select instances, heuristics outperform ML models, highlighting the need to consult domain experts to unveil hidden groups or find simple heuristics.
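The grouping pitfall can be made concrete with a small sketch: when several assessments come from the same app user, a random split leaks that user across the train and test sets, inflating performance estimates. A group-aware split keeps each user entirely on one side. The function below is an illustrative assumption, not code from the thesis; the split strategy and ratio are made up for the example.

```python
# Hedged sketch of a group-aware train/test split: no user's assessments
# appear in both sets, so the evaluation cannot memorize individuals.
# The deterministic "last users go to test" rule is for illustration only.

def group_split(records, test_frac=0.3):
    """Split (user_id, assessment) records so no user spans both sets."""
    users = []
    for uid, _ in records:
        if uid not in users:              # preserve first-seen order
            users.append(uid)
    n_test = max(1, int(len(users) * test_frac))
    test_users = set(users[-n_test:])     # last-seen users form the test set
    train = [r for r in records if r[0] not in test_users]
    test = [r for r in records if r[0] in test_users]
    return train, test
```

Splitting by user rather than by row is also what exposes over-represented app users: a heavy user no longer dominates both sides of the evaluation at once.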
Conclusion.
This thesis suggests guidelines for mHealth-related data analyses and improves estimates of ML performance. Close communication with medical domain experts is essential to identify latent user subsets and the incremental benefits of ML.
The ongoing and evolving usage of networks presents two critical challenges for current and future networks that require attention: (1) the task of effectively managing the vast and continually increasing data traffic, and (2) the need to address the substantial number of end devices resulting from the rapid adoption of the Internet of Things. Besides these challenges, there is a pressing need for reduced energy consumption, more efficient resource usage, and streamlined processes, all without losing service quality. We comprehensively address these efforts by tackling the monitoring and quality assessment of streaming applications, a leading contributor to total Internet traffic, and by conducting an exhaustive analysis of network performance within a Long Range Wide Area Network (LoRaWAN), one of the rapidly emerging LPWAN solutions.
Deep Learning (DL) models are trained on a downstream task by feeding (potentially preprocessed) input data through a trainable Neural Network (NN) and updating its parameters to minimize the loss function between the predicted and the desired output. While this general framework has mainly remained unchanged over the years, the architectures of the trainable models have greatly evolved. Even though it is undoubtedly important to choose the right architecture, we argue that it is also beneficial to develop methods that address other components of the training process. We hypothesize that utilizing domain knowledge can be helpful to improve DL models in terms of performance and/or efficiency. Such model-agnostic methods can be applied to any existing or future architecture. Furthermore, the black box nature of DL models motivates the development of techniques to understand their inner workings. Considering the rapid advancement of DL architectures, it is again crucial to develop model-agnostic methods.
In this thesis, we explore six principles that incorporate domain knowledge to understand or improve models. They are applied either on the input or output side of the trainable model. Each principle is applied to at least two DL tasks, leading to task-specific implementations. To understand DL models, we propose to use Generated Input Data coming from a controllable generation process requiring knowledge about the data properties. This way, we can understand the model’s behavior by analyzing how it changes when one specific high-level input feature changes in the generated data. On the output side, Gradient-Based Attribution methods create a gradient at the end of the NN and then propagate it back to the input, indicating which low-level input features have a large influence on the model’s prediction. The resulting input features can be interpreted by humans using domain knowledge.
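The gradient-based attribution idea can be illustrated on a toy scale: for a differentiable model, the magnitude of d(output)/d(input_i) indicates how influential feature i is. The sketch below estimates that gradient by central finite differences on a made-up model; real attribution methods backpropagate through the NN instead, and both the model and the feature names are assumptions for the example.

```python
# Toy illustration of gradient-based attribution: estimate the gradient of
# a model's output w.r.t. each input feature and read large magnitudes as
# influential features. The "model" is an assumed stand-in for a real NN.

def model(x):
    # toy model: output depends strongly on x[0], weakly on x[2]
    return 3.0 * x[0] ** 2 + 0.5 * x[1] + 0.01 * x[2]

def attribution(f, x, h=1e-5):
    """Central-difference estimate of the gradient of f at input x."""
    grads = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grads.append((f(xp) - f(xm)) / (2 * h))
    return grads
```

Interpreting such gradients still requires domain knowledge, since a large gradient only says the model is locally sensitive to a feature, not that the feature is causally meaningful.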
To improve the trainable model in terms of downstream performance, data and compute efficiency, or robustness to unwanted features, we explore principles that each address one of the training components besides the trainable model. Input Masking and Augmentation directly modifies the training input data, integrating knowledge about the data and its impact on the model’s output. We also explore the use of Feature Extraction using Pretrained Multimodal Models which can be seen as a beneficial preprocessing step to extract useful features. When no training data is available for the downstream task, using such features and domain knowledge expressed in other modalities can result in a Zero-Shot Learning (ZSL) setting, completely eliminating the trainable model. The Weak Label Generation principle produces new desired outputs using knowledge about the labels, giving either a good pretraining or even exclusive training dataset to solve the downstream task. Finally, improving and choosing the right Loss Function is another principle we explore in this thesis. Here, we enrich existing loss functions with knowledge about label interactions or utilize and combine multiple task-specific loss functions in a multitask setting.
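The loss-function principle in a multitask setting can be sketched as a weighted combination of per-task losses. The task names, loss choices, and weights below are illustrative assumptions, not the thesis's configuration.

```python
# Minimal sketch of combining task-specific loss functions into one
# weighted multitask objective. Task names and weights are assumptions.

def mse(pred, target):
    """Mean squared error over paired values."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def mae(pred, target):
    """Mean absolute error over paired values."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def multitask_loss(outputs, targets, weights):
    """Weighted sum of per-task losses; dicts map task name -> values."""
    task_losses = {"regression": mse, "auxiliary": mae}
    return sum(w * task_losses[task](outputs[task], targets[task])
               for task, w in weights.items())
```

Choosing the weights is itself a place where domain knowledge enters: a weight encodes how much an auxiliary task should shape the shared representation.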
We apply the principles to classification, regression, and representation tasks as well as to image and text modalities. We propose, apply, and evaluate existing and novel methods to understand and improve the model. Overall, this thesis introduces and evaluates methods that complement the development and choice of DL model architectures.
Robots have always held a fascination for humans. It is their similarity to us that makes technical systems endowed with higher intelligence appear as fascinating as they are frightening. The thought of creating technical creatures that can hold their own against us exalted human beings, or even surpass us, will not let us go. Yet the recognition of the benefits such beings could bring us in every conceivable area quickly gives way to skepticism about the disempowerment and devaluation of the human being. Even today, although research in many areas is still in its infancy, we come into contact in numerous spheres of life with technical systems that exert a strong effect on us and raise many fundamental questions.

This work is devoted to the ethical dimension of autonomous (care) systems and, to this end, discusses concrete application scenarios. It is concerned not with general ethical questions but specifically with the compatibility of autonomous technical systems with the human dignity of their users. The influence of autonomous technical innovations on humanity's self-conception (the image of the human being) is also part of the work.

The principle of human dignity serves as the standard for modern technical developments because of its enormous importance for the law and for the underlying, general conception of the human being. In a society oriented toward a humanistic worldview, human dignity stands above all else as the highest value, to which moral and legal developments must do justice. Modern developments must therefore always be examined for their compatibility with human dignity. This makes it possible to determine whether regulation is needed and how individual regulations should be designed.

At the same time, however, human dignity must itself do justice to societal developments. Accordingly, the Federal Constitutional Court regards it as a principle that faces current challenges and compels societal discourse.

This work is intended to contribute to the societal debate, already under way, about technological progress and, specifically, about the problems that accompany the increasing autonomy of technical systems.