Institut für Informatik
With the increasing adaptability and complexity of advisory artificial intelligence (AI)-based agents, the topics of explainable AI and human-centered AI are moving closer together. Variations in the explanation itself have been widely studied, with some contradictory results. These could be due to users’ individual differences, which have rarely been studied systematically with regard to their inhibiting or enabling effect on the fulfillment of explanation objectives (such as trust, understanding, or workload). This paper aims to shed light on the significance of human dimensions (gender, age, trust disposition, need for cognition, affinity for technology, self-efficacy, attitudes, and mind attribution) as well as their interplay with different explanation modes (no, simple, or complex explanation). Participants played the game Deal or No Deal while interacting with an AI-based agent that advised them on whether to accept or reject the deals offered to them. As expected, giving an explanation had a positive influence on the explanation objectives. However, it was above all the users’ individual characteristics that reinforced the fulfillment of the objectives. The strongest predictor of objective fulfillment was the degree to which human characteristics were attributed to the agent: the more human characteristics were attributed, the more trust was placed in the agent, the more likely its advice was to be accepted and understood, and the better important needs were satisfied during the interaction. The current work thus contributes to a better understanding of the design of explanations for AI-based agent systems that take individual characteristics into account and meet the demand for both explainable and human-centered agent systems.
There is great interest in affordable, precise and reliable metrology underwater:
Archaeologists want to document artifacts in situ with high detail.
In marine research, biologists require the tools to monitor coral growth and geologists need recordings to model sediment transport.
Furthermore, for offshore construction projects, maintenance, and inspection, millimeter-accurate measurements of defects and offshore structures are essential.
While the process of digitizing individual objects and complete sites on land is well understood and standard methods, such as Structure from Motion or terrestrial laser scanning, are regularly applied, precise underwater surveying with high resolution is still a complex and difficult task.
Applying optical scanning techniques in water is challenging due to reduced visibility caused by turbidity and light absorption.
However, optical underwater scanners provide significant advantages in terms of achievable resolution and accuracy compared to acoustic systems.
This thesis proposes an underwater laser scanning system and the algorithms for creating dense and accurate 3D scans in water.
It is based on laser triangulation and the main optical components are an underwater camera and a cross-line laser projector.
The prototype is configured with a motorized yaw axis for capturing scans from a tripod.
Alternatively, it is mounted to a moving platform for mobile mapping.
The main focus lies on the refractive calibration of the underwater camera and laser projector, the image processing, and the 3D reconstruction.
For highest accuracy, the refraction at the individual media interfaces must be taken into account.
This is addressed by an optimization-based calibration framework using a physical-geometric camera model derived from an analytical formulation of a ray-tracing projection model.
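To illustrate the core of such a ray-tracing projection model, the following minimal Python sketch refracts a camera ray at flat air-glass-water interfaces using Snell's law; the refractive indices, port normal, and ray direction are illustrative values, not the calibrated parameters of the thesis.

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at an interface with unit normal n
    (pointing toward the incoming ray) via Snell's law; returns None
    on total internal reflection."""
    d = d / np.linalg.norm(d)
    cos_i = -np.dot(n, d)
    eta = n1 / n2
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0.0:
        return None  # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# Trace a camera ray through a flat glass port into water
# (illustrative indices: air 1.00, glass 1.49, water 1.33).
normal = np.array([0.0, 0.0, -1.0])            # port normal, facing the camera
d_air = np.array([0.2, 0.0, 1.0])              # ray leaving the camera center
d_glass = refract(d_air, normal, 1.00, 1.49)   # bends toward the normal
d_water = refract(d_glass, normal, 1.49, 1.33)
print(d_water)                                 # refracted direction in water
```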
In addition to scanning underwater structures, this work presents the 3D acquisition of semi-submerged structures and the correction of refraction effects.
As in-situ calibration in water is complex and time-consuming, the challenge of transferring an in-air scanner calibration to water without re-calibration is investigated, as well as self-calibration techniques for structured light.
The system was successfully deployed in various configurations for both static scanning and mobile mapping.
An evaluation of the calibration and 3D reconstruction using reference objects and a comparison of free-form surfaces in clear water demonstrate the high accuracy potential in the range of one millimeter to less than one centimeter, depending on the measurement distance.
Mobile underwater mapping with motion compensation based on visual-inertial odometry is demonstrated using a new optical underwater scanner that employs fringe projection.
Continuous registration of individual scans allows the acquisition of 3D models from an underwater vehicle.
RGB images captured in parallel are used to create 3D point clouds of underwater scenes in full color.
3D maps are useful to the operator during the remote control of underwater vehicles and provide the building blocks to enable offshore inspection and surveying tasks.
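As a rough illustration of the registration step (not the thesis's actual pipeline), the sketch below transforms the points of individual scans into a common world frame using per-scan pose estimates; all poses and point coordinates are made up for the example.

```python
import numpy as np

def to_world(points, R, t):
    """Apply a rigid-body pose (rotation R, translation t) to Nx3 points."""
    return points @ R.T + t

# Two scans with illustrative poses from an odometry estimate.
scan1 = np.array([[1.0, 0.0, 2.0], [1.1, 0.2, 2.0]])
scan2 = np.array([[0.9, 0.1, 1.8]])
R1, t1 = np.eye(3), np.array([0.0, 0.0, 0.0])
theta = np.deg2rad(5)   # platform yawed 5 degrees between scans
R2 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
t2 = np.array([0.3, 0.0, 0.0])

# Registered model: all scans expressed in one world frame.
world = np.vstack([to_world(scan1, R1, t1), to_world(scan2, R2, t2)])
print(world.round(3))
```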
The advancing automation of the measurement technology will allow non-experts to use it, significantly reduce acquisition time, and increase accuracy, making underwater metrology more cost-effective.
Venus Research Station (2023)
Because of the extreme conditions in its atmosphere, Venus has been explored less than, for example, Mars. In the past, only a few probes have survived on the surface for very short periods and sent back data. The atmosphere is also far from fully explored. It could even be that building blocks of life can be found in the more moderate layers of the planet’s atmosphere. It can therefore be assumed that the planet Venus will increasingly become a focus of exploration. One way to collect significantly more data in situ is to build and operate an atmospheric research station over an extended period of time. Such a station could carry out measurements at different positions and times and thus significantly expand our knowledge of the planet. In this work, the design of a Venus Research Station floating within the Venusian atmosphere is presented, complemented by the design of deployable atmospheric Scouts. The design of these components is carried out at a conceptual level.
Going beyond the current trend of cooperating multiple small satellites, we arrive at fractionated satellite architectures, in which the subsystems of all satellites directly self-organize and cooperate among themselves to achieve a common mission goal. Although this amplifies the advantages of the initial trend, it also introduces new challenges, one of which is how to perform closed-loop control of a satellite over a network of subsystems. We present a two-fold approach to deal with the two main disturbances in a networked predictive formation control scenario: data losses in the network and failure of the controller. To deal with data loss, an event-based networked model predictive control approach is extended so that it can adapt to changing network conditions. The controller failure detection and compensation approach is tailored to a possibly large network of heterogeneous cooperating actuator and controller nodes. The self-organized redistribution of the control task uses an auction-based methodology; it scales well with the number of nodes and makes it possible to optimize for continued good control performance despite the controller switch. The stability and smooth control behavior of our approach during a self-organized controller failure compensation, while also being subject to data losses, were demonstrated on a hardware testbed using a formation control scenario as the mission.
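As a rough sketch of the auction idea (not the paper's actual cost function), candidate controller nodes could announce bids combining expected control cost, computational load, and link quality, with the task going to the lowest bid; all node names and weights below are illustrative.

```python
# Auction-based reassignment of a control task after a controller failure.
def bid(node):
    # Lower is better: expected control cost plus penalties for
    # computational load and for packet loss toward the actuators.
    return node["ctrl_cost"] + 0.5 * node["cpu_load"] + 2.0 * node["loss_rate"]

candidates = [
    {"id": "ctrl-2", "ctrl_cost": 1.8, "cpu_load": 0.4, "loss_rate": 0.05},
    {"id": "ctrl-3", "ctrl_cost": 1.5, "cpu_load": 0.9, "loss_rate": 0.02},
    {"id": "ctrl-4", "ctrl_cost": 2.2, "cpu_load": 0.1, "loss_rate": 0.10},
]

winner = min(candidates, key=bid)   # every candidate bids; best bid wins
print(f"control task reassigned to {winner['id']} (bid {bid(winner):.2f})")
```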
Besides the integration of renewable energies, electric vehicles pose an additional challenge to modern power grids. However, electric vehicles can also be a source of flexibility and contribute to power system stability. Today, the power system still relies heavily on conventional technologies to stay stable. In order to operate a future power system based only on renewable energies, we need to understand the flexibility potential of assets such as electric vehicles and become able to use that flexibility. In this paper, we analyzed how vast numbers of coordinated charging processes can be used to provide frequency containment reserve power, one of the most important ancillary services for system stability. To this end, we used an extensive simulation model of a virtual power plant comprising millions of electric vehicles. The model considers not only technical components but also the stochastic behavior of electric vehicle drivers, based on real data. Our results show that, in 2030, electric vehicles have the potential to serve the whole frequency containment reserve market in Germany. We differentiate between unidirectional and bidirectional chargers. Bidirectional chargers have a larger potential but also cause unwanted battery degradation, whereas unidirectional chargers are more constrained in terms of flexibility but do not lead to additional battery degradation. We conclude that a mix of both can combine the advantages of both worlds. In this way, average private cars can provide the service without any notable additional battery degradation and achieve yearly earnings between EUR 200 and EUR 500, depending on the volatile market prices. Commercial vehicles have an even higher potential, as the earnings increase with vehicle utilization and consumption.
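To illustrate the mechanism (not the paper's simulation model), the sketch below adjusts a single charger's set-point proportionally to the grid frequency deviation; full activation at +/-0.2 Hz follows the European FCR convention, while all power values are illustrative.

```python
# Proportional (droop) response for frequency containment reserve
# at one EV charger.
def fcr_setpoint(f_hz, p_base_kw, p_bid_kw, f_nom=50.0, f_full=0.2,
                 bidirectional=False):
    activation = max(-1.0, min(1.0, (f_hz - f_nom) / f_full))
    p = p_base_kw + activation * p_bid_kw    # under-frequency -> charge less
    return p if bidirectional else max(0.0, p)  # unidirectional: no feed-in

print(fcr_setpoint(49.90, p_base_kw=7.0, p_bid_kw=4.0))                      # 5.0 kW
print(fcr_setpoint(49.80, p_base_kw=2.0, p_bid_kw=4.0, bidirectional=True))  # -2.0 kW (feed-in)
```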
An approach to aerodynamically optimizing cycling posture and reducing drag in an Ironman (IM) event was elaborated. To this end, four commonly used cycling positions were investigated and simulated for a flow velocity of 10 m/s and yaw angles of 0–20° using the OpenFoam-based Nabla Flow CFD simulation software. A cyclist was scanned using an iPhone 12, and the meshing software Blender was used. Significant differences were observed by changing and optimizing the cyclist’s posture: the aerodynamic drag coefficient (CdA) varies by more than a factor of 2, ranging from 0.214 to 0.450. Within a position, the CdA tends to increase slightly at yaw angles of 5–10° and decrease at higher yaw angles compared to a straight head wind, except for the time trial (TT) position. The results were applied to the IM Hawaii bike course (180 km), assuming a constant power output of 300 W. Including the wind distributions, two different bike split models for performance prediction were applied, and a significant time saving of roughly 1 h was found. Finally, a machine learning approach to deduce 3D triangulation for specific body shapes from 2D pictures was tested.
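A back-of-envelope check (not the paper's bike split models) reproduces the order of magnitude of that saving: if aerodynamic drag dominates, P = 0.5 * rho * CdA * v^3, so the ride time over 180 km at a constant 300 W can be compared for the two reported CdA extremes. Rolling resistance, drivetrain losses, wind, and elevation are neglected, and the air density is an illustrative sea-level value.

```python
rho = 1.2          # air density in kg/m^3 (illustrative)
distance = 180e3   # IM bike course length in m
power = 300.0      # rider power in W

for cda in (0.214, 0.450):                    # best and worst CdA reported above
    v = (2 * power / (rho * cda)) ** (1 / 3)  # speed from P = 0.5*rho*CdA*v^3
    t_h = distance / v / 3600                 # ride time in hours
    print(f"CdA={cda:.3f}: v={v:4.1f} m/s, t={t_h:.2f} h")
# The difference is roughly one hour, consistent with the reported saving.
```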
Snow is a vital environmental parameter and dynamically responsive to climate change, particularly in mountainous regions. Snow cover can be monitored at variable spatial scales using Earth Observation (EO) data. Long-lasting remote sensing missions enable the generation of multi-decadal time series and thus the detection of long-term trends. However, there have been few attempts to use these to model future snow cover dynamics. In this study, we, therefore, explore the potential of such time series to forecast the Snow Line Elevation (SLE) in the European Alps. We generate monthly SLE time series from the entire Landsat archive (1985–2021) in 43 Alpine catchments. Positive long-term SLE change rates are detected, with the highest rates (5–8 m/y) in the Western and Central Alps. We utilize this SLE dataset to implement and evaluate seven uni-variate time series modeling and forecasting approaches. The best results were achieved by Random Forests, with a Nash–Sutcliffe efficiency (NSE) of 0.79 and a Mean Absolute Error (MAE) of 258 m, Telescope (0.76, 268 m), and seasonal ARIMA (0.75, 270 m). Since the model performance varies strongly with the input data, we developed a combined forecast based on the best-performing methods in each catchment. This approach was then used to forecast the SLE for the years 2022–2029. In the majority of the catchments, the shift of the forecast median SLE level retained the sign of the long-term trend. In cases where a deviating SLE dynamic is forecast, a discussion based on the unique properties of the catchment and past SLE dynamics is required. In the future, we expect major improvements in our SLE forecasting efforts by including external predictor variables in a multi-variate modeling approach.
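The sketch below shows the general technique behind the best-performing model above, a Random Forest forecasting a uni-variate series from its own lagged values; the synthetic seasonal series, lag count, and hyperparameters are illustrative and not the study's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic monthly SLE-like series (444 months ~ 1985-2021): trend + season.
rng = np.random.default_rng(0)
t = np.arange(444)
sle = 2400 + 0.4 * t + 500 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 60, t.size)

# Supervised reframing: predict each value from the previous 12 months.
lags = 12
X = np.column_stack([sle[i:len(sle) - lags + i] for i in range(lags)])
y = sle[lags:]
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Iterative multi-step forecast: feed predictions back in as inputs.
history = list(sle[-lags:])
forecast = []
for _ in range(12 * 8):   # forecast 2022-2029, month by month
    nxt = model.predict([history[-lags:]])[0]
    forecast.append(nxt)
    history.append(nxt)
print(np.round(forecast[:12], 1))
```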
Service orchestration requires enormous attention and remains a struggle nowadays. Virtualization provides a base level of abstraction that makes services deployable on a wide range of infrastructures. With container virtualization, however, the trend of migrating applications to the micro-service level so that they can be executed in Fog and Edge Computing environments rapidly increases manageability and maintenance efforts. Similarly, network virtualization adds the effort of calibrating IP flows for Software-Defined Networks and eventually routing them by means of Network Function Virtualization. Nevertheless, there are concepts such as MAPE-K to support micro-service distribution in next-generation cloud and network environments. We want to explore how service distribution can be improved by adopting machine learning concepts for infrastructure or service changes. To this end, we show how federated machine learning can be integrated into a cloud-to-fog continuum without burdening single nodes.
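A minimal sketch of the federated averaging step that allows such nodes to improve a shared model without uploading raw monitoring data (the standard FedAvg aggregation, not this paper's specific integration); node roles, weight vectors, and sample counts are illustrative.

```python
import numpy as np

def fed_avg(updates):
    """Weighted average of local model weights (FedAvg aggregation)."""
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

# Each node trains locally and reports (weights, number of local samples).
local_updates = [
    (np.array([0.9, -0.2, 0.4]), 120),   # edge node
    (np.array([1.1, -0.1, 0.3]), 300),   # fog node
    (np.array([1.0, -0.3, 0.5]), 580),   # cloud node
]
global_weights = fed_avg(local_updates)  # redistributed to all nodes
print(global_weights)
```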
In network research, reproducibility of experiments is not always easy to achieve. Infrastructures are cumbersome to set up or are unavailable due to vendor-specific devices. Emulators try to overcome these issues to a certain extent and are available in different service models. Unfortunately, using emulators requires time-consuming effort and a deep understanding of their functionality. First, we analyze to what extent currently available open-source emulators support network configurations and how user-friendly they are. With these insights, we describe how an easy-to-use emulator can be implemented and run as a Network Emulator as a Service (NEaaS). Virtualization plays a major role in deploying such a NEaaS, which we base on Kathará.
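As a hedged sketch of how a NEaaS backend could generate an emulated topology, the snippet below writes a minimal two-host Kathará lab; the lab.conf lines follow Kathará's documented machine[interface]=collision_domain convention, while the file layout and machine names are illustrative.

```python
from pathlib import Path

lab = Path("lab")
lab.mkdir(exist_ok=True)

# lab.conf: attach interface 0 of both machines to collision domain "A".
(lab / "lab.conf").write_text(
    'pc1[0]="A"\n'
    'pc2[0]="A"\n'
)
# Startup scripts configure the interfaces when the devices boot.
(lab / "pc1.startup").write_text("ip addr add 10.0.0.1/24 dev eth0\n")
(lab / "pc2.startup").write_text("ip addr add 10.0.0.2/24 dev eth0\n")
# The lab can then be launched with "kathara lstart" from the lab directory.
```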
This paper discusses the problem of finding multiple shortest disjoint paths in modern communication networks, which is essential for ultra-reliable and time-sensitive applications. Dijkstra’s algorithm has been a popular solution for the shortest path problem, but using it repeatedly to find multiple paths does not scale. The Multiple Disjoint Path Algorithm (MDPAlg), published in 2021, proposes the use of a single full graph to construct multiple disjoint paths. This paper proposes modifications to the algorithm that add a delay constraint, which is important in time-sensitive applications. Different delay-constrained least-cost routing algorithms are compared comprehensively to evaluate the benefits of the adapted MDPAlg. Fault tolerance, and thereby reliability, is ensured by generating multiple link-disjoint paths from source to destination.
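For orientation, the sketch below implements the naive baseline the paper improves on: finding k link-disjoint paths by running Dijkstra repeatedly and deleting the links of each path found. It is not MDPAlg and omits the delay constraint; the example graph and weights are illustrative.

```python
import heapq

def dijkstra(adj, src, dst):
    """Shortest path by cost in an adjacency-list graph; None if unreachable."""
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def disjoint_paths(adj, src, dst, k):
    """Up to k link-disjoint paths via repeated Dijkstra with edge removal."""
    paths = []
    for _ in range(k):
        p = dijkstra(adj, src, dst)
        if p is None:
            break
        paths.append(p)
        for u, v in zip(p, p[1:]):   # remove used links in both directions
            adj[u] = [(x, w) for x, w in adj[u] if x != v]
            adj[v] = [(x, w) for x, w in adj.get(v, []) if x != u]
    return paths

adj = {   # small undirected example network
    "s": [("a", 1), ("b", 2)], "a": [("s", 1), ("t", 2), ("b", 1)],
    "b": [("s", 2), ("a", 1), ("t", 2)], "t": [("a", 2), ("b", 2)],
}
print(disjoint_paths(adj, "s", "t", 2))   # [['s','a','t'], ['s','b','t']]
```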