Institut für Informatik
Besides the integration of renewable energies, electric vehicles pose an additional challenge to modern power grids. However, electric vehicles can also be a source of flexibility and contribute to power system stability. Today, the power system still relies heavily on conventional technologies to stay stable. In order to operate a future power system based solely on renewable energies, we need to understand the flexibility potential of assets such as electric vehicles and become able to use that flexibility. In this paper, we analyze how vast numbers of coordinated charging processes can be used to provide frequency containment reserve power, one of the most important ancillary services for system stability. To this end, we use an extensive simulation model of a virtual power plant comprising millions of electric vehicles. The model considers not only technical components but also the stochastic behavior of electric vehicle drivers, based on real data. Our results show that, in 2030, electric vehicles have the potential to serve the whole frequency containment reserve market in Germany. We differentiate between unidirectional and bidirectional chargers. Bidirectional chargers have a larger potential but also cause unwanted battery degradation. Unidirectional chargers are more constrained in terms of flexibility but do not lead to additional battery degradation. We conclude that a mix of both combines the advantages of both worlds. In this way, an average private car can provide the service without any notable additional battery degradation and achieve yearly earnings between EUR 200 and EUR 500, depending on the volatile market prices. Commercial vehicles have an even higher potential, as earnings increase with vehicle utilization and consumption.
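The order of magnitude of the reported per-vehicle earnings can be sanity-checked with a back-of-the-envelope calculation. The sketch below is a minimal illustration, not the paper's simulation model; the FCR capacity price, the prequalified power per vehicle, and the availability factor are assumed placeholder values.

```python
# Rough estimate of yearly FCR earnings for one pooled EV.
# All parameter values are illustrative assumptions, not taken from the paper.

fcr_price_eur_per_mw_h = 15.0   # assumed average FCR capacity price (EUR per MW per hour)
prequalified_kw = 4.0           # assumed symmetric FCR power offered per vehicle
availability = 0.5              # assumed fraction of the year the vehicle is plugged in and marketable

hours_per_year = 8760
earnings = (prequalified_kw / 1000.0) * fcr_price_eur_per_mw_h * hours_per_year * availability
print(f"Estimated yearly FCR earnings per vehicle: EUR {earnings:.0f}")
```

With these assumed figures the estimate lands around EUR 260, i.e. inside the EUR 200 to EUR 500 range reported above; the actual value depends strongly on market prices and plug-in behavior.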
An approach to aerodynamically optimizing cycling posture and reducing drag in an Ironman (IM) event was elaborated. To this end, four commonly used cycling positions were investigated and simulated for a flow velocity of 10 m/s and yaw angles of 0–20° using the OpenFOAM-based Nabla Flow CFD simulation software. A cyclist was scanned using an iPhone 12, and the meshing software Blender was used. Significant differences were observed by changing and optimizing the cyclist's posture. The aerodynamic drag area (CdA) varies by more than a factor of 2, ranging from 0.214 to 0.450. Within a position, the CdA tends to increase slightly at yaw angles of 5–10° and to decrease at higher yaw angles compared to a straight headwind, except for the time trial (TT) position. The results were applied to the IM Hawaii bike course (180 km), assuming a constant power output of 300 W. Including the wind distributions, two different bike split models for performance prediction were applied. A significant time saving of roughly 1 h was found. Finally, a machine learning approach to deduce 3D triangulations of specific body shapes from 2D pictures was tested.
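The roughly one-hour saving can be cross-checked with a simplified, aero-only power model. The sketch below ignores rolling resistance, drivetrain losses, and the wind distribution, and assumes a round value for air density, so it is only an order-of-magnitude check, not the bike split models used in the study.

```python
# Simplified cross-check of the ~1 h time saving: aerodynamic power only,
# P = 0.5 * rho * CdA * v^3, ignoring rolling resistance, drivetrain losses, and wind.

rho = 1.2          # assumed air density in kg/m^3
power_w = 300.0    # constant power output from the abstract
course_km = 180.0  # IM Hawaii bike course length

def speed_for_power(cda: float) -> float:
    """Riding speed (m/s) at which aerodynamic drag alone consumes the given power."""
    return (power_w / (0.5 * rho * cda)) ** (1.0 / 3.0)

for cda in (0.450, 0.214):
    v = speed_for_power(cda)
    t_h = course_km * 1000.0 / v / 3600.0
    print(f"CdA = {cda:.3f}: v = {v:.2f} m/s, bike split = {t_h:.2f} h")
```

Even under these simplifications, the two extreme CdA values differ by roughly one hour over 180 km, which is consistent with the estimate above.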
Snow is a vital environmental parameter and dynamically responsive to climate change, particularly in mountainous regions. Snow cover can be monitored at variable spatial scales using Earth Observation (EO) data. Long-lasting remote sensing missions enable the generation of multi-decadal time series and thus the detection of long-term trends. However, there have been few attempts to use these to model future snow cover dynamics. In this study, we therefore explore the potential of such time series to forecast the Snow Line Elevation (SLE) in the European Alps. We generate monthly SLE time series from the entire Landsat archive (1985–2021) in 43 Alpine catchments. Positive long-term SLE change rates are detected, with the highest rates (5–8 m/y) in the Western and Central Alps. We use this SLE dataset to implement and evaluate seven uni-variate time series modeling and forecasting approaches. The best results were achieved by Random Forest, with a Nash–Sutcliffe efficiency (NSE) of 0.79 and a Mean Absolute Error (MAE) of 258 m, followed by Telescope (0.76, 268 m) and seasonal ARIMA (0.75, 270 m). Since model performance varies strongly with the input data, we developed a combined forecast based on the best-performing method in each catchment. This approach was then used to forecast the SLE for the years 2022–2029. In the majority of the catchments, the shift of the forecast median SLE level retains the sign of the long-term trend. In cases where a deviating SLE dynamic is forecast, a discussion based on the unique properties of the catchment and its past SLE dynamics is required. In the future, we expect major improvements in our SLE forecasting efforts from including external predictor variables in a multi-variate modeling approach.
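As a rough illustration of the uni-variate Random Forest approach, the sketch below turns a monthly SLE series into lagged features and evaluates NSE and MAE on a hold-out period. The series is synthetic and the lag depth, forest size, and one-step-ahead evaluation are arbitrary choices for illustration, not the study's configuration.

```python
# Minimal sketch of uni-variate SLE forecasting with lagged features and a Random Forest.
# The synthetic series and all hyper-parameters are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
months = np.arange(12 * 30)  # 30 years of monthly values
sle = 2400 + 600 * np.sin(2 * np.pi * months / 12) + 0.4 * months + rng.normal(0, 80, months.size)

n_lags = 12  # use the previous 12 months as features
X = np.column_stack([sle[lag : lag + len(sle) - n_lags] for lag in range(n_lags)])
y = sle[n_lags:]

split = len(y) - 48  # hold out the last 4 years
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])

obs = y[split:]
mae = np.mean(np.abs(pred - obs))
nse = 1 - np.sum((pred - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
print(f"NSE = {nse:.2f}, MAE = {mae:.0f} m")
```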
Service orchestration demands enormous attention and remains a struggle today. Virtualization provides a base level of abstraction that makes services deployable on a wide range of infrastructures. With container virtualization, however, the trend of decomposing applications into microservices so that they can run in fog and edge computing environments rapidly increases management and maintenance effort. Similarly, network virtualization adds the effort of calibrating IP flows for Software-Defined Networks and eventually routing them by means of Network Function Virtualization. Nevertheless, there are concepts such as MAPE-K to support microservice distribution in next-generation cloud and network environments. We want to explore how service distribution can be improved by adopting machine learning concepts for infrastructure or service changes. To this end, we show how federated machine learning can be integrated into a cloud-to-fog continuum without overburdening individual nodes.
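One common way to realize federated machine learning across fog nodes is federated averaging, where each node trains on its local data and only model parameters are aggregated. The sketch below shows just the aggregation step as a generic illustration; it is not the architecture proposed in the paper, and the node weights and sample counts are made-up examples.

```python
# Generic federated-averaging step: aggregate locally trained model weights
# without moving raw data off the fog/edge nodes. Illustrative only.
import numpy as np

def federated_average(client_weights, client_sample_counts):
    """Weighted average of per-node parameter vectors, weighted by local dataset size."""
    total = sum(client_sample_counts)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sample_counts))

# Three fog nodes with differently sized local datasets (assumed numbers).
weights = [np.array([0.2, 1.0]), np.array([0.4, 0.8]), np.array([0.1, 1.2])]
samples = [1000, 400, 100]
print(federated_average(weights, samples))
```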
In network research, reproducibility of experiments is not always easy to achieve. Infrastructures are cumbersome to set up or are not available due to vendor-specific devices. Emulators try to overcome these issues to a certain extent and are available in different service models. Unfortunately, using emulators requires time-consuming effort and a deep understanding of their functionality. First, we analyze to what extent currently available open-source emulators support network configurations and how user-friendly they are. With these insights, we describe how an easy-to-use emulator can be implemented and operated as a Network Emulator as a Service (NEaaS). Here, virtualization plays a major role in deploying a NEaaS based on Kathará.
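A NEaaS back end ultimately has to generate a lab description and hand it to the emulator. The sketch below is a hypothetical, heavily simplified wrapper that writes a two-host Kathará lab and starts it via the CLI; the lab.conf syntax and the `kathara lstart`/`lclean` commands are assumptions based on Kathará's Netkit heritage, and the paper's actual service implementation may look quite different.

```python
# Hypothetical NEaaS back-end sketch: write a minimal Kathará lab and start it.
# Assumes the Kathará CLI is installed; lab layout and CLI flags are assumptions.
import pathlib
import subprocess

def deploy_lab(lab_dir: str) -> None:
    lab = pathlib.Path(lab_dir)
    lab.mkdir(parents=True, exist_ok=True)
    # Two hosts attached to the same collision domain "A".
    (lab / "lab.conf").write_text('pc1[0]="A"\npc2[0]="A"\n')
    (lab / "pc1.startup").write_text("ip addr add 10.0.0.1/24 dev eth0\n")
    (lab / "pc2.startup").write_text("ip addr add 10.0.0.2/24 dev eth0\n")
    subprocess.run(["kathara", "lstart", "-d", str(lab)], check=True)

def destroy_lab(lab_dir: str) -> None:
    subprocess.run(["kathara", "lclean", "-d", str(lab_dir)], check=True)
```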
This paper discusses the problem of finding multiple shortest disjoint paths in modern communication networks, which is essential for ultra-reliable and time-sensitive applications. Dijkstra's algorithm has been a popular solution to the shortest path problem, but using it repeatedly to find multiple paths does not scale. The Multiple Disjoint Path Algorithm (MDPAlg), published in 2021, proposes the use of a single full graph to construct multiple disjoint paths. This paper proposes modifications to the algorithm to include a delay constraint, which is important in time-sensitive applications. Different delay-constrained least-cost routing algorithms are compared comprehensively to evaluate the benefits of the adapted MDPAlg algorithm. Fault tolerance, and thereby reliability, is ensured by generating multiple link-disjoint paths from source to destination.
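For orientation, the sketch below shows the naive repeated-Dijkstra baseline that MDPAlg improves upon: run Dijkstra, remove the used links, repeat, and discard candidate paths that violate a delay bound. The graph and bound are made-up examples, and this is explicitly not the MDPAlg algorithm itself.

```python
# Naive baseline: repeatedly run Dijkstra, remove used links, and enforce a delay bound.
# Illustrates the problem setting only, not the MDPAlg algorithm from the paper.
import networkx as nx

def disjoint_paths_with_delay(g, src, dst, k, max_delay):
    work = g.copy()
    paths = []
    while len(paths) < k:
        try:
            path = nx.dijkstra_path(work, src, dst, weight="cost")
        except nx.NetworkXNoPath:
            break
        edges = list(zip(path, path[1:]))
        delay = sum(g[u][v]["delay"] for u, v in edges)
        work.remove_edges_from(edges)  # enforce link-disjointness
        if delay <= max_delay:         # enforce the delay constraint
            paths.append(path)
    return paths

g = nx.Graph()
g.add_edge("s", "a", cost=1, delay=2); g.add_edge("a", "t", cost=1, delay=2)
g.add_edge("s", "b", cost=2, delay=1); g.add_edge("b", "t", cost=2, delay=1)
print(disjoint_paths_with_delay(g, "s", "t", k=2, max_delay=5))
```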
In this work, we describe the network from data collection to data processing and storage as a system consisting of different layers. We outline the individual layers and highlight major tasks and dependencies with regard to energy consumption and energy efficiency. With this view, we can work out the challenges and questions a future system architect must answer to provide a more sustainable, green, resource-friendly, and energy-efficient application or system. To this end, all system layers must be considered both individually and as a whole for future IoT solutions. This requires, in particular, novel sustainability metrics in addition to current Quality of Service and Quality of Experience metrics, in order to provide a high-performing, user-satisfying, and sustainable network.
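One candidate for such a sustainability metric is the end-to-end energy spent per delivered bit, aggregated across the layers discussed here. The sketch below is a purely illustrative calculation with assumed per-layer power and traffic figures; it is not a metric defined in the paper.

```python
# Illustrative per-layer energy accounting for one IoT reporting interval.
# All power and traffic figures are assumed placeholder values.

layers_w = {               # average power draw attributed to the application (assumed)
    "sensing": 0.05,
    "access network": 0.30,
    "core/transport": 0.10,
    "cloud processing/storage": 0.40,
}
interval_s = 60.0          # one report per minute
payload_bits = 2_000 * 8   # 2 kB payload per report

energy_j = sum(layers_w.values()) * interval_s
print(f"Energy per report: {energy_j:.1f} J -> {energy_j / payload_bits * 1e6:.0f} uJ/bit")
```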
How to Model and Predict the Scalability of a Hardware-In-The-Loop Test Bench for Data Re-Injection?
(2023)
This paper describes a novel application of an empirical network calculus model based on measurements of a hardware-in-the-loop (HIL) test system. The aim is to predict the performance of a HIL test bench for open-loop re-injection in the context of scalability. HIL test benches are distributed computer systems comprising software, hardware, and networking devices. They are used to validate complex technical systems, but have not yet been the system under study themselves. Our approach is to use measurements from the HIL system to create an empirical model of arrival and service curves. We predict the performance and design the previously unknown parameters of the HIL simulator with network calculus (NC), namely the buffer sizes and the minimum required pre-buffer time for the playback buffer. We furthermore show that it is possible to estimate the CPU load from arrival and service curves based on the utilization theorem, and hence to estimate the scalability of the HIL system in terms of the number of sensor streams.
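For readers unfamiliar with network calculus, the standard bounds used in such a model are simple to state: for a token-bucket arrival curve alpha(t) = b + r*t served by a rate-latency service curve beta(t) = R*(t - T)+, the backlog is bounded by b + r*T and the delay by T + b/R, provided r <= R. The sketch below evaluates these textbook bounds for assumed example parameters; the arrival and service curves in the paper are fitted from measurements and may have a different shape.

```python
# Textbook network-calculus bounds for a token-bucket arrival curve and a
# rate-latency service curve. Parameter values are assumed examples only.

def nc_bounds(b_bits, r_bps, R_bps, T_s):
    """Backlog and delay bounds for alpha(t)=b+r*t served by beta(t)=R*(t-T)+ (requires r <= R)."""
    assert r_bps <= R_bps, "stability requires the arrival rate not to exceed the service rate"
    backlog_bits = b_bits + r_bps * T_s  # worst-case buffer occupancy
    delay_s = T_s + b_bits / R_bps       # worst-case queueing delay
    return backlog_bits, delay_s

# One sensor stream: 10 kbit burst, 1 Mbit/s sustained rate, served at 10 Mbit/s after 2 ms latency.
backlog, delay = nc_bounds(b_bits=10_000, r_bps=1e6, R_bps=10e6, T_s=0.002)
print(f"buffer bound: {backlog:.0f} bit, delay bound: {delay * 1e3:.2f} ms")

# Utilization-style scaling: n identical streams aggregate to rate n*r, so the
# system stays stable as long as n*r <= R.
n_max = int(10e6 // 1e6)
print(f"at most {n_max} such streams fit without exceeding the service rate")
```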
In recent years, satellite communication has been expanding its field of application in the world of computer networks. This paper aims to provide an overview of how a typical scenario involving 5G Non-Terrestrial Networks (NTNs) for vehicle-to-everything (V2X) applications is characterized. In particular, a first implementation of a system that integrates the two is described. This framework will later be used to evaluate the performance of applications such as Vehicle Monitoring (VM), Remote Driving (RD), Voice over IP (VoIP), and others. Different configuration scenarios, such as Low Earth Orbit (LEO) and Geostationary Orbit (GEO), are considered.
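A key difference between the LEO and GEO configurations for such applications is propagation delay, which follows directly from orbit altitude. The sketch below computes the one-way propagation delay for a satellite directly overhead, using assumed round-number altitudes; real delays grow further with elevation angle and gateway hops.

```python
# One-way propagation delay to a satellite directly overhead, for typical orbit altitudes.
# Altitudes are assumed round numbers for illustration.

C = 299_792_458.0  # speed of light in m/s

for name, altitude_km in (("LEO", 550), ("GEO", 35_786)):
    delay_ms = altitude_km * 1000.0 / C * 1e3
    print(f"{name} (~{altitude_km} km): one-way delay ~ {delay_ms:.1f} ms")
```

This gap of roughly two orders of magnitude is why latency-sensitive applications such as Remote Driving react far more strongly to the GEO configuration than Vehicle Monitoring does.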
The introduction of new types of frequency spectrum in 6G technology facilitates the convergence of conventional mobile communications and radar functions. Thus, the mobile network itself becomes a versatile sensor system. This enables mobile network operators to offer a sensing service in addition to conventional data and telephony services. The potential benefits are expected to accrue to various stakeholders, including individuals, the environment, and society in general. The paper discusses the technological development, possible integration paths, and use cases, as well as areas for future work.