Refine
Has Fulltext
- yes (56)
Is part of the Bibliography
- yes (56)
Year of publication
- 2022 (56)
Document Type
- Journal article (35)
- Doctoral Thesis (13)
- Working Paper (6)
- Bachelor Thesis (1)
- Study Thesis (term paper) (1)
Language
- English (56)
Keywords
- virtual reality (8)
- Datennetz (4)
- IoT (3)
- Kleinsatellit (3)
- deep learning (3)
- machine learning (3)
- CADe (2)
- Cloud Computing (2)
- Zuverlässigkeit (2)
- body weight modification (2)
- body weight perception (2)
- colonoscopy (2)
- immersion (2)
- simulation (2)
- virtual environments (2)
- 3D model generation (1)
- AVA (1)
- Alps (1)
- Anomalieerkennung (1)
- Apple Watch 7 (1)
- Automation (1)
- Autorotation (1)
- Balloon (1)
- Betriebssystem (1)
- Broadcast Growth Codes (BCGC) (1)
- CLIP (1)
- Cloud (1)
- Cloud-native (1)
- Compass framework (1)
- Compiler (1)
- Complicacy (1)
- Control room (1)
- CubeSat (1)
- Cyber-physisches System (1)
- DNA storage (1)
- Debugging (1)
- Delay Tolerant Network (1)
- Distributed System (1)
- Distributed computing (1)
- Drahtloses Sensorsystem (1)
- Earth Observation (1)
- End-to-End Automation (1)
- Ende-zu-Ende Automatisierung (1)
- Energieeffizienz (1)
- Energy Efficiency (1)
- Erderkundungssatellit (1)
- Failure Prediction (1)
- Fehlervorhersage (1)
- Fitbit Sense (1)
- Forecasting (1)
- Forschungssatellit (1)
- Fraud detection (1)
- Garmin Fenix 6 Pro (1)
- Graphenzeichnen (1)
- Human-Computer Interaction (1)
- IEEE Std 802.15.4 (1)
- Image Aesthetic Assessment (1)
- Implementierung <Informatik> (1)
- Industrie 4.0 (1)
- Intelligent Transportation Systems (1)
- InteractionSuitcase (1)
- Internet of Things (1)
- Kommunikationsprotokoll (1)
- Latenz (1)
- Leistungsbedarf (1)
- Leistungsbewertung (1)
- Lightning (1)
- LoRa (1)
- LoRaWAN (1)
- LoRaWan (1)
- Logistik (1)
- Low Earth Orbit (1)
- Mars (1)
- Maschinelles Lernen (1)
- Media Access Control (1)
- Mehrkriterielle Optimierung (1)
- Mensch-Maschine-Kommunikation (1)
- Mensch-Maschine-Schnittstelle (1)
- Microservice (1)
- Mikroservice (1)
- Model based communication (1)
- Model based mission realization (1)
- Modeling (1)
- Modellierung (1)
- Nano-Satellite (1)
- Nanosatellit (1)
- Optical Music Recognition (1)
- Performance (1)
- Platooning (1)
- Power Consumption (1)
- Prognose (1)
- Psychische Gesundheit (1)
- RLNC (1)
- Rechenzentrum (1)
- Routing (1)
- Satellit (1)
- Satellite Ground Station (1)
- Satellite Network (1)
- Satellite formation (1)
- Satellitenfunk (1)
- Self-Aware Computing (1)
- Simulation (1)
- Snow Line Elevation (1)
- Softwareentwicklung (1)
- Softwaremetrie (1)
- Softwaresystem (1)
- Softwaretest (1)
- Softwarewartung (1)
- Transportsystem (1)
- UAP (1)
- UFO (1)
- Vehicle Routing Problem (1)
- Venus (1)
- Verteiltes System (1)
- Virtuelle Realität (1)
- Withings ScanWatch (1)
- XR (1)
- YouTube (1)
- Zeitdiskretes System (1)
- Zeitreihe (1)
- Zeitreihenvorhersage (1)
- adaptive network coding (1)
- adaptive tutoring (1)
- adult learning (1)
- aerodynamic drag reduction (1)
- aerospace (1)
- affective appraisal (1)
- agency (1)
- ancillary services (1)
- annotation (1)
- ant-colony optimization (1)
- artificial intelligence (1)
- auction based task assignment (1)
- authoring platform (1)
- automation (1)
- autorotation (1)
- avatar embodiment (1)
- avatars (1)
- background knowledge (1)
- behavior perception (1)
- beyond planarity (1)
- body awareness (1)
- body image distortion (1)
- body image disturbance (1)
- cardiac training group (1)
- certifying algorithm (1)
- chain cover (1)
- channel management (1)
- circular layouts (1)
- circular-arc drawings (1)
- coherence (1)
- collision (1)
- congruence (1)
- controller failure recovery (1)
- convex bipartite graph (1)
- crossing minimization (1)
- cultural and media studies (1)
- culturally aware (1)
- cycling (1)
- descent (1)
- design (1)
- discrete-time analysis (1)
- distance compression (1)
- drag area (1)
- dynamic programming (1)
- eating and body weight disorders (1)
- education (1)
- electric vehicles (1)
- embodiment (1)
- emotions (1)
- encryption (1)
- endoscopy (1)
- environmental modeling (1)
- ethics (1)
- experience (1)
- explainable AI (1)
- explanation complexity (1)
- extended reality (XR) (1)
- food quality (1)
- force feedback (1)
- forecast (1)
- formation control (1)
- fractionated spacecraft (1)
- fruit temperature (1)
- fully convolutional neural networks (1)
- gambling (1)
- gamification (1)
- gastroenterology (1)
- genetic algorithm (1)
- glaucoma progression (1)
- graph algorithm (1)
- graph drawing (1)
- health tracker (1)
- healthcare (1)
- healthcare professionals (1)
- heart failure (1)
- heart failure training group (1)
- high-accuracy 3D measurements (1)
- higher education (1)
- historical document analysis (1)
- human-centered AI (1)
- human-robot interaction (1)
- illusion of self-motion (1)
- implicit association test (1)
- individual differences (1)
- induced matching (1)
- information systems and information technology (1)
- instrument (1)
- intelligent virtual agents (1)
- internet of things (1)
- internet traffic (1)
- intervention (1)
- landing (1)
- language-image pre-training (1)
- latency (1)
- latency cybersickness (1)
- laterality (1)
- light-gated proteins (1)
- logistics (1)
- m exercise training (1)
- medieval manuscripts (1)
- mixed-cultural (1)
- mobile networks (1)
- mobile streaming (1)
- motion compensation (1)
- mountains (1)
- multi-source multi-sink problem (1)
- nanocellulose (1)
- network softwarization (1)
- networked predictive control (1)
- neume notation (1)
- non-native accent (1)
- nycthemeral intraocular pressure (1)
- object detection (1)
- optical underwater 3D sensor (1)
- passage of time (1)
- performance evaluation (1)
- personalized medicine (1)
- photoplethysmography (1)
- plausibility (1)
- point cloud (1)
- polyp (1)
- precision horticulture (1)
- prediction (1)
- presence (1)
- prompt engineering (1)
- protein chip (1)
- public speaking (1)
- real-world application (1)
- recommender agent (1)
- rehabilitation (1)
- rich vehicle routing problem (1)
- right-left comparison (1)
- risks (1)
- robot-supported training (1)
- robotic tutor (1)
- rotorcraft (1)
- scheduling (1)
- science, technology and society (1)
- secure group communication (1)
- self-organization (1)
- single-electron transistors (1)
- smart charging (1)
- smart grid (1)
- smartwatch (1)
- social VR (1)
- social robot (1)
- social robotics (1)
- socially interactive agents (1)
- spacecraft control (1)
- stereotypes (1)
- structured illumination (1)
- sunburn (1)
- technology acceptance (1)
- technology-supported education (1)
- technology-supported learning (1)
- text supervision (1)
- theory (1)
- therapeutic application (1)
- therapy (1)
- thermal point cloud (1)
- thrust vector control (1)
- time series (1)
- unmanned aerial vehicles (1)
- user experience (1)
- vection (1)
- virtual agent (1)
- virtual body ownership (1)
- virtual human (1)
- virtual stimuli (1)
- virtual tunnel (1)
- virtual reality (1)
- wearable (1)
- wireless sensor network (1)
Institute
- Institut für Informatik (56)
Other participating institutions
Since the first CubeSat launch in 2003, the hardware and software complexity of nanosatellites has continuously increased.
To keep up with the continuously increasing mission complexity and to retain the primary advantages of a CubeSat mission, a new approach for the overall space and ground software architecture and protocol configuration is developed in this work.
The aim of this thesis is to propose a uniform software and protocol architecture as a basis for the software development, testing, simulation, and operation of multiple pico-/nanosatellites based on ultra-low-power components.
In contrast to single-CubeSat missions, current and upcoming nanosatellite formation missions require faster and more straightforward development, pre-flight testing and calibration procedures as well as simultaneous operation of multiple satellites.
A dynamic and decentralized Compass mission network, consisting of uniformly accessible nodes, was established in multiple active CubeSat missions.
The Compass middleware was developed to unify the communication and functional interfaces between all involved mission-related software and hardware components.
All systems can access each other via dynamic routes to perform service-based M2M communication.
With the proposed model-based communication approach, all states, abilities and functionalities of a system are accessed in a uniform way.
The Tiny scripting language was designed to allow dynamic code execution on ultra-low-power components as a basis for a constraint-based in-orbit scheduler and experiment execution.
The implemented Compass Operations front-end enables far-reaching monitoring and control capabilities of all ground and space systems.
Its integrated constraint-based operations task scheduler allows the recording of complex satellite operations, which are conducted automatically during the overpasses.
The outcome of this thesis became an enabling technology for UWE-3, UWE-4 and NetSat CubeSat missions.
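As a rough illustration of the kind of constraint-based overpass scheduling described above, the sketch below greedily places tasks with start-time windows into a single ground-station pass. The task names, durations, and the greedy strategy are illustrative assumptions, not the actual Compass scheduler.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Task:
    name: str
    duration: float   # seconds
    earliest: float   # earliest allowed start (seconds into the pass)
    latest: float     # latest allowed start (seconds into the pass)

def schedule_pass(tasks: List[Task], pass_length: float) -> Optional[List[Tuple[str, float]]]:
    """Greedy constraint check: place tasks in order of their earliest
    start, respecting each task's start window and the pass duration."""
    t, plan = 0.0, []
    for task in sorted(tasks, key=lambda k: k.earliest):
        start = max(t, task.earliest)
        if start > task.latest or start + task.duration > pass_length:
            return None               # constraints unsatisfiable
        plan.append((task.name, start))
        t = start + task.duration
    return plan

plan = schedule_pass(
    [Task("downlink_telemetry", 120, 0, 60),
     Task("upload_schedule", 90, 100, 300)],
    pass_length=480)
print(plan)
```

A real in-orbit scheduler would additionally handle resource constraints (power, memory) and re-plan when a pass is missed; this sketch only shows the time-window feasibility core.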
A graph is an abstract network that represents a set of objects, called vertices, and relations between these objects, called edges. Graphs can model various networks, for example a social network in which the vertices correspond to users and the edges represent relations between them. To understand the structure of a graph, it is helpful to visualize it. A standard visualization is a node-link diagram in the Euclidean plane: the vertices are drawn as points, and each edge is drawn as a Jordan curve between its two endpoints. Edge crossings decrease the readability of a drawing; therefore, Crossing Optimization is a fundamental problem in Computer Science. This book explores the research frontiers and introduces novel approaches in Crossing Optimization.
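To make the notion of crossings concrete, the following sketch counts crossings in a circular layout, using the textbook observation (not a method from the book itself) that two straight-line chords cross exactly when their endpoints interleave around the circle.

```python
from itertools import combinations

def circular_crossings(n, edges):
    """Count crossings when vertices 0..n-1 lie on a circle in order and
    edges are drawn as straight chords: chords (a,b) and (c,d) cross
    exactly when their endpoints interleave around the circle."""
    def interleave(e, f):
        a, b = sorted(e)
        c, d = sorted(f)
        return (a < c < b < d) or (c < a < d < b)
    return sum(interleave(e, f) for e, f in combinations(edges, 2))

# K4 on a circle: only the two diagonals (0,2) and (1,3) cross.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
print(circular_crossings(4, edges))  # 1
```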
This thesis deals with the first part of a larger project that pursues the ultimate goal of implementing a software tool that creates a Mission Control Room in Virtual Reality. The software is to be used for the operation of spacecraft and is specially developed for the unique real-time requirements of unmanned satellite missions. From launch, throughout the whole mission, and up to the recovery or disposal of the satellite, all systems need to be monitored and controlled at regular intervals to ensure the mission’s success. Mission Operation is an essential part of every space mission and has been undertaken for decades. Recent technological advancements in the realm of immersive technologies pave the way for innovative methods of operating spacecraft. Virtual Reality has the capability to resolve the physical constraints imposed by traditional Mission Control Rooms and thereby opens up novel opportunities. The paper highlights the underlying theoretical aspects of Virtual Reality, Mission Control, and IP Communication. However, the focus lies on the practical part of this thesis, which revolves around the first steps of implementing the virtual Mission Control Room in the Unity Game Engine. Overall, this paper serves as a demonstration of Virtual Reality technology and shows its possibilities with respect to the operation of spacecraft.
Continued reports of unidentified aerial phenomena (UAP) over the past decades have made their investigation and research highly relevant. In particular, reports by US Navy pilots and official investigations by the US Office of the Director of National Intelligence have emphasized the value of such efforts. Because Earth-based observations are inherently limited in scope, a satellite-based instrument for detecting such phenomena may prove especially useful. This paper therefore investigates the viability of such an instrument on a nanosatellite mission.
The classical Vehicle Routing Problem (VRP), i.e., assigning a set of orders to vehicles and planning their routes, has been intensively researched over the last decades. Since even this basic assignment of orders to vehicles and planning of their routes is already NP-complete, algorithms applied in practice often fail to take into account the constraints and restrictions that arise in real-world applications, the so-called rich VRP (rVRP), and are limited to single aspects. In this work, we incorporate the main relevant real-world constraints and requirements: we propose a two-stage strategy and a Timeline algorithm for time windows and pause times, and apply a Genetic Algorithm (GA) and Ant Colony Optimization (ACO) individually to the problem to find optimal solutions. Our evaluation on eight different problem instances against four state-of-the-art algorithms shows that our approach handles all given constraints in reasonable time.
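A minimal sketch of a timeline-style feasibility check for time windows and pause times might look as follows. The function name, the driving-time limit, and the pause length are illustrative assumptions (loosely modelled on EU driving-time rules), not the paper's actual Timeline algorithm.

```python
def timeline_feasible(route, travel, max_drive=4.5 * 3600, pause=45 * 60):
    """Check one vehicle route against customer time windows and a
    mandatory pause after max_drive seconds of accumulated driving.
    route: list of (window_open, window_close, service_time) in seconds;
    travel: travel time of each leg, in seconds."""
    t, driven = 0.0, 0.0
    for stop, leg in zip(route, travel):
        t += leg
        driven += leg
        if driven > max_drive:      # insert the required pause
            t += pause
            driven = 0.0
        open_, close, service = stop
        t = max(t, open_)           # wait until the window opens
        if t > close:
            return False            # window missed -> route infeasible
        t += service
    return True

# two stops with generous windows, legs of 600 s each: feasible
print(timeline_feasible([(0, 3600, 300), (0, 7200, 300)], [600, 600]))
```

In a two-stage approach, a fast check like this can prune infeasible routes before the metaheuristic (GA or ACO) evaluates their cost.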
The first step towards aerial planetary exploration has been made: Ingenuity has shown extremely promising results, and new missions are already underway. Rotorcraft are capable of flight on other planets. This capability could be utilized to support the last stages of Entry, Descent, and Landing; thus, mass and complexity could be scaled down.
Autorotation is one method of descent: unpowered descent and landing, typically performed by helicopters in case of an engine failure. MAPLE is suggested to test these procedures and to understand autorotation on other planets. In this series of experiments, the Ingenuity helicopter is utilized: Ingenuity would perform an autorotated "mid-air landing" before continuing with normal flight. Ultimately, the collected data shall help to understand autorotation on Mars and its utilization for interplanetary exploration.
An enduring engineering problem is the creation of unreliable software that leads to unreliable systems. One reason for this is that source code is written in a complicated manner, making it too hard for humans to review and understand. Complicated code leads to issues beyond dependability, such as expanded development effort and ongoing maintenance difficulties, ultimately costing developers and users more money.
There are many ideas about where the blame lies in the creation of buggy and unreliable systems. One prevalent idea is that the selected life cycle model is to blame; the oft-maligned “waterfall” life cycle model is a particularly popular recipient. In response, many organizations changed their life cycle model in hopes of addressing these issues. Agile life cycle models have become very popular, and they promote communication between team members and end users. In theory, this communication leads to fewer misunderstandings and should lead to less complicated and more reliable code.
Changing the life cycle model can indeed address communication issues, which can resolve many problems with understanding requirements.
However, most life cycle models do not specifically address coding practices or software architecture. Since life cycle models do not address the structure of the code, they are often ineffective at addressing problems related to code complicacy.
This dissertation answers several research questions concerning software complicacy, beginning with an investigation of traditional metrics and static analysis to evaluate their usefulness as measurement tools. This dissertation also establishes a new concept in applied linguistics by creating a measurement of software complicacy based on linguistic economy. Linguistic economy describes the efficiencies of speech, and this thesis shows the applicability of linguistic economy to software. Embedded in each topic is a discussion of the ramifications of overly complicated software, including the relationship of complicacy to software faults. Image recognition using machine learning is also investigated as a potential method of identifying problematic source code.
The central part of the work focuses on analyzing the source code of hundreds of different projects from different areas. A static analysis was performed on the source code of each project, and traditional software metrics were calculated. Programs were also analyzed using techniques developed by linguists to measure expression and statement complicacy and identifier complicacy. Professional software engineers were also directly surveyed to understand mainstream perspectives.
This work shows that it is possible to use traditional metrics as indicators of potential project bugginess. It also shows that image recognition can identify problematic pieces of source code. Finally, it shows that linguistic methods can determine which statements and expressions are least desirable and most complicated for programmers.
This work’s principal conclusion is that there are multiple ways to discover traits indicating that a project or a piece of source code is likely to be buggy. Traditional metrics and static analysis can be used to gain some understanding of software complicacy and bugginess potential. Linguistic economy provides a new tool for measuring software complicacy, and machine learning can predict where bugs may lie in source code. The significant implication of this work is that developers can recognize when a project is becoming buggy and take practical steps to avoid creating buggy projects.
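As an illustrative proxy only (not the dissertation's linguistic-economy metric), one could score a single source statement by counting its operators and distinct identifiers; longer, operator-dense statements score higher:

```python
import re

def statement_complicacy(stmt):
    """Illustrative proxy for statement complicacy: count operator
    tokens plus distinct identifiers in one source statement."""
    operators = re.findall(r"[+\-*/%<>=!&|^]+", stmt)
    identifiers = set(re.findall(r"\b[A-Za-z_]\w*\b", stmt))
    return len(operators) + len(identifiers)

# 3 operators (=, +, *) and 4 identifiers (total, items, i, price)
print(statement_complicacy("total = total + items[i] * price"))  # 7
```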
Lightning has fascinated humanity since the beginning of our existence. Different types of lightning, such as sprites and blue jets, have been discovered, and many more are theorized. It is very likely that these phenomena are not exclusive to our home planet: Venus’s dense and active atmosphere is a place where lightning is to be expected. Missions like Venera, Pioneer, and Galileo have carried instruments to measure electromagnetic activity, and these measurements have indeed delivered results. However, the results are not conclusive; they could also be explained by other effects such as cosmic rays, plasma noise, or spacecraft noise. Furthermore, this lightning seems different from the lightning we know on our home planet. To tackle these issues, a different approach to measurement is proposed: when multiple devices on different spacecraft or at different locations can measure the same atmospheric discharge, most other explanations become increasingly unlikely. Thus, the suggested instrument and method of VELEX incorporate multiple spacecraft. With this approach, the question of the existence of lightning on Venus could be settled.
Detecting anomalies in transaction data is an important task with high potential to avoid financial loss due to irregularities carried out deliberately or inadvertently, such as credit card fraud, occupational fraud in companies, or ordering and accounting errors. With the ongoing digitization of our world, data-driven approaches, including machine learning, can draw benefit from data with less manual effort and feature engineering. A large variety of machine learning-based anomaly detection methods approach this by learning a precise model of normality from which anomalies can be distinguished. Modeling normality in transactional data, however, requires capturing distributions and dependencies within the data precisely, with special attention to numerical dependencies such as quantities, prices, or amounts.
Neural Arithmetic Logic Units have been proposed as a neural architecture to implicitly model numerical dependencies. In practice, however, they exhibit stability and precision issues.
Therefore, we first develop an improved neural network architecture, iNALU, which is designed to better model numerical dependencies as found in transaction data. We compare this architecture to the previous approach and show in several experiments of varying complexity that our novel architecture provides better precision and stability.
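For context, the sketch below shows the weight construction of the original Neural Accumulator (the building block that NALU, and in turn iNALU, refines): tanh(ŵ)·σ(m̂) biases each effective weight towards −1, 0, or 1, so a saturated unit computes an exact sum. The parameter values are illustrative, and this is the prior architecture, not iNALU itself.

```python
import math

def nac_weight(w_hat, m_hat):
    """Effective weight of a Neural Accumulator cell:
    tanh(w_hat) * sigmoid(m_hat), which saturates towards -1, 0, or 1
    so the layer learns exact addition/subtraction."""
    return math.tanh(w_hat) * (1.0 / (1.0 + math.exp(-m_hat)))

# strongly saturated parameters give a weight of almost exactly 1,
# so the unit computes a plain sum: 3 + 4
w = nac_weight(10.0, 10.0)
print(round(w * 3.0 + w * 4.0, 2))  # 7.0
```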
We integrate this architecture into two generative neural network models adapted for transaction data and investigate how well normal behavior is modeled. We show that both architectures can successfully model normal transaction data, with our neural architecture improving generative performance for one model.
Since categorical and numerical variables are common in transaction data, but many machine learning methods only process numerical representations, we explore different representation learning techniques to transform categorical transaction data into dense numerical vectors. We extend this approach by proposing an outlier-aware discretization, thus incorporating numerical attributes into the computation of categorical embeddings, and investigate latent spaces, as well as quantitative performance for anomaly detection.
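The outlier-aware discretization idea could be sketched, under loose assumptions, as ordinary binning with dedicated outlier bins beyond the outer edges; the edge values and bin labels here are purely illustrative, not the thesis's actual method.

```python
def discretize(values, edges):
    """Map a numerical attribute to bins defined by ascending edges,
    with two extra bins for values beyond the outer edges so that
    outliers receive their own categorical symbol."""
    bins = []
    for v in values:
        if v < edges[0]:
            bins.append("LOW_OUTLIER")
        elif v >= edges[-1]:
            bins.append("HIGH_OUTLIER")
        else:
            i = max(i for i, e in enumerate(edges) if e <= v)
            bins.append(f"bin_{i}")
    return bins

print(discretize([5.0, 42.0, 9999.0], [0.0, 10.0, 100.0]))
```

The resulting symbols can then be embedded alongside genuinely categorical attributes, which is how numerical attributes could enter the computation of categorical embeddings.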
Next, we evaluate different scenarios for anomaly detection on transaction data. We extend our iNALU architecture to a neural layer that can model both numerical and non-numerical dependencies and evaluate it in a supervised and one-class setting. We investigate the stability and generalizability of our approach and show that it outperforms a variety of models in the balanced supervised setting and performs comparably in the one-class setting. Finally, we evaluate three approaches to using a generative model as an anomaly detector and compare the anomaly detection performance.
Latency is an inherent problem of computing systems: each computation takes time until its result is available. Virtual reality systems use elaborate computing resources to create virtual experiences, yet the latency of those systems is often ignored or assumed to be small enough to provide a good experience.
This cumulative thesis comprises published, peer-reviewed research papers exploring the behaviour and effects of latency. Contrary to the common description of latency as time-invariant, latency is shown to fluctuate. Few other researchers have looked into this time-variant behaviour. This thesis explores time-variant latency with a focus on randomly occurring latency spikes. Latency spikes are observed both for small algorithms and as end-to-end latency in complete virtual reality systems. Most latency measurements cluster close to the mean latency, with potentially multiple smaller clusters of larger latency values and rare extreme outliers. The latency behaviour differs between implementations of an algorithm: operating system schedulers and programming language environments such as garbage collectors contribute to the overall latency behaviour. The thesis demonstrates these influences using the example of different implementations of message passing.
The plethora of latency sources results in unpredictable latency behaviour, so measuring and reporting it in scientific experiments is important. This thesis describes established approaches to measuring latency and proposes an enhanced setup to gather detailed information. It further proposes to dissect the measured data with a stacked z-outlier test that separates the clusters of latency measurements for better reporting.
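One possible reading of such a stacked z-outlier test, sketched with illustrative numbers rather than the thesis's actual procedure, iteratively strips samples far above the re-estimated mean until no spikes remain:

```python
import statistics

def stacked_z_outliers(samples, z=3.0):
    """Iteratively remove samples more than z standard deviations above
    the mean, re-estimating mean and stdev each round, so that rare
    latency spikes are separated from the main measurement cluster."""
    core, outliers = list(samples), []
    while True:
        mu = statistics.mean(core)
        sd = statistics.pstdev(core)
        spikes = [s for s in core if sd and (s - mu) / sd > z]
        if not spikes:
            return core, outliers
        outliers += spikes
        core = [s for s in core if s not in spikes]

# 100 frame times around 16.6-16.7 ms plus one 120 ms spike
core, out = stacked_z_outliers([16.6] * 50 + [16.7] * 50 + [120.0])
print(out)  # [120.0]
```

Re-estimating the statistics after each removal matters: a single extreme spike inflates the standard deviation enough to mask smaller clusters that only become visible once it is gone.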
Latency in virtual reality applications can degrade the experience in multiple ways; the thesis focuses on cybersickness as a major detrimental effect. An approach to simulating time-variant latency is proposed to make latency available as an independent variable in experiments on latency's effects. An experiment with modified latency shows that latency spikes can contribute to cybersickness. A review of related research shows that different time-invariant latency behaviour also contributes to cybersickness.