Refine
Has Fulltext
- yes (51)
Year of publication
- 2021 (51)
Document Type
- Journal article (28)
- Doctoral Thesis (13)
- Conference Proceeding (7)
- Book (1)
- Report (1)
- Working Paper (1)
Keywords
- virtual reality (9)
- artificial intelligence (3)
- augmented reality (3)
- Quality of Experience (2)
- Virtuelle Realität (2)
- avatars (2)
- human-computer interaction (2)
- immersion (2)
- immersive technologies (2)
- performance modeling (2)
- quality of experience (2)
- 3D Laser Scanning (1)
- 3D-reconstruction methods (1)
- 3DTK toolkit (1)
- Algorithmische Geometrie (1)
- Algorithmus (1)
- Angriff (1)
- Augmented Reality (1)
- Auto-Scaling (1)
- Außerschulische Bildung (1)
- Baseline Constrained LAMBDA (1)
- Benchmarking (1)
- Berechnungskomplexität (1)
- Beweissystem (1)
- Cloud Computing (1)
- Computer Science education (1)
- Computersicherheit (1)
- Computerspiel (1)
- Conjunction analysis (1)
- Corporate Network (1)
- CubeSat (1)
- CubeSat GNSS (1)
- Daedalus-Projekt (1)
- Data Mining (1)
- Datensicherung (1)
- Didaktik der Informatik (1)
- Dienstgüte (1)
- Disjoint pair (1)
- Drohne <Flugkörper> (1)
- EPM (1)
- Educational robotics (1)
- Educational robotics competitions (1)
- Eindringerkennung (1)
- Elementaggregation über dynamische Terme (1)
- Error-State Extended Kalman Filter (1)
- Erweiterte Realität <Informatik> (1)
- FRAMEWORK <Programm> (1)
- Fachdidaktik (1)
- Feature Engineering & Extraction (1)
- Flugnavigation (1)
- Flugregelung (1)
- Forecasting (1)
- Game mechanic (1)
- Gamification (1)
- Graphenzeichnen (1)
- Gravitationsmodellunsicherheit (1)
- Gravity model uncertainty (1)
- HMD (Head-Mounted Display) (1)
- HTTP adaptive video streaming (1)
- IT-Sicherheit (1)
- Industrieroboter (1)
- Informatik (1)
- Integer Expression (1)
- Integer circuit (1)
- Intelligent Virtual Agents (1)
- Interkulturelles Lernen (1)
- International Comparative Research (1)
- Intrusion Detection (1)
- Kalman-Filter (1)
- Kerneldensity estimation (1)
- Knowledge encoding (1)
- Kombinatorik (1)
- Komplexität (1)
- Konjunktionsanalyse (1)
- Konvexe Zeichnungen (1)
- Künstliche Intelligenz (1)
- Lava (1)
- Lehrerbildung (1)
- Leistungsbewertung (1)
- Lernen (1)
- Loose Coupling (1)
- Lunar Caves (1)
- Lunar Exploration (1)
- Mapping (1)
- Markov model (1)
- Markovian and Non-Markovian systems (1)
- Mathematisches Modell (1)
- Medienkompetenz (1)
- Mensch-Maschine-Kommunikation (1)
- Mensch-Maschine-Schnittstelle (1)
- Modellgetriebene Entwicklung (1)
- Mond (1)
- NP-vollständiges Problem (1)
- Netzwerkdaten (1)
- Neuronale Netze (1)
- Nutzerschnittstellen (1)
- Onboard Software (1)
- Orakel <Informatik> (1)
- Orbit determination (1)
- Orbitbestimmung (1)
- P-optimal (1)
- Phasenmehrdeutigkeit (1)
- Planare Graphen (1)
- Poisson surface reconstruction (1)
- Polyeder (1)
- Problemlösefähigkeiten (1)
- Prognose (1)
- Projektmanagement (1)
- Propositional proof system (1)
- Referenzmodell (1)
- Roboterwettbewerbe (1)
- Robotik (1)
- Rodos (1)
- SIMOC (1)
- Satellit (1)
- Sensorfusion (1)
- Serious game (1)
- Skalierbarkeit (1)
- Software-defined networking (1)
- Source Code Generation (1)
- Space Debris (1)
- Spherical Robot (1)
- Spielmechanik (1)
- Statische Analyse (1)
- TETCs (1)
- Telemetrie (1)
- Thermospheric density uncertainty (1)
- Thermosphärische Dichteunsicherheit (1)
- Ultra-Wideband (UWB) radio ranging (1)
- Umfrage (1)
- Uncertainty realism (1)
- Unmanned Aerial Vehicle (UAV) (1)
- Unsicherheitsrealismus (1)
- Veranstaltung (1)
- Videospiel (1)
- Videoübertragung (1)
- Virtualisierung (1)
- Vorgehensmodell (1)
- Wissensencodierung (1)
- XR (1)
- XR-artificial intelligence combination (1)
- XR-artificial intelligence continuum (1)
- Zeitreihenanalyse (1)
- acrophobia (1)
- affective computing (1)
- agency (1)
- agent-based models (1)
- agents (1)
- anxiety (1)
- application design (1)
- attack-aware (1)
- avatar embodiment (1)
- biosignals (1)
- clinical data warehouse (1)
- clinical study (1)
- co-authorships (1)
- co-inventorships (1)
- collaboration (1)
- communication network (1)
- communication networks (1)
- cost-sensitive learning (1)
- crowdsensing (1)
- crowdsourced measurements (1)
- decision-making (1)
- denial of service (1)
- dependable software (1)
- descriptors (1)
- design cycle (1)
- detection time simulation (1)
- dimensions of proximity (1)
- discrete-time models and analysis (1)
- education (1)
- educational games (1)
- electronic data capture (1)
- electronic health records (1)
- elevated plus-maze (1)
- embedding techniques (1)
- embodiment (1)
- emotions (1)
- event detection (1)
- foreign language learning and teaching (1)
- gait disorder (1)
- game mechanics (1)
- games (1)
- hackathons (1)
- handwriting (1)
- harness free satellite (1)
- head-mounted display (1)
- heterogeneous background (1)
- human behaviour (1)
- human computer interaction (HCI) (1)
- human-artificial intelligence interaction (1)
- human-artificial intelligence interface (1)
- human-centered design (1)
- human-centered, human-robot (1)
- imbalanced regression (1)
- immersive interfaces (1)
- immersive learning technologies (1)
- insect tracking (1)
- interactive authoring system (1)
- intercultural learning and teaching (1)
- interdisciplinary education (1)
- intrusion detection (1)
- iowa gambling task (1)
- laser ranging (1)
- learning environments (1)
- locomotion (1)
- machine learning (1)
- map projections (1)
- meditation (1)
- mindfulness (1)
- mixed reality (1)
- mixed-cultural settings (1)
- mobile networks (1)
- motivation (1)
- movement ecology (1)
- multimodal learning (1)
- multiple sclerosis (1)
- natural environment (1)
- natural language processing (1)
- natural user interfaces (1)
- network function virtualization (1)
- octree (1)
- online survey (1)
- optimization (1)
- passive haptic feedback (1)
- performance analysis (1)
- performance evaluation (1)
- physiology (1)
- place-illusion (1)
- plausibility-illusion (1)
- point cloud compression (1)
- point cloud registration (1)
- point-to-plane measure (1)
- point-to-point measure (1)
- problem solving skills (1)
- procedural content generation (1)
- psychomotor training (1)
- psychophysiology (1)
- quality of experience prediction (1)
- queueing theory (1)
- real-time (1)
- realism (1)
- recommender system (1)
- rehabilitation (1)
- research methods (1)
- sample weighting (1)
- satellite technology (1)
- secondary data usage (1)
- self-assembly (1)
- self-aware (1)
- self-supervised learning (1)
- semantic understanding (1)
- sensor devices (1)
- serious games (1)
- sensors (1)
- sketching (1)
- software-defined networking (1)
- space–terrestrial networks (1)
- spatial presence (1)
- statistical methods (1)
- statistical validity (1)
- stochastic processes (1)
- stroke (1)
- structural battery (1)
- stylus (1)
- supervised learning (1)
- system simulation (1)
- systematic literature review (1)
- systematic review (1)
- teacher education (1)
- time perception (1)
- tools (1)
- trait anxiety (1)
- translational neuroscience (1)
- transportation (1)
- usability evaluation (1)
- use cases (1)
- user experience (1)
- user interaction (1)
- user study (1)
- user-generated content (1)
- verbal behaviour (1)
- video game QoE (1)
- video game context factors (1)
- video streaming (1)
- virtual body ownership (1)
- virtual environments (1)
- virtual humans (1)
- virtual-reality-continuum (1)
- wireless communication (1)
- wireless-bus (1)
Institute
- Institut für Informatik (51)
Other participating institutions
EU-Project number / Contract (GA) number
- 824128 (1)
Realistic and lifelike 3D-reconstruction of virtual humans has various exciting and important use cases. Our and others’ appearances have notable effects on ourselves and our interaction partners in virtual environments, e.g., on acceptance, preference, trust, believability, behavior (the Proteus effect), and more. Today, multiple approaches for the 3D-reconstruction of virtual humans exist. They vary significantly in the degree of achievable realism, the technical complexity, and, finally, the overall reconstruction costs involved. This article compares two 3D-reconstruction approaches with very different hardware requirements. The high-cost solution uses a typically complex and elaborate camera rig consisting of 94 digital single-lens reflex (DSLR) cameras. The recently developed low-cost solution uses a smartphone camera to create videos that capture multiple views of a person. Both methods use photogrammetric reconstruction and template fitting with the same template model and differ in their adaptation to the method-specific input material. Each method generates high-quality virtual humans ready to be processed, animated, and rendered by standard XR simulation and game engines such as Unreal or Unity. In a user study, we compare the results of the two 3D-reconstruction methods against each other in an immersive virtual environment. Our results indicate that the virtual humans from the low-cost approach are perceived similarly to those from the high-cost approach regarding the perceived similarity to the original, human-likeness, beauty, and uncanniness, despite significant differences in the objectively measured quality. The perceived feeling of change of one’s own body was higher for the low-cost virtual humans. Quality differences were perceived more strongly for one’s own body than for other virtual humans.
These days, we are living in a digitalized world. Both our professional and private lives are pervaded by various IT services, which are typically operated using distributed computing systems (e.g., cloud environments). Due to the high level of digitalization, the operators of such systems are confronted with fast-paced and changing requirements. In particular, cloud environments have to cope with load fluctuations and respective rapid and unexpected changes in the computing resource demands. To face this challenge, so-called auto-scalers, such as the threshold-based mechanism in Amazon Web Services EC2, can be employed to enable elastic scaling of the computing resources. However, despite this opportunity, business-critical applications are still run with highly overprovisioned resources to guarantee a stable and reliable service operation. This strategy is pursued due to the lack of trust in auto-scalers and the concern that inaccurate or delayed adaptations may result in financial losses.
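To illustrate the kind of threshold-based mechanism mentioned above, the following minimal Python sketch shows a reactive scaling rule of the sort offered by services such as AWS EC2. The utilization thresholds, step sizes, and instance limits are hypothetical and not taken from any specific provider.

```python
# Minimal sketch of a threshold-based reactive auto-scaler (illustrative only;
# not the actual AWS EC2 mechanism). Thresholds and scaling steps are
# hypothetical parameters.

def reactive_scaling_decision(cpu_utilization, current_instances,
                              upper=0.80, lower=0.30,
                              min_instances=1, max_instances=20):
    """Return the new instance count for one scaling interval."""
    if cpu_utilization > upper:
        # Demand exceeds capacity: add one instance (scale out).
        return min(current_instances + 1, max_instances)
    if cpu_utilization < lower:
        # Capacity is underused: remove one instance (scale in).
        return max(current_instances - 1, min_instances)
    # Utilization within the target band: keep the current capacity.
    return current_instances


if __name__ == "__main__":
    print(reactive_scaling_decision(0.92, current_instances=4))  # -> 5
    print(reactive_scaling_decision(0.15, current_instances=4))  # -> 3
```

Note how the rule only reacts after high utilization has already been observed, which is exactly the inherent delay motivating the proactive, forecast-based approach discussed next.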
To adapt the resource capacity in time, the future resource demands must be "foreseen", as reacting to changes once they are observed introduces an inherent delay. In other words, accurate forecasting methods are required to adapt systems proactively. A powerful approach in this context is time series forecasting, which is also applied in many other domains. The core idea is to examine past values and predict how these values will evolve as time progresses. According to the "No-Free-Lunch Theorem", there is no algorithm that performs best for all scenarios. Therefore, selecting a suitable forecasting method for a given use case is a crucial task. Simply put, each method has its benefits and drawbacks, depending on the specific use case. The choice of the forecasting method is usually based on expert knowledge, which cannot be fully automated, or on trial-and-error. In both cases, this is expensive and prone to error.
Although auto-scaling and time series forecasting are established research fields, existing approaches cannot fully address the mentioned challenges: (i) In our survey on time series forecasting, we found that publications on time series forecasting typically consider only a small set of (mostly related) methods and evaluate their performance on a small number of time series with only a few error measures while providing no information on the execution time of the studied methods. Therefore, such articles cannot be used to guide the choice of an appropriate method for a particular use case; (ii) Existing open-source hybrid forecasting methods that take advantage of at least two methods to tackle the "No-Free-Lunch Theorem" are computationally intensive, poorly automated, designed for a particular data set, or they lack a predictable time-to-result. Methods exhibiting a high variance in the time-to-result cannot be applied for time-critical scenarios (e.g., auto-scaling), while methods tailored to a specific data set introduce restrictions on the possible use cases (e.g., forecasting only annual time series); (iii) Auto-scalers typically scale an application either proactively or reactively. Even though some hybrid auto-scalers exist, they lack sophisticated solutions to combine reactive and proactive scaling. For instance, resources are only released proactively while resource allocation is entirely done in a reactive manner (inherently delayed); (iv) The majority of existing mechanisms do not take the provider's pricing scheme into account while scaling an application in a public cloud environment, which often results in excessive charged costs. Even though some cost-aware auto-scalers have been proposed, they only consider the current resource demands, neglecting their development over time. For example, resources are often shut down prematurely, even though they might be required again soon.
To address the mentioned challenges and the shortcomings of existing work, this thesis presents three contributions: (i) The first contribution, a forecasting benchmark, addresses the problem of limited comparability between existing forecasting methods; (ii) The second contribution, Telescope, provides an automated hybrid time series forecasting method addressing the challenge posed by the "No-Free-Lunch Theorem"; (iii) The third contribution, Chamulteon, provides a novel hybrid auto-scaler for coordinated scaling of applications comprising multiple services, leveraging Telescope to forecast the workload intensity as a basis for proactive resource provisioning. In the following, the three contributions of the thesis are summarized:
Contribution I - Forecasting Benchmark
To establish a level playing field for evaluating the performance of forecasting methods in a broad setting, we propose a novel benchmark that automatically evaluates and ranks forecasting methods based on their performance in a diverse set of evaluation scenarios. The benchmark comprises four different use cases, each covering 100 heterogeneous time series taken from different domains. The data set was assembled from publicly available time series and was designed to exhibit much higher diversity than existing forecasting competitions. Besides proposing a new data set, we introduce two new measures that describe different aspects of a forecast. We applied the developed benchmark to evaluate Telescope.
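The following sketch illustrates, under simplifying assumptions, the core loop such a benchmark automates: apply each forecasting method to each time series, score the forecasts with an error measure, and rank the methods by average error. The two naive baselines, the sMAPE measure, and the synthetic data are placeholders, not the benchmark's actual methods, measures, or data set.

```python
# Hypothetical sketch of the core of an automatic forecasting benchmark:
# apply every method to every series, score the forecasts, and rank the
# methods by their average error. Method and measure choices are placeholders.
import numpy as np

def smape(actual, forecast):
    """Symmetric mean absolute percentage error in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(2.0 * np.abs(forecast - actual) /
                           (np.abs(actual) + np.abs(forecast)))

def rank_methods(methods, series_set, horizon):
    """methods: dict name -> callable(train, horizon) returning a forecast."""
    scores = {name: [] for name in methods}
    for series in series_set:
        train, test = series[:-horizon], series[-horizon:]
        for name, method in methods.items():
            scores[name].append(smape(test, method(train, horizon)))
    # Rank by mean error over all series (lower is better).
    return sorted((float(np.mean(errs)), name) for name, errs in scores.items())

# Example with two naive baseline methods on synthetic data.
naive_last = lambda train, h: np.repeat(train[-1], h)
naive_mean = lambda train, h: np.repeat(np.mean(train), h)
data = [np.sin(np.linspace(0, 20, 200)) + 2.0 for _ in range(5)]
print(rank_methods({"naive_last": naive_last, "naive_mean": naive_mean}, data, 10))
```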
Contribution II - Telescope
To provide a generic forecasting method, we introduce a novel machine learning-based forecasting approach that automatically retrieves relevant information from a given time series. More precisely, Telescope automatically extracts intrinsic time series features and then decomposes the time series into components, building a forecasting model for each of them. Each component is forecast by applying a different method and then the final forecast is assembled from the forecast components by employing a regression-based machine learning algorithm. In more than 1300 hours of experiments benchmarking 15 competing methods (including approaches from Uber and Facebook) on 400 time series, Telescope outperformed all methods, exhibiting the best forecast accuracy coupled with a low and reliable time-to-result. Compared to the competing methods that exhibited, on average, a forecast error (more precisely, the symmetric mean absolute forecast error) of 29%, Telescope exhibited an error of 20% while being 2556 times faster. In particular, the methods from Uber and Facebook exhibited an error of 48% and 36%, and were 7334 and 19 times slower than Telescope, respectively.
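The following Python sketch illustrates the general decomposition-and-recombination idea described above; it is not Telescope's implementation. The linear trend, seasonal means, and random-forest recombination are simple stand-ins for the feature extraction, per-component forecasters, and regression learner of the actual method.

```python
# Illustrative sketch of a decomposition-based hybrid forecast in the spirit
# of the described approach (not Telescope's actual implementation).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def hybrid_forecast(y, period, horizon):
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y))

    # 1) Decompose: linear trend plus seasonal means (very simple stand-ins).
    slope, intercept = np.polyfit(t, y, 1)
    trend = slope * t + intercept
    season = np.array([np.mean((y - trend)[i::period]) for i in range(period)])
    season_hist = season[t % period]

    # 2) Forecast each component separately.
    t_future = np.arange(len(y), len(y) + horizon)
    trend_future = slope * t_future + intercept
    season_future = season[t_future % period]

    # 3) Recombine the component forecasts with a regression model trained on
    #    the historical components (the covariates of the final model).
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(np.column_stack([trend, season_hist]), y)
    return model.predict(np.column_stack([trend_future, season_future]))

# Usage on a toy seasonal series.
rng = np.random.default_rng(0)
history = (0.05 * np.arange(120) + 3 * np.sin(2 * np.pi * np.arange(120) / 12)
           + rng.normal(0, 0.3, 120))
print(hybrid_forecast(history, period=12, horizon=12))
```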
Contribution III - Chamulteon
To enable reliable auto-scaling, we present a hybrid auto-scaler that combines proactive and reactive techniques to scale distributed cloud applications comprising multiple services in a coordinated and cost-effective manner. More precisely, proactive adaptations are planned based on forecasts of Telescope, while reactive adaptations are triggered based on actual observations of the monitored load intensity. To solve occurring conflicts between reactive and proactive adaptations, a complex conflict resolution algorithm is implemented. Moreover, when deployed in public cloud environments, Chamulteon reviews adaptations with respect to the cloud provider's pricing scheme in order to minimize the charged costs. In more than 400 hours of experiments evaluating five competing auto-scaling mechanisms in scenarios covering five different workloads, four different applications, and three different cloud environments, Chamulteon exhibited the best auto-scaling performance and reliability while at the same time reducing the charged costs. The competing methods provided insufficient resources for (on average) 31% of the experimental time; in contrast, Chamulteon cut this time to 8% and the SLO (service level objective) violations from 18% to 6% while using up to 15% less resources and reducing the charged costs by up to 45%.
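A deliberately simplified sketch of how proactive and reactive decisions can be combined is shown below. The single "take the maximum" rule stands in for Chamulteon's far more elaborate conflict resolution and cost awareness, and the per-instance capacity is a made-up parameter.

```python
# Very simplified sketch of combining proactive (forecast-based) and reactive
# (observation-based) scaling decisions; not Chamulteon's actual algorithm.
import math

def required_instances(load_rps, per_instance_capacity_rps=100.0):
    """Capacity planning: instances needed to serve a given request rate."""
    return max(1, math.ceil(load_rps / per_instance_capacity_rps))

def hybrid_scaling_decision(observed_load, forecast_load, current_instances):
    reactive = required_instances(observed_load)    # what we observe right now
    proactive = required_instances(forecast_load)   # what we expect soon
    # Conflict resolution (simplified): never provision less than either
    # component demands, so proactive scale-ups are not undone reactively.
    target = max(reactive, proactive)
    return target, {"reactive": reactive, "proactive": proactive,
                    "current": current_instances}

target, details = hybrid_scaling_decision(observed_load=850, forecast_load=1200,
                                          current_instances=9)
print(target, details)  # -> 12 {...}
```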
The contributions of this thesis can be seen as major milestones in the domain of time series forecasting and cloud resource management. (i) This thesis is the first to present a forecasting benchmark that covers a variety of different domains with a high diversity between the analyzed time series. Based on the provided data set and the automatic evaluation procedure, the proposed benchmark contributes to enhancing the comparability of forecasting methods. The benchmarking results for different forecasting methods enable the selection of the most appropriate forecasting method for a given use case. (ii) Telescope provides the first generic and fully automated time series forecasting approach that delivers both accurate and reliable forecasts while making no assumptions about the analyzed time series. Hence, it eliminates the need for expensive, time-consuming, and error-prone procedures, such as trial-and-error searches or consulting an expert. This opens up new possibilities, especially in time-critical scenarios, where Telescope can provide accurate forecasts with a short and reliable time-to-result.
Although Telescope was applied in this thesis in the field of cloud computing, there is no limitation regarding its applicability in other domains, as demonstrated in the evaluation. Moreover, Telescope, which was made available on GitHub, is already used in a number of interdisciplinary data science projects, for instance, predictive maintenance in an Industry 4.0 context, heart failure prediction in medicine, or as a component of predictive models of beehive development. (iii) In the context of cloud resource management, Chamulteon is a major milestone for increasing the trust in cloud auto-scalers. The conflict resolution algorithm enables reliable and accurate scaling behavior that reduces losses caused by excessive resource allocation or SLO violations. In other words, Chamulteon provides reliable online adaptations minimizing charged costs while at the same time maximizing user experience.
Immersive virtual environments provide users with the opportunity to escape from the real world, but scripted dialogues can disrupt the sense of presence within the very world the user is trying to escape into. Both Non-Playable Character (NPC) to Player and NPC to NPC dialogue can feel unnatural, and the reliance on responding with pre-defined dialogue does not always meet the player’s emotional expectations or provide responses appropriate to the given context or world state. This paper investigates the application of Artificial Intelligence (AI) and Natural Language Processing to generate dynamic human-like responses within a themed virtual world. Each theme has been analysed against human-generated responses for the same seed and demonstrates invariance of rating across a range of model sizes, but shows an effect of theme and of the size of the corpus used for fine-tuning the context for the game world.
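As a hedged illustration of the general technique, and not the paper's actual pipeline or models, the following sketch generates a themed NPC reply with a pretrained causal language model from the Hugging Face transformers library. The model choice (gpt2), prompt format, and sampling parameters are assumptions; fine-tuning on a themed corpus, as described above, would be a separate step.

```python
# Illustrative sketch: condition a pretrained causal language model on a
# theme, the current world state, and the player's utterance to produce a
# dynamic NPC reply. Model, prompt format, and parameters are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def npc_reply(theme, world_state, player_utterance):
    prompt = (f"Setting: {theme}. World state: {world_state}.\n"
              f"Player: {player_utterance}\nNPC:")
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs,
                            max_new_tokens=40,
                            do_sample=True, top_p=0.9, temperature=0.8,
                            pad_token_id=tokenizer.eos_token_id)
    # Keep only the newly generated tokens (the NPC's reply).
    generated = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(generated, skip_special_tokens=True).strip()

print(npc_reply("medieval tavern", "the bridge to the castle has collapsed",
                "Have you heard any news from the castle?"))
```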
This thesis is divided into two parts.
In the first part we contribute to a working program initiated by Pudlák (2017), who lists several major complexity-theoretic conjectures relevant to proof complexity and asks for oracles that separate pairs of corresponding relativized conjectures. Among these conjectures are:
- \(\mathsf{CON}\) and \(\mathsf{SAT}\): coNP (resp., NP) does not contain complete sets that have P-optimal proof systems.
- \(\mathsf{CON}^{\mathsf{N}}\): coNP does not contain complete sets that have optimal proof systems.
- \(\mathsf{TFNP}\): there do not exist complete total polynomial search problems (also known as total NP search problems).
- \(\mathsf{DisjNP}\) and \(\mathsf{DisjCoNP}\): There do not exist complete disjoint NP pairs (coNP pairs).
- \(\mathsf{UP}\): UP does not contain complete problems.
- \(\mathsf{NP}\cap\mathsf{coNP}\): \(\mathrm{NP}\cap\mathrm{coNP}\) does not contain complete problems.
- \(\mathrm{P}\ne\mathrm{NP}\).
We construct several of the oracles that Pudlák asks for.
In the second part we investigate the computational complexity of balance problems for \(\{-,\cdot\}\)-circuits computing finite sets of natural numbers (note that \(-\) denotes the set difference). These problems naturally build on problems for integer expressions and integer circuits studied by Stockmeyer and Meyer (1973), McKenzie and Wagner (2007), and Glaßer et al. (2010).
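For readability, the following worked example assumes the standard set semantics used in this line of work: each gate of a \(\{-,\cdot\}\)-circuit computes a finite set of natural numbers, where \(X \cdot Y = \{x \cdot y : x \in X,\ y \in Y\}\) (element-wise products) and \(X - Y = \{x \in X : x \notin Y\}\) (set difference). For instance, for \(X = \{2,3\}\) and \(Y = \{1,2\}\) one obtains \(X \cdot Y = \{2,3,4,6\}\) and \(X - Y = \{3\}\).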
Our work shows that the balance problem for \(\{-,\cdot\}\)-circuits is undecidable, making it the first natural problem for integer circuits or related constraint satisfaction problems that admits only one arithmetic operation and is proven to be undecidable.
Starting from this result we precisely characterize the complexity of balance problems for proper subsets of \(\{-,\cdot\}\). These problems turn out to be complete for one of the classes L, NL, and NP.
Dynamic point cloud compression based on projections, surface reconstruction and video compression
(2021)
In this paper we present a new dynamic point cloud compression scheme based on different projection types and bit depths, combined with surface reconstruction and video compression for the obtained geometry and texture maps. Texture maps are compressed after creating Voronoi diagrams. The video codecs used are specific to geometry (FFV1) and texture (H.265/HEVC). Decompressed point clouds are reconstructed using a Poisson surface reconstruction algorithm. Comparison with the original point clouds was performed using point-to-point and point-to-plane measures. Comprehensive experiments show better performance for some projection maps: cylindrical, Miller, and Mercator projections.
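For illustration, the following Python sketch shows simplified versions of the two quality measures named above, based on nearest-neighbour correspondences; the MPEG-style PSNR normalisation and the normal estimation used in practice are omitted, and the toy data is random.

```python
# Simplified point-to-point and point-to-plane measures between a reference
# and a reconstructed point cloud (illustrative; not the paper's exact setup).
import numpy as np
from scipy.spatial import cKDTree

def point_to_point_mse(reference, reconstructed):
    """Mean squared nearest-neighbour distance from reference to reconstruction."""
    tree = cKDTree(reconstructed)
    dists, _ = tree.query(reference)
    return float(np.mean(dists ** 2))

def point_to_plane_mse(reference, reconstructed, reconstructed_normals):
    """Project each error vector onto the normal of the nearest reconstructed point."""
    tree = cKDTree(reconstructed)
    _, idx = tree.query(reference)
    error_vectors = reference - reconstructed[idx]
    projected = np.einsum("ij,ij->i", error_vectors, reconstructed_normals[idx])
    return float(np.mean(projected ** 2))

# Toy usage with random points and random unit normals.
rng = np.random.default_rng(0)
ref = rng.random((1000, 3))
rec = ref + rng.normal(0, 0.01, ref.shape)
normals = rng.normal(size=rec.shape)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
print(point_to_point_mse(ref, rec), point_to_plane_mse(ref, rec, normals))
```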
Mindfulness is considered an important factor of an individual's subjective well-being. Consequently, Human-Computer Interaction (HCI) has investigated approaches that strengthen mindfulness, i.e., by inventing multimedia technologies to support mindfulness meditation. These approaches often use smartphones, tablets, or consumer-grade desktop systems to allow everyday usage in users' private lives or in the scope of organized therapies. Virtual, Augmented, and Mixed Reality (VR, AR, MR; in short: XR) significantly extend the design space for such approaches. XR covers a wide range of potential sensory stimulation, perceptive and cognitive manipulations, content presentation, interaction, and agency. These facilities are linked to typical XR-specific perceptions that are conceptually closely related to mindfulness research, such as (virtual) presence and (virtual) embodiment. However, a successful exploitation of XR that strengthens mindfulness requires a systematic analysis of the potential interrelation and influencing mechanisms between XR technology, its properties, factors, and phenomena and existing models and theories of the construct of mindfulness. This article reports such a systematic analysis of XR-related research from HCI and life sciences to determine the extent to which existing research frameworks on HCI and mindfulness can be applied to XR technologies, the potential of XR technologies to support mindfulness, and open research gaps. Fifty papers of ACM Digital Library and National Institutes of Health's National Library of Medicine (PubMed) with and without empirical efficacy evaluation were included in our analysis. The results reveal that at the current time, empirical research on XR-based mindfulness support mainly focuses on therapy and therapeutic outcomes. Furthermore, most of the currently investigated XR-supported mindfulness interactions are limited to vocally guided meditations within nature-inspired virtual environments. While an analysis of empirical research on those systems did not reveal differences in mindfulness compared to non-mediated mindfulness practices, various design proposals illustrate that XR has the potential to provide interactive and body-based innovations for mindfulness practice. We propose a structured approach for future work to specify and further explore the potential of XR as mindfulness-support. The resulting framework provides design guidelines for XR-based mindfulness support based on the elements and psychological mechanisms of XR interactions.
Natural walking in virtual reality games is constrained by the physical boundaries defined by the size of the player’s tracking space. Impossible spaces, a redirected walking technique, enlarge the virtual environment by creating overlapping architecture and letting multiple locations occupy the same physical space. Within certain thresholds, this is subtle to the player. In this paper, we present our approach to implement such impossible spaces and describe how we handled challenges like objects with simulated physics or precomputed global illumination.
Corfu is a framework for satellite software, not only for the onboard part but also for the ground. Developing software with Corfu follows an iterative model-driven approach. The basis of the process is an engineering model: engineers formally describe the basic structure of the onboard software in configuration files, which together constitute this model. In the first step, Corfu verifies the model at different levels, not only syntactically and semantically but also at higher levels such as scheduling.
Based on the model, Corfu generates a software scaffold, which follows an application-centric approach. Onboard software images consist of a list of applications connected through communication channels called topics. Corfu’s generic and generated code covers this fundamental communication as well as telecommand and telemetry handling. All users have to do is inherit from a generated class and implement the behavior in overridden methods. For each application, the generator creates an abstract class with pure virtual methods. Those methods are callback functions, e.g., for handling telecommands or executing code in threads.
However, the model alone cannot foresee how users will implement the software. Therefore, as an innovation compared to other frameworks, Corfu introduces feedback from the user code back to the model. In this way, we extend the engineering model with information about functions/methods, their invocations, their stack usage, and information about events and telemetry emission. Indeed, it would be possible to add further information extraction for additional use cases. We extract the information in two ways: assembly analysis and source code analysis. The assembly analysis collects information about the stack usage of functions and methods.
On the one side, Corfu uses the gathered information to accomplish additional verification steps, e.g., checking whether stack usage exceeds the stack sizes of threads. On the other side, we use the gathered information to improve the performance of the onboard software. In a use case, we show how the size of the compiled binary and the bandwidth towards the ground can be reduced by exploiting source code information at run-time.
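A minimal sketch of one such verification step is given below; it is not Corfu's actual tooling. It assumes per-function stack usage is available, for example from GCC's -fstack-usage (*.su) output, and checks only the entry function of each thread against its configured stack size, whereas a real check would accumulate usage along the call graph. File paths and the thread table are hypothetical.

```python
# Illustrative stack-usage check: parse GCC *.su files and compare against
# configured thread stack sizes (simplified; no call-graph accumulation).
import glob

def parse_su_files(pattern="build/**/*.su"):
    """Return {function_name: stack_bytes} from GCC stack-usage files."""
    usage = {}
    for path in glob.glob(pattern, recursive=True):
        with open(path) as fh:
            for line in fh:
                parts = line.split()          # "<loc:function> <bytes> <qualifier>"
                if len(parts) >= 2 and parts[1].isdigit():
                    function = parts[0].split(":")[-1]
                    usage[function] = max(usage.get(function, 0), int(parts[1]))
    return usage

def check_thread_stacks(thread_entry_points, usage):
    """Warn if a thread's entry function alone needs more stack than configured."""
    for thread, (entry, stack_size) in thread_entry_points.items():
        needed = usage.get(entry, 0)
        if needed > stack_size:
            print(f"ERROR: thread '{thread}' ({entry}) needs {needed} B, "
                  f"but only {stack_size} B are configured")

threads = {"housekeeping": ("housekeeping_step", 2048),
           "telemetry":    ("telemetry_step",    1024)}
check_thread_stacks(threads, parse_su_files())
```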
The capabilities of small satellites have improved significantly in recent years. Specifically, multi-satellite systems are becoming increasingly popular, since they enable new applications. The development and testing of these multi-satellite systems is a new challenge for engineers and requires the implementation of appropriate development and testing environments. In this paper, ESTNeT, a modular network simulation framework for space–terrestrial systems, is presented. It enables discrete event simulations for the development and testing of communication protocols, as well as mission-based analysis of other satellite system aspects, such as power supply and attitude control. ESTNeT is based on the discrete event simulator OMNeT++ and will be released under an open source license.
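The following language-agnostic sketch (plain Python rather than ESTNeT/OMNeT++ code) only illustrates the discrete-event principle such simulators build on: events are processed in timestamp order from a priority queue, and handling one event may schedule future events. Event names and times are invented.

```python
# Minimal discrete-event loop: pop the earliest event, handle it, and let the
# handler schedule follow-up events (illustrative only; not ESTNeT code).
import heapq

def simulate(until_min=200.0):
    queue = [(0.0, "ground_station_pass")]        # (time in minutes, event)
    while queue:
        time, event = heapq.heappop(queue)
        if time > until_min:
            break
        print(f"t={time:6.1f} min  {event}")
        if event == "ground_station_pass":
            heapq.heappush(queue, (time + 0.5, "downlink_telemetry"))
            heapq.heappush(queue, (time + 92.0, "ground_station_pass"))

simulate()
```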
The combination of globalization and digitalization emphasizes the importance of media-related and intercultural competencies of teacher educators and preservice teachers. This article reports on the initial prototypical implementation of a pedagogical concept to foster such competencies of preservice teachers. The proposed pedagogical concept utilizes a social virtual reality (VR) framework since related work on the characteristics of VR has indicated that this medium is particularly well suited for intercultural professional development processes. The development is integrated into a larger design-based research approach that develops a theory-guided and empirically grounded professional development concept for teacher educators with a special focus on teacher educator technology competencies (TETCs). TETCs provide a suitable competence framework capable of aligning requirements for both media-related and intercultural competencies. In an exploratory study with student teachers, we designed, implemented, and evaluated a pedagogical concept. Reflection reports were qualitatively analyzed to gain insights into factors that facilitate or hinder the implementation of the immersive learning scenario as well as into the participants’ evaluation of their learning experience. The results show that our proposed pedagogical concept is particularly suitable for promoting the experience of social presence, agency, and empathy in the group.
Modulating emotional responses to virtual stimuli is a fundamental goal of many immersive interactive applications. In this study, we leverage the illusion of embodiment and show that owning a virtual body provides means to modulate emotional responses. In a single-factor repeated-measures experiment, we manipulated the degree of illusory embodiment and assessed the emotional responses to virtual stimuli. We presented emotional stimuli in the same environment as the virtual body. Participants experienced higher arousal, dominance, and more intense valence in the high embodiment condition compared to the low embodiment condition. The illusion of embodiment thus intensifies the emotional processing of the virtual environment. This result suggests that artificial bodies can increase the effectiveness of immersive applications in psychotherapy, entertainment, computer-mediated social interactions, or health applications.
An innovative satellite mission, the Innovative CubeSat for Education (InnoCube), is presented. The goal of the mission is to demonstrate “the wireless satellite”, which replaces the data harness by robust, high-speed, real-time, very short-range radio communications using the SKITH (SKIpTheHarness) technology. This will make InnoCube the first wireless satellite in history. Another technology demonstration is an experimental energy-storing satellite structure that was developed in the previous Wall#E project and might replace conventional battery technology in the future. As a further payload, the hardware for a software-based solution for receiving signals from Global Navigation Satellite Systems (GNSS) will be developed to enable precise position determination of the CubeSat. Aside from the technical goals, this work aims to contribute to the teaching of engineering skills, to the practical and sustainable education of students, to important technical and scientific publications, and to the growth of expertise at the university. This article gives an overview of the overall design of the InnoCube.
Measurements of physiological parameters provide an objective, often non-intrusive, and (at least semi-)automatic evaluation and utilization of user behavior. In addition, specific hardware devices of Virtual Reality (VR) often ship with built-in sensors, e.g., eye-tracking and movement sensors. Hence, the combination of physiological measurements and VR applications seems promising. Several approaches have investigated the applicability and benefits of this combination for various fields of applications. However, the range of possible application fields, coupled with potentially useful and beneficial physiological parameters, types of sensor, target variables and factors, and analysis approaches and techniques is manifold. This article provides a systematic overview and an extensive state-of-the-art review of the usage of physiological measurements in VR. We identified 1,119 works that make use of physiological measurements in VR. Within these, we identified 32 approaches that focus on the classification of characteristics of experience, common in VR applications. The first part of this review categorizes the 1,119 works by field of application, i.e. therapy, training, entertainment, and communication and interaction, as well as by the specific target factors and variables measured by the physiological parameters. An additional category summarizes general VR approaches applicable to all specific fields of application since they target typical VR qualities. In the second part of this review, we analyze the target factors and variables regarding the respective methods used for an automatic analysis and, potentially, classification. For example, we highlight which measurement setups have been proven to be sensitive enough to distinguish different levels of arousal, valence, anxiety, stress, or cognitive workload in the virtual realm. This work may prove useful for all researchers who want to use physiological data in VR and who want a good overview of prior approaches, their benefits, and potential drawbacks.
This study provides a systematic literature review of research (2001–2020) in the field of teaching and learning a foreign language and intercultural learning using immersive technologies. Based on 2507 sources, 54 articles were selected according to predefined selection criteria. The review is aimed at providing information about which immersive interventions are being used for foreign language learning and teaching and where potential research gaps exist. The papers were analyzed and coded according to the following categories: (1) investigation form and education level, (2) degree of immersion and technology used, (3) predictors, and (4) criteria. The review identified key research findings relating to the use of immersive technologies for learning and teaching a foreign language and intercultural learning at cognitive, affective, and conative levels. The findings revealed research gaps in the area of teachers as a target group, and virtual reality (VR) as a fully immersive intervention form. Furthermore, the studies reviewed rarely examined behavior and implicit measurements related to inter- and transcultural learning and teaching. Inter- and transcultural learning and teaching in particular is an underrepresented subject of investigation. Finally, concrete suggestions for future research are given. The systematic review contributes to the challenge of interdisciplinary cooperation between pedagogy, foreign language didactics, and Human-Computer Interaction to achieve innovative teaching-learning formats and a successful digital transformation.
Crowdsensing offers a cost-effective way to collect large amounts of environmental sensor data; however, the spatial distribution of crowdsensing sensors can hardly be influenced, as the participants carry the sensors, and, additionally, the quality of the crowdsensed data can vary significantly. Hybrid systems that use mobile users in conjunction with fixed sensors might help to overcome these limitations, as such systems allow assessing the quality of the submitted crowdsensed data and provide sensor values where no crowdsensing data are typically available. In this work, we first used a simulation study to analyze a simple crowdsensing system concerning the detection performance for spatial events, to highlight the potential and limitations of a pure crowdsensing system. The results indicate that even if only a small share of inhabitants participate in crowdsensing, events whose locations are correlated with the population density can be easily and quickly detected using such a system. In contrast, events with uniformly randomly distributed locations are much harder to detect using a simple crowdsensing-based approach. A second evaluation shows that hybrid systems improve the detection probability and time. Finally, we illustrate how to compute the minimum number of fixed sensors for given detection time thresholds in our exemplary scenario.
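A toy Monte Carlo sketch of this kind of simulation is shown below; it is not the study's actual model. City size, detection radius, sampling interval, and participant numbers are hypothetical, and participants are placed uniformly rather than according to a population density.

```python
# Toy Monte Carlo simulation: mobile participants take periodic readings at
# random positions; we measure how long until an event location is "covered".
import numpy as np

rng = np.random.default_rng(42)

def detection_time(event_xy, n_participants, city_km=10.0,
                   radius_km=0.2, interval_min=5.0, max_steps=2000):
    """Minutes until any participant reports a reading within the radius."""
    for step in range(1, max_steps + 1):
        # One reading per participant and interval, uniform over the city.
        positions = rng.random((n_participants, 2)) * city_km
        dists = np.linalg.norm(positions - event_xy, axis=1)
        if np.any(dists <= radius_km):
            return step * interval_min
    return np.inf

event = np.array([3.0, 7.5])
times = [detection_time(event, n_participants=200) for _ in range(100)]
print("median detection time [min]:", float(np.median(times)))
```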
Over the last decades, cybersecurity has become an increasingly important issue. Between 2011 and 2019 alone, the losses from cyberattacks in the United States grew by 6217%. At the same time, attacks became not only more intensive but also more versatile and diverse. Cybersecurity has become everyone’s concern. Today, service providers require sophisticated and extensive security infrastructures comprising many security functions dedicated to various cyberattacks. Still, attacks become more intense, to a level where these infrastructures can no longer keep up. Simply scaling up is no longer sufficient. To address this challenge, the Cloud Security Alliance (CSA), in a whitepaper, proposed multiple work packages for security infrastructure, leveraging the possibilities of Software-defined Networking (SDN) and Network Function Virtualization (NFV).
Security functions require a more sophisticated modeling approach than regular network functions. Notably, the property of dropping packets deemed malicious has a significant impact on Security Service Function Chains (SSFCs), i.e., service chains consisting of multiple security functions to protect against multiple attack vectors. Under attack, the order of these chains influences the end-to-end system performance depending on the attack type. Unfortunately, it is hard to predict the attack composition at system design time. Thus, we make a case for dynamic attack-aware SSFC reordering. Also, we tackle the issues of the lack of integration between security functions and the surrounding network infrastructure, the insufficient use of short-term CPU frequency boosting, and the lack of Intrusion Detection and Prevention Systems (IDPS) against database ransomware attacks.
Current works focus on characterizing the performance of security functions and their behavior under overload without considering the surrounding infrastructure. Other works aim at replacing security functions using network infrastructure features but do not consider integrating security functions within the network. Further publications deal with using SDN for security or how to deal with new vulnerabilities introduced through SDN. However, they do not take security function performance into account. NFV is a popular field for research dealing with frameworks, benchmarking methods, the combination with SDN, and implementing security functions as Virtualized Network Functions (VNFs). Research in this area brought forth the concept of Service Function Chains (SFCs) that chain multiple network functions after one another. Nevertheless, they still do not consider the specifics of security functions. The mentioned CSA whitepaper proposes many valuable ideas but leaves their realization open to others.
This thesis presents solutions to increase the performance of single security functions using SDN, performance modeling, a framework for attack-aware SSFC reordering, a solution to make better use of CPU frequency boosting, and an IDPS against database ransomware.
Specifically, the primary contributions of this work are:
• We present approaches to dynamically bypass Intrusion Detection Systems (IDS) in order to increase their performance without reducing the security level. To this end, we develop and implement three SDN-based approaches (two dynamic and one static).
We evaluate the proposed approaches regarding security and performance and show that they significantly increase the performance compared to an inline IDS without significant security deficits. We show that using software switches can further increase the performance of the dynamic approaches up to a point where they can eliminate any throughput drawbacks when using the IDS.
• We design a DDoS Protection System (DPS) against TCP SYN flood attacks in the form of a VNF that works inside an SDN-enabled network. This solution eliminates known scalability and performance drawbacks of existing solutions for this attack type.
Then, we evaluate this solution, showing that it correctly handles the connection establishment, and present solutions for an observed issue. Next, we evaluate the performance, showing that our solution increases performance up to three times. Parallelization and parameter tuning yield another 76% performance boost. Based on these findings, we discuss optimal deployment strategies.
• We introduce the idea of attack-aware SSFC reordering and explain its impact in a theoretical scenario. Then, we discuss the required information to perform this process.
We validate our claim of the importance of the SSFC order by analyzing the behavior of single security functions and SSFCs. Based on the results, we conclude that there is a massive impact on performance of up to three orders of magnitude, and we find contradicting optimal orders for different workloads. Thus, we demonstrate the need for dynamic reordering.
Last, we develop a model for SSFCs regarding traffic composition and resource demands. We classify the traffic into multiple classes and model the effect of single security functions on the traffic and their generated resource demands as functions of the incoming network traffic. Based on our model, we propose three approaches to determine optimal orders for reordering (a simplified sketch of this model follows after this list).
• We implement a framework for attack-aware SSFC reordering based on this knowledge. The framework places all security functions inside an SDN-enabled network and reorders them using SDN flows.
Our evaluation shows that the framework can enforce all routes as desired. It correctly adapts to all attacks and returns to the original state after the attacks cease. We find possible security issues at the moment of reordering and present solutions to eliminate them.
• Next, we design and implement an approach to load balance servers while taking into account their ability to go into a state of Central Processing Unit (CPU) frequency boost. To this end, the approach collects temperature information from available hosts and places services on the host that can attain the boosted mode the longest.
We evaluate this approach and show its effectiveness. For high load scenarios, the approach increases the overall performance and the performance per watt. Even better results show up for low load workloads, where not only all performance metrics improve but also the temperatures and total power consumption decrease.
• Last, we design an IDPS protecting against database ransomware attacks that comprise multiple queries to attain their goal. Our solution models these attacks using a Colored Petri Net (CPN).
A proof-of-concept implementation shows that our approach is capable of detecting attacks without creating false positives for benign scenarios. Furthermore, our solution creates only a small performance impact.
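The following sketch illustrates, with made-up numbers, how such a traffic and resource-demand model can be used to compare SSFC orders: each function filters one traffic class and incurs a per-packet cost on all traffic that reaches it, so the cheapest order depends on the current attack mix. The drop rates, cost factors, and the brute-force search over permutations are simplifications, not the three ordering approaches proposed in the thesis.

```python
# Simplified SSFC model: each security function drops the malicious share of
# one traffic class and generates resource demand proportional to the traffic
# reaching it. Cost factors and drop rates are hypothetical.
from itertools import permutations

def chain_demand(order, traffic, functions):
    """Total resource demand of one SSFC order for a given traffic mix."""
    remaining = dict(traffic)                     # packets/s per traffic class
    total_demand = 0.0
    for name in order:
        f = functions[name]
        incoming = sum(remaining.values())
        total_demand += f["cost_per_pkt"] * incoming
        # The function drops the malicious share of the class it targets.
        remaining[f["target_class"]] *= (1.0 - f["drop_rate"])
    return total_demand

def best_order(traffic, functions):
    return min(permutations(functions),
               key=lambda order: chain_demand(order, traffic, functions))

functions = {"syn_proxy": {"target_class": "syn_flood",  "drop_rate": 0.99, "cost_per_pkt": 1.0},
             "ids":       {"target_class": "exploit",    "drop_rate": 0.90, "cost_per_pkt": 4.0},
             "waf":       {"target_class": "http_abuse", "drop_rate": 0.95, "cost_per_pkt": 2.5}}
under_syn_flood = {"benign": 1e4, "syn_flood": 9e5, "exploit": 1e3, "http_abuse": 1e3}
print(best_order(under_syn_flood, functions))   # the cheap SYN filter pays off first
```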
Our contributions can help to improve the performance of security infrastructures. We see multiple application areas from data center operators over software and hardware developers to security and performance researchers. Most of the above-listed contributions found use in several research publications.
Regarding future work, we see the need to better integrate SDN-enabled security functions and SSFC reordering in data center networks. Future SSFCs should discriminate between different traffic types, and security frameworks should support the automatic learning of models for security functions. We see the need to consider energy efficiency when designing SSFCs and to take CPU boosting technologies into account when designing performance models as well as placement, scaling, and deployment strategies. Last, for a faster adaptation to recent ransomware attacks, we propose machine-assisted learning of database IDPS signatures.
A deep integration of routine care and research remains challenging in many respects. We aimed to show the feasibility of an automated transformation and transfer process feeding deeply structured data with a high level of granularity, collected for a clinical prospective cohort study, from our hospital information system (HIS) to the study's electronic data capture system, while accounting for study-specific data and visits. We developed a system integrating all necessary software and organizational processes, which was then used in the study. The process and key system components are described together with descriptive statistics to show its feasibility in general and to identify individual challenges in particular. Data of 2051 patients enrolled between 2014 and 2020 was transferred. We were able to automate the transfer of approximately 11 million individual data values, representing 95% of all entered study data. These were recorded in n = 314 variables (28% of all variables), with some variables being used multiple times for follow-up visits. Our validation approach allowed for constant good data quality over the course of the study. In conclusion, the automated transfer of multi-dimensional routine medical data from the HIS to study databases using specific study data and visit structures is complex, yet viable.
This article introduces the Off-The-Shelf Stylus (OTSS), a framework for 2D interaction (in 3D) as well as for handwriting and sketching with digital pen, ink, and paper on physically aligned virtual surfaces in Virtual, Augmented, and Mixed Reality (VR, AR, MR: XR for short). OTSS supports self-made XR styluses based on consumer-grade six-degrees-of-freedom XR controllers and commercially available styluses. The framework provides separate modules for three basic but vital features: 1) The stylus module provides stylus construction and calibration features. 2) The surface module provides surface calibration and visual feedback features for virtual-physical 2D surface alignment using our so-called 3ViSuAl procedure, and surface interaction features. 3) The evaluation suite provides a comprehensive test bed combining technical measurements for precision, accuracy, and latency with extensive usability evaluations including handwriting and sketching tasks based on established visuomotor, graphomotor, and handwriting research. The framework’s development is accompanied by an extensive open source reference implementation targeting the Unity game engine using an Oculus Rift S headset and Oculus Touch controllers. The development compares three low-cost and low-tech options to equip controllers with a tip and includes a web browser-based surface providing support for interacting, handwriting, and sketching. The evaluation of the reference implementation based on the OTSS framework identified an average stylus precision of 0.98 mm (SD = 0.54 mm) and an average surface accuracy of 0.60 mm (SD = 0.32 mm) in a seated VR environment. The time for displaying the stylus movement as digital ink on the web browser surface in VR was 79.40 ms on average (SD = 23.26 ms), including the physical controller’s motion-to-photon latency visualized by its virtual representation (M = 42.57 ms, SD = 15.70 ms). The usability evaluation (N = 10) revealed a low task load, high usability, and high user experience. Participants successfully reproduced given shapes and created legible handwriting, indicating that the OTSS and its reference implementation are ready for everyday use. We provide source code access to our implementation, including stylus and surface calibration and surface interaction features, making it easy to reuse, extend, adapt and/or replicate previous results (https://go.uniwue.de/hci-otss).
Immersive, sensor-enabled technologies such as augmented and virtual reality expand the way human beings interact with computers significantly. While these technologies are widely explored in entertainment games, they also offer possibilities for educational use. However, their uptake in education is so far very limited. Within the ImTech4Ed project, we aim at systematically exploring the power of interdisciplinary, international hackathons as a novel method to create immersive educational game prototypes and as a means to transfer these innovative technical prototypes into educational use. To achieve this, we bring together game design and development, where immersive and interactive solutions are designed and developed; computer science, where the technological foundations for immersive technologies and for scalable architectures for these are created; and teacher education, where future teachers are educated. This article reports on the concept and design of these hackathons.
Proximity dimensions and the emergence of collaboration: a HypTrails study on German AI research
(2021)
Creation and exchange of knowledge depends on collaboration. Recent work has suggested that the emergence of collaboration frequently relies on geographic proximity. However, being co-located tends to be associated with other dimensions of proximity, such as social ties or a shared organizational environment. To account for such factors, multiple dimensions of proximity have been proposed, including cognitive, institutional, organizational, social and geographical proximity. Since they strongly interrelate, disentangling these dimensions and their respective impact on collaboration is challenging. To address this issue, we propose various methods for measuring different dimensions of proximity. We then present an approach to compare and rank them with respect to the extent to which they indicate co-publications and co-inventions. We adapt the HypTrails approach, which was originally developed to explain human navigation, to co-author and co-inventor graphs. We evaluate this approach on a subset of the German research community, specifically academic authors and inventors active in research on artificial intelligence (AI). We find that social proximity and cognitive proximity are more important for the emergence of collaboration than geographic proximity.
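A minimal sketch of the adapted HypTrails comparison follows; it is not the study's code or data. Each proximity hypothesis is expressed as Dirichlet pseudo-counts over possible collaboration links, and hypotheses are ranked by the Bayesian evidence of the observed co-publication counts under a Dirichlet-multinomial model. The matrices below are toy values.

```python
# HypTrails-style hypothesis comparison on a toy co-authorship count matrix:
# higher Bayesian evidence means the hypothesis better explains the links.
import numpy as np
from scipy.special import gammaln

def log_evidence(counts, hypothesis, k=10.0):
    """Log marginal likelihood (up to a constant) of the counts given a
    hypothesis matrix scaled to k pseudo-counts per row."""
    counts = np.asarray(counts, float)
    prior = np.asarray(hypothesis, float)
    prior = k * prior / prior.sum(axis=1, keepdims=True)   # row-normalise, then scale
    row = (gammaln(prior.sum(axis=1)) - gammaln(prior.sum(axis=1) + counts.sum(axis=1))
           + (gammaln(prior + counts) - gammaln(prior)).sum(axis=1))
    return float(row.sum())

# Observed co-publication counts between four authors (toy example).
observed = [[0, 5, 1, 0], [5, 0, 0, 1], [1, 0, 0, 4], [0, 1, 4, 0]]
# Hypotheses: collaboration follows geographic vs. social proximity (toy values).
geo_proximity    = [[1, 1, 4, 4], [1, 1, 4, 4], [4, 4, 1, 1], [4, 4, 1, 1]]
social_proximity = [[1, 4, 1, 1], [4, 1, 1, 1], [1, 1, 1, 4], [1, 1, 4, 1]]

for name, hyp in [("geographic", geo_proximity), ("social", social_proximity)]:
    print(name, round(log_evidence(observed, hyp), 2))
```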