TY - JOUR
A1 - Palmisano, Chiara
A1 - Kullmann, Peter
A1 - Hanafi, Ibrahem
A1 - Verrecchia, Marta
A1 - Latoschik, Marc Erich
A1 - Canessa, Andrea
A1 - Fischbach, Martin
A1 - Isaias, Ioannis Ugo
T1 - A fully-immersive virtual reality setup to study gait modulation
JF - Frontiers in Human Neuroscience
N2 - Objective: Gait adaptation to environmental challenges is fundamental for independent and safe community ambulation. The possibility of precisely studying gait modulation using standardized protocols of gait analysis that closely resemble everyday life scenarios is still an unmet need. Methods: We developed a fully-immersive virtual reality (VR) environment in which subjects must adjust their walking pattern to avoid collision with a virtual agent (VA) crossing their gait trajectory. We collected kinematic data from 12 healthy young subjects walking in the real world (RW) and in the VR environment, both with (VR/A+) and without (VR/A-) the VA perturbation. The VR environment closely resembled the RW scenario of the gait laboratory. To ensure standardized obstacle presentation, the starting time, speed, and trajectory of the VA were defined using the kinematics of the participant as detected online during each walking trial. Results: We did not observe kinematic differences between walking in RW and VR/A-, suggesting that our VR environment per se might not induce significant changes in the locomotor pattern. When facing the VA, all subjects consistently reduced stride length and velocity while increasing stride duration. Trunk inclination and mediolateral trajectory deviation also facilitated avoidance of the obstacle. Conclusions: This proof-of-concept study shows that our VR/A+ paradigm effectively induced a timely gait modulation in a standardized, immersive, and realistic scenario. This protocol could be a powerful research tool to study gait modulation and its derangements in relation to aging and clinical conditions.
KW - gait modulation
KW - virtual reality
KW - obstacle avoidance
KW - gait analysis
KW - kinematics
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-267099
SN - 1662-5161
VL - 16
ER -
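Editorial note: the abstract above states that the VA's starting time, speed, and trajectory were defined online from the participant's kinematics. The minimal Python sketch below illustrates one way such a start delay could be derived so that participant and VA reach a predefined crossing point simultaneously. It is an illustration only, not the authors' implementation; all names, parameters, and values are assumptions.

# Illustrative sketch: schedule a virtual agent (VA) from online participant kinematics.
def schedule_va(participant_pos, participant_vel, crossing_point, va_path_length, va_speed):
    """Return the delay in seconds after which the VA should start walking.

    participant_pos / crossing_point: positions along the walking axis (m)
    participant_vel: current gait speed of the participant (m/s)
    va_path_length: distance the VA covers to reach the crossing point (m)
    va_speed: walking speed assigned to the VA (m/s)
    """
    if participant_vel <= 0:
        return None  # participant not (yet) walking forward
    time_to_crossing = (crossing_point - participant_pos) / participant_vel
    va_travel_time = va_path_length / va_speed
    # Start the VA so that both arrive at the crossing point at the same time.
    return max(time_to_crossing - va_travel_time, 0.0)

# Example: participant 4 m from the crossing point at 1.2 m/s; the VA covers 3 m
# at 1.0 m/s, so it should start after roughly 0.33 s.
print(schedule_va(participant_pos=0.0, participant_vel=1.2,
                  crossing_point=4.0, va_path_length=3.0, va_speed=1.0))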
TY - THES
A1 - Fischbach, Martin Walter
T1 - Enhancing Software Quality of Multimodal Interactive Systems
T1 - Verbesserung der Softwarequalität multimodaler interaktiver Systeme
N2 - Multimodal interfaces (MMIs) are a promising human-computer interaction paradigm. They are feasible for a wide range of environments and are especially well suited when interactions are spatially and temporally grounded in an environment in which the user is (physically) situated. Real-time interactive systems (RISs) are technical realizations of situated interaction environments, originating from application areas like virtual reality, mixed reality, human-robot interaction, and computer games. RISs include various dedicated processing, simulation, and rendering subsystems that collectively maintain a real-time simulation of a coherent application state and thus fulfill the complex functional requirements of their application areas. Two conflicting principles determine the architecture of RISs: coupling and cohesion. On the one hand, RIS subsystems commonly use specific data structures for multiple purposes to guarantee performance and rely on close semantic and temporal coupling with one another to maintain consistency. This coupling is exacerbated if the integration of artificial intelligence (AI) methods is necessary, such as for realizing MMIs. On the other hand, software qualities like reusability and modifiability call for a decoupling of subsystems and for architectural elements with single, well-defined purposes, i.e., high cohesion. To handle this contradiction, systems predominantly favor performance and consistency over reusability and modifiability, thus accepting low maintainability in general and hindered scientific progress in the long term. This thesis presents six semantics-based techniques that extend the established entity-component system (ECS) pattern and offer a solution to this contradiction without sacrificing maintainability: semantic grounding, a semantic entity-component state, grounded actions, semantic queries, code from semantics, and decoupling by semantics. The extension solves the ECS pattern's runtime type deficit, improves component granularity, facilitates access to entity properties outside a subsystem's component association, incorporates a concept to semantically describe behavior as a complement to the state representation, and even enables compatibility between RISs. The presented reference implementation, Simulator X, validates the feasibility of the six techniques and can be (re)used by other researchers, as it is available under an open-source license. It includes a repertoire of common multimodal input processing steps that showcase the particular suitability of the six techniques for such processing. Together, this repertoire forms the integrated multimodal processing framework miPro, making Simulator X a RIS platform with explicit MMI support. The six semantics-based techniques as well as the reference implementation are validated by four expert reviews, multiple proof-of-concept prototypes, and two explorative studies. Informal insights gathered throughout design and development supplement this assessment in the form of lessons learned, meant to aid future development in the area.
N2 - Multimodale Schnittstellen sind ein vielversprechendes Paradigma der Mensch-Computer-Interaktion. Sie sind in einer Vielzahl von Umgebungen einsetzbar und eignen sich besonders, wenn Interaktionen zeitlich und räumlich mit einer Umgebung verankert sind, in welcher der Benutzer (physikalisch) situiert ist. Interaktive Echtzeitsysteme (engl. Real-time Interactive Systems, RIS) sind technische Umsetzungen situierter Interaktionsumgebungen, die vor allem in Anwendungsgebieten wie der virtuellen Realität, der gemischten Realität, der Mensch-Roboter-Interaktion und im Bereich der Computerspiele eingesetzt werden. Interaktive Echtzeitsysteme bestehen aus vielfältigen dedizierten Subsystemen, die zusammen die Echtzeitsimulation eines kohärenten Anwendungszustands aufrechterhalten und die komplexen funktionalen Anforderungen des Anwendungsgebiets erfüllen. Zwei gegensätzliche Prinzipien bestimmen die Softwarearchitekturen interaktiver Echtzeitsysteme: Kopplung und Kohäsion. Einerseits verwenden Subsysteme typischerweise spezialisierte Datenstrukturen, um Performanzanforderungen gerecht zu werden. Um Konsistenz aufrechtzuerhalten, sind sie zudem auf enge zeitliche und semantische Abhängigkeiten untereinander angewiesen. Diese enge Kopplung wird verstärkt, falls Methoden der künstlichen Intelligenz in das RIS integriert werden müssen, wie es für die Umsetzung multimodaler Schnittstellen der Fall ist. Andererseits bedingen Softwarequalitätsmerkmale wie Wiederverwendbarkeit und Modifizierbarkeit die Entkopplung von Subsystemen und Architekturelementen und fordern hohe Kohäsion.
Bestehende Systeme lösen diesen Konflikt überwiegend zu Gunsten von Performanz und Konsistenz und zu Lasten von Wiederverwendbarkeit und Modifizierbarkeit. Insgesamt wird auf diese Weise geringe Wartbarkeit akzeptiert und auf lange Sicht der wissenschaftliche Fortschritt eingeschränkt. Diese Arbeit stellt sechs Softwaretechniken auf Basis von Methoden der Wissensrepräsentation vor, welche das etablierte Entity-Component-System-Entwurfsmuster (ECS) erweitern und eine Lösung des Konflikts darstellen, die die Wartbarkeit nicht missachtet: semantic grounding, semantic entity-component state, grounded actions, semantic queries, code from semantics und decoupling by semantics. Diese Erweiterung löst das Introspektionsdefizit des ECS-Musters, verbessert die Granularität von ECS-Komponenten, erleichtert den Zugriff auf Entity-Eigenschaften außerhalb der Subsystem-Komponentenzuordnung, beinhaltet ein Konzept zur einheitlichen Beschreibung von Verhalten als Komplement zur Zustandsrepräsentation und ermöglicht sogar Kompatibilität zwischen interaktiven Echtzeitsystemen. Die vorgestellte Referenzimplementierung Simulator X weist die technische Machbarkeit der sechs Softwaretechniken nach. Sie kann von anderen Forschern auf Basis einer Open-Source-Lizenz (wieder)verwendet werden und beinhaltet ein Repertoire an üblichen Verarbeitungsschritten für multimodale Eingaben, welche die besondere Eignung der sechs Softwaretechniken für eine solche Eingabeverarbeitung veranschaulichen. Dieses Repertoire bildet zusammen das integrierte multimodale Eingabeverarbeitungs-Framework miPro und macht damit Simulator X zu einem RIS, welches explizit die Umsetzung von multimodalen Schnittstellen unterstützt. Die sechs Softwaretechniken sowie die Referenzimplementierung sind durch vier Expertengutachten, eine Vielzahl an technischen Demonstrationen sowie durch zwei explorative Studien validiert. Informelle Erkenntnisse, die während Design und Entwicklung gesammelt wurden, ergänzen diese Beurteilung in Form von lessons learned, welche bei künftigen Entwicklungsarbeiten in diesem Gebiet helfen sollen.
KW - Echtzeitsystem
KW - Framework
KW - Ontologie
KW - Multimodales System
KW - Intelligent Real-time Interactive System
KW - Virtual Reality
KW - Mixed Reality
KW - Multimodal System
KW - Software Quality
KW - Software Architecture
KW - Multimodal Processing
KW - Virtuelle Realität
KW - Software Engineering
Y1 - 2017
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-152723
ER -
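Editorial note: the abstract above describes an entity-component state extended with semantic grounding and semantic queries. The minimal Python sketch below illustrates the general idea of querying entities by grounded properties instead of by component layout. It is not Simulator X code; all class and function names are invented for illustration.

# Illustrative sketch: a tiny entity-component state with a semantic query layer.
class Entity:
    def __init__(self, name):
        self.name = name
        self.components = {}                 # semantic property -> value

    def set(self, semantic_property, value):
        self.components[semantic_property] = value

    def get(self, semantic_property):
        return self.components.get(semantic_property)

class World:
    def __init__(self):
        self.entities = []

    def add(self, entity):
        self.entities.append(entity)
        return entity

    def query(self, **required):
        """Semantic query: yield all entities whose grounded properties match."""
        for e in self.entities:
            if all(e.get(k) == v for k, v in required.items()):
                yield e

world = World()
lamp = world.add(Entity("lamp"))
lamp.set("Graspable", True)
lamp.set("Position", (1.0, 0.0, 2.0))

# A subsystem (e.g., a gesture-processing step) can ask for entities by meaning
# rather than by knowing which subsystem owns which component.
print([e.name for e in world.query(Graspable=True)])   # ['lamp']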
TY - JOUR
A1 - Zimmerer, Chris
A1 - Fischbach, Martin
A1 - Latoschik, Marc Erich
T1 - Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks
JF - Multimodal Technologies and Interaction
N2 - Semantic fusion is a central requirement of many multimodal interfaces. Procedural methods like finite-state transducers and augmented transition networks have proven beneficial for implementing semantic fusion. They are compatible with the rapid development cycles common in user interface development, in contrast to machine-learning approaches, which require time-costly training and optimization. We identify seven fundamental requirements for the implementation of semantic fusion: action derivation, continuous feedback, context sensitivity, temporal relation support, access to the interaction context, and support for chronologically unsorted as well as probabilistic input. A subsequent analysis reveals, however, that there is currently no solution that fulfills the latter two requirements. As the main contribution of this article, we thus present the Concurrent Cursor concept to compensate for these shortcomings. In addition, we showcase a reference implementation, the Concurrent Augmented Transition Network (cATN), which validates the concept's feasibility in a series of proof-of-concept demonstrations as well as through a comparative benchmark. The cATN fulfills all identified requirements and fills the gap left by previous solutions. It supports the rapid prototyping of multimodal interfaces by means of five concrete traits: its declarative nature, the recursiveness of the underlying transition network, the network abstraction constructs of its description language, the utilized semantic queries, and an abstraction layer for lexical information. Our reference implementation has been, and continues to be, used in various student projects, theses, and master-level courses. It is openly available and shows that non-experts can effectively implement multimodal interfaces, even for non-trivial applications in mixed and virtual reality.
KW - multimodal fusion
KW - multimodal interface
KW - semantic fusion
KW - procedural fusion methods
KW - natural interfaces
KW - human-computer interaction
Y1 - 2018
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-197573
SN - 2414-4088
VL - 2
IS - 4
ER -
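Editorial note: the abstract above describes procedural semantic fusion via transition networks. The minimal Python sketch below illustrates the basic idea of a transition-network fuser that combines a spoken command with a pointing gesture inside a time window and derives an action; it also tolerates chronologically unsorted input. This is not the cATN implementation; the event format, names, and threshold are invented for illustration.

# Illustrative sketch: a tiny transition-network-style multimodal fuser.
FUSION_WINDOW = 1.5  # seconds; assumed value for illustration

def fuse(events):
    """events: list of (timestamp, modality, payload), possibly unsorted."""
    state = "start"
    speech = None
    for t, modality, payload in sorted(events):        # tolerate unsorted input
        if modality == "speech" and payload == "put that there":
            speech, state = (t, payload), "await_gesture"
        elif modality == "pointing" and state == "await_gesture":
            if t - speech[0] <= FUSION_WINDOW:
                return {"action": "move", "target": payload}   # derived action
            state, speech = "start", None                       # window expired
    return None

events = [(0.4, "pointing", {"object": "chair"}),
          (0.1, "speech", "put that there")]
print(fuse(events))   # {'action': 'move', 'target': {'object': 'chair'}}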