Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks

Please always cite this URN: urn:nbn:de:bvb:20-opus-197573
Semantic fusion is a central requirement of many multimodal interfaces. Procedural methods like finite-state transducers and augmented transition networks have proven beneficial for implementing semantic fusion. They are compatible with the rapid development cycles common in user interface development, in contrast to machine-learning approaches that require time-costly training and optimization. We identify seven fundamental requirements for the implementation of semantic fusion: Action derivation, continuous feedback, context-sensitivity, temporal relation support, access to the interaction context, as well as the support of chronologically unsorted and probabilistic input. A subsequent analysis reveals, however, that there is currently no solution for fulfilling the latter two requirements. As the main contribution of this article, we thus present the Concurrent Cursor concept to compensate for these shortcomings. In addition, we showcase a reference implementation, the Concurrent Augmented Transition Network (cATN), that validates the concept’s feasibility in a series of proof-of-concept demonstrations as well as through a comparative benchmark. The cATN fulfills all identified requirements and fills the gap left by previous solutions. It supports the rapid prototyping of multimodal interfaces by means of five concrete traits: Its declarative nature, the recursiveness of the underlying transition network, the network abstraction constructs of its description language, the utilized semantic queries, and an abstraction layer for lexical information. Our reference implementation has been, and continues to be, used in various student projects, theses, and master-level courses. It is openly available and showcases that non-experts can effectively implement multimodal interfaces, even for non-trivial applications in mixed and virtual reality.
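To make the procedural approach concrete, the following minimal sketch illustrates how a small transition network could fuse a spoken command with a pointing gesture under a temporal constraint. It is purely illustrative and not the authors' cATN implementation; all names (Token, FusionNetwork, MAX_GAP) and the 1.5-second window are invented for this example.

    # Hypothetical sketch of procedural semantic fusion with a tiny transition
    # network: START -> VERB -> DONE. Not the cATN; names and values are invented.
    from dataclasses import dataclass

    @dataclass
    class Token:
        modality: str      # e.g. "speech" or "gesture"
        value: str         # recognized symbol, e.g. "select" or an object id
        timestamp: float   # seconds, used to check the temporal relation

    class FusionNetwork:
        MAX_GAP = 1.5  # assumed maximum gap between speech and gesture, in seconds

        def __init__(self):
            self.state = "START"
            self.registers = {}  # ATN-style registers holding partial parse results

        def feed(self, token: Token):
            if self.state == "START" and token.modality == "speech" and token.value == "select":
                self.registers["verb"] = token
                self.state = "VERB"
            elif self.state == "VERB" and token.modality == "gesture":
                if token.timestamp - self.registers["verb"].timestamp <= self.MAX_GAP:
                    self.state = "DONE"
                    return {"action": "select", "target": token.value}  # derived action
                # temporal constraint violated: restart the parse
                self.state = "START"
                self.registers.clear()
            return None

    if __name__ == "__main__":
        net = FusionNetwork()
        print(net.feed(Token("speech", "select", 0.0)))    # None, still waiting for a gesture
        print(net.feed(Token("gesture", "chair_3", 0.8)))  # {'action': 'select', 'target': 'chair_3'}

The actual cATN additionally addresses concurrent hypotheses, probabilistic input, and chronologically unsorted events, which this sketch deliberately omits.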

Metadata
Author(s): Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik
URN: urn:nbn:de:bvb:20-opus-197573
Document type: Journal article
University institutes: Fakultät für Mathematik und Informatik / Institut für Informatik
Language of publication: English
Title of the parent work / journal (English): Multimodal Technologies and Interaction
ISSN: 2414-4088
Year of publication: 2018
Volume: 2
Issue: 4
Original publication / source: Multimodal Technologies Interact. 2018, 2(4), 81; https://doi.org/10.3390/mti2040081
DOI: https://doi.org/10.3390/mti2040081
General subject classification (DDC): 0 Computer science, information, general works / 00 Computer science, knowledge, systems / 004 Data processing; computer science
Free keywords: human-computer interaction; multimodal fusion; multimodal interface; natural interfaces; procedural fusion methods; semantic fusion
Release date: 18.06.2020
Date of first publication: 06.12.2018
License: CC BY: Creative Commons Attribution 4.0 International