Institut Mensch - Computer - Medien
Rehabilitation of gait disorders in patients with MS or stroke is frequently carried out using conventional treadmill training. Several studies have already shown that extending this training with virtual reality can increase patients' motivation and improve therapy outcomes.
In the present study, an immersive VR application (using an HMD) for patients' gait rehabilitation was evaluated. Its usability and acceptance were assessed, and its short-term effects were compared with a semi-immersive presentation (using a monitor) and with conventional treadmill training without VR. The focus was placed in particular on examining the usability of both systems and their effects on users' walking speed and motivation.
In a within-subject design, first 36 healthy participants and subsequently 14 patients with MS or stroke took part in three experimental conditions (VR via HMD, VR via monitor, treadmill training without VR).
In both the study with healthy participants and the patient study, walking speed was higher in the HMD condition than in treadmill training without VR and in the monitor condition. The healthy participants reported higher motivation after the HMD condition than after the other conditions. No side effects in the sense of simulator sickness occurred in either group, and no increases in heart rate were detected after the VR conditions. Presence ratings were higher in the HMD condition than in the monitor condition in both groups. Both VR conditions received high usability ratings. Most of the healthy participants (89%) and patients (71%) preferred the HMD-based treadmill training among the three forms of training, and most patients could imagine using it more often.
The present study provides a structured evaluation of the usability of an immersive VR system for gait rehabilitation and compares it, for the first time, directly with a semi-immersive system and with conventional training without VR. The study confirmed the feasibility of combining treadmill training with immersive VR. Due to its high usability and low side effects, this form of training appears particularly suitable for patients, increasing their training motivation and training outcomes such as walking speed. However, since immersive VR systems still require specific technical setup procedures, a cost-benefit assessment should be carried out for the specific clinical application.
With the rise of immersive media, advertisers have started to use 360° commercials to engage and persuade consumers. Two experiments were conducted to address research gaps and to validate the positive impact of 360° commercials in realistic settings. The first study (N = 62) compared the effects of 360° commercials using either a mobile cardboard head-mounted display (HMD) or a laptop. This experiment was conducted in the participants’ living rooms and incorporated individual feelings of cybersickness as a moderator. The participants who experienced the 360° commercial with the HMD reported higher spatial presence and more positive product evaluations, but their purchase intentions increased only when their reported cybersickness was low. The second experiment (N = 197) was conducted online and analyzed the impact of 360° commercials that were experienced with mobile (smartphone/tablet) or static (laptop/desktop) devices instead of HMDs. The positive effects of omnidirectional videos were stronger when participants used mobile devices.
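A minimal sketch of how such a moderation could be tested, assuming a regression with a device-by-cybersickness interaction term; the variable names and the simulated toy data below are illustrative and not the authors' analysis pipeline or data:

```python
# Sketch: does cybersickness moderate the effect of device (HMD vs. laptop)
# on purchase intention? Tested via an interaction term in an OLS model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated toy data (NOT the study's data), built so that an HMD advantage
# shrinks as cybersickness increases.
rng = np.random.default_rng(0)
n = 60
device = rng.choice(["hmd", "laptop"], size=n)
cybersickness = rng.uniform(0, 3, size=n)
purchase_intention = (
    3.0
    + 1.5 * (device == "hmd")
    - 0.6 * (device == "hmd") * cybersickness
    + rng.normal(0, 0.5, size=n)
)
df = pd.DataFrame({
    "device": device,
    "cybersickness": cybersickness,
    "purchase_intention": purchase_intention,
})

# The C(device):cybersickness coefficient captures the moderation effect.
model = smf.ols("purchase_intention ~ C(device) * cybersickness", data=df).fit()
print(model.summary())
```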
This study provides a systematic literature review of research (2001–2020) in the field of teaching and learning a foreign language and intercultural learning using immersive technologies. Based on 2507 sources, 54 articles were selected according to predefined selection criteria. The review is aimed at providing information about which immersive interventions are being used for foreign language learning and teaching and where potential research gaps exist. The papers were analyzed and coded according to the following categories: (1) investigation form and education level, (2) degree of immersion and technology used, (3) predictors, and (4) criteria. The review identified key research findings relating to the use of immersive technologies for learning and teaching a foreign language and intercultural learning at cognitive, affective, and conative levels. The findings revealed research gaps in the area of teachers as a target group and virtual reality (VR) as a fully immersive intervention form. Furthermore, the studies reviewed rarely examined behavior or implicit measurements related to inter- and transcultural learning and teaching; inter- and transcultural learning and teaching in particular are an underrepresented subject of investigation. Finally, concrete suggestions for future research are given. The systematic review contributes to the challenge of interdisciplinary cooperation between pedagogy, foreign language didactics, and Human-Computer Interaction to achieve innovative teaching-learning formats and a successful digital transformation.
Meal-concurrent media use has been linked to several problematic outcomes, including higher caloric intake and an increased risk for obesity. Nevertheless, the sociocultural and dispositional predictors of using media while eating are not yet well understood, including potential cross-cultural differences. Inspired by the recent emergence of a new food-related media phenomenon called “mukbang”—digital eating broadcasts that have become immensely popular throughout East and Southeast Asia—we ask 296 participants from two cultures (Germany and South Korea) about their meal-concurrent media use. Our results suggest that South Koreans tend to use media more frequently during meals than Germans, especially for social purposes. Meanwhile, younger age only predicts meal-concurrent media use in the German sample. Apart from that, however, many other examined predictors (e.g., gender, living situation, body-esteem, the Big Five) remain statistically insignificant, indicating notable universality for the behavior in question. In the second part of our study, we then put special focus on the emerging mukbang trend and conduct a theory-driven exploration of its gratifications. Doing so, we find that participants' parasocial and social experiences during eating broadcasts significantly predict their enjoyment of the genre.
Numerous theories, models, measures, and investigations target the understanding of virtual presence, i.e., the sense of presence in immersive Virtual Reality (VR). Other varieties of the so-called eXtended Realities (XR), e.g., Augmented and Mixed Reality (AR and MR), incorporate immersive features to a lesser degree and continuously combine spatial cues from the real physical space and the simulated virtual space. This blurred separation calls into question whether the accumulated knowledge about virtual presence applies to the sense of presence occurring in other varieties of XR, and to corresponding outcomes. The present work bridges this gap by analyzing the construct of presence in mixed realities (MR). To achieve this, the following presents (1) a short review of definitions, dimensions, and measurements of presence in VR, and (2) state-of-the-art views on MR. Additionally, we (3) derive a working definition of MR, extending the Milgram continuum. This definition is based on entities ranging from real to virtual manifestations at a given point in time. Entities possess different degrees of referential power, determining the selection of the frame of reference. Furthermore, we (4) identify three research desiderata, including research questions about the frame of reference, the corresponding dimension of transportation, and the dimension of realism in MR. In particular, the relationship between the main aspects of virtual presence in immersive VR, i.e., the place illusion and the plausibility illusion, and the referential power of MR entities is discussed with regard to the concept, measures, and design of presence in MR. Finally, (5) we suggest an experimental setup to reveal the research heuristic behind experiments investigating presence in MR. The present work contributes to the theories and the meaning of, and approaches to, simulating and measuring presence in MR. We hypothesize that research about the essential underlying factors determining user experience (UX) in MR simulations and experiences is still in its infancy and hope that this article provides an encouraging starting point to tackle related questions.
Artificial Intelligence (AI) covers a broad spectrum of computational problems and use cases. Many of these raise profound and sometimes intricate questions about how humans interact, or should interact, with AIs. Moreover, many current or future users have only abstract ideas of what AI is, depending significantly on the specific embodiment of AI applications. Human-centered design approaches would suggest evaluating the impact of different embodiments on human perception of and interaction with AI, an approach that is difficult to realize due to the sheer complexity of application fields and embodiments in reality. Here, however, XR opens new possibilities for researching human-AI interactions. The article’s contribution is twofold: First, it provides a theoretical treatment and model of human-AI interaction based on an XR-AI continuum as a framework for, and a perspective on, different approaches to XR-AI combinations. It motivates XR-AI combinations as a method to learn about the effects of prospective human-AI interfaces and shows why the combination of XR and AI fruitfully contributes to a valid and systematic investigation of human-AI interactions and interfaces. Second, the article provides two exemplary experiments investigating the aforementioned approach for two distinct AI systems. The first experiment reveals an interesting gender effect in human-robot interaction, while the second experiment reveals an Eliza effect of a recommender system. Here the article introduces two paradigmatic implementations of the proposed XR testbed for human-AI interactions and interfaces and shows how a valid and systematic investigation can be conducted. In sum, the article opens new perspectives on how XR benefits human-centered AI design and development.
As an emerging market for voice assistants (VA), the healthcare sector places increasing demands on users’ trust in the technological system. Encouraging patients to reveal sensitive data requires them to trust the technological counterpart. In an experimental laboratory study, participants were presented with a VA, which was introduced as either a “specialist” or a “generalist” tool for sexual health. In both conditions, the VA asked the exact same health-related questions. Afterwards, participants assessed the trustworthiness of the tool and of further source layers (provider, platform provider, automatic speech recognition in general, data receiver) and reported individual characteristics (disposition to trust and to disclose sexual information). Results revealed that perceiving the VA as a specialist resulted in higher perceived trustworthiness of the VA as well as of the provider, the platform provider, and automatic speech recognition in general. Furthermore, the provider’s trustworthiness affected the perceived trustworthiness of the VA. Presenting both a theoretical line of reasoning and empirical data, the study points out the importance of the users’ perspective on the assistant. In sum, this paper argues for further analyses of trustworthiness in voice-based systems and of its effects on usage behavior, as well as its implications for the responsible design of future technology.
The concept of digital literacy has been introduced as a new cultural technique, which is regarded as essential for successful participation in a (future) digitized world. Given the increasing importance of AI, literacy concepts need to be extended to account for AI-related specifics. The easy handling of these systems results in increased usage, which contrasts with limited conceptualizations (e.g., of their future importance) and competencies (e.g., knowledge about their functional principles). With reference to voice-based conversational agents as a concrete application of AI, the present paper aims to develop a measurement instrument to assess conceptualizations of and competencies about conversational agents. In a first step, a theoretical framework of “AI literacy” is transferred to the context of conversational agent literacy. Second, the “conversational agent literacy scale” (CALS for short) is developed, constituting the first attempt to measure interindividual differences in the “(il)literate” usage of conversational agents. Twenty-nine items were derived and answered by 170 participants. An exploratory factor analysis identified five factors, leading to five subscales to assess CAL: storage and transfer of the smart speaker’s data input; the smart speaker’s functional principles; the smart speaker’s intelligent functions and learning abilities; the smart speaker’s reach and potential; and the smart speaker’s technological (surrounding) infrastructure. Preliminary insights into the construct validity and reliability of the CALS showed satisfactory results. Third, using the newly developed instrument, a student sample’s CAL was assessed, revealing intermediate values. Remarkably, owning a smart speaker did not lead to higher CAL scores, confirming our basic assumption that usage of systems does not guarantee enlightened conceptualizations and competencies. In sum, the paper contributes first insights into the operationalization and understanding of CAL as a specific subdomain of AI-related competencies.
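A minimal sketch of how a five-factor exploratory factor analysis of this kind might be run in Python; the factor_analyzer package, the varimax rotation, and the simulated item responses are illustrative assumptions, not the authors' actual analysis or data:

```python
# Sketch: exploratory factor analysis (EFA) with five factors over
# questionnaire items, analogous to the 29-item CALS described above.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Simulated placeholder responses: 170 participants x 29 Likert-type items
# (stand-ins for real CALS data, which is not reproduced here).
rng = np.random.default_rng(42)
items = pd.DataFrame(
    rng.integers(1, 6, size=(170, 29)),
    columns=[f"item_{i + 1}" for i in range(29)],
)

# Extract five factors; the rotation choice is an assumption for illustration.
efa = FactorAnalyzer(n_factors=5, rotation="varimax")
efa.fit(items)

# Loadings indicate which items group onto which of the five subscales.
loadings = pd.DataFrame(efa.loadings_, index=items.columns)
print(loadings.round(2))
```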
In this paper, we present a virtual audience simulation system for Virtual Reality (VR). The system implements an audience perception model controlling the nonverbal behaviors of virtual spectators, such as facial expressions or postures. Groups of virtual spectators are animated by a set of nonverbal behavior rules representing a particular audience attitude (e.g., indifferent or enthusiastic). Each rule specifies a nonverbal behavior category (posture, head movement, facial expression, or gaze direction) as well as three parameters: type, frequency, and proportion. In a first user study, we asked participants to pretend to be a speaker in VR and then create sets of nonverbal behavior parameters to simulate different attitudes. Participants manipulated the nonverbal behaviors of a single virtual spectator to match specific levels of engagement and opinion toward them. In a second user study, we used these parameters to design different types of virtual audiences with our nonverbal behavior rules and evaluated how they were perceived. Our results demonstrate our system’s ability to create virtual audiences with three different perceived attitudes: indifferent, critical, and enthusiastic. The analysis of the results also led to a set of recommendations and guidelines regarding attitudes and expressions for the future design of audiences for VR therapy and training applications.
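To illustrate how such behavior rules could be represented, here is a minimal sketch in Python; the class, attribute, and behavior names are hypothetical and not taken from the authors' implementation:

```python
# Illustrative sketch of a nonverbal behavior rule as described above:
# each rule pairs a behavior category with a type, a frequency, and the
# proportion of spectators displaying it. All names are hypothetical.
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    POSTURE = "posture"
    HEAD_MOVEMENT = "head_movement"
    FACIAL_EXPRESSION = "facial_expression"
    GAZE_DIRECTION = "gaze_direction"

@dataclass
class BehaviorRule:
    category: Category
    type: str          # the rule's "type" parameter, e.g., "smile" or "nod"
    frequency: float   # how often the behavior is triggered (per minute)
    proportion: float  # share of spectators showing the behavior (0-1)

# An "enthusiastic" audience attitude could be approximated by a rule set like:
enthusiastic = [
    BehaviorRule(Category.FACIAL_EXPRESSION, "smile", frequency=4.0, proportion=0.8),
    BehaviorRule(Category.HEAD_MOVEMENT, "nod", frequency=6.0, proportion=0.7),
    BehaviorRule(Category.GAZE_DIRECTION, "look_at_speaker", frequency=10.0, proportion=0.9),
]
```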
Mindfulness is considered an important factor of an individual's subjective well-being. Consequently, Human-Computer Interaction (HCI) has investigated approaches that strengthen mindfulness, e.g., by developing multimedia technologies to support mindfulness meditation. These approaches often use smartphones, tablets, or consumer-grade desktop systems to allow everyday usage in users' private lives or in the scope of organized therapies. Virtual, Augmented, and Mixed Reality (VR, AR, MR; in short: XR) significantly extend the design space for such approaches. XR covers a wide range of potential sensory stimulation, perceptual and cognitive manipulations, content presentation, interaction, and agency. These capabilities are linked to typical XR-specific perceptions that are conceptually closely related to mindfulness research, such as (virtual) presence and (virtual) embodiment. However, a successful exploitation of XR that strengthens mindfulness requires a systematic analysis of the potential interrelations and influencing mechanisms between XR technology, its properties, factors, and phenomena, and existing models and theories of the construct of mindfulness. This article reports such a systematic analysis of XR-related research from HCI and the life sciences to determine the extent to which existing research frameworks on HCI and mindfulness can be applied to XR technologies, the potential of XR technologies to support mindfulness, and open research gaps. Fifty papers from the ACM Digital Library and the National Institutes of Health's National Library of Medicine (PubMed), with and without empirical efficacy evaluation, were included in our analysis. The results reveal that, at the current time, empirical research on XR-based mindfulness support mainly focuses on therapy and therapeutic outcomes. Furthermore, most of the currently investigated XR-supported mindfulness interactions are limited to vocally guided meditations within nature-inspired virtual environments. While an analysis of empirical research on those systems did not reveal differences in mindfulness compared to non-mediated mindfulness practices, various design proposals illustrate that XR has the potential to provide interactive and body-based innovations for mindfulness practice. We propose a structured approach for future work to specify and further explore the potential of XR as a mindfulness support. The resulting framework provides design guidelines for XR-based mindfulness support based on the elements and psychological mechanisms of XR interactions.