Bibliography of the Institut Mensch - Computer - Medien: 17 publications from 2021, all with fulltext (13 journal articles, 3 doctoral theses, 1 conference proceeding).
Conversational agents and smart speakers have grown in popularity, offering a variety of functions that are available through intuitive speech operation. In contrast to the standard dyad of a single user and a device, voice-controlled operations can be observed by further attendees, resulting in new, more social usage scenarios. Referring to the concept of the ‘media equation’ and to research on the idea of ‘computers as social actors,’ which describes the potential of technology to trigger emotional reactions in users, this paper asks whether smart speakers can elicit empathy in observers of interactions. In a 2 × 2 online experiment, 140 participants watched a video of a man talking to an Amazon Echo either rudely or neutrally (factor 1), addressing it as ‘Alexa’ or ‘Computer’ (factor 2). Controlling for participants’ trait empathy, the rude treatment resulted in significantly higher ratings of empathy with the device compared to the neutral treatment. The form of address had no significant effect. Results were independent of the participants’ gender and usage experience, indicating a rather universal effect that confirms the basic idea of the media equation. Implications for users, developers, and researchers are discussed in the light of (future) omnipresent voice-based technology interaction scenarios.
As an emerging market for voice assistants (VAs), the healthcare sector imposes increasing requirements on users’ trust in the technological system: encouraging patients to reveal sensitive data requires them to trust the technological counterpart. In an experimental laboratory study, participants were presented with a VA that was introduced as either a “specialist” or a “generalist” tool for sexual health. In both conditions, the VA asked exactly the same health-related questions. Afterwards, participants assessed the trustworthiness of the tool and of further source layers (provider, platform provider, automatic speech recognition in general, data receiver) and reported individual characteristics (disposition to trust and to disclose sexual information). Results revealed that perceiving the VA as a specialist resulted in higher trustworthiness ratings for the VA as well as for the provider, the platform provider, and automatic speech recognition in general. Furthermore, the provider’s trustworthiness affected the perceived trustworthiness of the VA. Presenting both a theoretical line of reasoning and empirical data, the study points out the importance of the users’ perspective on the assistant. In sum, this paper argues for further analyses of trustworthiness in voice-based systems and its effects on usage behavior, as well as its impact on the responsible design of future technology.
With the rise of immersive media, advertisers have started to use 360° commercials to engage and persuade consumers. Two experiments were conducted to address research gaps and to validate the positive impact of 360° commercials in realistic settings. The first study (N = 62) compared the effects of 360° commercials using either a mobile cardboard head-mounted display (HMD) or a laptop. This experiment was conducted in the participants’ living rooms and incorporated individual feelings of cybersickness as a moderator. The participants who experienced the 360° commercial with the HMD reported higher spatial presence and product evaluation, but their purchase intentions increased only when their reported cybersickness was low. The second experiment (N = 197) was conducted online and analyzed the impact of 360° commercials that were experienced with mobile (smartphone/tablet) or static (laptop/desktop) devices instead of HMDs. The positive effects of omnidirectional videos were stronger when participants used mobile devices.
Numerous theories, models, measures, and investigations target the understanding of virtual presence, i.e., the sense of presence in immersive Virtual Reality (VR). Other varieties of the so-called eXtended Realities (XR), e.g., Augmented and Mixed Reality (AR and MR), incorporate immersive features to a lesser degree and continuously combine spatial cues from the real physical space and the simulated virtual space. This blurred separation calls into question whether the knowledge accumulated about virtual presence applies to presence occurring in other varieties of XR, and to the corresponding outcomes. The present work bridges this gap by analyzing the construct of presence in mixed realities (MR). To achieve this, we present (1) a short review of definitions, dimensions, and measurements of presence in VR, and (2) the state-of-the-art views on MR. Additionally, we (3) derive a working definition of MR that extends the Milgram continuum. This definition is based on entities ranging from real to virtual manifestations at one point in time. Entities possess different degrees of referential power, which determine the selection of the frame of reference. Furthermore, we (4) identify three research desiderata, including research questions about the frame of reference, the corresponding dimension of transportation, and the dimension of realism in MR. In particular, the relationship between the main aspects of virtual presence in immersive VR, i.e., the place illusion and the plausibility illusion, and the referential power of MR entities is discussed with regard to the concept, measures, and design of presence in MR. Finally, (5) we suggest an experimental setup to reveal the research heuristic behind experiments investigating presence in MR. The present work contributes to the theories and the meaning of, and approaches to simulate and measure, presence in MR.
We argue that research on the essential underlying factors determining user experience (UX) in MR simulations and experiences is still in its infancy and hope that this article provides an encouraging starting point for tackling related questions.
Companies increasingly seek to use gay protagonists in audio-visual commercials to attract a new affluent target group. There is also growing demand for the diversity present in society to be reflected in media formats such as advertising. Studies have shown, however, that heterosexual consumers (especially men), who may be part of the company's loyal consumer base, tend to react negatively to gay-themed advertising campaigns. Searching for an instrument to mitigate this unwanted effect, the present study investigated whether carefully selected background music can shape the perceived gender of gay male advertising protagonists. In a 2 × 2 between-subjects online experiment (musical connotation × gender of the participant), 218 heterosexual participants watched a commercial promoting engagement rings that featured gay male protagonists, scored with feminine- or masculine-connoted background music. As expected, women generally reacted more positively than men to the advertising. Men exposed to the masculine-connoted background music rated the promoted brand more positively, and masculine music also enhanced (at least in the short term) these men's acceptance of gay men in general (small and medium effect sizes) more than was the case for feminine background music. Carefully selected background music affecting the perceived gender of gay male advertising protagonists may prevent negative reactions from heterosexual audiences and, therefore, motivate companies to use gay protagonists in television commercials on a more regular basis.
Manifestations of aggressive driving, such as tailgating, speeding, or swearing, are not trivial offenses but are serious problems with hazardous consequences—for the offender as well as the target of aggression. Aggression on the road erases the joy of driving, affects heart health, causes traffic jams, and increases the risk of traffic accidents. This work is aimed at developing a technology-driven solution to mitigate aggressive driving according to the principles of Persuasive Technology. Persuasive Technology is a scientific field dealing with computerized software or information systems that are designed to reinforce, change, or shape attitudes, behaviors, or both without using coercion or deception.
Against this background, the Driving Feedback Avatar (DFA) was developed through this work. The system is a visual in-car interface that provides the driver with feedback on aggressive driving. The main element is an abstract avatar displayed in the vehicle. The feedback is transmitted through the emotional state of this avatar, i.e., if the driver behaves aggressively, the avatar becomes increasingly angry (negative feedback). If no aggressive action occurs, the avatar is more relaxed (positive feedback). In addition, directly after an aggressive action is recognized by the system, the display flashes briefly to give the driver instant feedback on the action.
Five empirical studies were carried out as part of the human-centered design process of the DFA. They were aimed at understanding the user and the use context of the future system, ideating system ideas, and evaluating a system prototype. The initial research question concerned the triggers of aggressive driving. In a driver study on a public road, 34 participants reported their emotions and their triggers while they were driving (study 1). The second research question asked for interventions to cope with aggression in everyday life. For this purpose, 15 experts dealing with the treatment of aggressive individuals were interviewed (study 2). In total, 75 triggers of aggressive driving and 34 anti-aggression interventions were identified. Inspired by these findings, 108 participants generated more than 100 ideas of how to mitigate aggressive driving using technology in a series of ideation workshops (study 3). Based on these ideas, the concept of the DFA was elaborated. In an online survey, the concept was evaluated by 1,047 German respondents to get a first assessment of its perception (study 4). Later on, the DFA was implemented as a prototype and evaluated in an experimental driving study with 32 participants, focusing on the system’s effectiveness (study 5). The DFA had only weak and, in part, unexpected effects on aggressive driving that require a deeper discussion.
With the DFA, this work has shown that there is room to change aggressive driving through Persuasive Technology. However, this is a very sensitive issue with special requirements regarding the design of avatar-based feedback systems in the context of aggressive driving. Moreover, this work makes a significant contribution through the number of empirical insights gained into the problem of aggressive driving and aims to encourage future research and design activities in this regard.
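The feedback logic of the DFA described above, an avatar whose anger rises with aggressive actions and relaxes otherwise, plus a brief flash right after a detected action, can be pictured with a minimal sketch. The class name, update rule, and constants below are invented for illustration and do not reproduce the actual DFA implementation.

```python
class FeedbackAvatar:
    """Toy model of an avatar giving feedback on aggressive driving."""

    def __init__(self):
        self.anger = 0.0   # 0.0 = fully relaxed, 1.0 = maximally angry
        self.flash = False # brief instant-feedback flash on the display

    def update(self, aggressive_action, dt):
        """Advance the avatar state by dt seconds of driving."""
        if aggressive_action:
            # Negative feedback: anger rises, display flashes once.
            self.anger = min(1.0, self.anger + 0.2)
            self.flash = True
        else:
            # Positive feedback: anger slowly decays toward relaxed.
            self.anger = max(0.0, self.anger - 0.05 * dt)
            self.flash = False

avatar = FeedbackAvatar()
avatar.update(aggressive_action=True, dt=1.0)
```

A real system would drive the increment from classified driving events (tailgating, speeding) rather than a boolean flag; the decay models the "more relaxed" state when no aggression occurs.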
We investigated the accuracy of gender stereotypes regarding digital game genre preferences. In Study 1, 484 female and male participants rated their preference for 17 game genres (gender differences). In Study 2, another sample of 226 participants rated the extent to which the same genres were presumably preferred by women or men (gender stereotypes). We then compared the results of both studies in order to determine the accuracy of the gender stereotypes. Study 1 revealed actual gender differences for most genres—mostly of moderate size. Study 2 revealed substantial gender stereotypes about genre preferences. When comparing the results from both studies, we found that gender stereotypes were accurate in direction for most genres. However, they were, to some degree, inaccurate in size: For most genres, gender stereotypes overestimated the actual gender difference with a moderate mean effect size.
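The accuracy logic in the abstract above, stereotypes can be accurate in direction yet overestimate the size of the actual gender difference, can be illustrated with standardized effect sizes. All numbers below are invented for illustration and are not data from the study.

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized mean difference between two groups (pooled SD)."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Invented preference ratings for one genre (1-5 scale).
men   = [4, 5, 4, 3, 5, 4]
women = [3, 3, 2, 4, 3, 2]

actual_d = cohens_d(men, women)  # actual gender difference
stereotyped_d = 2.5              # invented stereotype-based estimate

# Accurate in direction: both effects point the same way.
same_direction = (actual_d > 0) == (stereotyped_d > 0)
# Inaccurate in size: the stereotype exceeds the actual magnitude.
overestimates = abs(stereotyped_d) > abs(actual_d)
```

With these toy numbers the stereotype gets the direction right but overstates the difference, mirroring the pattern the study reports across most genres.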
Meal-concurrent media use has been linked to several problematic outcomes, including higher caloric intake and an increased risk for obesity. Nevertheless, the sociocultural and dispositional predictors of using media while eating are not yet well understood, including potential cross-cultural differences. Inspired by the recent emergence of a new food-related media phenomenon called “mukbang”—digital eating broadcasts that have become immensely popular throughout East and Southeast Asia—we survey 296 participants from two cultures (Germany and South Korea) about their meal-concurrent media use. Our results suggest that South Koreans tend to use media more frequently during meals than Germans, especially for social purposes. Meanwhile, younger age only predicts meal-concurrent media use in the German sample. Apart from that, however, many other examined predictors (e.g., gender, living situation, body-esteem, the Big Five) remain statistically insignificant, indicating notable universality for the behavior in question. In the second part of our study, we then put special focus on the emerging mukbang trend and conduct a theory-driven exploration of its gratifications. Doing so, we find that participants' parasocial and social experiences during eating broadcasts significantly predict their enjoyment of the genre.
In this paper, we present a virtual audience simulation system for Virtual Reality (VR). The system implements an audience perception model controlling the nonverbal behaviors of virtual spectators, such as facial expressions or postures. Groups of virtual spectators are animated by a set of nonverbal behavior rules representing a particular audience attitude (e.g., indifferent or enthusiastic). Each rule specifies a nonverbal behavior category (posture, head movement, facial expression, or gaze direction) as well as three parameters: type, frequency, and proportion. In a first user study, we asked participants to pretend to be a speaker in VR and to create sets of nonverbal behavior parameters to simulate different attitudes. Participants manipulated the nonverbal behaviors of a single virtual spectator to match specific levels of engagement and opinion toward them. In a second user study, we used these parameters to design different types of virtual audiences with our nonverbal behavior rules and evaluated how they were perceived. Our results demonstrate our system’s ability to create virtual audiences with three different perceived attitudes: indifferent, critical, and enthusiastic. The analysis of the results also led to a set of recommendations and guidelines regarding attitudes and expressions for the future design of audiences for VR therapy and training applications.
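The rule structure described in the abstract above (a nonverbal behavior category plus the parameters type, frequency, and proportion) could be represented along the following lines. The class and field names, and the example rule set, are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    """Nonverbal behavior channels controlled by the rules."""
    POSTURE = "posture"
    HEAD_MOVEMENT = "head_movement"
    FACIAL_EXPRESSION = "facial_expression"
    GAZE_DIRECTION = "gaze_direction"

@dataclass
class BehaviorRule:
    category: Category  # which nonverbal channel the rule controls
    type: str           # concrete behavior, e.g. "nodding" or "smile"
    frequency: float    # how often the behavior is triggered (per minute)
    proportion: float   # share of spectators in the group showing it (0..1)

# An invented "enthusiastic" audience attitude expressed as a rule set.
enthusiastic = [
    BehaviorRule(Category.POSTURE, "lean_forward", 2.0, 0.8),
    BehaviorRule(Category.FACIAL_EXPRESSION, "smile", 4.0, 0.9),
    BehaviorRule(Category.GAZE_DIRECTION, "look_at_speaker", 6.0, 0.95),
]
```

An "indifferent" attitude would simply swap in rules with low frequencies and proportions (e.g., averted gaze, neutral expression).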
The combination of globalization and digitalization emphasizes the importance of media-related and intercultural competencies of teacher educators and preservice teachers. This article reports on the initial prototypical implementation of a pedagogical concept to foster such competencies of preservice teachers. The proposed pedagogical concept utilizes a social virtual reality (VR) framework since related work on the characteristics of VR has indicated that this medium is particularly well suited for intercultural professional development processes. The development is integrated into a larger design-based research approach that develops a theory-guided and empirically grounded professional development concept for teacher educators with a special focus on teacher educator technology competencies (TETCs). TETCs provide a suitable competence framework capable of aligning requirements for both media-related and intercultural competencies. In an exploratory study with student teachers, we designed, implemented, and evaluated a pedagogical concept. Reflection reports were qualitatively analyzed to gain insights into factors that facilitate or hinder the implementation of the immersive learning scenario as well as into the participants’ evaluation of their learning experience. The results show that our proposed pedagogical concept is particularly suitable for promoting the experience of social presence, agency, and empathy in the group.
Artificial Intelligence (AI) covers a broad spectrum of computational problems and use cases. Many of them raise profound and sometimes intricate questions of how humans interact or should interact with AIs. Moreover, many current or future users have only abstract ideas of what AI is, which depend significantly on the specific embodiment of AI applications. Human-centered design approaches would suggest evaluating the impact of different embodiments on the human perception of and interaction with AI, an approach that is difficult to realize due to the sheer complexity of application fields and embodiments in reality. Here, however, XR opens new possibilities for researching human-AI interactions. The article’s contribution is twofold: First, it provides a theoretical treatment and model of human-AI interaction based on an XR-AI continuum as a framework for, and a perspective on, different approaches to XR-AI combinations. It motivates XR-AI combinations as a method to learn about the effects of prospective human-AI interfaces and shows why the combination of XR and AI contributes fruitfully to a valid and systematic investigation of human-AI interactions and interfaces. Second, the article provides two exemplary experiments investigating the aforementioned approach for two distinct AI systems. The first experiment reveals an interesting gender effect in human-robot interaction, while the second reveals an Eliza effect of a recommender system. Here the article introduces two paradigmatic implementations of the proposed XR testbed for human-AI interactions and interfaces and shows how a valid and systematic investigation can be conducted. In sum, the article opens new perspectives on how XR benefits human-centered AI design and development.
Background music is used frequently in various audiovisual media formats, usually with a very specific intention. The aim of this dissertation was to identify, through a comprehensive review of previous research, and to empirically test factors that influence whether background music in audiovisual media formats predictably fulfills the functions attributed to it. As an interdisciplinary research subject, background music requires a perspective that is as balanced as possible and that takes the specifics of the music and of the media context (and their potential interactions) equally into account. To do justice to background music as a complex audiovisual stimulus in empirical research, practical implications also play a major role. Taking these challenges into account, the effectiveness of background music was examined in five studies in the context of three different media formats. Since advertising, film, and information-conveying media formats (such as documentaries and TV magazines) represent the three core functions of media (persuasion, entertainment, and information), this triad was intended to cover the range of potential effects of effectively deployed music, and of the factors influencing its effect, as comprehensively as possible (though certainly not exhaustively). Across all media formats, congruence, i.e., an intuitively perceived fit between music and media context, can be identified as an important factor strengthening the effectiveness of background music; it can be achieved by carefully matching the specifics of the music and the media format at the emotional, associative, and structural levels.
When applied in this way, music can systematically increase the effectiveness of commercials (studies 1 and 2), convey meaning in a targeted manner and thereby shape the perception and interpretation of (ambiguous) film scenes (studies 3 and 4), or, under certain conditions, increase the persuasive potential of an information-conveying media format and thus influence recipients' opinion formation, at least in the short term (study 5). The dissertation illustrates how, by adopting an interdisciplinary perspective and attending to practical implications, established knowledge can be consolidated and new insights into the use and effects of background music can be derived for research and media practice, including an outlook on the resulting potential for future research.
The concept of digital literacy has been introduced as a new cultural technique, regarded as essential for successful participation in a (future) digitized world. Given the increasing importance of AI, literacy concepts need to be extended to account for AI-related specifics. The easy handling of these systems results in increased usage, which contrasts with limited conceptualizations (e.g., imagination of future importance) and competencies (e.g., knowledge about functional principles). With reference to voice-based conversational agents as a concrete application of AI, the present paper aims to develop a measurement instrument to assess conceptualizations of and competencies about conversational agents. In a first step, a theoretical framework of “AI literacy” is transferred to the context of conversational agent literacy. Second, the “conversational agent literacy scale” (CALS) is developed, constituting the first attempt to measure interindividual differences in the “(il)literate” usage of conversational agents. Twenty-nine items were derived and answered by 170 participants. An exploratory factor analysis identified five factors, leading to five subscales to assess CAL: storage and transfer of the smart speaker’s data input; the smart speaker’s functional principles; the smart speaker’s intelligent functions and learning abilities; the smart speaker’s reach and potential; and the smart speaker’s technological (surrounding) infrastructure. Preliminary insights into the construct validity and reliability of CALS showed satisfying results. Third, using the newly developed instrument, a student sample’s CAL was assessed, revealing intermediate values. Remarkably, owning a smart speaker did not lead to higher CAL scores, confirming our basic assumption that usage of systems does not guarantee enlightened conceptualizations and competencies.
In sum, the paper contributes first insights into the operationalization and understanding of CAL as a specific subdomain of AI-related competencies.
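Scoring a multi-factor scale like the one described above typically means averaging a respondent's item responses per subscale. The item groupings and responses below are invented for illustration and do not reproduce the actual CALS items.

```python
from statistics import mean

# Hypothetical mapping of item indices to the five CALS subscales.
subscales = {
    "data_storage_transfer": [0, 1, 2],
    "functional_principles": [3, 4, 5],
    "intelligent_functions": [6, 7],
    "reach_and_potential": [8, 9],
    "surrounding_infrastructure": [10, 11],
}

def score(responses, subscales):
    """Mean response per subscale (e.g., on a 1-5 agreement scale)."""
    return {name: mean(responses[i] for i in items)
            for name, items in subscales.items()}

# One invented respondent's answers to the twelve hypothetical items.
responses = [3, 4, 2, 5, 4, 3, 2, 2, 4, 5, 1, 2]
profile = score(responses, subscales)
```

The resulting profile gives one value per subscale, which is the kind of per-factor result the abstract's "intermediate values" refers to at the sample level.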
Mindfulness is considered an important factor in an individual's subjective well-being. Consequently, Human-Computer Interaction (HCI) has investigated approaches that strengthen mindfulness, e.g., by developing multimedia technologies to support mindfulness meditation. These approaches often use smartphones, tablets, or consumer-grade desktop systems to allow everyday usage in users' private lives or in the scope of organized therapies. Virtual, Augmented, and Mixed Reality (VR, AR, MR; in short: XR) significantly extend the design space for such approaches. XR covers a wide range of potential sensory stimulation, perceptive and cognitive manipulations, content presentation, interaction, and agency. These facilities are linked to typical XR-specific perceptions that are conceptually closely related to mindfulness research, such as (virtual) presence and (virtual) embodiment. However, successfully exploiting XR to strengthen mindfulness requires a systematic analysis of the potential interrelations and influencing mechanisms between XR technology, its properties, factors, and phenomena, and existing models and theories of the construct of mindfulness. This article reports such a systematic analysis of XR-related research from HCI and the life sciences to determine the extent to which existing research frameworks on HCI and mindfulness can be applied to XR technologies, the potential of XR technologies to support mindfulness, and open research gaps. Fifty papers from the ACM Digital Library and the National Institutes of Health's National Library of Medicine (PubMed), with and without empirical efficacy evaluation, were included in our analysis. The results reveal that, at the current time, empirical research on XR-based mindfulness support mainly focuses on therapy and therapeutic outcomes. Furthermore, most of the currently investigated XR-supported mindfulness interactions are limited to vocally guided meditations within nature-inspired virtual environments.
While an analysis of empirical research on those systems did not reveal differences in mindfulness compared to non-mediated mindfulness practices, various design proposals illustrate that XR has the potential to provide interactive and body-based innovations for mindfulness practice. We propose a structured approach for future work to specify and further explore the potential of XR as a mindfulness support. The resulting framework provides design guidelines for XR-based mindfulness support based on the elements and psychological mechanisms of XR interactions.
Computer games are highly immersive, engaging, and motivating learning environments. By providing a tutorial at the start of a new game, players learn the basics of the game's underlying principles and practice how to play the game successfully. During the actual gameplay, players repetitively apply this knowledge and thus improve it through repetition. Computer games also confront players with a constant stream of new challenges that increase in difficulty over time. As a result, computer games require players to transfer their knowledge to master these new challenges. A computer game consists of several game mechanics. Game mechanics are the rules of a computer game and encode the game's underlying principles. They create the virtual environments, generate a game's challenges, and allow players to interact with the game. Game mechanics can also encode real-world knowledge, which players may acquire via gameplay. However, the actual process of encoding and learning knowledge through game mechanics has not yet been thoroughly defined. This thesis therefore proposes a theoretical model to define knowledge learning through game mechanics: the Gamified Knowledge Encoding. The model is applied to design a serious game for affine transformations, i.e., GEtiT, and to predict the learning outcome of playing a computer game that encodes orbital mechanics in its game mechanics, i.e., Kerbal Space Program. To assess the effects of different visualization technologies on the overall learning outcome, GEtiT visualizes the gameplay in desktop 3D and immersive virtual reality. The model's applicability to effective game design as well as GEtiT's overall design are evaluated in a usability study. The learning outcome of playing GEtiT and Kerbal Space Program is assessed in four additional user studies.
The studies' results validate the use of the Gamified Knowledge Encoding for developing effective serious games and for predicting the learning outcome of existing serious games. Compared to a traditional learning method, GEtiT and Kerbal Space Program yield a similar training effect but a higher motivation to tackle the assignments. In conclusion, this thesis expands the understanding of using game mechanics for effective knowledge learning. The presented results are of high importance for researchers, educators, and developers, as they also provide guidelines for the development of effective serious games.
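The idea of encoding affine transformations in game mechanics, as the abstract above describes for GEtiT, can be pictured as a "game move" that applies a transformation matrix to an object the player controls. This minimal 2D sketch in homogeneous coordinates illustrates the mathematical concept only; it is not GEtiT's implementation, which operates in 3D.

```python
import math

def affine(point, matrix):
    """Apply a 3x3 affine matrix to a 2D point in homogeneous coordinates."""
    x, y = point
    vec = (x, y, 1.0)
    return tuple(sum(matrix[r][c] * vec[c] for c in range(3))
                 for r in range(2))

def translation(dx, dy):
    """Affine matrix shifting points by (dx, dy)."""
    return [[1, 0, dx], [0, 1, dy], [0, 0, 1]]

def rotation(theta):
    """Affine matrix rotating points by theta radians about the origin."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

# A hypothetical two-step "move": translate the object, then rotate it.
p = affine((1.0, 0.0), translation(2.0, 0.0))
p = affine(p, rotation(math.pi / 2))
```

Chaining such moves is exactly the kind of repeated application through which, per the model, players internalize how the encoded transformations compose.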
The design and evaluation of assisting technologies to support behavior change processes have become an essential topic within the field of human-computer interaction research in general and the field of immersive intervention technologies in particular. The mechanisms and success of behavior change techniques and interventions are broadly investigated in the field of psychology. However, it is not always easy to adapt these psychological findings to the context of immersive technologies. The lack of theoretical foundation also leads to a lack of explanation as to why and how immersive interventions support behavior change processes. The Behavioral Framework for immersive Technologies (BehaveFIT) addresses this lack by 1) presenting an intelligible categorization and condensation of psychological barriers and immersive features, by 2) suggesting a mapping that shows why and how immersive technologies can help to overcome barriers, and finally by 3) proposing a generic prediction path that enables a structured, theory-based approach to the development and evaluation of immersive interventions. These three steps explain how BehaveFIT can be used and include guiding questions for each step. Further, two use cases illustrate the usage of BehaveFIT. Thus, the present paper contributes guidance for immersive intervention design and evaluation, showing how immersive interventions can support behavior change processes and explaining and predicting why and how immersive interventions can bridge the intention-behavior gap.
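The mapping step in a framework like the one described above, linking psychological barriers to immersive features that may help overcome them, can be sketched as a simple lookup. The barrier and feature names below are invented examples, not BehaveFIT's actual categories.

```python
# Hypothetical barrier-to-feature mapping in the spirit of a
# behavior-change framework; all names are illustrative only.
barrier_to_features = {
    "low_self_efficacy": ["embodiment", "virtual_role_models"],
    "abstract_consequences": ["time_travel_scenarios", "vivid_feedback"],
    "lack_of_motivation": ["gamification", "social_presence"],
}

def suggest_features(barriers):
    """Collect candidate immersive features for a set of barriers,
    preserving order and dropping duplicates."""
    features = []
    for barrier in barriers:
        for feature in barrier_to_features.get(barrier, []):
            if feature not in features:
                features.append(feature)
    return features

# An intervention designer identifies two barriers for a target group.
plan = suggest_features(["low_self_efficacy", "lack_of_motivation"])
```

The prediction path the abstract mentions would then go one step further and state which outcomes each selected feature is expected to influence.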
This study provides a systematic literature review of research (2001–2020) in the field of teaching and learning a foreign language and intercultural learning using immersive technologies. Based on 2507 sources, 54 articles were selected according to predefined selection criteria. The review is aimed at providing information about which immersive interventions are being used for foreign language learning and teaching and where potential research gaps exist. The papers were analyzed and coded according to the following categories: (1) investigation form and education level, (2) degree of immersion and technology used, (3) predictors, and (4) criteria. The review identified key research findings relating to the use of immersive technologies for learning and teaching a foreign language and intercultural learning at the cognitive, affective, and conative levels. The findings revealed research gaps in the areas of teachers as a target group and virtual reality (VR) as a fully immersive intervention form. Furthermore, the studies reviewed rarely examined behavior and implicit measurements related to inter- and transcultural learning and teaching. Inter- and transcultural learning and teaching in particular is an underrepresented subject of investigation. Finally, concrete suggestions for future research are given. The systematic review contributes to the challenge of interdisciplinary cooperation between pedagogy, foreign language didactics, and Human-Computer Interaction to achieve innovative teaching-learning formats and a successful digital transformation.