Institut Mensch - Computer - Medien
Given the growing interest of corporate stakeholders in Metaverse applications, there is a need to understand the accessibility of these technologies for marginalized populations, such as people living with dementia, to ensure inclusive design of Metaverse applications. We assessed the accessibility of extended reality technology for people living with mild cognitive impairment and dementia in order to develop accessibility guidelines for these technologies. We used four strategies to synthesize evidence on barriers to and facilitators of accessibility: (1) findings from a non-systematic literature review, (2) guidelines from well-researched technology, (3) exploration of selected mixed reality technologies, and (4) observations from four sessions and video data of people living with dementia using mixed reality technologies. We used template analysis to develop codes and themes toward accessibility guidelines. Future work can validate our preliminary findings by applying them to video recordings or testing them in experiments.
Social robots are becoming increasingly prevalent in everyday life, and sex robots are a sub-category of especially high public interest and controversy. Starting from the concept of the otaku, a term from Japanese youth culture that describes secluded persons with a high affinity for fictional manga characters, we examine individual differences behind sex robot appeal (anime and manga fandom, interest in Japanese culture, preference for indoor activities, shyness). In an online experiment, 261 participants read one of three randomly assigned descriptions of future technologies (sex robot, nursing robot, genetically modified organism) and reported on their overall evaluation, eeriness, and contact/purchase intentions. Higher anime and manga fandom was associated with higher appeal for all three future technologies. For our male subsample, sex robots and GMOs stood out, as shyness yielded a particularly strong relationship to contact/purchase intentions for these new technologies.
With the rise of immersive media, advertisers have started to use 360° commercials to engage and persuade consumers. Two experiments were conducted to address research gaps and to validate the positive impact of 360° commercials in realistic settings. The first study (N = 62) compared the effects of 360° commercials using either a mobile cardboard head-mounted display (HMD) or a laptop. This experiment was conducted in the participants’ living rooms and incorporated individual feelings of cybersickness as a moderator. The participants who experienced the 360° commercial with the HMD reported higher spatial presence and product evaluation, but their purchase intentions increased only when their reported cybersickness was low. The second experiment (N = 197) was conducted online and analyzed the impact of 360° commercials that were experienced with mobile (smartphone/tablet) or static (laptop/desktop) devices instead of HMDs. The positive effects of omnidirectional videos were stronger when participants used mobile devices.
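The moderation described above (HMD effects on purchase intention depending on cybersickness) is commonly tested with a regression that includes a device × sickness interaction term. The sketch below, using synthetic data and hypothetical variable names (the original study's data and model are not available), shows the basic shape of such an analysis:

```python
import numpy as np

def moderation_effect(device, sickness, purchase):
    """Fit purchase ~ device + sickness + device*sickness via ordinary
    least squares and return all four coefficients. A negative interaction
    coefficient means the device effect shrinks as sickness rises."""
    X = np.column_stack([
        np.ones(len(device)),   # intercept
        device,                 # 0 = laptop, 1 = HMD
        sickness,               # cybersickness score
        device * sickness,      # moderation (interaction) term
    ])
    beta, *_ = np.linalg.lstsq(X, purchase, rcond=None)
    return beta  # [intercept, device, sickness, interaction]

# Synthetic illustration: the HMD raises purchase intention,
# but the benefit fades with increasing cybersickness.
rng = np.random.default_rng(0)
n = 200
device = rng.integers(0, 2, n).astype(float)
sickness = rng.uniform(0, 10, n)
purchase = 3 + 1.5 * device - 0.15 * device * sickness + rng.normal(0, 0.3, n)
beta = moderation_effect(device, sickness, purchase)
```

This is only a minimal sketch; a full analysis would also report standard errors and simple-slope tests at low and high moderator levels.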
Spatial presence is a state in which media users temporarily overlook the mediated nature of their experience.
This study discusses stimulus-dependent structure in spontaneous eye-blink behavior as an alternative to presence self-report measures. To this end, theories and empirical evidence on presence, spontaneous eye-blink behavior, and existing approaches for presence assessment are used to link antecedent processes of presence, especially attention, to presence and to structure in blinking behavior.
Three experiments in different media environments relate three different methods for quantification of stimulus-dependent structure to an established presence scale. The results are not conclusive, but raise questions on presence and its measurement, and advance the understanding of stimulus-dependent structure in spontaneous eye-blink behavior.
Due to the complexity of research objects, theoretical concepts, and stimuli in media research, researchers in psychology and communications presumably need sophisticated measures beyond self-report scales to answer research questions on media use processes. The present study evaluates stimulus-dependent structure in spontaneous eye-blink behavior as an objective, corroborative measure for the media use phenomenon of spatial presence. To this end, a mixed methods approach is used in an experimental setting to collect, combine, analyze, and interpret data from standardized participant self-report, observation of participant behavior, and content analysis of the media stimulus. T-pattern detection is used to analyze stimulus-dependent blinking behavior, and this structural data is then contrasted with self-report data. The combined results show that behavioral indicators yield the predicted results, while self-report data yields unexpected results not predicted by the underlying theory. The use of a mixed methods approach offered insights that support further theory development and theory testing beyond a traditional, mono-method experimental approach.
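The core idea of stimulus-dependent blink structure can be illustrated with a much simpler proxy than full T-pattern detection (Magnusson's algorithm, which searches for recurring temporal patterns): compare the blink rate in short windows after stimulus events with the overall blink rate. The function below is a hypothetical, simplified sketch of that idea, not the analysis used in the study:

```python
def stimulus_locked_blink_ratio(blink_times, event_times, window=1.0, duration=60.0):
    """Ratio of the blink rate inside post-event windows to the overall
    blink rate. Values clearly above 1 suggest stimulus-dependent
    structure in blinking; windows are assumed not to overlap."""
    in_window = sum(
        any(0.0 <= b - e <= window for e in event_times) for b in blink_times
    )
    window_time = min(len(event_times) * window, duration)
    rate_in = in_window / window_time
    rate_all = len(blink_times) / duration
    return rate_in / rate_all
```

Real T-pattern detection additionally tests whether the intervals between event types recur more often than chance would predict, which this crude ratio does not capture.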
Introduction
Modern digital devices, such as conversational agents, simulate human–human interactions to an increasing extent. However, their outward appearance remains distinctly technological. While research revealed that mental representations of technology shape users' expectations and experiences, research on technology sending ambiguous cues is rare.
Methods
To bridge this gap, this study analyzes drawings of the outward appearance participants associate with voice assistants (Amazon Echo or Google Home).
Results
Human beings and (humanoid) robots were the most frequent associations, which were rated as rather trustworthy, conscientious, agreeable, and intelligent. Drawings of the Amazon Echo and the Google Home differed only marginally, but “human,” “robotic,” and “other” associations differed with respect to the ascribed humanness, consciousness, intellect, affinity to technology, and innovation ability.
Discussion
This study aims to further elaborate on the rather unconscious cognitive and emotional processes elicited by technology and discusses the implications of this perspective for developers, users, and researchers.
Conversational agents and smart speakers have grown in popularity, offering a variety of options for use that are available through intuitive speech operation. In contrast to the standard dyad of a single user and a device, voice-controlled operations can be observed by further attendees, resulting in new, more social usage scenarios. Referring to the concept of ‘media equation’ and to research on the idea of ‘computers as social actors,’ which describes the potential of technology to trigger emotional reactions in users, this paper examines the capacity of smart speakers to elicit empathy in observers of interactions. In a 2 × 2 online experiment, 140 participants watched a video of a man talking to an Amazon Echo either rudely or neutrally (factor 1), addressing it as ‘Alexa’ or ‘Computer’ (factor 2). Controlling for participants’ trait empathy, the rude treatment resulted in significantly higher ratings of empathy with the device compared to the neutral treatment. The form of address had no significant effect. Results were independent of the participants’ gender and usage experience, indicating a rather universal effect, which confirms the basic idea of the media equation. Implications for users, developers, and researchers are discussed in the light of (future) omnipresent voice-based technology interaction scenarios.
Manifestations of aggressive driving, such as tailgating, speeding, or swearing, are not trivial offenses but are serious problems with hazardous consequences—for the offender as well as the target of aggression. Aggression on the road erases the joy of driving, affects heart health, causes traffic jams, and increases the risk of traffic accidents. This work is aimed at developing a technology-driven solution to mitigate aggressive driving according to the principles of Persuasive Technology. Persuasive Technology is a scientific field dealing with computerized software or information systems that are designed to reinforce, change, or shape attitudes, behaviors, or both without using coercion or deception.
Against this background, the Driving Feedback Avatar (DFA) was developed through this work. The system is a visual in-car interface that provides the driver with feedback on aggressive driving. The main element is an abstract avatar displayed in the vehicle. The feedback is transmitted through the emotional state of this avatar: if the driver behaves aggressively, the avatar becomes increasingly angry (negative feedback); if no aggressive action occurs, the avatar is more relaxed (positive feedback). In addition, directly after an aggressive action is recognized by the system, the display flashes briefly to give the driver instant feedback on the action.
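The avatar's feedback logic described above can be read as a simple dynamical state: anger jumps when an aggressive action is detected and otherwise decays back toward a relaxed baseline. The following is a minimal sketch of such an update rule, with hypothetical parameter values (the actual DFA implementation is not described at this level of detail):

```python
def update_avatar_anger(anger, aggressive_event, decay=0.95, increment=0.3):
    """One timestep of the avatar's emotional state.

    anger            -- current anger level in [0, 1]; 0 = relaxed
    aggressive_event -- True if an aggressive action was just detected
    decay            -- per-step multiplicative decay toward relaxed
    increment        -- anger added per detected aggressive action
    """
    anger *= decay                       # relax over time (positive feedback)
    if aggressive_event:
        anger = min(1.0, anger + increment)  # get angrier (negative feedback)
    return anger
```

A rendering layer would then map the anger level onto the avatar's facial expression and trigger the brief display flash on the event itself.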
Five empirical studies were carried out as part of the human-centered design process of the DFA. They were aimed at understanding the user and the use context of the future system, ideating system ideas, and evaluating a system prototype. The initial research question concerned the triggers of aggressive driving. In a driver study on a public road, 34 participants reported their emotions and their triggers while they were driving (study 1). The second research question asked for interventions to cope with aggression in everyday life. For this purpose, 15 experts dealing with the treatment of aggressive individuals were interviewed (study 2). In total, 75 triggers of aggressive driving and 34 anti-aggression interventions were identified. Inspired by these findings, 108 participants generated more than 100 ideas of how to mitigate aggressive driving using technology in a series of ideation workshops (study 3). Based on these ideas, the concept of the DFA was elaborated. In an online survey, the concept was evaluated by 1,047 German respondents to obtain a first assessment of how it is perceived (study 4). Later on, the DFA was implemented as a prototype and evaluated in an experimental driving study with 32 participants, focusing on the system’s effectiveness (study 5). The DFA had only weak and, in part, unexpected effects on aggressive driving that require a deeper discussion.
With the DFA, this work has shown that there is room to change aggressive driving through Persuasive Technology. However, this is a very sensitive issue with special requirements regarding the design of avatar-based feedback systems in the context of aggressive driving. Moreover, this work makes a significant contribution through the number of empirical insights gained on the problem of aggressive driving and aims to encourage future research and design activities in this regard.
Mindfulness is considered an important factor of an individual's subjective well-being. Consequently, Human-Computer Interaction (HCI) has investigated approaches that strengthen mindfulness, e.g., by designing multimedia technologies to support mindfulness meditation. These approaches often use smartphones, tablets, or consumer-grade desktop systems to allow everyday usage in users' private lives or in the scope of organized therapies. Virtual, Augmented, and Mixed Reality (VR, AR, MR; in short: XR) significantly extend the design space for such approaches. XR covers a wide range of potential sensory stimulation, perceptive and cognitive manipulations, content presentation, interaction, and agency. These capabilities are linked to typical XR-specific perceptions that are conceptually closely related to mindfulness research, such as (virtual) presence and (virtual) embodiment. However, a successful exploitation of XR that strengthens mindfulness requires a systematic analysis of the potential interrelations and influencing mechanisms between XR technology, its properties, factors, and phenomena, and existing models and theories of the construct of mindfulness. This article reports such a systematic analysis of XR-related research from HCI and the life sciences to determine the extent to which existing research frameworks on HCI and mindfulness can be applied to XR technologies, the potential of XR technologies to support mindfulness, and open research gaps. Fifty papers from the ACM Digital Library and the National Institutes of Health's National Library of Medicine (PubMed), with and without empirical efficacy evaluation, were included in our analysis. The results reveal that, at the current time, empirical research on XR-based mindfulness support mainly focuses on therapy and therapeutic outcomes. Furthermore, most of the currently investigated XR-supported mindfulness interactions are limited to vocally guided meditations within nature-inspired virtual environments.
While an analysis of empirical research on those systems did not reveal differences in mindfulness compared to non-mediated mindfulness practices, various design proposals illustrate that XR has the potential to provide interactive and body-based innovations for mindfulness practice. We propose a structured approach for future work to specify and further explore the potential of XR as mindfulness-support. The resulting framework provides design guidelines for XR-based mindfulness support based on the elements and psychological mechanisms of XR interactions.
Obesity is a serious disease that can affect both physical and psychological well-being. Due to weight stigmatization, many affected individuals suffer from body image disturbances whereby they perceive their body in a distorted way, evaluate it negatively, or neglect it. Beyond established interventions such as mirror exposure, recent advancements aim to complement body image treatments with the embodiment of visually altered virtual bodies in virtual reality (VR). We present a high-fidelity prototype of an advanced VR system that allows users to embody a rapidly generated, personalized, photorealistic avatar and to realistically modulate its body weight in real time within a carefully designed virtual environment. In a formative multi-method approach, a total of 12 participants rated the general user experience (UX) of our system during the body scan and the VR experience using semi-structured qualitative interviews and multiple quantitative UX measures. Using body weight modification tasks, we further compared three different interaction methods for real-time body weight modification and measured our system’s impact on the body-image-relevant measures of body awareness and body weight perception. From the feedback received, demonstrating an already solid UX of our overall system and providing constructive input for further improvement, we derived a set of design guidelines to guide future development and evaluation processes of systems supporting body image interventions.
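Real-time body weight modulation of a scanned avatar is often realized as a morph between the base mesh and a shape target, interpolated per vertex. The sketch below illustrates that general technique with hypothetical names and toy data; it is not the method of the system described above, whose implementation details are not given here:

```python
def modulate_body_weight(base_vertices, heavy_vertices, weight_factor):
    """Linear per-vertex blend between a scanned base mesh and a
    'heavier' morph target (both as lists of (x, y, z) tuples).

    weight_factor = 0.0 reproduces the scan, 1.0 yields the full heavy
    shape; intermediate values interpolate, enabling smooth real-time
    body weight modulation when driven by a slider or controller input.
    """
    return [
        tuple(b + weight_factor * (h - b) for b, h in zip(bv, hv))
        for bv, hv in zip(base_vertices, heavy_vertices)
    ]
```

In a production system the same blend would run on the GPU over thousands of vertices per frame, possibly combined with several morph targets for different body regions.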