Refine
Has Fulltext
- yes (72)
Is part of the Bibliography
- yes (72)
Year of publication
Document Type
- Journal article (50)
- Doctoral Thesis (17)
- Conference Proceeding (4)
- Working Paper (1)
Language
- English (72)
Keywords
- virtual reality (8)
- Mensch-Maschine-Kommunikation (4)
- Psychologie (4)
- Medien (3)
- human-computer interaction (3)
- measurement (3)
- presence (3)
- transportation (3)
- Benutzeroberfläche (2)
- Evolutionspsychologie (2)
Institute
- Institut Mensch - Computer - Medien (72)
Sonstige beteiligte Institutionen
- DATE Lab, KITE Research Institute, University Health Network, Toronto, Canada (1)
- Institut für Evangelische Theologie und Religionspädagogik (1)
- Social and Technological Systems (SaTS) lab, School of Art, Media, Performance and Design, York University, Toronto, Canada (1)
- Technische Universität Dortmund, Fakultät für Informatik, Computer Graphics Group (1)
- University of Iceland (Human Behaviour Laboratory) (1)
- Zentrum für soziale Implikationen künstlicher Intelligenz (SOCAI) (1)
Few topics have been the subject of more controversy than those encapsulated by the terms "sex" and "gender". Social-cultural and biological-evolutionary patterns of argumentation frequently clash, and the public debate in particular appears stuck in a stalemate between the two competing camps.
From a psychological perspective, both topics appear deeply intertwined and cannot easily be separated. This study pursues an integrative approach to better understand the roots of the differences best subsumed under the term sex/gender. It will become apparent that nature and nurture variables interact to form the complex system of human behavior and experience.
Over the past years, scholars have explored eudaimonic video game experiences: profound entertainment responses that include meaningfulness and reflection, among others. In a comparatively short time, a plethora of explanations for the formation of such eudaimonic gaming experiences has been developed across multiple disciplines, making it difficult to keep track of the state of theory development. Hence, we present a theoretical overview of these explanations. We first provide a working definition of eudaimonic gaming experiences (i.e., experiences that reflect human virtues and encourage players to fully develop their potential as human beings) and outline four layers of video games (agency, narrative, sociality, and aesthetics) that form the basis for theorizing. Subsequently, we provide an overview of the theoretical approaches, categorizing them based on which of the four game layers their explanation mainly rests upon. Finally, we suggest the contingency of the different theoretical approaches for explaining eudaimonic experiences by describing how their usefulness varies as a function of interactivity. As different types of games offer players various levels of interactivity, our overview suggests which theories and which game layers should be considered when examining eudaimonic experiences for specific game types.
A substantial number of people refused to get vaccinated against COVID-19, which prompts the question as to why. We focus on the role of individual worldviews about the nature and generation of knowledge (epistemic beliefs). We propose a model that includes epistemic beliefs, their relationship to the Dark Factor of Personality (D), and their mutual effect on the probability of having been vaccinated against COVID-19. Based on a US nationally representative sample (N = 1268), we show that stronger endorsement of post-truth epistemic beliefs was associated with a lower probability of having been vaccinated against COVID-19. D was also linked to a lower probability of having been vaccinated against COVID-19, which can be explained by post-truth epistemic beliefs. Our results indicate that the more individuals deliberately refrain from adhering to the better argument, the less likely they are vaccinated. More generally, post-truth epistemic beliefs pose a challenge for rational communication.
Learning about informal fallacies and the detection of fake news: an experimental intervention
(2023)
The philosophical concept of informal fallacies (arguments that fail to provide sufficient support for a claim) is introduced and connected to the topic of fake news detection. We assumed that the ability to identify informal fallacies can be trained and that this ability enables individuals to better distinguish between fake news and real news. We tested these assumptions in a two-group between-participants experiment (N = 116). The two groups participated in a 30-minute-long text-based learning intervention: either about informal fallacies or about fake news. Learning about informal fallacies enhanced participants’ ability to identify fallacious arguments one week later. Furthermore, the ability to identify fallacious arguments was associated with a better discernment between real news and fake news. Participants in the informal fallacy intervention group and the fake news intervention group performed equally well on the news discernment task. The contribution of (identifying) informal fallacies for research and practice is discussed.
Social patterns and roles can develop when users talk to intelligent voice assistants (IVAs) daily. The current study investigates whether users assign different roles to devices and how this affects their usage behavior, user experience, and social perceptions. Since social roles take time to establish, we equipped 106 participants with Alexa or Google assistants and some smart home devices and observed their interactions for nine months. We analyzed diverse subjective (questionnaire) and objective data (interaction data). By combining social science and data science analyses, we identified two distinct clusters—users who assigned a friendship role to IVAs over time and users who did not. Interestingly, these clusters exhibited significant differences in their usage behavior, user experience, and social perceptions of the devices. For example, participants who assigned a role to IVAs attributed more friendship to them, used them more frequently, reported more enjoyment during interactions, and perceived more empathy for the IVAs. In addition, these users had distinct personal requirements; for example, they reported more loneliness. This study provides valuable insights into the role-specific effects and consequences of voice assistants. Recent developments in conversational language models such as ChatGPT suggest that the findings of this study could make an important contribution to the design of dialogic human–AI interactions.
In the context of medical device training, e-Learning can address problems such as unstandardized content and different learning paces. However, staff and students value hands-on activities during medical device training. In a blended learning approach, we examined whether using a syringe pump while completing an e-Learning program improves the procedural skills needed to operate the pump compared to using the e-Learning program only. In two experiments, the e-Learning only group learned using only the e-Learning program. The e-Learning + hands-on group was instructed to use a syringe pump during the e-Learning to repeat the presented content (section “Experiment 1”) or to alternate between learning on the e-Learning program and applying the learned content using the pump (section “Experiment 2”). We conducted a skills test and a knowledge test, and assessed confidence in using the pump immediately after learning and two weeks later. Simply repeating the content (section “Experiment 1”) did not improve the performance of e-Learning + hands-on compared with e-Learning only. The instructed, alternating learning process (section “Experiment 2”) resulted in significantly better skills test performance for e-Learning + hands-on compared to e-Learning only. Only a structured learning process based on multimedia learning principles and memory research improved procedural skills in relation to operating a medical device.
When interacting with sophisticated digital technologies, people often fall back on the same interaction scripts they apply to the communication with other humans—especially if the technology in question provides strong anthropomorphic cues (e.g., a human-like embodiment). Accordingly, research indicates that observers tend to interpret the body language of social robots in the same way as they would with another human being. Backed by initial evidence, we assumed that a humanoid robot will be considered as more dominant and competent, but also as more eerie and threatening once it strikes a so-called power pose. Moreover, we pursued the research question whether these effects might be accentuated by the robot’s body size. To this end, the current study presented 204 participants with pictures of the robot NAO in different poses (expansive vs. constrictive), while also manipulating its height (child-sized vs. adult-sized). Our results show that NAO’s posture indeed exerted strong effects on perceptions of dominance and competence. Conversely, participants’ threat and eeriness ratings remained statistically independent of the robot’s depicted body language. Further, we found that the machine’s size did not affect any of the measured interpersonal perceptions in a notable way. The study findings are discussed considering limitations and future research directions.
The relevance of user experience in safety-critical domains has been questioned and lacks empirical investigation. Based on previous studies examining user experience in consumer technology, we conducted an online survey on positive experiences with interactive technology in acute care. The participants were anaesthesiologists, nurses, and paramedics (N = 55) from three German cities. We report qualitative and quantitative data examining (1) the relevance and notion of user experience, (2) motivational orientations and psychological need satisfaction, and (3) potential correlates of hedonic, eudaimonic, and extrinsic motivations such as affect or meaning. Our findings reveal that eudaimonia was the most salient aspect of these experiences and that the relevance of psychological needs is ranked differently than in experiences with interactive consumer technology. We conclude that user experience should be considered in safety-critical domains, but research needs to develop further tools and methods to address the domain-specific requirements.
The current condition of (Western) academic psychology can be criticized for various reasons. In the past years, many debates have been centered around the so-called “replication crisis” and the “WEIRD people problem”. However, one aspect which has received relatively little attention is the fact that psychological research is typically limited to currently living individuals, while the psychology of the past remains unexplored. We find that more research in the field of historical psychology is required to capture both the similarities and differences between psychological mechanisms then and now. We begin by outlining the potential benefits of understanding psychology also as a historical science and explore these benefits using the example of stress. Finally, we consider methodological, ideological, and practical pitfalls which could endanger the attempt to direct more attention toward cross-temporal variation. Nevertheless, we suggest that historical psychology would contribute to making academic psychology a truly universal endeavor that explores the psychology of all humans.
Ownership and usage of personal voice assistant devices like Amazon Echo or Google Home have increased drastically over the last decade since their market launch. This thesis builds upon existing computers are social actors (CASA) and media equation research that is concerned with humans displaying social reactions usually exclusive to human-human interaction when interacting with media and technological devices. CASA research has been conducted with a variety of technological devices such as desktop computers, smartphones, embodied virtual agents, and robots. However, despite their increasing popularity, little empirical work has been done to examine social reactions towards these personal stand-alone voice assistant devices, also referred to as smart speakers. Thus, this dissertation aims to adopt the CASA approach to empirically evaluate social responses to smart speakers. With this goal in mind, four laboratory experiments with a total of 407 participants have been conducted for this thesis. Results show that participants display a wide range of social reactions when interacting with voice assistants. This includes the utilization of politeness strategies such as the interviewer-bias, which led to participants giving better evaluations directly to a smart speaker device compared to a separate computer. Participants also displayed prosocial behavior toward a smart speaker after interdependence and thus a team affiliation had been induced. In a third study, participants applied gender stereotypes to a smart speaker not only in self-reports but also exhibited conformal behavior patterns based on the voice the device used. In a fourth and final study, participants followed the rule of reciprocity and provided help to a smart speaker device that helped them in a prior interaction. This effect was also moderated by subjects’ personalities, indicating that individual differences are relevant for CASA research. 
Consequently, this thesis provides strong empirical support for a voice assistants are social actors paradigm. This doctoral dissertation demonstrates the power and utility of this research paradigm for media-psychological research and shows how considering voice assistant devices as social actors leads to a more profound understanding of voice-based technology. The findings discussed in this thesis also have implications for these devices that need to be carefully considered in both future research and practical design.
With the increasing adaptability and complexity of advisory artificial intelligence (AI)-based agents, the topics of explainable AI and human-centered AI are moving closer together. Variations in the explanation itself have been widely studied, with some contradictory results. These could be due to users’ individual differences, which have rarely been studied systematically regarding their inhibiting or enabling effect on the fulfillment of explanation objectives (such as trust, understanding, or workload). This paper aims to shed light on the significance of human dimensions (gender, age, trust disposition, need for cognition, affinity for technology, self-efficacy, attitudes, and mind attribution) as well as their interplay with different explanation modes (no, simple, or complex explanation). Participants played the game Deal or No Deal while interacting with an AI-based agent that advised them on whether to accept or reject the deals offered to them. As expected, giving an explanation had a positive influence on the explanation objectives. However, the users’ individual characteristics in particular reinforced the fulfillment of the objectives. The strongest predictor of objective fulfillment was the degree of attribution of human characteristics: the more human characteristics were attributed, the more trust was placed in the agent, the more likely the advice was to be accepted and understood, and the more important needs were satisfied during the interaction. Thus, the current work contributes to a better understanding of the design of explanations for AI-based agent systems that take individual characteristics into account and meet the demand for both explainable and human-centered agent systems.
With the continuous development of artificial intelligence, there is an effort to make the minds expressed by robots resemble human minds ever more closely. However, just as the human-like appearance of robots can lead to feelings of aversion, recent research has shown that the apparent mind expressed by machines can also be responsible for negative evaluations. This work explores facets of the aversion evoked by machines with a human-like mind (the uncanny valley of mind) in three empirical projects from a psychological point of view, covering different contexts and the resulting consequences.
In Manuscript #1, the perspective of previous work in the research area is reversed, showing that humans feel eeriness in response to robots that can read human minds, a capability unknown in human-human interaction. In Manuscript #2, it is explored whether empathy for a robot being harmed by a human is a way to alleviate the uncanny valley of mind. A noteworthy result of this work is that aversion in this study did not arise from the manipulation of the robot’s mental capabilities but from its attributed incompetence and failure. The results of Manuscript #3 highlight that status threat arises when humans perform worse than machines in a work-relevant task requiring human-like mental capabilities, while higher status threat is linked with a higher willingness to interact, due to the machine’s perceived usefulness.
In sum, if explanatory variables and concrete scenarios are considered, people will react fairly positively to machines with human-like mental capabilities. As long as the machine’s usefulness is palpable to people, but machines are not fully autonomous, people seem willing to interact with them, accepting aversion in favor of the expected benefits.
Emotional shifts are often a fundamental part of the narrative experience and ingrained into the schematic structures of stories. Recent theoretical work suggests that these shifts are key for narrative influence and are interconnected with transportation, a known mechanism of narrative effects. Empirical research examining this proposition is still scarce, inconclusive, and lacking measures that assess the experience of emotional shifts throughout a narrative to explain effects. This thesis aims to address this research lacuna and investigates the link between emotional shifts, transportation, and story-consistent outcomes using different methods to measure emotional shifts in the moment they occur (Manuscript #1 and #2), and using various narrative stimuli (audiovisual, written, auditive).
Manuscript #1 uses real-time-response (RTR) measurement to examine the relationship of valence shifts experienced during film viewing with transportation and post-exposure self-reported emotional flow. Manuscript #2 reports a pilot study and two experiments in which a self-probed emotional retrospection task is used to measure the number and intensity of emotional shifts during reading. I investigate the effect of reviews on transportation, the link between transportation and emotional shifts, and their respective associations with story-consistent attitudes, social sharing intentions, and donation behavior. In Manuscript #3, narrative structures are manipulated. Two experiments examine the effects of audio stories with shifting (positive-negative-positive) vs. positive-only emotional trajectories on the experience of happiness- and sadness-shifts, transportation, and post-exposure emotional flow.
Transportation was positively linked to valence shifts (M#1), the number and intensity of emotional shifts (M#2), and emotional flow (M#1, M#3). In M#3, transportation was predicted by shifts in happiness, but not sadness. Emotional flow was linked to shifts in happiness, sadness, and RTR valence (M#1, M#3). Emotional shifts and transportation were associated with social sharing intentions, but only transportation was linked to some story-consistent attitudes (affective attitudes in particular).
Humans have long used external memory aids to support remembering. However, modern digital technologies could facilitate recording and remembering personal information in an unprecedented manner. The present research sought to understand the potential impact of these technologies on autobiographical memory based on interviews with users of smart journaling apps. In Study 1 (N = 12), participants who had no prior experience with smart journaling apps tested the app Day One for 2 weeks and were interviewed about their subjective perceptions afterwards. In order to cross-validate the obtained findings, Study 2 (N = 4) was based on in-depth interviews with long-time users of different smart journaling apps. Taken together, the two studies provide insights into the way autobiographical remembering may change in the digital age – but also into the opportunities and risks potentially associated with the use of technologies that allow creating a detailed and multimedia-based record of one's life.
Earliest autobiographical memories mark a potential beginning of our life story. However, their meaning has hardly been investigated. Against this background, participants (N = 182) were asked to think about two kinds of meaning: the meaning that the remembered event might have had in the moment of experience and the meaning that the memory of the event has for their present life situation. With respect to the meaning in the moment of experience, participants most frequently referred to situational characteristics. The meaning for the present life situation was most frequently related to aspects of the memory that told something about the person beyond the immediate context of the remembered event. Moreover, these meanings were more frequently associated with continuity than with a contrast between then and now. Apart from these overarching commonalities, our data also show that the earliest autobiographical memories of different people can tell very different stories.
Given the growing interest of corporate stakeholders in Metaverse applications, there is a need to understand the accessibility of these technologies for marginalized populations, such as people living with dementia, to ensure inclusive design of Metaverse applications. We assessed the accessibility of extended reality technology for people living with mild cognitive impairment and dementia in order to develop accessibility guidelines for these technologies. We used four strategies to synthesize evidence on barriers to and facilitators of accessibility: (1) findings from a non-systematic literature review, (2) guidelines from well-researched technology, (3) exploration of selected mixed reality technologies, and (4) observations from four sessions and video data of people living with dementia using mixed reality technologies. We utilized template analysis to develop codes and themes toward accessibility guidelines. Future work can validate our preliminary findings by applying them to video recordings or testing them in experiments.
Objective
Global challenges such as climate change or the COVID-19 pandemic have drawn public attention to conspiracy theories and citizens' non-compliance with science-based behavioral guidelines. We focus on individuals' worldviews about how one can and should construct reality (epistemic beliefs) to explain the endorsement of conspiracy theories and behavior during the COVID-19 pandemic, and propose the Dark Factor of Personality (D) as an antecedent of post-truth epistemic beliefs.
Method and Results
This model is tested in four pre-registered studies. In Study 1 (N = 321), we found first evidence for a positive association between D and post-truth epistemic beliefs (Faith in Intuition for Facts, Need for Evidence, Truth is Political). In Study 2 (N = 453), we tested the model proper by further showing that post-truth epistemic beliefs predict the endorsement of COVID-19 conspiracies and the disregard of COVID-19 behavioral guidelines. Study 3 (N = 923) largely replicated these results at a later stage of the pandemic. Finally, in Study 4 (N = 513), we replicated the results in a German sample, corroborating their cross-cultural validity. Interactions with political orientation were observed.
Conclusion
Our research highlights that epistemic beliefs need to be taken into account when addressing major challenges to humankind.
Visual stimuli are frequently used to improve memory, language learning or perception, and the understanding of metacognitive processes. However, in virtual reality (VR), there are few systematically and empirically derived stimulus databases. This paper proposes the first collection of virtual objects based on empirical evaluation for inter- and transcultural encounters between English- and German-speaking learners. We used explicit and implicit measurement methods to identify cultural associations and the degree of stereotypical perception for each virtual stimulus (n = 293) in two online studies including native German- and English-speaking participants. The analysis resulted in a final, well-describable database of 128 objects (called the InteractionSuitcase). In future applications, the objects can be used as interaction or conversation assets and as a behavioral measurement tool in social VR applications, especially in the field of foreign language education. For example, conversation partners can use the objects to describe their culture, or teachers can intuitively assess learners' stereotyped attitudes.
Virtual reality applications employing avatar embodiment typically use virtual mirrors to allow users to perceive their digital selves not only from a first-person but also from a holistic third-person perspective. However, due to distance-related biases such as the distance compression effect or a reduced relative rendering resolution, the self-observation distance (SOD) between the user and the virtual mirror might influence how users perceive their embodied avatar. Our article systematically investigates the effects of a short (1 m), middle (2.5 m), and far (4 m) SOD between users and mirror on the perception of their personalized and self-embodied avatars. The avatars were photorealistically reconstructed using state-of-the-art photogrammetric methods. Thirty participants repeatedly faced their real-time animated self-embodied avatars in each of the three SOD conditions, in which the avatars' body weight was repeatedly altered, and rated 1) their sense of embodiment, 2) body weight perception, and 3) affective appraisal towards their avatar. We found that the different SODs are unlikely to influence any of our measures except for the perceived difficulty of body weight estimation, which participants rated significantly higher for the farthest SOD. We further found that the participants’ self-esteem significantly impacted their ability to modify their avatar’s body weight to match their current body weight and that it correlated positively with the perceived attractiveness of the avatar. Additionally, the participants’ concerns about their body shape affected how eerie they perceived their avatars to be. Both self-esteem and body shape concerns influenced the perceived difficulty of body weight estimation. We conclude that the virtual mirror in embodiment scenarios can be freely placed at a distance of one to four meters from the user without expecting major effects on the perception of the avatar.
Introduction
Modern digital devices, such as conversational agents, simulate human–human interactions to an increasing extent. However, their outward appearance remains distinctly technological. While research has revealed that mental representations of technology shape users' expectations and experiences, research on technologies that send ambiguous cues is rare.
Methods
To bridge this gap, this study analyzes drawings of the outward appearance participants associate with voice assistants (Amazon Echo or Google Home).
Results
Human beings and (humanoid) robots were the most frequent associations, and these were rated as rather trustworthy, conscientious, agreeable, and intelligent. Drawings of the Amazon Echo and the Google Home differed only marginally, but “human,” “robotic,” and “other” associations differed with respect to the ascribed humanness, consciousness, intellect, affinity to technology, and innovation ability.
Discussion
This study aims to further elaborate on the rather unconscious cognitive and emotional processes elicited by technology and discusses the implications of this perspective for developers, users, and researchers.
For formative evaluations of user experience (UX), a variety of methods have been developed over the years. However, most techniques require the users to engage with the study as a secondary task. This active involvement in the evaluation is not inclusive of all users and potentially biases the experience currently being studied. Yet there is a lack of methods for situations in which the user has no spare cognitive resources. This condition occurs when 1) users' cognitive abilities are impaired (e.g., people with dementia) or 2) users are confronted with very demanding tasks (e.g., air traffic controllers). In this work, we focus on emotions as a key component of UX and propose the new structured observation method Proxemo for formative UX evaluations. Proxemo allows qualified observers to document users' emotions by proxy in real time and then directly link them to triggers. Technically, this is achieved by synchronising the timestamps of emotions documented by observers with a video recording of the interaction.
In order to facilitate the documentation of observed emotions in highly diverse contexts we conceptualise and implement two separate versions of a documentation aid named Proxemo App. For formative UX evaluations of technology-supported reminiscence sessions with people with dementia, we create a smartwatch app to discreetly document emotions from the categories anger, general alertness, pleasure, wistfulness and pride. For formative UX evaluations of prototypical user interfaces with air traffic controllers we create a smartphone app to efficiently document emotions from the categories anger, boredom, surprise, stress and pride. Descriptive case studies in both application domains indicate the feasibility and utility of the method Proxemo and the appropriateness of the respectively adapted design of the Proxemo App.
The third part of this work is a series of meta-evaluation studies to determine quality criteria of Proxemo. We evaluate Proxemo regarding its reliability, validity, thoroughness and effectiveness, and compare Proxemo's efficiency and the observers' experience to documentation with pen and paper. Proxemo is reliable, as well as more efficient, thorough and effective than handwritten notes and provides a better UX to observers. Proxemo compares well with existing methods where benchmarks are available.
With Proxemo, we contribute a validated structured observation method that has been shown to meet the requirements of formative UX evaluations in the extreme contexts of users with cognitive impairments or high task demands. Proxemo is agnostic regarding researchers' theoretical approaches and unites reductionist and holistic perspectives within one method.
Future work should explore the applicability of Proxemo in further domains and extend the list of audited quality criteria to include, for instance, downstream utility. With respect to basic research, we strive to better understand the sources leading observers to empathic judgments and propose reminiscence sessions with older adults as a model environment for investigating mixed emotions.
In this article, we offer initial insights into the fairly new interdisciplinary and international domain of robotics in Christian religious practice. We are a group of scholars in media ethics, practical theology/religious education, and human-computer interaction who have been engaged in this discourse since 2017.
A natural starting point is our study of BlessU2, a “blessing robot” that received considerable recognition from the global public at the Wittenberg 500th Reformation anniversary in 2017. We thus begin with the results of this study. Second, we briefly address the relevant theses from Gabriele Trovato et al., as presented in their 2019 article on so-called theomorphic robots, followed by our interdisciplinary discussion of their approach. Finally, we draw conclusions for further work in the field of “religious robots.”
In more detail: Section 1 offers starting points within the perspectives of Christian religious practice: here, the blessing robot is both cause and occasion for doing religion and theologizing in the context of existential questions (1.1). We continue with perceptions in the field of religion regarding “Discursive Design Theory” (1.2). Section 1.3 focuses on the interaction of humans with computers as posing questions for the theological standardization of religious practice. Section 2 reconstructs the HRI/HCI initiative to develop theomorphic robots in a twofold manner: the idea of developing theomorphic robots (2.1) and the concept of theomorphic robots, with questions and objections (2.2). In this part of the article we raise discussion points concerning the relationship between technology and religion and the need to sharpen the understanding of religion within the research field. Section 3 closes with propositions and alternatives.
Previous research suggested that people prefer to administer unpleasant electric shocks to themselves rather than being left alone with their thoughts, because engagement in thinking is an unpleasant activity. The present research examined this negative reinforcement hypothesis by giving participants a choice of distracting themselves by generating electric shocks that ranged from causing no pain to intense pain. Four experiments (N = 254) replicated the result that a large proportion of participants opted to administer painful shocks to themselves during the thinking period. However, they administered strong electric shocks to themselves even when an innocuous response option generating no or only a mild shock was available. Furthermore, participants inflicted pain on themselves when they were assisted in the generation of pleasant thoughts during the waiting period, with no difference between the pleasant and unpleasant thought conditions. Overall, these results call into question whether the primary motivation for the self-administration of painful shocks is the avoidance of thinking. Instead, it seems that the self-infliction of pain was attractive to many participants because they were curious about the shocks, their intensities, and the effects they would have on them.
Objective: Gait adaptation to environmental challenges is fundamental for independent and safe community ambulation. The possibility of precisely studying gait modulation using standardized protocols of gait analysis closely resembling everyday life scenarios is still an unmet need.
Methods: We have developed a fully immersive virtual reality (VR) environment in which subjects have to adjust their walking pattern to avoid collision with a virtual agent (VA) crossing their gait trajectory. We collected kinematic data of 12 healthy young subjects walking in the real world (RW) and in the VR environment, both with (VR/A+) and without (VR/A-) the VA perturbation. The VR environment closely resembled the RW scenario of the gait laboratory. To ensure standardized obstacle presentation, the starting time, speed, and trajectory of the VA were defined using the kinematics of the participant as detected online during each walking trial.
Results: We did not observe kinematic differences between walking in RW and VR/A-, suggesting that our VR environment per se might not induce significant changes in the locomotor pattern. When facing the VA, all subjects consistently reduced stride length and velocity while increasing stride duration. Trunk inclination and mediolateral trajectory deviation also facilitated avoidance of the obstacle.
Conclusions: This proof-of-concept study shows that our VR/A+ paradigm effectively induced a timely gait modulation in a standardized immersive and realistic scenario. This protocol could be a powerful research tool to study gait modulation and its derangements in relation to aging and clinical conditions.
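The timing standardization described in the Methods above can be sketched in a few lines. This is an illustrative reconstruction under assumed names (`va_start_delay`) and an assumed closed-form timing rule, not the authors' actual protocol:

```python
# Illustrative sketch: the VA's launch is timed from the participant's
# online kinematics so that VA and participant reach the crossing point
# simultaneously. The closed-form rule below is an assumption.
def va_start_delay(participant_dist_to_cross: float,
                   participant_speed: float,
                   va_dist_to_cross: float,
                   va_speed: float) -> float:
    """Seconds to wait before launching the VA so both arrive together."""
    t_participant = participant_dist_to_cross / participant_speed
    t_va = va_dist_to_cross / va_speed
    return max(0.0, t_participant - t_va)

# Participant 6 m from the crossing at 1.2 m/s; VA 3 m away at 1.0 m/s.
delay = va_start_delay(6.0, 1.2, 3.0, 1.0)
print(delay)  # 2.0: the VA starts 2 s after detection
```

In a real protocol the participant's distance and speed would be re-estimated online from motion capture during each trial, so the delay adapts to each subject's gait.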
Presence is often considered the most important quale describing the subjective feeling of being in a computer-generated and/or computer-mediated virtual environment. The identification and separation of orthogonal presence components, i.e., the place illusion and the plausibility illusion, has been an accepted theoretical model describing Virtual Reality (VR) experiences for some time. This perspective article challenges this presence-oriented VR theory. First, we argue that a place illusion cannot be the major construct to describe the much wider scope of virtual, augmented, and mixed reality (VR, AR, MR: or XR for short). Second, we argue that there is no plausibility illusion but merely plausibility, and we derive the place illusion from the congruent and plausible generation of spatial cues, proceeding similarly for all of the current model’s so-defined illusions. Finally, we propose congruence and plausibility as the central essential conditions in a novel theoretical model describing XR experiences and effects.
Obesity is a serious disease that can affect both physical and psychological well-being. Due to weight stigmatization, many affected individuals suffer from body image disturbances whereby they perceive their body in a distorted way, evaluate it negatively, or neglect it. Beyond established interventions such as mirror exposure, recent advancements aim to complement body image treatments by the embodiment of visually altered virtual bodies in virtual reality (VR). We present a high-fidelity prototype of an advanced VR system that allows users to embody a rapidly generated, personalized, photorealistic avatar and to realistically modulate its body weight in real time within a carefully designed virtual environment. In a formative multi-method approach, a total of 12 participants rated the general user experience (UX) of our system during the body scan and the VR experience, using semi-structured qualitative interviews and multiple quantitative UX measures. Using body weight modification tasks, we further compared three different interaction methods for real-time body weight modification and measured our system’s impact on the body-image-relevant measures of body awareness and body weight perception. From the feedback received, which demonstrated an already solid UX of our overall system and provided constructive input for further improvement, we derived a set of design guidelines for the future development and evaluation of systems supporting body image interventions.
With the rise of immersive media, advertisers have started to use 360° commercials to engage and persuade consumers. Two experiments were conducted to address research gaps and to validate the positive impact of 360° commercials in realistic settings. The first study (N = 62) compared the effects of 360° commercials using either a mobile cardboard head-mounted display (HMD) or a laptop. This experiment was conducted in the participants’ living rooms and incorporated individual feelings of cybersickness as a moderator. The participants who experienced the 360° commercial with the HMD reported higher spatial presence and product evaluation, but their purchase intentions were only increased when their reported cybersickness was low. The second experiment (N = 197) was conducted online and analyzed the impact of 360° commercials that were experienced with mobile (smartphone/tablet) or static (laptop/desktop) devices instead of HMDs. The positive effects of omnidirectional videos were stronger when participants used mobile devices.
This study provides a systematic literature review of research (2001–2020) in the field of teaching and learning a foreign language and intercultural learning using immersive technologies. Based on 2507 sources, 54 articles were selected according to predefined selection criteria. The review is aimed at providing information about which immersive interventions are being used for foreign language learning and teaching and where potential research gaps exist. The papers were analyzed and coded according to the following categories: (1) investigation form and education level, (2) degree of immersion and technology used, (3) predictors, and (4) criteria. The review identified key research findings relating to the use of immersive technologies for learning and teaching a foreign language and intercultural learning at the cognitive, affective, and conative levels. The findings revealed research gaps concerning teachers as a target group and virtual reality (VR) as a fully immersive intervention form. Furthermore, the studies reviewed rarely examined behavior or implicit measurements related to inter- and transcultural learning and teaching; inter- and transcultural learning and teaching in particular is an underrepresented subject of investigation. Finally, concrete suggestions for future research are given. The systematic review contributes to the challenge of interdisciplinary cooperation between pedagogy, foreign language didactics, and human-computer interaction to achieve innovative teaching-learning formats and a successful digital transformation.
Meal-concurrent media use has been linked to several problematic outcomes, including higher caloric intake and an increased risk for obesity. Nevertheless, the sociocultural and dispositional predictors of using media while eating are not yet well-understood, including potential cross-cultural differences. Inspired by the recent emergence of a new food-related media phenomenon called “mukbang”—digital eating broadcasts that have become immensely popular throughout East and Southeast Asia—we ask 296 participants from two cultures (Germany and South Korea) about their meal-concurrent media use. Our results suggest that South Koreans tend to use media more frequently during meals than Germans, especially for social purposes. Meanwhile, younger age only predicts meal-concurrent media use in the German sample. Apart from that, however, many other examined predictors (e.g., gender, living situation, body-esteem, the Big Five) remain statistically insignificant, indicating notable universality for the behavior in question. In the second part of our study, we then put special focus on the emerging mukbang trend and conduct a theory-driven exploration of its gratifications. Doing so, we find that participants' parasocial and social experiences during eating broadcasts significantly predict their enjoyment of the genre.
Plenty of theories, models, measures, and investigations target the understanding of virtual presence, i.e., the sense of presence in immersive Virtual Reality (VR). Other varieties of the so-called eXtended Realities (XR), e.g., Augmented and Mixed Reality (AR and MR), incorporate immersive features to a lesser degree and continuously combine spatial cues from the real physical space and the simulated virtual space. This blurred separation calls into question whether the knowledge accumulated about virtual presence, and its corresponding outcomes, applies to the presence occurring in other varieties of XR. The present work bridges this gap by analyzing the construct of presence in mixed realities (MR). To achieve this, we present (1) a short review of definitions, dimensions, and measurements of presence in VR, and (2) the state-of-the-art views on MR. Additionally, we (3) derive a working definition of MR that extends the Milgram continuum. This definition is based on entities that reach from real to virtual manifestations at one point in time. Entities possess different degrees of referential power, determining the selection of the frame of reference. Furthermore, we (4) identify three research desiderata, including research questions about the frame of reference, the corresponding dimension of transportation, and the dimension of realism in MR. In particular, we discuss the relationship between the main aspects of virtual presence in immersive VR, i.e., the place illusion and the plausibility illusion, and the referential power of MR entities with regard to the concept, measurement, and design of presence in MR. Finally, we (5) suggest an experimental setup to reveal the research heuristic behind experiments investigating presence in MR. The present work contributes to the theories and the meaning of presence in MR and to approaches to simulate and measure it.
We hypothesize that research on the essential underlying factors determining user experience (UX) in MR simulations and experiences is still in its infancy, and we hope this article provides an encouraging starting point for tackling related questions.
Artificial Intelligence (AI) covers a broad spectrum of computational problems and use cases. Many of these raise profound and sometimes intricate questions about how humans interact or should interact with AIs. Moreover, many current and future users have only abstract ideas of what AI is, which depend significantly on the specific embodiment of AI applications. Human-centered design approaches would suggest evaluating the impact of different embodiments on human perception of and interaction with AI, an approach that is difficult to realize due to the sheer complexity of application fields and embodiments in reality. Here, however, XR opens new possibilities to research human-AI interactions. The article’s contribution is twofold: First, it provides a theoretical treatment and model of human-AI interaction based on an XR-AI continuum as a framework for, and a perspective on, different approaches to XR-AI combinations. It motivates XR-AI combinations as a method to learn about the effects of prospective human-AI interfaces and shows why the combination of XR and AI contributes fruitfully to a valid and systematic investigation of human-AI interactions and interfaces. Second, the article provides two exemplary experiments investigating the aforementioned approach for two distinct AI systems. The first experiment reveals an interesting gender effect in human-robot interaction, while the second reveals an Eliza effect of a recommender system. Here the article introduces two paradigmatic implementations of the proposed XR testbed for human-AI interactions and interfaces and shows how a valid and systematic investigation can be conducted. In sum, the article opens new perspectives on how XR benefits human-centered AI design and development.
As an emerging market for voice assistants (VAs), the healthcare sector imposes increasing requirements on users’ trust in the technological system. Encouraging patients to reveal sensitive data requires them to trust the technological counterpart. In an experimental laboratory study, participants were presented with a VA that was introduced as either a “specialist” or a “generalist” tool for sexual health. In both conditions, the VA asked exactly the same health-related questions. Afterwards, participants assessed the trustworthiness of the tool and of further source layers (provider, platform provider, automatic speech recognition in general, data receiver) and reported individual characteristics (disposition to trust and to disclose sexual information). Results revealed that perceiving the VA as a specialist resulted in higher trustworthiness ratings of the VA and of the provider, the platform provider, and automatic speech recognition in general. Furthermore, the provider’s trustworthiness affected the perceived trustworthiness of the VA. Presenting both a theoretical line of reasoning and empirical data, the study points out the importance of the users’ perspective on the assistant. In sum, this paper argues for further analyses of trustworthiness in voice-based systems and its effects on usage behavior, as well as for the responsible design of future technology.
The concept of digital literacy has been introduced as a new cultural technique, which is regarded as essential for successful participation in a (future) digitized world. Regarding the increasing importance of AI, literacy concepts need to be extended to account for AI-related specifics. The easy handling of these systems results in increased usage, which contrasts with limited conceptualizations (e.g., imagination of future importance) and competencies (e.g., knowledge about functional principles). With reference to voice-based conversational agents as a concrete application of AI, the present paper aims to develop a measurement instrument to assess conceptualizations of and competencies regarding conversational agents. In a first step, a theoretical framework of “AI literacy” is transferred to the context of conversational agent literacy (CAL). Second, the “conversational agent literacy scale” (short: CALS) is developed, constituting the first attempt to measure interindividual differences in the “(il)literate” usage of conversational agents. 29 items were derived and answered by 170 participants. An exploratory factor analysis identified five factors, leading to five subscales to assess CAL: storage and transfer of the smart speaker’s data input; the smart speaker’s functional principles; the smart speaker’s intelligent functions and learning abilities; the smart speaker’s reach and potential; and the smart speaker’s technological (surrounding) infrastructure. Preliminary insights into the construct validity and reliability of the CALS showed satisfying results. Third, using the newly developed instrument, a student sample’s CAL was assessed, revealing intermediate values. Remarkably, owning a smart speaker did not lead to higher CAL scores, confirming our basic assumption that usage of such systems does not guarantee enlightened conceptualizations and competencies.
In sum, the paper contributes first insights into the operationalization and understanding of CAL as a specific subdomain of AI-related competencies.
In this paper, we present a virtual audience simulation system for Virtual Reality (VR). The system implements an audience perception model controlling the nonverbal behaviors of virtual spectators, such as facial expressions or postures. Groups of virtual spectators are animated by a set of nonverbal behavior rules representing a particular audience attitude (e.g., indifferent or enthusiastic). Each rule specifies a nonverbal behavior category (posture, head movement, facial expression, or gaze direction) as well as three parameters: type, frequency, and proportion. In a first user study, we asked participants to pretend to be a speaker in VR and to create sets of nonverbal behavior parameters to simulate different attitudes. Participants manipulated the nonverbal behaviors of a single virtual spectator to match specific levels of engagement and opinion toward them. In a second user study, we used these parameters to design different types of virtual audiences with our nonverbal behavior rules and evaluated how they were perceived. Our results demonstrate our system’s ability to create virtual audiences with three different perceived attitudes: indifferent, critical, and enthusiastic. The analysis of the results also led to a set of recommendations and guidelines regarding attitudes and expressions for the future design of audiences for VR therapy and training applications.
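The rule structure described above (a behavior category plus type, frequency, and proportion parameters) can be sketched as a small data model. This is an illustrative reconstruction, not the authors' implementation; all names (`NonverbalRule`, `spectators_showing`) are hypothetical:

```python
from dataclasses import dataclass
import random

# Hypothetical reconstruction: each rule targets one nonverbal behavior
# category and carries three parameters (type, frequency, proportion).
@dataclass
class NonverbalRule:
    category: str        # "posture", "head_movement", "facial_expression", "gaze"
    behavior_type: str   # e.g., "smile", "nodding", "averted"
    frequency: float     # occurrences per minute for an animated spectator
    proportion: float    # share of spectators in the group showing the behavior

def spectators_showing(rule: NonverbalRule, group_size: int,
                       rng: random.Random) -> list[int]:
    """Sample which spectators in a group are animated by this rule."""
    count = round(rule.proportion * group_size)
    return sorted(rng.sample(range(group_size), count))

# An "enthusiastic" audience attitude as a set of rules (invented values).
enthusiastic = [
    NonverbalRule("facial_expression", "smile", frequency=6.0, proportion=0.8),
    NonverbalRule("head_movement", "nodding", frequency=4.0, proportion=0.6),
]

chosen = spectators_showing(enthusiastic[0], group_size=10, rng=random.Random(42))
print(len(chosen))  # 8: proportion 0.8 of 10 spectators smile
```

An indifferent or critical attitude would simply swap in rules with different behavior types, lower frequencies, or smaller proportions.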
Mindfulness is considered an important factor of an individual's subjective well-being. Consequently, Human-Computer Interaction (HCI) has investigated approaches that strengthen mindfulness, e.g., by inventing multimedia technologies to support mindfulness meditation. These approaches often use smartphones, tablets, or consumer-grade desktop systems to allow everyday usage in users' private lives or in the scope of organized therapies. Virtual, Augmented, and Mixed Reality (VR, AR, MR; in short: XR) significantly extend the design space for such approaches. XR covers a wide range of potential sensory stimulation, perceptive and cognitive manipulations, content presentation, interaction, and agency. These facilities are linked to typical XR-specific perceptions that are conceptually closely related to mindfulness research, such as (virtual) presence and (virtual) embodiment. However, a successful exploitation of XR that strengthens mindfulness requires a systematic analysis of the potential interrelation and influencing mechanisms between XR technology, its properties, factors, and phenomena and existing models and theories of the construct of mindfulness. This article reports such a systematic analysis of XR-related research from HCI and life sciences to determine the extent to which existing research frameworks on HCI and mindfulness can be applied to XR technologies, the potential of XR technologies to support mindfulness, and open research gaps. Fifty papers of the ACM Digital Library and the National Institutes of Health's National Library of Medicine (PubMed) with and without empirical efficacy evaluation were included in our analysis. The results reveal that at the current time, empirical research on XR-based mindfulness support mainly focuses on therapy and therapeutic outcomes. Furthermore, most of the currently investigated XR-supported mindfulness interactions are limited to vocally guided meditations within nature-inspired virtual environments.
While an analysis of empirical research on those systems did not reveal differences in mindfulness compared to non-mediated mindfulness practices, various design proposals illustrate that XR has the potential to provide interactive and body-based innovations for mindfulness practice. We propose a structured approach for future work to specify and further explore the potential of XR as mindfulness-support. The resulting framework provides design guidelines for XR-based mindfulness support based on the elements and psychological mechanisms of XR interactions.
Conversational agents and smart speakers have grown in popularity, offering a variety of options for use that are available through intuitive speech operation. In contrast to the standard dyad of a single user and a device, voice-controlled operations can be observed by further attendees, resulting in new, more social usage scenarios. Referring to the concept of ‘media equation’ and to research on the idea of ‘computers as social actors,’ which describes the potential of technology to trigger emotional reactions in users, this paper asks about the capacity of smart speakers to elicit empathy in observers of interactions. In a 2 × 2 online experiment, 140 participants watched a video of a man talking to an Amazon Echo either rudely or neutrally (factor 1), addressing it as ‘Alexa’ or ‘Computer’ (factor 2). Controlling for participants’ trait empathy, the rude treatment resulted in significantly higher ratings of empathy with the device compared to the neutral treatment. The form of address had no significant effect. Results were independent of the participants’ gender and usage experience, indicating a rather universal effect, which confirms the basic idea of the media equation. Implications for users, developers, and researchers are discussed in the light of (future) omnipresent voice-based technology interaction scenarios.
The design and evaluation of assisting technologies to support behavior change processes have become an essential topic within the field of human-computer interaction research in general and the field of immersive intervention technologies in particular. The mechanisms and success of behavior change techniques and interventions are broadly investigated in the field of psychology. However, it is not always easy to adapt these psychological findings to the context of immersive technologies. The lack of theoretical foundation also leads to a lack of explanation as to why and how immersive interventions support behavior change processes. The Behavioral Framework for immersive Technologies (BehaveFIT) addresses this lack by 1) presenting an intelligible categorization and condensation of psychological barriers and immersive features, by 2) suggesting a mapping that shows why and how immersive technologies can help to overcome these barriers, and finally by 3) proposing a generic prediction path that enables a structured, theory-based approach to the development and evaluation of immersive interventions. These three steps explain how BehaveFIT can be used, and include guiding questions for each step. Further, two use cases illustrate the usage of BehaveFIT. Thus, the present paper contributes guidance for the design and evaluation of immersive interventions, showing that immersive interventions can support behavior change processes and explaining and predicting why and how immersive interventions can bridge the intention-behavior gap.
The combination of globalization and digitalization emphasizes the importance of media-related and intercultural competencies of teacher educators and preservice teachers. This article reports on the initial prototypical implementation of a pedagogical concept to foster such competencies of preservice teachers. The proposed pedagogical concept utilizes a social virtual reality (VR) framework since related work on the characteristics of VR has indicated that this medium is particularly well suited for intercultural professional development processes. The development is integrated into a larger design-based research approach that develops a theory-guided and empirically grounded professional development concept for teacher educators with a special focus on teacher educator technology competencies (TETCs). TETCs provide a suitable competence framework capable of aligning requirements for both media-related and intercultural competencies. In an exploratory study with student teachers, we designed, implemented, and evaluated a pedagogical concept. Reflection reports were qualitatively analyzed to gain insights into factors that facilitate or hinder the implementation of the immersive learning scenario as well as into the participants’ evaluation of their learning experience. The results show that our proposed pedagogical concept is particularly suitable for promoting the experience of social presence, agency, and empathy in the group.
We investigated the accuracy of gender stereotypes regarding digital game genre preferences. In Study 1, 484 female and male participants rated their preference for 17 game genres (gender differences). In Study 2, another sample of 226 participants rated the extent to which the same genres were presumably preferred by women or men (gender stereotypes). We then compared the results of both studies in order to determine the accuracy of the gender stereotypes. Study 1 revealed actual gender differences for most genres—mostly of moderate size. Study 2 revealed substantial gender stereotypes about genre preferences. When comparing the results from both studies, we found that gender stereotypes were accurate in direction for most genres. However, they were, to some degree, inaccurate in size: For most genres, gender stereotypes overestimated the actual gender difference with a moderate mean effect size.
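The accuracy logic described above (stereotypes accurate in direction but overestimating the size of the actual gender difference) can be made concrete with a minimal sketch. The helper names and example numbers are invented for illustration and are not data from the studies:

```python
# Stereotype accuracy, split into direction and size, for one genre.
# A positive difference means a higher male than female mean preference.
def direction_accurate(actual_diff: float, stereotyped_diff: float) -> bool:
    """Accurate in direction: actual and stereotyped gaps share a sign."""
    return actual_diff * stereotyped_diff > 0

def overestimation(actual_diff: float, stereotyped_diff: float) -> float:
    """Positive values: the stereotype exaggerates the actual gap."""
    return abs(stereotyped_diff) - abs(actual_diff)

# Hypothetical genre, men's minus women's mean preference on a 1-5 scale.
actual, stereotyped = 0.5, 1.1
print(direction_accurate(actual, stereotyped))       # True
print(round(overestimation(actual, stereotyped), 2)) # 0.6
```

Aggregating `overestimation` across genres would yield the "moderate mean effect size" of inaccuracy the abstract reports, here expressed in raw scale units rather than standardized effect sizes.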
Companies increasingly seek to use gay protagonists in audio-visual commercials to attract a new affluent target group. There is also growing demand for the diversity present in society to be reflected in media formats such as advertising. Studies have shown, however, that heterosexual consumers (especially men), who may be part of the company's loyal consumer base, tend to react negatively to gay-themed advertising campaigns. Searching for an instrument to mitigate this unwanted effect, the present study investigated whether carefully selected background music can shape the perceived gender of gay male advertising protagonists. In a 2 × 2 between-subjects online experiment (musical connotation × gender of the participant), 218 heterosexual participants watched a commercial promoting engagement rings that featured gay male protagonists, scored with feminine- or masculine-connoted background music. As expected, women generally reacted more positively than men to the advertising. Men exposed to the masculine-connoted background music rated the promoted brand more positively, and masculine music also enhanced (at least in the short term) these men's acceptance of gay men in general (low and medium effect sizes) more than was the case for feminine background music. Carefully selected background music affecting the perceived gender of gay male advertising protagonists may prevent negative reactions from heterosexual audiences and, therefore, motivate companies to use gay protagonists in television commercials on a more regular basis.
Pictures in a rapid serial visual presentation (RSVP) stream are better remembered when they are simultaneously presented with targets of an unrelated detection task than when they are presented with distractors. However, it is unclear whether this so-called “attentional boost effect” depends on the intentionality of encoding. While there are studies suggesting that the attentional boost effect even occurs when encoding is incidental, there are several methodological issues with these studies, which may have undermined the incidental encoding instructions. The present study (N = 141) investigated the role of the intentionality of encoding with an improved experimental design. Specifically, to prevent a spill-over of intentional resources to the pictures in the RSVP stream, the speed of the stream was increased (to four pictures per second) and each picture was presented only once during the course of the experiment. An attentional boost effect was only found when encoding was intentional but not when encoding was incidental. Interestingly, memory performance for incidentally encoded pictures was nevertheless substantially above chance, independently of whether images were presented with search-relevant targets or distractors. These results suggest that the attentional boost effect is a memory advantage that occurs only under intentional encoding conditions, and that perceptual long-term memory representations are formed as a natural product of perception, independently of the presence of behaviorally relevant events.
Manifestations of aggressive driving, such as tailgating, speeding, or swearing, are not trivial offenses but are serious problems with hazardous consequences—for the offender as well as the target of aggression. Aggression on the road erases the joy of driving, affects heart health, causes traffic jams, and increases the risk of traffic accidents. This work is aimed at developing a technology-driven solution to mitigate aggressive driving according to the principles of Persuasive Technology. Persuasive Technology is a scientific field dealing with computerized software or information systems that are designed to reinforce, change, or shape attitudes, behaviors, or both without using coercion or deception.
Against this background, the Driving Feedback Avatar (DFA) was developed through this work. The system is a visual in-car interface that provides the driver with feedback on aggressive driving. The main element is an abstract avatar displayed in the vehicle. The feedback is transmitted through the emotional state of this avatar, i.e., if the driver behaves aggressively, the avatar becomes increasingly angry (negative feedback). If no aggressive action occurs, the avatar is more relaxed (positive feedback). In addition, directly after an aggressive action is recognized by the system, the display flashes briefly to give the driver instant feedback on the action.
Five empirical studies were carried out as part of the human-centered design process of the DFA. They were aimed at understanding the user and the use context of the future system, ideating system ideas, and evaluating a system prototype. The initial research question was about the triggers of aggressive driving. In a driver study on a public road, 34 participants reported their emotions and their triggers while they were driving (study 1). The second research question asked for interventions to cope with aggression in everyday life. For this purpose, 15 experts dealing with the treatment of aggressive individuals were interviewed (study 2). In total, 75 triggers of aggressive driving and 34 anti-aggression interventions were identified. Inspired by these findings, 108 participants generated more than 100 ideas of how to mitigate aggressive driving using technology in a series of ideation workshops (study 3). Based on these ideas, the concept of the DFA was elaborated on. In an online survey, the concept was evaluated by 1,047 German respondents to get a first assessment of its perception (study 4). Later on, the DFA was implemented into a prototype and evaluated in an experimental driving study with 32 participants, focusing on the system’s effectiveness (study 5). The DFA had only weak and, in part, unexpected effects on aggressive driving that require a deeper discussion.
With the DFA, this work has shown that there is room to change aggressive driving through Persuasive Technology. However, this is a highly sensitive issue with special requirements regarding the design of avatar-based feedback systems in the context of aggressive driving. Moreover, this work makes a significant contribution through the wealth of empirical insights gained into the problem of aggressive driving and aims to encourage future research and design activities in this regard.
Computer games are highly immersive, engaging, and motivating learning environments. Through a tutorial at the start of a new game, players learn the basics of the game's underlying principles and practice how to play the game successfully. During the actual gameplay, players repeatedly apply this knowledge, thus consolidating it through repetition. Computer games also challenge players with a constant stream of new challenges that increase in difficulty over time. As a result, computer games even require players to transfer their knowledge to master these new challenges. A computer game consists of several game mechanics. Game mechanics are the rules of a computer game and encode the game's underlying principles. They create the virtual environments, generate a game's challenges, and allow players to interact with the game. Game mechanics can also encode real-world knowledge, which players may acquire via gameplay. However, the actual process of encoding and learning knowledge using game mechanics has not yet been thoroughly defined. This thesis therefore proposes a theoretical model of knowledge learning through game mechanics: the Gamified Knowledge Encoding. The model is applied to design a serious game for affine transformations, i.e., GEtiT, and to predict the learning outcome of playing a computer game that encodes orbital mechanics in its game mechanics, i.e., Kerbal Space Program. To assess the effects of different visualization technologies on the overall learning outcome, GEtiT visualizes the gameplay in desktop-3D and immersive virtual reality. The model's applicability for effective game design as well as GEtiT's overall design are evaluated in a usability study. The learning outcome of playing GEtiT and Kerbal Space Program is assessed in four additional user studies.
The studies' results validate the use of the Gamified Knowledge Encoding for developing effective serious games and for predicting the learning outcome of existing serious games. GEtiT and Kerbal Space Program yield a training effect similar to that of a traditional learning method but a higher motivation to tackle the assignments. In conclusion, this thesis expands the understanding of using game mechanics for effective knowledge learning. The presented results are of high importance for researchers, educators, and developers, as they also provide guidelines for the development of effective serious games.
In this article, we present approaches to interactive simulations of biohybrid systems. These simulations comprise two major computational components: (1) agent-based developmental models that retrace organismal growth and the unfolding of technical scaffoldings, and (2) interfaces to explore these models interactively. Simulations of biohybrid systems allow us to fast-forward and experience their evolution over time based on our design decisions involving the choice, configuration, and initial states of the deployed biological and robotic actors as well as their interplay with the environment. We briefly introduce the concept of swarm grammars, an agent-based extension of L-systems for retracing growth processes and structural artifacts. Next, we review an early augmented reality prototype for designing and projecting biohybrid system simulations into real space. In addition to models that retrace plant behaviors, we specify swarm grammar agents to braid structures in a self-organizing manner. Based on this model, both robotic and plant-driven braiding processes can be experienced and explored in virtual worlds. We present a corresponding user interface for use in virtual reality. As we present interactive models at rather diverse description levels, we only ensured their principal capacity for interaction but did not conduct efficiency analyses beyond prototypic operation. We conclude this article with an outlook on future work on melding reality and virtuality to drive the design and deployment of biohybrid systems.
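The swarm-grammar formalism mentioned above extends L-systems, which model growth by rewriting all symbols of a string in parallel according to production rules. As a minimal sketch of the underlying rewriting mechanism (the alphabet and rules below are the classic algae example, purely illustrative and not taken from the article's models):

```python
def rewrite(axiom: str, rules: dict, steps: int) -> str:
    """Apply all L-system production rules in parallel for a number of steps.

    Symbols without a rule are copied unchanged. In a graphical or
    agent-based interpretation, each symbol would later be mapped to a
    drawing or agent action; here we only show the string rewriting.
    """
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's classic algae system: A -> AB, B -> A
rules = {"A": "AB", "B": "A"}
print(rewrite("A", rules, 4))  # -> ABAABABA
```

Swarm grammars build on this by attaching the rewritten symbols to spatially situated agents, so that growth unfolds in space rather than only in a string.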
Stories are a powerful means to change recipients' views of themselves by transporting them into the story world and by letting them identify with story characters. Previous studies showed that recipients temporarily change in line with a story and its characters (assimilation). Conversely, assimilation might be less likely when recipients identify less with story protagonists or are less transported into a story; by comparing themselves with a story character, they may instead change in a direction opposite to the story and its characters (contrast). In two experiments, we manipulated transportation and experience taking via two written reviews (Experiment 1; N = 164) and by varying the perspective of the story's narrator (Experiment 2; N = 79) of a short story about a negligent student. Recipients' self-ratings in comparison to others, motives, and problem-solving behavior served as dependent variables. However, neither the review nor the perspective manipulation affected transportation or experience taking while reading the story. Against our expectations, highly transported recipients (in Study 1) and recipients with high experience taking (in Study 2) showed more persistence when working on an anagram-solving task, even when controlling for trait conscientiousness. Our findings are critically discussed in light of previous research.
Two studies are reported that investigate how readily accessible and applicable ten force-dynamic categories are to novices in describing short episodes of human-technology interaction (Study 1) and that establish a measure of inter-coder reliability when re-classifying these episodes into force-dynamic categories (Study 2). The results of the first study show that people can easily and confidently relate their experiences with technology to the definitions of force-dynamic events (e.g., "The driver released the handbrake" as an example of restraint removal). The results of the second study show moderate agreement between four expert coders across all ten force-dynamic categories (Cohen's kappa = .59) when re-classifying these episodes. Agreement values for single force-dynamic categories ranged between 'fair' and 'almost perfect', i.e., between kappa = .30 and .95. Agreement with the originally intended classifications of Study 1 was higher than the inter-coder reliabilities alone. Single coders achieved an average kappa of .71, indicating substantial agreement. Using more than one coder increased kappas to almost perfect: up to .87 for four coders. A qualitative analysis of the predicted versus the observed number of category confusions revealed that about half of the category disagreement could be predicted from strong overlaps in the definitions of force-dynamic categories. From the quantitative and qualitative results, guidelines are derived for better coder training in order to increase inter-coder reliability.
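Cohen's kappa, the agreement measure used above, corrects the observed proportion of agreement between two coders for the agreement expected by chance from each coder's category frequencies. A minimal computation sketch (the category labels and ratings below are illustrative, not data from the studies):

```python
from collections import Counter

def cohens_kappa(coder1: list, coder2: list) -> float:
    """Cohen's kappa: chance-corrected agreement between two coders.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is derived from each coder's marginal category frequencies.
    """
    n = len(coder1)
    p_observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    c1, c2 = Counter(coder1), Counter(coder2)
    p_expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Illustrative force-dynamic codings by two hypothetical coders:
a = ["cause", "block", "cause", "release", "block", "cause"]
b = ["cause", "block", "release", "release", "block", "cause"]
print(round(cohens_kappa(a, b), 2))  # -> 0.75
```

Values near 0 indicate chance-level agreement; by the conventional benchmarks cited in the abstract, .41–.60 is 'moderate' and .61–.80 'substantial' agreement.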
In today’s social online world, a variety of interactive and participatory possibilities enables web users to actively produce content themselves. This user-generated content is omnipresent on the web, and there is growing evidence that it is used to select or evaluate professionally created online information. The present study investigated how such surrounding content affects online advertising, drawing on social influence theory. Specifically, it was assumed that web users who share an interpersonal relationship (interpersonal influence) and/or a group membership (collective influence) with the authors of user-generated content appearing next to advertising on a web page are more strongly influenced in their response to the advertising than unrelated users. These assumptions were tested in a 2 × 2 between-subjects experiment with 118 students who were exposed to four different Facebook profiles that differed in terms of interpersonal connection to the source (existent/non-existent) and collective connection to the source (existent/non-existent). The results show a significant effect of collective influence, but not of interpersonal influence. The underlying mechanisms of this effect and the implications of the results for online advertising are discussed.
Disfluency as a Desirable Difficulty — The Effects of Letter Deletion on Monitoring and Performance
(2018)
Desirable difficulties initiate learning processes that foster performance. One such desirable difficulty is generation, e.g., filling in deleted letters in a deleted-letter text. Likewise, letter deletion is a manipulation of processing fluency: a deleted-letter text is more difficult to process than an intact text. Disfluency theory also supposes that disfluency initiates analytic processes and thus improves performance. However, performance is often not affected; rather, monitoring is. The aim of this study is to propose a specification of the effects of disfluency as a desirable difficulty: we suppose that mentally filling in deleted letters activates analytic monitoring but not necessarily analytic cognitive processing and improved performance. Moreover, once activated, analytic monitoring should persist for a succeeding fluent text. To test our assumptions, half of the students (n = 32) first learned with a disfluent (deleted-letter) text and then with a fluent (intact) text. Results show no differences in monitoring between the disfluent and the fluent text. This supports our assumption that disfluency activates analytic monitoring that persists for a succeeding fluent text. When the other half of the students (n = 33) first learned with a fluent and then with a disfluent text, differences in monitoring between the disfluent and the fluent text were found. Performance was significantly affected by fluency, but in favor of the fluent texts; hence, disfluency did not activate analytic cognitive processing. Thus, difficulties can foster analytic monitoring that persists for a succeeding fluent text, but they do not necessarily improve performance. Further research is required to investigate how analytic monitoring can lead to improved cognitive processing and performance.
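The deleted-letter manipulation described above can be illustrated with a small sketch that blanks out a fraction of a text's letters. The deletion rate, placeholder character, and seeding are illustrative choices for reproducibility, not parameters reported in the study:

```python
import random

def delete_letters(text: str, rate: float = 0.2, seed: int = 0) -> str:
    """Replace a fraction of letters with underscores to create a
    disfluent, deleted-letter version of a text.

    Only alphabetic characters are candidates for deletion; spaces and
    punctuation are kept so the word structure stays recognizable.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible stimulus
    return "".join(
        "_" if ch.isalpha() and rng.random() < rate else ch
        for ch in text
    )

print(delete_letters("Disfluency can be a desirable difficulty."))
```

Readers must mentally regenerate the missing letters, which is the generation process the abstract links to analytic monitoring.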