Background: The distinctness of grief from depression has been the subject of a long scholarly debate, even influencing definitions of diagnostic criteria. Aims: This study aims to clarify the issue through a multifaceted analysis of data from a large German sample. Method: A community sample of 406 bereaved persons answered the Wuerzburg Grief Inventory (WGI), a multidimensional questionnaire designed to measure normal grief in the German language, and the General Depression Scale – Short Version (GDS-S), a self-report depression scale. Data were analyzed by factor analysis to identify structural (dis-)similarities of the constructs, and by analysis of variance (ANOVA) to identify the influence of the factors relationship to the deceased, type of death, and time since loss on grief measures and depression scores. Results: Factor analysis clustered items referring to grief-related impairments and to depression onto one factor, whereas items referring to other dimensions of grief loaded on separate factors. Relationship to the deceased influenced the grief measures impairments and nearness to the deceased, but not depression scores when controlled for impairments. Type of death showed specific effects on grief scores, but not on depression scores. Time since loss influenced grief scores, but not depression scores. Limitations: The analysis is based on a self-selected community sample of grieving persons, on self-report measures, and, in part, on cross-sectional data. Conclusion: Factor analysis and objective data show a clear distinction between dimensions of grief and depression. The human experience of grief contains a sense of nearness to the lost person, feelings of guilt, and positive aspects of the loss experience, in addition to components resembling depression.
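The factor-analytic logic described in this abstract can be illustrated with a small simulation. The sketch below is not the study's actual analysis (which used the WGI and GDS-S item sets): it generates hypothetical item scores from two latent dimensions and extracts components from the item correlation matrix, a principal-component stand-in for factor analysis, to show how items of one dimension cluster on one factor and items of another dimension on a separate factor. All item counts, loadings, and the sample size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical number of respondents (the study's actual N was 406)

# Two simulated latent dimensions, loosely labeled
# "impairment/depression" and "nearness to the deceased"
latent = rng.normal(size=(n, 2))

# Six invented items: the first three load on dimension 1, the last
# three on dimension 2 (loadings are illustrative, not WGI/GDS-S values)
loadings = np.array([
    [0.80, 0.0], [0.70, 0.0], [0.75, 0.0],   # impairment/depression items
    [0.0, 0.50], [0.0, 0.45], [0.0, 0.55],   # nearness items
])
items = latent @ loadings.T + 0.4 * rng.normal(size=(n, 6))

# Extract components from the item correlation matrix (PCA here stands
# in for the factor analysis reported in the study)
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
top2 = eigvecs[:, order[:2]]  # loadings on the two dominant components

# The two item clusters load on different components
print(np.round(top2, 2))
```

In this toy setup, the first three items load predominantly on one component and the last three on the other, mirroring the kind of separation the abstract reports between impairment/depression items and other grief dimensions.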
Humans use their eyes not only as visual input devices to perceive the environment, but also as an action tool in order to generate intended effects in their environment. For instance, glances are used to direct someone else's attention to a place of interest, indicating that gaze control is an important part of social communication. Previous research on gaze control in a social context mainly focused on the gaze recipient by asking how humans respond to perceived gaze (gaze cueing). So far, this perspective has hardly considered the actor’s point of view by neglecting to investigate what mental processes are involved when actors decide to perform an eye movement to trigger a gaze response in another person. Furthermore, eye movements are also used to affect the non-social environment, for instance when unlocking the smartphone with the help of the eyes. This and other observations demonstrate the necessity to consider gaze control in contexts other than social communication whilst at the same time focusing on commonalities and differences inherent to the nature of a social (vs. non-social) action context. Thus, the present work explores the cognitive mechanisms that control such goal-oriented eye movements in both social and non-social contexts.
The experiments presented throughout this work are built on pre-established paradigms from both the oculomotor research domain and from basic cognitive psychology. These paradigms are based on the principle of ideomotor action control, which provides an explanatory framework for understanding how goal-oriented, intentional actions come into being. The ideomotor idea suggests that humans acquire associations between their actions and the resulting effects, which can be accessed in a bi-directional manner: Actions can trigger anticipations of their effects, but the anticipated resulting effects can also trigger the associated actions. According to ideomotor theory, action generation involves the mental anticipation of the intended effect (i.e., the action goal) to activate the associated motor pattern. The present experiments involve situations where participants control the gaze of a virtual face via their eye movements. The triggered gaze responses of the virtual face are consistent with the participant's eye movements, representing visual action effects. Experimental situations are varied with respect to determinants of action-effect learning (e.g., contingency, contiguity, action mode during acquisition) in order to unravel the underlying dynamics of oculomotor control in these situations. In addition to faces, conditions involving changes in non-social objects were included to address the question of whether mechanisms underlying gaze control differ for social versus non-social context situations.
The results of the present work can be summarized into three major findings. 1. My data suggest that humans indeed acquire bi-directional associations between their eye movements and the subsequently perceived gaze response of another person, which in turn affect oculomotor action control via the anticipation of the intended effects. The observed results show for the first time that eye movements in a gaze-interaction scenario are represented in terms of their gaze response in others. This observation is in line with the ideomotor theory of action control. 2. The present series of experiments confirms and extends pioneering results of Huestegge and Kreutzfeldt (2012) with respect to the significant influence of action effects in gaze control. I have shown that the results of Huestegge and Kreutzfeldt (2012) can be replicated across different contexts with different stimulus material given that the perceived action effects were sufficiently salient. 3. Furthermore, I could show that mechanisms of gaze control in a social gaze-interaction context do not appear to be qualitatively different from those in a non-social context.
All in all, the results support recent theoretical claims emphasizing the role of anticipation-based action control in social interaction. Moreover, my results suggest that anticipation-based gaze control in a social context is based on the same general psychological mechanisms as ideomotor gaze control, and thus should be considered as an integral part rather than as a special form of ideomotor gaze control.
This doctoral thesis is part of a research project on the development of the cognitive comprehension of film at Würzburg University that was funded by the German Research Foundation (Deutsche Forschungsgemeinschaft) between 2013 and 2019 and awarded to Gerhild Nieding. That project examined children's comprehension of narrative text and its development in illustrated versus non-illustrated formats. For this purpose, van Dijk and Kintsch's (1983) tripartite model was used, according to which text recipients form text surface and textbase representations and construct a situation model. In particular, predictions referring to the influence of illustrations on these three levels of text representation were derived from the integrated model of text and picture comprehension (ITPC; Schnotz, 2014), which holds that text-picture units are processed on both text-based (descriptive) and picture-based (depictive) paths. Accordingly, illustrations support the construction of a situation model. Moreover, in line with the embodied cognition account (e.g., Barsalou, 1999), it was assumed that the situation model is grounded in perception and action; text recipients mentally simulate the situation addressed in the text through their neural systems related to perception (perceptual simulation) and action (motor resonance). Therefore, the thesis also examines whether perceptual simulation takes place during story reception, whether it improves the comprehension of illustrated stories, and whether motor resonance is related to the comprehension of text accompanied by dynamic illustrations. Finally, predictions concerning the development of comprehending illustrated text were made in line with Springer's (2001) hypotheses, according to which younger children, compared with older children and adults, focus more on illustrations during text comprehension (perceptual boundedness) and use illustrations for the development of cognitive skills (perceptual support).
The first research question sought to validate the tripartite model in the context of children's comprehension of narrative text, so Hypothesis 1 predicted that children yield representations of the text surface, the textbase, and the situation model during text reception. The second research question comprised the assumptions regarding the impact of illustrations on text comprehension. Accordingly, it was expected that illustrations improve the situation model (Hypothesis 2a), especially when they are processed before their corresponding text passages (Hypothesis 2b). Both hypotheses were derived from the ITPC and the assumption that perceptual simulation supports the situation model. It was further predicted that dynamic illustrations evoke more accurate situation models than static ones (Hypothesis 2c); this followed from the assumption that motor resonance supports the situation model. In line with the ITPC, it was assumed that illustrations impair the textbase (Hypothesis 2d), especially when they are presented after their corresponding text passages (Hypothesis 2e). In accordance with earlier results, it was posited that illustrations have a beneficial effect on the text surface (Hypothesis 2f). The third research question addressed the embodied approach to the situation model. Here, it was assumed that perceptual simulation takes place during text reception (Hypothesis 3a) and that it is more pronounced in illustrated than in non-illustrated text (Hypothesis 3b); the latter hypothesis was related to a necessary premise of the assumption that perceptual simulation improves the comprehension of illustrated text. The fourth research question was related to perceptual boundedness and perceptual support and predicted age-related differences; younger children were expected to benefit more from illustrations regarding the situation model (Hypothesis 4a) and to simulate vertical object movements in a more pronounced fashion (Hypothesis 4b) than older children.
In addition, Hypothesis 4c held that perceptual simulation is more pronounced in younger children, particularly when illustrations are present.
Three experiments were conducted to investigate these hypotheses. Experiment 1 (Seger, Wannagat, & Nieding, submitted) compared the tripartite representations of written text without illustrations, with illustrations presented first, and with illustrations presented after their corresponding sentences. Students between 7 and 13 years old (N = 146) took part. Experiment 2 (Seger, Wannagat, & Nieding, 2019) investigated the tripartite representations of auditory text, audiovisual text with static illustrations, and audiovisual text with dynamic illustrations among children in the same age range (N = 108). In both experiments, a sentence recognition method similar to that introduced by Schmalhofer and Glavanov (1986) was employed. This method enables the simultaneous measurement of all three text representations. Experiment 3 (Seger, Hauf, & Nieding, 2020) determined the perceptual simulation of vertical object movements during the reception of auditory and audiovisual narrative text among children between 5 and 11 years old and among adults (N = 190). For this experiment, a picture verification task based on Stanfield and Zwaan's (2001) paradigm and adapted from Hauf (2016) was used.
The first two experiments confirmed Hypothesis 1, indicating that the tripartite model is applicable to the comprehension of auditory and written narrative text among children. A beneficial effect of illustrations on the situation model was observed when they were presented synchronously with auditory text (Hypothesis 2a), but not when presented asynchronously with written text (Hypothesis 2b), so the ITPC is only partly supported on this point. Hypothesis 2c was rejected, indicating that motor resonance does not make an additional contribution to the comprehension of narrative text with dynamic illustrations. Regarding the textbase, a general negative effect of illustrations was not observed (Hypothesis 2d), but a specific negative effect of illustrations that follow their corresponding text passages was seen (Hypothesis 2e); the latter result is also in line with the ITPC. The text surface (Hypothesis 2f) appears to benefit from illustrations in auditory but not written text. The results obtained in Experiment 3 suggest that children and adults perceptually simulate vertical object movements (Hypothesis 3a), but there appears to be no difference between auditory and audiovisual text (Hypothesis 3b), so there is no support for a functional relationship between perceptual simulation and the situation model in illustrated text. Hypotheses 4a–4c were investigated in all three experiments and did not receive support in any of them, which indicates that representations of illustrated and non-illustrated narrative text remain stable within the age range examined here.
In human interactions, the facial expression of a bargaining partner may contain relevant information that affects prosocial decisions. We were interested in whether facial expressions of the recipient in the dictator game influence dictators' behavior. To test this, we conducted an online study (n = 106) based on a modified version of the dictator game. The dictators allocated money between themselves and another person (the recipient), who had no possibility to respond to the dictator.
Importantly, before the allocation decision, the dictator was presented with the facial expression of the recipient (angry, disgusted, sad, smiling, or neutral). The results showed that dictators sent more money to recipients with sad or smiling facial expressions and less to recipients with angry or disgusted facial expressions compared with a neutral facial expression. Moreover, based on the sequential analysis of the decision and the interaction partner in the preceding trial, we found that decision-making depends upon previous interactions.
Body representations are readily expanded based on sensorimotor experience. A dynamic view of body representations, however, holds that these representations can not only be expanded but also narrowed down by disembodying elements of the body representation that are no longer warranted. Here we induced illusory ownership in terms of a moving rubber hand illusion and studied the maintenance of this illusion across different conditions. We observed ownership experience to decrease gradually unless participants continued to receive confirmatory multisensory input. Moreover, a single instance of multisensory mismatch – a hammer striking the rubber hand but not the real hand – triggered substantial and immediate disembodiment. Together, these findings support and extend previous theoretical efforts to model body representations through basic mechanisms of multisensory integration. They further support an updating model suggesting that embodied entities fade from the body representation if they are not refreshed continuously.
Voluntary actions and causally linked sensory stimuli are perceived to be shifted towards each other in time. This so-called temporal binding is commonly assessed in paradigms using the Libet clock: participants estimate the timing of performed actions or of ensuing sensory stimuli (usually tones) by means of a rotating clock hand presented on a screen. This setup is, however, ill-suited for many conceivable experiments, especially those involving visual effects. To address this shortcoming, the line of research presented here establishes an alternative measure of temporal binding based on a sequence of timed sounds: an auditory timer, a sequence of letters presented during task execution, which serves as a set of anchors for temporal judgments. In four experiments, we manipulated four design factors of this auditory timer, namely interval length, interval filling, sequence predictability, and sequence length, to determine the most effective and economical method for measuring temporal binding with an auditory timer.
Stronger reactivity to social gaze in virtual reality compared to a classical laboratory environment
(2021)
People show a robust tendency to gaze at other human beings when viewing images or videos, but have also been found to relatively avoid gazing at others in several real-world situations. This discrepancy, along with theoretical considerations, has spawned doubts about the appropriateness of classical laboratory-based experimental paradigms in social attention research. Several researchers have instead suggested the use of immersive virtual scenarios for eliciting and measuring naturalistic attentional patterns, but the field, struggling with methodological challenges, still needs to establish the advantages of this approach. Here, using eye-tracking in a complex social scenario displayed in virtual reality, we show that participants direct enhanced attention towards the face of an avatar at near distance and show increased reactivity towards her social gaze compared with participants who viewed the same scene on a computer monitor. The present study suggests that reactive virtual agents observed in immersive virtual reality can elicit natural modes of information processing and can help to conduct ecologically more valid experiments while maintaining high experimental control.
In an experiment with 114 children aged 9–12 years, we compared the ability to establish local and global coherence of narrative texts between auditory and audiovisual (auditory text and pictures) presentation. The participants listened to a series of short narrative texts, in each of which a protagonist pursued a goal. Following each text, we collected the response time to a query word that was either associated with a near or a distant causal antecedent of the final sentence. Analysis of these response times indicated that audiovisual presentation has advantages over auditory presentation for accessing information relevant for establishing both local and global coherence, but there are indications that this effect may be slightly more pronounced for global coherence.
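The response-time comparison described in this abstract is, at its core, a between-condition mean comparison. The sketch below runs such a comparison on simulated response times; the means, standard deviations, and group sizes are invented, and a Welch t statistic stands in for whatever model the study actually fitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical response times (ms) to the query word; the assumed
# advantage of audiovisual presentation appears as a lower mean
rt_auditory = rng.normal(1150.0, 150.0, size=57)
rt_audiovisual = rng.normal(950.0, 150.0, size=57)

# Welch's t statistic, written out with numpy only
m1, m2 = rt_auditory.mean(), rt_audiovisual.mean()
v1, v2 = rt_auditory.var(ddof=1), rt_audiovisual.var(ddof=1)
n1, n2 = rt_auditory.size, rt_audiovisual.size
t = (m1 - m2) / np.sqrt(v1 / n1 + v2 / n2)
print(f"mean difference: {m1 - m2:.1f} ms, t = {t:.2f}")
```

A faster mean response in the audiovisual condition, as in this simulation, would correspond to the access advantage the abstract reports.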
Purpose
Examine the effects of an 8-week yoga therapy on fatigue in patients with different types of cancer.
Methods
A total of 173 cancer patients suffering from mild to severe fatigue were randomly allocated to a yoga intervention group (IG; n = 84) or a waitlist control group (CG; n = 88). Yoga therapy consisted of eight weekly sessions of 60 min each. The primary outcome was self-reported fatigue symptoms. Secondary outcomes were symptoms of depression and quality of life (QoL). Data were assessed using questionnaires before (T0) and after (T1) yoga therapy for the IG versus the waiting period for the CG.
Results
A stronger reduction of general fatigue (P = .033), physical fatigue (P = .048), and depression (P < .001), as well as a stronger increase in QoL (P = .002), was found for patients who attended 7 or 8 sessions compared with controls. Within the yoga group, both a higher attendance rate and lower T0 fatigue were significant predictors of lower T1 fatigue (P ≤ .001). Exploratory results revealed that, after yoga therapy, women with breast cancer reported a greater reduction of fatigue than women with other types of cancer (P = .016).
Conclusion
The findings support the assumption that yoga therapy is useful for reducing cancer-related fatigue, especially its physical aspects. Women with breast cancer seem to benefit most, and a higher attendance rate results in a greater reduction of fatigue.
Trial registration
German Clinical Trials Register DRKS00016034
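The predictor analysis reported in the Results above (attendance rate and baseline fatigue predicting T1 fatigue within the yoga group) amounts to a multiple linear regression. The sketch below fits that kind of model on simulated data; the coefficients, score scales, and noise level are assumptions for illustration, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 84  # size of the yoga intervention group in the study

# Simulated data, constructed only so that higher attendance and lower
# baseline fatigue predict lower post-treatment fatigue, as reported
attendance = rng.integers(1, 9, size=n).astype(float)  # sessions attended (1-8)
t0_fatigue = rng.normal(50.0, 10.0, size=n)            # baseline fatigue score
t1_fatigue = (15.0 + 0.6 * t0_fatigue - 2.0 * attendance
              + rng.normal(0.0, 5.0, size=n))

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), attendance, t0_fatigue])
beta, *_ = np.linalg.lstsq(X, t1_fatigue, rcond=None)
intercept, b_attendance, b_t0 = beta
print(f"attendance: {b_attendance:.2f}, baseline fatigue: {b_t0:.2f}")
```

On such data the fitted attendance coefficient is negative and the baseline-fatigue coefficient positive, matching the direction of the reported predictors.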
Virtual reality exposure therapy (VRET) is an effective cognitive-behavioral treatment for anxiety disorders that comprises systematic confrontations to virtual representations of feared stimuli and situations.
However, not all patients respond to VRET, and some patients relapse after successful treatment. One explanation for this limitation of VRET is that its underlying mechanisms are not yet fully understood, leaving room for further improvement.
On these grounds, the present thesis aimed to investigate two major research questions: first, it explored how virtual stimuli induce fear responses in height-fearful participants, and second, it tested if VRET outcome could be improved by incorporating techniques derived from two different theories of exposure therapy. To this end, five studies in virtual reality (VR) were conducted.
Study 1 (N = 99) established a virtual environment for height exposure using a Cave Automatic Virtual Environment (CAVE) and investigated the effects of tactile wind simulation in VR. Height-fearful and non-fearful participants climbed a virtual outlook, and half of the participants received wind simulation. Results revealed that height-fearful participants showed stronger fear responses on both a subjective and a behavioral level, and that wind simulation increased subjective fear. However, adding tactile wind simulation in VR did not affect presence, the user's sense of 'being there' in the virtual environment. Replicating previous studies, fear and presence in VR were correlated, and the correlation was higher in height-fearful than in non-fearful participants.
Study 2 (N = 43) sought to corroborate the findings of the first study, using a different VR system for exposure (a head-mounted display) and measuring physiological fear responses. In addition, the effects of a visual cognitive distractor on fear in VR were investigated. Participants' fear responses were evident on both a subjective and a physiological level (although much more pronounced on skin conductance than on heart rate), but the virtual distractor did not affect the strength of fear responses.
In Study 3 (N = 50), the effects of trait height-fearfulness and height level on fear responses were investigated in more detail. Self-rated level of acrophobia and five different height levels in VR (1 m to 20 m) were used as linear predictors of subjective and physiological indices of fear. Results showed that subjective fear and skin conductance responses were a function of both trait height-fearfulness and height level, whereas no clear effects were visible for heart rate.
Study 4 (N = 64 + N = 49) aimed to advance the understanding of the relationship between presence and fear in VR. Previous research indicates a positive correlation between both measures, but possible causal mechanisms have not yet been identified. The study was the first to experimentally manipulate both presence (via the visual and auditory realism of the virtual environment) and fear (by presenting both height and control situations). Results indicated a causal effect of fear on presence, i.e., experiencing fear in a virtual environment led to a stronger sense of 'being there' in it. Conversely, however, increases in presence induced by higher scene realism did not affect fear responses. Nonetheless, presence seemed to have some effect on fear responding via another pathway: participants whose presence levels were highest in the first safe context were also those who had the strongest fear responses in a later height situation. This finding indicates the importance of immersive user characteristics in the emergence of presence and fear in VR.
The findings of the first four studies were integrated into a model of fear in VR, extending previous models and highlighting factors that lead to the emergence of both fear and presence in VR. Results of the studies showed that fear responses towards virtual heights were affected by trait height-fearfulness, by phobic elements in the virtual environment, and, at least to some degree, by presence. Presence, on the other hand, was affected by experiencing fear in VR, by immersion (the characteristics of the VR system), and by immersive user characteristics. Of note, the manipulations of immersion used in the present thesis (visual and auditory realism of the virtual environment and tactile wind simulation) were not particularly effective in manipulating presence.
Finally, Study 5 (N = 34) compared two different implementations of VRET for acrophobia to investigate mechanisms underlying its efficacy. The first implementation followed the Emotional Processing Theory, assuming that fear reduction during exposure is crucial for positive treatment outcome. In this condition, patients were asked to focus on their fear responses and on the decline of fear (habituation) during exposures. The second implementation was based on the inhibitory learning model, assuming that expectancy violation is the primary mechanism underlying exposure therapy efficacy. In this condition, patients were asked to focus on the non-occurrence of feared outcomes (e.g., 'I could fall off') during exposure. Based on predictions of the inhibitory learning model, the hypothesis for the study was that expectancy-violation-based exposure would outperform habituation-based exposure.
After two treatment sessions in VR, both treatment conditions effectively reduced the patients' fear of heights, but the two conditions did not differ in their efficacy. The study replicated previous studies by showing that VRET is an effective treatment for acrophobia; however, contrary to the assumption, explicitly targeting the violation of threat expectancies did not improve outcome. This finding adds to other studies failing to provide clear evidence for expectancy violation as the primary mechanism underlying exposure therapy. Possible explanations for this finding and clinical implications are discussed, along with suggestions for further research.