Animals, just like humans, can move freely. They do so for various important reasons, such as finding food and escaping predators. Observing these behaviors can inform us about the underlying cognitive processes. In addition, while humans can easily convey complicated information through speech, animals need to move their bodies to communicate. This has prompted many creative solutions by animal neuroscientists to enable the study of the brain during movement. In this review, we first summarize how animal researchers record from the brain while an animal is moving, describing the most common neural recording techniques in animals and how they were adapted to record during movement. We further discuss the challenge of controlling or monitoring sensory input during free movement.
However, free movement is not only necessary for animals to express the outcomes of certain internal cognitive processes; it is also a fascinating field of research in its own right, since certain crucial behavioral patterns can only be observed and studied during free movement. Therefore, in the second part of the review, we focus on key findings in animal research that specifically address the interaction between free movement and brain activity. First, focusing on walking as a fundamental form of free movement, we discuss how important such intentional movements are for understanding processes as diverse as spatial navigation, active sensing, and complex motor planning. Second, we propose the idea of regarding free movement as the expression of a behavioral state. This view can help to explain the general influence of movement on brain function.
Together, the technological advancements towards recording from the brain during movement, and the scientific questions asked about the brain engaged in movement, make animal research highly valuable to research into the human “moving brain”.
Pupil dilation is known to be affected by a variety of factors, including physical sources of influence (e.g., light) and cognitive ones (e.g., mental load due to working memory demands, stimulus/response competition, etc.). In the present experiment, we tested the extent to which vocal demands (speaking) can affect pupil dilation. Based on corresponding preliminary evidence found in a reanalysis of an existing data set from our lab, we set up a new experiment that systematically investigated vocal response-related effects compared with mere jaw/lip movements and button-press responses. Conditions changed on a trial-by-trial basis while participants were instructed to maintain fixation on a central cross on a screen throughout. In line with our prediction (and previous observation), speaking caused the pupils to dilate most strongly, followed by nonvocal movements and finally a baseline condition without any vocal or muscular demands. An additional analysis of blink rates showed no difference in blink frequency between vocal and baseline conditions, but different blink dynamics. Finally, simultaneously recorded electromyographic activity showed that muscle activity may contribute to some (but not all) aspects of the observed effects on pupil size. The results are discussed in the context of other recent research indicating effects of perceived (rather than executed) vocal action on pupil dynamics.
Does generation benefit learning for narrative and expository texts? A direct replication attempt
(2021)
Generated information is better recognized and recalled than information that is read. This so-called generation effect has been replicated several times for different types of material, including texts. Perhaps the most influential demonstration was by McDaniel et al. (1986, Journal of Memory and Language, 25, 645–656; henceforth MEDC). This group tested whether the generation effect occurs only if the generation task stimulates cognitive processes not already stimulated by the text. Numerous studies, however, report difficulties replicating this text-by-generation-task interaction, which suggests that the effect might only be found under conditions closer to the original method of MEDC. To test this assumption, we will closely replicate MEDC's Experiment 2 in German- and English-speaking samples. Replicating the effect would suggest that it can be reproduced, at least under limited conditions, which will provide the necessary foundation for future investigations into the boundary conditions of this effect, with an eye towards its utility in applied contexts.
The ability to spell words correctly is a key competence for educational and professional achievement. Economical procedures are essential for identifying children with spelling problems as early as possible. Given the strong evidence that reading and spelling are based on the same orthographic knowledge, error-detection tasks (EDTs) could serve as such an economical procedure. Although EDTs are widely used in English-speaking countries, the few studies in German-speaking countries investigated only pupils in secondary school. The present study investigated N = 1,513 children in elementary school. We predicted spelling competencies (measured by dictation or gap-fill dictation) from an EDT via linear regression. Error-detection abilities significantly predicted spelling competencies (R² between .509 and .679), indicating a strong connection. The predictive values of an EDT in identifying children with poor spelling abilities proved sufficient. Error detection is therefore a valid instrument for assessing spelling skills in transparent languages as well.
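The reported prediction of spelling competence from error-detection scores is a simple linear regression, and the R² values quantify the share of spelling-score variance explained. A minimal sketch of this computation, using simulated data (not the study's actual data; all numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated, hypothetical data: error-detection scores and spelling
# scores for n children, made to correlate by construction.
n = 1513
edt = rng.normal(50, 10, n)                 # error-detection task score
spelling = 0.8 * edt + rng.normal(0, 7, n)  # spelling competence score

# Ordinary least squares with an intercept: spelling ~ b0 + b1 * edt
X = np.column_stack([np.ones(n), edt])
b, *_ = np.linalg.lstsq(X, spelling, rcond=None)
pred = X @ b

# R^2 = 1 - SS_residual / SS_total, the proportion of variance explained
ss_res = np.sum((spelling - pred) ** 2)
ss_tot = np.sum((spelling - spelling.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 3))
```

With the simulation parameters chosen here, R² lands in roughly the range the abstract reports, which illustrates how strongly two measures must covary to produce such values.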
University study places high demands on self-regulatory abilities and on dealing independently with difficult situations. The additional language barriers faced by international students give rise to specific self-regulatory tasks, such as coping with comprehension problems in lectures. Since hardly any suitable assessment instruments exist for this purpose so far, ScenEx attempts to close this gap. The test measures metacognitive strategy knowledge in linguistically challenging situations of everyday university life. Based on a sample of 290 international students in their first semester, the psychometric quality and internal structure of the instrument were examined. ScenEx shows satisfactory internal consistency and good item-fit statistics; as expected, local stochastic dependencies of the items occur within the scenarios. A confirmatory factor analysis confirms the broad structure of the scenarios and the test's total score. The instrument is predictive of further language development beyond initial language proficiency. Overall, ScenEx proves to be a reliable and valid instrument for assessing strategy knowledge in difficult situations during university study.
Stranger, Lover, Friend?
(2021)
Social exclusion, even from minimal game-based interactions, induces negative consequences. We investigated whether the nature of the relationship with the excluder modulates the effects of ostracism. Participants played a virtual ball-tossing game with a stranger and a friend (friend condition) or a stranger and their romantic partner (partner condition) while being fully included, fully excluded, excluded only by the stranger, or excluded only by their close other. Replicating previous findings, full exclusion impaired participants’ basic-need satisfaction and relationship evaluation most severely. While the degree of exclusion mattered, the relationship to the excluder did not: Classic null hypothesis testing and Bayesian statistics showed no modulation of ostracism effects depending on whether participants were excluded by a stranger, a friend, or their partner.
Background: The distinctness of grief from depression has been the subject of a long scholarly debate, even influencing definitions of diagnostic criteria. Aims: This study aims to clarify the issue through a multifaceted analysis of data from a large German sample. Method: A community sample of 406 bereaved persons answered the Wuerzburg Grief Inventory (WGI), a multidimensional grief questionnaire designed to measure normal grief in the German language, and the General Depression Scale – Short Version (GDS-S), a self-report depression scale. Data were analyzed by factor analysis to identify structural (dis-)similarities of the constructs, and by analysis of variance (ANOVA) to identify the influence of the factors relationship to the deceased, type of death, and time since loss on grief measures and depression scores. Results: Factor analysis clustered items referring to grief-related impairments and depression into one factor, but items referring to other dimensions of grief onto separate factors. Relationship to the deceased influenced the grief measures impairments and nearness to the deceased, but not depression scores when controlled for impairments. Type of death showed specific effects on grief scores, but not on depression scores. Time since loss influenced grief scores, but not depression scores. Limitations: The analysis is based on a self-selected community sample of grieving persons, self-report measures, and, in part, cross-sectional data. Conclusion: Factor analysis and objective data show a clear distinction between dimensions of grief and depression. The human experience of grief contains a sense of nearness to the lost person, feelings of guilt, and positive aspects of the loss experience, in addition to components resembling depression.
Humans use their eyes not only as visual input devices to perceive the environment, but also as an action tool to generate intended effects in their environment. For instance, glances are used to direct someone else's attention to a place of interest, indicating that gaze control is an important part of social communication. Previous research on gaze control in a social context mainly focused on the gaze recipient by asking how humans respond to perceived gaze (gaze cueing). So far, this research has hardly considered the actor's point of view, neglecting to investigate what mental processes are involved when actors decide to perform an eye movement to trigger a gaze response in another person. Furthermore, eye movements are also used to affect the non-social environment, for instance when unlocking a smartphone with one's eyes. This and other observations demonstrate the necessity of considering gaze control in contexts other than social communication, while at the same time focusing on commonalities and differences inherent to the nature of a social (vs. non-social) action context. Thus, the present work explores the cognitive mechanisms that control such goal-oriented eye movements in both social and non-social contexts.
The experiments presented throughout this work are built on pre-established paradigms from both the oculomotor research domain and basic cognitive psychology. These paradigms are based on the principle of ideomotor action control, which provides an explanatory framework for understanding how goal-oriented, intentional actions come into being. The ideomotor idea suggests that humans acquire associations between their actions and the resulting effects, which can be accessed in a bi-directional manner: Actions can trigger anticipations of their effects, but the anticipated effects can also trigger the associated actions. According to ideomotor theory, action generation involves the mental anticipation of the intended effect (i.e., the action goal) to activate the associated motor pattern. The present experiments involve situations where participants control the gaze of a virtual face via their eye movements. The triggered gaze responses of the virtual face are congruent with the participant's eye movements, representing visual action effects. Experimental situations are varied with respect to determinants of action-effect learning (e.g., contingency, contiguity, action mode during acquisition) in order to unravel the underlying dynamics of oculomotor control in these situations. In addition to faces, conditions involving changes in non-social objects were included to address the question of whether the mechanisms underlying gaze control differ between social and non-social contexts.
The results of the present work can be summarized into three major findings. 1. My data suggest that humans indeed acquire bi-directional associations between their eye movements and the subsequently perceived gaze response of another person, which in turn affect oculomotor action control via the anticipation of the intended effects. The observed results show for the first time that eye movements in a gaze-interaction scenario are represented in terms of the gaze responses they elicit in others. This observation is in line with the ideomotor theory of action control. 2. The present series of experiments confirms and extends the pioneering results of Huestegge and Kreutzfeldt (2012) with respect to the significant influence of action effects in gaze control. I have shown that the results of Huestegge and Kreutzfeldt (2012) can be replicated across different contexts with different stimulus material, provided that the perceived action effects are sufficiently salient. 3. Furthermore, I showed that mechanisms of gaze control in a social gaze-interaction context do not appear to be qualitatively different from those in a non-social context.
All in all, the results support recent theoretical claims emphasizing the role of anticipation-based action control in social interaction. Moreover, my results suggest that anticipation-based gaze control in a social context is based on the same general psychological mechanisms as ideomotor gaze control, and thus should be considered as an integral part rather than as a special form of ideomotor gaze control.
Sensory input as well as cognitive factors can drive the modulation of blinking. Our aim was to dissociate sensory-driven bottom-up influences from cognitive top-down influences on blinking behavior and to compare these influences between the auditory and the visual domain.
Using an oddball paradigm, we found a significant pre-stimulus decrease in blink probability for visual input compared to auditory input. Sensory input further led to an early post-stimulus blink increase in both modalities if a task demanded attention to the input. Only visual input caused a pronounced early increase without a task. In case of a target or the omission of a stimulus (as compared to standard input), an additional late increase in blink rate was found in the auditory and visual domain. This suggests that blink modulation must be based on the interpretation of the input, but does not need any sensory input at all to occur.
Our results show a complex modulation of blinking based on top-down factors such as prediction and attention, in addition to sensory-based influences. The magnitude of the modulation is mainly influenced by general attentional demands, while the latency of this modulation makes it possible to dissociate general from specific top-down influences that are independent of the sensory domain.
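Peri-stimulus blink probability, as contrasted above between pre- and post-stimulus intervals, is typically computed by counting blink events within fixed time windows around each stimulus onset. A toy sketch of that windowing step (all event times are invented for illustration; in a real study they would come from eye-tracker blink detection):

```python
import numpy as np

# Hypothetical blink and stimulus onset times in seconds.
blink_times = np.array([0.4, 2.1, 2.3, 4.9, 6.2, 8.5, 8.7])
stim_onsets = np.array([2.0, 5.0, 8.0])

def blink_rate(window_start, window_end):
    """Mean number of blinks per trial inside a peri-stimulus window
    given relative to each stimulus onset."""
    counts = [
        np.sum((blink_times >= onset + window_start) &
               (blink_times < onset + window_end))
        for onset in stim_onsets
    ]
    return float(np.mean(counts))

# Pre-stimulus window (-0.5 to 0 s) vs. early post-stimulus window (0 to 0.5 s)
pre = blink_rate(-0.5, 0.0)
post = blink_rate(0.0, 0.5)
print(pre, post)
```

Comparing such window counts across conditions (e.g., attended vs. unattended input, standards vs. targets) yields the pre-stimulus decreases and post-stimulus increases described in the abstract.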
This doctoral thesis is part of a research project on the development of the cognitive comprehension of film at Würzburg University that was funded by the German Research Foundation (Deutsche Forschungsgemeinschaft) between 2013 and 2019 and awarded to Gerhild Nieding. That project examined children's comprehension of narrative text and its development in illustrated versus non-illustrated formats. For this purpose, van Dijk and Kintsch's (1983) tripartite model was used, according to which text recipients form text surface and textbase representations and construct a situation model. In particular, predictions referring to the influence of illustrations on these three levels of text representation were derived from the integrated model of text and picture comprehension (ITPC; Schnotz, 2014), which holds that text-picture units are processed on both text-based (descriptive) and picture-based (depictive) paths. Accordingly, illustrations support the construction of a situation model. Moreover, in line with the embodied cognition account (e.g., Barsalou, 1999), it was assumed that the situation model is grounded in perception and action; text recipients mentally simulate the situation addressed in the text through their neural systems related to perception (perceptual simulation) and action (motor resonance). Therefore, the thesis also examines whether perceptual simulation takes place during story reception, whether it improves the comprehension of illustrated stories, and whether motor resonance is related to the comprehension of text accompanied by dynamic illustrations. Finally, predictions concerning the development of comprehending illustrated text were made in line with Springer's (2001) hypotheses, according to which younger children, compared with older children and adults, focus more on illustrations during text comprehension (perceptual boundedness) and use illustrations for the development of cognitive skills (perceptual support).
The first research question sought to validate the tripartite model in the context of children's comprehension of narrative text, so Hypothesis 1 predicted that children yield representations of the text surface, the textbase, and the situation model during text reception. The second research question comprised the assumptions regarding the impact of illustrations on text comprehension. Accordingly, it was expected that illustrations improve the situation model (Hypothesis 2a), especially when they are processed before their corresponding text passages (Hypothesis 2b). Both hypotheses were derived from the ITPC and the assumption that perceptual simulation supports the situation model. It was further predicted that dynamic illustrations evoke more accurate situation models than static ones (Hypothesis 2c); this followed from the assumption that motor resonance supports the situation model. In line with the ITPC, it was assumed that illustrations impair the textbase (Hypothesis 2d), especially when they are presented after their corresponding text passages (Hypothesis 2e). In accordance with earlier results, it was posited that illustrations have a beneficial effect on the text surface (Hypothesis 2f). The third research question addressed the embodied approach to the situation model. Here, it was assumed that perceptual simulation takes place during text reception (Hypothesis 3a) and that it is more pronounced in illustrated than in non-illustrated text (Hypothesis 3b); the latter hypothesis was related to a necessary premise of the assumption that perceptual simulation improves the comprehension of illustrated text. The fourth research question was related to perceptual boundedness and perceptual support and predicted age-related differences; younger children were expected to benefit more from illustrations regarding the situation model (Hypothesis 4a) and to simulate vertical object movements in a more pronounced fashion (Hypothesis 4b) than older children. In addition, Hypothesis 4c held that perceptual simulation is more pronounced in younger children particularly when illustrations are present.
Three experiments were conducted to investigate these hypotheses. Experiment 1 (Seger, Wannagat, & Nieding, submitted) compared the tripartite representations of written text without illustrations, with illustrations presented first, and with illustrations presented after their corresponding sentences. Students between 7 and 13 years old (N = 146) took part. Experiment 2 (Seger, Wannagat, & Nieding, 2019) investigated the tripartite representations of auditory text, audiovisual text with static illustrations, and audiovisual text with dynamic illustrations among children in the same age range (N = 108). In both experiments, a sentence recognition method similar to that introduced by Schmalhofer and Glavanov (1986) was employed. This method enables the simultaneous measurement of all three text representations. Experiment 3 (Seger, Hauf, & Nieding, 2020) determined the perceptual simulation of vertical object movements during the reception of auditory and audiovisual narrative text among children between 5 and 11 years old and among adults (N = 190). For this experiment, a picture verification task based on Stanfield and Zwaan's (2001) paradigm and adapted from Hauf (2016) was used.
The first two experiments confirmed Hypothesis 1, indicating that the tripartite model is applicable to the comprehension of auditory and written narrative text among children. A beneficial effect of illustrations on the situation model was observed when they were presented synchronously with auditory text (Hypothesis 2a), but not when presented asynchronously with written text (Hypothesis 2b), so the ITPC is partly supported on this point. Hypothesis 2c was rejected, indicating that motor resonance does not make an additional contribution to the comprehension of narrative text with dynamic illustrations. Regarding the textbase, a general negative effect of illustrations was not observed (Hypothesis 2d), but a specific negative effect of illustrations that follow their corresponding text passages was seen (Hypothesis 2e); the latter result is also in line with the ITPC. The text surface (Hypothesis 2f) appears to benefit from illustrations in auditory but not written text. The results obtained in Experiment 3 suggest that children and adults perceptually simulate vertical object movements (Hypothesis 3a), but there appears to be no difference between auditory and audiovisual text (Hypothesis 3b), so there is no support for a functional relationship between perceptual simulation and the situation model in illustrated text. Hypotheses 4a–4c were investigated in all three experiments and did not receive support in any of them, which indicates that representations of illustrated and non-illustrated narrative text remain stable within the age range examined here.