This thesis consists of three studies investigating the influence media literacy has on political variables, cognitive variables, and learning. Adolescents from 13 years of age and young adults were included in the studies. The thesis is divided into three chapters. Studies I and II form one comprehensive study but are presented separately for better readability. Chapter I provides the reader with background knowledge for the original studies presented in chapter II: it includes information about media use, different conceptualizations of media literacy and its development over the lifetime, as well as media literacy’s impact on cognitive and political variables. Additionally, current literature on the comparison of the learning outcomes of different kinds of texts (written, auditory, and audiovisual) is presented, with a differentiation between text-based information and inferences. In chapter II, the original studies are placed in the current state of research and presented in detail. In chapter III, a critical discussion of the studies is conducted, and a general model of the influence media literacy has on the investigated cognitive and political factors is presented, followed by a conclusion of the research.
The theoretical foundation of this thesis comprises three models of media literacy proposed by Groeben (2002, 2004), Hobbs (1997), and Potter (1998, 2016). These three models are similar in that they define media literacy as a multifactorial construct whose component skills develop further over the course of life. Their ideas are integrated and extended, leading to our own model of media literacy, which encompasses five scales: media sign literacy, distinction between reality and fiction, knowledge of media law, knowledge of media effects, and production skills. On this basis, the assessment tool Würzburg Media Literacy Test (WMK; Würzburger Medienkompetenztest) was designed.
There is evidence that media use and media literacy influence socio-political factors. Young adults name the internet as their main source of information on political topics (see Pasek et al., 2006), and knowledge demonstrably fosters political participation (Delli Carpini & Keeter, 1996). However, the kind of participation activity under consideration matters (Quintelier & Vissers, 2008), as real-life participation is sometimes supplemented by online activities (Quan-Haase & Wellman, 2002). Media literacy is key to evaluating the quality of information from media. Whether a direct link between media literacy and political interest exists has, to my knowledge, not yet been investigated. Several studies have shown that precursors and subcomponents of media literacy can influence cognitive variables. For instance, children with higher media sign literacy possess better reading proficiency (Nieding et al., 2017) and are better at collecting information and drawing inferences from hypermedia and films (Diergarten et al., 2017) than children with low media sign literacy. Such children process media sign systems more efficiently, which reduces cognitive load and, consequently, frees cognitive capacity for other mental tasks (Sweller, 1988). Paino and Renzulli (2012) showed that highly computer-proficient adolescents exhibit better mathematics and reading abilities. Different types of media influence the learning process differently, and learning can be enhanced by combining them, provided the material is prepared according to the research findings and Mayer’s (2002) cognitive theory of multimedia learning. In that case, cognitive load is likewise reduced and more resources can be invested in the learning process itself (Mayer & Moreno, 2003; Sweller, 1988). Whether one medium is superior to another for learning is not easy to answer.
Generally, adults learn best from written texts (e.g., Byrne & Curtis, 2000), while audiovisual and auditory texts yield comparable results (e.g., Hayes et al., 1986); however, there is little research directly comparing the latter two.
Study I examined whether media literacy has a positive impact on interest in politics and the political self-concept. A sample of 101 13- to 20-year-olds was drawn. The control variables were intelligence, socio-economic status (SES), openness to experiences, perspective-taking, age, and sex. Additionally, an evaluation of the WMK was conducted, which indicated good construct validity and excellent overall reliability. Media literacy was positively associated with interest in politics, political self-concept, and perspective-taking, but not with openness. In hierarchical regressions and path analyses, a direct influence of media literacy and openness on interest in politics was found. Political self-concept was influenced solely by interest in politics. Although media literacy had no direct influence on political self-concept, it influenced its precursor, interest in politics, and was thus expected to have a distal influence. The results of the first study confirm previous findings (e.g., Vecchione & Caprara, 2009) in which political self-concept is regarded as a precursor of political participation. In conclusion, the findings of study I suggest that by stimulating political interest, media literacy could, mediated through political self-concept, foster political participation.
Study II (which was conducted on the same sample as study I) was concerned with the question of whether highly media-literate adolescent and young adult participants exhibit better academic skills (mathematics; reading) and academic achievement (grades) compared to less media-literate participants. Additionally, to obtain information about potential development during adolescence, a group of 50 13-year-olds was compared with a group of 51 19-year-olds in terms of their media literacy. The control variables were intelligence, SES, sex, and age. The results showed that significant development of media literacy takes place during adolescence (∆M = .17), in agreement with Potter’s (1998, 2013) developmental theory of media literacy. Media literacy was significantly correlated with reading skills and school grades. For the young adults, media literacy was also significantly correlated with mathematical skills, and this association was stronger than that with reading skills. For the adolescents, however, no connection with mathematical skills was found. To control for the influence of age and intelligence, which were both associated with media literacy, hierarchical regressions and path analyses were conducted. The results revealed that media literacy had a greater impact on grades and academic abilities than intelligence. These results are in line with those obtained by Paino and Renzulli (2012).
Study III investigated whether media literacy helps young adults to better learn from three kinds of media, a written, an auditory, and an audiovisual text, and which medium achieves the best learning results. Three groups of 91 young adults were compared (written, auditory, and audiovisual text) in terms of their learning outcomes. These outcomes were conceptualized as directly stated information in the text (assessed by text-based questions) and inferential learning (inference questions). A computer-based short version of the WMK, which should be optimized in the future, was applied to assess media literacy. The control variables were intelligence, verbal ability, media usage, prior knowledge, and SES. In hierarchical regressions, media literacy turned out to be a significant predictor of text inferences, even when other relevant variables, such as intelligence, were controlled for. Inferences foster the building of the situation model, which is believed by many authors to constitute true comprehension of a text (Zwaan & Radvansky, 1998). The outcomes of study III support Ohler’s (1994) assumption that media literacy fosters the creation of a more elaborated situation model. Text-based questions were influenced only by prior knowledge. As assumed by Potter (1998, 2016), the media literacy of young adults in the Western world suffices to extract relevant facts from educational learning material. Performance was best in the written text condition for both text-based and inference questions, whereas the audiovisual and auditory conditions showed no significant differences; for inferences, however, the written text condition did not significantly outperform the auditory text condition. The results accord with those obtained by, for instance, Byrne and Curtis (2000).
Taken together, these studies show that media literacy can influence several cognitive and political variables. It stimulates political interest, reading comprehension, school grades, and mathematical abilities in young adults, as well as drawing inferences from different kinds of texts. Additionally, media literacy develops further during adolescence.
Spin-lock based functional magnetic resonance imaging (fMRI) has the potential for direct spatially-resolved detection of neuronal activity and thus may represent an important step for basic research in neuroscience. In this work, the corresponding fundamental effect of Rotary EXcitation (REX) is investigated both in simulations as well as in phantom and in vivo experiments. An empirical law for predicting optimal spin-lock pulse durations for maximum magnetic field sensitivity was found. Experimental conditions were established that allow robust detection of ultra-weak magnetic field oscillations with simultaneous compensation of static field inhomogeneities. Furthermore, this work presents a novel concept for the emulation of brain activity utilizing the built-in MRI gradient system, which allows REX sequences to be validated in vivo under controlled and reproducible conditions. Via transmission of Rotary EXcitation (tREX), we successfully detected magnetic field oscillations in the lower nano-Tesla range in brain tissue. Moreover, tREX paves the way for the quantification of biomagnetic fields.
Despite high levels of distress, family caregivers of patients with cancer rarely seek psychosocial support, and Internet-based interventions (IBIs) are a promising approach to reduce some access barriers. Therefore, we developed a self-guided IBI for family caregivers of patients with cancer (OAse), which, in addition to patients' spouses, also addresses other family members (e.g., adult children, parents). This study aimed to determine the feasibility of OAse (recruitment, dropout, adherence, participant satisfaction). Secondary outcomes were caregivers’ self-efficacy, emotional state, and supportive care needs. N = 41 family caregivers participated in the study (female: 65%), mostly spouses (71%), followed by children (20%), parents (7%), and friends (2%). Recruitment (47%), retention (68%), and adherence rates (76% completed at least 4 of 6 lessons) support the feasibility of OAse. The results showed a high degree of overall participant satisfaction (96%). There were no significant pre-post differences in the secondary outcome criteria, but a trend toward improvement in managing difficult interactions/emotions (p = .06) and depression/anxiety (p = .06). Although the efficacy of the intervention remains to be investigated, our results suggest that OAse can be well implemented in caregivers’ daily lives and has the potential to improve family caregivers’ coping strategies.
Studies with the retro-cue paradigm have shown that validly cueing objects in visual working memory long after encoding can still benefit performance on subsequent change detection tasks. With regard to the effects of invalid cues, the literature is less clear: some studies reported costs, others did not. We here revisit two recent studies that made interesting suggestions concerning invalid retro-cues: one study suggested that costs only occur for larger set sizes, and another suggested that the inclusion of invalid retro-cues diminishes the retro-cue benefit. New data from one experiment and a reanalysis of published data are provided to address these conclusions. The new data clearly show costs (and benefits) that were independent of set size, and the reanalysis suggests no influence of the inclusion of invalid retro-cues on the retro-cue benefit. Thus, previous interpretations should be treated with some caution at present.
Research with adults in laboratory settings has shown that distributed rereading is a beneficial learning strategy, but its effects depend on the time of test. When learning outcomes are measured immediately after rereading, distributed rereading yields no benefits or even detrimental effects on learning, but beneficial effects emerge two days later. In a preregistered experiment, the effects of distributed rereading were investigated in a classroom setting with school students. Seventh-graders (N = 191) reread a text either immediately or after 1 week. Learning outcomes were measured after 4 min or 1 week. Participants in the distributed rereading condition reread the text more slowly, predicted their learning success to be lower, and reported a lower on-task focus. At the shorter retention interval, massed rereading outperformed distributed rereading in terms of learning outcomes. In contrast to students in the massed condition, students in the distributed condition showed no forgetting from the short to the long retention interval. As a result, they performed as well as the students in the massed condition at the longer retention interval. Our results indicate that distributed rereading makes learning more demanding and difficult and leads to higher effort during rereading. Its effects on learning depend on the time of test, but no beneficial effects were found, not even at the delayed test.
Distributed practice is a well-known learning strategy whose beneficial effects on long-term learning are well established by numerous experiments. In learning from texts, the benefits of distribution might even go beyond distributed practice, i.e., the distribution of repeated materials. In realistic learning scenarios, such as school or university learning, the reader might read multiple texts that do not repeat but complement each other. Distribution might therefore also be implemented between multiple texts and benefit long-term learning in analogy to distributed practice. The assumption of beneficial effects of this distributed learning can be deduced from theories of text comprehension such as the landscape model of reading (van den Broek et al., 1996), in combination with theories of desirable difficulties in general (R. A. Bjork & Bjork, 1992) and of distributed practice in particular (Benjamin & Tullis, 2010). This dissertation aims to investigate (1) whether distributed learning benefits learning; (2) whether the amount of domain-specific prior knowledge moderates the effects of distribution; (3) whether distributed learning affects the learner’s meta-cognitive judgments in analogy to distributed practice; and (4) whether distributed practice is beneficial for seventh graders learning from a single text.
In Experiment 1, seventh graders read two complementary texts either massed or distributed by a lag of one week between the texts. Learning outcomes were measured immediately after reading the second text and one week later. Judgements of learning were assessed immediately after each text. Experiment 2 replicated the paradigm of Experiment 1 while shortening the lag between the texts in the distributed condition to 15 min. In both experiments, an interaction effect between learning condition (distributed vs. massed) and retention interval (immediate vs. delayed) was found. In the distributed condition, the participants showed no decrease in performance between the two tests, whereas participants in the massed condition did. However, no beneficial effects were found for the distributed condition in the delayed test, and the distributed condition even showed detrimental effects in the immediate test. In Experiment 1, participants in the distributed condition perceived learning as less difficult but predicted lower success than participants in the massed condition.
Experiment 3 replicated the paradigm of Experiment 1 with university students in the laboratory. In the preregistered Experiment 4, an additional retention interval of two weeks was realized. In both experiments, the same interaction between learning condition and retention interval was found. In Experiment 3, the participants in the distributed condition again showed no decrease in performance between the two tests, whereas participants in the massed condition did. However, even at the longer retention interval in Experiment 4, no beneficial effects were found for the distributed condition. Domain-specific prior knowledge was positively associated with test performance in both experiments. In Experiment 4, the participants with low prior knowledge seemed to be impaired by distributed learning, whereas no difference was found for participants with medium or high prior knowledge.
In the preregistered Experiment 5, seventh graders read a single text twice. The rereading took place either massed or distributed with a lag of one week. Immediately after rereading, judgements of learning were assessed. Learning outcomes were assessed 4 min after the second reading or one week later. Participants in the distributed condition predicted lower learning success than participants in the massed condition. An interaction effect between learning condition and retention interval was found, but no advantage for the distributed condition. Participants with low domain-specific prior knowledge showed lower performance on short-answer questions in the distributed condition than in the massed condition.
Overall, the results seem less encouraging regarding the effectiveness of distribution for learning from single and multiple texts. However, the experiments reported here can be regarded as a first step toward the realistic investigation of distribution in learning from texts.
Distributed learning is often recommended as a general learning strategy, but previous research has established its benefits mainly for learning with repeated materials. In two experiments, we investigated distributed learning with complementary text materials. Seventh graders (77 in Experiment 1, 130 in Experiment 2) read two texts either massed or distributed, with a lag of 1 week (Experiment 1) or 15 min (Experiment 2). Learning outcomes were measured immediately and 1 week later, and metacognitive judgments of learning were assessed. In Experiment 1, distributed learning was perceived as more difficult than massed learning. In both experiments, massed learning led to better outcomes immediately after learning, but these outcomes were lower after 1 week. No such decrease occurred for distributed learning, yielding similar outcomes for massed and distributed learning after 1 week. In sum, no benefits of distributed over massed learning were found, but distributed learning might reduce the decrease in learning outcomes over time.
Improving retention of learned content by means of a practice test is a learning strategy that has been researched for a century and has consistently been found to be more effective than comparable learning strategies such as restudy (i.e., the testing effect). Most importantly, practicing test questions has been found to outperform restudy even when no additional information about the correct answers was provided to practice test takers, rendering practice tests effective and efficient in fostering retention of learning content. Over the past 15 years, additional scientific attention has been devoted to this memory phenomenon, and further research has investigated to what extent practicing test questions is relevant in real-world educational settings. This dissertation first presents the evidence for testing effects in applied educational settings by discussing key publications and findings from a methodological review conducted for this purpose. It then presents theories of why practicing test questions should benefit learning in real-world educational settings even without the provision of additional information, along with key variables for the effectiveness of practicing test questions. Four studies presented in this dissertation aimed at exploring these assumptions in actual university classrooms while also trying to implement new methods of practicing learning content and thus augment course procedures. Findings from these studies, although not always consistent, are incorporated and interpreted in the light of theoretical accounts of the testing effect. The main conclusion that can be drawn from this dissertation is that, given the right circumstances, practicing test questions can elicit beneficial effects on the retention of learning content that are independent of additional information; thus, taking a practice test per se can foster retention of real-world learning content.
Examining the testing effect in university teaching: retrievability and question format matter
(2018)
Review of learned material is crucial for the learning process. One approach that promises to increase the effectiveness of reviewing during learning is to answer questions about the learning content rather than restudying the material (testing effect). This effect is well established in lab experiments. However, existing research in educational contexts has often combined testing with additional didactical measures, which hampers the interpretation of testing effects. We aimed to examine the testing effect in its pure form by implementing a minimal intervention design in a university lecture (N = 92). The last 10 min of each lecture session were used for reviewing the lecture content by either answering short-answer questions, answering multiple-choice questions, or reading summarizing statements about core lecture content. Three unannounced criterial tests measured the retention of learning content at different times (1, 12, and 23 weeks after the last lecture). A positive testing effect emerged for short-answer questions that targeted information that participants could retrieve from memory. This effect was independent of the time of test. The results indicated no testing effect for multiple-choice testing. These results suggest that short-answer testing, but not multiple-choice testing, may benefit learning in higher education contexts.
Virtual reality exposure therapy (VRET) is an effective cognitive-behavioral treatment for anxiety disorders that comprises systematic confrontations to virtual representations of feared stimuli and situations.
However, not all patients respond to VRET, and some patients relapse after successful treatment. One explanation for this limitation of VRET is that its underlying mechanisms are not yet fully understood, leaving room for further improvement.
On these grounds, the present thesis aimed to investigate two major research questions: first, it explored how virtual stimuli induce fear responses in height-fearful participants, and second, it tested if VRET outcome could be improved by incorporating techniques derived from two different theories of exposure therapy. To this end, five studies in virtual reality (VR) were conducted.
Study 1 (N = 99) established a virtual environment for height exposure using a Cave Automatic Virtual Environment (CAVE) and investigated the effects of tactile wind simulation in VR. Height-fearful and non-fearful participants climbed a virtual outlook, and half of the participants received wind simulation. Results revealed that height-fearful participants showed stronger fear responses, on both a subjective and behavioral level, and that wind simulation increased subjective fear. However, adding tactile wind simulation in VR did not affect presence, the user's sense of 'being there' in the virtual environment. Replicating previous studies, fear and presence in VR were correlated, and the correlation was higher in height-fearful compared to non-fearful participants.
Study 2 (N = 43) sought to corroborate the findings of the first study, using a different VR system for exposure (a head-mounted display) and measuring physiological fear responses. In addition, the effects of a visual cognitive distractor on fear in VR were investigated. Participants' fear responses were evident on both a subjective and physiological level (although much more pronounced in skin conductance than in heart rate), but the virtual distractor did not affect the strength of fear responses.
In Study 3 (N = 50), the effects of trait height-fearfulness and height level on fear responses were investigated in more detail. Self-rated level of acrophobia and five different height levels in VR (1 m--20 m) were used as linear predictors of subjective and physiological indices of fear. Results showed that subjective fear and skin conductance responses were a function of both trait height-fearfulness and height level, whereas no clear effects were visible for heart rate.
Study 4 (N = 64 + N = 49) aimed to advance the understanding of the relationship between presence and fear in VR. Previous research indicates a positive correlation between both measures, but possible causal mechanisms have not yet been identified. The study was the first to experimentally manipulate both presence (via the visual and auditory realism of the virtual environment) and fear (by presenting both height and control situations). Results indicated a causal effect of fear on presence, i.e., experiencing fear in a virtual environment led to a stronger sense of 'being there' in the virtual environment. Conversely, however, presence increased by higher scene realism did not affect fear responses. Nonetheless, presence seemed to have some effect on fear responding via another pathway, as participants whose presence levels were highest in the first safe context were also those who had the strongest fear responses in a later height situation. This finding indicates the importance of immersive user characteristics in the emergence of presence and fear in VR.
The findings of the first four studies were integrated into a model of fear in VR, extending previous models and highlighting factors that lead to the emergence of both fear and presence in VR. Results of the studies showed that fear responses towards virtual heights were affected by trait height-fearfulness, by phobic elements in the virtual environment, and, at least to some degree, by presence. Presence, on the other hand, was affected by experiencing fear in VR, by immersion (the characteristics of the VR system), and by immersive user characteristics. Of note, the manipulations of immersion used in the present thesis, namely visual and auditory realism of the virtual environment and tactile wind simulation, were not particularly effective in manipulating presence.
Finally, Study 5 (N = 34) compared two different implementations of VRET for acrophobia to investigate mechanisms underlying its efficacy. The first implementation followed the Emotional Processing Theory, assuming that fear reduction during exposure is crucial for positive treatment outcome. In this condition, patients were asked to focus on their fear responses and on the decline of fear (habituation) during exposures. The second implementation was based on the inhibitory learning model, assuming that expectancy violation is the primary mechanism underlying exposure therapy efficacy. In this condition, patients were asked to focus on the non-occurrence of feared outcomes (e.g., 'I could fall off') during exposure. Based on predictions of the inhibitory learning model, the hypothesis for the study was that expectancy-violation-based exposure would outperform habituation-based exposure.
After two treatment sessions in VR, both treatment conditions effectively reduced the patients' fear of heights, but the two conditions did not differ in their efficacy. The study replicated previous studies by showing that VRET is an effective treatment for acrophobia; however, contrary to the assumption, explicitly targeting the violation of threat expectancies did not improve outcome. This finding adds to other studies failing to provide clear evidence for expectancy violation as the primary mechanism underlying exposure therapy. Possible explanations for this finding and clinical implications are discussed, along with suggestions for further research.
Animal models are used to study neurobiological mechanisms in mental disorders. Although there has been significant progress in the understanding of the neurobiological underpinnings of threat-related behaviors and anxiety, little progress has been made with regard to new or improved treatments for mental disorders. A possible reason for this lack of success is the unknown predictive and cross-species translational validity of animal models used in preclinical studies. Re-translational approaches, therefore, seek to establish cross-species translational validity by identifying behavioral operations shared across species. To this end, we implemented a human open field test in virtual reality and measured behavioral indices derived from animal studies in three experiments (N = 31, N = 30, and N = 80). In addition, we investigated the associations between anxious traits and such behaviors. Results indicated a strong similarity in behavior across species, i.e., participants in our study, like rodents in animal studies, preferred to stay in the outer region of the open field, as indexed by multiple behavioral parameters. However, correlational analyses did not clearly indicate that these behaviors were a function of participants' anxious traits. We conclude that the realized virtual open field test is able to elicit thigmotaxis and thus demonstrates cross-species validity of this aspect of the test. Modulatory effects of anxiety on human open field behavior should be examined further by incorporating possible threats in the virtual scenario and/or by examining participants with higher anxiety levels or patients with anxiety disorders.
Acrophobia is characterized by intense fear in height situations. Virtual reality (VR) can be used to trigger such phobic fear, and VR exposure therapy (VRET) has proven effective for the treatment of phobias, although it remains important to further elucidate factors that modulate and mediate the fear responses triggered in VR. The present study assessed verbal and behavioral fear responses triggered by a height simulation in a 5-sided cave automatic virtual environment (CAVE) with visual and acoustic simulation and further investigated how fear responses are modulated by immersion, i.e., an additional wind simulation, and presence, i.e., the feeling of being present in the VE. Results revealed a high validity of the CAVE and VE in provoking height-related self-reported fear and avoidance behavior in accordance with a trait measure of acrophobic fear. Increasing immersion significantly increased fear responses in high height anxious (HHA) participants, but did not affect presence. Nevertheless, presence was found to be an important predictor of fear responses. We conclude that a CAVE system can be used to elicit valid fear responses, which might be further enhanced by immersion manipulations independent of presence. These results may help to improve VRET efficacy and its transfer to real situations.
Virtual reality plays an increasingly important role in research and therapy of pathological fear. However, the mechanisms by which virtual environments elicit and modify fear responses are not yet fully understood. Presence, a psychological construct referring to the ‘sense of being there’ in a virtual environment, is widely assumed to crucially influence the strength of the elicited fear responses; however, causality is still under debate. The present study is the first to experimentally manipulate both variables to unravel the causal link between presence and fear responses. Height-fearful participants (N = 49) were immersed into a virtual height situation and a neutral control situation (fear manipulation) with either high versus low sensory realism (presence manipulation). Ratings of presence and verbal and physiological (skin conductance, heart rate) fear responses were recorded. Results revealed an effect of the fear manipulation on presence, i.e., higher presence ratings in the height situation compared to the neutral control situation, but no effect of the presence manipulation on fear responses. However, the presence ratings during the first exposure to the high quality neutral environment were predictive of later fear responses in the height situation. Our findings support the hypothesis that experiencing emotional responses in a virtual environment leads to a stronger feeling of being there, i.e., increased presence. In contrast, the effects of presence on fear seem to be more complex: on the one hand, increased presence due to the quality of the virtual environment did not influence fear; on the other hand, presence variability that likely stemmed from differences in user characteristics did predict later fear responses. These findings underscore the importance of user characteristics in the emergence of presence.
A negative mood-congruent attention bias has been consistently observed, for example, in clinical studies on major depression. This bias is assumed to be dysfunctional in that it supports maintaining a sad mood, whereas a potentially adaptive role has largely been neglected. Previous experiments involving sad mood induction techniques found a negative mood-congruent attention bias specifically for young individuals, explained by an adaptive need for information transfer in the service of mood regulation. In the present study, we investigated the attentional bias in typically developing children (aged 6–12 years) when happy and sad moods were induced. Crucially, we manipulated the age (adult vs. child) of the displayed pairs of facial expressions depicting sadness, anger, fear and happiness. The results indicate that sad children indeed exhibited a mood-specific attention bias toward sad facial expressions. Additionally, this bias was more pronounced for adult faces. Results are discussed in the context of an information gain which should be stronger when looking at adult faces due to their greater life experience. These findings bear implications for both research methods and future interventions.
Gazes are of central relevance for people. They are crucial for navigating the world and communicating with others. Nevertheless, research in recent years shows that many findings from experimental research on gaze behavior cannot be transferred from the laboratory to everyday behavior. For example, the frequency with which conspecifics are looked at is considerably higher in experimental contexts than what can be observed in daily behavior. In short: laboratory findings cannot simply be translated into general statements. This thesis is dedicated to this matter. The dissertation describes and documents the current state of research on social attention through a literature review, including a meta-analysis on the /gaze cueing/ paradigm and an empirical study on the robustness of gaze-following behavior. In addition, virtual reality was used in one of the first studies in this research field. Virtual reality has the potential to significantly improve the transferability of experimental laboratory studies to everyday behavior, because the technology enables a high degree of experimental control in naturalistic research designs. As such, it has the potential to transform empirical research in the same way that the introduction of computers to psychological research did some 50 years ago. The general literature review on social attention is extended to the classic /gaze cueing/ paradigm through a systematic review of publications and a meta-analytic evaluation (Study 1). The cumulative evidence supported the findings of primary studies: covert spatial attention is directed by faces. However, the experimental factors included do not explain the surprisingly large variance in the published results. Thus, there seem to be further, not yet well-understood variables influencing these social processes. Moreover, classic /gaze cueing/ studies have limited ecological validity. This is discussed as a central reason for the lack of generalisability.
Ecological validity describes the correspondence between experimental factors and realistic situations. A stimulus or an experimental design can have high or low ecological validity on different dimensions, with correspondingly different influences on behavior. Empirical research on gaze-following behavior showed that the /gaze cueing/ effect also occurs with contextually embedded stimuli (Study 2). The contextual integration of the directional cue contrasted with classical /gaze cueing/ studies, which usually show heads in isolation. The research results can thus be transferred /within/ laboratory studies to paradigms of higher ecological validity. However, research shows that the lack of ecological validity in experimental designs significantly limits the transferability of experimental findings to complex situations /outside/ the laboratory. This seems to be particularly the case when social interactions and norms are investigated. In these studies, however, ecological validity is often also limited for other factors, such as the contextual embedding /of participants/, free exploration behavior (and, thus, attentional control), or multimodality. Study 3 is among the first to achieve high ecological validity on these factors by using virtual reality, which had not been possible in the laboratory before. Notably, the observed fixation patterns differed even under /most similar/ conditions in laboratory and natural environments. Interestingly, these differences resembled findings derived from comparisons of eye movements in laboratory and field investigations. Those earlier findings, which came from hardly comparable groups, were thus confirmed by the present Study 3, which did not have this limitation. Overall, /virtual reality/ is a new technical approach to contemporary social attention research that pushes the boundaries of previous experimental research.
The traditional trade-off between ecological validity and experimental control thus becomes obsolete, and laboratory studies can closely approximate reality. Finally, the present work describes and discusses the possibilities of this technology and its practical implementation. In this context, it is examined to what extent this development will still permit a constructive comparison of different laboratory paradigms in the future.
Face processing can be explored using electrophysiological methods. Research with event-related potentials has demonstrated the so-called face inversion effect, in which the N170 component is enhanced in amplitude and latency to inverted, compared to upright, faces. The present study explored the extent to which repetitive lower-level visual cortical engagement, reflected in flicker steady-state visual evoked potentials (ssVEPs), shows similar amplitude enhancement to face inversion. We also asked if inversion-related ssVEP modulation would be dependent on the stimulation rate at which upright and inverted faces were flickered. To this end, multiple tagging frequencies were used (5, 10, 15, and 20 Hz) across two studies (n=21, n=18). Results showed that amplitude enhancement of the ssVEP for inverted faces was found solely at higher stimulation frequencies (15 and 20 Hz). By contrast, lower frequency ssVEPs did not show this inversion effect. These findings suggest that stimulation frequency affects the sensitivity of ssVEPs to face inversion.
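The frequency-tagging logic described above can be illustrated with a minimal, hypothetical sketch (simulated data and assumed parameters, not the study's recording pipeline): a flicker response at the tagging frequency shows up as a spectral peak whose amplitude can be read out from the FFT.

```python
# Simulate 4 s of a 15 Hz steady-state response in noise and recover its
# amplitude at the tagging frequency from the FFT spectrum.
# All parameters (sampling rate, duration, noise level) are assumptions.
import numpy as np

fs, dur, tag = 500.0, 4.0, 15.0                 # sampling rate (Hz), duration (s), tag (Hz)
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * tag * t) + 0.3 * rng.normal(size=t.size)

# Single-sided amplitude spectrum: a unit sine yields ~1 at its bin.
spectrum = 2 * np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp_at_tag = spectrum[np.argmin(np.abs(freqs - tag))]
print(round(amp_at_tag, 2))                     # close to the simulated amplitude of 1
```

With a 4 s window the frequency resolution is 0.25 Hz, so 15 Hz falls exactly on a bin; in practice window length is chosen so each tagging frequency is bin-aligned.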
A variety of factors contribute to the degree to which a person feels lonely and socially isolated. These factors may be particularly relevant in contexts requiring social distancing, e.g., during the COVID-19 pandemic or in states of immunodeficiency. We present the Loneliness and Isolation during Social Distancing (LISD) Scale. Extending existing measures, the LISD scale measures both state and trait aspects of loneliness and isolation, including indicators of social connectedness and support. In addition, it reliably predicts individual differences in anxiety and depression. Data were collected online from two independent samples in a social distancing context (the COVID-19 pandemic). Factorial validation was based on exploratory factor analysis (EFA; Sample 1, N = 244) and confirmatory factor analysis (CFA; Sample 2, N = 304). Multiple regression analyses were used to assess how the LISD scale predicts state anxiety and depression. The LISD scale showed satisfactory fit in both samples. Its two state factors indicate being lonely and isolated as well as connected and supported, while its three trait factors reflect general loneliness and isolation, sociability and sense of belonging, and social closeness and support. Our results imply strong predictive power of the LISD scale for state anxiety and depression, explaining 33% and 51% of the variance, respectively. Anxiety and depression scores were particularly predicted by low dispositional sociability and sense of belonging and by currently being more lonely and isolated. In turn, being lonely and isolated was related to being less connected and supported (state) as well as having lower social closeness and support in general (trait). We provide a novel scale which distinguishes between acute and general dimensions of loneliness and social isolation while also predicting mental health. The LISD scale could be a valuable and economical addition to the assessment of mental health factors impacted by social distancing.
The extinction of conditioned fear depends on an efficient interplay between the amygdala and the medial prefrontal cortex (mPFC). In rats, high-frequency electrical mPFC stimulation has been shown to improve extinction by means of a reduction of amygdala activity. However, so far it is unclear whether stimulation of homologous regions in humans might have similar beneficial effects. Healthy volunteers received one session of either active or sham repetitive transcranial magnetic stimulation (rTMS) covering the mPFC while undergoing a 2-day fear conditioning and extinction paradigm. Repetitive TMS was applied offline after fear acquisition in which one of two faces (CS+ but not CS−) was associated with an aversive scream (UCS). Immediate extinction learning (day 1) and extinction recall (day 2) were conducted without UCS delivery. Conditioned responses (CR) were assessed in a multimodal approach using fear-potentiated startle (FPS), skin conductance responses (SCR), functional near-infrared spectroscopy (fNIRS), and self-report scales. Consistent with the hypothesis of a modulated processing of conditioned fear after high-frequency rTMS, the active group showed a reduced CS+/CS− discrimination during extinction learning as evident in FPS as well as in SCR and arousal ratings. FPS responses to CS+ further showed a linear decrement throughout both extinction sessions. This study describes the first experimental approach of influencing conditioned fear by using rTMS and can thus be a basis for future studies investigating a complementation of mPFC stimulation to cognitive behavioral therapy (CBT).
This study investigates the sense of agency (SoA) for saccades with implicit and explicit agency measures. In two eye tracking experiments, participants moved their eyes towards on-screen stimuli that subsequently changed color. Participants then either reproduced the temporal interval between saccade and color-change (Experiment 1) or reported the time points of these events with an auditory Libet clock (Experiment 2) to measure temporal binding effects as implicit indices of SoA. Participants were either made to believe to exert control over the color change or not (agency manipulation). Explicit ratings indicated that the manipulation of causal beliefs and hence agency was successful. However, temporal binding was only evident for caused effects, and only when a sufficiently sensitive procedure was used (auditory Libet clock). This suggests a weaker link between temporal binding and SoA than previously proposed. The results also provide evidence for a relatively fast acquisition of sense of agency for entirely new types of action-effect associations. This indicates that the underlying processes of action control may be rooted in more intricate and adaptable cognitive models than previously thought. Oculomotor SoA as addressed in the present study presumably represents an important cognitive foundation of gaze-based social interaction (social sense of agency) or gaze-based human-machine interaction scenarios.
Public significance statement: In this study, sense of agency for eye movements in the non-social domain is investigated in detail, using both explicit and implicit measures. Therefore, it offers novel and specific insights into comprehending sense of agency concerning effects induced by eye movements, as well as broader insights into agency pertaining to entirely newly acquired types of action-effect associations. Oculomotor sense of agency presumably represents an important cognitive foundation of gaze-based social interaction (social agency) or gaze-based human-machine interaction scenarios. Due to peculiarities of the oculomotor domain such as the varying degree of volitional control, eye movements could provide new information regarding more general theories of sense of agency in future research.
Social Cueing of Numerical Magnitude: Observed Head Orientation Influences Number Processing
(2019)
In many parts of the modern world, numbers are used as tools to describe spatial relationships, be it heights, latitudes, or distances. However, this connection goes deeper, as a myriad of studies showed that number representations are rooted in space (vertical, horizontal, and/or radial). For instance, numbers were shown to affect spatial perception and, conversely, perceptions or movements in space were shown to affect number estimations. This bidirectional link has already found didactic application in the classroom when children are taught the meaning of numbers. However, our knowledge about the cognitive (and neuropsychological) processes underlying numerical magnitude operations is still very limited.
Several authors indicated that the processing within peripersonal space (i.e. the space surrounding the body in reaching distance) and numerical magnitude operations are functionally equivalent. This assumption has several implications that the present work aims at describing. For instance, vision and visuospatial attention orienting play a prominent role for processing within peripersonal space. Indeed, both neuropsychological and behavioral studies also suggested a similar role of vision and visuospatial attention orienting for number processing. Moreover, social cognition research showed that movements, posture and gestures affect not only the representation of one's own peripersonal space, but also the visuospatial attention behavior of an observer. Against this background, the current work tests the specific implication of the functional equivalence assumption that the spatial attention response to an observed person’s posture should extend to the observer’s numerical magnitude operations.
The empirical part of the present work tests the spatial attention response of observers to vertical head postures (with continuing eye contact to the observer) in both perceptual and numerical space. Two experimental series are presented that follow both steps from the observation of another person’s vertical head orientation (within his/her peripersonal space) to the observer’s attention orienting response (Experimental Series A) as well as from there to the observer’s magnitude operations with numbers (Experimental Series B). Results show that the observation of a movement from a neutral to a vertical head orientation (Experiment 1) as well as the observation of the vertical head orientation alone (Experiment 3) shifted the observer’s spatial attention in correspondence with the direction information of the observed head (up vs. down). Movement from a vertical to a neutral end position, however, had no effect on the observer's spatial attention orienting response (Experiment 2). Furthermore, following a down-tilted head posture (relative to an up- or non-tilted head orientation), observers generated smaller numbers in a random number generation task (range 1-9, Experiment 4), gave smaller estimates to numerical trivia questions (mostly multi-digit numbers, Experiment 5), and, in a free-choice task intermixed with a numerical magnitude task, less frequently chose response keys associated with larger numerical magnitudes.
Experimental Series A served as groundwork for Experimental Series B, as it demonstrated that observing another person’s head orientation indeed triggered the expected directional attention orienting response in the observer. Based on this preliminary work, the results of Experimental Series B lend support to the assumption that numerical magnitude operations are grounded in visuospatial processing of peripersonal space. Thus, the present studies brought together numerical and social cognition as well as peripersonal space research. Moreover, the empirical part of the present work provides the basis for elaborating on the role of processing within peripersonal space in terms of Walsh’s (2003, 2013) Theory of Magnitude. In this context, a specification of the Theory of Magnitude was proposed in the form of a processing model that stresses the pivotal role of spatial attention orienting. Implications for mental magnitude operations are discussed. Possible applications in the classroom and beyond are described.
Objective
Alzheimer’s disease (AD) is a growing challenge worldwide, making the search for early predictors increasingly urgent. Longitudinal studies that track the courses of neuropsychological and other variables screen for such predictors associated with mild cognitive impairment (MCI). However, one often neglected issue in the analyses of such studies is measurement invariance (MI), which is often assumed but not tested. This study uses the absence of MI (non-MI) and latent factor scores instead of composite variables to assess properties of cognitive domains, compensation mechanisms, and their predictability, in order to establish a method for a more comprehensive understanding of pathological cognitive decline.
Methods
An exploratory factor analysis (EFA) and a set of increasingly restricted confirmatory factor analyses (CFAs) were conducted to identify latent factors, compare them with the composite approach, and test for longitudinal (partial) MI in a neuropsychiatric test battery consisting of 14 test variables. A total of 330 elderly participants (mean age: 73.78 ± 1.52 years at baseline) were assessed twice, 3 years apart.
Results
EFA revealed a four-factor model representing declarative memory, attention, working memory, and visual–spatial processing. Based on CFA, an accurate model was estimated across both measurement timepoints. Partial non-MI was found for parameters such as loadings, test- and latent factor intercepts as well as latent factor variances. The latent factor approach was preferable to the composite approach.
Conclusion
The overall assessment of non-MI latent factors may provide a promising target for this field of research. Hence, non-MI of variances indicated variables that are especially suited for the prediction of pathological cognitive decline, while non-MI of intercepts indicated general aging-related decline. As a result, the assessment of MI alone may help distinguish pathological from normative aging processes and may additionally reveal compensatory neuropsychological mechanisms.
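As a hypothetical sketch of the EFA step described above (simulated scores; the study's actual battery, software, and rotation are not specified here), a four-factor model for a 14-variable battery could be fit with scikit-learn:

```python
# Fit a four-factor EFA to a simulated 14-variable test battery (n = 330).
# All data are synthetic; this only illustrates the shape of the analysis.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_subjects, n_tests, n_factors = 330, 14, 4

# Simulate test scores driven by four latent factors plus unique noise.
latent = rng.normal(size=(n_subjects, n_factors))
true_loadings = rng.normal(size=(n_factors, n_tests))
scores = latent @ true_loadings + 0.5 * rng.normal(size=(n_subjects, n_tests))

fa = FactorAnalysis(n_components=n_factors).fit(scores)
loadings = fa.components_.T   # one row per test, one column per factor
print(loadings.shape)         # (14, 4)
```

In a longitudinal MI analysis, the CFA counterpart would then constrain these loadings (and, stepwise, intercepts and variances) to equality across the two timepoints and compare model fit at each step.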
When a key press causes a stimulus, the key press is perceived later and the stimulus earlier than key presses and stimuli presented independently. This bias in time perception has been linked to the intention to produce the effect and thus been called intentional binding (IB). In recent studies it has been shown that the IB effect is stronger when participants believed that they caused the effect stimulus compared to when they believed that another person caused the effect (Desantis et al., 2011). In this experiment we ask whether causal beliefs influence the perceived time of an effect when the putative effect occurs temporally close to another stimulus that is also an effect. In our study two participants performed the same task on connected computers with separate screens. Each trial started synchronously on both computers. When a participant pressed a key, a red and a yellow stimulus appeared as action effects simultaneously or with a slight delay of up to 50 ms. The participants’ task was to judge the temporal order of these two effect stimuli. Participants were either told that one participant caused one of the two stimuli while the other participant seated at the other computer caused the other stimulus, or each participant was told that he/she caused both stimuli. The different causal beliefs changed the perceived time of the effects’ appearance relative to each other. When participants believed they each caused one effect, their “own” effect was perceived earlier than the other participant’s effect. When the participants believed each caused both effects, no difference in the perceived temporal order of the red and yellow effect was found. These results confirm that higher order causal beliefs change the perceived time of an action effect even in a setting in which the occurrence of the putative effect can be directly compared to a reference stimulus.
Objective
Brain-computer interfaces (BCIs) provide a non-muscular communication channel for patients with late-stage motoneuron disease (e.g., amyotrophic lateral sclerosis (ALS)) or otherwise motor impaired people and are also used for motor rehabilitation in chronic stroke. Differences in the ability to use a BCI vary from person to person and from session to session. A reliable predictor of aptitude would allow for the selection of suitable BCI paradigms. For this reason, we investigated whether P300 BCI aptitude could be predicted from a short experiment with a standard auditory oddball.
Methods
Forty healthy participants performed an electroencephalography (EEG) based visual and auditory P300-BCI spelling task in a single session. In addition, prior to each session an auditory oddball was presented. Features extracted from the auditory oddball were analyzed with respect to predictive power for BCI aptitude.
Results
Correlation between auditory oddball response and P300 BCI accuracy revealed a strong relationship between accuracy and N2 amplitude and the amplitude of a late ERP component between 400 and 600 ms. Interestingly, the P3 amplitude of the auditory oddball response was not correlated with accuracy.
Conclusions
Event-related potentials recorded during a standard auditory oddball session moderately predict aptitude in an auditory P300 BCI and strongly in a visual one. The predictor will allow for faster paradigm selection.
Significance
Our method will reduce strain on patients because unsuccessful training may be avoided, provided the results can be generalized to the patient population.
Brain-computer interfaces (BCIs) provide a non-muscular communication channel for persons with severe motor impairments. Previous studies have shown that the aptitude with which a BCI can be controlled varies from person to person. A reliable predictor of performance could facilitate selection of a suitable BCI paradigm. Eleven severely motor impaired participants performed three sessions of a P300 BCI web browsing task. Before each session auditory oddball data were collected to predict the BCI aptitude of the participants exhibited in the current session. We found a strong relationship of early positive and negative potentials around 200 ms (elicited with the auditory oddball task) with performance. The amplitude of the P2 (r = −0.77) and of the N2 (r = −0.86) had the strongest correlations. Aptitude prediction using an auditory oddball was successful. The finding that the N2 amplitude is a stronger predictor of performance than P3 amplitude was reproduced after initially showing this effect with a healthy sample of BCI users. This will reduce strain on the end-users by minimizing the time needed to find suitable paradigms and inspire new approaches to improve performance.
Gaze-independent brain-computer interfaces (BCIs) are a possible communication channel for persons with paralysis. We investigated if it is possible to use auditory stimuli to create a BCI for the Japanese Hiragana syllabary, which has 46 Hiragana characters. Additionally, we investigated if training has an effect on accuracy despite the high amount of different stimuli involved. Able-bodied participants (N = 6) were asked to select 25 syllables (out of fifty possible choices) using a two-step procedure: first the consonant (ten choices), then the vowel (five choices). This was repeated on 3 separate days. Additionally, a person with spinal cord injury (SCI) participated in the experiment. Four out of six healthy participants reached Hiragana syllable accuracies above 70% and the information transfer rate increased from 1.7 bits/min in the first session to 3.2 bits/min in the third session. The accuracy of the participant with SCI increased from 12% (0.2 bits/min) to 56% (2 bits/min) in session three. Reliable selections from a 10 × 5 matrix using auditory stimuli were possible and performance is increased by training. We were able to show that auditory P300 BCIs can be used for communication with up to fifty symbols. This enables the use of the technology of auditory P300 BCIs with a variety of applications.
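Bits/min figures like those reported above are conventionally derived from the Wolpaw information transfer rate per selection, scaled by selections per minute. The abstract does not state the exact computation used, so the following is a sketch of the standard formula only:

```python
# Wolpaw information transfer rate: bits conveyed by one selection from
# n_choices alternatives at selection accuracy p (errors spread evenly
# over the remaining choices). Multiply by selections/min for bits/min.
from math import log2

def bits_per_selection(n_choices: int, p: float) -> float:
    if p >= 1.0:
        return log2(n_choices)
    if p <= 0.0:
        raise ValueError("accuracy must be positive")
    return (log2(n_choices)
            + p * log2(p)
            + (1 - p) * log2((1 - p) / (n_choices - 1)))

# e.g. 56% accuracy on a 50-symbol matrix:
print(round(bits_per_selection(50, 0.56), 2))   # → 2.18
```

Note that a two-step 10 × 5 selection carries log2(50) ≈ 5.64 bits at perfect accuracy, and that the resulting bits/min also depends on stimulation and pause durations.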
Objective: Brain-computer interfaces (BCIs) provide a non-muscular communication channel for patients with impairments of the motor system. A significant number of BCI users are unable to gain voluntary control of a BCI system within a reasonable time, which makes methods to determine a user's aptitude in advance necessary.
Methods: We hypothesized that integrity and connectivity of involved white matter connections may serve as a predictor of individual BCI-performance. Therefore, we analyzed structural data from anatomical scans and DTI of motor imagery BCI-users differentiated into high and low BCI-aptitude groups based on their overall performance.
Results: Using a machine learning classification method we identified discriminating structural brain trait features and correlated the best features with a continuous measure of individual BCI-performance. Prediction of the aptitude group of each participant was possible with near perfect accuracy (one error).
Conclusions: Tissue volumetric analysis yielded only poor classification results. In contrast, the structural integrity and myelination quality of deep white matter structures such as the Corpus Callosum, Cingulum, and Superior Fronto-Occipital Fascicle were positively correlated with individual BCI-performance.
Significance: This confirms that structural brain traits contribute to individual performance in BCI use.
Pain conditions and chronic pain disorders are among the leading reasons for seeking medical help and immensely burden patients and the healthcare system. Therefore, research on the underlying mechanisms of pain processing and modulation is necessary and warranted. One crucial part of this pain research includes identifying resilience factors that protect from chronic pain development and enhance its treatment. The ability to use emotion regulation strategies has been suggested to serve as a resilience factor, facilitating pain regulation and management. Acceptance has been discussed as a promising pain regulation strategy, but results in this domain have been mixed so far. Moreover, the allocation of acceptance in Gross’s (1998) process model of emotion regulation has been under debate. Thus, comparing acceptance with the already established strategies of distraction and reappraisal could provide insights into underlying mechanisms. This dissertation project consisted of three successive experimental studies which aimed to investigate these strategies by applying different modalities of individually adjusted pain stimuli of varying durations. In the first study (N = 29), we introduced a within-subjects design where participants were asked to either accept the short heat pain stimuli (10 s; acceptance condition) or react to them without using any pain regulation strategy (control condition). In the second study (N = 36), we extended the design of Study 1 by additionally applying brief electrical pain stimuli (20 ms) and including the new experimental condition distraction, where participants should distract themselves from the pain experience by imagining a neutral situation. In the third study (N = 121), all three strategies, acceptance, distraction, and reappraisal, were compared with each other and additionally with a neutral control condition in a mixed design.
Participants were randomly assigned to one of three strategy groups, each comprising a control condition and a strategy condition. All participants received short heat pain stimuli of 10 s, alternating with tonic heat pain stimuli of 3 minutes. In the reappraisal condition, participants were instructed to imagine the pain having a positive outcome or valence. The self-reported pain intensity, unpleasantness, and regulation ratings were measured in all studies. We further recorded the autonomic measures heart rate and skin conductance continuously and assessed the habitual emotion regulation styles and pain-related trait factors via questionnaires. Results revealed that the strategies acceptance, distraction, and reappraisal significantly reduced the self-reported electrical and heat pain stimulation with both durations compared to a neutral control condition. Additionally, regulatory efforts with acceptance in study 2 and with all strategies in study 3 were reflected by a decreased skin conductance level compared to the control condition. However, there were no significant differences between the strategies for any of the assessed variables. These findings suggest similar mechanisms underlying all three strategies, which led to the proposition of an extended process model of emotion regulation. We identified another sequence in the emotion-generative process and suggest that acceptance can flexibly affect at least four sequences in the process. Correlation analyses further indicated that the emotion regulation style did not affect regulatory success, suggesting that pain regulation strategies can be learned effectively irrespective of habitual tendencies. Moreover, we found indications that trait factors such as optimism and resilience facilitated pain regulation, especially with acceptance. Conclusively, we propose that acceptance could be flexibly used by adapting to different circumstances.
The habitual use of acceptance could therefore be considered a resilience factor. Thus, acceptance appears to be a promising and versatile strategy for preventing the development of, and improving the treatment of, various chronic pain disorders. Future studies should further examine the factors and circumstances that support effective pain regulation with acceptance.
Acceptance-based regulation of pain, which focuses on allowing pain and pain-related thoughts and emotions, has been found to modulate pain. However, results so far are inconsistent across pain modalities and indices. Moreover, many previous studies lack a suitable control condition, focus on behavioral pain measures rather than physiological correlates, and use between-subjects designs, all of which potentially impede the evaluation of the effectiveness of the strategies. Therefore, we investigated whether acceptance-based strategies can reduce subjective and physiological markers of acute pain in comparison to a control condition in a within-subjects design. To this end, participants (N = 30) completed 24 trials, each comprising 10 s of heat pain stimulation. Each trial started with a cue instructing participants to welcome and experience the pain (acceptance trials) or to react to the pain as it is without employing any regulation strategy (control trials). In addition to pain intensity and unpleasantness ratings, heart rate (HR) and skin conductance (SC) were recorded. Results showed significantly decreased pain intensity and unpleasantness ratings for acceptance compared to control trials. Additionally, HR was significantly lower during acceptance than during control trials, whereas SC revealed no significant differences. These results demonstrate the effectiveness of acceptance-based strategies in reducing subjective and physiological pain responses relative to a control condition, even after short training. Therefore, the systematic investigation of acceptance across pain modalities in healthy individuals and chronic pain patients is warranted.
Based on an embodied account of language comprehension, this study investigated the dynamic characteristics of children's and adults' perceptual simulations during sentence comprehension, using a novel paradigm to assess the perceptual simulation of objects moving up and down a vertical axis. The participants comprised adults (N = 40) and 6-, 8-, and 10-year-old children (N = 116). After listening in experimental trials to sentences implying that objects moved upward or downward, the participants were shown pictures and had to decide as quickly as possible whether the objects depicted had been mentioned in the sentences. The target pictures moved either up or down and then stopped in the middle of the screen. All age groups' reaction times were shorter when the objects moved in the direction that the sentences implied. Age exerted no developmental effect on reaction times. The findings suggest that dynamic perceptual simulations are fundamental to language comprehension in text recipients aged 6 and older.
Making judgments of learning (JOLs) after studying can directly improve learning. This JOL reactivity has been shown for simple materials but has scarcely been investigated with educationally relevant materials such as expository texts. The few existing studies have not yet reported any consistent gains in text comprehension due to providing JOLs. In the present study, we hypothesized that increasing the chances of covert retrieval attempts when making JOLs after each of five to-be-studied text passages would produce comprehension benefits at 1 week compared to restudy. In a between-subjects design, we manipulated both whether participants (N = 210) were instructed to covertly retrieve the texts, and whether they made delayed target-absent JOLs. The results indicated that delayed, target-absent JOLs did not improve text comprehension after 1 week, regardless of whether prior instructions to engage in covert retrieval were provided. Based on the two-stage model of JOLs, we reasoned that participants’ retrieval attempts during metacomprehension judgments were either insufficient (i.e., due to a quick familiarity assessment) or were ineffective (e.g., due to low retrieval success).
Emotion-motivation models propose that behaviors, including health behaviors, should be predicted by the same variables that also predict negative affect since emotional reactions should induce a motivation to avoid threatening situations. In contrast, social cognitive models propose that safety behaviors are predicted by a different set of variables that mainly reflect cognitive and socio-structural aspects. Here, we directly tested these opposing hypotheses in young adults (N = 4134) in the context of COVID-19-related safety behaviors to prevent infections. In each participant, we collected measures of negative affect as well as cognitive and socio-structural variables during the lockdown in the first infection wave in Germany. We found a negative effect of the pandemic on emotional responses. However, this was not the main predictor for young adults’ willingness to comply with COVID-19-related safety measures. Instead, individual differences in compliance were mainly predicted by cognitive and socio-structural variables. These results were confirmed in an independent data set. This study shows that individuals scoring high on negative affect during the pandemic are not necessarily more likely to comply with safety regulations. Instead, political measures should focus on cognitive interventions and the societal relevance of the health issue. These findings provide important insights into the basis of health-related concerns and feelings as well as behavioral adaptations.
The individual sensitivity to one's internal bodily signals ("interoceptive awareness") has been shown to be relevant for a broad range of cognitive and affective functions. Interoceptive awareness has primarily been assessed by measuring the sensitivity to one's cardiac signals ("cardiac awareness"), which can be measured non-invasively by heartbeat perception tasks. It is an open question whether cardiac awareness is related to the sensitivity for other visceral bodily functions. This study investigated the relationship between cardiac awareness and the sensitivity for gastric functions in healthy female participants using non-invasive methods. Heartbeat perception, as a measure of cardiac awareness, was assessed by a heartbeat tracking task, and gastric sensitivity was assessed by a water load test. Gastric myoelectrical activity was measured by electrogastrography (EGG), and subjective feelings of fullness, valence, arousal, and nausea were assessed. The results show that cardiac awareness was inversely correlated with the ingested water volume and with normogastric activity after the water load. However, persons with good and poor cardiac awareness did not differ in their subjective ratings of fullness, nausea, and affective feelings after drinking. This suggests that good heartbeat perceivers ingested less water because they felt signals of fullness more intensely at this lower amount of water intake, compared to poor heartbeat perceivers, who ingested more water before feeling the same signs of fullness. These findings demonstrate that cardiac awareness is related to greater sensitivity for gastric functions, suggesting that there is a general sensitivity for interoceptive processes across the gastric and cardiac modalities.
The present study investigated event-related brain potentials elicited by true and false negated statements to evaluate if discrimination of the truth value of negated information relies on conscious processing and requires higher-order cognitive processing in healthy subjects across different levels of stimulus complexity. The stimulus material consisted of true and false negated sentences (sentence level) and prime-target expressions (word level). Stimuli were presented acoustically and no overt behavioral response of the participants was required. Event-related brain potentials to target words preceded by true and false negated expressions were analyzed both within group and at the single subject level. Across the different processing conditions (word pairs and sentences), target words elicited a frontal negativity and a late positivity in the time window from 600–1000 msec post target word onset. Amplitudes of both brain potentials varied as a function of the truth value of the negated expressions. Results were confirmed at the single-subject level. In sum, our results support recent suggestions according to which evaluation of the truth value of a negated expression is a time- and cognitively demanding process that cannot be solved automatically, and thus requires conscious processing. Our paradigm provides insight into higher-order processing related to language comprehension and reasoning in healthy subjects. Future studies are needed to evaluate if our paradigm also proves sensitive for the detection of consciousness in non-responsive patients.
Body image disturbances are core symptoms of eating disorders (EDs). Recent evidence suggests that changes in body image may occur prior to ED onset and are not restricted to in vivo exposure (e.g., a mirror image), but are also evident during the presentation of abstract cues such as body shape- and weight-related words. In the present study, startle modulation, heart rate, and subjective evaluations were examined during the reading of body words and neutral words in 41 female student volunteers screened for risk of EDs. The aim was to determine whether responses to body words are attributable to a general negativity bias regardless of ED risk, or whether activated, ED-relevant negative body schemas facilitate priming of defensive responses. Heart rate and word ratings differed between body words and neutral words in the whole female sample, supporting a general processing bias for body weight- and shape-related concepts in young women regardless of ED risk. Startle modulation was specifically related to eating disorder symptoms, as indicated by significant positive correlations with self-reported body dissatisfaction. These results emphasize the relevance of examining body schema representations as a function of ED risk across different levels of responding. Peripheral-physiological measures such as the startle reflex could possibly be used as predictors of females' risk of developing EDs in the future.
Empirical evidence suggests that words are powerful regulators of emotion processing. Although a number of studies have used words as contextual cues for emotion processing, the role of what is being labeled by the words (i.e., one's own emotion as compared to the emotion expressed by the sender) is poorly understood. The present study reports results from two experiments which used ERP methodology to evaluate the impact of emotional faces and self- vs. sender-related emotional pronoun-noun pairs (e.g., my fear vs. his fear) as cues for emotional face processing. The influence of self- and sender-related cues on the processing of fearful, angry and happy faces was investigated in two contexts: an automatic (experiment 1) and intentional affect labeling task (experiment 2), along with control conditions of passive face processing. ERP patterns varied as a function of the label's reference (self vs. sender) and the intentionality of the labeling task (experiment 1 vs. experiment 2). In experiment 1, self-related labels increased the motivational relevance of the emotional faces in the time-window of the EPN component. Processing of sender-related labels improved emotion recognition specifically for fearful faces in the N170 time-window. Spontaneous processing of affective labels modulated later stages of face processing as well. Amplitudes of the late positive potential (LPP) were reduced for fearful, happy, and angry faces relative to the control condition of passive viewing. During intentional regulation (experiment 2) amplitudes of the LPP were enhanced for emotional faces when subjects used the self-related emotion labels to label their own emotion during face processing, and they rated the faces as higher in arousal than the emotional faces that had been presented in the “label sender's emotion” condition or the passive viewing condition. The present results argue in favor of a differentiated view of language-as-context for emotion processing.
Previous research using neuroimaging methods has proposed a link between the mechanisms controlling motor response inhibition and the suppression of unwanted memories. The present study investigated this hypothesis behaviorally by combining the think/no-think paradigm (TNT) with a go/no-go motor inhibition task. Participants first learned unpleasant cue-target pairs. Cue words were then presented as go or no-go items in the TNT. Participants' task was to respond to the cues and say the target word aloud, or to inhibit their response to the cue and keep the target word from coming to mind. Cued recall assessed immediately after the TNT revealed reduced recall performance for no-go targets compared to go targets or baseline cues not presented in the TNT. The results demonstrate that performing the no-think and no-go tasks concurrently leads to memory suppression of unpleasant items during later recall. The results are discussed in line with recent empirical research and theoretical positions.
Encoding Redundancy for Task-dependent Optimal Control : A Neural Network Model of Human Reaching
(2008)
The human motor system is adaptive in two senses. It adapts to the properties of the body to enable effective control. It also adapts to different situational requirements and constraints. This thesis proposes a new neural network model of both kinds of adaptivity for the motor cortical control of human reaching movements, called SURE_REACH (sensorimotor unsupervised learning redundancy resolving control architecture). In this neural network approach, the kinematic and sensorimotor redundancy of a three-joint planar arm is encoded in task-independent internal models by an unsupervised learning scheme. Before a movement is executed, the neural networks prepare a movement plan from the task-independent internal models, flexibly incorporating external, task-specific constraints. The movement plan is then implemented by proprioceptive or visual closed-loop control. This structure enables SURE_REACH to reach hand targets while incorporating task-specific constraints, for example adhering to kinematic constraints, anticipating the demands of subsequent movements, avoiding obstacles, or reducing the motion of impaired joints. Besides this functionality, the model accounts for temporal aspects of human reaching movements as well as for data from priming experiments. Additionally, the neural network structure reflects properties of motor cortical networks such as interdependent population-encoded body space representations, recurrent connectivity, and associative learning schemes. This thesis introduces and describes the new model, relates it to current computational models, evaluates its functionality, relates it to human behavior and neurophysiology, and finally discusses potential extensions as well as the validity of the model. In conclusion, the proposed model grounds highly flexible task-dependent behavior in a neural network framework and unsupervised sensorimotor learning.
In recent years, Ideomotor Theory has regained widespread attention and sparked the development of a number of theories on goal-directed behavior and learning. However, there are two issues with previous studies' use of Ideomotor Theory. Although Ideomotor Theory is seen as very general, it is often studied in settings that are considerably more simplistic than most natural situations. Moreover, Ideomotor Theory's claim that effect anticipations directly trigger actions and that action-effect learning is based on the formation of direct action-effect associations is hard to address empirically. We address these points from a computational perspective. A simple computational model of Ideomotor Theory was tested in tasks with different degrees of complexity. The model evaluation showed that Ideomotor Theory is a computationally feasible approach for understanding efficient action-effect learning for goal-directed behavior if the following preconditions are met: (1) the range of potential actions and effects has to be restricted; (2) effects have to follow actions within a short time window; (3) actions have to be simple and may not require sequencing. The first two preconditions also limit human performance and thus support Ideomotor Theory. The last precondition can be circumvented by extending the model with more complex, indirect action generation processes. In conclusion, we suggest that Ideomotor Theory offers a comprehensive framework to understand action-effect learning. However, we also suggest that additional processes may mediate the conversion of effect anticipations into actions in many situations.
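The core mechanism described in this abstract can be illustrated with a minimal, hypothetical sketch of direct action-effect association learning under the three preconditions named above (a restricted action/effect set, immediate effects, no sequencing). The class name, action labels, and toy environment below are illustrative assumptions, not the model evaluated in the study:

```python
import random

class IdeomotorLearner:
    """Minimal sketch of direct action-effect association learning,
    assuming discrete actions, immediate effects, and no sequencing."""

    def __init__(self, actions):
        self.actions = actions
        self.effect_to_action = {}  # direct action-effect associations

    def explore(self, environment, trials=50):
        # Exploration phase: perform random actions and store which
        # effect each action produced.
        for _ in range(trials):
            action = random.choice(self.actions)
            effect = environment(action)  # effect follows immediately
            self.effect_to_action[effect] = action

    def act_for(self, desired_effect):
        # Ideomotor action selection: anticipating an effect directly
        # retrieves the action associated with it (no search or planning).
        return self.effect_to_action.get(desired_effect)

# Toy deterministic environment: each action produces exactly one effect.
random.seed(1)
env = {"press_left": "light_on", "press_right": "tone"}
learner = IdeomotorLearner(list(env))
learner.explore(lambda a: env[a])
print(learner.act_for("tone"))  # prints "press_right"
```

In the abstract's terms, relaxing precondition (3) would require replacing the direct lookup in `act_for` with a more complex, indirect action generation process.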
Pointing is a ubiquitous means of communication. Nevertheless, observers systematically misinterpret the location indicated by pointers. We examined whether these misunderstandings result from the typically different viewpoints of pointers and observers. Participants either pointed themselves or interpreted points while assuming the pointer's or a typical observer's perspective in a virtual reality environment. The perspective had a strong effect on the relationship between pointing gestures and referents, whereas the task had only a minor influence. This suggests that misunderstandings between pointers and observers primarily result from their typically different viewpoints.
The limbic system, and especially the amygdala, has been identified as a key structure in emotion induction and regulation. Recently, research has additionally focused on the influence of prefrontal areas on emotion processing in the limbic system and the amygdala. Results from fMRI studies indicate that the prefrontal cortex (PFC) is involved not only in emotion induction but also in emotion regulation. However, studies using fNIRS have so far only reported prefrontal brain activation during emotion induction, and no attempt has yet been made to compare emotion induction and emotion regulation with regard to prefrontal activation measured with fNIRS, which would rule out the possibility that the prefrontal activation reported in fNIRS studies is mainly caused by automatic emotion regulation processes. Therefore, this work tried to distinguish emotion induction from regulation via fNIRS of the prefrontal cortex. Twenty healthy women viewed neutral pictures as a baseline condition, fearful pictures as the induction condition, and reappraised fearful pictures as the regulation condition, in randomized order. As predicted, the view-fearful condition led to higher arousal ratings than the view-neutral condition, with the reappraise-fearful condition in between. In the fNIRS results, the induction condition showed an activation of the bilateral PFC compared to the baseline condition (viewing neutral pictures). The regulation condition showed an activation only of the left PFC compared to the baseline condition, although the direct comparison between the induction and regulation conditions revealed no significant difference in brain activation. Our study therefore underscores the results of previous fNIRS studies showing prefrontal brain activation during emotion induction and rejects the hypothesis that this prefrontal activation might only be a result of automatic emotion regulation processes.
Anxiety patients over-generalize fear, possibly because of an incapacity to discriminate threat and safety signals. Discrimination trainings are promising approaches for reducing such fear over-generalization. Here we investigated the efficacy of a fear-relevant vs. a fear-irrelevant discrimination training on fear generalization and whether the effects are increased with feedback during training. Eighty participants underwent two fear acquisition blocks, during which one face (conditioned stimulus, CS+), but not another face (CS−), was associated with a female scream (unconditioned stimulus, US). During two generalization blocks, both CSs plus four morphs (generalization stimuli, GS1–GS4) were presented. Between these generalization blocks, half of the participants underwent a fear-relevant discrimination training (discrimination between CS+ and the other faces) with or without feedback and the other half a fear-irrelevant discrimination training (discrimination between the width of lines) with or without feedback. US expectancy, arousal, valence ratings, and skin conductance responses (SCR) indicated successful fear acquisition. Importantly, fear-relevant vs. fear-irrelevant discrimination trainings and feedback vs. no feedback reduced generalization as reflected in US expectancy ratings independently from one another. No effects of training condition were found for arousal and valence ratings or SCR. In summary, this is a first indication that fear-relevant discrimination training and feedback can improve the discrimination between threat and safety signals in healthy individuals, at least for learning-related evaluations, but not evaluations of valence or (physiological) arousal.
Pain-associated approach and avoidance behaviours are critically involved in the development and maintenance of chronic pain. Empirical research suggests a key role of operant learning mechanisms, and first experimental paradigms have been developed for their investigation within a controlled laboratory setting. We introduce a new Virtual Reality paradigm for the study of pain-related behaviour and investigate pain experiences on multiple dimensions. The paradigm evaluates the effects of three-tiered heat-pain stimuli applied contingent versus non-contingent with three types of arm movements in naturalistic virtual sceneries. Behaviour, self-reported pain-related fear, pain expectancy, and electrodermal activity were assessed in 42 healthy participants during an acquisition phase (contingent movement-pain association) and a modification phase (no contingent movement-pain association). Pain-associated approach behaviour, as measured by arm movements followed by a severe heat stimulus, quickly decreased in line with the arm movement-pain contingency. Slower effects were observed in fear of movement-related pain and pain expectancy ratings. During the subsequent modification phase, the removal of the pain contingencies modified all three indices. In both phases, skin conductance responses resembled the pattern observed for approach behaviour, while skin conductance levels resembled the pattern observed for the self-ratings. Our findings highlight a fast reduction in approach behaviour in the face of acute pain and inform about accompanying psychological and physiological processes. We discuss strengths and limitations of our paradigm for future investigations, with the ultimate goal of gaining a comprehensive understanding of the mechanisms involved in chronic pain development and maintenance, and its therapy.
Multitasking, defined as performing more than one task at a time, typically yields performance decrements, for instance in processing speed and accuracy. These performance costs are often distributed asymmetrically among the involved tasks. Under suitable conditions, this can be interpreted as a marker for prioritization of one task (the one that suffers less) over the other. One source of such task prioritization is the use of different effector systems (e.g., the oculomotor system, vocal tract, and limbs) and their characteristics. The present work explores such effector system-based task prioritization by examining to which extent the associated effector systems determine which task is processed with higher priority in multitasking situations. To this end, three different paradigms are used, namely the simultaneous (stimulus) onset paradigm, the psychological refractory period (PRP) paradigm, and the task switching paradigm. These paradigms invoke situations in which two (in the present studies, basic spatial decision) tasks are a) initiated at exactly the same time, b) initiated with a short, varying temporal distance (but still temporally overlapping), or c) alternated randomly (without temporal overlap). The results allow for three major conclusions: 1. The assumption of effector system-based task prioritization according to an ordinal pattern (oculomotor > pedal > vocal > manual, indicating decreasing prioritization) is supported by the observed data in the simultaneous onset paradigm. This data pattern cannot be explained by a rigid "first come, first served" task scheduling principle. 2. The data from the PRP paradigm confirmed the assumption of vocal-over-manual prioritization and showed that classic PRP effects (as a marker for task order-based prioritization) can be modulated by effector system characteristics. 3. 
The mere cognitive representation of task sets differing in effector systems (which must be held active to switch between them), without an actual temporal overlap in task processing, is, however, not sufficient to elicit the same effector system prioritization phenomena observed for overlapping tasks. In summary, the insights obtained in the present work support the assumptions of parallel central task processing and resource sharing among tasks, as opposed to exclusively serial processing of central processing stages. Moreover, they indicate that effector systems are a crucial factor in multitasking and suggest the integration of corresponding weighting parameters into existing dual-task control frameworks.
In task-switching studies, performance is typically worse in task-switch trials than in task-repetition trials. These switch costs are often asymmetrical, a phenomenon that has been explained by referring to a dominance of one task over the other. Previous studies also indicated that response modalities associated with two tasks may be considered as integral components for defining a task set. However, a systematic assessment of the role of response modalities in task switching is still lacking: Are some response modalities harder to switch to than others? The present study systematically examined switch costs when combining tasks that differ only with respect to their associated effector systems. In Experiment 1, 16 participants switched (in unpredictable sequence) between oculomotor and vocal tasks. In Experiment 2, 72 participants switched (in pairwise combinations) between oculomotor, vocal, and manual tasks. We observed systematic performance costs when switching between response modalities under otherwise constant task features and could thereby replicate previous observations of response modality switch costs. However, we did not observe any substantial switch-cost asymmetries. As previous studies using temporally overlapping dual-task paradigms found substantial prioritization effects (in terms of asymmetric costs) especially for oculomotor tasks, the present results suggest different underlying processes in sequential task switching than in simultaneous multitasking. While more research is needed to further substantiate a lack of response modality switch-cost asymmetries in a broader range of task switching situations, we suggest that task-set representations related to specific response modalities may exhibit rapid decay.
When the processing of two tasks overlaps, performance is known to suffer. In the well-established psychological refractory period (PRP) paradigm, tasks are triggered by two stimuli with a short temporal delay (stimulus onset asynchrony; SOA), thereby allowing control of the degree of task overlap. A decrease of the SOA reliably yields longer RTs in the task associated with the second stimulus (Task 2), while performance in the other task (Task 1) remains largely unaffected. This Task 2-specific SOA effect is usually interpreted in terms of central capacity limitations. Particularly, it has been assumed that response selection in Task 2 is delayed due to the allocation of less capacity until this process has been completed in Task 1. Recently, another important factor determining task prioritization has been proposed, namely the particular effector systems associated with the tasks. Here, we study both sources of task prioritization simultaneously by systematically combining three different effector systems (pairwise combinations of oculomotor, vocal, and manual responses) in the PRP paradigm. Specifically, we asked whether task order-based prioritization (the SOA effect) is modulated as a function of the Task 2 effector system. The results indicate a modulation of SOA effects when the same (oculomotor) Task 1 is combined with a vocal versus a manual Task 2. This is incompatible with the assumption that SOA effects are solely determined by the duration of Task 1 response selection. Instead, they support the view that dual-task processing bottlenecks are resolved by establishing a capacity allocation scheme fed by multiple input factors, including attentional weights associated with particular effector systems.
Endogenous Testosterone and Exogenous Oxytocin Modulate Attentional Processing of Infant Faces
(2016)
Evidence indicates that hormones modulate the intensity of maternal care. Oxytocin is known for its positive influence on maternal behavior and its important role in childbirth. In contrast, testosterone promotes egocentric choices and reduces empathy. Further, testosterone decreases during parenthood, which could be an adaptation to increased parental investment. The present study investigated the interaction between testosterone and oxytocin in attentional control and their influence on attention to baby schema in women. Higher endogenous testosterone was expected to decrease selective attention to child portraits in a face-in-the-crowd paradigm, while oxytocin was expected to counteract this effect. As predicted, women with higher salivary testosterone were slower in orienting attention to infant targets in the context of adult distractors. Interestingly, reaction times to infant and adult stimuli decreased after oxytocin administration, but only in women with high endogenous testosterone. These results suggest that oxytocin may counteract the adverse effects of testosterone on a central aspect of social behavior and maternal caretaking.
Brain-computer interfaces (BCIs) are devices that translate signals from the brain into control commands for applications. Within the last twenty years, BCI applications have been developed for communication, environmental control, entertainment, and substitution of motor functions. Since BCIs provide muscle-independent communication and control of the environment by circumventing motor pathways, they are considered assistive technologies for persons with neurological and neurodegenerative diseases leading to motor paralysis, such as amyotrophic lateral sclerosis (ALS), muscular dystrophy, spinal muscular atrophy, and stroke (Kübler, Kotchoubey, Kaiser, Wolpaw, & Birbaumer, 2001). Although most researchers mention persons with severe motor impairment as the target group for their BCI systems, most studies include healthy participants, and studies including potential BCI end-users are sparse. Thus, there is a substantial lack of studies that investigate whether results obtained in healthy participants can be transferred to patients with neurodegenerative diseases. This clearly shows that BCI research faces a translational gap between intense BCI research and bringing BCI applications to end-users outside the lab (Kübler, Mattia, Rupp, & Tangermann, 2013). Translational studies are needed that investigate whether BCIs can be successfully used by severely disabled end-users and whether those end-users would accept BCIs as assistive devices. Another obvious discrepancy exists between a plethora of short-term studies and a sparse number of long-term studies. BCI research thus also faces a reliability gap (Kübler, Mattia, et al., 2013). Most studies comprise only one BCI session; however, the few studies that include several testing sessions indicate high inter- and intra-individual variance in end-users' performance due to the non-stationarity of signals.
Long-term studies, however, are needed to demonstrate whether a BCI can be reliably used as an assistive device over a longer period of time in the daily life of a person. Therefore, there is also a great need for reliability studies.
The purpose of the present thesis was to address these research gaps and to bring BCIs closer to end-users in need, especially into their daily lives, following a user-centred design (UCD). The UCD was suggested as a theoretical framework for bringing BCIs to end-users by Kübler and colleagues (Kübler et al., 2014; Zickler et al., 2011). This approach aims at close and iterative interaction between BCI developers and end-users, with the final goal of developing BCI systems that are accepted as assistive devices by end-users. The UCD focuses on usability, that is, how well a BCI technology matches its purpose and meets the needs and requirements of the targeted end-users; it is standardized in ISO 9241-210.
Within the UCD framework, the usability of a device can be defined with regard to its effectiveness, efficiency, and satisfaction. These aspects were operationalized by Kübler and colleagues to evaluate BCI-controlled applications. As suggested by Vaughan and colleagues, the number of BCI sessions, the total usage duration, and the impact of the BCI on the life of the person can be considered indicators of the usefulness of the BCI in long-term daily-life use (Vaughan, Sellers, & Wolpaw, 2012). These definitions and metrics for usability and usefulness were applied to evaluate BCI applications as assistive devices in controlled settings and in independent use. Three different BCI applications were tested and evaluated by a total of N = 10 end-users: In Study 1, a motor imagery (MI)-based BCI for gaming was tested by four end-users with severe motor impairment. In Study 2, a hybrid P300 event-related potential (ERP)-based BCI for communication was tested by four end-users with severe motor impairment. Studies 1 and 2 were short-term studies conducted in a controlled setting. In Study 3, a P300-ERP-based BCI for creative expression was installed for long-term independent use at the homes of two end-users in the locked-in state. Both end-users are artists who had gradually lost the ability to paint after being diagnosed with ALS.
Results reveal that BCI-controlled devices are accepted as assistive devices. The main obstacles for daily-life use were the unaesthetic design of the EEG cap and electrodes (the cap is eye-catching and looks medical), low comfort (cables disturb, immobility, electrodes press against the head when lying on a cushion), complicated and time-consuming adjustment, low efficiency and effectiveness, and limited reliability (many influencing factors). Although effectiveness and efficiency of the MI-based BCI were lower compared to applications using the P300 ERP as input channel, the MI-controlled gaming application was nevertheless better accepted by the end-users, and they would rather use it than the communication applications. Thus, malfunctioning and errors, low speed, and the EEG cap are tolerated more readily in gaming applications than in communication devices. Since communication is essential for daily life, it has to be fast and reliable. BCIs for communication, at the current state of the art, are not considered competitive with other assistive devices if other devices, such as eye gaze, are still an option. However, BCIs might be an option for controlling an application for entertainment in daily life if communication is still available. The results demonstrate that a BCI is adopted in daily life if it matches the end-users' needs and requirements. Brain Painting serves as the best representative, as it matches the artists' need for creative expression. Caveats such as the uncomfortable cap, dependence on others for set-up, and experienced low control are tolerated and do not prevent BCI use on a daily basis. End-users in real need of a means of communication, such as persons in the locked-in state with unreliable eye movement or no means of independent communication, also accept the obstacles of the BCI, as it is the last or only way to communicate or control devices.
Thus, these aspects are “no real obstacles” but rather “challenges” that do not prevent end-users from using the BCI in their daily lives. For instance, one end-user, who uses a BCI in her daily life, stated: “I don’t care about aesthetic design of EEG cap and electrodes nor amplifier”. Thus, the question is not which system is superior to the other, but which system is best for an individual user with specific symptoms, needs, requirements, existing assistive solutions, support by caregivers/family, etc.; it is thereby a question of indication. These factors seem to be better “predictors” of the adoption of a BCI in daily life than common usability criteria such as effectiveness or efficiency. The face-valid measures of daily-life use demonstrate that BCI-controlled applications can be used in daily life for more than 3 years, with high satisfaction for the end-users, without experts being present, and despite a decrease in the amplitude of the P300 signal. Brain Painting re-enabled both artists to be creatively active in their home environment and thus improved their feelings of happiness, usefulness, self-esteem, and well-being, and consequently their quality of life, and supported social inclusion. This thesis suggests that BCIs are valuable tools for people in the locked-in state.
Children's information processing of risky choice alternatives was investigated in two studies without using verbal reports. In Study 1, the ability to integrate the probabilities and the payoffs of simple bets was examined using the rating scale methodology. Children's choices among three of those simple bets were also recorded. By cross-classifying the children's choice and rating behavior, it was shown that a three-stage developmental hypothesis of decision making is not sufficient. A four-stage hypothesis is proposed. In Study 2, the influence of enlarging the presented number of alternatives from two to three and the influence of the similarity of the alternatives on children's choice probabilities were examined with those bets. Children's choice behavior was probabilistic and was influenced only by enlarging the presented number of alternatives. These results suggest that a Bayesian approach, based on two probabilistic choice models, should not be applied to analyze children's choice behavior. The functional measurement approach is, as demonstrated in Study 1, a powerful instrument for furthering the understanding of the development of decision making.
The unification of two major approaches to moral judgment is the purpose of the present approach. Kohlberg's well-known stage theory assumes a sequence of discrete stages that underlie all moral judgment. Stage theory recognizes the problem of integrating considerations but offers no way to solve such integration, even with information from any one stage. And, of course, the stage concept denies any significant integration across different stages. Thus, research on moral judgment needs to study the integration problem, which can be tested within Anderson's theory of information integration. The main purpose of the present study was to extend this unificationist approach to the issue of sexual morality. A novel task presents information from two very different stages. In contrast to the discreteness assumption, the results showed that the stage informers were positively correlated in punishment judgments of both genders about consensual sex of juveniles. Furthermore, the subjects integrated considerations from those very different stages, also in contrast to the hypothesis that only a single stage is operative at any time.
In order to unify two major theories of moral judgment, a novel task is employed that combines elements of Kohlberg's stage theory and of the theory of information integration. In contrast to the format of Kohlberg's moral judgment interview, a nonverbal and quantitative response that makes low demands on verbal facility was used. Moral informers differing in value, i.e., high and low, are presented. The differences in effect of those two pieces of information should be substantial for a person at that specific moral stage, but small for a person at a different stage. Hence, these differences may diagnose the person's moral stage in the simplest possible way, as the two levels of each of the thoughts concerned typical content of the four Kohlbergian preconventional and conventional stages. The novel task additionally allowed measuring the influence of another moral concept, the non-Kohlbergian moral concept of recompense. After a training phase, pairs of those thoughts were presented to allow for the study of integration and individual differences. German and Korean children, 8, 10, and 12 years of age, judged deserved punishment. The patterns of means, correlations, and factor loadings showed that elements of both theories can be unified, but also produced unexpected results. Additive integration of each of the two pairs of moral informers appeared, either with two Kohlbergian moral informers or with one Kohlbergian moral informer in combination with information about recompense. Cultural independence as well as dependence, developmental changes between 8 and 10 years, and an outstanding moral impact of recompense in size and distinctiveness were also observed.
This paper seeks to unify two major theories of moral judgment: Kohlberg's stage theory and Anderson's moral information integration theory. Subjects were told about thoughts of actors in Kohlberg's classic altruistic Heinz dilemma and in a new egoistical dilemma. These actors' thoughts represented Kohlberg's Stages I (Personal Risk) and IV (Societal Risk) and had three levels: High, Medium, and Low. They were presented singly and in a 3 x 3 integration design. Subjects judged how many months of prison the actor deserved. The data supported the averaging model of moral integration theory, whereas Kohlberg's theory has no way to handle the integration problem. Following this, subjects ranked statements related to Kohlberg's first four stages in a procedure similar to that of Rest (1975). Higher scores went with larger effects of Societal Risk, as predicted by Kohlberg's theory. But contrary to Kohlberg's theory, no age trends were found. Also strongly contrary to Kohlberg's theory, effects of Personal Risk (Stage I) and Societal Risk (Stage IV) correlated positively.
Out of the corner of the driver's eye: Peripheral processing of hazards in static traffic scenes
(2016)
Effective gaze control in traffic, based on peripheral visual information, is important to avoid hazards. Whereas previous hazard perception research mainly focused on skill-component development (e.g., orientation and hazard processing), little is known about the role and dynamics of peripheral vision in hazard perception. We analyzed eye movement data from a study in which participants scanned static traffic scenes including medium-level versus dangerous hazards and focused on characteristics of fixations prior to entering the hazard region. We found that initial saccade amplitudes into the hazard region were substantially longer for dangerous (vs. medium-level) hazards, irrespective of participants' driving expertise. An analysis of the temporal dynamics of this hazard-level dependent saccade targeting distance effect revealed that peripheral hazard-level processing occurred around 200–400 ms during the course of the fixation prior to entering the hazard region. An additional psychophysical hazard detection experiment, in which hazard eccentricity was manipulated, revealed better detection for dangerous (vs. medium-level) hazards in both central and peripheral vision. Furthermore, we observed a significant perceptual decline from center to periphery for medium (but not for highly) dangerous hazards. Overall, the results suggest that hazard processing is remarkably effective in peripheral vision and utilized to guide the eyes toward potential hazards.
Models of eye-movement control distinguish between different control levels, ranging from automatic (bottom-up, stimulus-driven selection) and automatized (based on well-learned routines) to voluntary (top-down, goal-driven selection, e.g., based on instructions). However, one type of voluntary control has yet only been examined in the manual and not in the oculomotor domain, namely free-choice selection among arbitrary targets, that is, targets that are of equal interest from both a bottom-up and top-down processing perspective. Here, we ask which features of targets (identity- or location-related) are used to determine such oculomotor free-choice behavior. In two experiments, participants executed a saccade to one of four peripheral targets in three different choice conditions: unconstrained free choice, constrained free choice based on target identity (color), and constrained free choice based on target location. The analysis of choice frequencies revealed that unconstrained free-choice selection closely resembled constrained choice based on target location. The results suggest that free-choice oculomotor control is mainly guided by spatial (location-based) target characteristics. We explain these results by assuming that participants tend to avoid less parsimonious recoding of target-identity representations into spatial codes, the latter being a necessary prerequisite to configure oculomotor commands.
The question of how behavior is represented in the mind lies at the core of psychology as the science of mind and behavior. While a long-standing research tradition has established two opposing fundamental views of perceptual representation, Structuralism and Gestalt psychology, we test both accounts with respect to action representation: Are multiple actions (characterizing human behavior in general) represented as the sum of their component actions (Structuralist view) or holistically (Gestalt view)? Using a single-/dual-response switch paradigm, we analyzed switches between dual ([A + B]) and single ([A], [B]) responses across different effector systems and revealed comparable performance in partial repetitions and full switches of behavioral requirements (e.g., in [A + B] → [A] vs. [B] → [A], or [A] → [A + B] vs. [B] → [A + B]), but only when the presence of dimensional overlap between responses allows for Gestalt formation. This evidence for a Gestalt view of behavior in our paradigm challenges some fundamental assumptions in current (tacitly Structuralist) action control theories (in particular the idea that all actions are represented compositionally with reference to their components), provides a novel explanatory angle for understanding complex, highly synchronized human behavior (e.g., dance), and delimits the degree to which complex behavior can be analyzed in terms of its basic components.
Cognitive theories on causes of developmental dyslexia can be divided into language-specific and general accounts. While the former assume that words are special in that associated processing problems are rooted in language-related cognition (e.g., phonology) deficits, the latter propose that dyslexia is rather rooted in a general impairment of cognitive (e.g., visual and/or auditory) processing streams. In the present study, we examined to what extent dyslexia (typically characterized by poor orthographic representations) may be associated with a general deficit in visual long-term memory (LTM) for details. We compared object- and detail-related visual LTM performance (and phonological skills) between dyslexic primary school children and IQ-, age-, and gender-matched controls. The results revealed that while the overall amount of LTM errors was comparable between groups, dyslexic children exhibited a greater portion of detail-related errors. The results suggest that not only phonological, but also general visual resolution deficits in LTM may play an important role in developmental dyslexia.
The present thesis addresses the cognitive processing of voice information. Based on general theoretical concepts regarding mental processes, it will differentiate between modular, abstract information-processing approaches to cognition and interactive, embodied ideas of mental processing. These general concepts will then be transferred to the processing of voice-related information in the context of parallel face-related processing streams. One central issue here is whether and to what extent cognitive voice processing can occur independently, that is, encapsulated from the simultaneous processing of visual person-related information (and vice versa). In Study 1 (Huestegge & Raettig, in press), participants were presented with audio-visual stimuli displaying faces uttering digits.
Audiovisual gender congruency was manipulated: there were male and female faces, each uttering digits with either a male or a female voice (all stimuli were AV-synchronized). Participants were asked to categorize the gender of either the face or the voice by pressing one of two keys in each trial. A central result was that audio-visual gender congruency affected performance: incongruent stimuli were categorized more slowly and less accurately, suggesting a strong cross-modal interaction of the underlying visual and auditory processing routes. Additionally, the effect of incongruent visual information on auditory classification was stronger than the effect of incongruent auditory information on visual categorization, suggesting visual dominance over auditory processing in the context of gender classification. A gender congruency effect was also present under high cognitive load. Study 2 (Huestegge, Raettig, & Huestegge, in press) utilized the same (gender-congruent and -incongruent) stimuli, but different tasks for the participants, namely categorizing the spoken digits (into odd/even or smaller/larger than 5). This should effectively direct attention away from gender information, which was no longer task-relevant. Nevertheless, congruency effects were still observed in this study. This suggests a relatively automatic processing of cross-modal gender information, which
eventually affects basic speech-based information processing. Study 3 (Huestegge, subm.) focused on the ability of participants to match unfamiliar voices to (either static or dynamic) faces. One result was that participants were indeed able to match voices to faces. Moreover, there was no evidence for any performance increase when dynamic (vs. mere static) faces had to be matched to concurrent voices. The results support the idea that common person-related source information affects both vocal and facial features, and implicit corresponding knowledge appears to be used by participants to successfully complete face-voice matching. Taken together, the three studies (Huestegge, subm.; Huestegge & Raettig, in press; Huestegge et al., in press) provided information to further develop current theories of voice processing (in the context of face processing). On a general level, the results of all three studies are not in line with an abstract, modular view of cognition, but rather lend further support to interactive, embodied accounts of mental processing.
Anxiety patients overgeneralize fear, partly because of an inability to perceptually discriminate between threat and safety signals. Therefore, some studies have developed discrimination training that successfully reduced the occurrence of fear generalization. The present work is the first to take a treatment-like approach by applying discrimination training after generalization has occurred. To this end, two studies were conducted with healthy participants using the same fear conditioning and generalization paradigm, with two faces as conditioned stimuli (CSs) and four facial morphs between the CSs as generalization stimuli (GSs). Only one face (CS+) was followed by a loud scream (unconditioned stimulus, US). In Study 1, participants underwent either fear-relevant (discriminating faces) or fear-irrelevant discrimination training (discriminating widths of lines) or non-discriminative control training between the two generalization tests, each with or without feedback (n = 20 each). Generalization of US expectancy was reduced more effectively by fear-relevant than by fear-irrelevant discrimination training. However, neither discrimination training was more effective than the non-discriminative control training. Moreover, feedback reduced generalization of US expectancy only in discrimination training. Study 2 was designed to replicate the effects of the discrimination-training conditions in a large sample (N = 244) and to examine their benefits in individuals at risk for anxiety disorders. Again, feedback reduced fear generalization particularly well for US expectancy. Fear relevance was not confirmed to be particularly fear-reducing in healthy participants, but may enhance training effects in individuals at risk of anxiety disorders. In summary, this work provides evidence that existing fear generalization can be reduced by discrimination training, likely involving several (higher-level) processes besides perceptual discrimination (e.g., motivational mechanisms in feedback conditions).
Its use may be promising as part of individualized therapy for patients with difficulty discriminating similar stimuli.
Animals, just like humans, can freely move. They do so for various important reasons, such as finding food and escaping predators. Observing these behaviors can inform us about the underlying cognitive processes. In addition, while humans can convey complicated information easily through speaking, animals need to move their bodies to communicate. This has prompted many creative solutions by animal neuroscientists to enable studying the brain during movement. In this review, we first summarize how animal researchers record from the brain while an animal is moving, by describing the most common neural recording techniques in animals and how they were adapted to record during movement. We further discuss the challenge of controlling or monitoring sensory input during free movement.
However, free movement is not only necessary for animals to express the outcome of certain internal cognitive processes; it is also a fascinating field of research in itself, since certain crucial behavioral patterns can only be observed and studied during free movement. Therefore, in the second part of the review, we focus on key findings in animal research that specifically address the interaction between free movement and brain activity. First, focusing on walking as a fundamental form of free movement, we discuss how important such intentional movements are for understanding processes as diverse as spatial navigation, active sensing, and complex motor planning. Second, we propose the idea of regarding free movement as the expression of a behavioral state. This view can help to understand the general influence of movement on brain function.
Together, the technological advancements towards recording from the brain during movement, and the scientific questions asked about the brain engaged in movement, make animal research highly valuable to research into the human “moving brain”.
The main goals of the present thesis were to investigate how food deprivation influences food-related disgust and to identify mental mechanisms that might underlie alterations in food-related disgust. For this purpose, nine studies were conducted that employed direct and indirect measures of attitudes, biological measures of affect, as well as measures of real eating behavior and food choice, and compared the responses of deprived and non-deprived subjects on each of these measures. Spontaneous facial reactions were assessed via electromyography (EMG) and revealed that food-deprived subjects showed weaker disgust reactions than satiated participants when confronted with photographs of disgusting foods. Interestingly, deprived and non-deprived subjects evaluated disgusting foods equally negatively on a conscious level of information processing, indicating that food deprivation has the potential to attenuate food-related disgust irrespective of conscious evaluations. Furthermore, it was found that food-deprived participants readily consumed disgust-related foods (“genetically modified foods”), while satiated participants rejected those foods. Again, no difference emerged between deprived and non-deprived subjects with respect to their conscious evaluations of genetically modified foods (which were negative in both experimental groups). The dissociation between conscious evaluations and actual eating behavior that was observed among food-deprived participants resembles the dissociation between conscious evaluations and facial reactions, thereby corroborating the assumption that alterations in food-related disgust might directly influence eating behavior without changing conscious evaluations of foods. The assumption that a shift in automatic attitudes towards disgusting foods might be responsible for these effects received only partial support.
That is, there was only a nonsignificant tendency for food-deprived subjects to evaluate disgusting foods more positively than satiated subjects on an automatic level of information processing. Instead, the results of the present thesis suggest that food-deprived subjects exhibit a stronger motivation than satiated subjects to approach disgusting foods immediately. More precisely, food-deprived participants exhibited strong approach-motivational tendencies towards both palatable and disgusting foods in an “Approach-Avoidance Task”, whereas satiated participants only approached palatable (but not disgusting) foods on an automatic level of information processing. Moreover, food deprivation seems to change the subjective weighting of hedonic and functional food attributes in the context of more elaborate decisions about which foods to pick for consumption and which foods to reject. It was found that individual taste preferences were of minor importance for food-deprived subjects but very important for satiated subjects when actually choosing between several food alternatives. In contrast, functional food attributes (e.g., immediate availability of a given food, large portion size) were more important selection criteria for food-deprived subjects than for satiated subjects. Thus, food-deprived participants were less picky than satiated participants, but showed a clear preference for those food alternatives that were functional in ending a state of food deprivation quickly – even if this meant choosing a food that was not considered tasty. Taken together, the present thesis shows that physiological need states (e.g., food deprivation) are tightly linked to the affective and motivational processing of need-relevant cues. This link is so strong that food deprivation even modulates affective and motivational reactions as well as eating behavior and choice behavior towards disgusting (but need-relevant) foods.
Brain-Computer Interfaces (BCIs) strive to decode brain signals into control commands for severely handicapped people with no means of muscular control. These potential users of noninvasive BCIs display a large range of physical and mental conditions. Prior studies have shown the general applicability of BCIs with patients, with the conflict of either using many training sessions or studying only moderately restricted patients. We present a BCI system designed to establish external control for severely motor-impaired patients within a very short time. Within only six experimental sessions, three out of four patients were able to gain significant control over the BCI, which was based on motor imagery or attempted execution. For the most affected patient, we found evidence that the BCI could outperform the best assistive technology (AT) of the patient in terms of control accuracy, reaction time, and information transfer rate. We credit this success to the applied user-centered design approach and to a highly flexible technical setup. State-of-the-art machine learning methods allowed the exploitation and combination of multiple relevant features contained in the EEG, which rapidly enabled the patients to gain substantial BCI control. Thus, we could show the feasibility of a flexible and tailorable BCI application in severely disabled users. This can be considered a significant success for two reasons: firstly, the results were obtained within a short period of time, matching the tight clinical requirements; secondly, the participating patients showed, compared to most other studies, very severe communication deficits. They were dependent on everyday use of AT, and two patients were in a locked-in state. For the most affected patient, reliable communication was rarely possible with existing AT.
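The abstract above compares BCI and assistive technology in terms of the information transfer rate. As an illustration only (not code or exact metrics from the study), the widely used Wolpaw formula for this rate can be sketched as follows; the function name and parameters are hypothetical choices for this sketch.

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float, trial_duration_s: float) -> float:
    """Wolpaw information transfer rate in bits per minute.

    n_classes: number of selectable targets; accuracy: proportion of
    correct selections; trial_duration_s: seconds per selection.
    """
    if accuracy <= 1.0 / n_classes:
        return 0.0  # at or below chance level, no information is transferred
    # Bits per selection: log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
    bits = math.log2(n_classes)
    if accuracy < 1.0:
        bits += accuracy * math.log2(accuracy)
        bits += (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_classes - 1))
    # Scale from bits per trial to bits per minute
    return bits * (60.0 / trial_duration_s)

# Example: a 2-class motor imagery BCI at 90% accuracy with 5-s trials
print(round(wolpaw_itr(2, 0.9, 5.0), 2))
```

The formula illustrates why both accuracy and speed matter in the comparison: a modest drop in accuracy or a longer selection time sharply reduces the achievable rate.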
Learning with digital media has become a substantial part of formal and informal educational processes and is gaining more and more importance. Technological progress has brought overwhelming opportunities for learners, but challenges them at the same time. Learners have to regulate their learning process to a much greater extent than in traditional learning situations, in which teachers support them through external regulation. This means that learners must plan their learning process themselves, apply appropriate learning strategies, and monitor, control, and evaluate it. These requirements are taken into account in various models of self-regulated learning (SRL). Although the roots of research on SRL go back to the 1980s, the measurement and adequate support of SRL in technology-enhanced learning environments are still not solved in a satisfactory way. An important obstacle is the data sources used to operationalize SRL processes. In order to support SRL in adaptive learning systems and to validate theoretical models, instruments are needed that meet the classical quality criteria and also fulfil additional requirements. Suitable data channels must be measurable "online", i.e., they must be available in real time during learning for analyses or the individual adaptation of interventions. Researchers no longer have an interest only in the final results of questionnaires or tasks, but also need to examine process data from interactions between learners and learning environments in order to advance the development of theories and interventions. In addition, data sources should not be obtrusive, so that the learning process is not interrupted or disturbed. Measurements of physiological data, for example, require learners to wear measuring devices. Moreover, measurements should not be reactive. This means that other variables, such as learning outcomes, should not be influenced by the measurement.
Different data sources that are already used to study and support SRL processes, such as think-aloud protocols, screen recording, eye tracking, log files, video observations or physiological sensors, meet these criteria to varying degrees. One data channel that has received little attention in research on educational psychology, but is non-obtrusive, non-reactive, objective and available online, is the detailed, temporally high-resolution data on observable interactions of learners in online learning environments. This data channel is introduced in this thesis as "peripheral data". It records the content of learning environments as context, the related actions of learners triggered by mouse and keyboard, and the reactions of learning environments, such as structural or content changes. Although the above criteria for the use of the data are met, it is unclear whether these data can be interpreted reliably and validly with regard to relevant variables and behavior.
Therefore, the aim of this dissertation is to examine this data channel from the perspective of SRL and thus further close the existing research gap. One development project and four research projects were carried out and documented in this thesis.
The paper addresses the question of how to approach consciousness in unreflective actions. Unreflective actions differ from reflective, conscious actions in that the intentional description under which the agent knows what she is doing is not available or present to the agent at the moment of acting. Yet, unreflective actions belong to the field in which an agent experiences herself as capable of acting. Some unreflective actions, however, narrow this field and can be characterized by intentionality being inhibited. By studying inhibited intentionality in unreflective actions, the aim of the paper is to show how weaker forms of action urge us to expand our overall understanding of action. If we expand the field of actions such that it encompasses also some of the involuntary aspects of action, we are able to understand how unreflective actions can remain actions and do not fall under the scope of automatic behavior. With the notion of weak agency, the paper thus addresses one aspect of unreflective action, namely, “inhibited intentionality” in which an agent feels a diminished sense of authorship in relation to her possibility for self-understanding. The notion of weak agency clarifies how agency itself remains intact but can involve a process of appropriation of one’s actions as one’s own. With a diachronic account of consciousness in unreflective action, the paper accounts for possible self-understanding in cases where none seems available at the moment of action.
Flexible behavior is only possible if contingencies between one's own actions and following environmental effects are acquired as quickly as possible, and recent findings indeed point toward an immediate formation of action-effect bindings already after a single coupling of an action and its effect. The present study explored whether these short-term bindings occur for both stimulus- and goal-driven actions (“forced-choice actions” vs. “free-choice actions”). Two experiments confirmed that immediate action-effect bindings are formed for both types of actions and affect upcoming behavior. These findings support the view that action-effect binding is a ubiquitous phenomenon which occurs for any type of action.
Background: One of the most common types of brain-computer interfaces (BCIs) is called a P300 BCI, since it relies on the P300 and other event-related potentials (ERPs). In the canonical P300 BCI approach, items on a monitor flash briefly to elicit the necessary ERPs. Very recent work has shown that this approach may yield lower performance than alternate paradigms in which the items do not flash but instead change in other ways, such as moving, changing colour or changing to characters overlaid with faces.
Methodology/Principal Findings: The present study sought to extend this research direction by parametrically comparing different ways to change items in a P300 BCI. Healthy subjects used a P300 BCI across six different conditions. Three conditions were similar to our prior work, providing the first direct comparison of characters flashing, moving, and changing to faces. Three new conditions also explored facial motion and emotional expression. The six conditions were compared across objective measures such as classification accuracy and bit rate as well as subjective measures such as perceived difficulty. In line with recent studies, our results indicated that the character flash condition resulted in the lowest accuracy and bit rate. All four face conditions (mean accuracy >91%) yielded significantly better performance than the flash condition (mean accuracy = 75%).
Conclusions/Significance: Objective results reaffirmed that the face paradigm is superior to the canonical flash approach that has dominated P300 BCIs for over 20 years. The subjective reports indicated that the conditions that yielded better performance were not considered especially burdensome. Therefore, although further work is needed to identify which face paradigm is best, it is clear that the canonical flash approach should be replaced with a face paradigm when aiming at increasing bit rate. However, the face paradigm has to be further explored with practical applications particularly with locked-in patients.
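The bit rate compared across conditions above is conventionally computed with the Wolpaw information transfer rate formula, which combines the number of selectable items, the classification accuracy, and the selection speed. The following is a minimal sketch; the 36-item grid and 4 selections per minute are illustrative assumptions, not values taken from the study.

```python
import math

def wolpaw_itr(n_classes, accuracy, selections_per_min):
    """Wolpaw information transfer rate in bits per minute.

    Bits per selection: log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)).
    """
    p, n = accuracy, n_classes
    if p >= 1.0:
        bits = math.log2(n)   # perfect accuracy yields the full log2(N) bits
    elif p <= 0.0:
        bits = 0.0
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * selections_per_min

# Hypothetical 36-item speller at the two mean accuracies reported above:
itr_face = wolpaw_itr(36, 0.91, 4.0)   # face conditions (>91% accuracy)
itr_flash = wolpaw_itr(36, 0.75, 4.0)  # flash condition (75% accuracy)
```

At equal selection speed, the accuracy advantage of the face conditions translates directly into a higher bit rate, which is why accuracy and bit rate point in the same direction in the comparison above.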
Background
Medical rehabilitation can address individuals’ physical and mental health, as well as their chances of returning to work after their ability to work has been impaired.
Aim
This study investigated the developmental trends of mental and physical health among patients in medical rehabilitation and the roles of self-efficacy and physical fitness in the development of mental and physical health.
Design
A longitudinal design that included four time-point measurements across 15 months.
Setting
A medical rehabilitation center in Germany.
Population
Participants included 201 patients who were recruited from a medical rehabilitation center.
Methods
To objectively measure physical fitness (lung functioning), oxygen uptake at the anaerobic threshold (VO2AT) was used, along with several self-report scales.
Results
We found a nonlinear change in mental health among medical rehabilitation patients. The results underscored the importance of medical rehabilitation for patients’ mental health over time. In addition, patients’ physical health was stable over time. The initial level of physical fitness (VO2AT) positively predicted mental health and was associated with a more stable trajectory. Self-efficacy appeared to have a positive relationship with mental health after rehabilitation treatment.
Conclusions
This study revealed a nonlinear change in mental health among medical rehabilitation patients. Self-efficacy was positively related to mental health, and the initial level of physical fitness positively predicted the level of mental health after rehabilitation treatment.
Clinical Rehabilitation
More attention could be given to physical capacity and self-efficacy for improving and maintaining rehabilitants’ mental health.
Reading skills are among the most important basic skills in society. However, not all readers are able to adequately understand texts or decode individual words. Findings from the Progress in International Reading Literacy Study (PIRLS; German: IGLU) show that about one fifth of fourth graders can only establish coherence at the local level, and in some cases they only have a rudimentary understanding of the text they read (Bremerich-Vos et al., 2017). In addition, these reading deficits persist and have a negative impact on academic and professional success (Jimerson, 1999). Therefore, identifying the causes of these deficits and creating opportunities for interventions at an early stage is an important research objective.
The aim of this dissertation was to examine the relationship between the aspects of reading fluency and their influence on reading comprehension. Despite the increasing scientific interest in reading fluency in recent years, a research gap still exists in the relationship between word recognition accuracy and both speed and the relevance of prosodic patterns for reading comprehension.
Study 1 investigated whether German fourth graders (N = 826) had to reach a certain word-recognition accuracy threshold before their word-recognition speed improved. In addition, a sub-sample (n = 170) with a pre-/posttest design was examined to assess the extent to which existing word-recognition accuracy influences the effects of a syllable-based reading intervention on word-recognition accuracy and word-recognition speed. Results showed that word-recognition speed improved after children achieved a word-recognition accuracy of 71%. A positive intervention effect on word-recognition accuracy was also found for children who were below the 71% threshold before the intervention, whereas the intervention effect on word-recognition speed was positive for all children. However, a positive effect on reading comprehension was only found for children who were above the 71% threshold before the intervention.
Study 2 investigated the relationship between word-recognition accuracy threshold and word-recognition speed shown in the first study in a longitudinal design with German students (N = 1,095). Word-recognition accuracy and speed were assessed from the end of Grade 1 to 4, whereas reading comprehension was assessed from the end of Grade 2 to 4. The results showed that the developmental trajectories of word recognition speed and reading comprehension were steeper in children who reached the word-recognition accuracy threshold by the end of the first grade than in children who later reached or had not reached this threshold.
In Study 3, recurrence quantification analysis (RQA) was used to extract prosodic patterns from reading recordings of struggling and skilled readers in the second (n = 67) and fourth grade (n = 69), and these patterns were used to classify readers as struggling or skilled. In addition, the classification based on the prosodic patterns from the RQA was compared with a classification based on prosodic features from the manual transcription of the reading recordings. The results showed that second-grade struggling readers have lengthier pauses within or between words and take more time between pauses on average, whereas fourth-grade struggling readers spend more time between recurring stresses and show multiple diverse patterns in pitch and more recurring accents. Although the RQA-based model showed good fit and provided additional information about the relationship of prosody with reading comprehension, the model using prosodic features from transcription fit better.
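The core object behind RQA is a recurrence plot: a binary matrix marking which time points of a series revisit similar values, from which measures such as the recurrence rate are derived. A minimal sketch for a one-dimensional series is shown below; the toy "pitch contour" and the similarity radius are invented for illustration and are not the study's actual pipeline (which would also embed the signal and use further RQA measures such as determinism).

```python
import numpy as np

def recurrence_matrix(x, radius):
    """Binary recurrence plot: R[i, j] = 1 if |x[i] - x[j]| <= radius."""
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])   # pairwise distances
    return (dist <= radius).astype(int)

def recurrence_rate(r):
    """Share of recurrent point pairs, excluding the trivial main diagonal."""
    n = r.shape[0]
    off_diagonal = r.sum() - n               # the diagonal is always recurrent
    return off_diagonal / (n * n - n)

# Toy 'pitch contour' (arbitrary units): an alternating pattern recurs often.
pitch = [100, 120, 100, 120, 100, 120]
rr = recurrence_rate(recurrence_matrix(pitch, radius=5.0))
```

A reader with regularly recurring stress or pitch patterns produces a recurrence matrix with pronounced off-diagonal structure, which is what distinguishes the prosodic profiles of the two grade levels described above.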
In summary, the three studies in this dissertation provide four important insights into reading fluency in German. First, a threshold in word-recognition accuracy must be achieved before word-recognition speed improves. Second, the earlier this accuracy level is reached, the greater the gain in word-recognition speed and reading comprehension. Third, the intervention effects of a primary school reading intervention are influenced by the accuracy level. Fourth, although incorrect pauses within or between words play an important role in identifying and describing struggling readers in second grade, the importance of prosodic patterns increases in fourth grade.
Anxiety is an affective state characterized by a sustained, long-lasting defensive response, induced by unpredictable, diffuse threat. In comparison, fear is a phasic response to predictable threat. Fear can be experimentally modeled with the help of cue conditioning. Context conditioning, in which the context serves as the best predictor of a threat due to the absence of any conditioned cues, is seen as an operationalization of sustained anxiety.
This thesis used a differential context conditioning paradigm to examine sustained attention processes in a threat context compared to a safety context for the first time. In three studies, the attention mechanisms during the processing of contextual anxiety were examined by measuring heart rate responses and steady-state visually evoked potentials (ssVEPs). An additional focus was placed on the processing of social cues (i.e. faces) and the influence of contextual information on these cues. In a final step, the correlates of sustained anxiety were compared to responses evoked by phasic fear, which was realized in a previously established paradigm combining predictable and unpredictable threat.
In the first study, a contextual stimulus was associated with an aversive loud noise, while a second context remained unpaired. This conditioning paradigm created an anxiety context (CTX+) and a safety context (CTX-). After acquisition, a social agent vs. an object was presented as a distractor in both contexts. Heart rate and cortical responses, with ssVEPs by using frequency tagging, to the contexts and the distractors were assessed. Results revealed enhanced ssVEP amplitudes for the CTX+ compared to the CTX− during acquisition and during presentation of distractor stimuli. Additionally, the heart rate was accelerated in the acquisition phase, followed by a heart rate deceleration as a psychophysiological marker of contextual anxiety.
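Frequency tagging means that each stimulus flickers at a fixed rate, so the ssVEP it drives appears as a narrowband spectral peak at exactly that rate and can be separated per stimulus. The sketch below quantifies this with a plain FFT on synthetic data; the sampling rate, tagging frequency, and noise level are made up for the example and do not reflect the study's recording parameters.

```python
import numpy as np

def ssvep_amplitude(signal, fs, tag_freq):
    """Amplitude of the spectral component at the tagging frequency."""
    n = len(signal)
    amps = np.abs(np.fft.rfft(signal)) * 2 / n       # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    return amps[np.argmin(np.abs(freqs - tag_freq))]  # nearest frequency bin

rng = np.random.default_rng(0)
fs = 500                                  # Hz, assumed sampling rate
t = np.arange(0, 2, 1 / fs)               # 2 s epoch -> 0.5 Hz resolution
eeg = np.sin(2 * np.pi * 15 * t) + 0.2 * rng.standard_normal(t.size)
amp = ssvep_amplitude(eeg, fs, 15.0)      # recovers the 15 Hz driving amplitude
```

Because each context or distractor is tagged at its own frequency, the amplitude at each tag tracks how much attentional resource that stimulus receives, which is how the CTX+ versus CTX− amplification described above can be measured concurrently.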
Study 2 used the same context conditioning paradigm as Study 1. In contrast to the first study, persons with different emotional facial expressions were presented in the anxiety and safety contexts in order to compare the differential processing of these cues within periods of threat and safety. A similar anxiety response was found in the second study, although only participants who were aware of the contingency between contexts and the aversive event showed a sensory amplification of the threat context, indicated by heart rate response and ssVEP activation. All faces, irrespective of their emotional expression, received increased attentional resources when presented within the anxiety context, which suggests a general hypervigilance in anxiety contexts.
In the third study, the differentiation of predictable and unpredictable threat as an operationalization of fear and anxiety was examined on a cortical and physiological level. In the predictable condition, a social cue was paired with an aversive event, while in the unpredictable condition the aversive event remained unpaired with the respective cue. A fear response to the predictable cue was found, indicated by increased oscillatory response and accelerated heart rate. Both predictable and unpredictable threat yielded increased ssVEP amplitudes evoked by the context stimuli, while the response in the unpredictable context showed longer-lasting ssVEP activation to the threat context.
To sum up, all three studies endorsed anxiety as a long-lasting defensive response. Due to the unpredictability of the aversive events, the individuals reacted with hypervigilance in the anxiety context, reflected in a facilitated processing of sensory information and an orienting response. This hypervigilance had an impact on the processing of novel cues appearing in the anxiety context. Regarding the compared stimulus categories, the stimuli perceived in a state of anxiety received increased attentional resources, irrespective of the emotional arousal conveyed by the facial expression. Both predictable and unpredictable threat elicited sensory amplification of the contexts, while the response in the unpredictable context showed longer-lasting sensory facilitation of the threat context.
Objective: Brain-Computer Interfaces (BCI) provide a muscle-independent interaction channel, making them particularly valuable for individuals with severe motor impairment. Thus, different BCI systems and applications have been proposed as assistive technology (AT) solutions for such patients. The most prominent system for communication utilizes event-related potentials (ERP) obtained from the electroencephalogram (EEG) to allow for communication on a character-by-character basis. Yet in their current state of technology, daily life use cases of such systems are rare. In addition to the high EEG preparation effort, one of the main reasons is the low information throughput compared to other existing AT solutions. Furthermore, when testing BCI systems in patients, a performance drop is usually observed compared to healthy users. Patients often display a low signal-to-noise ratio of the recorded EEG, and detection of brain responses may be aggravated due to internally (e.g. spasm) or externally induced artifacts (e.g. from ventilation devices). Consequently, practical BCI systems need to cope with manifold inter-individual differences. Whilst these high demands lead to increasing complexity of the technology, daily life use of BCI systems requires a straightforward setup including an easy-to-use graphical user interface that nonprofessionals can handle without expert support. Research questions of this thesis: This dissertation project aimed at bringing BCI technology forward toward a possible integration into end-users' daily life. Four basic research questions were addressed: (1) Can we identify performance predictors so that we can provide users with individual BCI solutions without the need for multiple, demanding testing sessions? (2) Can we provide complex BCI technology in an automated, user-friendly and easy-to-use manner, so that BCIs can be used without expert support at end-users' homes?
(3) How can we account for and improve the low information transfer rates as compared to other existing assistive technology solutions? (4) How can we prevent the performance drop often seen when bringing BCI technology that was tested in healthy users to those with severe motor impairment? Results and discussion: (1) Heart rate variability (HRV) as an index of inhibitory control (i.e. the ability to allocate attention resources and inhibit distracting stimuli) was significantly related to ERP-BCI performance and accounted for almost 26% of variance. HRV is easy to assess from short heartbeat recordings and may thus serve as a performance predictor for ERP-BCIs. Due to missing software solutions for appropriate processing of artifacts in heartbeat data (electrocardiogram and inter-beat interval data), we developed our own tool, which is available free of charge. To date, more than 100 researchers worldwide have requested the tool. Recently, a new version was developed and released together with a website (www.artiifact.de). (2) Furthermore, a study of this thesis demonstrated that BCI technology can be incorporated into easy-to-use software, including auto-calibration and predictive text entry. Naïve, healthy nonprofessionals were able to control the software without expert support and successfully spelled words using the auto-calibrated BCI. They reported that software handling was straightforward and that they would be able to explain the system to others. However, future research is required to study transfer of the results to patient samples. (3) The commonly used ERP-BCI paradigm was significantly improved. Instead of simply highlighting visually displayed characters as is usually done, pictures of famous faces were used as stimulus material. As a result, specific brain potentials involved in face recognition and face processing were elicited.
The event-related EEG thus displayed an increased signal-to-noise ratio, which greatly facilitated the detection of ERPs. Consequently, BCI performance was significantly increased. (4) The good results of this new face-flashing paradigm achieved with healthy participants transferred well to users with neurodegenerative disease. Using a face paradigm boosted information throughput. Importantly, two users who were highly inefficient with the commonly used paradigm displayed high accuracy when exposed to the face paradigm. The increased signal-to-noise ratio of the recorded EEG thus helped them to overcome their BCI inefficiency. Significance: The work at hand (1) successfully identified a physiological predictor of ERP-BCI performance, (2) proved the technology ready to be operated by naïve nonprofessionals without expert support, (3) significantly improved the commonly used spelling paradigm and (4) thereby displayed a way to effectively prevent BCI inefficiency in patients with neurodegenerative disease. Additionally, missing software solutions for appropriate handling of artifacts in heartbeat data encouraged development of our own software tool that is available to the research community free of charge. In sum, this thesis significantly improved current BCI technology and enhanced our understanding of physiological correlates of BCI performance.
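HRV indices of the kind used above as a performance predictor are computed from series of inter-beat intervals after artifact correction. As a minimal illustration, the widely used time-domain index RMSSD is sketched below; the interval values are hypothetical and the sketch does not reproduce the artifact-handling pipeline of the tool mentioned in the abstract.

```python
import math

def rmssd(ibi_ms):
    """Root mean square of successive differences between inter-beat
    intervals (in ms), a standard time-domain HRV index."""
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical artifact-free inter-beat intervals in milliseconds:
hrv = rmssd([812, 790, 834, 805, 820])
```

Higher beat-to-beat variability yields a larger RMSSD; a perfectly regular heartbeat yields zero. Note that this is why artifact correction matters: a single ectopic beat or detection error produces one huge successive difference and inflates the index.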
Background
People with severe disabilities, e.g. due to neurodegenerative disease, depend on technology that allows for accurate wheelchair control. For those who cannot operate a wheelchair with a joystick, brain-computer interfaces (BCI) may offer a valuable option. Technology depending on visual or auditory input may not be feasible as these modalities are dedicated to processing of environmental stimuli (e.g. recognition of obstacles, ambient noise). Herein we thus validated the feasibility of a BCI based on tactually-evoked event-related potentials (ERP) for wheelchair control. Furthermore, we investigated use of a dynamic stopping method to improve speed of the tactile BCI system.
Methods
Positions of four tactile stimulators represented navigation directions (left thigh: move left; right thigh: move right; abdomen: move forward; lower neck: move backward) and N = 15 participants delivered navigation commands by focusing their attention on the desired tactile stimulus in an oddball-paradigm.
Results
Participants navigated a virtual wheelchair through a building and eleven participants successfully completed the task of reaching 4 checkpoints in the building. The virtual wheelchair was equipped with simulated shared-control sensors (collision avoidance), yet these sensors were rarely needed.
Conclusion
We conclude that most participants achieved tactile ERP-BCI control sufficient to reliably operate a wheelchair and dynamic stopping was of high value for tactile ERP classification. Finally, this paper discusses feasibility of tactile ERPs for BCI based wheelchair control.
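Dynamic stopping methods of the kind evaluated here generally accumulate classifier evidence across stimulation rounds and commit to a selection as soon as one class clearly dominates, instead of always presenting a fixed number of rounds. The following is a generic sketch of that idea, not the study's actual algorithm; the margin criterion and the per-round scores are invented for illustration.

```python
def dynamic_stopping(score_rounds, margin=1.5, max_rounds=None):
    """Accumulate per-class classifier scores over stimulation rounds and
    stop early once the best class leads the runner-up by `margin`.
    Returns (selected_class_index, rounds_used)."""
    totals = None
    for k, scores in enumerate(score_rounds, start=1):
        totals = scores if totals is None else [t + s for t, s in zip(totals, scores)]
        ranked = sorted(totals, reverse=True)
        if ranked[0] - ranked[1] >= margin or (max_rounds and k >= max_rounds):
            return totals.index(ranked[0]), k
    return totals.index(max(totals)), len(score_rounds)

# Four classes (e.g. left / right / forward / backward); each row holds
# hypothetical classifier outputs for one round of tactile stimulation.
rounds = [
    [0.2, 0.9, 0.1, 0.3],
    [0.1, 1.1, 0.2, 0.4],
    [0.3, 1.0, 0.1, 0.2],
]
choice, used = dynamic_stopping(rounds, margin=1.5)
```

Easy, unambiguous trials stop after few rounds while noisy trials keep collecting evidence, which is how dynamic stopping raises the average command rate without sacrificing the classification reliability reported above.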
This paper describes a case study with a patient in the classic locked-in state, who currently has no means of independent communication. Following a user-centered approach, we investigated event-related potentials (ERP) elicited in different modalities for use in brain-computer interface (BCI) systems. Such systems could provide her with an alternative communication channel. To investigate the most viable modality for achieving BCI based communication, classic oddball paradigms (1 rare and 1 frequent stimulus, ratio 1:5) in the visual, auditory and tactile modality were conducted (2 runs per modality). Classifiers were built on one run and tested offline on another run (and vice versa). In these paradigms, the tactile modality was clearly superior to other modalities, displaying high offline accuracy even when classification was performed on single trials only. Consequently, we tested the tactile paradigm online and the patient successfully selected targets without any error. Furthermore, we investigated use of the visual or tactile modality for different BCI systems with more than two selection options. In the visual modality, several BCI paradigms were tested offline. Neither matrix-based nor so-called gaze-independent paradigms constituted a means of control. These results may thus question the gaze-independence of current gaze-independent approaches to BCI. A tactile four-choice BCI resulted in high offline classification accuracies. Yet, online use raised various issues. Although performance was clearly above chance, practical daily life use appeared unlikely when compared to other communication approaches (e.g., partner scanning). Our results emphasize the need for user-centered design in BCI development including identification of the best stimulus modality for a particular user. 
Finally, the paper discusses feasibility of EEG-based BCI systems for patients in classic locked-in state and compares BCI to other AT solutions that we also tested during the study.
Objectives
Virtual reality exposure therapy (VRET) is a promising treatment for patients with fear of driving. The present pilot study is the first one focusing on behavioral effects of VRET on patients with fear of driving as measured by a post-treatment driving test in real traffic.
Methods
The therapy followed a standardized manual including psychotherapeutic and medical examination, two preparative psychotherapy sessions, five virtual reality exposure sessions, a final behavioral avoidance test (BAT) in real traffic, a closing session, and two follow-up phone assessments after six and twelve weeks. Virtual reality exposure was conducted in a driving simulator with a fully equipped mockup. The exposure scenarios were individually tailored to the patients’ anxiety hierarchy. A total of 14 patients were treated. Parameters on the verbal, behavioral and physiological level were assessed.
Results
The treatment was helpful to overcome driving fear and avoidance. In the final BAT, all patients mastered driving tasks they had avoided before, 71% showed an adequate driving behavior as assessed by the driving instructor, and 93% could maintain their treatment success until the second follow-up phone call. Further analyses suggest that treatment reduces avoidance behavior as well as symptoms of posttraumatic stress disorder as measured by standardized questionnaires (Avoidance and Fusion Questionnaire: p < .10, PTSD Symptom Scale–Self Report: p < .05).
Conclusions
VRET in a driving simulator is very promising for treating fear of driving. Further research with randomized controlled trials is needed to verify efficacy. Moreover, simulators with lower-end configurations should be tested to enable broad availability in psychotherapy.
Forward Collision Alarms (FCA) are intended to signal hazardous traffic situations and the need for an immediate corrective driver response. However, data from naturalistic driving studies revealed that approximately half of all alarms activated by conventional FCA systems were unnecessary alarms. In these situations, the alarm activation was correct according to the implemented algorithm, but the alarms led to no or only minimal driver responses. Psychological research can make an important contribution to understanding drivers’ needs when interacting with driver assistance systems.
The overarching objective of this thesis was to gain a systematic understanding of psychological factors and processes that influence drivers’ perceived need for assistance in potential collision situations. To elucidate under which conditions drivers perceive alarms as unnecessary, a theoretical framework of drivers’ subjective alarm evaluation was developed. A further goal was to investigate the impact of unnecessary alarms on drivers’ responses and acceptance. Four driving simulator studies were carried out to examine the outlined research questions.
In line with the hypotheses derived from the theoretical framework, the results suggest that drivers’ perceived need for assistance is determined by their retrospective subjective hazard perception. While predictions of conventional FCA systems are exclusively based on physical measurements resulting in a time to collision, human drivers additionally consider their own manoeuvre intentions and those attributed to other road users to anticipate the further course of a potentially critical situation. When drivers anticipate a dissolving outcome of a potential conflict, they perceive the situation as less hazardous than the system does. Based on this discrepancy, the system would activate an alarm while drivers’ perceived need for assistance is low. In sum, the described factors and processes cause drivers to perceive certain alarms as unnecessary. Although drivers accept unnecessary alarms less than useful alarms, unnecessary alarms do not reduce their overall system acceptance. While unnecessary alarms cause moderate driver responses in the short term, the intensity of responses decreases with multiple exposures to unnecessary alarms. Overall, however, the effects of unnecessary alarms on drivers’ alarm responses and acceptance seem to be rather uncritical.
This thesis provides insights into human factors that explain when FCAs are perceived as unnecessary. These factors might contribute to designing FCA systems tailored to drivers’ needs.
Well-developed phonological awareness skills are a core prerequisite for early literacy development. Although effective phonological awareness training programs exist, children at risk often do not reach similar levels of phonological awareness after the intervention as children with normally developed skills. Based on theoretical considerations and first promising results, the present study explores the effects of an early musical training in combination with a conventional phonological training in children with weak phonological awareness skills. Using a quasi-experimental pretest-posttest control group design and measurements across a period of 2 years, we tested the effects of two interventions: a consecutive combination of a musical and a phonological training, and a phonological training alone. The design made it possible to disentangle the effects of the musical training alone as well as the effects of its combination with the phonological training. The outcome measures of these groups were compared with those of the control group using multivariate analyses, controlling for a number of background variables. The sample included N = 424 German-speaking children aged 4–5 years at the beginning of the study. We found a positive relationship between musical abilities and phonological awareness. Yet, whereas the well-established phonological training produced the expected effects, adding a musical training did not contribute significantly to phonological awareness development. Training effects were partly dependent on the initial level of phonological awareness. Possible reasons for the lack of training effects in the musical part of the combination condition as well as practical implications for early literacy education are discussed.
In 1999, a tragic catastrophe occurred in the Mont Blanc Tunnel, one of the most important transalpine road tunnels. Twenty-seven of the victims never left their vehicles, as a result of which they were trapped in smoke and suffocated (Beard & Carvel, 2005). Immediate evacuation is crucial in tunnel fires, but many tunnel users still remain passive. During emergency situations, people strongly influence each other’s behavior (e.g. Nilsson & Johansson, 2009a). So far, only a few empirical experimental studies have investigated the interaction of individuals during emergencies. Recent developments in advanced immersive virtual worlds allow emergency situations to be simulated, which makes analogue studies possible. In the present dissertation project, theoretical aspects of human behavior and social influence (SI) in emergencies are addressed (Chapter 1). The question of SI in emergency situations is investigated in five simulation studies during different relevant stages of the evacuation process from a simulated road tunnel fire (Chapter 2). In the last part, the results are discussed and criticized (Chapter 3). Using a virtual reality (VR) road tunnel scenario, study 1 (pilot study) and study 2 investigated the effect of information about adequate behavior in tunnel emergencies as well as SI on drivers’ behavior. Based on a classic study by Darley and Latané (1968) on bystander inhibition, the effect of passive bystanders on self-evacuation was analyzed. Sixty participants were confronted with an accident and smoke in a road tunnel. The presence of bystanders and information status were manipulated, and participants were randomly assigned to four different groups. Informed participants read a brochure containing relevant information about safety behavior in emergency situations prior to the tunnel drives. In the bystander conditions, passive bystanders were situated in a car in front of the emergency situation.
Participants who had received relevant information left the car more frequently than the other participants. Neither a significant effect of bystanders nor an interaction with information status was observed on the participants' behavior. Study 3 (pilot study) examined a possible alternative explanation for weak SI in VR. Based on the Threshold Theory of Social Influence (Blascovich, 2002b) and the work of Guadagno et al. (2007), the perception of virtual humans as an avatar (a virtual representation of a real human being) or as an agent (a computer-controlled animated character) was manipulated. Subsequently, 32 participants experienced an accident similar to the one in study 1. This time, however, they were co-drivers and a virtual agent (VA) was the driver. Participants reacted differently in the avatar and agent conditions. Consequently, the manipulation of the avatar condition was implemented in study 4. In study 4, SI within the vehicle was investigated, as drivers are mostly not alone in their cars. In a tunnel scenario similar to the first study, 34 participants were confronted with an emergency situation either as drivers or as co-drivers. In the driver group, participants drove themselves while a VA sat on the passenger seat. Correspondingly, participants in the co-driver group were seated on the passenger seat while the VA drove the vehicle along a pre-recorded path. As in study 1, the tunnel was blocked by an accident, and in one drive smoke was coming from the accident. The VA initially stayed inactive after stopping the vehicle but started to evacuate after about 30 seconds. About one third of the sample left the vehicle during the situation. There were no significant differences between drivers and co-drivers regarding the frequency of leaving the vehicle, but co-drivers waited significantly longer than drivers before leaving. Study 5 looked at the pre-movement and movement phases of the evacuation process.
Forty participants were repeatedly confronted with an emergency situation in a virtual road tunnel filled with smoke. Four experimental conditions systematically varied the presence and behavior of a VA. In all but one condition, a VA was present. Across all conditions, at least 60% of the participants went to the emergency exit. If the VA went to the emergency exit, this proportion increased to 75%; if the VA went in the opposite direction, however, only 61% went to the exit. Participants confronted with a passive VA took significantly longer to start moving and to reach the emergency exit. The main and most important finding across all studies is that SI is relevant for self-evacuation, but that the degree of SI varies across the phases of the evacuation process and the situation. In addition to the core findings, relevant theoretical and methodological questions regarding the general usefulness and limitations of VR as a research tool are discussed. Finally, a short summary and an outlook on possible future studies are presented.
The present study explored how task instructions mediate the impact of action on perception. Participants saw a target object while performing finger movements. Then either the size of the target or the size of the adopted finger postures was judged. The target judgment was attracted by the adopted finger posture, indicating sensory integration of body-related and visual signals. The magnitude of integration, however, depended on how the task was initially described. It was substantially larger when the experimental instructions indicated that the finger movements and the target object related to the same event than when they suggested that the two were unrelated. This outcome highlights the role of causal inference processes in the emergence of action-specific influences on perception.
We examined whether movement costs, as defined by movement magnitude, have an impact on distance perception in near space. In Experiment 1, participants were given a numerical cue regarding the amplitude of a hand movement to be carried out. Before executing the movement, they judged the length of a visual distance. The visual distances were judged to be larger the larger the amplitude of the concurrently prepared hand movement was. In Experiment 2, in which the numerical cues were merely memorized without concurrent movement planning, this increase of judged distance with cue size was not observed. Together, these experiments indicate that visual perception of near space is specifically affected by the costs of planned hand movements.
The present study explored the origin of perceptual changes repeatedly observed in the context of actions. In Experiment 1, participants tried to hit a circular target with a stylus movement under restricted feedback conditions. We measured the perception of target size during action planning and observed larger estimates for larger movement distances. In Experiment 2, we then tested the hypothesis that this action-specific influence on perception is due to changes in the allocation of spatial attention. For this purpose, we replaced the hitting task with conditions of focused and distributed attention and measured the perception of the former target stimulus. The results revealed changes in perceived stimulus size very similar to those observed in Experiment 1, indicating that action's effects on perception are rooted in changes of spatial attention.
The present study examined the perceptual consequences of learning arbitrary mappings between visual stimuli and hand movements. Participants moved a small cursor with their unseen hand twice to a large visual target object and then judged either the relative distance of the hand movements (Exp. 1) or the relative number of dots that appeared in the two consecutive target objects (Exp. 2) using a two-alternative forced-choice method. During a learning phase, the number of dots that appeared in the target object was correlated with the hand movement distance. In Exp. 1, we observed that after participants were trained to expect many dots with larger hand movements, they judged movements made to targets with many dots as longer than the same movements made to targets with few dots. In Exp. 2, another group of participants who received the same training judged the same number of dots as smaller when larger rather than smaller hand movements were executed. When many dots were paired with smaller hand movements during the learning phase of both experiments, no significant changes in the perception of movements or of visual stimuli were observed. These results suggest that changes in the perception of body states and of external objects can arise when certain body characteristics co-occur with certain characteristics of the environment. They also indicate that the (dis)integration of multimodal perceptual signals depends not only on the physical or statistical relation between these signals, but also on which signal is currently attended.
Neuroanatomical variations across the visual field of human observers go along with corresponding variations of the perceived coarseness of visual stimuli. Here we show that horizontal gratings are perceived as having lower spatial frequency than vertical gratings when occurring along the horizontal meridian of the visual field, whereas gratings occurring along the vertical meridian show the exact opposite effect. This finding indicates a new peculiarity of processes operating along the cardinal axes of the visual field.
Changes in body perception often arise when observers are confronted with related yet discrepant multisensory signals. Some of these effects are interpreted as outcomes of sensory integration of various signals, whereas related biases are ascribed to learning-dependent recalibration of the coding of individual signals. The present study explored whether the same sensorimotor experience entails changes in body perception that are indicative of multisensory integration and those that indicate recalibration. Participants enclosed visual objects with a pair of visual cursors controlled by finger movements. Then they either judged their perceived finger posture (indicating multisensory integration) or produced a certain finger posture (indicating recalibration). An experimental variation of the size of the visual object resulted in systematic and opposite biases of the perceived and produced finger distances. This pattern of results is consistent with the assumption that multisensory integration and recalibration had a common origin in the task we used.
Previous research has revealed changes in the perception of objects due to changes of object-oriented actions. In the present study, we varied arm and finger postures in the context of a virtual reaching and grasping task and tested whether this manipulation can simultaneously affect the perceived size and distance of external objects. Participants manually controlled visual cursors, aiming at reaching and enclosing a distant target object, and judged the size and distance of this object. We observed that a visual-proprioceptive discrepancy introduced during the reaching part of the action simultaneously affected the judgments of target distance and of target size (Experiment 1). A related variation applied to the grasping part of the action affected the judgments of size, but not of distance, of the target (Experiment 2). These results indicate that perceptual effects observed in the context of actions can arise directly through sensory integration of multimodal redundant signals and indirectly through perceptual constancy mechanisms.
Action feedback affects the perception of action-related objects beyond actual action success
(2014)
Successful object-oriented action typically increases the perceived size of aimed-at target objects. This phenomenon has been assumed to reflect an impact of an actor's current action ability on visual perception. However, actual action ability and explicit knowledge of the action outcome were confounded in previous studies. The present experiments aimed at disentangling these two factors. Participants repeatedly tried to hit a circular target varying in size with a stylus movement under restricted feedback conditions. After each movement they were explicitly informed about their success in hitting the target and were then asked to judge target size. The explicit feedback regarding movement success was manipulated orthogonally to actual movement success. The results of three experiments indicated a bias of participants to judge relatively small targets as larger, and relatively large targets as smaller, after explicit feedback of failure than after explicit feedback of success. This pattern was independent of actual motor performance, suggesting that actors' evaluations of their motor actions may by themselves bias the perception of target objects.
It has been argued that several reported non-visual influences on perception cannot be truly perceptual. If they were, they should affect the perception of both target objects and the reference objects used to express perceptual judgments, and thus cancel each other out. This reasoning presumes that non-visual manipulations impact target objects and reference objects equally. In the present study, we show that equalizing a body-related manipulation between target objects and reference objects essentially abolishes the impact of that manipulation, as it should if that manipulation actually altered perception. Moreover, the manipulation has an impact on judgments when applied only to the target object but not to the reference object, and that impact reverses when it is applied only to the reference object but not to the target object. A perceptual explanation predicts this reversal, whereas explanations in terms of post-perceptual response biases or demand effects do not. Altogether, these results suggest that body-related influences on perception cannot as a whole be attributed to extra-perceptual factors.
Reading fluency is a major determinant of reading comprehension but depends on moderating factors such as auditory working memory (AWM), word recognition, and sentence reading skills. We investigated how word and sentence reading skills relate to reading comprehension differentially across the first six years of schooling and tested which reading variable best predicted teacher judgements. We conducted our research in a rather transparent orthography, namely German, drawing on two data sets. The first was derived from the normative sample of a reading comprehension test (ELFE-II), including 2056 first to sixth graders with reading tests at the word, sentence, and text level. The second sample included 114 students from second to fourth grade, who completed a series of tests measuring word and sentence reading fluency, pseudoword reading, AWM, reading comprehension, self-concept, and teacher ratings. We analysed the data via hierarchical regression analyses to predict reading comprehension and teacher judgements. The impact of reading fluency was strongest in second and third grade and was afterwards superseded by sentence comprehension. AWM contributed significantly to reading comprehension independently of reading fluency, whereas the contribution of basic decoding skills disappeared once fluency was taken into account. Students' AWM and reading comprehension predicted teacher judgements of reading fluency. Judgements of reading comprehension depended on both the students' self-concept and their reading comprehension. Our results underline that the role of word reading accuracy for reading comprehension quickly diminishes during elementary school and that teachers base their assessments mainly on current reading comprehension skill.
Approach and avoidance of positive and negative social cues are fundamental to prevent isolation and ensure survival. High trait social anxiety is characterized by an avoidance of social situations, and extensive avoidance is a risk factor for the development of social anxiety disorder (SAD). Therefore, experimental methods to assess social avoidance behavior in humans are essential. The social conditioned place preference (SCPP) paradigm is a well-established experimental paradigm in animal research that is used to objectively investigate social approach–avoidance mechanisms. We retranslated this paradigm for human research using virtual reality. To this end, 58 healthy adults were exposed to either a happy- or an angry-looking virtual agent in a specific room, and the effects of this encounter on dwell time, as well as on the evaluation of this room in a later test without an agent, were examined. We did not observe a general SCPP effect on dwell time or ratings, but discovered a moderation by trait social anxiety: participants with higher trait social anxiety spent less time in the room in which the angry agent had previously been present, suggesting that higher levels of trait social anxiety foster conditioned social avoidance. However, further studies are needed to verify this observation and to substantiate an association with social anxiety disorder. We discuss the strengths, limitations, and technical implications of our paradigm for future investigations to more comprehensively understand the mechanisms involved in social anxiety and to facilitate the development of new personalized treatment approaches using virtual reality.
It has been proposed that statistical integration of multisensory cues may be a suitable framework to explain temporal binding, that is, the finding that causally related events such as an action and its effect are perceived as shifted towards each other in time. A multisensory approach to temporal binding construes actions and effects as individual sensory signals, each perceived with a specific temporal precision. When they are integrated into one multimodal event, such as an action-effect chain, the extent to which they affect the perception of this event depends on their relative reliability. We tested whether this assumption holds true in a temporal binding task by manipulating the certainty of actions and effects. Two experiments suggest that a relatively uncertain sensory signal in such action-effect sequences is shifted more towards its counterpart than a relatively certain one. This was especially pronounced for temporal binding of the action towards its effect, but could also be shown for effect binding. Other conceptual approaches to temporal binding cannot easily explain these results, and the study therefore adds to the growing body of evidence endorsing a multisensory approach to temporal binding.
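The reliability-weighted integration that this multisensory account assumes can be illustrated with a minimal numerical sketch. This is not code or data from the study; the function name `fuse` and all variance values are illustrative assumptions, chosen only to show how an uncertain signal is pulled towards a more certain one.

```python
def fuse(t_action, var_action, t_effect, var_effect):
    """Combine two event-time estimates by weighting each with its
    reliability (inverse variance), as in maximum-likelihood cue
    integration. The fused estimate lies between the two signals,
    closer to the more reliable (lower-variance) one."""
    w_action = (1 / var_action) / (1 / var_action + 1 / var_effect)
    w_effect = 1 - w_action
    fused = w_action * t_action + w_effect * t_effect
    return fused, w_action

# Illustrative values (ms): an uncertain action signal (variance 100)
# paired with a precise effect signal (variance 25) yields a fused
# estimate shifted strongly towards the effect -- the asymmetry the
# multisensory account predicts for temporal binding.
fused_time, w = fuse(t_action=0.0, var_action=100.0,
                     t_effect=250.0, var_effect=25.0)
```

With these assumed variances the action signal receives a weight of only 0.2, so the fused time lands at 200 ms, much nearer the effect than the action.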
The etiology of emotion-related disorders such as anxiety or affective disorders is considered complex, involving an interaction of biological and environmental factors. Particular evidence has accumulated that alterations in the dopaminergic and noradrenergic systems - partly conferred by catechol-O-methyltransferase (COMT) gene variation - alterations in the adenosinergic system, and early life trauma constitute risk factors for these conditions. Applying a multi-level approach in a sample of 95 healthy adults, we investigated the effects of the functional COMT Val158Met polymorphism, of caffeine as an adenosine A2A receptor antagonist (300 mg in a placebo-controlled intervention design), and of childhood maltreatment (CTQ), as well as their interaction, on the affect-modulated startle response as a neurobiologically founded defensive reflex potentially related to fear- and distress-related disorders. The COMT val/val genotype significantly increased startle magnitude in response to unpleasant stimuli, while met/met homozygotes showed a blunted startle response to aversive pictures. Furthermore, a significant gene-environment interaction of COMT Val158Met genotype with CTQ was discerned, with more maltreatment being associated with higher startle potentiation in val/val subjects but not in met carriers. No main or interaction effects of caffeine were observed. The results indicate both a main effect and a gene-environment interaction effect of the COMT Val158Met variant and childhood maltreatment on the affect-modulated startle reflex, supporting a complex pathogenetic model of this basic neurobiological defensive reflex potentially related to anxiety and affective disorders.
People with post-stroke motor aphasia know what they would like to say but cannot express it through motor pathways due to disruption of cortical circuits. We present a theoretical background for our hypothesized connection between attention and aphasia rehabilitation and suggest why, in this context, Brain-Computer Interface (BCI) use might be beneficial for patients diagnosed with aphasia. Not only could BCI technology provide a communication tool, it might also support neuronal plasticity by activating language circuits and thereby boost aphasia recovery. However, stroke may lead to heterogeneous symptoms that could hinder BCI use, which is why the feasibility of this approach needs to be investigated first. In this pilot study, we included five participants diagnosed with post-stroke aphasia. Four participants were initially unable to use the visual P300 speller paradigm. After the paradigm was adjusted to their needs, participants successfully learned to use the speller for communication with accuracies of up to 100%. We describe the necessary adjustments to the paradigm and present future steps to further investigate this approach.
The objective of this study was to test the usability of a new auditory Brain-Computer Interface (BCI) application for communication. We introduce a word-based, intuitive auditory spelling paradigm, the WIN-speller. In the WIN-speller, letters are grouped into words, such as the word KLANG representing the letters A, G, K, L, and N. Thereby, the decoding step between perceiving a code and translating it into the stimuli it represents becomes superfluous. We tested 11 healthy volunteers and four end-users with motor impairment in copy-spelling mode. Spelling was successful with an average accuracy of 84% in the healthy sample. Three of the end-users communicated with average accuracies of 80% or higher, while one user was not able to communicate reliably. Even though further evaluation is required, the WIN-speller represents a potential alternative for BCI-based communication in end-users.
Motivation moderately influences brain–computer interface (BCI) performance in healthy subjects when monetary reward is used to manipulate extrinsic motivation. However, the motivation of severely paralyzed patients, who are potentially in need of a BCI, could be mainly internal; thus, an intrinsic motivator may be more powerful. Healthy subjects who participate in BCI studies could also be internally motivated, as they may wish to contribute to research; extrinsic motivation by monetary reward would then be less important than the content of the study. In this respect, motivation could be defined as “motivation-to-help.” The aim of this study was to investigate whether subjects with high motivation for helping and high empathy would perform better with a BCI controlled by event-related potentials (P300-BCI). We included N = 20 healthy young participants naïve to BCI and grouped them, according to their motivation for participating in a BCI study, into a low and a highly motivated group. Motivation was further manipulated with interesting or boring presentations about BCI and the possibility to help patients. Motivation for helping influenced neither BCI performance nor the P300 amplitude. Post hoc, subjects were re-grouped according to their ability for perspective taking. We found significantly higher P300 amplitudes at parietal electrodes in participants with a low ability for perspective taking, and therefore lower empathy, as compared with participants with higher empathy. The lack of an effect of motivation on BCI performance contradicts previous findings and thus requires further investigation. We speculate that subjects with higher empathy, who are good perspective takers with regard to patients in potential need of a BCI, may be more emotionally involved and therefore less able to allocate attention to the BCI task at hand.
Introduction
We investigated a slow-cortical potential (SCP) neurofeedback therapy approach for rehabilitating chronic attention deficits after stroke. This study is the first attempt to train patients who survived stroke with SCP neurofeedback therapy.
Methods
We included N = 5 participants in a within-subjects follow-up design. We assessed neuropsychological and psychological performance at baseline (4 weeks before study onset), before study onset, after neurofeedback training, and at 3 months follow-up. Participants underwent 20 sessions of SCP neurofeedback training.
Results
Participants learned to regulate SCPs toward negativity, and we found indications of improved attention after the SCP neurofeedback therapy in some participants. Quality of life, as indexed by engagement in activities of daily living, improved throughout the study. Self-reported motivation was related to mean SCP activation in two participants.
Discussion
We would like to bring attention to the potential of SCP neurofeedback therapy as a new rehabilitation method for treating post-stroke cognitive deficits. Studies with larger samples are warranted to corroborate the results.
While decades of research have investigated and technically improved brain–computer interface (BCI)-controlled applications, relatively little is known about the psychological aspects of brain–computer interfacing. In 35 healthy students, we investigated whether extrinsic motivation manipulated via monetary reward and emotional state manipulated via video and music would influence behavioral and psychophysiological measures of performance with a sensorimotor rhythm (SMR)-based BCI. We found increased task-related brain activity in extrinsically motivated (rewarded) as compared with nonmotivated participants but no clear effect of emotional state manipulation. Our experiment investigated the short-term effect of motivation and emotion manipulation in a group of young healthy subjects, and thus, the significance for patients in the locked-in state, who may be in need of a BCI, remains to be investigated.
When trying to conceal one's knowledge, various ocular changes occur. However, which cognitive mechanisms drive these changes? Do orienting or inhibition—two processes previously associated with autonomic changes—play a role? To answer this question, we used a Concealed Information Test (CIT) in which participants were either motivated to conceal (orienting + inhibition) or reveal (orienting only) their knowledge. While pupil size increased in both motivational conditions, the fixation and blink CIT effects were confined to the conceal condition. These results were mirrored in autonomic changes, with skin conductance increasing in both conditions while heart rate decreased solely under motivation to conceal. Thus, different cognitive mechanisms seem to drive ocular responses. Pupil size appears to be linked to the orienting of attention (akin to skin conductance changes), while fixations and blinks rather seem to reflect arousal inhibition (comparable to heart rate changes). This knowledge strengthens CIT theory and illuminates the relationship between ocular and autonomic activity.