The limbic system, and especially the amygdala, have been identified as key structures in emotion induction and regulation. Recently, research has additionally focused on the influence of prefrontal areas on emotion processing in the limbic system and the amygdala. Results from fMRI studies indicate that the prefrontal cortex (PFC) is involved not only in emotion induction but also in emotion regulation. However, studies using fNIRS so far only report prefrontal brain activation during emotion induction. What has been missing is an attempt to compare emotion induction and emotion regulation with regard to prefrontal activation measured with fNIRS, in order to exclude the possibility that the prefrontal brain activation reported in fNIRS studies is mainly caused by automatic emotion regulation processes. Therefore, this work attempted to distinguish emotion induction from regulation via fNIRS of the prefrontal cortex. Twenty healthy women viewed neutral pictures as a baseline condition, fearful pictures as an induction condition, and reappraised fearful pictures as a regulation condition, in randomized order. As predicted, the view-fearful condition led to higher arousal ratings than the view-neutral condition, with the reappraise-fearful condition in between. In the fNIRS results, the induction condition showed activation of the bilateral PFC compared to the baseline condition (viewing neutral). The regulation condition showed activation only of the left PFC compared to the baseline condition, although the direct comparison between the induction and regulation conditions revealed no significant difference in brain activation. Our study therefore underscores the results of previous fNIRS studies showing prefrontal brain activation during emotion induction and rejects the hypothesis that this prefrontal brain activation might only be a result of automatic emotion regulation processes.
In order to unify two major theories of moral judgment, a novel task is employed that combines elements of Kohlberg's stage theory and of the theory of information integration. In contrast to the format of Kohlberg's moral judgment interview, a nonverbal, quantitative response format that makes low demands on verbal facility was used. Moral informers differing in value, i.e., high and low, are presented. The difference in effect between those two pieces of information should be substantial for a person at that specific moral stage, but small for a person at a different stage. Hence, these differences may diagnose the person's moral stage in the simplest possible way, as the two levels of each of the thoughts reflected typical content of the four Kohlbergian preconventional and conventional stages. The novel task additionally allowed measuring the influence of another moral concept, the non-Kohlbergian concept of recompense. After a training phase, pairs of those thoughts were presented to allow for the study of integration and individual differences. German and Korean children, 8, 10, and 12 years of age, judged deserved punishment. The patterns of means, correlations, and factor loadings showed that elements of both theories can be unified, but also produced unexpected results. Additive integration of each of the two pairs of moral informers appeared, either with two Kohlbergian moral informers or with a Kohlbergian moral informer in combination with information about recompense. Cultural independence as well as dependence, developmental changes between 8 and 10 years, and an outstanding moral impact of recompense, in both size and distinctiveness, were also observed.
Background
The impact of task relevance on event-related potential amplitudes of early visual processing has previously been demonstrated. Study designs, however, differ greatly, not allowing simultaneous investigation of how both the degree of distraction and task relevance influence processing. In our study, we combined different features of previous tasks. We used a modified 1-back task in which task relevant and task irrelevant stimuli were presented alternately. The task irrelevant stimuli could be from the same category as the task relevant stimuli or from a different one, thereby producing high and low distracting task irrelevant stimuli. In addition, the paradigm comprised a passive viewing condition. Thus, our paradigm enabled us to compare the processing of task relevant stimuli, task irrelevant stimuli with differing degrees of distraction, and passively viewed stimuli. EEG data from twenty participants were collected, and mean P100 and N170 amplitudes were analyzed. Furthermore, a potential connection between stimulus processing and symptoms of attention deficit hyperactivity disorder (ADHD) was investigated.
Results
Our results show a modulation of peak N170 amplitudes by task relevance. N170 amplitudes to task relevant stimuli were significantly higher than to high distracting task irrelevant or passively viewed stimuli. In addition, amplitudes to low distracting task irrelevant stimuli were significantly higher than to high distracting stimuli. N170 amplitudes to passively viewed stimuli were not significantly different from either kind of task irrelevant stimuli. Participants with more symptoms of hyperactivity and impulsivity showed decreased N170 amplitudes across all task conditions. On a behavioral level, lower N170 enhancement efficiency was significantly correlated with false alarm responses.
Conclusions
Our results point to a processing enhancement of task relevant stimuli. Unlike P100 amplitudes, N170 amplitudes were strongly influenced by enhancement and enhancement efficiency seemed to have direct behavioral consequences. These findings have potential implications for models of clinical disorders affecting selective attention, especially ADHD.
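A mean-amplitude analysis of an ERP component such as the N170 can be sketched as averaging the signal inside a fixed time window of the averaged epoch. The sketch below uses a toy epoch, and the sampling rate and window bounds are illustrative assumptions, not the study's actual parameters.

```python
# Minimal sketch of extracting a mean ERP amplitude in a component window
# (e.g. the N170) from a single averaged epoch. Sampling rate, epoch start,
# and window are illustrative values only.

def mean_amplitude(epoch_uv, srate_hz, t0_s, win_start_s, win_end_s):
    """Average the signal between win_start_s and win_end_s (seconds relative
    to stimulus onset); t0_s is the epoch's start time relative to onset."""
    i0 = int(round((win_start_s - t0_s) * srate_hz))
    i1 = int(round((win_end_s - t0_s) * srate_hz))
    window = epoch_uv[i0:i1]
    return sum(window) / len(window)

# Toy averaged epoch: 100 samples at 100 Hz, starting 0.1 s before onset,
# with an artificial negativity around 150-250 ms post-onset.
epoch = [0.0] * 100
for i in range(25, 35):
    epoch[i] = -5.0

print(mean_amplitude(epoch, 100, -0.1, 0.15, 0.25))  # -> -5.0
```

In practice the same window would be applied to every condition and participant before amplitudes are compared statistically.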
Brain-computer interfaces (BCIs) provide a non-muscular communication channel for persons with severe motor impairments. Previous studies have shown that the aptitude with which a BCI can be controlled varies from person to person. A reliable predictor of performance could facilitate selection of a suitable BCI paradigm. Eleven severely motor impaired participants performed three sessions of a P300 BCI web browsing task. Before each session, auditory oddball data were collected to predict the BCI aptitude the participants exhibited in the current session. We found a strong relationship between early positive and negative potentials around 200 ms (elicited with the auditory oddball task) and performance. The amplitudes of the P2 (r = −0.77) and of the N2 (r = −0.86) showed the strongest correlations. Aptitude prediction using an auditory oddball was successful. The finding that the N2 amplitude is a stronger predictor of performance than the P3 amplitude was reproduced, after this effect had initially been shown with a healthy sample of BCI users. This will reduce strain on end-users by minimizing the time needed to find suitable paradigms, and it may inspire new approaches to improve performance.
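A predictor-performance relationship like the reported r = −0.86 for the N2 is a Pearson correlation between one ERP amplitude per participant and a performance score. The sketch below computes Pearson's r on invented illustrative values (larger, i.e. more negative, N2 paired with higher accuracy); the numbers are not data from the study.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical illustrative values: N2 amplitude (µV; more negative = larger N2)
# and BCI task accuracy (%) for five users -- invented for the example.
n2_amplitude = [-1.0, -2.5, -4.0, -5.5, -7.0]
accuracy = [55.0, 65.0, 78.0, 88.0, 96.0]

print(pearson_r(n2_amplitude, accuracy))  # strongly negative, close to -1
```

With real data, the sign convention matters: a negative r here means that users with larger (more negative) N2 amplitudes achieved higher accuracy.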
This paper describes a case study with a patient in the classic locked-in state, who currently has no means of independent communication. Following a user-centered approach, we investigated event-related potentials (ERPs) elicited in different modalities for use in brain-computer interface (BCI) systems. Such systems could provide her with an alternative communication channel. To identify the most viable modality for achieving BCI-based communication, classic oddball paradigms (1 rare and 1 frequent stimulus, ratio 1:5) were conducted in the visual, auditory, and tactile modalities (2 runs per modality). Classifiers were built on one run and tested offline on another run (and vice versa). In these paradigms, the tactile modality was clearly superior to the other modalities, displaying high offline accuracy even when classification was performed on single trials only. Consequently, we tested the tactile paradigm online, and the patient successfully selected targets without any error. Furthermore, we investigated use of the visual or tactile modality for BCI systems with more than two selection options. In the visual modality, several BCI paradigms were tested offline. Neither matrix-based nor so-called gaze-independent paradigms constituted a means of control. These results may thus question the gaze-independence of current gaze-independent approaches to BCI. A tactile four-choice BCI resulted in high offline classification accuracies. Yet online use raised various issues. Although performance was clearly above chance, practical daily life use appeared unlikely when compared to other communication approaches (e.g., partner scanning). Our results emphasize the need for user-centered design in BCI development, including identification of the best stimulus modality for a particular user.
Finally, the paper discusses feasibility of EEG-based BCI systems for patients in classic locked-in state and compares BCI to other AT solutions that we also tested during the study.
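The run-wise offline evaluation described above (train a classifier on one run, test on the other, then swap) can be sketched with a toy nearest-class-mean classifier. The features, numbers, and classifier here are illustrative stand-ins, not the actual analysis pipeline.

```python
# Sketch of cross-run offline evaluation: train on run A, test on run B,
# then swap. A nearest-class-mean classifier on two toy features per trial
# stands in for the real method; all values are invented for illustration.

def train(features, labels):
    """Return per-class mean feature vectors."""
    means = {}
    for cls in set(labels):
        rows = [f for f, l in zip(features, labels) if l == cls]
        means[cls] = [sum(col) / len(rows) for col in zip(*rows)]
    return means

def predict(means, trial):
    """Assign the class whose mean is closest (squared Euclidean distance)."""
    return min(means, key=lambda c: sum((a - b) ** 2 for a, b in zip(means[c], trial)))

def accuracy(means, features, labels):
    hits = sum(predict(means, f) == l for f, l in zip(features, labels))
    return hits / len(labels)

# Two toy "runs": each trial is (mean amplitude, peak latency); label 1 = target.
run1_X = [[5.0, 0.30], [4.5, 0.32], [1.0, 0.20], [0.8, 0.22]]
run1_y = [1, 1, 0, 0]
run2_X = [[4.8, 0.31], [5.2, 0.29], [1.2, 0.21], [0.9, 0.19]]
run2_y = [1, 1, 0, 0]

# Evaluate in both directions, as in the offline analysis described above.
print(accuracy(train(run1_X, run1_y), run2_X, run2_y))
print(accuracy(train(run2_X, run2_y), run1_X, run1_y))
```

Evaluating across runs rather than within a run guards against overfitting to run-specific noise, which is why the abstract's offline accuracies are a meaningful predictor of online use.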
Empirical evidence suggests that words are powerful regulators of emotion processing. Although a number of studies have used words as contextual cues for emotion processing, the role of what is being labeled by the words (i.e., one's own emotion as compared to the emotion expressed by the sender) is poorly understood. The present study reports results from two experiments which used ERP methodology to evaluate the impact of emotional faces and self- vs. sender-related emotional pronoun-noun pairs (e.g., my fear vs. his fear) as cues for emotional face processing. The influence of self- and sender-related cues on the processing of fearful, angry and happy faces was investigated in two contexts: an automatic (experiment 1) and intentional affect labeling task (experiment 2), along with control conditions of passive face processing. ERP patterns varied as a function of the label's reference (self vs. sender) and the intentionality of the labeling task (experiment 1 vs. experiment 2). In experiment 1, self-related labels increased the motivational relevance of the emotional faces in the time-window of the EPN component. Processing of sender-related labels improved emotion recognition specifically for fearful faces in the N170 time-window. Spontaneous processing of affective labels modulated later stages of face processing as well. Amplitudes of the late positive potential (LPP) were reduced for fearful, happy, and angry faces relative to the control condition of passive viewing. During intentional regulation (experiment 2) amplitudes of the LPP were enhanced for emotional faces when subjects used the self-related emotion labels to label their own emotion during face processing, and they rated the faces as higher in arousal than the emotional faces that had been presented in the “label sender's emotion” condition or the passive viewing condition. The present results argue in favor of a differentiated view of language-as-context for emotion processing.
Introduction
There is mounting evidence for the influence of emotional content on working memory performance. This is particularly important in light of the emotion processing that needs to take place when emotional content interferes with executive functions. In this study, we used emotional words of different valence but with similar arousal levels in an n-back task.
Methods
We examined the effects on activation in the prefrontal cortex by means of functional near-infrared spectroscopy (fNIRS) and on the late positive potential (LPP). fNIRS and LPP data were examined in 30 healthy subjects.
Results
Behavioral results show an influence of valence on the error rate depending on the difficulty of the task: more errors were made when the valence was negative and the task difficult. Brain activation depended on both the difficulty of the task and the valence: as difficulty increased, negative word valence diminished the increase in activation, whereas positive valence did not influence it. The LPP also differentiated between the valences and was additionally influenced by task difficulty: the more difficult the task, the less differentiation could be observed.
Conclusions
In summary, this study shows the influence of valence on a verbal working memory task. When a word carried a negative valence, its emotional content seemed to take precedence, in contrast to words carrying a positive valence. Working memory and emotion processing sites seemed to overlap and compete for resources even when words were the carriers of the emotional content.
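The n-back task used above has a simple underlying rule: an item is a target when it matches the item presented n positions earlier. The sketch below scores targets and response errors for a toy word sequence; the words and responses are invented for illustration.

```python
# Minimal sketch of n-back target scoring (not the study's software):
# an item is a target when it matches the item shown n positions earlier.

def nback_targets(sequence, n):
    """Return a boolean list marking which items are n-back targets."""
    return [i >= n and sequence[i] == sequence[i - n] for i in range(len(sequence))]

def error_rate(responses, sequence, n):
    """Fraction of trials where the response disagrees with the target status."""
    targets = nback_targets(sequence, n)
    errors = sum(r != t for r, t in zip(responses, targets))
    return errors / len(sequence)

# Toy 2-back word sequence (emotional words are just illustrative labels).
words = ["fear", "table", "fear", "joy", "table", "joy"]
print(nback_targets(words, 2))  # -> [False, False, True, False, False, True]
```

Raising n increases working memory load, which is the "difficulty of the task" manipulation that interacted with valence in the results above.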
Objective: Brain-computer interfaces (BCIs) provide a non-muscular communication channel for patients with impairments of the motor system. A significant number of BCI users are unable to obtain voluntary control of a BCI system within a reasonable amount of time. This makes methods for determining a user's aptitude necessary.
Methods: We hypothesized that the integrity and connectivity of the involved white matter connections may serve as a predictor of individual BCI performance. Therefore, we analyzed structural data from anatomical scans and diffusion tensor imaging (DTI) of motor imagery BCI users, differentiated into high- and low-aptitude groups based on their overall performance.
Results: Using a machine learning classification method, we identified discriminating structural brain trait features and correlated the best features with a continuous measure of individual BCI performance. Prediction of each participant's aptitude group was possible with near-perfect accuracy (one error).
Conclusions: Tissue volumetric analysis yielded only poor classification results. In contrast, the structural integrity and myelination quality of deep white matter structures such as the corpus callosum, cingulum, and superior fronto-occipital fasciculus were positively correlated with individual BCI performance.
Significance: This confirms that structural brain traits contribute to individual performance in BCI use.
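Group prediction accuracy of the kind reported above is typically estimated with leave-one-out evaluation: each participant is classified by a model that never saw that participant. The sketch below uses a toy 1-nearest-neighbor classifier on invented white-matter-like feature values; it is not the study's pipeline.

```python
# Sketch of leave-one-out classification for separating high- vs low-aptitude
# groups from structural features. A 1-nearest-neighbor rule on invented
# values stands in for the actual machine learning method.

def loo_errors(features, labels):
    """Leave-one-out: classify each sample by its nearest *other* sample,
    and count how many predictions disagree with the true label."""
    errors = 0
    for i, f in enumerate(features):
        others = [(sum((a - b) ** 2 for a, b in zip(f, g)), labels[j])
                  for j, g in enumerate(features) if j != i]
        _, predicted = min(others)
        errors += predicted != labels[i]
    return errors

# Toy features per participant (e.g. fractional-anisotropy-like values
# for two white matter tracts) -- invented for illustration.
X = [[0.70, 0.65], [0.72, 0.66], [0.69, 0.64],
     [0.55, 0.50], [0.53, 0.52], [0.56, 0.49]]
y = ["high", "high", "high", "low", "low", "low"]

print(loo_errors(X, y))  # -> 0
```

Counting errors over held-out participants is what makes a result like "near-perfect accuracy (one error)" a fair estimate rather than an artifact of fitting the training data.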
The serotonin (5-HT) and neuropeptide S (NPS) systems are discussed as important genetic modulators of fear and sustained anxiety contributing to the etiology of anxiety disorders. Sustained anxiety is a crucial characteristic of most anxiety disorders and likely develops through contextual fear conditioning. This study investigated whether and how genetic alterations of the 5-HT and NPS systems, as well as their interaction, modulate contextual fear conditioning; specifically, functional polymorphic variants in the genes coding for the 5-HT transporter (5HTT) and the NPS receptor (NPSR1) were studied. A large group of healthy volunteers was therefore stratified for 5HTTLPR (S+ vs. LL carriers) and NPSR1 rs324981 (T+ vs. AA carriers) polymorphisms, resulting in four genotype groups (S+/T+, S+/AA, LL/T+, LL/AA) of 20 participants each. All participants underwent contextual fear conditioning and extinction using a virtual reality (VR) paradigm. During acquisition, one virtual office room (anxiety context, CXT+) was paired with an unpredictable electric stimulus (unconditioned stimulus, US), whereas another virtual office room was not paired with any US (safety context, CXT−). During extinction, no US was administered. Anxiety responses were quantified by fear-potentiated startle and ratings. Most importantly, we found a gene × gene interaction on fear-potentiated startle. Only carriers of both risk alleles (S+/T+) exhibited higher startle responses in CXT+ compared to CXT−. In contrast, anxiety ratings were only influenced by the NPSR1 polymorphism, with AA carriers showing higher anxiety ratings in CXT+ as compared to CXT−. Our results speak in favor of a two-level account of fear conditioning with diverging effects on implicit vs. explicit fear responses. Enhanced contextual fear conditioning as reflected in potentiated startle responses may be an endophenotype for anxiety disorders.
This study examined the impact of three clinical psychological variables (non-pathological levels of depression and anxiety, as well as experimentally manipulated mood) on fat and taste perception in healthy subjects. After a baseline orosensory evaluation, ‘sad’, ‘happy’ and ‘neutral’ video clips were presented to induce corresponding moods in eighty participants. Following mood manipulation, subjects rated five different oral stimuli, tasting sweet, umami, sour, bitter, and fatty, each delivered at five different concentrations. Depression levels were assessed with Beck's Depression Inventory (BDI), and anxiety levels were assessed via Spielberger's State-Trait Anxiety Inventory (STAI, trait and state versions). Overall, subjects were able to track the concentrations of the stimuli correctly, yet depression level affected taste ratings. First, depression scores were positively correlated with sucrose ratings. Second, subjects with depression scores above the sample median rated sucrose and quinine as more intense after mood induction (positive, negative and neutral). Third, and most importantly, the group with enhanced depression scores did not rate low and high fat stimuli differently after positive or negative mood induction, whereas during baseline and during the non-emotional neutral condition they rated the fat intensity as increasing with concentration. Consistent with others' prior observations, we also found that sweet and bitter stimuli at baseline were rated as more intense by participants with higher anxiety scores, and that after positive and negative mood induction, citric acid was rated as stronger tasting compared to baseline. The observation that subjects with mild subclinical depression rated low and high fat stimuli similarly when in a positive or negative mood is novel and likely has implications for unhealthy eating patterns. This deficit may foster unconscious eating of fatty foods in subclinical, mildly depressed populations.