Extinction is an important mechanism for inhibiting initially acquired fear responses. There is growing evidence that the ventromedial prefrontal cortex (vmPFC) inhibits the amygdala and therefore plays an important role in the extinction of delay fear conditioning. To our knowledge, there is to date no evidence on the role of the prefrontal cortex in the extinction of trace conditioning. Thus, we compared brain structures involved in the extinction of human delay and trace fear conditioning in a between-subjects design in an fMRI study. Participants were passively guided through a virtual environment during learning and extinction of conditioned fear. Two different lights served as conditioned stimuli (CS); a mildly painful electric stimulus served as the unconditioned stimulus (US). In the delay conditioning group (DCG) the US was administered at the offset of one light (CS+), whereas in the trace conditioning group (TCG) the US was presented 4 s after CS+ offset. Both groups showed insular and striatal activation during early extinction but differed in their prefrontal activation: the vmPFC was mainly activated in the DCG, whereas the TCG showed activation of the dorsolateral prefrontal cortex (dlPFC) during extinction. These results point to different extinction processes in delay and trace conditioning. VmPFC activation during extinction of delay conditioning might reflect inhibition of the fear response. In contrast, dlPFC activation during extinction of trace conditioning may reflect modulation of working memory processes that are involved in bridging the trace interval and holding information in short-term memory.
Our current environment is characterized by the omnipresence of food cues. The sight and smell of real foods, but also graphical depictions of appetizing foods, can guide our eating behavior, for example by eliciting food craving and influencing food choice. The relevance of visual food cues to human information processing has been demonstrated by a growing body of studies employing food images across the disciplines of psychology, medicine, and neuroscience. However, currently used food image sets vary considerably across laboratories, and image characteristics (contrast, brightness, etc.) and food composition (calories, macronutrients, etc.) are often unspecified. These factors might have contributed to some of the inconsistencies in this research. To remedy this, we developed food-pics, a picture database comprising 568 food images and 315 non-food images along with detailed meta-data. A total of N = 1988 individuals with large variance in age and weight from German-speaking countries and North America provided normative ratings of valence, arousal, palatability, desire to eat, recognizability, and visual complexity. Furthermore, data on macronutrients (g), energy density (kcal), and physical image characteristics (color composition, contrast, brightness, size, complexity) are provided. The food-pics image database is freely available under the Creative Commons license with the hope that the set will facilitate standardization and comparability across studies and advance experimental research on the determinants of eating behavior.
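The physical image characteristics mentioned above (brightness, contrast, color composition) are not defined in the abstract. A minimal sketch of how such measures are commonly computed, assuming mean luminance for brightness and RMS contrast, with ITU-R BT.601 luma weights — these are common conventions, not the database's documented formulas:

```python
import numpy as np

def image_stats(rgb):
    """Compute simple physical image characteristics for an RGB array.

    rgb: uint8 array of shape (H, W, 3).
    Returns mean brightness, RMS contrast, and per-channel color composition.
    """
    # Luminance via the common ITU-R BT.601 weighting (an assumption here).
    lum = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    brightness = lum.mean()
    contrast = lum.std()                      # RMS contrast
    color = rgb.reshape(-1, 3).mean(axis=0)   # mean R, G, B values
    return brightness, contrast, color

# Toy example: a half-black, half-white image.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[:, 5:] = 255
b, c, col = image_stats(img)
```

Other labs may use different luma weights or contrast definitions; the point is simply that publishing the formula alongside the values makes image sets comparable.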
Cognitive theories on causes of developmental dyslexia can be divided into language-specific and general accounts. While the former assume that words are special in that associated processing problems are rooted in deficits of language-related cognition (e.g., phonology), the latter propose that dyslexia is rather rooted in a general impairment of cognitive (e.g., visual and/or auditory) processing streams. In the present study, we examined to what extent dyslexia (typically characterized by poor orthographic representations) may be associated with a general deficit in visual long-term memory (LTM) for details. We compared object- and detail-related visual LTM performance (and phonological skills) between dyslexic primary school children and IQ-, age-, and gender-matched controls. The results revealed that while the overall number of LTM errors was comparable between groups, dyslexic children exhibited a greater proportion of detail-related errors. The results suggest that not only phonological but also general visual resolution deficits in LTM may play an important role in developmental dyslexia.
Task instructions modulate the attentional mode affecting the auditory MMN and the semantic N400 (2014)
Event-related potentials (ERPs) have proven to be a useful tool to complement clinical assessment and to detect residual cognitive functions in patients with disorders of consciousness. These ERPs are often recorded using passive or unspecific instructions. Patient data obtained this way are then compared to data from healthy participants, which are usually recorded using active instructions. The present study investigates the effect of attentional modulation, and particularly the effect of active vs. passive instruction, on the ERP components mismatch negativity (MMN) and N400. A sample of 18 healthy participants listened to three auditory paradigms: an oddball, a word priming, and a sentence paradigm. Each paradigm was presented three times with different instructions: ignoring the auditory stimuli, passive listening, and focused attention on the auditory stimuli. After each task, the participants indicated their subjective effort. The N400 decreased from the focused task to the passive task and was absent in the ignore task. The MMN exhibited higher amplitudes in the focused and passive tasks compared to the ignore task. The data indicate an effect of attention on the supratemporal component of the MMN. Subjective effort was equally high in the passive and focused tasks but reduced in the ignore task. We conclude that passive listening during EEG recording is stressful and attenuates ERPs, which renders the interpretation of results obtained under such conditions difficult.
Electroencephalography (EEG) often fails to assess both the level (i.e., arousal) and the content (i.e., awareness) of pathologically altered consciousness in patients without motor responsiveness. This might be related to a decline of awareness, to episodes of low arousal and disturbed sleep patterns, and/or to distorting and attenuating effects of the skull and intermediate tissue on the recorded brain signals. Novel approaches are required to overcome these limitations. We introduced epidural electrocorticography (ECoG) for monitoring of cortical physiology in a late-stage amyotrophic lateral sclerosis patient in completely locked-in state (CLIS). Despite long-term application over a period of six months, no implant-related complications occurred. Recordings from the left frontal cortex were sufficient to identify three arousal states. Spectral analysis of the intrinsic oscillatory activity enabled us to extract state-dependent dominant frequencies at <4 Hz, ~7 Hz, and ~20 Hz, representing sleep-like periods and phases of low and elevated arousal, respectively. In the absence of other biomarkers, ECoG proved to be a reliable tool for monitoring circadian rhythmicity, i.e., avoiding interference with the patient when he was sleeping and exploiting time windows of responsiveness. Moreover, the effects of interventions addressing the patient's arousal, e.g., amantadine medication, could be evaluated objectively on the basis of physiological markers, even in the absence of behavioral parameters. Epidural ECoG constitutes a feasible trade-off between surgical risk and quality of recorded brain signals for gaining information on the patient's present level of arousal. This approach enables us to optimize the timing of interactions and medical interventions, all of which should take place when the patient is in a phase of high arousal.
Furthermore, avoiding low responsiveness periods will facilitate measures to implement alternative communication pathways involving brain-computer interfaces (BCI).
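The spectral workflow described above — estimate the power spectrum, locate the dominant frequency, and map it onto one of three arousal states — can be sketched as follows. The sampling rate, analysis band, and state boundaries are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

FS = 256  # sampling rate in Hz (an assumed value; the study's rate is not given)

def dominant_frequency(signal, fs=FS, fmax=30.0):
    """Return the frequency (Hz) with the highest spectral power below fmax."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs > 0.5) & (freqs <= fmax)   # ignore DC offset and slow drift
    return freqs[band][np.argmax(power[band])]

def arousal_state(f_dom):
    """Map a dominant frequency onto the three states described above.

    The <4 Hz boundary comes from the abstract; the 12 Hz split between the
    ~7 Hz and ~20 Hz regimes is an assumption for illustration.
    """
    if f_dom < 4.0:
        return "sleep-like"
    elif f_dom < 12.0:
        return "low arousal"       # ~7 Hz regime
    else:
        return "elevated arousal"  # ~20 Hz regime

# Toy example: a 4-second 7 Hz oscillation should land in "low arousal".
t = np.arange(0, 4, 1.0 / FS)
sig = np.sin(2 * np.pi * 7 * t)
state = arousal_state(dominant_frequency(sig))
```

In practice such a classifier would run on short sliding windows of the ECoG signal, so that caregivers can check the current state before initiating an interaction.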
In everyday life, multiple sensory channels jointly trigger emotional experiences, and one channel may alter processing in another. For example, seeing an emotional facial expression and hearing the voice's emotional tone jointly create the emotional experience. This example, in which auditory and visual input relate to social communication, has received considerable attention from researchers. However, interactions of visual and auditory emotional information are not limited to social communication but extend to much broader contexts including human, animal, and environmental cues. In this article, we review current research on audiovisual emotion processing beyond face-voice stimuli to develop a broader perspective on multimodal interactions in emotion processing. We argue that current concepts of multimodality should be extended to consider an ecologically valid variety of stimuli in audiovisual emotion processing. Therefore, we provide an overview of studies in which emotional sounds and interactions with complex pictures of scenes were investigated. In addition to behavioral studies, we focus on neuroimaging as well as electro- and peripheral-physiological findings. Furthermore, we integrate these findings and identify similarities and differences. We conclude with suggestions for future research.
This study aimed at evaluating the performance of the Studentized Continuous Wavelet Transform (t-CWT) as a method for the extraction and assessment of event-related brain potentials (ERP) in data from a single subject. Sensitivity, specificity, positive (PPV) and negative predictive values (NPV) of the t-CWT were assessed and compared to a variety of competing procedures using simulated EEG data at six low signal-to-noise ratios. Results show that the t-CWT combines high sensitivity and specificity with favorable PPV and NPV. Applying the t-CWT to authentic EEG data obtained from 14 healthy participants confirmed its high sensitivity. The t-CWT may thus be well suited for the assessment of weak ERPs in single-subject settings.
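A much-simplified sketch of the idea behind studentizing a wavelet transform: compute a time-frequency magnitude map per trial, then form a t-statistic across trials at each time-frequency point, so that weak but consistent responses stand out against trial-to-trial noise. The hand-rolled Morlet wavelet and the one-sample studentization below are illustrative simplifications; the actual t-CWT procedure differs in detail:

```python
import numpy as np

def morlet_cwt(x, fs, freqs, n_cycles=5):
    """Magnitude of a continuous wavelet transform via complex Morlet wavelets."""
    out = np.empty((len(freqs), len(x)))
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)            # temporal width of wavelet
        t = np.arange(-3 * sigma, 3 * sigma, 1.0 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        wavelet /= np.abs(wavelet).sum()              # simple normalization
        out[i] = np.abs(np.convolve(x, wavelet, mode="same"))
    return out

def t_cwt(trials, fs, freqs):
    """Studentize CWT magnitude across trials: t = mean / (sd / sqrt(n)).

    trials: array (n_trials, n_samples). Returns an (n_freqs, n_samples) t-map.
    """
    maps = np.stack([morlet_cwt(tr, fs, freqs) for tr in trials])
    n = maps.shape[0]
    return maps.mean(axis=0) / (maps.std(axis=0, ddof=1) / np.sqrt(n) + 1e-12)
```

The appeal for single-subject assessment is that the t-map is computed entirely within one subject's trials, which is exactly the setting the study evaluates.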
Modulation of sensorimotor rhythms (SMR) has been suggested as a control signal for brain-computer interfaces (BCI). Yet an estimated 10 to 50% of users are not able to achieve reliable control, and only about 20% of users achieve high (80–100%) performance. Predicting performance prior to BCI use would facilitate selection of the most feasible system for an individual, thus constituting a practical benefit for the user, and would increase our knowledge about the correlates of BCI control. In a recent study, we predicted SMR-BCI performance from psychological variables that were assessed prior to the BCI sessions, with BCI control supported by machine-learning techniques. We described two significant psychological predictors, namely visuo-motor coordination ability and the ability to concentrate on the task. The purpose of the current study was to replicate these results, thereby validating these predictors within a neurofeedback-based SMR-BCI that involved no machine learning. Thirty-three healthy BCI novices participated in a calibration session and three further neurofeedback training sessions. Two variables were related to mean SMR-BCI performance: (1) a measure of the accuracy of fine motor skills, i.e., a proxy for a person’s visuo-motor control ability; and (2) the subject’s “attentional impulsivity”. In a linear regression they accounted for almost 20% of the variance in SMR-BCI performance, but predictor (1) failed to reach significance. Nevertheless, on the basis of our prior regression model for sensorimotor control ability, we could predict current SMR-BCI performance with an average prediction error of M = 12.07%. In more than 50% of the participants, the prediction error was smaller than 10%. Hence, psychological variables played a moderate role in predicting SMR-BCI performance in a neurofeedback approach that involved no machine learning. Future studies are needed to further consolidate (or reject) the present predictors.
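The regression step described above — two psychological predictors entered into an ordinary least squares model of BCI accuracy, then evaluated by mean absolute prediction error — can be sketched like this. The data are invented for illustration; variable names, effect sizes, and noise level are assumptions, not the study's values:

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares: returns beta such that y ≈ [1, X] @ beta."""
    A = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

# Hypothetical data for 33 participants: visuo-motor control score and
# attentional impulsivity predicting BCI accuracy (%); all values invented.
rng = np.random.default_rng(1)
vm = rng.uniform(0, 10, 33)            # visuo-motor control score
imp = rng.uniform(0, 10, 33)           # attentional impulsivity
acc = 60 + 2.0 * vm - 1.5 * imp + rng.normal(0, 3, 33)

X = np.column_stack([vm, imp])
beta = fit_linear(X, acc)
pred = predict(beta, X)
mean_abs_err = np.abs(pred - acc).mean()
```

The study's M = 12.07% figure is the analogue of `mean_abs_err`, except that there the model was fitted on the earlier sample and evaluated on the new one, which is the stricter out-of-sample test.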
Humans form impressions of others by associating persons (faces) with negative or positive social outcomes. This learning process has been referred to as social conditioning. In everyday life, affective nonverbal gestures may constitute important social signals cueing threat or safety, and may therefore support such learning processes. In conventional aversive conditioning, studies using electroencephalography to investigate visuocortical processing of visual stimuli paired with danger cues such as aversive noise have demonstrated facilitated processing and enhanced sensory gain in visual cortex. The present study aimed at extending this line of research to the field of social conditioning by pairing neutral face stimuli with affective nonverbal gestures. To this end, electrocortical processing of faces serving as different conditioned stimuli was investigated in a differential social conditioning paradigm. Behavioral ratings and steady-state visually evoked potentials (ssVEP) were recorded in twenty healthy human participants, who underwent a differential conditioning procedure in which three neutral faces were paired with pictures of negative (raised middle finger), neutral (pointing), or positive (thumbs-up) gestures. As expected, faces associated with the aversive hand gesture (raised middle finger) elicited larger ssVEP amplitudes during conditioning. Moreover, these faces were rated as more arousing and more unpleasant. These results suggest that cortical engagement in response to faces aversively conditioned with nonverbal gestures is facilitated in order to establish persistent vigilance for social threat-related cues. This form of social conditioning makes it possible to establish a predictive relationship between social stimuli and motivationally relevant outcomes.
Objective: The aim of this longitudinal study was to identify predictors of instantaneous well-being in patients with amyotrophic lateral sclerosis (ALS). Based on flow theory, well-being was expected to be highest when perceived demands and perceived control were in balance; thinking about the past was expected to be a risk factor for rumination, which would in turn reduce well-being.
Methods: Using the experience sampling method, data on current activities, associated aspects of perceived demands, control, and well-being were collected from 10 patients with ALS three times a day for two weeks.
Results: Perceived control was uniformly and positively associated with well-being, whereas demands were positively associated with well-being only when they were perceived as controllable. Mediation analysis confirmed thinking about the past, but not thinking about the future, to be a risk factor for rumination and reduced well-being.
Discussion: Findings extend our knowledge of factors contributing to well-being in ALS as not only perceived control but also perceived demands can contribute to well-being. They further show that a focus on present experiences might contribute to increased well-being.