The affective dimensions of emotional valence and emotional arousal affect processing of verbal and pictorial stimuli. Traditional emotional theories assume a linear relationship between these dimensions, with valence determining the direction of a behavior (approach vs. withdrawal) and arousal its intensity or strength. In contrast, according to the valence-arousal conflict theory, both dimensions are interactively related: positive valence and low arousal (PL) are associated with an implicit tendency to approach a stimulus, whereas negative valence and high arousal (NH) are associated with withdrawal. Hence, positive, high-arousal (PH) and negative, low-arousal (NL) stimuli elicit conflicting action tendencies. By extending previous research that used several tasks and methods, the present study investigated whether and how emotional valence and arousal affect subjective approach vs. withdrawal tendencies toward emotional words during two novel tasks. In Study 1, participants had to decide whether they would approach or withdraw from concepts expressed by written words. In Studies 2 and 3, participants had to respond to each word by pressing one of two keys labeled with an arrow pointing upward or downward. Across experiments, positive and negative words, high or low in arousal, were presented. In Study 1 (explicit task), in line with the valence-arousal conflict theory, PH and NL words were responded to more slowly than PL and NH words. In addition, participants decided to approach positive words more often than negative words. In Studies 2 and 3, participants responded faster to positive than negative words, irrespective of their level of arousal. Furthermore, positive words were significantly more often associated with “up” responses than negative words, thus supporting the existence of implicit associations between stimulus valence and response coding (positive is up and negative is down).
Hence, in contexts in which participants' spontaneous responses are based on implicit associations between stimulus valence and response, there is no influence of arousal. In line with the valence-arousal conflict theory, arousal seems to affect participants' approach-withdrawal tendencies only when such tendencies are made explicit by the task, and a minimal degree of processing depth is required.
Cognitive Processing in Non-Communicative Patients: What Can Event-Related Potentials Tell Us?
(2016)
Event-related potentials (ERP) have been proposed to improve the differential diagnosis of non-responsive patients. We investigated the potential of the P300 as a reliable marker of conscious processing in patients with locked-in syndrome (LIS). Eleven chronic LIS patients and 10 healthy subjects (HS) listened to a complex-tone auditory oddball paradigm, first in a passive condition (listening to the sounds) and then in an active condition (counting the deviant tones). Seven out of nine HS displayed a P300 waveform in the passive condition and all in the active condition. HS showed statistically significant changes in peak and area amplitude between conditions. Three out of seven LIS patients showed the P3 waveform in the passive condition and five of seven in the active condition. No changes in peak amplitude and only a significant difference at one electrode in area amplitude were observed in this group between conditions. We conclude that, despite retaining full consciousness and intact or nearly intact cortical functions, LIS patients present less reliable results than HS when tested with ERPs, specifically in the passive condition. We thus strongly recommend applying ERP paradigms in an active condition when evaluating consciousness in non-responsive patients.
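The peak- and area-amplitude measures used to compare the passive and active conditions can be sketched in a few lines. This is a minimal illustration on a deviant-minus-standard difference wave; the single-channel input and the 250–500 ms window are assumptions for the sketch, not details taken from the study.

```python
import numpy as np

def p300_measures(standard, deviant, sfreq=250.0, tmin=-0.1, win=(0.25, 0.5)):
    """Peak and area amplitude of the deviant-minus-standard difference wave.

    standard, deviant: (n_trials, n_samples) arrays for one channel (e.g. Pz).
    win: assumed P300 latency window in seconds relative to stimulus onset.
    """
    diff = deviant.mean(axis=0) - standard.mean(axis=0)   # difference wave
    times = tmin + np.arange(diff.size) / sfreq
    mask = (times >= win[0]) & (times <= win[1])
    peak = diff[mask].max()                               # peak amplitude
    area = diff[mask].sum() / sfreq                       # area: sum x sample period
    return peak, area
```

In practice the epochs would come from artifact-cleaned, baseline-corrected EEG segments rather than raw arrays.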
Effects of Background Music on Objective and Subjective Performance Measures in an Auditory BCI
(2016)
Several studies have explored brain computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory and other BCIs are typically studied without background music. Some work has explored the possibility of using polyphonic music in auditory BCI systems. However, this approach requires users with good musical skills, and has not been explored in online experiments. Our hypothesis was that an auditory BCI with background music would be preferred by subjects over a similar BCI without background music, without any difference in BCI performance. We introduce a simple paradigm (which does not require musical skill) using percussion instrument sound stimuli and background music, and evaluated it in both offline and online experiments. The results showed that subjects preferred the auditory BCI with background music. Different performance measures did not reveal any significant performance effect when comparing background music vs. no background music. Since the addition of background music does not impair BCI performance but is preferred by users, auditory (and perhaps other) BCIs should consider including it. Our study also indicates that auditory BCIs can be effective even if the auditory channel is simultaneously otherwise engaged.
The present study investigates how different emotions can alter social bargaining behavior. An important paradigm to study social bargaining is the Ultimatum Game. There, a proposer gets a pot of money and has to offer part of it to a responder. If the responder accepts, both players get the money as proposed by the proposer. If the responder rejects, neither player gets anything. Rational choice models would predict that responders accept all offers above 0. However, evidence shows that responders typically reject a large proportion of all unfair offers. We analyzed participants’ behavior when they played the Ultimatum Game as responders and simultaneously collected electroencephalogram data in order to quantify the feedback-related negativity and P3b components. We induced state affect (momentary emotions unrelated to the task) via short movie clips and measured trait affect (longer-lasting emotional dispositions) via questionnaires. State happiness led to increased acceptance rates of very unfair offers. Regarding neurophysiology, we found that unfair offers elicited larger feedback-related negativity amplitudes than fair offers. Additionally, an interaction of state and trait affect occurred: high trait negative affect (subsuming a variety of aversive mood states) led to increased feedback-related negativity amplitudes when participants were in an angry mood, but not if they currently experienced fear or happiness. We discuss that increased rumination might be responsible for this result, which might not occur, however, when people experience happiness or fear. Apart from that, we found that fair offers elicited larger P3b components than unfair offers, which might reflect increased pleasure in response to fair offers. Moreover, high trait negative affect was associated with decreased P3b amplitudes, potentially reflecting decreased motivation to engage in activities.
We discuss implications of our results in the light of theories and research on depression and anxiety.
Theta oscillations in the EEG have been shown to reflect ongoing cognitive processes related to mental effort. Here, we show that the pattern of theta oscillation in response to varying cognitive demands reflects stable individual differences in the personality trait epistemic motivation: Individuals with high levels of epistemic motivation recruit relatively more cognitive resources in response to situations possessing high, compared to low, cognitive demand; individuals with low levels do not show such a specific response. Our results provide direct evidence for the theory underlying the construct need for cognition and add to our understanding of the neural processes underlying theta oscillations. More generally, we provide an explanation of how individual differences in personality traits might be represented at the neural level.
Previous studies of social phobia have reported an increased vigilance to social threat cues but also an avoidance of socially relevant stimuli such as eye gaze. The primary aim of this study was to examine attentional mechanisms relevant for perceiving social cues by means of abnormalities in scanning of facial features in patients with social phobia. In two novel experimental paradigms, patients with social phobia and healthy controls matched on age, gender and education were compared regarding their gazing behavior towards facial cues. The first experiment was an emotion classification paradigm which allowed for differentiating reflexive attentional shifts from sustained attention towards diagnostically relevant facial features. In the second experiment, attentional orienting by gaze direction was assessed in a gaze-cueing paradigm in which non-predictive gaze cues shifted attention towards or away from subsequently presented targets. We found that patients as compared to controls reflexively oriented their attention more frequently towards the eyes of emotional faces in the emotion classification paradigm. This initial hypervigilance for the eye region was observed at very early attentional stages when faces were presented for 150 ms, and persisted when facial stimuli were shown for 3 s. Moreover, a delayed attentional orienting into the direction of eye gaze was observed in individuals with social phobia suggesting a differential time course of eye gaze processing in patients and controls. Our findings suggest that basic mechanisms of early attentional exploration of social cues are biased in social phobia and might contribute to the development and maintenance of the disorder.
Fear is elicited by imminent threat and leads to phasic fear responses with selective attention, whereas anxiety is characterized by a sustained state of heightened vigilance due to uncertain danger. In the present study, we investigated attention mechanisms in fear and anxiety by adapting the NPU-threat test to measure steady-state visual evoked potentials (ssVEPs). We investigated ssVEPs across no aversive events (N), predictable aversive events (P), and unpredictable aversive events (U), signaled by four-object arrays (30 s). In addition, central cues were presented during all conditions but predictably signaled imminent threat only during the P condition. Importantly, cues and context events were flickered at different frequencies (15 Hz vs. 20 Hz) in order to disentangle the respective electrocortical responses. The onset of the context elicited larger electrocortical responses for the U compared to the P context. Conversely, P cues elicited larger electrocortical responses compared to N cues. Interestingly, during the presence of the P cue, visuocortical processing of the concurrent context was also enhanced. The results support the notion of enhanced initial hypervigilance to unpredictable compared to predictable threat contexts, whereas predictable cues elicit enhanced electrocortical responses themselves and additionally boost processing of the concurrent context.
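Disentangling the 15 Hz and 20 Hz tagged responses rests on reading out spectral amplitude at each tagging frequency separately. A minimal sketch of that readout (plain FFT on one channel; the Hann window is a common choice, not a detail taken from the study):

```python
import numpy as np

def ssvep_amplitude(signal, sfreq, freq):
    """Spectral amplitude at one tagging frequency (e.g. 15 or 20 Hz).

    signal: 1-D EEG time series from an occipital channel; a Hann window is
    applied before the FFT to reduce spectral leakage.
    """
    n = signal.size
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / sfreq)
    return spectrum[np.argmin(np.abs(freqs - freq))]
```

Because cue and context flicker at different rates, the amplitude at each frequency indexes processing of the corresponding stimulus even though both are on screen simultaneously.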
Strong bottom-up impulses and weak top-down control may interactively lead to overeating and, consequently, weight gain. In the present study, female university freshmen were tested at the start of the first semester and again at the start of the second semester. Attentional bias toward high- or low-calorie food-cues was assessed using a dot-probe paradigm and participants completed the Barratt Impulsiveness Scale. Attentional bias and motor impulsivity interactively predicted change in body mass index: motor impulsivity positively predicted weight gain only when participants showed an attentional bias toward high-calorie food-cues. Attentional and non-planning impulsivity were unrelated to weight change. Results support findings showing that weight gain is prospectively predicted by a combination of weak top-down control (i.e. high impulsivity) and strong bottom-up impulses (i.e. high automatic motivational drive toward high-calorie food stimuli). They also highlight the fact that only specific aspects of impulsivity are relevant in eating and weight regulation.
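The dot-probe attentional bias underlying this analysis is conventionally scored as a reaction-time difference between probe positions; the sketch below assumes that standard scoring convention (the study's exact computation is not specified in the abstract):

```python
import numpy as np

def attentional_bias(rt_incongruent, rt_congruent):
    """Dot-probe bias score in ms.

    rt_congruent: RTs when the probe replaces the (high-calorie) food cue;
    rt_incongruent: RTs when it replaces the paired neutral cue. Positive
    scores indicate attention drawn toward the food cue.
    """
    return float(np.mean(rt_incongruent) - np.mean(rt_congruent))
```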
Out of the corner of the driver's eye: Peripheral processing of hazards in static traffic scenes
(2016)
Effective gaze control in traffic, based on peripheral visual information, is important to avoid hazards. Whereas previous hazard perception research mainly focused on skill-component development (e.g., orientation and hazard processing), little is known about the role and dynamics of peripheral vision in hazard perception. We analyzed eye movement data from a study in which participants scanned static traffic scenes including medium-level versus dangerous hazards and focused on characteristics of fixations prior to entering the hazard region. We found that initial saccade amplitudes into the hazard region were substantially longer for dangerous (vs. medium-level) hazards, irrespective of participants' driving expertise. An analysis of the temporal dynamics of this hazard-level dependent saccade targeting distance effect revealed that peripheral hazard-level processing occurred around 200–400 ms during the course of the fixation prior to entering the hazard region. An additional psychophysical hazard detection experiment, in which hazard eccentricity was manipulated, revealed better detection for dangerous (vs. medium-level) hazards in both central and peripheral vision. Furthermore, we observed a significant perceptual decline from center to periphery for medium (but not for highly) dangerous hazards. Overall, the results suggest that hazard processing is remarkably effective in peripheral vision and utilized to guide the eyes toward potential hazards.
People with post-stroke motor aphasia know what they would like to say but cannot express it through motor pathways due to disruption of cortical circuits. We present a theoretical background for our hypothesized connection between attention and aphasia rehabilitation and suggest why in this context, Brain-Computer Interface (BCI) use might be beneficial for patients diagnosed with aphasia. Not only could BCI technology provide a communication tool, it might support neuronal plasticity by activating language circuits and thereby boost aphasia recovery. However, stroke may lead to heterogeneous symptoms that might hinder BCI use, which is why the feasibility of this approach needs to be investigated first. In this pilot study, we included five participants diagnosed with post-stroke aphasia. Four participants were initially unable to use the visual P300 speller paradigm. By adjusting the paradigm to their needs, participants could successfully learn to use the speller for communication with accuracies up to 100%. We describe necessary adjustments to the paradigm and present future steps to investigate further this approach.
It has been argued that several reported non-visual influences on perception cannot be truly perceptual. If they were, they should affect the perception of target objects and reference objects used to express perceptual judgments, and thus cancel each other out. This reasoning presumes that non-visual manipulations impact target objects and comparison objects equally. In the present study we show that equalizing a body-related manipulation between target objects and reference objects essentially abolishes the impact of that manipulation so as it should do when that manipulation actually altered perception. Moreover, the manipulation has an impact on judgements when applied to only the target object but not to the reference object, and that impact reverses when only applied to the reference object but not to the target object. A perceptual explanation predicts this reversal, whereas explanations in terms of post-perceptual response biases or demand effects do not. Altogether these results suggest that body-related influences on perception cannot as a whole be attributed to extra-perceptual factors.
BACKGROUND:
Thigmotaxis refers to a specific behavior of animals (i.e., to stay close to walls when exploring an open space). Such behavior can be assessed with the open field test (OFT), which is a well-established indicator of animal fear. The detection of similar open field behavior in humans may verify the translational validity of this paradigm. Enhanced thigmotaxis related to anxiety may suggest the relevance of such behavior for anxiety disorders, especially agoraphobia.
METHODS:
A global positioning system was used to analyze the behavior of 16 patients with agoraphobia and 18 healthy individuals with a risk for agoraphobia (i.e., high anxiety sensitivity) during a human OFT and compare it with appropriate control groups (n = 16 and n = 19). We also tracked 17 patients with agoraphobia and 17 control participants during a city walk that involved walking through an open market square.
RESULTS:
Our human OFT triggered thigmotaxis in participants; patients with agoraphobia and participants with high anxiety sensitivity exhibited enhanced thigmotaxis. This behavior was evident in increased movement lengths along the wall of the natural open field and fewer entries into the center of the field despite normal movement speed and length. Furthermore, participants avoided passing through the market square during the city walk, indicating again that thigmotaxis is related to agoraphobia.
CONCLUSIONS:
This study is the first to our knowledge to verify the translational validity of the OFT and to reveal that thigmotaxis, an evolutionarily adaptive behavior shown by most species, is related to agoraphobia, a pathologic fear of open spaces, and anxiety sensitivity, a risk factor for agoraphobia.
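A simple way to quantify thigmotaxis from positional tracking data is the fraction of samples lying close to the boundary. The sketch below assumes a rectangular field and a hypothetical 1 m wall margin; the study's natural open field and its actual scoring (path length along the wall, center entries) differ from this simplification.

```python
import numpy as np

def thigmotaxis_index(xy, field_min, field_max, margin=1.0):
    """Fraction of track points within `margin` meters of the field boundary.

    xy: (n, 2) array of positions; the field is assumed rectangular,
    spanning field_min..field_max on both axes.
    """
    xy = np.asarray(xy, dtype=float)
    lo = np.asarray(field_min, dtype=float)
    hi = np.asarray(field_max, dtype=float)
    dist_to_wall = np.minimum(xy - lo, hi - xy).min(axis=1)  # nearest wall
    return float((dist_to_wall <= margin).mean())
```

An index near 1 indicates wall-hugging (thigmotactic) movement; an index near 0 indicates free use of the center of the field.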
Epigenetic signatures such as methylation of the monoamine oxidase A (MAOA) gene have been found to be altered in panic disorder (PD). Hypothesizing temporal plasticity of epigenetic processes as a mechanism of successful fear extinction, the present psychotherapy-epigenetic study investigated, for what we believe is the first time, MAOA methylation changes during the course of exposure-based cognitive behavioral therapy (CBT) in PD. MAOA methylation was compared between N=28 female Caucasian PD patients (discovery sample) and N=28 age- and sex-matched healthy controls via direct sequencing of sodium bisulfite-treated DNA extracted from blood cells. MAOA methylation was furthermore analyzed at baseline (T0) and after a 6-week CBT (T1) in the discovery sample, paralleled by a waiting period in healthy controls, as well as in an independent sample of female PD patients (N=20). Patients exhibited lower MAOA methylation than healthy controls (P<0.001), and baseline PD severity correlated negatively with MAOA methylation (P=0.01). In the discovery sample, MAOA methylation increased up to the level of healthy controls along with CBT response (number of panic attacks; T0-T1: +3.37±2.17%), while non-responders further decreased in methylation (-2.00±1.28%; P=0.001). In the replication sample, increases in MAOA methylation correlated with agoraphobic symptom reduction after CBT (P=0.02-0.03). The present results support previous evidence for MAOA hypomethylation as a PD risk marker and suggest reversibility of MAOA hypomethylation as a potential epigenetic correlate of response to CBT. The emerging notion of epigenetic signatures as a mechanism of action of psychotherapeutic interventions may promote epigenetic patterns as biomarkers of lasting extinction effects.
Expectation and previous experience are both well-established key mediators of placebo and nocebo effects. However, the investigation of their respective contribution to placebo and nocebo responses is rather difficult because most placebo and nocebo manipulations are contaminated by pre-existing treatment expectancies resulting from a learning history of previous medical interventions. To circumvent any resemblance to classical treatments, a purely psychological placebo-nocebo manipulation was established, namely, the "visual stripe pattern induced modulation of pain." To this end, experience and expectation regarding the effects of different visual cues (stripe patterns) on pain were varied across 3 different groups, with either only placebo instruction (expectation), placebo conditioning (experience), or both (expectation + experience) applied. Only the combined manipulation (expectation + experience) revealed significant behavioral and physiological placebo and nocebo effects on pain. Two subsequent experiments, which, in addition to placebo and nocebo cues, included a neutral control condition, further showed that especially nocebo responses were more easily induced by this psychological placebo and nocebo manipulation. The results emphasize the strong influence of psychological processes on placebo and nocebo effects. Particularly, nocebo effects should be addressed more thoroughly and carefully considered in clinical practice to prevent the accidental induction of side effects.
Gaze-independent brain-computer interfaces (BCIs) are a possible communication channel for persons with paralysis. We investigated if it is possible to use auditory stimuli to create a BCI for the Japanese Hiragana syllabary, which has 46 Hiragana characters. Additionally, we investigated if training has an effect on accuracy despite the large number of different stimuli involved. Able-bodied participants (N = 6) were asked to select 25 syllables (out of fifty possible choices) using a two-step procedure: first the consonant (ten choices) and then the vowel (five choices). This was repeated on 3 separate days. Additionally, a person with spinal cord injury (SCI) participated in the experiment. Four out of six healthy participants reached Hiragana syllable accuracies above 70% and the information transfer rate increased from 1.7 bits/min in the first session to 3.2 bits/min in the third session. The accuracy of the participant with SCI increased from 12% (0.2 bits/min) to 56% (2 bits/min) in session three. Reliable selections from a 10 × 5 matrix using auditory stimuli were possible, and performance increased with training. We were able to show that auditory P300 BCIs can be used for communication with up to fifty symbols. This enables the use of the technology of auditory P300 BCIs with a variety of applications.
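The reported bits/min values are information transfer rates. Under the standard Wolpaw definition (whether the study used exactly this definition, and which per-selection timing applies, are assumptions here), the ITR can be computed as follows:

```python
import math

def wolpaw_bits_per_selection(n_classes, accuracy):
    """Wolpaw information transfer per selection, in bits."""
    if accuracy >= 1.0:
        return math.log2(n_classes)   # perfect accuracy: full symbol entropy
    if accuracy <= 0.0:
        return 0.0                    # degenerate case
    p, n = accuracy, n_classes
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_minute(n_classes, accuracy, seconds_per_selection):
    """Scale bits per selection by the selection rate."""
    return wolpaw_bits_per_selection(n_classes, accuracy) * 60.0 / seconds_per_selection
```

For the two-step Hiragana procedure, the consonant (10 classes) and vowel (5 classes) steps could each be scored this way and combined; the formula also goes negative for below-chance accuracy, which this sketch does not clamp.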
Most research on human fear conditioning and its generalization has focused on adults whereas only little is known about these processes in children. Direct comparisons between child and adult populations are needed to determine developmental risk markers of fear and anxiety. We compared 267 children and 285 adults in a differential fear conditioning paradigm and generalization test. Skin conductance responses (SCR) and ratings of valence and arousal were obtained to indicate fear learning. Both groups displayed robust and similar differential conditioning on subjective and physiological levels. However, children showed heightened fear generalization compared to adults as indexed by higher arousal ratings and SCR to the generalization stimuli. Results indicate overgeneralization of conditioned fear as a developmental correlate of fear learning. The developmental change from a shallow to a steeper generalization gradient is likely related to the maturation of brain structures that modulate efficient discrimination between danger and (ambiguous) safety cues.
Endogenous Testosterone and Exogenous Oxytocin Modulate Attentional Processing of Infant Faces
(2016)
Evidence indicates that hormones modulate the intensity of maternal care. Oxytocin is known for its positive influence on maternal behavior and its important role for childbirth. In contrast, testosterone promotes egocentric choices and reduces empathy. Further, testosterone decreases during parenthood which could be an adaptation to increased parental investment. The present study investigated the interaction between testosterone and oxytocin in attentional control and their influence on attention to baby schema in women. Higher endogenous testosterone was expected to decrease selective attention to child portraits in a face-in-the-crowd-paradigm, while oxytocin was expected to counteract this effect. As predicted, women with higher salivary testosterone were slower in orienting attention to infant targets in the context of adult distractors. Interestingly, reaction times to infant and adult stimuli decreased after oxytocin administration, but only in women with high endogenous testosterone. These results suggest that oxytocin may counteract the adverse effects of testosterone on a central aspect of social behavior and maternal caretaking.
Traditionally, adversity was defined as the accumulation of environmental events (allostatic load). Recently, however, a mismatch between the early and the later (adult) environment (mismatch) has been hypothesized to be critical for disease development, a hypothesis that has not yet been tested explicitly in humans. We explored the impact of timing of life adversity (childhood and past year) on anxiety and depression levels (N = 833) and brain morphology (N = 129). Both remote (childhood) and proximal (recent) adversities were differentially mirrored in morphometric changes in areas critically involved in emotional processing (i.e. amygdala/hippocampus and dorsal anterior cingulate cortex, respectively). The effect of adversity on affect acted in an additive way with no evidence for interactions (mismatch). Structural equation modeling demonstrated a direct effect of adversity on morphometric estimates and anxiety/depression without evidence of brain morphology functioning as a mediator. Our results highlight that adversity manifests as pronounced changes in brain morphometry and affective temperament even though these seem to represent distinct mechanistic pathways. A major goal of future studies should be to define critical time periods for the impact of adversity and strategies for intervening to prevent or reverse the effects of adverse childhood life experiences.
Large-Scale Assessment of a Fully Automatic Co-Adaptive Motor Imagery-Based Brain Computer Interface
(2016)
In recent years, Brain Computer Interface (BCI) technology has benefited from the development of sophisticated machine learning methods that let the user operate the BCI after only a few trials of calibration. One remarkable example is the recent development of co-adaptive techniques, which have proved to extend the use of BCIs to people unable to achieve successful control with the standard BCI procedure. Especially for BCIs based on the modulation of the Sensorimotor Rhythm (SMR) these improvements are essential, since a non-negligible percentage of users is unable to operate SMR-BCIs efficiently. In this study we evaluated for the first time a fully automatic co-adaptive BCI system on a large scale. A pool of 168 participants naive to BCIs operated the co-adaptive SMR-BCI in a single session. Different psychological interventions were performed prior to the BCI session in order to investigate how motor coordination training and relaxation could influence BCI performance. A neurophysiological indicator based on the Power Spectral Density (PSD) was extracted from a few minutes of resting-state brain activity and tested as a predictor of BCI performance. Results show that the majority of the participants reached high accuracies in operating the BCI before the end of the session. BCI performance could be significantly predicted by the neurophysiological indicator, consolidating the validity of the previously developed model. Nevertheless, about 22% of users still performed significantly below the threshold of efficient BCI control at the end of the session. Since inter-subject variability remains the major problem of BCI technology, we point out crucial issues for those who did not achieve sufficient control. Finally, we propose developments to move a step forward toward the applicability of these promising co-adaptive methods.
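A PSD-based indicator of this kind is typically a band-power estimate from resting-state EEG. The sketch below is a minimal Welch-style band-power computation in plain NumPy; the 8–13 Hz band and the segment length are assumptions for illustration, not the study's actual predictor definition.

```python
import numpy as np

def band_power(signal, sfreq, band=(8.0, 13.0), seg_len=512):
    """Average spectral power in a frequency band (Welch-style averaging).

    The signal is split into non-overlapping Hann-windowed segments,
    periodograms are averaged, and power is averaged over the band.
    """
    n_seg = signal.size // seg_len
    window = np.hanning(seg_len)
    psd = np.zeros(seg_len // 2 + 1)
    for i in range(n_seg):
        seg = signal[i * seg_len:(i + 1) * seg_len] * window
        psd += np.abs(np.fft.rfft(seg)) ** 2
    psd /= max(n_seg, 1)
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / sfreq)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()
```

In a real pipeline one would rather use `scipy.signal.welch` with overlapping segments, but the segment-averaging idea is the same.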
The purpose of the present study is the unification of two major approaches to moral judgment. Kohlberg's well-known stage theory assumes a sequence of discrete stages that underlie all moral judgment. Stage theory recognizes the problem of integrating considerations but offers no way to solve such integration, even with information from any one stage. And, of course, the stage concept denies any significant integration across different stages. Thus, research on moral judgment needs to study the integration problem, which can be tested within Anderson's theory of information integration. The main aim here was to extend this unificationist approach to the issue of sexual morality. A novel task presents information from two very different stages. The results showed that, in contrast to the discreteness assumption, the stage informers were positively correlated in punishment judgments of both genders about consensual sex of juveniles. Furthermore, the subjects integrated considerations from those very different stages, also in contrast to the hypothesis that only a single stage is operative at any time.
Dyspnea is common in many cardiorespiratory diseases. Already the anticipation of this aversive symptom elicits fear in many patients resulting in unfavorable health behaviors such as activity avoidance and sedentary lifestyle. This study investigated brain mechanisms underlying these anticipatory processes. We induced dyspnea using resistive-load breathing in healthy subjects during functional magnetic resonance imaging. Blocks of severe and mild dyspnea alternated, each preceded by anticipation periods. Severe dyspnea activated a network of sensorimotor, cerebellar, and limbic areas. The left insular, parietal opercular, and cerebellar cortices showed increased activation already during dyspnea anticipation. Left insular and parietal opercular cortex showed increased connectivity with right insular and anterior cingulate cortex when severe dyspnea was anticipated, while the cerebellum showed increased connectivity with the amygdala. Notably, insular activation during dyspnea perception was positively correlated with midbrain activation during anticipation. Moreover, anticipatory fear was positively correlated with anticipatory activation in right insular and anterior cingulate cortex. The results demonstrate that dyspnea anticipation activates brain areas involved in dyspnea perception. The involvement of emotion-related areas such as insula, anterior cingulate cortex, and amygdala during dyspnea anticipation most likely reflects anticipatory fear and might underlie the development of unfavorable health behaviors in patients suffering from dyspnea.
In the current study, electroencephalography (EEG) was recorded simultaneously with facial electromyography (fEMG) to determine whether emotional faces and emotional scenes are processed differently at the neural level. In addition, it was investigated whether these differences can be observed at the behavioural level via spontaneous facial muscle activity. Emotional content of the stimuli did not affect early P1 activity. Emotional faces elicited enhanced amplitudes of the face-sensitive N170 component, while its counterpart, the scene-related N100, was not sensitive to emotional content of scenes. At 220-280 ms, the early posterior negativity (EPN) was enhanced only slightly for fearful as compared to neutral or happy faces. However, its amplitudes were significantly enhanced during processing of scenes with positive content, particularly over the right hemisphere. Scenes of positive content also elicited enhanced spontaneous zygomatic activity from 500-750 ms onwards, while happy faces elicited no such changes. Contrastingly, both fearful faces and negative scenes elicited enhanced spontaneous corrugator activity at 500-750 ms after stimulus onset. However, relative to baseline, EMG changes occurred earlier for faces (250 ms) than for scenes (500 ms), whereas for scenes activity changes were more pronounced over the whole viewing period. Taking all effects into account, the data suggest that emotional facial expressions evoke faster attentional orienting, but weaker affective neural activity and emotional behavioural responses compared to emotional scenes.
The novel BackHome system offers individuals with disabilities a range of useful services available via brain-computer interfaces (BCIs), to help restore their independence. This is the first time such technology is ready to be deployed in the real world, that is, at the target end users’ home. This has been achieved by the development of practical electrodes and easy-to-use software, and by delivering telemonitoring and home support capabilities, which have been conceived, implemented, and tested within a user-centred design approach. The final BackHome system is the result of a 3-year long process involving extensive user engagement to maximize effectiveness, reliability, robustness, and ease of use of a home-based BCI system. The system comprises ergonomic and hassle-free BCI equipment; one-click software services for Smart Home control, cognitive stimulation, and web browsing; and remote telemonitoring and home support tools to enable independent home use for nonexpert caregivers and users. BackHome aims to successfully bring BCIs to the home of people with limited mobility to restore their independence and ultimately improve their quality of life.
In interpersonal encounters, individuals often exhibit changes in their own facial expressions in response to the emotional expressions of another person. Such changes are often called facial mimicry. While facial mimicry first appeared to be an automatic tendency of the perceiver to show the same emotional expression as the sender, evidence is now accumulating that situation, person, and relationship jointly determine whether and for which emotions such congruent facial behavior is shown. We review the evidence regarding the moderating influence of such factors on facial mimicry, with a focus on understanding the meaning of facial responses to emotional expressions in a particular constellation. From this, we derive recommendations for a research agenda with a stronger focus on the most common forms of encounters, actual interactions with known others, and on assessing potential mediators of facial mimicry. We conclude that facial mimicry is modulated by many factors: attention deployment and sensitivity, detection of valence, emotional feelings, and social motivations. We posit that these are the more proximal causes of changes in facial mimicry due to changes in its social setting.
Background
In this study, we evaluated electrooculography (EOG), an eye tracker and an auditory brain-computer interface (BCI) as access methods to augmentative and alternative communication (AAC). The study participant had been in the locked-in state (LIS) for 6 years due to amyotrophic lateral sclerosis. He was able to communicate with slow residual eye movements, but had no means of partner-independent communication. We discuss the usability of all tested access methods and the prospects of using BCIs as an assistive technology.
Methods
Within four days, we tested whether EOG, eye tracking and a BCI would allow the participant in LIS to make simple selections. We optimized the parameters in an iterative procedure for all systems.
Results
The participant was able to gain control over all three systems. Nonetheless, due to the level of proficiency previously achieved with his low-tech AAC method, he did not consider using any of the tested systems as an additional communication channel. However, he would consider using the BCI once control over his eye muscles was no longer possible. He rated the ease of use of the BCI as the highest among the tested systems, because no precise eye movements were required, but also as the most tiring, due to the high level of attention needed to operate the BCI.
Conclusions
In this case study, partner-based communication was possible due to the good care provided and the proficiency achieved by the interlocutors. To ease the transition from a low-tech AAC method to a BCI once control over all muscles is lost, the BCI must be simple to operate. For persons who rely on AAC and are affected by a progressive neuromuscular disease, we argue that a complementary approach, combining BCIs and standard assistive technology, can prove valuable to achieve partner-independent communication and ease the transition to a purely BCI-based approach. Finally, we provide further evidence for the importance of a user-centered approach in the design of new assistive devices.
Brain-computer interfaces (BCIs) can serve as muscle-independent communication aids. Persons who are unable to control their eye muscles (e.g., in the completely locked-in state) or have severe visual impairments for other reasons need BCI systems that do not rely on the visual modality. For this reason, BCIs that employ auditory stimuli have been suggested. In this study, a multiclass BCI spelling system was implemented that uses animal voices with directional cues to code rows and columns of a letter matrix. To reveal possible training effects with the system, 11 healthy participants performed spelling tasks on 2 consecutive days. In a second step, the system was tested by a participant with amyotrophic lateral sclerosis (ALS) in two sessions. In the first session, healthy participants spelled with an average accuracy of 76% (3.29 bits/min), which increased to 90% (4.23 bits/min) on the second day. Spelling accuracy for the participant with ALS was 20% in the first and 47% in the second session. The results indicate a strong training effect for both the healthy participants and the participant with ALS. While healthy participants reached high accuracies in both sessions, accuracies for the participant with ALS were not sufficient for satisfactory communication in either session. More training sessions might be needed to improve spelling accuracies. The study demonstrated the feasibility of the auditory BCI with healthy users and stresses the importance of training with auditory multiclass BCIs, especially for potential BCI end-users with disease.
The objective of this study was to test the usability of a new auditory brain-computer interface (BCI) application for communication. We introduce a word-based, intuitive auditory spelling paradigm, the WIN-speller. In the WIN-speller, letters are grouped by words, such as the word KLANG representing the letters A, G, K, L, and N. Thereby, the decoding step between perceiving a code and translating it to the stimuli it represents becomes superfluous. We tested 11 healthy volunteers and four end-users with motor impairment in the copy spelling mode. Spelling was successful with an average accuracy of 84% in the healthy sample. Three of the end-users communicated with average accuracies of 80% or higher, while one user was not able to communicate reliably. Even though further evaluation is required, the WIN-speller represents a potential alternative for BCI-based communication in end-users.
Previous research showed that priming effects in the affective misattribution procedure (AMP) are unaffected by direct warnings to avoid an influence of the primes. The present research examined whether a priming influence is diminished by task procedures that encourage accurate judgments of the targets. Participants were motivated to categorize the affective meaning of nonsense targets accurately by being made to believe that a true word was presented in each trial and by providing feedback on (allegedly) incorrect responses. This condition produced robust priming effects. Priming was however reduced and less reliable relative to more typical AMP conditions in which participants guessed the meaning of openly presented nonsense targets. Affective judgments of nonsense targets were not affected by advance knowledge of the response mapping during the priming phase, which argues against a response-priming explanation of AMP effects. These findings show that affective primes influence evaluative judgments even in conditions in which the motivation to provide accurate responses is high and a priming of motor responses is not possible. Priming effects were however weaker with high accuracy motivation, suggesting that a focus on accurate judgments is an effective strategy to control for an unwanted priming influence in the AMP.
3D visualization of movements can amplify motor cortex activation during subsequent motor imagery
(2015)
A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated whether a realistic visualization in 3D of upper and lower limb movements can amplify motor-related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event-related desynchronization (ERD) of the upper alpha band (10–12 Hz) over the sensorimotor cortices, thereby potentially improving MI-based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total, in 12 out of 20 tasks, end-users of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant’s MI, might be helpful for accomplishing successful MI, and the use of such feedback may assist in making BCI a more natural interface for MI-based BCI rehabilitation.
It has been demonstrated that verbal context information alters the neural processing of ambiguous faces such as faces with no apparent facial expression. In social anxiety, neutral faces may be implicitly threatening for socially anxious individuals due to their ambiguous nature, but even more so if these neutral faces are put in self-referential negative contexts. Therefore, we measured event-related brain potentials (ERPs) in response to neutral faces which were preceded by affective verbal information (negative, neutral, positive). Participants with low social anxiety (LSA; n = 23) and high social anxiety (HSA; n = 21) were asked to watch and rate valence and arousal of the respective faces while continuous EEG was recorded. ERP analysis revealed that HSA showed elevated P100 amplitudes in response to faces, but reduced structural encoding of faces as indexed by reduced N170 amplitudes. In general, affective context led to an enhanced early posterior negativity (EPN) for negative compared to neutral facial expressions. Moreover, HSA compared to LSA showed enhanced late positive potentials (LPP) to negatively contextualized faces, whereas in LSA this effect was found for faces in positive contexts. Also, HSA rated faces in negative contexts as more negative compared to LSA. These results point to enhanced vigilance for neutral faces regardless of context in HSA, while structural encoding seems to be diminished (avoidance). Interestingly, later components of sustained processing (LPP) indicate that LSA show enhanced visuocortical processing for faces in positive contexts (happy bias), whereas this seems to be the case for negatively contextualized faces in HSA (threat bias). Finally, our results add further evidence that top-down information, in interaction with individual anxiety levels, can influence early-stage aspects of visual perception.
We used a new methodological approach to investigate whether top-down influences like expertise determine the extent of unconscious processing. This approach does not rely on preexisting differences between experts and novices, but instructs essentially the same task in a way that either addresses a domain of expertise or not. Participants either were instructed to perform a lexical decision task (expert task) or to respond to a combination of single features of word and non-word stimuli (novel task). The stimuli, and importantly also the mapping of responses to those stimuli, were exactly the same in both groups. We analyzed congruency effects of masked primes depending on the instructed task. Participants performing the expert task responded faster and less error-prone when the prime was response congruent rather than incongruent. This effect was significantly reduced in the novel task, and even reversed when excluding identical prime-target pairs. This indicates that the primes in the novel task had an effect on a perceptual level, but could not affect response activation. Overall, these results demonstrate an expertise-based top-down modulation of unconscious processing that cannot be explained by confounds that are otherwise inherent in comparisons between novices and experts.
Routes to Embodiment
(2015)
Research on embodiment is rich in impressive demonstrations but somewhat poor in comprehensive explanations. Although some moderators and driving mechanisms have been identified, a comprehensive conceptual account of how bodily states or dynamics influence behavior is still missing. Here, we attempt to integrate current knowledge by describing three basic psychological mechanisms: direct state induction, which influences how humans feel or process information, unmediated by any other cognitive mechanism; modal priming, which changes the accessibility of concepts associated with a bodily state; sensorimotor simulation, which affects the ease with which congruent and incongruent actions are performed. We argue that the joint impact of these mechanisms can account for most existing embodiment effects. Additionally, we summarize empirical tests for distinguishing these mechanisms and suggest a guideline for future research about the mechanisms underlying embodiment effects.
Virtual reality (VR) has made its way into mainstream psychological research in the last two decades. This technology, with its unique ability to simulate complex, real situations and contexts, offers researchers unprecedented opportunities to investigate human behavior in well controlled designs in the laboratory. One important application of VR is the investigation of pathological processes in mental disorders, especially anxiety disorders. Research on the processes underlying threat perception, fear, and exposure therapy has shed light on more general aspects of the relation between perception and emotion. Being by its nature virtual, i.e., simulation of reality, VR strongly relies on the adequate selection of specific perceptual cues to activate emotions. Emotional experiences in turn are related to presence, another important concept in VR, which describes the user's sense of being in a VR environment. This paper summarizes current research into perception of fear cues, emotion, and presence, aiming at the identification of the most relevant aspects of emotional experience in VR and their mutual relations. A special focus lies on a series of recent experiments designed to test the relative contribution of perception and conceptual information on fear in VR. This strand of research capitalizes on the dissociation between perception (bottom up input) and conceptual information (top-down input) that is possible in VR. Further, we review the factors that have so far been recognized to influence presence, with emotions (e.g., fear) being the most relevant in the context of clinical psychology. Recent research has highlighted the mutual influence of presence and fear in VR, but has also traced the limits of our current understanding of this relationship. In this paper, the crucial role of perception on eliciting emotional reactions is highlighted, and the role of arousal as a basic dimension of emotional experience is discussed. 
An interoceptive attribution model of presence is suggested as a first step toward an integrative framework for emotion research in VR. Gaps in the current literature and future directions are outlined.
Visual ERP (P300) based brain-computer interfaces (BCIs) allow for fast and reliable spelling and are intended as a muscle-independent communication channel for people with severe paralysis. However, they require the presentation of visual stimuli in the field of view of the user. A head-mounted display could allow convenient presentation of visual stimuli in situations, where mounting a conventional monitor might be difficult or not feasible (e.g., at a patient's bedside). To explore if similar accuracies can be achieved with a virtual reality (VR) headset compared to a conventional flat screen monitor, we conducted an experiment with 18 healthy participants. We also evaluated it with a person in the locked-in state (LIS) to verify that usage of the headset is possible for a severely paralyzed person. Healthy participants performed online spelling with three different display methods. In one condition a 5 x 5 letter matrix was presented on a conventional 22 inch TFT monitor. Two configurations of the VR headset were tested. In the first (glasses A), the same 5 x 5 matrix filled the field of view of the user. In the second (glasses B), single letters of the matrix filled the field of view of the user. The participant in the LIS tested the VR headset on three different occasions (glasses A condition only). For healthy participants, average online spelling accuracies were 94% (15.5 bits/min) using three flash sequences for spelling with the monitor and glasses A and 96% (16.2 bits/min) with glasses B. In one session, the participant in the LIS reached an online spelling accuracy of 100% (10 bits/min) using the glasses A condition. We also demonstrated that spelling with one flash sequence is possible with the VR headset for healthy users (mean: 32.1 bits/min, maximum reached by one user: 71.89 bits/min at 100% accuracy). 
We conclude that the VR headset allows for rapid P300 BCI communication in healthy users and may be a suitable display option for severely paralyzed persons.
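The bits/min figures quoted in the abstract above are conventionally derived from Wolpaw's information transfer rate, which combines the number of selectable targets with the selection accuracy. A minimal sketch of that standard formula follows; the 25-target, 94%-accuracy example is purely illustrative and does not recompute the study's exact timing parameters:

```python
from math import log2

def wolpaw_bits_per_selection(n: int, p: float) -> float:
    """Wolpaw information transfer rate in bits per selection,
    assuming n equiprobable targets and selection accuracy p."""
    if p >= 1.0:
        return log2(n)          # perfect accuracy: full log2(n) bits
    if p <= 0.0:
        return 0.0
    return log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))

# Illustration: a 5 x 5 matrix (25 targets) at 94% accuracy yields
# about 4.04 bits per selection; at roughly 3.8 selections/min this
# lands in the range of the ~15.5 bits/min reported above.
bits = wolpaw_bits_per_selection(25, 0.94)
print(round(bits, 2))  # 4.04
```

Multiplying bits per selection by the number of selections per minute gives the bits/min rates reported for the different display conditions.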
Emotion regulation dysfunctions are assumed to contribute to the development of tobacco addiction and to relapses among smokers attempting to quit. To further examine this hypothesis, the present study compared heavy smokers with non-smokers (NS) in a reappraisal task. Specifically, we investigated whether non-deprived smokers (NDS) and deprived smokers (DS) differ from non-smokers in cognitive emotion regulation and whether there is an association between the outcome of emotion regulation and cigarette craving. Sixty-five participants (23 non-smokers, 22 NDS, and 20 DS) were instructed to down-regulate emotions by reappraising negative or positive pictorial scenarios. Self-ratings of valence, arousal, and cigarette craving as well as facial electromyography and electroencephalograph activities were measured. Ratings, facial electromyography, and electroencephalograph data indicated that both NDS and DS performed comparably to non-smokers in regulating emotional responses via reappraisal, irrespective of the valence of pictorial stimuli. Interestingly, changes in cigarette craving were positively associated with regulation of emotional arousal irrespective of emotional valence. These results suggest that heavy smokers are capable of regulating emotion via deliberate reappraisal and that smokers' cigarette craving is associated with emotional arousal rather than emotional valence. This study provides preliminary support for the therapeutic use of reappraisal to replace maladaptive emotion-regulation strategies in nicotine addicts.
Converging evidence from controlled experiments suggests that the mere processing of a number and its attributes such as value or parity might affect free choice decisions between different actions. For example, the spatial-numerical association of response codes (SNARC) effect indicates that the magnitude of a digit is associated with a spatial representation and might therefore affect spatial response choices (i.e., decisions between a "left" and a "right" option). At the same time, other (linguistic) features of a number such as parity are embedded into space and might likewise prime left or right responses through feature words [odd or even, respectively; markedness association of response codes (MARC) effect]. In this experiment we aimed at documenting such influences in a natural setting. We therefore assessed number-space and parity-space association effects by exposing participants to a fair distribution task in a card playing scenario. Participants drew cards, read out loud their number values, and announced their response choice, i.e., dealing the card to a left vs. right player, indicated by Playmobil characters. Not only did participants prefer to deal more cards to the right player, the cards' digits also affected response choices and led to a slightly but systematically unfair distribution, supported by a regular SNARC effect and counteracted by a reversed MARC effect. The experiment demonstrates the impact of SNARC- and MARC-like biases in free choice behavior through verbal and visual numerical information processing, even in a setting with high external validity.
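A SNARC effect of the kind described above is commonly quantified with a per-participant regression of the right-minus-left response difference (dRT) on digit magnitude, where a negative slope indicates that large numbers favor the right side. The sketch below illustrates that standard analysis step with hypothetical dRT values, not data from the study:

```python
def snarc_slope(drt_by_digit: dict) -> float:
    """Least-squares slope of dRT (= RT_right - RT_left, in ms) regressed
    on digit magnitude. A negative slope is the classic SNARC signature:
    right-side responses become relatively faster as magnitude grows."""
    xs = list(drt_by_digit)
    ys = [drt_by_digit[x] for x in xs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Hypothetical per-digit dRT values (ms) for one participant:
# small digits -> right side slower (positive dRT), large -> faster.
example = {1: 20.0, 2: 12.0, 3: 6.0, 6: -5.0, 8: -14.0, 9: -19.0}
print(round(snarc_slope(example), 2))  # -4.58
```

Averaging such slopes across participants and testing them against zero is the usual group-level check for a reliable SNARC bias.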
The emotion of surprise entails a complex of immediate responses, such as cognitive interruption, attention allocation to, and more systematic processing of the surprising stimulus. All these processes serve the ultimate function of increasing processing depth and thus cognitively mastering the surprising stimulus. The present account introduces phasic negative affect as the underlying mechanism responsible for this switch in operating mode. Surprising stimuli are schema-discrepant and thus entail cognitive disfluency, which elicits immediate negative affect. This affect in turn acts like a phasic cognitive tuning signal, switching the current processing mode from more automatic and heuristic to more systematic and reflective processing. Directly testing the initial elicitation of negative affect by surprising events, the present experiment presented highly and less surprising neutral trivia statements to N = 28 participants while assessing their spontaneous facial expressions via facial electromyography. Highly compared to less surprising trivia elicited higher corrugator activity, indicative of negative affect and mental effort, while leaving zygomaticus (positive affect) and frontalis (cultural surprise expression) activity unaffected. Future research shall investigate the mediating role of negative affect in eliciting surprise-related outcomes.
In classical conditioning, an initially neutral stimulus (conditioned stimulus, CS) becomes associated with a biologically salient event (unconditioned stimulus, US), which might be pain (aversive conditioning) or food (appetitive conditioning). After a few associations, the CS is able to initiate either defensive or consummatory responses, respectively. Contrary to aversive conditioning, appetitive conditioning is rarely investigated in humans, although its importance for normal and pathological behaviors (e.g., obesity, addiction) is undeniable. The present study intends to translate animal findings on appetitive conditioning to humans using food as a US. Thirty-three participants were investigated between 8 and 10 am without breakfast in order to ensure that they felt hungry. During two acquisition phases, one geometrical shape (avCS+) predicted an aversive US (painful electric shock), another shape (appCS+) predicted an appetitive US (chocolate or salty pretzel according to the participants' preference), and a third shape (CS) predicted neither US. In an extinction phase, these three shapes plus a novel shape (NEW) were presented again without US delivery. Valence and arousal ratings as well as startle and skin conductance (SCR) responses were collected as learning indices. We found successful aversive and appetitive conditioning. On the one hand, the avCS+ was rated as more negative and more arousing than the CS and induced startle potentiation and enhanced SCR. On the other hand, the appCS+ was rated as more positive than the CS and induced startle attenuation and larger SCR. In summary, we successfully confirmed animal findings in (hungry) humans by demonstrating appetitive learning and normal aversive learning.
Several emotion theorists suggest that valenced stimuli automatically trigger motivational orientations and thereby facilitate corresponding behavior. Positive stimuli were thought to activate approach motivational circuits which in turn primed approach-related behavioral tendencies whereas negative stimuli were supposed to activate avoidance motivational circuits so that avoidance-related behavioral tendencies were primed (motivational orientation account). However, recent research suggests that typically observed affective stimulus response compatibility phenomena might be entirely explained in terms of theories accounting for mechanisms of general action control instead of assuming motivational orientations to mediate the effects (evaluative coding account). In what follows, we explore to what extent this notion is applicable. We present literature suggesting that evaluative coding mechanisms indeed influence a wide variety of affective stimulus response compatibility phenomena. However, the evaluative coding account does not seem to be sufficient to explain affective S-R compatibility effects. Instead, several studies provide clear evidence in favor of the motivational orientation account that seems to operate independently of evaluative coding mechanisms. Implications for theoretical developments and future research designs are discussed.
Brain-computer interfaces (BCIs) translate oscillatory electroencephalogram (EEG) patterns into action. Different mental activities modulate spontaneous EEG rhythms in various ways. Non-stationarity and inherent variability of EEG signals, however, make reliable recognition of modulated EEG patterns challenging. Able-bodied individuals who use a BCI for the first time achieve - on average - binary classification performance of about 75%. Performance in users with central nervous system (CNS) tissue damage is typically lower. User training generally enhances reliability of EEG pattern generation and thus also robustness of pattern recognition. In this study, we investigated the impact of mental tasks on binary classification performance in BCI users with CNS tissue damage such as persons with stroke or spinal cord injury (SCI). Motor imagery (MI), that is the kinesthetic imagination of movement (e.g. squeezing a rubber ball with the right hand), is the "gold standard" and mainly used to modulate EEG patterns. Based on our recent results in able-bodied users, we hypothesized that pairwise combination of "brain-teaser" (e.g. mental subtraction and mental word association) and "dynamic imagery" (e.g. hand and feet MI) tasks significantly increases classification performance of induced EEG patterns in the selected end-user group. Within-day (How stable is the classification within a day?) and between-day (How well does a model trained on day one perform on unseen data of day two?) analysis of variability of mental task pair classification in nine individuals confirmed the hypothesis. We found that the use of the classical MI task pair hand vs. feet leads to significantly lower classification accuracy - on average up to 15% lower - in most users with stroke or SCI. User-specific selection of task pairs was again essential to enhance performance.
We expect that this evidence will contribute significantly to making imagery-based BCI technology accessible to a larger population of users, including individuals with special needs due to CNS damage.
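The within-day/between-day distinction in the abstract above boils down to training a classifier on one session and testing it on a later one, where EEG non-stationarity can shift the feature distributions. A toy sketch of that evaluation logic with synthetic two-dimensional "band-power" features and a nearest-class-mean classifier (purely illustrative; the study's actual feature extraction and classifier are not specified here):

```python
import random

def nearest_mean_fit(X, y):
    """Per-class feature means (a toy stand-in for a real EEG classifier)."""
    means = {}
    for label in set(y):
        rows = [x for x, l in zip(X, y) if l == label]
        means[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return means

def nearest_mean_predict(means, x):
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(means, key=lambda label: dist(means[label], x))

def accuracy(means, X, y):
    hits = sum(nearest_mean_predict(means, x) == label for x, label in zip(X, y))
    return hits / len(y)

def make_session(n_per_class, shift):
    """Synthetic 2-feature trials for two mental tasks; `shift` mimics
    day-to-day drift (non-stationarity) of the EEG feature distribution."""
    X, y = [], []
    for label in (0, 1):
        for _ in range(n_per_class):
            X.append([random.gauss(label + shift, 0.3),
                      random.gauss(-label, 0.3)])
            y.append(label)
    return X, y

random.seed(1)
day1_X, day1_y = make_session(50, shift=0.0)
day2_X, day2_y = make_session(50, shift=0.4)   # drifted features on day two
model = nearest_mean_fit(day1_X, day1_y)
within_acc = accuracy(model, day1_X, day1_y)
between_acc = accuracy(model, day2_X, day2_y)
print("within-day:", within_acc, "between-day:", between_acc)
```

The between-day score, computed on drifted data the model never saw, is the harder test and is what motivates user-specific task-pair selection and repeated training sessions.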
The present approach exploits the biomechanical connection between articulation and ingestion-related mouth movements to introduce a novel psychological principle of brand name design. We constructed brand names for diverse products with consonantal stricture spots either from the front to the rear of the mouth, thus inwards (e.g., BODIKA), or from the rear to the front, thus outwards (e.g., KODIBA). These muscle dynamics resemble the oral kinematics during either ingestion (inwards), which feels positive, or expectoration (outwards), which feels negative. In 7 experiments (total N = 1261), participants liked products with inward names more than products with outward names (Experiment 1), reported higher purchase intentions (Experiment 2), and higher willingness-to-pay (Experiments 3a-3c, 4, 5), with the price gain amounting to 4-13% of the average estimated product value. These effects occurred across English and German language, under silent reading, for both edible and non-edible products, and even in the presence of a much stronger price determinant, namely fair-trade production (Experiment 5).
The main prediction of the Uncanny Valley Hypothesis (UVH) is that observation of humanlike characters that are difficult to distinguish from their human counterpart will evoke a state of negative affect. Well-established electrophysiological [late positive potential (LPP) and facial electromyography (EMG)] and self-report [Self-Assessment Manikin (SAM)] indices of valence and arousal, i.e., the primary orthogonal dimensions of affective experience, were used to test this prediction by examining affective experience in response to categorically ambiguous compared with unambiguous avatar and human faces (N = 30). LPP and EMG provided direct psychophysiological indices of affective state during passive observation, and the SAM provided self-reported indices of affective state during explicit cognitive evaluation of static facial stimuli. The faces were drawn from well-controlled morph continua representing the UVH's dimension of human likeness (DHL). The results provide no support for the notion that category ambiguity along the DHL is specifically associated with enhanced experience of negative affect. On the contrary, the LPP and SAM-based measures of arousal and valence indicated a general increase in negative affective state (i.e., enhanced arousal and negative valence) with greater morph distance from the human end of the DHL. A second sample (N = 30) produced the same finding, using an ad hoc self-rating scale of feelings of familiarity, i.e., an oft-used measure of affective experience along the UVH's familiarity dimension. In conclusion, this multi-method approach using well-validated psychophysiological and self-rating indices of arousal and valence rejects the main prediction of the UVH, both for passive observation and for explicit affective evaluation of static faces.
Are certain foods addictive?
(2014)
The perception of unpleasant stimuli enhances, whereas the perception of pleasant stimuli decreases, pain perception. In contrast, the effects of pain on the processing of emotional stimuli are much less known. Especially given the recent interest in facial expressions of pain as a special category of emotional stimuli, a main topic in this research line is the mutual influence of pain and facial expression processing. Therefore, in this mini-review we selectively summarize research on the effects of emotional stimuli on pain, but turn more extensively to the opposite direction, namely how pain influences the concurrent processing of affective stimuli such as facial expressions. Based on the motivational priming theory, one may hypothesize that the perception of pain enhances the processing of unpleasant stimuli and decreases the processing of pleasant stimuli. This review reveals that the literature is only partly consistent with this assumption: pain reduces the processing of pleasant pictures and happy facial expressions, but does not – or only partly – affect the processing of unpleasant pictures. However, it was demonstrated that pain selectively enhances the processing of facial expressions if these are pain-related (i.e., facial expressions of pain). Extending a mere affective modulation theory, the latter results suggest pain-specific effects which may be explained by the perception-action model of empathy. Together, these results underscore the important mutual influence of pain and emotional face processing.
The idea that specific kinds of foods may have an addiction potential and that some forms of overeating may represent an addicted behavior has been discussed for decades. In recent years, the interest in food addiction is growing and research on this topic has led to more precise definitions and assessment methods. For example, the Yale Food Addiction Scale has been developed for the measurement of addiction-like eating behavior based on the diagnostic criteria for substance dependence of the fourth revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV). In 2013, diagnostic criteria for substance abuse and dependence were merged, thereby increasing the number of symptoms for substance use disorders (SUDs) in the DSM-5. Moreover, gambling disorder is now included alongside SUDs as a behavioral addiction. Although a plethora of review articles exist that discuss the applicability of the DSM-IV substance dependence criteria to eating behavior, the transferability of the newly added criteria to eating is unknown. Thus, the current article discusses if and how these new criteria may be translated to overeating. Furthermore, it is examined if the new SUD criteria will impact future research on food addiction, for example, if "diagnosing" food addiction should also be adapted by considering all of the new symptoms. Given the critical response to the revisions in DSM-5, we also discuss if the recent approach of Research Domain Criteria can be helpful in evaluating the concept of food addiction.
Brain-Computer Interfaces (BCIs) strive to decode brain signals into control commands for severely handicapped people with no means of muscular control. These potential users of noninvasive BCIs display a large range of physical and mental conditions. Prior studies have shown the general applicability of BCI with patients, but faced a trade-off between requiring many training sessions and studying only moderately restricted patients. We present a BCI system designed to establish external control for severely motor-impaired patients within a very short time. Within only six experimental sessions, three out of four patients were able to gain significant control over the BCI, which was based on motor imagery or attempted execution. For the most affected patient, we found evidence that the BCI could outperform the best assistive technology (AT) of the patient in terms of control accuracy, reaction time and information transfer rate. We credit this success to the applied user-centered design approach and to a highly flexible technical setup. State-of-the-art machine learning methods allowed the exploitation and combination of multiple relevant features contained in the EEG, which rapidly enabled the patients to gain substantial BCI control. Thus, we could show the feasibility of a flexible and tailorable BCI application in severely disabled users. This can be considered a significant success for two reasons: Firstly, the results were obtained within a short period of time, matching the tight clinical requirements. Secondly, the participating patients showed, compared to most other studies, very severe communication deficits. They were dependent on everyday use of AT and two patients were in a locked-in state. For the most affected patient a reliable communication was rarely possible with existing AT.
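The information transfer rate mentioned above is conventionally computed with Wolpaw's formula, which combines the number of possible commands, selection accuracy, and time per selection. A minimal sketch follows; the formula is the standard one, but the example values (2 classes, 80% accuracy, 5 s per trial) are purely illustrative and are not taken from this study:

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float, trial_time_s: float) -> float:
    """Information transfer rate in bits/min via Wolpaw's formula.

    n_classes: number of possible commands (N)
    accuracy: probability of a correct selection (P)
    trial_time_s: seconds needed per selection
    """
    if accuracy >= 1.0:
        bits = math.log2(n_classes)          # perfect accuracy: full channel capacity
    elif accuracy <= 0.0:
        bits = 0.0                           # formula undefined at P=0; guard with 0
    else:
        bits = (math.log2(n_classes)
                + accuracy * math.log2(accuracy)
                + (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_classes - 1)))
    return bits * 60.0 / trial_time_s        # bits per trial -> bits per minute

# Illustrative: a 2-class motor-imagery BCI at 80% accuracy, 5 s trials
print(round(wolpaw_itr(2, 0.80, 5.0), 2))
```

Such a rate metric makes BCI control comparable with assistive technology, where each selection also has a known duration and error probability.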
Behavioral inhibition is one of the basic facets of executive functioning and is closely related to self-regulation. Impulsive reactions, that is, low inhibitory control, have been associated with higher body mass index (BMI), binge eating, and other problem behaviors (e.g., substance abuse, pathological gambling, etc.). Nevertheless, studies which investigated the direct influence of food-cues on behavioral inhibition have been fairly inconsistent. In the current studies, we investigated food-cue affected behavioral inhibition in young women. For this purpose, we used a go/no-go task with pictorial food and neutral stimuli in which stimulus-response mapping is reversed after every other block (affective shifting task). In study 1, hungry participants showed faster reaction times to and omitted fewer food than neutral targets. Low dieting success and higher BMI were associated with behavioral disinhibition in food relative to neutral blocks. In study 2, both hungry and satiated individuals were investigated. Satiation did not influence overall task performance, but modulated associations of task performance with dieting success and self-reported impulsivity. When satiated, increased food craving during the task was associated with low dieting success, possibly indicating a preload-disinhibition effect following food intake. Food-cues elicited automatic action and approach tendencies regardless of dieting success, self-reported impulsivity, or current hunger levels. Yet, associations between dieting success, impulsivity, and behavioral food-cue responses were modulated by hunger and satiation. Future research investigating clinical samples and including other salient non-food stimuli as control category is warranted.
OBJECTIVE:
Somatic marker theory predicts that somatic cues serve intuitive decision making; however, cardiovascular symptoms are threat cues for patients with panic disorder (PD). Therefore, enhanced cardiac perception may aid intuitive decision making only in healthy individuals, but impair intuitive decision making in PD patients.
METHODS:
PD patients and age- and sex-matched volunteers without a psychiatric diagnosis (n=17 per group) completed the Iowa Gambling Task (IGT) as a measure of intuitive decision making. Interindividual differences in cardiac perception were assessed with a common mental tracking task.
RESULTS:
In line with our hypothesis, we found a pattern of opposing associations (Fisher's Z=1.78, P=0.04) of high cardiac perception with improved IGT-performance in matched control-participants (r=0.36, n=14) but impaired IGT-performance in PD patients (r=-0.38, n=13).
CONCLUSION:
Interoceptive skills, typically assumed to aid intuitive decision making, can have the opposite effect in PD patients who experience interoceptive cues as threatening, and tend to avoid them. This may explain why PD patients frequently have problems with decision making in everyday life. Screening of cardiac perception may help identify patients who benefit from specifically tailored interventions.
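The comparison of the two correlations reported in the Results (r=0.36, n=14 vs. r=-0.38, n=13) rests on Fisher's r-to-z transformation. A minimal sketch that reproduces the reported statistic (the function name is ours; the formula is the standard test for two independent correlations):

```python
import math

def fisher_z_compare(r1: float, n1: int, r2: float, n2: int) -> float:
    """Z statistic for the difference between two independent correlations.

    Each r is Fisher-transformed (atanh); the difference is divided by the
    standard error sqrt(1/(n1-3) + 1/(n2-3)).
    """
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return (z1 - z2) / se

# Values from the abstract: controls r=0.36 (n=14), PD patients r=-0.38 (n=13)
print(round(fisher_z_compare(0.36, 14, -0.38, 13), 2))  # 1.78, as reported
```

The one-tailed P of about .04 given in the abstract is simply the upper tail of the standard normal distribution at Z=1.78.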
Extinction is an important mechanism to inhibit initially acquired fear responses. There is growing evidence that the ventromedial prefrontal cortex (vmPFC) inhibits the amygdala and therefore plays an important role in the extinction of delay fear conditioning. To our knowledge, there is no evidence on the role of the prefrontal cortex in the extinction of trace conditioning up to now. Thus, we compared brain structures involved in the extinction of human delay and trace fear conditioning in a between-subjects design in an fMRI study. Participants were passively guided through a virtual environment during learning and extinction of conditioned fear. Two different lights served as conditioned stimuli (CS); as unconditioned stimulus (US) a mildly painful electric stimulus was delivered. In the delay conditioning group (DCG) the US was administered with offset of one light (CS+), whereas in the trace conditioning group (TCG) the US was presented 4 s after CS+ offset. Both groups showed insular and striatal activation during early extinction, but differed in their prefrontal activation. The vmPFC was mainly activated in the DCG, whereas the TCG showed activation of the dorsolateral prefrontal cortex (dlPFC) during extinction. These results point to different extinction processes in delay and trace conditioning. VmPFC activation during extinction of delay conditioning might reflect the inhibition of the fear response. In contrast, dlPFC activation during extinction of trace conditioning may reflect modulation of working memory processes, which are involved in bridging the trace interval and holding information in short-term memory.
Cognitive theories on causes of developmental dyslexia can be divided into language-specific and general accounts. While the former assume that words are special in that associated processing problems are rooted in language-related cognition (e.g., phonology) deficits, the latter propose that dyslexia is rather rooted in a general impairment of cognitive (e.g., visual and/or auditory) processing streams. In the present study, we examined to what extent dyslexia (typically characterized by poor orthographic representations) may be associated with a general deficit in visual long-term memory (LTM) for details. We compared object- and detail-related visual LTM performance (and phonological skills) between dyslexic primary school children and IQ-, age-, and gender-matched controls. The results revealed that while the overall amount of LTM errors was comparable between groups, dyslexic children exhibited a greater proportion of detail-related errors. The results suggest that not only phonological, but also general visual resolution deficits in LTM may play an important role in developmental dyslexia.