Examining the testing effect in university teaching: retrievability and question format matter
(2018)
Review of learned material is crucial for the learning process. One approach that promises to increase the effectiveness of reviewing during learning is to answer questions about the learning content rather than restudying the material (testing effect). This effect is well established in lab experiments. However, existing research in educational contexts has often combined testing with additional didactic measures, which hampers the interpretation of testing effects. We aimed to examine the testing effect in its pure form by implementing a minimal intervention design in a university lecture (N = 92). The last 10 min of each lecture session were used for reviewing the lecture content by either answering short-answer questions, multiple-choice questions, or reading summarizing statements about core lecture content. Three unannounced criterial tests measured the retention of learning content at different times (1, 12, and 23 weeks after the last lecture). A positive testing effect emerged for short-answer questions that targeted information that participants could retrieve from memory. This effect was independent of the time of test. The results indicated no testing effect for multiple-choice testing. These results suggest that short-answer testing, but not multiple-choice testing, may benefit learning in higher education contexts.
The psychology of eating
(2013)
Most research on human fear conditioning and its generalization has focused on adults whereas only little is known about these processes in children. Direct comparisons between child and adult populations are needed to determine developmental risk markers of fear and anxiety. We compared 267 children and 285 adults in a differential fear conditioning paradigm and generalization test. Skin conductance responses (SCR) and ratings of valence and arousal were obtained to indicate fear learning. Both groups displayed robust and similar differential conditioning on subjective and physiological levels. However, children showed heightened fear generalization compared to adults as indexed by higher arousal ratings and SCR to the generalization stimuli. Results indicate overgeneralization of conditioned fear as a developmental correlate of fear learning. The developmental change from a shallow to a steeper generalization gradient is likely related to the maturation of brain structures that modulate efficient discrimination between danger and (ambiguous) safety cues.
It is one of the primary goals of medical care to secure good quality of life (QoL) while prolonging survival. This is a major challenge in severe medical conditions with a poor prognosis, such as amyotrophic lateral sclerosis (ALS). Further, the definition of QoL and the question whether survival in this severe condition is compatible with a good QoL are matters of subjective and culture-specific debate. Some people without neurodegenerative conditions believe that physical decline is incompatible with a satisfactory QoL. Current data provide extensive evidence that psychosocial adaptation in ALS is possible, indicated by a satisfactory QoL. Thus, there is no fatalistic link between declining physical health and loss of QoL. There are intrinsic and extrinsic factors that have been shown to successfully facilitate and secure QoL in ALS, which are reviewed in the following article along the four ethical principles of (1) Beneficence, (2) Non-maleficence, (3) Autonomy and (4) Justice, which are regarded as key elements of patient-centered medical care according to Beauchamp and Childress. This is a JPND-funded work to summarize findings of the project NEEDSinALS (www.NEEDSinALS.com), which highlights subjective perspectives and preferences in medical decision making in ALS.
Previous studies of social phobia have reported an increased vigilance to social threat cues but also an avoidance of socially relevant stimuli such as eye gaze. The primary aim of this study was to examine attentional mechanisms relevant for perceiving social cues by means of abnormalities in scanning of facial features in patients with social phobia. In two novel experimental paradigms, patients with social phobia and healthy controls matched on age, gender and education were compared regarding their gazing behavior towards facial cues. The first experiment was an emotion classification paradigm which allowed for differentiating reflexive attentional shifts from sustained attention towards diagnostically relevant facial features. In the second experiment, attentional orienting by gaze direction was assessed in a gaze-cueing paradigm in which non-predictive gaze cues shifted attention towards or away from subsequently presented targets. We found that patients as compared to controls reflexively oriented their attention more frequently towards the eyes of emotional faces in the emotion classification paradigm. This initial hypervigilance for the eye region was observed at very early attentional stages when faces were presented for 150 ms, and persisted when facial stimuli were shown for 3 s. Moreover, a delayed attentional orienting into the direction of eye gaze was observed in individuals with social phobia suggesting a differential time course of eye gaze processing in patients and controls. Our findings suggest that basic mechanisms of early attentional exploration of social cues are biased in social phobia and might contribute to the development and maintenance of the disorder.
Research on facial emotion expression has mostly focused on emotion recognition, assuming that a small number of discrete emotions is elicited and expressed via prototypical facial muscle configurations as captured in still photographs. These are expected to be recognized by observers, presumably via template matching. In contrast, appraisal theories of emotion propose a more dynamic approach, suggesting that specific elements of facial expressions are directly produced by the result of certain appraisals and predicting the facial patterns to be expected for certain appraisal configurations. This approach has recently been extended to emotion perception, claiming that observers first infer individual appraisals and only then make categorical emotion judgments based on the estimated appraisal patterns, using inference rules. Here, we report two related studies to empirically investigate the facial action unit configurations that are used by actors to convey specific emotions in short affect bursts and to examine to what extent observers can infer a person's emotions from the predicted facial expression configurations. The results show that (1) professional actors use many of the predicted facial action unit patterns to enact systematically specified appraisal outcomes in a realistic scenario setting, and (2) naïve observers infer the respective emotions based on highly similar facial movement configurations with a degree of accuracy comparable to earlier research findings. Based on estimates of underlying appraisal criteria for the different emotions we conclude that the patterns of facial action units identified in this research correspond largely to prior predictions and encourage further research on appraisal-driven expression and inference.
Acrophobia is characterized by intense fear in height situations. Virtual reality (VR) can be used to trigger such phobic fear, and VR exposure therapy (VRET) has proven effective for the treatment of phobias, although it remains important to further elucidate factors that modulate and mediate the fear responses triggered in VR. The present study assessed verbal and behavioral fear responses triggered by a height simulation in a 5-sided cave automatic virtual environment (CAVE) with visual and acoustic simulation and further investigated how fear responses are modulated by immersion, i.e., an additional wind simulation, and presence, i.e., the feeling of being present in the VE. Results revealed a high validity of the CAVE and VE in provoking height-related self-reported fear and avoidance behavior in accordance with a trait measure of acrophobic fear. Increasing immersion significantly increased fear responses in high height anxious (HHA) participants, but did not affect presence. Nevertheless, presence was found to be an important predictor of fear responses. We conclude that a CAVE system can be used to elicit valid fear responses, which might be further enhanced by immersion manipulations independent of presence. These results may help to improve VRET efficacy and its transfer to real situations.
Recent research suggests that the P3b may be closely related to the activation of the locus coeruleus-norepinephrine (LC-NE) system. To further study this potential association, we applied a novel technique, non-invasive transcutaneous vagus nerve stimulation (tVNS), which is speculated to increase noradrenaline levels. Using a within-subject cross-over design, 20 healthy participants received continuous tVNS and sham stimulation on two consecutive days (stimulation counterbalanced across participants) while performing a visual oddball task. During stimulation, oval non-targets (standard), normal-head (easy) and rotated-head (difficult) targets, as well as novel stimuli (scenes) were presented. As an indirect marker of noradrenergic activation we also collected salivary alpha-amylase (sAA) before and after stimulation. Results showed larger P3b amplitudes for target, relative to standard stimuli, irrespective of stimulation condition. Exploratory post hoc analyses, however, revealed that, in comparison to standard stimuli, easy (but not difficult) targets produced larger P3b (but not P3a) amplitudes during active tVNS, compared to sham stimulation. For sAA levels, although the main analyses did not show differential effects of stimulation, direct testing revealed that tVNS (but not sham stimulation) increased sAA levels after stimulation. Additionally, larger differences between tVNS and sham stimulation in P3b magnitudes for easy targets were associated with a larger increase in sAA levels after tVNS, but not after sham stimulation. Despite preliminary evidence for a modulatory influence of tVNS on the P3b, which may be partly mediated by activation of the noradrenergic system, additional research in this field is clearly warranted. Future studies need to clarify whether tVNS also facilitates other processes, such as learning and memory, and whether tVNS can be used as a therapeutic tool.
Research with adults in laboratory settings has shown that distributed rereading is a beneficial learning strategy, but its effects depend on the time of test. When learning outcomes are measured immediately after rereading, distributed rereading yields no benefits or even detrimental effects on learning, but the beneficial effects emerge two days later. In a preregistered experiment, the effects of distributed rereading were investigated in a classroom setting with school students. Seventh-graders (N = 191) reread a text either immediately or after 1 week. Learning outcomes were measured after 4 min or 1 week. Participants in the distributed rereading condition reread the text more slowly, predicted their learning success to be lower, and reported a lower on-task focus. At the shorter retention interval, massed rereading outperformed distributed rereading in terms of learning outcomes. In contrast to students in the massed condition, students in the distributed condition showed no forgetting from the short to the long retention interval. As a result, they performed as well as the students in the massed condition at the longer retention interval. Our results indicate that distributed rereading makes learning more demanding and difficult and leads to higher effort during rereading. Its effects on learning depend on the time of test, but no beneficial effects were found, not even at the delayed test.
BACKGROUND:
Thigmotaxis refers to a specific behavior of animals (i.e., to stay close to walls when exploring an open space). Such behavior can be assessed with the open field test (OFT), which is a well-established indicator of animal fear. The detection of similar open field behavior in humans may verify the translational validity of this paradigm. Enhanced thigmotaxis related to anxiety may suggest the relevance of such behavior for anxiety disorders, especially agoraphobia.
METHODS:
A global positioning system was used to analyze the behavior of 16 patients with agoraphobia and 18 healthy individuals with a risk for agoraphobia (i.e., high anxiety sensitivity) during a human OFT and compare it with appropriate control groups (n = 16 and n = 19). We also tracked 17 patients with agoraphobia and 17 control participants during a city walk that involved walking through an open market square.
RESULTS:
Our human OFT triggered thigmotaxis in participants; patients with agoraphobia and participants with high anxiety sensitivity exhibited enhanced thigmotaxis. This behavior was evident in increased movement lengths along the wall of the natural open field and fewer entries into the center of the field despite normal movement speed and length. Furthermore, participants avoided passing through the market square during the city walk, indicating again that thigmotaxis is related to agoraphobia.
CONCLUSIONS:
This study is the first to our knowledge to verify the translational validity of the OFT and to reveal that thigmotaxis, an evolutionarily adaptive behavior shown by most species, is related to agoraphobia, a pathologic fear of open spaces, and anxiety sensitivity, a risk factor for agoraphobia.
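The wall-hugging behavior described above can be quantified directly from positional tracks. As a minimal illustration of such a thigmotaxis index — not the authors' actual analysis pipeline; the square field geometry, margin width, and synthetic tracks are all assumptions — one can compute the fraction of position samples that fall within a fixed margin of the field boundary:

```python
import numpy as np

def thigmotaxis_index(xy, field_size=20.0, margin=2.0):
    """Fraction of (x, y) samples within `margin` of any wall of a
    square field spanning [0, field_size] in both dimensions."""
    xy = np.asarray(xy, dtype=float)
    # distance of each sample to its nearest wall
    d_wall = np.minimum(xy, field_size - xy).min(axis=1)
    return float((d_wall <= margin).mean())

# Synthetic tracks: one hugging a wall, one crossing the center
wall_track = [(1.0, y) for y in np.linspace(1, 19, 50)]
center_track = [(x, x) for x in np.linspace(3, 17, 50)]
```

A wall-hugging track yields an index near 1, a center-crossing track an index near 0; group differences in this index would correspond to the enhanced thigmotaxis reported for patients with agoraphobia.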
Fear is elicited by imminent threat and leads to phasic fear responses with selective attention, whereas anxiety is characterized by a sustained state of heightened vigilance due to uncertain danger. In the present study, we investigated attention mechanisms in fear and anxiety by adapting the NPU-threat test to measure steady-state visual evoked potentials (ssVEPs). We investigated ssVEPs across no aversive events (N), predictable aversive events (P), and unpredictable aversive events (U), signaled by four-object arrays (30 s). In addition, central cues were presented during all conditions but predictably signaled imminent threat only during the P condition. Importantly, cues and context events were flickered at different frequencies (15 Hz vs. 20 Hz) in order to disentangle respective electrocortical responses. The onset of the context elicited larger electrocortical responses for U compared to P context. Conversely, P cues elicited larger electrocortical responses compared to N cues. Interestingly, during the presence of the P cue, visuocortical processing of the concurrent context was also enhanced. The results support the notion of enhanced initial hypervigilance to unpredictable compared to predictable threat contexts, while predictable cues show electrocortical enhancement of the cues themselves but additionally a boost of context processing.
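Frequency tagging of this kind works because stimuli flickered at distinct rates drive cortical responses at exactly those rates, which can then be separated in the EEG amplitude spectrum. A minimal sketch with a synthetic signal (sampling rate, epoch length, and component amplitudes are illustrative assumptions, not values from the study):

```python
import numpy as np

fs = 500                          # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)      # one 10-s epoch

# Synthetic "EEG": cue tagged at 15 Hz, context at 20 Hz, plus noise
rng = np.random.default_rng(2)
signal = (1.0 * np.sin(2 * np.pi * 15 * t)
          + 0.5 * np.sin(2 * np.pi * 20 * t)
          + rng.normal(0, 1, t.size))

# Amplitude spectrum, scaled so a pure sine recovers its amplitude
spectrum = np.abs(np.fft.rfft(signal)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Read out the electrocortical response at each tagging frequency
amp_cue = spectrum[np.argmin(np.abs(freqs - 15))]
amp_context = spectrum[np.argmin(np.abs(freqs - 20))]
```

Because the two tags occupy different spectral bins, the cue- and context-driven responses can be measured simultaneously from the same electrodes, which is what allows the study to disentangle cue and context processing.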
Immersive virtual reality is a powerful method to modify the environment and thereby influence experience. The present study used a virtual hand illusion and context manipulation in immersive virtual reality to examine top-down modulation of pain. Participants received painful heat stimuli on their forearm and placed an embodied virtual hand (co-located with their real one) under a virtual water tap, which dispensed virtual water under different experimental conditions. We aimed to induce a temperature illusion by a red, blue or white light suggesting warm, cold or no virtual water. In addition, the sense of agency was manipulated by allowing participants to have high or low control over the virtual hand’s movements. Most participants experienced a thermal sensation in response to the virtual water and associated the blue and red light with cool/cold or warm/hot temperatures, respectively. Importantly, the blue light condition reduced and the red light condition increased pain intensity and unpleasantness, both compared to the control condition. The control manipulation influenced the sense of agency, but did not influence pain ratings. The large effects revealed in our study suggest that context effects within an embodied setting in an immersive virtual environment should be considered within VR-based pain therapy.
Models of eye-movement control distinguish between different control levels, ranging from automatic (bottom-up, stimulus-driven selection) and automatized (based on well-learned routines) to voluntary (top-down, goal-driven selection, e.g., based on instructions). However, one type of voluntary control has so far been examined only in the manual domain and not in the oculomotor domain, namely free-choice selection among arbitrary targets, that is, targets that are of equal interest from both a bottom-up and top-down processing perspective. Here, we ask which features of targets (identity- or location-related) are used to determine such oculomotor free-choice behavior. In two experiments, participants executed a saccade to one of four peripheral targets in three different choice conditions: unconstrained free choice, constrained free choice based on target identity (color), and constrained free choice based on target location. The analysis of choice frequencies revealed that unconstrained free-choice selection closely resembled constrained choice based on target location. The results suggest that free-choice oculomotor control is mainly guided by spatial (location-based) target characteristics. We explain these results by assuming that participants tend to avoid the less parsimonious recoding of target-identity representations into spatial codes, the latter being a necessary prerequisite to configure oculomotor commands.
Objectives
Virtual reality exposure therapy (VRET) is a promising treatment for patients with fear of driving. The present pilot study is the first to focus on behavioral effects of VRET on patients with fear of driving, as measured by a post-treatment driving test in real traffic.
Methods
The therapy followed a standardized manual including psychotherapeutic and medical examination, two preparative psychotherapy sessions, five virtual reality exposure sessions, a final behavioral avoidance test (BAT) in real traffic, a closing session, and two follow-up phone assessments after six and twelve weeks. VRE was conducted in a driving simulator with a fully equipped mockup. The exposure scenarios were individually tailored to the patients’ anxiety hierarchy. A total of 14 patients were treated. Parameters on the verbal, behavioral and physiological level were assessed.
Results
The treatment was helpful to overcome driving fear and avoidance. In the final BAT, all patients mastered driving tasks they had avoided before, 71% showed an adequate driving behavior as assessed by the driving instructor, and 93% could maintain their treatment success until the second follow-up phone call. Further analyses suggest that treatment reduces avoidance behavior as well as symptoms of posttraumatic stress disorder as measured by standardized questionnaires (Avoidance and Fusion Questionnaire: p < .10, PTSD Symptom Scale–Self Report: p < .05).
Conclusions
VRET in a driving simulator is a promising treatment for driving fear. Further research with randomized controlled trials is needed to verify its efficacy. Moreover, simulators with lower-level configurations should be tested to enable broad availability in psychotherapy.
Chronic alcohol use leads to specific neurobiological alterations in the dopaminergic brain reward system, which probably lead to a reward deficiency syndrome in alcohol dependence. The purpose of our study was to examine the effects of such hypothesized neurobiological alterations on the behavioral level, more precisely on implicit and explicit reward learning. Alcohol users were classified as dependent drinkers (using the DSM-IV criteria), binge drinkers (using criteria of the USA National Institute on Alcohol Abuse and Alcoholism) or low-risk drinkers (following recommendations of the scientific board of trustees of the German Health Ministry). The final sample (n = 94) consisted of 36 low-risk alcohol users, 37 binge drinkers and 21 abstinent alcohol-dependent patients. Participants were administered a probabilistic implicit reward learning task and an explicit reward- and punishment-based trial-and-error learning task. Alcohol-dependent patients showed lower performance in implicit and explicit reward learning than low-risk drinkers. Binge drinkers learned less than low-risk drinkers in the implicit learning task. The results support the assumption that binge drinking and alcohol dependence are related to a chronic reward deficit. Binge drinking accompanied by implicit reward learning deficits could increase the risk for the development of alcohol dependence.
The affective dimensions of emotional valence and emotional arousal affect processing of verbal and pictorial stimuli. Traditional emotional theories assume a linear relationship between these dimensions, with valence determining the direction of a behavior (approach vs. withdrawal) and arousal its intensity or strength. In contrast, according to the valence-arousal conflict theory, both dimensions are interactively related: positive valence and low arousal (PL) are associated with an implicit tendency to approach a stimulus, whereas negative valence and high arousal (NH) are associated with withdrawal. Hence, positive, high-arousal (PH) and negative, low-arousal (NL) stimuli elicit conflicting action tendencies. By extending previous research that used several tasks and methods, the present study investigated whether and how emotional valence and arousal affect subjective approach vs. withdrawal tendencies toward emotional words during two novel tasks. In Study 1, participants had to decide whether they would approach or withdraw from concepts expressed by written words. In Studies 2 and 3 participants had to respond to each word by pressing one of two keys labeled with an arrow pointing upward or downward. Across experiments, positive and negative words, high or low in arousal, were presented. In Study 1 (explicit task), in line with the valence-arousal conflict theory, PH and NL words were responded to more slowly than PL and NH words. In addition, participants decided to approach positive words more often than negative words. In Studies 2 and 3, participants responded faster to positive than negative words, irrespective of their level of arousal. Furthermore, positive words were significantly more often associated with “up” responses than negative words, thus supporting the existence of implicit associations between stimulus valence and response coding (positive is up and negative is down). 
Hence, in contexts in which participants' spontaneous responses are based on implicit associations between stimulus valence and response, there is no influence of arousal. In line with the valence-arousal conflict theory, arousal seems to affect participants' approach-withdrawal tendencies only when such tendencies are made explicit by the task, and a minimal degree of processing depth is required.
Embodiment (i.e., the involvement of a bodily representation) is thought to be relevant in emotional experiences. Virtual reality (VR) is a capable means of activating phobic fear in patients. The representation of the patient’s body (e.g., the right hand) in VR enhances immersion and increases presence, but its effect on phobic fear is still unknown. We analyzed the influence of the presentation of the participant’s hand in VR on presence and fear responses in 32 women with spider phobia and 32 matched controls. Participants sat in front of a table with an acrylic glass container within reaching distance. During the experiment this setup was concealed by a head-mounted display (HMD). The VR scenario presented via HMD showed the same setup, i.e., a table with an acrylic glass container. Participants were randomly assigned to one of two experimental groups. In one group, fear responses were triggered by fear-relevant visual input in VR (virtual spider in the virtual acrylic glass container), while information about a real but unseen neutral control animal (living snake in the acrylic glass container) was given. The second group received fear-relevant information about the real but unseen situation (living spider in the acrylic glass container), but visual input was kept neutral in VR (virtual snake in the virtual acrylic glass container). Participants were instructed to touch the acrylic glass container with their right hand in 20 consecutive trials. Visibility of the hand was varied randomly in a within-subjects design. For all participants, visibility of the hand increased presence independently of the fear trigger. However, in patients, the influence of the virtual hand on fear depended on the fear trigger. When fear was triggered perceptually, i.e., by a virtual spider, the virtual hand increased fear. When fear was triggered by information about a real spider, the virtual hand had no effect on fear.
Our results shed light on the significance of different fear triggers (visual, conceptual) in interaction with body representations.
Studies with the retro-cue paradigm have shown that validly cueing objects in visual working memory long after encoding can still benefit performance on subsequent change detection tasks. With regard to the effects of invalid cues, the literature is less clear. Some studies reported costs, others did not. We here revisit two recent studies that made interesting suggestions concerning invalid retro-cues: One study suggested that costs only occur for larger set sizes, and another study suggested that inclusion of invalid retro-cues diminishes the retro-cue benefit. New data from one experiment and a reanalysis of published data are provided to address these conclusions. The new data clearly show costs (and benefits) that were independent of set size, and the reanalysis suggests no influence of the inclusion of invalid retro-cues on the retro-cue benefit. Thus, previous interpretations should be treated with some caution at present.
The present study examined the developmental trajectories of motor planning and executive functioning in children. To this end, we tested 217 participants with three motor tasks, measuring anticipatory planning abilities (i.e., the bar-transport-task, the sword-rotation-task and the grasp-height-task), and three cognitive tasks, measuring executive functions (i.e., the Tower-of-Hanoi-task, the Mosaic-task, and the D2-attention-endurance-task). Children were aged between 3 and 10 years and were separated into age groups by 1-year bins, resulting in a total of eight groups of children and an additional group of adults. Results suggested (1) a positive developmental trajectory for each of the sub-tests, with better task performance as children get older; (2) that the performance in the separate tasks was not correlated across participants in the different age groups; and (3) that there was no relationship between performance in the motor tasks and in the cognitive tasks used in the present study when controlling for age. These results suggest that both motor planning and executive functions are rather heterogeneous domains of cognitive functioning, with fewer interdependencies than often suggested.
Although recent developmental studies exploring the predictive power of intelligence and working memory (WM) for educational achievement in children have provided evidence for the importance of both variables, findings concerning the relative impact of IQ and WM on achievement have been inconsistent. Whereas IQ has been identified as the major predictor variable in a few studies, results from several other developmental investigations suggest that WM may be the stronger predictor of academic achievement. In the present study, data from the Munich Longitudinal Study on the Genesis of Individual Competencies (LOGIC) were used to explore this issue further. The secondary data analysis included data from about 200 participants whose IQ and WM were first assessed at the age of six and repeatedly measured until the ages of 18 and 23. Measures of reading, spelling, and math were also repeatedly assessed for this age range. Both regression analyses based on observed variables and latent variable structural equation modeling (SEM) were carried out to explore whether the predictive power of IQ and WM would differ as a function of time point of measurement (i.e., early vs. late assessment). As a main result of various regression analyses, IQ and WM turned out to be reliable predictors of academic achievement, both in early and later developmental stages, when previous domain knowledge was not included as an additional predictor. The latter variable accounted for most of the variance in more comprehensive regression models, reducing the impact of both IQ and WM considerably. Findings from SEM analyses basically confirmed this outcome, indicating that IQ impacts educational achievement in the early phase, and illustrating the strong additional impact of previous domain knowledge on achievement at later stages of development.
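The core analytic idea here — comparing the variance in achievement explained by IQ and WM with and without prior domain knowledge in the model — can be illustrated as a hierarchical regression on synthetic data. All variable relationships below are invented for illustration and are not the LOGIC estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # roughly the LOGIC sample size; the data are synthetic

# Synthetic correlated predictors and outcome (illustrative only)
iq = rng.normal(100, 15, n)
wm = 0.5 * (iq - 100) / 15 + rng.normal(0, 1, n)
knowledge = 0.4 * (iq - 100) / 15 + 0.4 * wm + rng.normal(0, 1, n)
achievement = (0.2 * (iq - 100) / 15 + 0.2 * wm
               + 0.6 * knowledge + rng.normal(0, 1, n))

def r_squared(y, predictors):
    """R^2 of an OLS fit with intercept, via least squares."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared(achievement, [iq, wm])               # IQ + WM only
r2_full = r_squared(achievement, [iq, wm, knowledge])    # + prior knowledge
delta = r2_full - r2_base  # incremental variance explained by knowledge
```

With data generated this way, adding prior knowledge raises R² substantially, mirroring the abstract's finding that domain knowledge absorbs much of the variance otherwise attributed to IQ and WM.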
Well-developed phonological awareness skills are a core prerequisite for early literacy development. Although effective phonological awareness training programs exist, children at risk often do not reach similar levels of phonological awareness after the intervention as children with normally developed skills. Based on theoretical considerations and initial promising results, the present study explores effects of an early musical training in combination with a conventional phonological training in children with weak phonological awareness skills. Using a quasi-experimental pretest-posttest control group design and measurements across a period of 2 years, we tested the effects of two interventions: a consecutive combination of a musical and a phonological training, and a phonological training alone. The design made it possible to disentangle the effects of the musical training alone as well as the effects of its combination with the phonological training. The outcome measures of these groups were compared with the control group with multivariate analyses, controlling for a number of background variables. The sample included N = 424 German-speaking children aged 4–5 years at the beginning of the study. We found a positive relationship between musical abilities and phonological awareness. Yet, whereas the well-established phonological training produced the expected effects, adding a musical training did not contribute significantly to phonological awareness development. Training effects were partly dependent on the initial level of phonological awareness. Possible reasons for the lack of training effects in the musical part of the combination condition as well as practical implications for early literacy education are discussed.
Investigating approach-avoidance behavior regarding affective stimuli is important in broadening the understanding of one of the most common psychiatric disorders, social anxiety disorder. Many studies in this field rely on approach-avoidance tasks, which mainly assess hand movements, or on interpersonal distance measures, which return inconsistent results and lack ecological validity. Therefore, the present study introduces a virtual reality task that captures avoidance parameters (movement time and speed, distance to the social stimulus, gaze behavior) during whole-body movements. Such whole-body movements are the most ecologically valid form of approach-avoidance behavior and are at the core of complex, natural social behavior. With this newly developed task, the present study examined whether high socially anxious individuals differ in avoidance behavior when bypassing another person, here virtual humans with neutral and angry facial expressions. Results showed that virtual bystanders displaying angry facial expressions were generally avoided by all participants. In addition, high socially anxious participants generally displayed enhanced avoidance behavior towards virtual people, but no specifically exaggerated avoidance behavior towards virtual people with a negative facial expression. The newly developed virtual reality task proved to be an ecologically valid tool for research on complex approach-avoidance behavior in social situations. The first results revealed that whole-body approach-avoidance behavior relative to passive bystanders is modulated by their emotional facial expressions and that social anxiety generally amplifies such avoidance.
Continuous norming methods have seldom been subjected to scientific review. In this simulation study, we compared parametric with semi-parametric continuous norming methods in psychometric tests by constructing a fictitious population model within which a latent ability increases with age across seven age groups. We drew samples of different sizes (n = 50, 75, 100, 150, 250, 500 and 1,000 per age group) and simulated the results of an easy, medium, and difficult test scale based on Item Response Theory (IRT). We subjected the resulting data to different continuous norming methods and compared the data fit under the different test conditions with a representative cross-validation dataset of n = 10,000 per age group. The most significant differences were found in suboptimal (i.e., too easy or too difficult) test scales and in ability levels that were far from the population mean. We discuss the results with regard to the selection of the appropriate modeling techniques in psychometric test construction, the required sample sizes, and the requirement to report appropriate quantitative and qualitative test quality criteria for continuous norming methods in test manuals.
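The simulation logic described in the abstract above can be sketched in a few lines; the following is an illustrative reconstruction, not the authors' actual code, and all parameter values (group ability means, item counts, difficulty shifts) are assumptions chosen only to mirror the design of easy vs. difficult scales across age groups.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_raw_scores(n_per_group, n_items, difficulty_shift, n_groups=7):
    """Simulate raw test scores under a 1PL (Rasch) IRT model.

    Latent ability increases linearly across the age groups; shifting the
    item difficulties downward yields an "easy" scale, upward a "difficult"
    one. All parameter values here are illustrative assumptions.
    """
    # One latent ability value per simulated person, centered on the group mean
    group_means = np.repeat(np.linspace(-1.0, 1.0, n_groups), n_per_group)
    abilities = rng.normal(loc=group_means, scale=1.0)
    difficulties = rng.normal(loc=difficulty_shift, scale=1.0, size=n_items)
    # Rasch model: P(correct) = logistic(ability - difficulty)
    p_correct = 1.0 / (1.0 + np.exp(-(abilities[:, None] - difficulties[None, :])))
    responses = rng.random(p_correct.shape) < p_correct
    return responses.sum(axis=1)  # raw score per simulated person

# An "easy" scale: a small norming sample vs. a large cross-validation sample
norm_scores = simulate_raw_scores(n_per_group=50, n_items=30, difficulty_shift=-1.0)
crossval_scores = simulate_raw_scores(n_per_group=10_000, n_items=30, difficulty_shift=-1.0)
```

Norms derived from the small sample (e.g., percentile ranks per age group) could then be compared against the large cross-validation sample to quantify the misfit of a given norming method under each test condition.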
Virtual reality plays an increasingly important role in research on, and therapy of, pathological fear. However, the mechanisms by which virtual environments elicit and modify fear responses are not yet fully understood. Presence, a psychological construct referring to the ‘sense of being there’ in a virtual environment, is widely assumed to crucially influence the strength of the elicited fear responses; however, causality is still under debate. The present study is the first to experimentally manipulate both variables in order to unravel the causal link between presence and fear responses. Height-fearful participants (N = 49) were immersed into a virtual height situation and a neutral control situation (fear manipulation) with either high versus low sensory realism (presence manipulation). Ratings of presence as well as verbal and physiological (skin conductance, heart rate) fear responses were recorded. Results revealed an effect of the fear manipulation on presence, i.e., higher presence ratings in the height situation compared to the neutral control situation, but no effect of the presence manipulation on fear responses. However, presence ratings during the first exposure to the high-quality neutral environment were predictive of later fear responses in the height situation. Our findings support the hypothesis that experiencing emotional responses in a virtual environment leads to a stronger feeling of being there, i.e., increases presence. In contrast, the effects of presence on fear seem to be more complex: on the one hand, increased presence due to the quality of the virtual environment did not influence fear; on the other hand, presence variability that likely stemmed from differences in user characteristics did predict later fear responses. These findings underscore the importance of user characteristics in the emergence of presence.
Cognitive Processing in Non-Communicative Patients: What Can Event-Related Potentials Tell Us?
(2016)
Event-related potentials (ERPs) have been proposed to improve the differential diagnosis of non-responsive patients. We investigated the potential of the P300 as a reliable marker of conscious processing in patients with locked-in syndrome (LIS). Eleven chronic LIS patients and 10 healthy subjects (HS) listened to a complex-tone auditory oddball paradigm, first in a passive condition (listening to the sounds) and then in an active condition (counting the deviant tones). Seven out of nine HS displayed a P300 waveform in the passive condition, and all did so in the active condition. HS showed statistically significant changes in peak and area amplitude between conditions. Three out of seven LIS patients showed the P300 waveform in the passive condition and five of seven in the active condition. No changes in peak amplitude, and only a significant difference at one electrode in area amplitude, were observed in this group between conditions. We conclude that, despite retaining full consciousness and intact or nearly intact cortical functions, LIS patients present less reliable results than HS when tested with ERPs, specifically in the passive condition. We thus strongly recommend applying ERP paradigms in an active condition when evaluating consciousness in non-responsive patients.
Effects of Background Music on Objective and Subjective Performance Measures in an Auditory BCI
(2016)
Several studies have explored brain-computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory and other BCIs are typically studied without background music. Some work has explored the possibility of using polyphonic music in auditory BCI systems. However, this approach requires users with good musical skills and has not been explored in online experiments. Our hypothesis was that an auditory BCI with background music would be preferred by subjects over a similar BCI without background music, without any difference in BCI performance. We introduce a simple paradigm (which does not require musical skill) using percussion instrument sound stimuli and background music, and evaluated it in both offline and online experiments. The results showed that subjects preferred the auditory BCI with background music. Different performance measures did not reveal any significant effect when comparing background music vs. no background music. Since the addition of background music does not impair BCI performance but is preferred by users, auditory (and perhaps other) BCIs should consider including it. Our study also indicates that auditory BCIs can be effective even if the auditory channel is simultaneously otherwise engaged.
Context conditioning is characterized by unpredictable threat, and its generalization may constitute a risk factor for panic disorder (PD). Therefore, we examined differences between individuals with panic attacks (PA; N = 21) and healthy controls (HC; N = 22) in contextual learning and context generalization using a virtual reality (VR) paradigm. Successful context conditioning was indicated in both groups by higher arousal, anxiety, and contingency ratings, as well as increased startle responses and skin conductance levels (SCLs), in an anxiety context (CTX+), where an aversive unconditioned stimulus (US) occurred unpredictably, versus a safety context (CTX−). PA compared to HC exhibited increased differential responding to CTX+ vs. CTX− and overgeneralization of contextual anxiety on an evaluative verbal level, but not on a physiological level. We conclude that increased contextual conditioning and contextual generalization may constitute risk factors for PD or agoraphobia, contributing to the characteristic avoidance of anxiety contexts and withdrawal to safety contexts, and that evaluative cognitive processes may play a major role.
According to the motivational priming hypothesis, unpleasant stimuli activate the motivational defense system, which in turn promotes congruent affective states such as negative emotions and pain. The question arises to what degree this bottom–up impact of emotions on pain is susceptible to a manipulation of top–down-driven expectations. To this end, we investigated whether verbal instructions implying pain potentiation vs. reduction (placebo or nocebo expectations)—later on confirmed by corresponding experiences (placebo or nocebo conditioning)—might alter behavioral and neurophysiological correlates of pain modulation by unpleasant pictures. We compared two groups, which underwent three experimental phases: first, participants were either instructed that watching unpleasant affective pictures would increase pain (nocebo group) or that watching unpleasant pictures would decrease pain (placebo group) relative to neutral pictures. During the following placebo/nocebo-conditioning phase, pictures were presented together with electrical pain stimuli of different intensities, reinforcing the instructions. In the subsequent test phase, all pictures were presented again combined with identical pain stimuli. Electroencephalogram was recorded in order to analyze neurophysiological responses of pain (somatosensory evoked potential) and picture processing [visually evoked late positive potential (LPP)], in addition to pain ratings. In the test phase, ratings of pain stimuli administered while watching unpleasant relative to neutral pictures were significantly higher in the nocebo group, thus confirming the motivational priming effect for pain perception. In the placebo group, this effect was reversed such that unpleasant compared with neutral pictures led to significantly lower pain ratings. Similarly, somatosensory evoked potentials were decreased during unpleasant compared with neutral pictures, in the placebo group only. 
LPPs of the placebo group failed to discriminate between unpleasant and neutral pictures, while the LPPs of the nocebo group showed a clear differentiation. We conclude that the placebo manipulation already affected the processing of the emotional stimuli and, in consequence, the processing of the pain stimuli. In summary, the study revealed that the modulation of pain by emotions, albeit a reliable and well-established finding, is further tuned by reinforced expectations—known to induce placebo/nocebo effects—which should be addressed in future research and considered in clinical applications.
Background
While the coordination of oculomotor and manual behavior is essential for driving a car, surprisingly little is known about this interaction, especially in situations requiring a quick steering reaction. In the present study, we analyzed oculomotor gaze and manual steering behavior in approach and avoidance tasks. Three task blocks were implemented within a dynamic simulated driving environment requiring the driver either to steer away from/toward a visual stimulus or to switch between both tasks.
Results
Task blocks requiring task switches were associated with higher manual response times and increased error rates. Manual response times did not significantly differ depending on whether drivers had to steer away from vs. toward a stimulus, whereas oculomotor response times and gaze pattern variability were increased when drivers had to steer away from a stimulus compared to steering toward it.
Conclusion
The increased manual response times and error rates in mixed tasks indicate performance costs associated with cognitive flexibility, while the increased oculomotor response times and gaze pattern variability indicate a parsimonious cross-modal action control strategy (avoiding stimulus fixation prior to steering away from it) for the avoidance scenario. Several discrepancies between these results and typical eye–hand interaction patterns in basic laboratory research suggest that the specific goals and complex perceptual affordances associated with driving a vehicle strongly shape cross-modal control of behavior.
Responding in the presence of stimuli leads to an integration of stimulus features and response features into event files, which can later be retrieved to assist action control. This integration mechanism is not limited to target stimuli, but can also include distractors (distractor-response binding). A recurring research question is which factors determine whether or not distractors are integrated. One suggested candidate factor is target-distractor congruency: Distractor-response binding effects were reported to be stronger for congruent than for incongruent target-distractor pairs. Here, we discuss a general problem with including the factor of congruency in typical analyses used to study distractor-based binding effects. Integrating this factor leads to a confound that may explain any differences between distractor-response binding effects of congruent and incongruent distractors with a simple congruency effect. Simulation data confirmed this argument. We propose to interpret previous data cautiously and discuss potential avenues to circumvent this problem in the future.
Promising initial insights show that offices designed to permit physical activity (PA) may reduce workplace sitting time. Biophilic approaches are intended to introduce natural surroundings into the workplace, and preliminary data show positive effects on stress reduction and elevated productivity within the workplace. The primary aim of this pilot study was to analyze changes in workplace sitting time and self-reported habit strength concerning uninterrupted sitting and PA during work, when relocating from a traditional office setting to “active” biophilic-designed surroundings. The secondary aim was to assess possible changes in work-associated factors such as satisfaction with the office environment, work engagement, and work performance, among office staff. In a pre-post designed field study, we collected data through an online survey on health behavior at work. Twelve participants completed the survey before (one-month pre-relocation, T1) and twice after the office relocation (three months (T2) and seven months post-relocation (T3)). Standing time per day during office hours increased from T1 to T3 by about 40 min per day (p < 0.01). Other outcomes remained unaltered. The results suggest that changing office surroundings to an active-permissive biophilic design increased standing time during working hours. Future larger-scale controlled studies are warranted to investigate the influence of office design on sitting time and work-associated factors during working hours in depth.
A hallmark of habitual actions is that, once they are established, they become insensitive to changes in the values of action outcomes. In this article, we review empirical research that examined effects of posttraining changes in outcome values in outcome-selective Pavlovian-to-instrumental transfer (PIT) tasks. This review suggests that cue-instigated action tendencies in these tasks are not affected by weak and/or incomplete revaluation procedures (e.g., selective satiety) and substantially disrupted by a strong and complete devaluation of reinforcers. In a second part, we discuss two alternative models of a motivational control of habitual action: a default-interventionist framework and expected value of control theory. It is argued that the default-interventionist framework cannot solve the problem of an infinite regress (i.e., what controls the controller?). In contrast, expected value of control can explain control of habitual actions with local computations and feedback loops without (implicit) references to control homunculi. It is argued that insensitivity to changes in action outcomes is not an intrinsic design feature of habits but, rather, a function of the cognitive system that controls habitual action tendencies.
Both low-level physical saliency and social information, as presented by human heads or bodies, are known to drive gaze behavior in free-viewing tasks. Researchers have previously made use of a great variety of face stimuli, ranging from photographs of real humans to schematic faces, frequently without systematically differentiating between the two. In the current study, we used a Generalized Linear Mixed Model (GLMM) approach to investigate to what extent schematic artificial faces can predict gaze when they are presented alone or in competition with real human faces. Relative differences in predictive power became apparent, while GLMMs suggest substantial effects for real and artificial faces in all conditions. Artificial faces were accordingly less predictive than real human faces but still contributed significantly to gaze allocation. These results help to further our understanding of how social information guides gaze in complex naturalistic scenes.
For the current study, Lazarus' stress-coping theory and the appendant model of psychosocial adjustment to chronic illness and disabilities (Pakenham, 1999) shaped the foundation for identifying determinants of adjustment to ALS. We aimed to investigate the evolution of psychosocial adjustment to ALS and to determine its long-term predictors. A longitudinal study design with four measurement time points was therefore used to assess patients' quality of life, depression, and stress-coping-model-related aspects, such as illness characteristics, social support, cognitive appraisals, and coping strategies, over a period of 2 years. Regression analyses revealed that 55% of the variance in severity of depressive symptoms and 47% of the variance in quality of life at T2 were accounted for by all the T1 predictor variables taken together. At the level of individual contributions, protective buffering and appraisal of one's own coping potential accounted for a significant percentage of the variance in severity of depressive symptoms, whereas problem-management coping strategies explained variance in quality of life scores. Illness characteristics at T2 did not explain any variance in either adjustment outcome. Overall, the pattern of the longitudinal results indicated stable depressive symptoms and quality of life indices, reflecting successful adjustment to the disease across the four measurement time points during a period of about two years. Empirical evidence is provided for the predictive value of social support, cognitive appraisals, and coping strategies, but not of illness parameters such as severity and duration, for adaptation to ALS. The current study contributes to a better conceptualization of adjustment, allowing us to provide evidence-based support beyond medical and physical intervention for people with ALS.
No abstract available.
A commentary on: Feeling the Conflict: The Crucial Role of Conflict Experience in Adaptation by Desender, K., Van Opstal, F., and Van den Bussche, E. (2014). Psychol. Sci. 25, 675–683. doi: 10.1177/0956797613511468
Conflict adaptation in masked priming has recently been proposed to rely not on successful conflict resolution but rather on conflict experience (Desender et al., 2014). We re-assessed this proposal in a direct replication and also tested a potential confound due to conflict strength. The data supported this alternative view, but also failed to replicate basic conflict adaptation effects of the original study despite considerable power.
Pictorial stimuli can vary on many dimensions, several aspects of which are captured by the term ‘visual complexity.’ Visual complexity can be described by example: a picture of a few objects, colors, or structures is less complex than a very colorful picture of many objects that is composed of several components. Prior studies have reported a relationship between affect and visual complexity, with complex pictures being rated as more pleasant and arousing. However, a relationship in the opposite direction, an effect of affect on perceived visual complexity, is also possible; emotional arousal and valence are known to influence selective attention and visual processing. In a series of experiments, we found that ratings of visual complexity correlated with affective ratings, and independently also with computational measures of visual complexity. These computational measures did not correlate with affect, suggesting that complexity ratings are independently related to distinct factors. We investigated the relationship between affect and ratings of visual complexity, finding an ‘arousal-complexity bias’ to be a robust phenomenon. Moreover, this bias could be attenuated when explicitly indicated, but it did not correlate with inter-individual difference measures of affective processing and was largely unrelated to cognitive and eyetracking measures. Taken together, the arousal-complexity bias seems to be caused by a relationship between arousal and visual processing, as has been described for the greater vividness of arousing pictures. The arousal-complexity bias is also of relevance from an experimental perspective because visual complexity is often considered a variable to control for when using pictorial stimuli.
We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = 0.05 to p = 0.005, is deleterious for making new discoveries and for the progress of science. Given that both blanket and variable alpha levels are problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and sample size much more directly than significance testing does, but none of these statistical tools should be taken as a new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, and implications for applications. To boil all this down to a binary decision based on a p-value threshold of 0.05, 0.01, 0.005, or anything else is not acceptable.
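The point about cumulative evidence over single-study thresholds can be made concrete with a toy calculation. Under Stouffer's method (one standard way to pool one-sided p-values from independent studies, used here purely for illustration, not as the authors' recommended procedure), several studies that each miss a strict threshold can jointly carry strong evidence:

```python
import math
from statistics import NormalDist

_nd = NormalDist()

def stouffer_combined_p(p_values):
    """Pool one-sided p-values from independent studies (Stouffer's method).

    Each p-value is converted to a z-score, the z-scores are summed and
    rescaled by sqrt(k), and the result is converted back to a p-value.
    """
    z = sum(_nd.inv_cdf(1.0 - p) for p in p_values) / math.sqrt(len(p_values))
    return 1.0 - _nd.cdf(z)

# Three hypothetical independent studies, none of which passes p < .005 alone
study_ps = [0.04, 0.03, 0.05]
combined = stouffer_combined_p(study_ps)  # well below .005
```

This illustrates the asymmetry the abstract criticizes: a binary per-study decision at any fixed alpha discards exactly the graded, cumulative information that the combined analysis retains.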
When More Is Better – Consumption Priming Decreases Responders’ Rejections in the Ultimatum Game
(2017)
During the past decades, economic theories of rational choice have been confronted with outcomes that severely challenge their claim of universal validity. For example, traditional theories cannot account for refusals to cooperate when cooperation would result in higher payoffs. A prominent illustration is responders’ rejection of positive but unequal payoffs in the Ultimatum Game. To accommodate this anomaly within a rational framework, one needs to assume both a preference for higher payoffs and a preference for equal payoffs. The current set of studies shows that the relative weight of these preference components depends on external conditions and that consumption priming may decrease responders’ rejections of unequal payoffs. Specifically, we demonstrate that increasing the accessibility of consumption-related information accentuates the preference for higher payoffs. Furthermore, consumption priming increased responders’ reaction times for unequal payoffs, which suggests an increased conflict between the two preference components. While these results may also be integrated into existing social preference models, we try to identify some basic psychological processes underlying economic decision making. Going beyond the Ultimatum Game, we propose that a distinction between comparative and deductive evaluations may provide a more general framework to account for various anomalies in behavioral economics.
In today’s world of work, networking behaviors are an important and viable strategy for enhancing success in work and career domains. Concerning personality as an antecedent of networking behaviors, prior studies have relied exclusively on trait perspectives that focus on how people feel, think, and act. Adopting a motivational perspective on personality, we enlarge this focus and argue that, beyond traits predominantly tapping social content, motives shed further light on instrumental aspects of networking – or why people network. We use McClelland’s implicit motives framework of need for power (nPow), need for achievement (nAch), and need for affiliation (nAff) to examine instrumental determinants of networking. Using a facet-theoretical approach to networking behaviors, we predict differential relations of these three motives with facets of (1) internal vs. external networking and (2) building, maintaining, and using contacts. We conducted an online study, in which we temporally separated the measures (N = 539 employed individuals), to examine our hypotheses. Using multivariate latent regression, we show that nAch is related to networking in general. In line with theoretical differences between networking facets, we find that nAff is positively related to building contacts, whereas nPow is positively related to using internal contacts. In sum, this study shows that networking is not only driven by social factors (i.e., nAff); instead, the achievement motive is the most important driver of networking behaviors.
After observing another agent perform simple actions, people systematically remember these actions as their own after a brief period of time. Such observation inflation has been documented as a robust phenomenon in studies in which participants passively observed videotaped actions. Whether observation inflation also holds for direct, face-to-face interactions is an open question that we addressed in two experiments. In Experiment 1, participants commanded the experimenter to carry out certain actions, and they indeed reported false memories of self-performance in a later memory test. The effect size of this inflation effect was similar to that found for passive observation, as confirmed by Experiment 2. These findings suggest that observation inflation might affect action memory in a broad range of real-world interactions.
Saliency-based models of visual attention postulate that, when a scene is freely viewed, attention is predominantly allocated to those elements that stand out in terms of their physical properties. However, eye-tracking studies have shown that saliency models fail to predict gaze behavior accurately when social information is included in an image. Notably, gaze pattern analyses revealed that depictions of human beings are heavily prioritized independent of their low-level physical saliency. What remains unknown, however, is whether the prioritization of such social features is a reflexive or a voluntary process. To investigate the early stages of social attention in more detail, participants viewed photographs of naturalistic scenes with and without social features (i.e., human heads or bodies) for 200 ms while their eye movements were being recorded. We observed significantly more first eye movements to regions containing social features than would be expected from a chance level distribution of saccades. Additionally, a generalized linear mixed model analysis revealed that the social content of a region better predicted first saccade direction than its saliency suggesting that social features partially override the impact of low-level physical saliency on gaze patterns. Given the brief image presentation time that precluded visual exploration, our results provide compelling evidence for a reflexive component in social attention. Moreover, the present study emphasizes the importance of considering social influences for a more coherent understanding of human attentional selection.
Visual saliency maps, reflecting locations that stand out from the background in terms of their low-level physical features, have proven to be very useful for empirical research on attentional exploration and reliably predict gaze behavior. In the present study, we tested these predictions for socially relevant stimuli occurring in naturalistic scenes using eye tracking. We hypothesized that social features (i.e., human faces or bodies) would be processed preferentially over non-social features (i.e., objects, animals) regardless of their low-level saliency. To challenge this notion, we included three tasks that deliberately addressed non-social attributes. In agreement with our hypothesis, social information, especially heads, was preferentially attended to compared to highly salient image regions across all tasks. Social information was never required to solve a task but was nevertheless attended to. What is more, after completing the task requirements, viewing behavior reverted to that of free viewing, with heavy prioritization of social features. Additionally, initial eye movements, reflecting potentially automatic shifts of attention, were predominantly directed towards heads irrespective of top-down task demands. On these grounds, we suggest that social stimuli may provide exclusive access to the priority map, enabling social attention to override reflexive and controlled attentional processes. Furthermore, our results challenge the generalizability of saliency-based attention models.
Do people evaluate an open-minded midwife less positively than a caring midwife? Both open-minded and caring are generally seen as positive attributes. However, consistency varies—the attribute caring is consistent with the midwife stereotype while open-minded is not. In general, both stimulus valence and consistency can influence evaluations. Six experiments investigated the respective influence of valence and consistency on evaluative judgments in the domain of stereotyping. In an impression formation paradigm, valence and consistency of stereotypic information about target persons were manipulated orthogonally, and spontaneous evaluations of these target persons were measured. Valence reliably influenced evaluations. However, for strongly valenced stereotypes, no effect of consistency was observed. Parameters possibly preventing the occurrence of consistency effects were ruled out, specifically, valence of inconsistent attributes, processing priority of category information, and impression formation instructions. However, consistency had subtle effects on evaluative judgments if the information about a target person was not strongly valenced and experimental conditions were optimal. In conclusion, both stereotype valence and consistency can, in principle, play a role in evaluative judgments of stereotypic target persons. However, the more subtle influence of consistency does not seem to substantially influence evaluations of stereotyped target persons. Implications for fluency research and stereotype disconfirmation are discussed.
Since exposure therapy for anxiety disorders incorporates extinction of contextual anxiety, relapses may be due to reinstatement processes. Animal research has demonstrated more stable extinction memory and less anxiety relapse due to vagus nerve stimulation (VNS). We report a valid human three-day context conditioning, extinction, and return-of-anxiety protocol, which we used to examine effects of transcutaneous VNS (tVNS). Seventy-five healthy participants received electric stimuli (unconditioned stimuli, US) during acquisition (Day 1) when guided through one virtual office (anxiety context, CTX+) but never in another (safety context, CTX−). During extinction (Day 2), participants received tVNS, sham, or no stimulation and revisited both contexts without US delivery. On Day 3, participants received three USs for reinstatement, followed by a test phase. Successful acquisition (i.e., startle potentiation and lower valence, higher arousal, anxiety, and contingency ratings in CTX+ versus CTX−), the disappearance of these effects during extinction, and successful reinstatement indicate the validity of this paradigm. Interestingly, we found generalized reinstatement in startle responses and differential reinstatement in valence ratings. Altogether, our protocol serves as a valid conditioning paradigm. The reinstatement effects indicate different anxiety networks underlying physiological versus verbal responses. However, tVNS affected neither extinction nor reinstatement, which calls for validation and improvement of the stimulation protocol.
Strong bottom-up impulses and weak top-down control may interactively lead to overeating and, consequently, weight gain. In the present study, female university freshmen were tested at the start of the first semester and again at the start of the second semester. Attentional bias toward high- or low-calorie food-cues was assessed using a dot-probe paradigm and participants completed the Barratt Impulsiveness Scale. Attentional bias and motor impulsivity interactively predicted change in body mass index: motor impulsivity positively predicted weight gain only when participants showed an attentional bias toward high-calorie food-cues. Attentional and non-planning impulsivity were unrelated to weight change. Results support findings showing that weight gain is prospectively predicted by a combination of weak top-down control (i.e. high impulsivity) and strong bottom-up impulses (i.e. high automatic motivational drive toward high-calorie food stimuli). They also highlight the fact that only specific aspects of impulsivity are relevant in eating and weight regulation.