Context conditioning is characterized by unpredictable threat, and both context conditioning and its generalization may constitute risk factors for panic disorder (PD). Therefore, we examined differences between individuals with panic attacks (PA; N = 21) and healthy controls (HC; N = 22) in contextual learning and context generalization using a virtual reality (VR) paradigm. Successful context conditioning was indicated in both groups by higher arousal, anxiety, and contingency ratings, as well as increased startle responses and skin conductance levels (SCLs), in an anxiety context (CTX+), where an aversive unconditioned stimulus (US) occurred unpredictably, vs. a safety context (CTX−). PA compared with HC exhibited increased differential responding to CTX+ vs. CTX− and overgeneralization of contextual anxiety on an evaluative verbal level, but not on a physiological level. We conclude that increased contextual conditioning and contextual generalization may constitute risk factors for PD or agoraphobia, contributing to the characteristic avoidance of anxiety contexts and withdrawal to safety contexts, and that evaluative cognitive processes may play a major role.
According to the motivational priming hypothesis, unpleasant stimuli activate the motivational defense system, which in turn promotes congruent affective states such as negative emotions and pain. The question arises to what degree this bottom–up impact of emotions on pain is susceptible to a manipulation of top–down-driven expectations. To this end, we investigated whether verbal instructions implying pain potentiation vs. reduction (placebo or nocebo expectations)—later on confirmed by corresponding experiences (placebo or nocebo conditioning)—might alter behavioral and neurophysiological correlates of pain modulation by unpleasant pictures. We compared two groups, which underwent three experimental phases: first, participants were either instructed that watching unpleasant affective pictures would increase pain (nocebo group) or that watching unpleasant pictures would decrease pain (placebo group) relative to neutral pictures. During the following placebo/nocebo-conditioning phase, pictures were presented together with electrical pain stimuli of different intensities, reinforcing the instructions. In the subsequent test phase, all pictures were presented again combined with identical pain stimuli. The electroencephalogram was recorded in order to analyze neurophysiological responses of pain (somatosensory evoked potential) and picture processing [visually evoked late positive potential (LPP)], in addition to pain ratings. In the test phase, ratings of pain stimuli administered while watching unpleasant relative to neutral pictures were significantly higher in the nocebo group, thus confirming the motivational priming effect for pain perception. In the placebo group, this effect was reversed such that unpleasant compared with neutral pictures led to significantly lower pain ratings. Similarly, somatosensory evoked potentials were decreased during unpleasant compared with neutral pictures, in the placebo group only. LPPs of the placebo group failed to discriminate between unpleasant and neutral pictures, while the LPPs of the nocebo group showed a clear differentiation. We conclude that the placebo manipulation already affected the processing of the emotional stimuli and, in consequence, the processing of the pain stimuli. In summary, the study revealed that the modulation of pain by emotions, albeit a reliable and well-established finding, is further tuned by reinforced expectations—known to induce placebo/nocebo effects—which should be addressed in future research and considered in clinical applications.
Multitasking, defined as performing more than one task at a time, typically yields performance decrements, for instance, in processing speed and accuracy. These performance costs are often distributed asymmetrically among the involved tasks. Under suitable conditions, this can be interpreted as a marker for prioritization of one task – the one that suffers less – over the other. One source of such task prioritization is the use of different effector systems (e.g., oculomotor system, vocal tract, limbs) and their characteristics. The present work explores such effector system-based task prioritization by examining to what extent the associated effector systems determine which task is processed with higher priority in multitasking situations. To this end, three different paradigms are used, namely the simultaneous (stimulus) onset paradigm, the psychological refractory period (PRP) paradigm, and the task switching paradigm. These paradigms invoke situations in which two (in the present studies basic spatial decision) tasks are a) initiated at exactly the same time, b) initiated with a short, varying temporal distance (but still temporally overlapping), or c) alternating randomly (without temporal overlap). The results allow for three major conclusions: 1. The assumption of effector system-based task prioritization according to an ordinal pattern (oculomotor > pedal > vocal > manual, indicating decreasing prioritization) is supported by the observed data in the simultaneous onset paradigm. This data pattern cannot be explained by a rigid “first come, first served” task scheduling principle. 2. The data from the PRP paradigm confirmed the assumption of vocal-over-manual prioritization and showed that classic PRP effects (as a marker for task order-based prioritization) can be modulated by effector system characteristics. 3. The mere cognitive representation of task sets (which must be held active to switch between them) that differ in effector systems, without an actual temporal overlap in task processing, is not sufficient to elicit the same effector system prioritization phenomena observed for overlapping tasks. In summary, the insights obtained by the present work support the assumptions of parallel central task processing and resource sharing among tasks, as opposed to exclusively serial processing of central processing stages. Moreover, they indicate that effector systems are a crucial factor in multitasking and suggest an integration of corresponding weighting parameters into existing dual-task control frameworks.
Background
While the coordination of oculomotor and manual behavior is essential for driving a car, surprisingly little is known about this interaction, especially in situations requiring a quick steering reaction. In the present study, we analyzed oculomotor gaze and manual steering behavior in approach and avoidance tasks. Three task blocks were implemented within a dynamic simulated driving environment, requiring the driver either to steer away from or toward a visual stimulus, or to switch between both tasks.
Results
Task blocks requiring task switches were associated with higher manual response times and increased error rates. Manual response times did not significantly differ depending on whether drivers had to steer away from vs toward a stimulus, whereas oculomotor response times and gaze pattern variability were increased when drivers had to steer away from a stimulus compared to steering toward a stimulus.
Conclusion
The increased manual response times and error rates in mixed tasks indicate performance costs associated with cognitive flexibility, while the increased oculomotor response times and gaze pattern variability indicate a parsimonious cross-modal action control strategy (avoiding stimulus fixation prior to steering away from it) for the avoidance scenario. Several discrepancies between these results and typical eye–hand interaction patterns in basic laboratory research suggest that the specific goals and complex perceptual affordances associated with driving a vehicle strongly shape cross-modal control of behavior.
Responding in the presence of stimuli leads to an integration of stimulus features and response features into event files, which can later be retrieved to assist action control. This integration mechanism is not limited to target stimuli, but can also include distractors (distractor-response binding). A recurring research question is which factors determine whether or not distractors are integrated. One suggested candidate factor is target-distractor congruency: Distractor-response binding effects were reported to be stronger for congruent than for incongruent target-distractor pairs. Here, we discuss a general problem with including the factor of congruency in typical analyses used to study distractor-based binding effects. Integrating this factor leads to a confound that may explain any differences between distractor-response binding effects of congruent and incongruent distractors by a simple congruency effect. Simulation data confirmed this argument. We propose to interpret previous data cautiously and discuss potential avenues to circumvent this problem in the future.
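The confound can be illustrated with a toy simulation (a minimal sketch, not the authors' actual code or design): generate probe reaction times that contain only a probe-congruency effect and no binding whatsoever, then compute the standard binding score separately for congruent and incongruent primes. The sketch assumes a flanker-type setup in which "congruent" means the distractor is identical to the target and "distractor repetition" means the identical distractor reappears; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000                                  # trials per design cell
base, congr_cost, sd = 500.0, 30.0, 50.0   # ms; probe congruency is the ONLY true effect

def simulate_rts(probe_congruent):
    """Generate probe RTs containing a congruency effect but no binding effect."""
    return base + (0.0 if probe_congruent else congr_cost) + rng.normal(0.0, sd, n)

# Probe congruency as implied by the assumed design: after a congruent prime, only
# the response-repetition (RR) / distractor-repetition (DR) cell yields a congruent
# probe; every other cell, and every cell after an incongruent prime, is incongruent.
cells = {
    ("congruent", "RR", "DR"): True,    ("congruent", "RR", "DC"): False,
    ("congruent", "RC", "DR"): False,   ("congruent", "RC", "DC"): False,
    ("incongruent", "RR", "DR"): False, ("incongruent", "RR", "DC"): False,
    ("incongruent", "RC", "DR"): False, ("incongruent", "RC", "DC"): False,
}
mean_rt = {cell: simulate_rts(cong).mean() for cell, cong in cells.items()}

def binding_effect(prime):
    """Standard binding score: interaction of response relation x distractor relation."""
    return ((mean_rt[(prime, "RR", "DC")] - mean_rt[(prime, "RR", "DR")])
            - (mean_rt[(prime, "RC", "DC")] - mean_rt[(prime, "RC", "DR")]))

print(binding_effect("congruent"))    # ~ +30 ms apparent "binding", none in the model
print(binding_effect("incongruent"))  # ~ 0 ms
```

Because probe congruency is yoked to the prime congruency x distractor relation cells, a pure congruency effect masquerades as "stronger binding after congruent primes," which is exactly the confound the abstract describes.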
Promising initial insights show that offices designed to permit physical activity (PA) may reduce workplace sitting time. Biophilic approaches are intended to introduce natural surroundings into the workplace, and preliminary data show positive effects on stress reduction and elevated productivity within the workplace. The primary aim of this pilot study was to analyze changes in workplace sitting time and self-reported habit strength concerning uninterrupted sitting and PA during work, when relocating from a traditional office setting to “active” biophilic-designed surroundings. The secondary aim was to assess possible changes in work-associated factors such as satisfaction with the office environment, work engagement, and work performance, among office staff. In a pre-post designed field study, we collected data through an online survey on health behavior at work. Twelve participants completed the survey before (one-month pre-relocation, T1) and twice after the office relocation (three months (T2) and seven months post-relocation (T3)). Standing time per day during office hours increased from T1 to T3 by about 40 min per day (p < 0.01). Other outcomes remained unaltered. The results suggest that changing office surroundings to an active-permissive biophilic design increased standing time during working hours. Future larger-scale controlled studies are warranted to investigate the influence of office design on sitting time and work-associated factors during working hours in depth.
A hallmark of habitual actions is that, once they are established, they become insensitive to changes in the values of action outcomes. In this article, we review empirical research that examined effects of posttraining changes in outcome values in outcome-selective Pavlovian-to-instrumental transfer (PIT) tasks. This review suggests that cue-instigated action tendencies in these tasks are not affected by weak and/or incomplete revaluation procedures (e.g., selective satiety) but are substantially disrupted by a strong and complete devaluation of reinforcers. In a second part, we discuss two alternative models of a motivational control of habitual action: a default-interventionist framework and expected value of control theory. It is argued that the default-interventionist framework cannot solve the problem of an infinite regress (i.e., what controls the controller?). In contrast, expected value of control can explain control of habitual actions with local computations and feedback loops, without (implicit) references to control homunculi. It is argued that insensitivity to changes in action outcomes is not an intrinsic design feature of habits but, rather, a function of the cognitive system that controls habitual action tendencies.
Both low-level physical saliency and social information, as presented by human heads or bodies, are known to drive gaze behavior in free-viewing tasks. Researchers have previously made use of a great variety of face stimuli, ranging from photographs of real humans to schematic faces, frequently without systematically differentiating between the two. In the current study, we used a Generalized Linear Mixed Model (GLMM) approach to investigate to what extent schematic artificial faces can predict gaze when they are presented alone or in competition with real human faces. Relative differences in predictive power became apparent, while GLMMs suggest substantial effects for real and artificial faces in all conditions. Artificial faces were accordingly less predictive than real human faces but still contributed significantly to gaze allocation. These results help to further our understanding of how social information guides gaze in complex naturalistic scenes.
For the current study, the Lazarian stress-coping theory and the appendant model of psychosocial adjustment to chronic illness and disabilities (Pakenham, 1999) formed the foundation for identifying determinants of adjustment to ALS. We aimed to investigate the evolution of psychosocial adjustment to ALS and to determine its long-term predictors. A longitudinal study design with four measurement time points was therefore used to assess patients' quality of life, depression, and stress-coping model related aspects, such as illness characteristics, social support, cognitive appraisals, and coping strategies, during a period of 2 years. Regression analyses revealed that 55% of the variance in severity of depressive symptoms and 47% of the variance in quality of life at T2 were accounted for by all the T1 predictor variables taken together. On the level of individual contributions, protective buffering and appraisal of one's own coping potential accounted for a significant percentage of the variance in severity of depressive symptoms, whereas problem management coping strategies explained variance in quality of life scores. Illness characteristics at T2 did not explain any variance in either adjustment outcome. Overall, the pattern of the longitudinal results indicated stable depressive symptoms and quality of life indices, reflecting successful adjustment to the disease across the four measurement time points during a period of about two years. Empirical evidence is provided for the predictive value of social support, cognitive appraisals, and coping strategies, but not of illness parameters such as severity and duration, for adaptation to ALS. The current study contributes to a better conceptualization of adjustment, allowing us to provide evidence-based support beyond medical and physical intervention for people with ALS.
No abstract available.
Maladaptive coping mechanisms influence the health-related quality of life (HRQoL) of individuals facing acute and chronic stress. Trait emotional intelligence (EI) may provide a protective shield against the debilitating effects of maladaptive coping, thus contributing to maintained HRQoL. Low trait EI, on the other hand, may predispose individuals to apply maladaptive coping, consequently resulting in lower HRQoL. The current research comprises two studies. Study 1 was designed to investigate the protective effects of trait EI and its utility for efficient coping in dealing with the stress caused by chronic heart failure (CHF) in a cross-cultural setting (Pakistan vs. Germany). N = 200 CHF patients were recruited at cardiology institutes in Multan, Pakistan, and in Würzburg and Brandenburg, Germany. Path analysis confirmed the expected relation between low trait EI and low HRQoL and revealed that this association was mediated by maladaptive metacognitions and negative coping strategies in Pakistani but not German CHF patients. Interestingly, the specific coping strategies were also culture-specific. The Pakistani sample considered religious coping to be highly important, whereas the German sample was focused on adopting a healthy lifestyle such as doing exercise. These findings are in line with cultural characteristics suggesting that German CHF patients have an internal locus of control as compared to an external locus of control in Pakistani CHF patients. Finally, the findings from Study 1 corroborate the culture-independent validity of the metacognitive model of generalized anxiety disorder.
In addition to low trait EI, high interoception accuracy (IA) may predispose individuals to interpret cardiac symptoms as threatening, thus leading to anxiety. To examine this proposition, Study 2 compared individuals with high vs. low IA in dealing with a psychosocial stressor (public speaking) in an experimental lab study. In addition, a novel physiological intervention named transcutaneous vagus nerve stimulation (t-VNS) and cognitive reappraisal (CR) were applied during and after the anticipation of the speech in order to facilitate coping with stress. N = 99 healthy volunteers participated in the study. The results showed descriptive effects that only reached trend level. They suggested a tendency of high IA individuals to perceive the situation as more threatening, as indicated by increased heart rate and reduced heart rate variability in the high-frequency spectrum as well as high subjective anxiety during anticipation of and actual performance of the speech. This suggests a potential vulnerability of high IA individuals for developing anxiety disorders, specifically social anxiety disorder, in case negative self-focused attention and negative evaluation are applied to the (more prominently perceived) increased cardiac responding during anticipation and the actual presentation of the public speech. The study did not reveal any significant protective effects of t-VNS and CR.
In summary, the current research suggested that low trait EI and high IA predicted worse psychological adjustment to chronic and acute distress. Low trait EI facilitated maladaptive metacognitive processes, resulting in the use of negative coping strategies in Study 1, whereas increased IA regarding cardioceptions predicted high physiological arousal in Study 2. Finally, the German vs. the Pakistani culture greatly affected the preference for specific coping strategies. These findings have implications for caregivers to provide culture-specific treatments on the one hand. On the other hand, they highlight high IA as a possible vulnerability to be targeted for the prevention of (social) anxiety.
Forward Collision Alarms (FCA) are intended to signal hazardous traffic situations and the need for an immediate corrective driver response. However, data from naturalistic driving studies revealed that approximately half of all alarms activated by conventional FCA systems were unnecessary. In these situations, the alarm activation was correct according to the implemented algorithm, yet the alarms led to no or only minimal driver responses. Psychological research can make an important contribution to understanding drivers’ needs when interacting with driver assistance systems.
The overarching objective of this thesis was to gain a systematic understanding of psychological factors and processes that influence drivers’ perceived need for assistance in potential collision situations. To elucidate under which conditions drivers perceive alarms as unnecessary, a theoretical framework of drivers’ subjective alarm evaluation was developed. A further goal was to investigate the impact of unnecessary alarms on drivers’ responses and acceptance. Four driving simulator studies were carried out to examine the outlined research questions.
In line with the hypotheses derived from the theoretical framework, the results suggest that drivers’ perceived need for assistance is determined by their retrospective subjective hazard perception. While predictions of conventional FCA systems are exclusively based on physical measurements resulting in a time to collision, human drivers additionally consider their own manoeuvre intentions and those attributed to other road users to anticipate the further course of a potentially critical situation. When drivers anticipate a dissolving outcome of a potential conflict, they perceive the situation as less hazardous than the system does. Based on this discrepancy, the system would activate an alarm while drivers’ perceived need for assistance is low. To sum up, the described factors and processes cause drivers to perceive certain alarms as unnecessary. Although drivers accept unnecessary alarms less than useful alarms, unnecessary alarms do not reduce their overall system acceptance. While unnecessary alarms cause moderate driver responses in the short term, the intensity of responses decreases with multiple exposures to unnecessary alarms. However, overall, the effects of unnecessary alarms on drivers’ alarm responses and acceptance seem to be rather uncritical.
This thesis provides insights into human factors that explain when FCAs are perceived as unnecessary. These factors might contribute to design FCA systems tailored to drivers’ needs.
A commentary on: Feeling the Conflict: The Crucial Role of Conflict Experience in Adaptation, by Desender, K., Van Opstal, F., and Van den Bussche, E. (2014). Psychol. Sci. 25, 675–683. doi: 10.1177/0956797613511468
Conflict adaptation in masked priming has recently been proposed to rely not on successful conflict resolution but rather on conflict experience (Desender et al., 2014). We re-assessed this proposal in a direct replication and also tested a potential confound due to conflict strength. The data supported this alternative view, but also failed to replicate basic conflict adaptation effects of the original study despite considerable power.
Pictorial stimuli can vary on many dimensions, several aspects of which are captured by the term ‘visual complexity.’ Visual complexity can be described as follows: “a picture of a few objects, colors, or structures would be less complex than a very colorful picture of many objects that is composed of several components.” Prior studies have reported a relationship between affect and visual complexity, where complex pictures are rated as more pleasant and arousing. However, a relationship in the opposite direction, an effect of affect on visual complexity, is also possible; emotional arousal and valence are known to influence selective attention and visual processing. In a series of experiments, we found that ratings of visual complexity correlated with affective ratings, and independently also with computational measures of visual complexity. These computational measures did not correlate with affect, suggesting that complexity ratings are separately related to distinct factors. We investigated the relationship between affect and ratings of visual complexity, finding an ‘arousal-complexity bias’ to be a robust phenomenon. Moreover, we found that this bias could be attenuated when explicitly indicated but did not correlate with inter-individual difference measures of affective processing, and was largely unrelated to cognitive and eyetracking measures. Taken together, the arousal-complexity bias seems to be caused by a relationship between arousal and visual processing, as has been described for the greater vividness of arousing pictures. The described arousal-complexity bias is also of relevance from an experimental perspective because visual complexity is often considered a variable to control for when using pictorial stimuli.
We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = 0.05 to p = 0.005, is deleterious for the finding of new discoveries and the progress of science. Given that blanket and variable alpha levels both are problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and sample size much more directly than significance testing does; but none of the statistical tools should be taken as the new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, and implications for applications. To boil all this down to a binary decision based on a p-value threshold of 0.05, 0.01, 0.005, or anything else, is not acceptable.
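The point about cumulative evidence from multiple independent studies can be made concrete with Stouffer's Z, a standard meta-analytic method for combining one-sided p-values; the p-values below are invented for illustration.

```python
from statistics import NormalDist
import math

nd = NormalDist()

def stouffer(pvalues):
    """Combine one-sided p-values from independent studies via Stouffer's Z."""
    zs = [nd.inv_cdf(1.0 - p) for p in pvalues]  # each p converted to a z-score
    z_combined = sum(zs) / math.sqrt(len(zs))    # z-scores sum; variance scales by k
    return z_combined, 1.0 - nd.cdf(z_combined)

# Three hypothetical studies, none "significant" at p < .05 on its own:
z, p = stouffer([0.08, 0.06, 0.11])
print(z, p)  # the combined evidence is considerably stronger than any single study
```

The example illustrates the argument rather than endorsing any threshold: three studies that would each be discarded under a binary p < .05 rule jointly carry substantial evidence, which is exactly what single-study accept/reject decisions throw away.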
When More Is Better – Consumption Priming Decreases Responders’ Rejections in the Ultimatum Game
(2017)
During the past decades, economic theories of rational choice have been exposed to outcomes that severely challenged their claim of universal validity. For example, traditional theories cannot account for refusals to cooperate if cooperation would result in higher payoffs. A prominent illustration is responders’ rejections of positive but unequal payoffs in the Ultimatum Game. To accommodate this anomaly in a rational framework, one needs to assume both a preference for higher payoffs and a preference for equal payoffs. The current set of studies shows that the relative weight of these preference components depends on external conditions and that consumption priming may decrease responders’ rejections of unequal payoffs. Specifically, we demonstrate that increasing the accessibility of consumption-related information accentuates the preference for higher payoffs. Furthermore, consumption priming increased responders’ reaction times for unequal payoffs, which suggests an increased conflict between both preference components. While these results may also be integrated into existing social preference models, we try to identify some basic psychological processes underlying economic decision making. Going beyond the Ultimatum Game, we propose that a distinction between comparative and deductive evaluations may provide a more general framework to account for various anomalies in behavioral economics.
In today’s world of work, networking behaviors are an important and viable strategy to enhance success in work and career domains. Concerning personality as an antecedent of networking behaviors, prior studies have exclusively relied on trait perspectives that focus on how people feel, think, and act. Adopting a motivational perspective on personality, we enlarge this focus and argue that, beyond traits predominantly tapping social content, motives shed further light on instrumental aspects of networking – or why people network. We use McClelland’s implicit motives framework of need for power (nPow), need for achievement (nAch), and need for affiliation (nAff) to examine instrumental determinants of networking. Using a facet theoretical approach to networking behaviors, we predict differential relations of these three motives with facets of (1) internal vs. external networking and (2) building, maintaining, and using contacts. We conducted an online study in which we temporally separated measures (N = 539 employed individuals) to examine our hypotheses. Using multivariate latent regression, we show that nAch is related to networking in general. In line with theoretical differences between networking facets, we find that nAff is positively related to building contacts, whereas nPow is positively related to using internal contacts. In sum, this study shows that networking is not only driven by social factors (i.e., nAff); instead, the achievement motive is the most important driver of networking behaviors.
Social attention is a ubiquitous, but also enigmatic and sometimes elusive phenomenon. We direct our gaze at other human beings to see what they are doing and to guess their intentions, but we may also absorb social events en passant as they unfold in the corner of the eye. We use our gaze as a discrete communication channel, sometimes conveying pieces of information which would be difficult to explicate, but we may also find ourselves avoiding eye-contact with others in moments when self-disclosure is fear-laden. We experience our gaze as the most genuine expression of our will, but research also suggests considerable levels of predictability and automaticity in our gaze behavior. The phenomenon’s complexity has hindered researchers from developing a unified framework which can conclusively accommodate all of its aspects, or from even agreeing on the most promising research methodologies.
The present work follows a multi-methods approach, taking on several aspects of the phenomenon from various directions. Participants in study 1 viewed dynamic social scenes on a computer screen. Here, low-level physical saliency (i.e., color, contrast, or motion) and human heads both attracted gaze to a similar extent, providing a comparison of two vastly different classes of gaze predictors in direct juxtaposition. In study 2, participants with varying degrees of social anxiety walked in a public train station while their eye movements were tracked. With increasing levels of social anxiety, participants showed a relative avoidance of gaze at near compared to distant people. When replicating the experiment in a laboratory situation with a matched participant group, social anxiety did not modulate gaze behavior, fueling the debate around appropriate experimental designs in the field. Study 3 employed virtual reality (VR) to investigate social gaze in a complex and immersive, but still highly controlled situation. In this situation, participants exhibited a gaze behavior which may be more typical for real-life compared to laboratory situations, as they avoided gaze contact with a virtual conspecific unless she gazed at them. This study provided important insights into gaze behavior in virtual social situations, helping to better estimate the possible benefits of this new research approach. Throughout all three experiments, participants showed consistent inter-individual differences in their gaze behavior. However, the present work could not resolve whether these differences are linked to psychologically meaningful traits or whether they instead have an epiphenomenal character.