Is part of the Bibliography
- yes (595)
Document Type
- Journal article (387)
- Doctoral Thesis (159)
- Book article / Book chapter (21)
- Conference Proceeding (12)
- Book (4)
- Review (4)
- Report (3)
- Other (2)
- Preprint (2)
- Master Thesis (1)
Keywords
- Psychologie (65)
- EEG (24)
- virtual reality (20)
- attention (17)
- Kognition (15)
- P300 (15)
- anxiety (13)
- emotion (13)
- Virtuelle Realität (12)
- psychology (12)
Institute
- Institut für Psychologie (595)
Sonstige beteiligte Institutionen
- Adam Opel AG (1)
- BMBF (1)
- Blindeninstitut, Ohmstr. 7, 97076, Wuerzburg, Germany (1)
- Deutsches Zentrum für Präventionsforschung Psychische Gesundheit (DZPP) (1)
- Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society (ESI) (1)
- Evangelisches Studienwerk e.V. (1)
- Forschungsverbund ForChange des Bayrischen Kultusministeriums (1)
- IFT Institut für Therapieforschung München (1)
- Klinik für Psychiatrie und Psychotherapie, Universität Würzburg (1)
- Opel Automobile GmbH (1)
Context conditioning is characterized by unpredictable threat, and its generalization may constitute a risk factor for panic disorder (PD). Therefore, we examined differences between individuals with panic attacks (PA; N = 21) and healthy controls (HC; N = 22) in contextual learning and context generalization using a virtual reality (VR) paradigm. Successful context conditioning was indicated in both groups by higher arousal, anxiety, and contingency ratings, and by increased startle responses and skin conductance levels (SCLs), in an anxiety context (CTX+) where an aversive unconditioned stimulus (US) occurred unpredictably vs. a safety context (CTX−). PA compared to HC exhibited increased differential responding to CTX+ vs. CTX− and overgeneralization of contextual anxiety on an evaluative verbal level, but not on a physiological level. We conclude that increased contextual conditioning and contextual generalization may constitute risk factors for PD or agoraphobia, contributing to the characteristic avoidance of anxiety contexts and withdrawal to safety contexts, and that evaluative cognitive processes may play a major role.
According to the motivational priming hypothesis, unpleasant stimuli activate the motivational defense system, which in turn promotes congruent affective states such as negative emotions and pain. The question arises to what degree this bottom-up impact of emotions on pain is susceptible to a manipulation of top-down expectations. To this end, we investigated whether verbal instructions implying pain potentiation vs. reduction (placebo or nocebo expectations), later confirmed by corresponding experiences (placebo or nocebo conditioning), might alter behavioral and neurophysiological correlates of pain modulation by unpleasant pictures. We compared two groups, which underwent three experimental phases: first, participants were instructed either that watching unpleasant affective pictures would increase pain (nocebo group) or that watching unpleasant pictures would decrease pain (placebo group) relative to neutral pictures. During the following placebo/nocebo conditioning phase, pictures were presented together with electrical pain stimuli of different intensities, reinforcing the instructions. In the subsequent test phase, all pictures were presented again, combined with identical pain stimuli. The electroencephalogram was recorded in order to analyze neurophysiological responses to pain (somatosensory evoked potentials) and picture processing [visually evoked late positive potentials (LPPs)], in addition to pain ratings. In the test phase, ratings of pain stimuli administered while watching unpleasant relative to neutral pictures were significantly higher in the nocebo group, thus confirming the motivational priming effect for pain perception. In the placebo group, this effect was reversed such that unpleasant compared with neutral pictures led to significantly lower pain ratings. Similarly, somatosensory evoked potentials were decreased during unpleasant compared with neutral pictures in the placebo group only.
LPPs of the placebo group failed to discriminate between unpleasant and neutral pictures, while the LPPs of the nocebo group showed a clear differentiation. We conclude that the placebo manipulation already affected the processing of the emotional stimuli and, in consequence, the processing of the pain stimuli. In summary, the study revealed that the modulation of pain by emotions, albeit a reliable and well-established finding, is further tuned by reinforced expectations—known to induce placebo/nocebo effects—which should be addressed in future research and considered in clinical applications.
Multitasking, defined as performing more than one task at a time, typically yields performance decrements, for instance, in processing speed and accuracy. These performance costs are often distributed asymmetrically among the involved tasks. Under suitable conditions, this can be interpreted as a marker for prioritization of one task – the one that suffers less – over the other. One source of such task prioritization is the use of different effector systems (e.g., oculomotor system, vocal tract, limbs) and their characteristics. The present work explores such effector system-based task prioritization by examining to what extent the associated effector systems determine which task is processed with higher priority in multitasking situations. To this end, three different paradigms are used, namely the simultaneous (stimulus) onset paradigm, the psychological refractory period (PRP) paradigm, and the task switching paradigm. These paradigms invoke situations in which two tasks (in the present studies, basic spatial decision tasks) are a) initiated at exactly the same time, b) initiated with a short, varying temporal distance (but still temporally overlapping), or c) alternated randomly (without temporal overlap). The results allow for three major conclusions: 1. The assumption of effector system-based task prioritization according to an ordinal pattern (oculomotor > pedal > vocal > manual, indicating decreasing prioritization) is supported by the observed data in the simultaneous onset paradigm. This data pattern cannot be explained by a rigid “first come, first served” task scheduling principle. 2. The data from the PRP paradigm confirmed the assumption of vocal-over-manual prioritization and showed that classic PRP effects (as a marker for task order-based prioritization) can be modulated by effector system characteristics. 3.
However, the mere cognitive representation of task sets differing in effector systems (which must be held active to switch between them), without an actual temporal overlap in task processing, is not sufficient to elicit the same effector system prioritization phenomena observed for overlapping tasks. In summary, the insights obtained by the present work support the assumptions of parallel central task processing and resource sharing among tasks, as opposed to exclusively serial processing of central processing stages. Moreover, they indicate that effector systems are a crucial factor in multitasking and suggest an integration of corresponding weighting parameters into existing dual-task control frameworks.
Background
While the coordination of oculomotor and manual behavior is essential for driving a car, surprisingly little is known about this interaction, especially in situations requiring a quick steering reaction. In the present study, we analyzed oculomotor gaze and manual steering behavior in approach and avoidance tasks. Three task blocks were implemented within a dynamic simulated driving environment, requiring the driver either to steer away from or toward a visual stimulus, or to switch between both tasks.
Results
Task blocks requiring task switches were associated with higher manual response times and increased error rates. Manual response times did not significantly differ depending on whether drivers had to steer away from vs toward a stimulus, whereas oculomotor response times and gaze pattern variability were increased when drivers had to steer away from a stimulus compared to steering toward a stimulus.
Conclusion
The increased manual response times and error rates in mixed tasks indicate performance costs associated with cognitive flexibility, while the increased oculomotor response times and gaze pattern variability indicate a parsimonious cross-modal action control strategy (avoiding stimulus fixation prior to steering away from it) for the avoidance scenario. Several discrepancies between these results and typical eye–hand interaction patterns in basic laboratory research suggest that the specific goals and complex perceptual affordances associated with driving a vehicle strongly shape cross-modal control of behavior.
Responding in the presence of stimuli leads to an integration of stimulus features and response features into event files, which can later be retrieved to assist action control. This integration mechanism is not limited to target stimuli, but can also include distractors (distractor-response binding). A recurring research question is which factors determine whether or not distractors are integrated. One suggested candidate factor is target-distractor congruency: distractor-response binding effects were reported to be stronger for congruent than for incongruent target-distractor pairs. Here, we discuss a general problem with including the factor of congruency in typical analyses used to study distractor-based binding effects. Integrating this factor leads to a confound that may explain any differences between distractor-response binding effects of congruent and incongruent distractors by a simple congruency effect. Simulation data confirmed this argument. We propose to interpret previous data cautiously and discuss potential avenues to circumvent this problem in the future.
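The confound can be made concrete with a toy simulation. This is a minimal sketch under assumed numbers, not the authors' actual simulation: probe reaction times contain only a 20 ms probe-congruency benefit and no true binding, yet splitting the standard binding contrast by prime congruency produces opposite apparent "binding effects", because with distractor repetition the probe's congruency is determined by prime congruency and response relation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000  # simulated trials per design cell

def mean_rt(p_probe_congruent):
    # RT model contains ONLY a 20 ms probe-congruency benefit -- no binding
    congruent = rng.random(n) < p_probe_congruent
    return float(np.mean(500 - 20 * congruent + rng.normal(0, 50, n)))

def apparent_binding(prime_congruent):
    # with distractor repetition, probe congruency is fixed by prime
    # congruency and the response relation (repeat vs. change)
    dr_rr = mean_rt(1.0 if prime_congruent else 0.0)  # response repetition
    dr_rc = mean_rt(0.0 if prime_congruent else 1.0)  # response change
    # with distractor change, probe congruency is assumed balanced (50/50)
    dc_rr = mean_rt(0.5)
    dc_rc = mean_rt(0.5)
    return (dc_rr - dr_rr) - (dc_rc - dr_rc)

b_congruent = apparent_binding(True)
b_incongruent = apparent_binding(False)
print(round(b_congruent), round(b_incongruent))  # roughly +20 vs. -20 ms
```

A pure congruency effect thus masquerades as a modulation of binding by congruency, which is exactly the confound the abstract describes.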
Promising initial insights show that offices designed to permit physical activity (PA) may reduce workplace sitting time. Biophilic approaches are intended to introduce natural surroundings into the workplace, and preliminary data show positive effects on stress reduction and elevated productivity within the workplace. The primary aim of this pilot study was to analyze changes in workplace sitting time and self-reported habit strength concerning uninterrupted sitting and PA during work, when relocating from a traditional office setting to “active” biophilic-designed surroundings. The secondary aim was to assess possible changes in work-associated factors such as satisfaction with the office environment, work engagement, and work performance, among office staff. In a pre-post designed field study, we collected data through an online survey on health behavior at work. Twelve participants completed the survey before (one-month pre-relocation, T1) and twice after the office relocation (three months (T2) and seven months post-relocation (T3)). Standing time per day during office hours increased from T1 to T3 by about 40 min per day (p < 0.01). Other outcomes remained unaltered. The results suggest that changing office surroundings to an active-permissive biophilic design increased standing time during working hours. Future larger-scale controlled studies are warranted to investigate the influence of office design on sitting time and work-associated factors during working hours in depth.
A hallmark of habitual actions is that, once they are established, they become insensitive to changes in the values of action outcomes. In this article, we review empirical research that examined effects of posttraining changes in outcome values in outcome-selective Pavlovian-to-instrumental transfer (PIT) tasks. This review suggests that cue-instigated action tendencies in these tasks are not affected by weak and/or incomplete revaluation procedures (e.g., selective satiety) and substantially disrupted by a strong and complete devaluation of reinforcers. In a second part, we discuss two alternative models of a motivational control of habitual action: a default-interventionist framework and expected value of control theory. It is argued that the default-interventionist framework cannot solve the problem of an infinite regress (i.e., what controls the controller?). In contrast, expected value of control can explain control of habitual actions with local computations and feedback loops without (implicit) references to control homunculi. It is argued that insensitivity to changes in action outcomes is not an intrinsic design feature of habits but, rather, a function of the cognitive system that controls habitual action tendencies.
Both low-level physical saliency and social information, as presented by human heads or bodies, are known to drive gaze behavior in free-viewing tasks. Researchers have previously made use of a great variety of face stimuli, ranging from photographs of real humans to schematic faces, frequently without systematically differentiating between the two. In the current study, we used a Generalized Linear Mixed Model (GLMM) approach to investigate to what extent schematic artificial faces can predict gaze when they are presented alone or in competition with real human faces. Relative differences in predictive power became apparent, while GLMMs suggest substantial effects for real and artificial faces in all conditions. Artificial faces were accordingly less predictive than real human faces but still contributed significantly to gaze allocation. These results help to further our understanding of how social information guides gaze in complex naturalistic scenes.
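The flavor of such an analysis can be sketched with simulated data. This is a hypothetical illustration: the predictor weights are invented, and a plain logistic regression (fit by Newton-Raphson) stands in for the full GLMM, which would additionally include random effects for participants and images.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000  # simulated image regions

# hypothetical predictors: low-level saliency and social content of a region
saliency = rng.random(n)
social = rng.integers(0, 2, n)                     # 1 = region contains a face
true_logit = -2.0 + 1.0 * saliency + 2.5 * social  # assumed true weights
fixated = rng.random(n) < 1 / (1 + np.exp(-true_logit))

# maximum-likelihood logistic regression via Newton-Raphson
X = np.column_stack([np.ones(n), saliency, social])
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += np.linalg.solve(X.T @ (X * (p * (1 - p))[:, None]),
                            X.T @ (fixated - p))

print(np.round(beta, 2))  # the social weight should dominate saliency
```

Comparing the fitted weights (here, the coefficient for social content vs. saliency) is the logic behind the "relative predictive power" comparison in the abstract.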
For the current study, Lazarus' stress-coping theory and the associated model of psychosocial adjustment to chronic illness and disability (Pakenham, 1999) formed the foundation for identifying determinants of adjustment to ALS. We aimed to investigate the evolution of psychosocial adjustment to ALS and to determine its long-term predictors. A longitudinal study design with four measurement time points was therefore used to assess patients' quality of life, depression, and stress-coping model related aspects, such as illness characteristics, social support, cognitive appraisals, and coping strategies, during a period of two years. Regression analyses revealed that 55% of the variance in severity of depressive symptoms and 47% of the variance in quality of life at T2 were accounted for by all the T1 predictor variables taken together. On the level of individual contributions, protective buffering and appraisal of own coping potential accounted for a significant percentage of the variance in severity of depressive symptoms, whereas problem management coping strategies explained variance in quality of life scores. Illness characteristics at T2 did not explain any variance in either adjustment outcome. Overall, the pattern of the longitudinal results indicated stable depressive symptoms and quality of life indices, reflecting a successful adjustment to the disease across the four measurement time points. Empirical evidence is provided for the predictive value of social support, cognitive appraisals, and coping strategies, but not of illness parameters such as severity and duration, for adaptation to ALS. The current study contributes to a better conceptualization of adjustment, allowing us to provide evidence-based support beyond medical and physical intervention for people with ALS.
No abstract available.
Maladaptive coping mechanisms influence the health-related quality of life (HRQoL) of individuals facing acute and chronic stress. Trait emotional intelligence (EI) may provide a protective shield against the debilitating effects of maladaptive coping, thus contributing to maintained HRQoL. Low trait EI, on the other hand, may predispose individuals to apply maladaptive coping, consequently resulting in lower HRQoL. The current research comprises two studies. Study 1 was designed to investigate the protective effects of trait EI and its utility for efficient coping in dealing with the stress caused by chronic heart failure (CHF) in a cross-cultural setting (Pakistan vs. Germany). N = 200 CHF patients were recruited at cardiology institutes in Multan, Pakistan, and in Würzburg and Brandenburg, Germany. Path analysis confirmed the expected relation between low trait EI and low HRQoL and revealed that this association was mediated by maladaptive metacognitions and negative coping strategies in Pakistani but not German CHF patients. Interestingly, the specific coping strategies were also culture-specific. The Pakistani sample considered religious coping to be highly important, whereas the German sample was focused on adopting a healthy lifestyle, such as doing exercise. These findings are in line with cultural characteristics suggesting that German CHF patients have an internal locus of control as compared to an external locus of control in Pakistani CHF patients. Finally, the findings from Study 1 corroborate the culture-independent validity of the metacognitive model of generalized anxiety disorder.
In addition to low trait EI, high interoceptive accuracy (IA) may predispose individuals to interpret cardiac symptoms as threatening, thus leading to anxiety. To examine this proposition, Study 2 compared individuals with high vs. low IA in dealing with a psychosocial stressor (public speaking) in an experimental lab study. In addition, a physiological intervention, transcutaneous vagus nerve stimulation (t-VNS), and cognitive reappraisal (CR) were applied during and after the anticipation of the speech in order to facilitate coping with stress. N = 99 healthy volunteers participated in the study. The results showed descriptive effects that only reached trend level. They suggested a tendency of high-IA individuals to perceive the situation as more threatening, as indicated by increased heart rate, reduced heart rate variability in the high-frequency spectrum, and high subjective anxiety during anticipation and actual performance of the speech. This suggests a potential vulnerability of high-IA individuals for developing anxiety disorders, specifically social anxiety disorder, in case negative self-focused attention and negative evaluation are applied to the (more prominently perceived) increased cardiac responding during anticipation and actual delivery of the public speech. The study did not reveal any significant protective effects of t-VNS or CR.
In summary, the current research suggested that low trait EI and high IA predicted worse psychological adjustment to chronic and acute distress. Low trait EI facilitated maladaptive metacognitive processes resulting in the use of negative coping strategies in Study 1, whereas increased IA regarding cardioceptions predicted high physiological arousal in Study 2. Finally, the German vs. the Pakistani culture greatly affected the preference for specific coping strategies. On the one hand, these findings have implications for caregivers to provide culture-specific treatments; on the other hand, they highlight high IA as a possible vulnerability to be targeted for the prevention of (social) anxiety.
Forward Collision Alarms (FCA) are intended to signal hazardous traffic situations and the need for an immediate corrective driver response. However, data from naturalistic driving studies revealed that approximately half of all alarms activated by conventional FCA systems were unnecessary. In these situations, the alarm activation was correct according to the implemented algorithm, but the alarms led to no or only minimal driver responses. Psychological research can make an important contribution to understanding drivers’ needs when interacting with driver assistance systems.
The overarching objective of this thesis was to gain a systematic understanding of psychological factors and processes that influence drivers’ perceived need for assistance in potential collision situations. To elucidate under which conditions drivers perceive alarms as unnecessary, a theoretical framework of drivers’ subjective alarm evaluation was developed. A further goal was to investigate the impact of unnecessary alarms on drivers’ responses and acceptance. Four driving simulator studies were carried out to examine the outlined research questions.
In line with the hypotheses derived from the theoretical framework, the results suggest that drivers’ perceived need for assistance is determined by their retrospective subjective hazard perception. While predictions of conventional FCA systems are based exclusively on physical measurements resulting in a time to collision, human drivers additionally consider their own manoeuvre intentions and those attributed to other road users to anticipate the further course of a potentially critical situation. When drivers anticipate a dissolving outcome of a potential conflict, they perceive the situation as less hazardous than the system does. Based on this discrepancy, the system would activate an alarm while drivers’ perceived need for assistance is low. To sum up, the described factors and processes cause drivers to perceive certain alarms as unnecessary. Although drivers accept unnecessary alarms less than useful alarms, unnecessary alarms do not reduce their overall system acceptance. While unnecessary alarms cause moderate driver responses in the short term, the intensity of responses decreases with multiple exposures to unnecessary alarms. Overall, the effects of unnecessary alarms on drivers’ alarm responses and acceptance seem to be rather uncritical.
This thesis provides insights into human factors that explain when FCAs are perceived as unnecessary. These factors might contribute to design FCA systems tailored to drivers’ needs.
A commentary on: Feeling the Conflict: The Crucial Role of Conflict Experience in Adaptation by Desender, K., Van Opstal, F., and Van den Bussche, E. (2014). Psychol. Sci. 25, 675–683. doi: 10.1177/0956797613511468
Conflict adaptation in masked priming has recently been proposed to rely not on successful conflict resolution but rather on conflict experience (Desender et al., 2014). We re-assessed this proposal in a direct replication and also tested a potential confound due to conflict strength. The data supported this alternative view, but also failed to replicate basic conflict adaptation effects of the original study despite considerable power.
Pictorial stimuli can vary on many dimensions, several aspects of which are captured by the term ‘visual complexity.’ Visual complexity has been described by example: “a picture of a few objects, colors, or structures would be less complex than a very colorful picture of many objects that is composed of several components.” Prior studies have reported a relationship between affect and visual complexity, where complex pictures are rated as more pleasant and arousing. However, a relationship in the opposite direction, an effect of affect on visual complexity, is also possible; emotional arousal and valence are known to influence selective attention and visual processing. In a series of experiments, we found that ratings of visual complexity correlated with affective ratings, and independently also with computational measures of visual complexity. These computational measures did not correlate with affect, suggesting that complexity ratings are separately related to distinct factors. We investigated the relationship between affect and ratings of visual complexity, finding an ‘arousal-complexity bias’ to be a robust phenomenon. Moreover, we found that this bias could be attenuated when explicitly indicated, but it did not correlate with inter-individual difference measures of affective processing and was largely unrelated to cognitive and eyetracking measures. Taken together, the arousal-complexity bias seems to be caused by a relationship between arousal and visual processing, as has been described for the greater vividness of arousing pictures. The described bias is also relevant from an experimental perspective because visual complexity is often considered a variable to control for when using pictorial stimuli.
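The abstract does not specify which computational measures were used, but a common and easily reproduced proxy for visual complexity is compressibility: images with many objects and components compress worse than uniform ones. A sketch with zlib, for illustration only:

```python
import zlib
import numpy as np

def complexity(img):
    # compression-based proxy: the less compressible, the more complex
    raw = np.asarray(img, dtype=np.uint8).tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

rng = np.random.default_rng(2)
uniform_patch = np.full((64, 64), 128)          # one "object", one gray level
noisy_patch = rng.integers(0, 256, (64, 64))    # many fine-grained components
print(complexity(uniform_patch), complexity(noisy_patch))
```

The uniform patch yields a ratio near zero, while the noisy patch is nearly incompressible; such objective scores can then be correlated with complexity ratings, as in the experiments described above.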
We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = 0.05 to p = 0.005, is deleterious for the finding of new discoveries and the progress of science. Given that blanket and variable alpha levels both are problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and sample size much more directly than significance testing does; but none of the statistical tools should be taken as the new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, and implications for applications. To boil all this down to a binary decision based on a p-value threshold of 0.05, 0.01, 0.005, or anything else, is not acceptable.
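One way to make the practical stakes of a stricter alpha concrete is a standard normal-approximation sample-size calculation for a two-sample comparison. The assumptions here (two-sided test, 80% power, a medium effect of d = 0.5) are illustrative, not taken from the text:

```python
import math
from statistics import NormalDist

def n_per_group(alpha, power=0.8, d=0.5):
    # normal approximation for a two-sided, two-sample mean comparison:
    # n per group = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2
    z = NormalDist()
    return math.ceil(2 * (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) ** 2
                     / d ** 2)

print(n_per_group(0.05), n_per_group(0.005))  # roughly 63 vs. 107 per group
```

Moving the threshold from 0.05 to 0.005 thus inflates the required sample by roughly 70% under these assumptions, one face of the trade-off the abstract argues should not be settled by any blanket threshold.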
When More Is Better – Consumption Priming Decreases Responders’ Rejections in the Ultimatum Game
(2017)
During the past decades, economic theories of rational choice have been confronted with outcomes that severely challenge their claim of universal validity. For example, traditional theories cannot account for refusals to cooperate if cooperation would result in higher payoffs. A prominent illustration is responders’ rejections of positive but unequal payoffs in the Ultimatum Game. To accommodate this anomaly in a rational framework, one needs to assume both a preference for higher payoffs and a preference for equal payoffs. The current set of studies shows that the relative weight of these preference components depends on external conditions and that consumption priming may decrease responders’ rejections of unequal payoffs. Specifically, we demonstrate that increasing the accessibility of consumption-related information accentuates the preference for higher payoffs. Furthermore, consumption priming increased responders’ reaction times for unequal payoffs, which suggests an increased conflict between both preference components. While these results may also be integrated into existing social preference models, we try to identify some basic psychological processes underlying economic decision making. Going beyond the Ultimatum Game, we propose that a distinction between comparative and deductive evaluations may provide a more general framework to account for various anomalies in behavioral economics.
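The interplay of the two preference components can be illustrated with a simple inequity-aversion utility in the style of Fehr and Schmidt. This is a hedged sketch: the fairness weight `alpha` is hypothetical, not a parameter estimated in these studies, and lowering it mimics a priming manipulation that accentuates the payoff motive.

```python
def responder_utility(own, other, alpha):
    # own payoff minus a penalty for disadvantageous inequality
    return own - alpha * max(other - own, 0)

def accepts(own, other, alpha):
    # rejecting leaves both players with 0, so accept iff utility >= 0
    return responder_utility(own, other, alpha) >= 0

# a strong fairness weight makes an 8/2 split get rejected; a weaker one
# (as consumption priming might induce) flips the decision to acceptance
print(accepts(2, 8, alpha=0.9), accepts(2, 8, alpha=0.2))
```

In this toy model, the rejection of a positive but unequal offer is rational given the inequality penalty, and shifting the relative weight of the components changes behavior without abandoning the rational framework.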
In today’s world of work, networking behaviors are an important and viable strategy to enhance success in work and career domains. Concerning personality as an antecedent of networking behaviors, prior studies have exclusively relied on trait perspectives that focus on how people feel, think, and act. Adopting a motivational perspective on personality, we enlarge this focus and argue that beyond traits predominantly tapping social content, motives shed further light on instrumental aspects of networking – or why people network. We use McClelland’s implicit motives framework of need for power (nPow), need for achievement (nAch), and need for affiliation (nAff) to examine instrumental determinants of networking. Using a facet theoretical approach to networking behaviors, we predict differential relations of these three motives with facets of (1) internal vs. external networking and (2) building, maintaining, and using contacts. We conducted an online study, in which we temporally separate measures (N = 539 employed individuals) to examine our hypotheses. Using multivariate latent regression, we show that nAch is related to networking in general. In line with theoretical differences between networking facets, we find that nAff is positively related to building contacts, whereas nPow is positively related to using internal contacts. In sum, this study shows that networking is not only driven by social factors (i.e., nAff), but instead the achievement motive is the most important driver of networking behaviors.
Social attention is a ubiquitous, but also enigmatic and sometimes elusive phenomenon. We direct our gaze at other human beings to see what they are doing and to guess their intentions, but we may also absorb social events en passant as they unfold in the corner of the eye. We use our gaze as a discrete communication channel, sometimes conveying pieces of information which would be difficult to explicate, but we may also find ourselves avoiding eye-contact with others in moments when self-disclosure is fear-laden. We experience our gaze as the most genuine expression of our will, but research also suggests considerable levels of predictability and automaticity in our gaze behavior. The phenomenon’s complexity has hindered researchers from developing a unified framework which can conclusively accommodate all of its aspects, or from even agreeing on the most promising research methodologies.

The present work follows a multi-methods approach, taking on several aspects of the phenomenon from various directions. Participants in study 1 viewed dynamic social scenes on a computer screen. Here, low-level physical saliency (i.e., color, contrast, or motion) and human heads both attracted gaze to a similar extent, providing a comparison of two vastly different classes of gaze predictors in direct juxtaposition. In study 2, participants with varying degrees of social anxiety walked in a public train station while their eye movements were tracked. With increasing levels of social anxiety, participants showed a relative avoidance of gaze at near compared to distant people. When replicating the experiment in a laboratory situation with a matched participant group, social anxiety did not modulate gaze behavior, fueling the debate around appropriate experimental designs in the field. Study 3 employed virtual reality (VR) to investigate social gaze in a complex and immersive, but still highly controlled situation. In this situation, participants exhibited a gaze behavior which may be more typical for real-life compared to laboratory situations, as they avoided gaze contact with a virtual conspecific unless she gazed at them. This study provided important insights into gaze behavior in virtual social situations, helping to better estimate the possible benefits of this new research approach. Throughout all three experiments, participants showed consistent inter-individual differences in their gaze behavior. However, the present work could not resolve whether these differences are linked to psychologically meaningful traits or whether they instead have an epiphenomenal character.
When observing another agent performing simple actions, these actions are systematically remembered as one’s own after a brief period of time. Such observation inflation has been documented as a robust phenomenon in studies in which participants passively observed videotaped actions. Whether observation inflation also holds for direct, face-to-face interactions is an open question that we addressed in two experiments. In Experiment 1, participants commanded the experimenter to carry out certain actions, and they indeed reported false memories of self-performance in a later memory test. Experiment 2 confirmed that the size of this inflation effect was similar to that found with passive observation. These findings suggest that observation inflation might affect action memory in a broad range of real-world interactions.
Saliency-based models of visual attention postulate that, when a scene is freely viewed, attention is predominantly allocated to those elements that stand out in terms of their physical properties. However, eye-tracking studies have shown that saliency models fail to predict gaze behavior accurately when social information is included in an image. Notably, gaze pattern analyses revealed that depictions of human beings are heavily prioritized independent of their low-level physical saliency. What remains unknown, however, is whether the prioritization of such social features is a reflexive or a voluntary process. To investigate the early stages of social attention in more detail, participants viewed photographs of naturalistic scenes with and without social features (i.e., human heads or bodies) for 200 ms while their eye movements were being recorded. We observed significantly more first eye movements to regions containing social features than would be expected from a chance-level distribution of saccades. Additionally, a generalized linear mixed model analysis revealed that the social content of a region better predicted first saccade direction than its saliency, suggesting that social features partially override the impact of low-level physical saliency on gaze patterns. Given the brief image presentation time that precluded visual exploration, our results provide compelling evidence for a reflexive component in social attention. Moreover, the present study emphasizes the importance of considering social influences for a more coherent understanding of human attentional selection.
Visual saliency maps, reflecting locations that stand out from the background in terms of their low-level physical features, have proven very useful for empirical research on attentional exploration and reliably predict gaze behavior. In the present study we tested these predictions for socially relevant stimuli occurring in naturalistic scenes using eye tracking. We hypothesized that social features (i.e., human faces or bodies) would be processed preferentially over non-social features (i.e., objects, animals) regardless of their low-level saliency. To challenge this notion, we included three tasks that deliberately addressed non-social attributes. In agreement with our hypothesis, social information, especially heads, was preferentially attended compared to highly salient image regions across all tasks. Social information was never required to solve a task but was attended nevertheless. Moreover, after completing the task requirements, viewing behavior reverted to that of free viewing, with heavy prioritization of social features. Additionally, initial eye movements, reflecting potentially automatic shifts of attention, were predominantly directed towards heads irrespective of top-down task demands. On these grounds, we suggest that social stimuli may provide exclusive access to the priority map, enabling social attention to override reflexive and controlled attentional processes. Furthermore, our results challenge the generalizability of saliency-based attention models.
Do people evaluate an open-minded midwife less positively than a caring midwife? Both open-minded and caring are generally seen as positive attributes. However, consistency varies: the attribute caring is consistent with the midwife stereotype, while open-minded is not. In general, both stimulus valence and consistency can influence evaluations. Six experiments investigated the respective influence of valence and consistency on evaluative judgments in the domain of stereotyping. In an impression formation paradigm, valence and consistency of stereotypic information about target persons were manipulated orthogonally, and spontaneous evaluations of these target persons were measured. Valence reliably influenced evaluations. However, for strongly valenced stereotypes, no effect of consistency was observed. Parameters possibly preventing the occurrence of consistency effects were ruled out, specifically the valence of inconsistent attributes, the processing priority of category information, and impression formation instructions. However, consistency had subtle effects on evaluative judgments if the information about a target person was not strongly valenced and experimental conditions were optimal. In conclusion, both stereotype valence and consistency can, in principle, play a role in evaluative judgments of stereotypic target persons. However, the more subtle influence of consistency does not seem to substantially affect evaluations of stereotyped target persons. Implications for fluency research and stereotype disconfirmation are discussed.
Social Cueing of Numerical Magnitude: Observed Head Orientation Influences Number Processing
(2019)
In many parts of the modern world, numbers are used as tools to describe spatial relationships, be it heights, latitudes, or distances. However, this connection goes deeper: a myriad of studies has shown that number representations are rooted in space (vertical, horizontal, and/or radial). For instance, numbers were shown to affect spatial perception and, conversely, perceptions or movements in space were shown to affect number estimations. This bidirectional link has already found didactic application in the classroom when children are taught the meaning of numbers. However, our knowledge about the cognitive (and neuropsychological) processes underlying numerical magnitude operations is still very limited.
Several authors have indicated that processing within peripersonal space (i.e., the space surrounding the body in reaching distance) and numerical magnitude operations are functionally equivalent. This assumption has several implications that the present work aims to describe. For instance, vision and visuospatial attention orienting play a prominent role in processing within peripersonal space. Indeed, both neuropsychological and behavioral studies have suggested a similar role of vision and visuospatial attention orienting in number processing. Moreover, social cognition research has shown that movements, posture, and gestures affect not only the representation of one's own peripersonal space but also the visuospatial attention behavior of an observer. Against this background, the current work tests the specific implication of the functional equivalence assumption that the spatial attention response to an observed person's posture should extend to the observer's numerical magnitude operations.
The empirical part of the present work tests the spatial attention response of observers to vertical head postures (with continuing eye contact with the observer) in both perceptual and numerical space. Two experimental series are presented that follow the steps from the observation of another person's vertical head orientation (within his/her peripersonal space) to the observer's attention orienting response (Experimental Series A), and from there to the observer's magnitude operations with numbers (Experimental Series B). Results show that the observation of a movement from a neutral to a vertical head orientation (Experiment 1), as well as the observation of the vertical head orientation alone (Experiment 3), shifted the observer's spatial attention in correspondence with the direction information of the observed head (up vs. down). Movement from a vertical to a neutral end position, however, had no effect on the observer's spatial attention orienting response (Experiment 2). Furthermore, following a down-tilted head posture (relative to an up- or non-tilted head orientation), observers generated smaller numbers in a random number generation task (range 1-9, Experiment 4), gave smaller estimates to numerical trivia questions (mostly multi-digit numbers, Experiment 5), and, in a free-choice task, less frequently chose response keys that were associated with larger numerical magnitudes in an intermixed numerical magnitude task.
Experimental Series A served as groundwork for Experimental Series B, as it demonstrated that observing another person's head orientation indeed triggered the expected directional attention orienting response in the observer. Building on this preliminary work, the results of Experimental Series B lend support to the assumption that numerical magnitude operations are grounded in visuospatial processing of peripersonal space. Thus, the present studies brought together numerical and social cognition as well as peripersonal space research. Moreover, the empirical part of the present work provides the basis for elaborating on the role of processing within peripersonal space in terms of Walsh's (2003, 2013) Theory of Magnitude. In this context, a specification of the Theory of Magnitude was outlined in a processing model that stresses the pivotal role of spatial attention orienting. Implications for mental magnitude operations are discussed, and possible applications in the classroom and beyond are described.
In most foreign language learning contexts, there are only rare chances for contact with native speakers of the target language. In such a situation, reading plays an important role in language acquisition as well as in gaining cultural information about the target language and its speakers.
Previous research has indicated that reading in a foreign language is a complex process influenced by various linguistic, cognitive, and affective factors. The aim of the present study was to test two structural models of the relationship between reading comprehension in the native language (L1), English (L2) reading motivation, metacognitive awareness of L2 reading strategies, and reading comprehension of English as a foreign language in two samples. Furthermore, the current study aimed to examine the differences between Egyptian and German students in their perceived usage of reading strategies while reading English texts, as well as to explore the pattern of their motivation toward reading English texts. For this purpose, 401 students were recruited from Germany (n=200) and Egypt (n=201). Metacognitive awareness of reading strategies was assessed with the Survey of Reading Strategies (SORS) developed by Mokhtari and Sheorey (2002). L2 reading motivation was measured with a reading motivation questionnaire (L2RMQ) based on the reviewed reading motivation research. In addition, two reading tests were administered: one measuring reading comprehension in the native language (German/Arabic) and the other measuring English reading comprehension.
To analyze the collected data, descriptive statistics and independent t-tests were computed. In addition, structural equation modeling was applied to test the strength of the relationships between the variables under study.
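An independent-samples comparison of the kind mentioned above can be sketched as follows. The data and the function are purely illustrative (hypothetical ratings, Welch's unequal-variance t statistic), not the study's actual analysis:

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with possibly
    unequal variances, e.g. comparing strategy-use scores between
    two national samples."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    se = (va / na + vb / nb) ** 0.5
    return (mean(sample_a) - mean(sample_b)) / se

# Hypothetical strategy-use ratings for two groups of learners
group_a = [3.1, 3.4, 2.9, 3.8, 3.5, 3.0]
group_b = [2.6, 2.8, 3.0, 2.5, 2.9, 2.7]
print(f"t = {welch_t(group_a, group_b):.2f}")
```

The t statistic would then be referred to a t distribution with Welch-Satterthwaite degrees of freedom to obtain a p-value.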
The results revealed that L1 reading comprehension, whether in German or Arabic, had the strongest relationship with L2 reading comprehension. However, the relationship between L2 intrinsic reading motivation and L2 reading comprehension was not significant in either the German or the Egyptian model. On the other hand, the relationships between L2 extrinsic reading motivation, metacognitive awareness of reading strategies, and L2 reading comprehension were significant only in the German sample. These results and their pedagogical implications for education and practice are discussed.
Since exposure therapy for anxiety disorders incorporates extinction of contextual anxiety, relapses may be due to reinstatement processes. Animal research has demonstrated more stable extinction memory and less anxiety relapse due to vagus nerve stimulation (VNS). We report a valid human three-day context conditioning, extinction, and return-of-anxiety protocol, which we used to examine effects of transcutaneous VNS (tVNS). Seventy-five healthy participants received electric stimuli (unconditioned stimuli, US) during acquisition (Day 1) when guided through one virtual office (anxiety context, CTX+) but never in another (safety context, CTX−). During extinction (Day 2), participants received tVNS, sham, or no stimulation and revisited both contexts without US delivery. On Day 3, participants received three USs for reinstatement, followed by a test phase. Successful acquisition (i.e., startle potentiation, lower valence, and higher arousal, anxiety, and contingency ratings in CTX+ versus CTX−), the disappearance of these effects during extinction, and successful reinstatement indicate the validity of this paradigm. Interestingly, we found generalized reinstatement in startle responses and differential reinstatement in valence ratings. Altogether, our protocol serves as a valid conditioning paradigm. Reinstatement effects indicate different anxiety networks underlying physiological versus verbal responses. However, tVNS affected neither extinction nor reinstatement, which calls for validation and improvement of the stimulation protocol.
Strong bottom-up impulses and weak top-down control may interactively lead to overeating and, consequently, weight gain. In the present study, female university freshmen were tested at the start of the first semester and again at the start of the second semester. Attentional bias toward high- or low-calorie food-cues was assessed using a dot-probe paradigm and participants completed the Barratt Impulsiveness Scale. Attentional bias and motor impulsivity interactively predicted change in body mass index: motor impulsivity positively predicted weight gain only when participants showed an attentional bias toward high-calorie food-cues. Attentional and non-planning impulsivity were unrelated to weight change. Results support findings showing that weight gain is prospectively predicted by a combination of weak top-down control (i.e. high impulsivity) and strong bottom-up impulses (i.e. high automatic motivational drive toward high-calorie food stimuli). They also highlight the fact that only specific aspects of impulsivity are relevant in eating and weight regulation.
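The attentional bias measure underlying the dot-probe paradigm above can be sketched as a simple difference score. The reaction times below are hypothetical, and this sketch omits the trial-level filtering a real analysis would include:

```python
from statistics import mean

def attentional_bias(rt_incongruent, rt_congruent):
    """Dot-probe bias score: mean RT when the probe replaces the neutral
    cue minus mean RT when it replaces the food cue. Positive values
    indicate attention drawn toward the food cue."""
    return mean(rt_incongruent) - mean(rt_congruent)

# Hypothetical reaction times in milliseconds
rt_probe_at_food = [452, 438, 460, 445]      # congruent trials
rt_probe_at_neutral = [470, 465, 480, 472]   # incongruent trials

bias = attentional_bias(rt_probe_at_neutral, rt_probe_at_food)
print(f"bias = {bias:.1f} ms")
```

A positive score of this kind is what the study describes as an attentional bias toward high-calorie food cues.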
The purpose of the present work is to unify two major approaches to moral judgment. Kohlberg's well-known stage theory assumes a sequence of discrete stages that underlie all moral judgment. Stage theory recognizes the problem of integrating considerations but offers no way to accomplish such integration, even for information from any one stage. And, of course, the stage concept denies any significant integration across different stages. Thus, research on moral judgment needs to study the integration problem, which can be tested within Anderson's theory of information integration. The main purpose of the present study was to extend this unificationist approach to the issue of sexual morality. A novel task presents information from two very different stages. The results showed that, contrary to the discreteness assumption, the stage informers were positively correlated in punishment judgments of both genders about consensual sex of juveniles. Furthermore, the subjects integrated considerations from those very different stages, also in contrast to the hypothesis that only a single stage is operative at any time.
Large-Scale Assessment of a Fully Automatic Co-Adaptive Motor Imagery-Based Brain Computer Interface
(2016)
In recent years, Brain Computer Interface (BCI) technology has benefited from the development of sophisticated machine learning methods that let the user operate the BCI after a few trials of calibration. One remarkable example is the recent development of co-adaptive techniques, which have proven to extend the use of BCIs to people unable to achieve successful control with the standard BCI procedure. Especially for BCIs based on the modulation of the Sensorimotor Rhythm (SMR) these improvements are essential, since a non-negligible percentage of users is unable to operate SMR-BCIs efficiently. In this study we evaluated, for the first time, a fully automatic co-adaptive BCI system on a large scale. A pool of 168 participants naive to BCIs operated the co-adaptive SMR-BCI in one single session. Different psychological interventions were performed prior to the BCI session in order to investigate how motor coordination training and relaxation could influence BCI performance. A neurophysiological indicator based on the Power Spectral Density (PSD) was extracted from the recording of a few minutes of resting-state brain activity and tested as a predictor of BCI performance. Results show that high accuracies in operating the BCI could be reached by the majority of the participants before the end of the session. BCI performance could be significantly predicted by the neurophysiological indicator, consolidating the validity of the previously developed model. Nevertheless, we still found about 22% of users with performance significantly lower than the threshold of efficient BCI control at the end of the session. As inter-subject variability remains the major problem of BCI technology, we point out crucial issues for those who did not achieve sufficient control. Finally, we propose valid developments to move a step forward toward the applicability of the promising co-adaptive methods.
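The band-power computation at the heart of a PSD-based resting-state predictor can be sketched as follows. The signal, sampling rate, and band limits are illustrative assumptions; the study's actual indicator and preprocessing pipeline are not reproduced here:

```python
import math
import random

def band_power(signal, fs, band):
    """Average power of `signal` in the frequency `band` (Hz), estimated
    from a direct DFT periodogram; a stand-in for the PSD-based
    resting-state predictor described above."""
    n = len(signal)
    lo = math.ceil(band[0] * n / fs)
    hi = math.floor(band[1] * n / fs)
    powers = []
    for k in range(lo, hi + 1):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        powers.append((re * re + im * im) / (fs * n))
    return sum(powers) / len(powers)

# Hypothetical resting-state trace: a 10 Hz rhythm (sensorimotor band) in noise
random.seed(0)
fs, duration = 100, 2
signal = [math.sin(2 * math.pi * 10 * i / fs) + 0.5 * random.gauss(0, 1)
          for i in range(fs * duration)]

mu = band_power(signal, fs, (8, 13))
beta = band_power(signal, fs, (20, 30))
print(f"mu-band power = {mu:.3g}, beta-band power = {beta:.3g}")
```

A predictor of this kind scores each user by how much oscillatory power their resting recording shows in the relevant band.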
Research on the deployment and use of technology to assist learning has seen a significant rise over the last decades (Aparicio et al., 2017). The focus on course quality, technology, learning outcome, and learner satisfaction in e-learning has led to insufficient attention by researchers to individual characteristics of learners (Cidral et al., 2017; Hsu et al., 2013). The current work aims to bridge this gap by investigating characteristics identified by previous works and backed by theory as influential individual differences in e-learning. These learner characteristics have been suggested as motivational factors (Edmunds et al., 2012) in decisions by learners to interact and exchange information (Luo et al., 2017).
In this work, e-learning is defined as interaction-dependent information seeking and sharing enabled by technology. This is primarily approached from a media psychology perspective. The roles of learner characteristics, namely beliefs about the source of knowledge (Schommer, 1990), learning styles (Felder & Silverman, 1988), need for affect (Maio & Esses, 2001), need for cognition (Cacioppo & Petty, 1982), and power distance (Hofstede, 1980), in interactions to seek and share information in e-learning are investigated. These investigations were shaped by theory and empirical lessons, as briefly outlined in the next paragraphs. Theoretical support is derived from the technology acceptance model (TAM) by psychologist Davis (1989) and the hyperpersonal model by communication scientist Walther (1996). The TAM was used to describe the influence of learner characteristics on decisions to use e-learning systems (Stantchev et al., 2014). The hyperpersonal model described why computer-mediated communication thrives in e-learning (Kaye et al., 2016) and how learners interpret messages exchanged online (Hansen et al., 2015). This theoretical framework was followed by empirical reviews, which justified the use of interaction and information seeking-sharing as key components of e-learning as well as the selection of learner characteristics. The reviews provided suggestions for the measurement of variables (Kühl et al., 2014) and the investigation design (Dascalau et al., 2015). Investigations were designed and implemented through surveys and quasi-experiments, which were used for three preliminary studies and two main studies. Samples were selected from Germany and Ghana, with the same variables tested in both countries. Hypotheses were tested with interaction and information seeking-sharing as dependent variables, while beliefs about the source of knowledge, learning styles, need for affect, need for cognition, and power distance were independent variables.
Firstly, using analyses of variance, the influence of beliefs about the source of knowledge on interaction choices of learners was supported. Secondly, the role of need for cognition in interaction choices of learners was supported by results from a logistic regression. Thirdly, results from multiple linear regressions backed the influence of need for cognition and power distance on information seeking-sharing behaviour of learners. Fourthly, the relationship between need for affect and need for cognition was supported. The findings may have implications for media psychology research, the theories used in this work, research on e-learning, the measurement of learner characteristics, and the design of e-learning platforms. The findings suggest that the beliefs learners hold about the source of knowledge, their need for cognition, and their power distance can influence decisions to interact and to seek or share information. The outlook from the reviews and findings in this work predicts more research on learner characteristics and a corresponding increase in the use of e-learning by individuals. It is suggested that future studies investigate the relationship between learner autonomy and power distance. Studies on inter-cultural similarities among e-learners in different populations are also suggested.
The present thesis addresses cognitive processing of voice information. Based on general theoretical concepts regarding mental processes, it differentiates between modular, abstract information-processing approaches to cognition and interactive, embodied accounts of mental processing. These general concepts are then transferred to the processing of voice-related information in the context of parallel face-related processing streams. One central issue here is whether and to what extent cognitive voice processing can occur independently, that is, encapsulated from the simultaneous processing of visual person-related information (and vice versa). In Study 1 (Huestegge & Raettig, in press), participants were presented with audio-visual stimuli displaying faces uttering digits.
Audiovisual gender congruency was manipulated: there were male and female faces, each uttering digits with either a male or a female voice (all stimuli were AV-synchronized). Participants were asked to categorize the gender of either the face or the voice by pressing one of two keys in each trial. A central result was that audio-visual gender congruency affected performance: incongruent stimuli were categorized more slowly and less accurately, suggesting a strong cross-modal interaction of the underlying visual and auditory processing routes. Additionally, the effect of incongruent visual information on auditory classification was stronger than the effect of incongruent auditory information on visual categorization, suggesting visual dominance over auditory processing in the context of gender classification. A gender congruency effect was also present under high cognitive load. Study 2 (Huestegge, Raettig, & Huestegge, in press) utilized the same (gender-congruent and -incongruent) stimuli but different tasks: participants now categorized the spoken digits (as odd/even or smaller/larger than 5). This should effectively direct attention away from gender information, which was no longer task-relevant. Nevertheless, congruency effects were still observed in this study. This suggests a relatively automatic processing of cross-modal gender information, which
eventually affects basic speech-based information processing. Study 3 (Huestegge, subm.) focused on the ability of participants to match unfamiliar voices to (either static or dynamic) faces. One result was that participants were indeed able to match voices to faces. Moreover, there was no evidence for any performance increase when dynamic (vs. merely static) faces had to be matched to concurrent voices. The results support the idea that common person-related source information affects both vocal and facial features, and implicit knowledge of this correspondence appears to be used by participants to successfully complete face-voice matching. Taken together, the three studies (Huestegge, subm.; Huestegge & Raettig, in press; Huestegge et al., in press) provide information to further develop current theories of voice processing (in the context of face processing). On a general level, the results of all three studies are not in line with an abstract, modular view of cognition but rather lend further support to interactive, embodied accounts of mental processing.
This dissertation highlights various aspects of basic social attention, choosing versatile approaches to disentangle the precise mechanisms underlying the preference to focus on other human beings. The progressive examination of different social processes is contrasted with aspects of previously adopted principles of general attention. Recent research investigating eye movements during free exploration revealed a clear and robust social bias, especially for the faces of depicted human beings in naturalistic scenes. However, free viewing implies a combination of mechanisms, namely automatic attention (bottom-up), goal-driven allocation (top-down), and contextual cues, and requires consideration of overt orienting (open exploration using the eyes) as well as covert orienting (peripheral attention without eye movements). Within the scope of this dissertation, all of these aspects were disentangled in three studies to provide a thorough investigation of different influences on social attention mechanisms.
In the first study (section 2.1), we implemented top-down manipulations targeting non-social features in a social scene to test competing resources. Interestingly, attention towards social aspects prevailed, even though this was detrimental to completing the task requirements. Furthermore, this bias was evident in overall fixation patterns as well as in fixations occurring directly after stimulus onset, suggesting sustained as well as early preferential processing of social features. Although the introduction of tasks generally changes gaze patterns, our results imply only subtle variation when stimuli are social. In conclusion, this experiment indicates that attention towards social aspects remains preferential even in light of top-down demands.
The second study (section 2.2) comprised two separate experiments: one in which we investigated reflexive covert attention, and another in which we tested reflexive as well as sustained overt attention for images in which a human being was unilaterally located on either the left or the right half of the scene. The first experiment used a modified dot-probe paradigm, in which peripheral probes were presented either congruently on the side of the social feature or incongruently on the non-social side. This was based on the assumption that social features would act similarly to cues in traditional spatial cueing paradigms, thereby facilitating reaction times for probes presented on the social half as opposed to the non-social half. Indeed, results reflected such a congruency effect. The second experiment investigated these reflexive mechanisms by monitoring eye movements and specifying the locations of saccades and fixations for short as well as long presentation times. Again, we found the majority of initial saccades to be congruently directed to the social side of the stimulus. Furthermore, we replicated findings for sustained attention processes, with the highest fixation densities in the head region of the displayed human being.
The third study (section 2.3) tackled the other mechanism proposed in the attention dichotomy, the bottom-up influence. Specifically, we reduced the available contextual information of a scene by using a gaze-contingent display, in which only the currently fixated region was visible to the viewer while the rest of the image remained masked. Thereby, participants had to voluntarily shift their gaze in order to explore the stimulus. First, results replicated the social bias in free-viewing displays. Second, the preference to select social features was also evident in gaze-contingent displays. Third, we found more recurrent gaze patterns for social images compared to non-social ones in both viewing modalities. Taken together, these findings imply a top-down driven preference for social features that is largely independent of contextual information.
Importantly, for all experiments, we took the saliency predictions of different computational algorithms into consideration to ensure that the observed social bias was not a result of high physical saliency within these areas. For our second experiment, we even reduced the stimulus set to those images that yielded lower mean and peak saliency for the side of the stimulus containing the social information, considering algorithms based on low-level features as well as pre-trained high-level features incorporated in deep learning algorithms.
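The mean- and peak-saliency check described above can be sketched as follows. The saliency map and the region split are hypothetical placeholders; real saliency values would come from one of the mentioned algorithms:

```python
import random

def region_saliency(saliency_map, cols):
    """Mean and peak saliency over the given columns of a 2-D map;
    used to check that the social image half was not simply the more
    salient one."""
    values = [row[c] for row in saliency_map for c in cols]
    return sum(values) / len(values), max(values)

# Hypothetical 8x12 saliency map; assume the person occupies the left half
random.seed(1)
saliency = [[random.random() for _ in range(12)] for _ in range(8)]
left, right = range(0, 6), range(6, 12)

mean_soc, peak_soc = region_saliency(saliency, left)
mean_non, peak_non = region_saliency(saliency, right)
print(f"social half: mean={mean_soc:.2f}, peak={peak_soc:.2f}; "
      f"non-social half: mean={mean_non:.2f}, peak={peak_non:.2f}")
```

Stimuli would be retained only when both the mean and the peak on the social side fall below those of the non-social side.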
Our experiments offer new insights into individual attentional mechanisms with regard to static social naturalistic scenes and enable a further understanding of basic social processing as distinct from non-social attention. The replicability and consistency of our findings across experiments speak for a robust effect, attributing to social attention an exceptional role within the general attention construct, not only behaviorally but potentially also on a neuronal level, and further allowing implications for clinical populations with impaired social functioning.
Endogenous Testosterone and Exogenous Oxytocin Modulate Attentional Processing of Infant Faces
(2016)
Evidence indicates that hormones modulate the intensity of maternal care. Oxytocin is known for its positive influence on maternal behavior and its important role in childbirth. In contrast, testosterone promotes egocentric choices and reduces empathy. Further, testosterone decreases during parenthood, which could be an adaptation to increased parental investment. The present study investigated the interaction between testosterone and oxytocin in attentional control and their influence on attention to baby schema in women. Higher endogenous testosterone was expected to decrease selective attention to child portraits in a face-in-the-crowd paradigm, while oxytocin was expected to counteract this effect. As predicted, women with higher salivary testosterone were slower in orienting attention to infant targets in the context of adult distractors. Interestingly, reaction times to infant and adult stimuli decreased after oxytocin administration, but only in women with high endogenous testosterone. These results suggest that oxytocin may counteract the adverse effects of testosterone on a central aspect of social behavior and maternal caretaking.
Epigenetic signatures such as methylation of the monoamine oxidase A (MAOA) gene have been found to be altered in panic disorder (PD). Hypothesizing temporal plasticity of epigenetic processes as a mechanism of successful fear extinction, the present psychotherapy-epigenetic study investigated, for what we believe is the first time, MAOA methylation changes during the course of exposure-based cognitive behavioral therapy (CBT) in PD. MAOA methylation was compared between N=28 female Caucasian PD patients (discovery sample) and N=28 age- and sex-matched healthy controls via direct sequencing of sodium bisulfite-treated DNA extracted from blood cells. MAOA methylation was furthermore analyzed at baseline (T0) and after a 6-week CBT (T1) in the discovery sample, paralleled by a waiting time in healthy controls, as well as in an independent sample of female PD patients (N=20). Patients exhibited lower MAOA methylation than healthy controls (P<0.001), and baseline PD severity correlated negatively with MAOA methylation (P=0.01). In the discovery sample, MAOA methylation increased up to the level of healthy controls along with CBT response (number of panic attacks; T0-T1: +3.37±2.17%), while non-responders further decreased in methylation (-2.00±1.28%; P=0.001). In the replication sample, increases in MAOA methylation correlated with agoraphobic symptom reduction after CBT (P=0.02-0.03). The present results support previous evidence for MAOA hypomethylation as a PD risk marker and suggest reversibility of MAOA hypomethylation as a potential epigenetic correlate of response to CBT. The emerging notion of epigenetic signatures as a mechanism of action of psychotherapeutic interventions may promote epigenetic patterns as biomarkers of lasting extinction effects.
Dyspnea is common in many cardiorespiratory diseases. Already the anticipation of this aversive symptom elicits fear in many patients resulting in unfavorable health behaviors such as activity avoidance and sedentary lifestyle. This study investigated brain mechanisms underlying these anticipatory processes. We induced dyspnea using resistive-load breathing in healthy subjects during functional magnetic resonance imaging. Blocks of severe and mild dyspnea alternated, each preceded by anticipation periods. Severe dyspnea activated a network of sensorimotor, cerebellar, and limbic areas. The left insular, parietal opercular, and cerebellar cortices showed increased activation already during dyspnea anticipation. Left insular and parietal opercular cortex showed increased connectivity with right insular and anterior cingulate cortex when severe dyspnea was anticipated, while the cerebellum showed increased connectivity with the amygdala. Notably, insular activation during dyspnea perception was positively correlated with midbrain activation during anticipation. Moreover, anticipatory fear was positively correlated with anticipatory activation in right insular and anterior cingulate cortex. The results demonstrate that dyspnea anticipation activates brain areas involved in dyspnea perception. The involvement of emotion-related areas such as insula, anterior cingulate cortex, and amygdala during dyspnea anticipation most likely reflects anticipatory fear and might underlie the development of unfavorable health behaviors in patients suffering from dyspnea.
This EEG study investigated how previously presented sequences of angry and neutral facial expressions influence the neurocognitive processing of a currently perceived face, taking into account the modulating effects of individual trait anxiety, a socially stressful context, and increased cognitive load.
Already at the level of basic visual face analysis, the results provided evidence for parallel processing and integration of structural and emotional face information. Moreover, a general contextual influence of face sequences on cognitive face processing could be demonstrated even in this early phase, an influence that increased further in later stages of cognitive processing.
This demonstrates that temporal integration, i.e., the specific sequence of perceived faces, plays an important role in the cognitive evaluation of the currently perceived face. These findings were furthermore situated within a revision of the face-processing model of Haxby and colleagues and illustrated by means of an sLORETA analysis.
The findings on individual trait anxiety and cognitive load additionally support Attentional Control Theory and the Dual Mechanisms of Control model.
Biological markers of attentional biases in social anxiety and their modification
(2019)
This dissertation examines biological correlates of attentional biases and explores their modification in a longitudinal experiment. More than 100 socially anxious participants were recruited via a screening procedure and examined with respect to an event-related lateralization known as the N2pc.
During the first laboratory session, the N2pc recorded during a dot-probe paradigm indicated a medium-sized, statistically highly significant attentional bias toward angry compared to neutral faces. The measure classically used for this purpose, reaction time differences, failed to capture this attentional bias. Furthermore, neither the electrophysiological nor the behavioral measure was related to questionnaires of social anxiety, which can partly be attributed to a lack of internal consistency.
Subsequently, the predominantly female participants completed almost 7,000 trials of an attention bias modification training or an active control procedure across eight sessions over two to four weeks. This resulted in an elimination of the event-related lateralization, albeit in a later time window than expected. This disappearance of the attentional bias remained stable up to eleven weeks after the end of the training procedure. Moreover, the same modification also occurred in the control group. While self-reported symptom severity did not change, a reduction was observed in the personality trait neuroticism, which is conceptually closely intertwined with anxiety.
Exploratory follow-up analyses revealed a stronger modulation of the right hemisphere, i.e., by stimuli in the left visual hemifield. Recomputing the attentional bias separately for each hemisphere therefore also seems advisable for future studies. Furthermore, a change in the hyperpolarization following the N2 component was identified as the carrier of the modification over time. Whether an adapted procedure could achieve modulation of an earlier event-related component remains an open question at this point.
Detecting whether a suspect possesses incriminating (e.g., crime-related) information can provide valuable decision aids in court. To this end, the Concealed Information Test (CIT) has been developed and is currently applied on a regular basis in Japan. But whereas research has revealed a high validity of the CIT in student and normal populations, research investigating its validity in forensic samples is scarce. This applies even more to the reaction time-based CIT (RT-CIT), for which no such research is available so far. The current study tested the application of the RT-CIT for an imaginary mock crime scenario both in a sample of prisoners (n = 27) and a matched control group (n = 25). Results revealed a high validity of the RT-CIT for discriminating between crime-related and crime-unrelated information, visible in medium to very high effect sizes for error rates and reaction times. Interestingly, in accordance with theories that criminal offenders may have worse response inhibition capacities and that response inhibition plays a crucial role in the RT-CIT, CIT-effects in the error rates were even elevated in the prisoners compared to the control group. No support for this hypothesis could, however, be found in reaction time CIT-effects. Also, performance in a standard Stroop task, which was conducted to measure executive functioning, did not differ between the groups, and no correlation was found between Stroop task performance and performance in the RT-CIT. Despite frequently raised concerns that the RT-CIT may not be applicable in non-student and forensic populations, our results thereby suggest that such a use may be possible and that effects seem to be quite large. Future research should build on these findings by increasing the realism of the crime and interrogation situation and by further investigating the replicability and the theoretical substantiation of increased effects in non-student and forensic samples.
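The RT-CIT effect described above boils down to a simple contrast: concealed-knowledge carriers respond more slowly to crime-related (probe) items than to matched irrelevant items. A minimal sketch of how such an effect could be quantified, using hypothetical reaction times rather than data from the study:

```python
from statistics import mean, stdev

def cit_effect(probe_rts, irrelevant_rts):
    """Concealed Information Test effect: mean RT difference between
    crime-related (probe) and irrelevant items, plus a pooled-SD
    standardized effect size."""
    diff = mean(probe_rts) - mean(irrelevant_rts)
    pooled_sd = ((stdev(probe_rts) ** 2 + stdev(irrelevant_rts) ** 2) / 2) ** 0.5
    return diff, diff / pooled_sd

# Illustrative RTs in ms (hypothetical values, not the study's data)
probe = [620, 655, 640, 670, 630]
irrelevant = [540, 560, 555, 548, 562]
effect_ms, effect_size = cit_effect(probe, irrelevant)
```

A markedly positive difference indicates recognition of the probe items; the standardized effect size allows comparison across samples such as the prisoner and control groups.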
A negative mood-congruent attention bias has been consistently observed, for example, in clinical studies on major depression. This bias is assumed to be dysfunctional in that it supports maintaining a sad mood, whereas a potentially adaptive role has largely been neglected. Previous experiments involving sad mood induction techniques found a negative mood-congruent attention bias specifically in young individuals, explained by an adaptive need for information transfer in the service of mood regulation. In the present study we investigated the attentional bias in typically developing children (aged 6–12 years) when happy and sad moods were induced. Crucially, we manipulated the age (adult vs. child) of the displayed pairs of facial expressions depicting sadness, anger, fear and happiness. The results indicate that sad children indeed exhibited a mood-specific attention bias toward sad facial expressions. Additionally, this bias was more pronounced for adult faces. Results are discussed in the context of an information gain which should be stronger when looking at adult faces due to their greater life experience. These findings bear implications for both research methods and future interventions.
Altruistic punishment is connected to trait anger, not trait altruism, if compensation is available
(2018)
Altruistic punishment and altruistic compensation are important concepts that are used to investigate altruism. However, altruistic punishment has been found to be correlated with anger. We were interested in whether altruistic punishment and altruistic compensation are both driven by trait altruism and trait anger, or whether the influence of those two traits is more specific to one of the behavioral options. We found that if participants were able to apply altruistic compensation and altruistic punishment together in one paradigm, trait anger only predicted altruistic punishment and trait altruism only predicted altruistic compensation. Interestingly, these relations are disguised in classical altruistic punishment and altruistic compensation paradigms, where participants can either only punish or only compensate. Hence, altruistic punishment and altruistic compensation paradigms should be merged if one is interested in trait altruism without the confounding influence of trait anger.
Although questionable research practices (QRPs) and p-hacking have received attention in recent years, little research has focused on their prevalence and acceptance among students. Students are the researchers of the future and will come to represent the field. Therefore, they should not be learning to use and accept QRPs, which would reduce their ability to produce and evaluate meaningful research. 207 psychology students and fresh graduates provided self-report data on the prevalence and predictors of QRPs. Attitudes towards QRPs, the belief that significant results constitute better science or lead to better grades, motivation, and stress levels were predictors. Furthermore, we assessed perceived supervisor attitudes towards QRPs as an important predictive factor. The results were in line with estimates of QRP prevalence from academia. The best predictor of QRP use was students' QRP attitudes. Perceived supervisor attitudes exerted both a direct and an indirect effect via student attitudes. Motivation to write a good thesis was a protective factor, whereas stress had no effect. Students in this sample did not subscribe to beliefs that significant results were better for science or their grades. Such beliefs further did not impact QRP attitudes or use in this sample. Finally, students engaged in more QRPs pertaining to reporting and analysis than those pertaining to study design. We conclude that supervisors have an important function in shaping students' attitudes towards QRPs and can improve their research practices by motivating them well. Furthermore, this research provides some impetus towards identifying predictors of QRP use in academia.
Arrow cues and other overlearned spatial symbols automatically orient attention according to their spatial meaning. This renders them similar to exogenous cues that occur at the stimulus location. Exogenous cues trigger shifts of attention even when they are presented subliminally. Here, we investigate to what extent the mechanisms underlying the orienting of attention by exogenous cues and by arrow cues are comparable by analyzing the effects of visible and masked arrow cues on attention. In Experiment 1, we presented arrow cues with an overall validity of 50%. Visible cues, but not masked cues, led to shifts of attention. In Experiment 2, the arrow cues had an overall validity of 80%. Now both visible and masked arrows led to shifts of attention. This is in line with findings that subliminal exogenous cues capture attention only in a top-down contingent manner, that is, when the cues fit the observer's intentions.
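Attention shifts in such cueing paradigms are typically inferred from a validity effect: responses are faster when the target appears at the cued location than when it appears elsewhere. A minimal sketch of this contrast, with hypothetical reaction times chosen only to illustrate the pattern reported above (a clear effect for visible 80%-valid cues, roughly none for masked 50%-valid cues):

```python
from statistics import mean

def validity_effect(valid_rts, invalid_rts):
    """Spatial cueing (validity) effect: mean RT on invalidly cued
    trials minus mean RT on validly cued trials. A positive value
    indicates that the cue shifted attention toward its location."""
    return mean(invalid_rts) - mean(valid_rts)

# Hypothetical RTs in ms, not the study's data.
visible_cue = validity_effect(valid_rts=[410, 425, 418],
                              invalid_rts=[455, 462, 448])
masked_cue = validity_effect(valid_rts=[433, 441, 437],
                             invalid_rts=[435, 439, 438])
```

Here `visible_cue` comes out clearly positive while `masked_cue` is near zero, mirroring the Experiment 1 pattern in which only visible cues oriented attention.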
The present study investigated event-related brain potentials elicited by true and false negated statements to evaluate whether discrimination of the truth value of negated information relies on conscious processing and requires higher-order cognitive processing in healthy subjects across different levels of stimulus complexity. The stimulus material consisted of true and false negated sentences (sentence level) and prime-target expressions (word level). Stimuli were presented acoustically and no overt behavioral response of the participants was required. Event-related brain potentials to target words preceded by true and false negated expressions were analyzed both within group and at the single-subject level. Across the different processing conditions (word pairs and sentences), target words elicited a frontal negativity and a late positivity in the time window from 600-1000 msec post target word onset. Amplitudes of both brain potentials varied as a function of the truth value of the negated expressions. Results were confirmed at the single-subject level. In sum, our results support recent suggestions according to which evaluation of the truth value of a negated expression is a time- and cognitively demanding process that cannot be solved automatically, and thus requires conscious processing. Our paradigm provides insight into higher-order processing related to language comprehension and reasoning in healthy subjects. Future studies are needed to evaluate whether our paradigm also proves sensitive for the detection of consciousness in non-responsive patients.
Background:
This study investigated the relation between social desirability and self-reported physical activity in web-based research.
Findings:
A longitudinal study (N = 5,495, 54% women) was conducted on a representative sample of the Dutch population using the Marlowe-Crowne Scale as social desirability measure and the short form of the International Physical Activity Questionnaire. Social desirability was not associated with self-reported physical activity (in MET-minutes/week), nor with its sub-behaviors (i.e., walking, moderate-intensity activity, vigorous-intensity activity, and sedentary behavior). Socio-demographics (i.e., age, sex, income, and education) did not moderate the effect of social desirability on self-reported physical activity and its sub-behaviors.
Conclusions:
This study does not cast doubt on the usefulness of the Internet as a medium to collect self-reports on physical activity.
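The MET-minutes/week outcome used above comes from the IPAQ short form, which scores each activity type by multiplying an intensity-specific MET value by minutes per day and days per week. A minimal sketch of this scoring rule; the multipliers follow the standard IPAQ scoring protocol as I understand it and should be verified against that protocol:

```python
# Assumed IPAQ short-form MET multipliers (per the IPAQ scoring
# protocol: walking 3.3, moderate 4.0, vigorous 8.0).
MET = {"walking": 3.3, "moderate": 4.0, "vigorous": 8.0}

def met_minutes_per_week(activity):
    """Total physical activity in MET-minutes/week: sum over behaviors
    of MET level x minutes/day x days/week."""
    return sum(MET[name] * minutes * days
               for name, (minutes, days) in activity.items())

# Hypothetical respondent: 30 min walking on 5 days, 20 min moderate
# activity on 3 days, and 25 min vigorous activity on 2 days.
total = met_minutes_per_week({
    "walking": (30, 5),
    "moderate": (20, 3),
    "vigorous": (25, 2),
})
```

Sedentary behavior (sitting time) is reported separately in the IPAQ and does not enter this sum.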
The Concealed Information Test (CIT) is a well-validated means to detect whether someone possesses certain (e.g., crime-relevant) information. The current study investigated whether alcohol intoxication during CIT administration influences reaction time (RT) CIT-effects. Two opposing predictions can be made. First, by decreasing attention to critical information, alcohol intoxication could diminish CIT-effects. Second, by hampering the inhibition of truthful responses, alcohol intoxication could increase CIT-effects. A correlational field design was employed. Participants (n = 42) were recruited and tested at a bar, where alcohol consumption was voluntary and incidental. Participants completed a CIT, in which they were instructed to hide knowledge of their true identity. BAC was estimated via breath alcohol ratio. Results revealed that higher BAC levels were correlated with higher CIT-effects. Our results demonstrate that robust CIT effects can be obtained even when testing conditions differ from typical laboratory settings and strengthen the idea that response inhibition contributes to the RT-CIT effect.
Our object recognition abilities, a direct product of our experience with objects, are fine-tuned to perfection. Left temporal and lateral areas along the dorsal, action-related stream, as well as left infero-temporal areas along the ventral, object-related stream, are engaged in object recognition. Here we show that expertise modulates the activity of dorsal areas in the recognition of man-made objects with clearly specified functions. Expert chess players were faster than chess novices in identifying chess objects and their functional relations. Experts' advantage was domain-specific, as there were no differences between groups in a control task featuring geometrical shapes. The pattern of eye movements supported the notion that experts' extensive knowledge about domain objects and their functions enabled superior recognition even when experts were not directly fixating the objects of interest. Functional magnetic resonance imaging (fMRI) related exclusively the areas along the dorsal stream to chess-specific object recognition. Besides the commonly involved left temporal and parietal lateral brain areas, we found that only in experts were homologous areas in the right hemisphere also engaged in chess-specific object recognition. Based on these results, we discuss whether skilled object recognition involves not only a more efficient version of the processes found in non-skilled recognition, but also qualitatively different cognitive processes which engage additional brain areas.
Human actions are generally not determined by external stimuli, but by internal goals and by the urge to evoke desired effects in the environment. To reach these effects, humans typically have to act. But at times, deciding not to act can be better suited or even the only way to reach a desired effect. What mental processes are involved when people decide not to act to reach certain effects? From the outside it may seem that nothing remarkable is happening, because no action can be observed. However, I present three studies which disclose the cognitive processes that control nonactions.
The present experiments address situations where people intentionally decide to omit certain actions in order to produce a predictable effect in the environment. These experiments are based on the ideomotor hypothesis, which suggests that bidirectional associations can be formed between actions and the resulting effects. Because of these associations, anticipating the effects can in turn activate the respective action. The results of the present experiments show that associations can be formed between nonactions (i.e., the intentional decision not to act) and the resulting effects. Due to these associations, perceiving the nonaction effects encourages not acting (Exp. 1–3). What is more, planning a nonaction seems to come with an activation of the effects that inevitably follow the nonaction (Exp. 4–5). These results suggest that the ideomotor hypothesis can be expanded to nonactions and that nonactions are cognitively represented in terms of their sensory effects. Furthermore, nonaction effects can elicit a sense of agency (Exp. 6–8). That is, even though people refrain from acting, the resulting nonaction effects are perceived as self-produced effects.
In a nutshell, these findings demonstrate that intentional nonactions include specific mechanisms and processes, which are involved, for instance, in effect anticipation and the sense of agency. This means that, while it may seem that nothing remarkable is happening when people decide not to act, complex processes run on the inside, which are also involved in intentional actions.
The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance-congruity and imbalance-dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes alter depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models on the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias.