[Repository facet list: 28 publications from 2019, Institut für Psychologie, all in English with full text available; frequent keywords include virtual reality, social perception, attention, cognition, and social attention.]
Human actions are generally not determined by external stimuli, but by internal goals and by the urge to evoke desired effects in the environment. To reach these effects, humans typically have to act. But at times, deciding not to act can be better suited or even the only way to reach a desired effect. What mental processes are involved when people decide not to act to reach certain effects? From the outside it may seem that nothing remarkable is happening, because no action can be observed. However, I present three studies which disclose the cognitive processes that control nonactions.
The present experiments address situations where people intentionally decide to omit certain actions in order to produce a predictable effect in the environment. These experiments are based on the ideomotor hypothesis, which suggests that bidirectional associations can be formed between actions and the resulting effects. Because of these associations, anticipating the effects can in turn activate the respective action. The results of the present experiments show that associations can be formed between nonactions (i.e., the intentional decision not to act) and the resulting effects. Due to these associations, perceiving the nonaction effects encourages not acting (Exp. 1–3). What is more, planning a nonaction seems to come with an activation of the effects that inevitably follow the nonaction (Exp. 4–5). These results suggest that the ideomotor hypothesis can be expanded to nonactions and that nonactions are cognitively represented in terms of their sensory effects. Furthermore, nonaction effects can elicit a sense of agency (Exp. 6–8). That is, even though people refrain from acting, the resulting nonaction effects are perceived as self-produced effects.
In a nutshell, these findings demonstrate that intentional nonactions involve specific mechanisms and processes, such as effect anticipation and the sense of agency. This means that, while it may seem that nothing remarkable is happening when people decide not to act, complex processes unfold on the inside, processes that are also involved in intentional actions.
Promising initial evidence suggests that offices designed to permit physical activity (PA) may reduce workplace sitting time. Biophilic approaches introduce natural surroundings into the workplace, and preliminary data show positive effects on stress reduction and productivity. The primary aim of this pilot study was to analyze changes in workplace sitting time and self-reported habit strength concerning uninterrupted sitting and PA during work when relocating from a traditional office setting to "active" biophilic-designed surroundings. The secondary aim was to assess possible changes in work-associated factors such as satisfaction with the office environment, work engagement, and work performance among office staff. In a pre-post field study, we collected data through an online survey on health behavior at work. Twelve participants completed the survey before (one month pre-relocation, T1) and twice after the office relocation (three months (T2) and seven months post-relocation (T3)). Standing time during office hours increased from T1 to T3 by about 40 min per day (p < 0.01). Other outcomes remained unaltered. The results suggest that changing office surroundings to an active-permissive biophilic design increased standing time during working hours. Future larger-scale controlled studies are warranted to investigate in depth the influence of office design on sitting time and work-associated factors during working hours.
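The pre-post comparison behind such a reported increase is typically a paired t test on within-person change scores. The sketch below illustrates this with invented minutes-per-day values for twelve participants; the numbers are hypothetical, not the study's data:

```python
import math

# Hypothetical standing time (minutes/day) for 12 office workers,
# one month before (T1) and seven months after (T3) a relocation.
t1 = [120, 95, 140, 110, 80, 130, 100, 115, 90, 125, 105, 135]
t3 = [165, 130, 185, 150, 115, 170, 145, 155, 130, 160, 150, 175]

diffs = [b - a for a, b in zip(t1, t3)]
n = len(diffs)
mean_diff = sum(diffs) / n                       # average pre-post change
var = sum((d - mean_diff) ** 2 for d in diffs) / (n - 1)
t_stat = mean_diff / math.sqrt(var / n)          # paired t statistic, df = n - 1

print(f"mean change: {mean_diff:.1f} min/day, t({n - 1}) = {t_stat:.2f}")
```

With real data one would compare the t statistic against the t distribution with n - 1 degrees of freedom to obtain the reported p value.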
Detecting whether a suspect possesses incriminating (e.g., crime-related) information can provide valuable decision aids in court. To this end, the Concealed Information Test (CIT) has been developed and is currently applied on a regular basis in Japan. But whereas research has revealed a high validity of the CIT in student and normal populations, research investigating its validity in forensic samples is scarce. This applies even more to the reaction time-based CIT (RT-CIT), for which no such research is available so far. The current study tested the application of the RT-CIT for an imaginary mock crime scenario in a sample of prisoners (n = 27) and a matched control group (n = 25). Results revealed a high validity of the RT-CIT for discriminating between crime-related and crime-unrelated information, visible in medium to very high effect sizes for error rates and reaction times. Interestingly, in accordance with theories that criminal offenders may have worse response inhibition capacities and that response inhibition plays a crucial role in the RT-CIT, CIT effects in the error rates were even elevated in the prisoners compared to the control group. No support for this hypothesis could, however, be found in reaction time CIT effects. Also, performance in a standard Stroop task, which was conducted to measure executive functioning, did not differ between the groups, and no correlation was found between Stroop task performance and performance in the RT-CIT. Despite frequently raised concerns that the RT-CIT may not be applicable in non-student and forensic populations, our results suggest that such a use may be possible and that effects seem to be quite large. Future research should build on these findings by increasing the realism of the crime and interrogation situation and by further investigating the replicability and theoretical substantiation of the increased effects in non-student and forensic samples.
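Between-group effect sizes like those mentioned above are commonly expressed as Cohen's d with a pooled standard deviation. A minimal sketch, using invented per-participant CIT effects (probe minus irrelevant reaction time) rather than the study's data:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d for two independent samples, pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (mb - ma) / pooled_sd

# Hypothetical CIT effects in ms (probe RT minus irrelevant RT) per participant
controls  = [40, 55, 35, 60, 50, 45]
prisoners = [70, 85, 60, 95, 80, 75]
print(round(cohens_d(controls, prisoners), 2))
```

By convention, d around 0.5 counts as a medium and d above 0.8 as a large effect.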
Background
While the coordination of oculomotor and manual behavior is essential for driving a car, surprisingly little is known about this interaction, especially in situations requiring a quick steering reaction. In the present study, we analyzed oculomotor gaze and manual steering behavior in approach and avoidance tasks. Three task blocks were implemented within a dynamic simulated driving environment requiring the driver either to steer away from/toward a visual stimulus or to switch between both tasks.
Results
Task blocks requiring task switches were associated with higher manual response times and increased error rates. Manual response times did not significantly differ depending on whether drivers had to steer away from vs toward a stimulus, whereas oculomotor response times and gaze pattern variability were increased when drivers had to steer away from a stimulus compared to steering toward a stimulus.
Conclusion
The increased manual response times and error rates in mixed tasks indicate performance costs associated with cognitive flexibility, while the increased oculomotor response times and gaze pattern variability indicate a parsimonious cross-modal action control strategy (avoiding stimulus fixation prior to steering away from it) for the avoidance scenario. Several discrepancies between these results and typical eye–hand interaction patterns in basic laboratory research suggest that the specific goals and complex perceptual affordances associated with driving a vehicle strongly shape cross-modal control of behavior.
Research on facial emotion expression has mostly focused on emotion recognition, assuming that a small number of discrete emotions is elicited and expressed via prototypical facial muscle configurations as captured in still photographs. These are expected to be recognized by observers, presumably via template matching. In contrast, appraisal theories of emotion propose a more dynamic approach, suggesting that specific elements of facial expressions are directly produced by the result of certain appraisals and predicting the facial patterns to be expected for certain appraisal configurations. This approach has recently been extended to emotion perception, claiming that observers first infer individual appraisals and only then make categorical emotion judgments based on the estimated appraisal patterns, using inference rules. Here, we report two related studies to empirically investigate the facial action unit configurations that are used by actors to convey specific emotions in short affect bursts and to examine to what extent observers can infer a person's emotions from the predicted facial expression configurations. The results show that (1) professional actors use many of the predicted facial action unit patterns to enact systematically specified appraisal outcomes in a realistic scenario setting, and (2) naïve observers infer the respective emotions based on highly similar facial movement configurations with a degree of accuracy comparable to earlier research findings. Based on estimates of underlying appraisal criteria for the different emotions we conclude that the patterns of facial action units identified in this research correspond largely to prior predictions and encourage further research on appraisal-driven expression and inference.
Both low-level physical saliency and social information, as presented by human heads or bodies, are known to drive gaze behavior in free-viewing tasks. Researchers have previously made use of a great variety of face stimuli, ranging from photographs of real humans to schematic faces, frequently without systematically differentiating between the two. In the current study, we used a Generalized Linear Mixed Model (GLMM) approach to investigate to what extent schematic artificial faces can predict gaze when they are presented alone or in competition with real human faces. Relative differences in predictive power became apparent, while the GLMMs suggested substantial effects for both real and artificial faces in all conditions. Artificial faces were thus less predictive than real human faces but still contributed significantly to gaze allocation. These results help to further our understanding of how social information guides gaze in complex naturalistic scenes.
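The fixed-effects core of such a model can be thought of as a logistic regression predicting whether a scene location is fixated from its saliency and from flags for the presence of a real or an artificial face; the random-effects part of a full GLMM (e.g., per-participant intercepts) is omitted here for brevity. The following self-contained sketch fits that simplified model to synthetic data by gradient ascent on the log-likelihood; all predictors, weights, and data are invented, not taken from the study:

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic data: each "location" has a saliency value in [0, 1] and flags
# for containing a real or an artificial (schematic) face.  The true
# weights make real faces the strongest predictor of fixation.
true_w = [-1.0, 1.5, 2.5, 1.0]  # intercept, saliency, real face, artificial face
data = []
for _ in range(2000):
    x = [1.0, random.random(),
         float(random.random() < 0.2), float(random.random() < 0.2)]
    p = sigmoid(sum(wi * xi for wi, xi in zip(true_w, x)))
    data.append((x, 1 if random.random() < p else 0))

# Fit the logistic model: the log-likelihood gradient is sum((y - p) * x).
w = [0.0] * 4
for _ in range(150):
    grad = [0.0] * 4
    for x, y in data:
        err = y - sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        for j in range(4):
            grad[j] += err * x[j]
    w = [wi + 0.002 * g for wi, g in zip(w, grad)]

print("estimated weights:", [round(wi, 2) for wi in w])
```

In the recovered weights, the real-face coefficient exceeds the artificial-face one, mirroring the qualitative pattern reported above (both contribute, real faces more strongly).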
Previous research showed that full-body ownership illusions in virtual reality (VR) can be robustly induced by congruent visual stimulation alone, and that congruent tactile experiences are a dispensable extension to an already established phenomenon. Here we show that visuo-tactile congruency indeed does not add to the already high explicit measures of body ownership, but does modulate movement behavior when walking in the laboratory. Specifically, participants who took ownership of a more corpulent virtual body under intact visuo-tactile congruency kept larger safety distances from the laboratory's walls than participants who experienced the same illusion with deteriorated visuo-tactile congruency. This effect is in line with the body schema adapting more readily to a more corpulent body after receiving congruent tactile information. We conclude that the action-oriented, unconscious body schema relies more heavily on tactile information than do the more explicit aspects of body ownership.
Social attention is a ubiquitous, but also enigmatic and sometimes elusive phenomenon. We direct our gaze at other human beings to see what they are doing and to guess their intentions, but we may also absorb social events en passant as they unfold in the corner of the eye. We use our gaze as a discrete communication channel, sometimes conveying pieces of information which would be difficult to explicate, but we may also find ourselves avoiding eye contact with others in moments when self-disclosure is fear-laden. We experience our gaze as the most genuine expression of our will, but research also suggests considerable levels of predictability and automaticity in our gaze behavior. The phenomenon's complexity has hindered researchers from developing a unified framework which can conclusively accommodate all of its aspects, or from even agreeing on the most promising research methodologies.
The present work follows a multi-method approach, taking on several aspects of the phenomenon from various directions. Participants in Study 1 viewed dynamic social scenes on a computer screen. Here, low-level physical saliency (i.e., color, contrast, or motion) and human heads both attracted gaze to a similar extent, providing a comparison of two vastly different classes of gaze predictors in direct juxtaposition. In Study 2, participants with varying degrees of social anxiety walked through a public train station while their eye movements were tracked. With increasing levels of social anxiety, participants showed a relative avoidance of gaze at near compared to distant people. When the experiment was replicated in a laboratory situation with a matched participant group, social anxiety did not modulate gaze behavior, fueling the debate around appropriate experimental designs in the field. Study 3 employed virtual reality (VR) to investigate social gaze in a complex and immersive, but still highly controlled situation. Here, participants exhibited a gaze behavior that may be more typical of real life than of laboratory situations: they avoided gaze contact with a virtual conspecific unless she gazed at them. This study provided important insights into gaze behavior in virtual social situations, helping to better estimate the possible benefits of this new research approach. Throughout all three experiments, participants showed consistent inter-individual differences in their gaze behavior. However, the present work could not resolve whether these differences are linked to psychologically meaningful traits or whether they instead have an epiphenomenal character.
According to the motivational priming hypothesis, unpleasant stimuli activate the motivational defense system, which in turn promotes congruent affective states such as negative emotions and pain. The question arises to what degree this bottom-up impact of emotions on pain is susceptible to a manipulation of top-down expectations. To this end, we investigated whether verbal instructions implying pain potentiation vs. reduction (placebo or nocebo expectations), later confirmed by corresponding experiences (placebo or nocebo conditioning), might alter behavioral and neurophysiological correlates of pain modulation by unpleasant pictures. We compared two groups, which underwent three experimental phases: first, participants were instructed either that watching unpleasant affective pictures would increase pain (nocebo group) or that watching unpleasant pictures would decrease pain (placebo group) relative to neutral pictures. During the following placebo/nocebo conditioning phase, pictures were presented together with electrical pain stimuli of different intensities, reinforcing the instructions. In the subsequent test phase, all pictures were presented again, combined with identical pain stimuli. The electroencephalogram (EEG) was recorded to analyze neurophysiological responses to pain (somatosensory evoked potentials) and picture processing (the visually evoked late positive potential, LPP), in addition to pain ratings. In the test phase, ratings of pain stimuli administered while watching unpleasant relative to neutral pictures were significantly higher in the nocebo group, thus confirming the motivational priming effect for pain perception. In the placebo group, this effect was reversed such that unpleasant compared with neutral pictures led to significantly lower pain ratings. Similarly, somatosensory evoked potentials were decreased during unpleasant compared with neutral pictures in the placebo group only.
LPPs of the placebo group failed to discriminate between unpleasant and neutral pictures, while the LPPs of the nocebo group showed a clear differentiation. We conclude that the placebo manipulation already affected the processing of the emotional stimuli and, in consequence, the processing of the pain stimuli. In summary, the study revealed that the modulation of pain by emotions, albeit a reliable and well-established finding, is further tuned by reinforced expectations—known to induce placebo/nocebo effects—which should be addressed in future research and considered in clinical applications.
Major depressive disorder and the anxiety disorders are highly prevalent, disabling, and moderately heritable. Depression and anxiety are also highly comorbid and have a strong genetic correlation (rg ≈ 1). Cognitive behavioural therapy is a leading evidence-based treatment but has variable outcomes. Currently, there are no strong predictors of outcome. Therapygenetics research aims to identify genetic predictors of prognosis following therapy. We performed genome-wide association meta-analyses of symptoms following cognitive behavioural therapy in adults with anxiety disorders (n = 972), adults with major depressive disorder (n = 832), and children with anxiety disorders (n = 920; meta-analysis n = 2724). We estimated the variance in therapy outcomes that could be explained by common genetic variants (h²SNP) and used polygenic scoring to examine genetic associations between therapy outcomes and psychopathology, personality, and learning. No single nucleotide polymorphisms were strongly associated with treatment outcomes. No significant estimate of h²SNP could be obtained, suggesting that the heritability of therapy outcome is smaller than our analysis was powered to detect. Polygenic scoring failed to detect genetic overlap between therapy outcome and psychopathology, personality, or learning. This study is the largest therapygenetics study to date. Results are consistent with previous, similarly powered genome-wide association studies of complex traits.
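Polygenic scoring, as used above, reduces in its simplest form to a weighted sum of a person's allele dosages, with weights (effect sizes) taken from a discovery GWAS. A minimal sketch with invented SNP identifiers and effect sizes, purely for illustration:

```python
# Effect sizes per SNP from a hypothetical discovery GWAS
# (SNP IDs and values are invented, not from the study).
gwas_effects = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.08}

def polygenic_score(dosages, effects):
    """Sum of effect-allele dosages (0, 1, or 2 per SNP) times GWAS effect sizes.

    SNPs without a known effect size are skipped."""
    return sum(effects[snp] * dose for snp, dose in dosages.items() if snp in effects)

# One hypothetical person's genotype: copies of the effect allele per SNP
person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(polygenic_score(person, gwas_effects))
```

In practice, such scores are then tested as predictors of an outcome (here, symptom change after therapy), which is where the reported null result arises.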