Detecting whether a suspect possesses incriminating (e.g., crime-related) information can provide valuable decision aids in court. To this end, the Concealed Information Test (CIT) has been developed and is currently applied on a regular basis in Japan. But whereas research has revealed a high validity of the CIT in student and normal populations, research investigating its validity in forensic samples is scarce. This applies even more to the reaction time-based CIT (RT-CIT), for which no such research is available so far. The current study tested the application of the RT-CIT for an imaginary mock crime scenario both in a sample of prisoners (n = 27) and a matched control group (n = 25). Results revealed a high validity of the RT-CIT for discriminating between crime-related and crime-unrelated information, visible in medium to very high effect sizes for error rates and reaction times. Interestingly, in accordance with theories that criminal offenders may have worse response inhibition capacities and that response inhibition plays a crucial role in the RT-CIT, CIT effects in the error rates were even elevated in the prisoners compared to the control group. No support for this hypothesis could, however, be found in reaction time CIT effects. Also, performance in a standard Stroop task, which was conducted to measure executive functioning, did not differ between the two groups, and no correlation was found between Stroop task performance and performance in the RT-CIT. Despite frequently raised concerns that the RT-CIT may not be applicable in non-student and forensic populations, our results thereby suggest that such a use may be possible and that effects seem to be quite large. Future research should build on these findings by increasing the realism of the crime and interrogation situation and by further investigating the replicability and theoretical substantiation of increased effects in non-student and forensic samples.
Dyspnea is common in many cardiorespiratory diseases. The mere anticipation of this aversive symptom already elicits fear in many patients, resulting in unfavorable health behaviors such as activity avoidance and a sedentary lifestyle. This study investigated the brain mechanisms underlying these anticipatory processes. We induced dyspnea using resistive-load breathing in healthy subjects during functional magnetic resonance imaging. Blocks of severe and mild dyspnea alternated, each preceded by an anticipation period. Severe dyspnea activated a network of sensorimotor, cerebellar, and limbic areas. The left insular, parietal opercular, and cerebellar cortices showed increased activation already during the anticipation of dyspnea. The left insular and parietal opercular cortex showed increased connectivity with the right insular and anterior cingulate cortex when severe dyspnea was anticipated, while the cerebellum showed increased connectivity with the amygdala. Notably, insular activation during dyspnea perception was positively correlated with midbrain activation during anticipation. Moreover, anticipatory fear was positively correlated with anticipatory activation in the right insular and anterior cingulate cortex. These results demonstrate that dyspnea anticipation activates brain areas involved in dyspnea perception. The involvement of emotion-related areas such as the insula, anterior cingulate cortex, and amygdala during dyspnea anticipation most likely reflects anticipatory fear and might underlie the development of unfavorable health behaviors in patients suffering from dyspnea.
In general, humans preferentially look at conspecifics in naturalistic images. However, such group-based effects might conceal systematic individual differences in the preference for social information. Here, we investigated to what degree fixations on social features occur consistently within observers and whether this preference generalizes to other measures of social prioritization in the laboratory as well as the real world. Participants carried out a free-viewing task and a relevance-taps task that required them to actively select image regions crucial for understanding a given scene, and they were asked to freely take photographs outside the laboratory that were later classified regarding their social content. We observed stable individual differences in the fixation and active selection of human heads and faces that were correlated across tasks and partly predicted the social content of the self-taken photographs. Such a relationship was not observed for human bodies, indicating that different social elements need to be dissociated. These findings suggest that idiosyncrasies in the visual exploration and interpretation of social features exist and predict real-world behavior. Future studies should further characterize these preferences and elucidate how they shape the perception and interpretation of social contexts in healthy participants and in patients with mental disorders that affect social functioning.
Previous research has shown that low-level visual features (i.e., low-level visual saliency) as well as socially relevant information predict gaze allocation under free-viewing conditions. However, these studies mainly used static and highly controlled stimulus material, thus revealing little about the robustness of attentional processes across diverging situations. Moreover, the influence of affective stimulus characteristics on visual exploration patterns remains poorly understood. Participants in the present study freely viewed a set of naturalistic, contextually rich video clips from a variety of settings that were capable of eliciting different moods. Using recordings of eye movements, we quantified to what degree social information, emotional valence, and low-level visual features influenced gaze allocation using generalized linear mixed models. We found substantial and similarly large regression weights for low-level saliency and social information, affirming the importance of both predictor classes under ecologically more valid dynamic stimulation conditions. Differences in predictor strength between individuals were large and highly stable across videos. Additionally, low-level saliency was less important for fixation selection in videos containing persons than in videos without persons, and less important in videos perceived as negative. We discuss the generalizability of these findings and the feasibility of applying this research paradigm to patient groups.
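The modeling approach used in the study above — weighting low-level saliency against social content as predictors of fixation behavior in a generalized-linear-model framework — can be sketched on simulated data. The sketch below fits a plain fixed-effects logistic regression by gradient ascent; the per-subject and per-video random effects of a full GLMM are omitted for brevity, and all variable names and "true" weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: for each candidate image region, a low-level saliency
# value, a binary social-content flag, and whether it was fixated.
n = 2000
saliency = rng.normal(size=n)
social = rng.integers(0, 2, size=n).astype(float)
true_logits = 0.8 * saliency + 1.2 * social - 0.5   # assumed true weights
fixated = (rng.random(n) < 1 / (1 + np.exp(-true_logits))).astype(float)

# Fixed-effects logistic regression via gradient ascent on the
# log-likelihood (a simplification of the mixed-model analysis).
X = np.column_stack([np.ones(n), saliency, social])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (fixated - p) / n

print(w)  # intercept, saliency weight, social weight
```

With enough data, the fitted weights should roughly recover the assumed generating values, illustrating how the relative contribution of saliency and social information to gaze allocation can be quantified within one model.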
Borderline personality disorder (BPD) patients’ hypersensitivity to emotionally relevant stimuli has been suggested to be due to abnormal activity and connectivity in (para-)limbic and prefrontal brain regions during stimulus processing. The neuropeptide oxytocin has been shown to modulate activity and functional connectivity in these brain regions, thereby optimizing the processing of emotional and neutral stimuli. To investigate whether oxytocin is capable of attenuating BPD patients’ hypersensitivity to such stimuli, we recorded brain activity and gaze behavior during the processing of complex scenes in 51 females with and 48 without BPD after intranasal application of either oxytocin or placebo. We found divergent effects of oxytocin on BPD and healthy control (HC) participants’ (para-)limbic reactivity to emotional and neutral scenes: oxytocin decreased amygdala and insula reactivity in BPD participants but increased it in HC participants, indicating an oxytocin-induced normalization of amygdala and insula activity during scene processing. In addition, oxytocin normalized the abnormal coupling between amygdala activity and gaze behavior across all scenes in BPD participants. Overall, these findings suggest that oxytocin may be capable of attenuating BPD patients’ hypersensitivity to complex scenes, irrespective of their valence.
Stronger reactivity to social gaze in virtual reality compared to a classical laboratory environment
(2021)
People show a robust tendency to gaze at other human beings when viewing images or videos, but have also been found to relatively avoid gazing at others in several real‐world situations. This discrepancy, along with theoretical considerations, has spawned doubts about the appropriateness of classical laboratory‐based experimental paradigms in social attention research. Several researchers have instead suggested the use of immersive virtual scenarios to elicit and measure naturalistic attentional patterns, but the field, struggling with methodological challenges, still needs to establish the advantages of this approach. Here, using eye‐tracking in a complex social scenario displayed in virtual reality, we show that participants exhibit enhanced attention towards the face of an avatar at near distance and an increased reactivity towards her social gaze compared to participants who viewed the same scene on a computer monitor. The present study suggests that reactive virtual agents observed in immersive virtual reality can elicit natural modes of information processing and can help to conduct ecologically more valid experiments while maintaining high experimental control.
Automatic orienting to unexpected changes in the environment is a prerequisite for adaptive behavior. One prominent mechanism of automatic attentional control is the orienting response (OR). Despite the fundamental significance of the OR in everyday life, little is known about how it is affected by healthy aging. We tested this question in two age groups (19–38 and 55–72 years) and measured skin-conductance responses (SCRs) and event-related brain potentials (ERPs) to novels (i.e., short environmental sounds presented only once in the experiment; 10% of the trials) compared to standard sounds (600 Hz sinusoidal tones of 200 ms duration; 90% of the trials). Novel and standard stimuli were presented in four conditions differing in inter-stimulus interval (ISI), with a mean ISI of either 10, 3, 1, or 0.5 s (blocked presentation). In both age groups, pronounced SCRs were elicited by novels in the 10 s ISI condition, suggesting the elicitation of stable ORs. These effects were accompanied by pronounced N1 and frontal P3 amplitudes in the ERP, suggesting that automatic novelty processing and orienting of attention are effective in both age groups. Furthermore, the SCR and ERP effects declined with decreasing ISI length. In addition, differences between the two groups were observable at the fastest presentation rates (i.e., 1 and 0.5 s ISI length). The most prominent difference was a shift of the peak of the frontal positivity from around 300 to 200 ms in the 19–38 years group, whereas in the 55–72 years group the amplitude of the frontal P3 decreased linearly with decreasing ISI length. Taken together, this pattern of results does not suggest a general decline in processing efficacy with healthy aging. At least with very rare changes (here, the novels in the 10 s ISI condition), the OR is as effective in healthy older adults as in younger adults. With faster presentation rates, however, the efficacy of the OR decreases. This seems to result in a switch from novelty to deviant processing in younger adults, but less so in the group of older adults.
Visual saliency maps, reflecting locations that stand out from the background in terms of their low-level physical features, have proven very useful for empirical research on attentional exploration and reliably predict gaze behavior. In the present study, we tested these predictions for socially relevant stimuli occurring in naturalistic scenes using eye tracking. We hypothesized that social features (i.e., human faces or bodies) would be processed preferentially over non-social features (i.e., objects, animals) regardless of their low-level saliency. To challenge this notion, we included three tasks that deliberately addressed non-social attributes. In agreement with our hypothesis, social information, especially heads, was preferentially attended compared to highly salient image regions across all tasks. Social information was never required to solve a task but was nevertheless attended. Moreover, after the task requirements were completed, viewing behavior reverted to that of free viewing, with heavy prioritization of social features. Additionally, initial eye movements, reflecting potentially automatic shifts of attention, were predominantly directed towards heads irrespective of top-down task demands. On these grounds, we suggest that social stimuli may provide exclusive access to the priority map, enabling social attention to override reflexive and controlled attentional processes. Furthermore, our results challenge the generalizability of saliency-based attention models.
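The low-level saliency maps referred to above can be illustrated with a toy center-surround contrast computation. This is a deliberate simplification of full saliency models (e.g., Itti and Koch's), which additionally combine colour and orientation channels across multiple scales; the scene and spot location below are made up purely for illustration.

```python
import numpy as np

def saliency_map(img, surround=7):
    # Center-surround difference: each pixel's intensity compared to the
    # mean of its local neighbourhood, a crude stand-in for the feature
    # contrasts that full saliency models combine across channels.
    pad = surround // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    local_mean = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            local_mean[i, j] = padded[i:i + surround, j:j + surround].mean()
    return np.abs(img - local_mean)

# A uniform scene with one bright spot: the spot should dominate the map.
scene = np.zeros((32, 32))
scene[10, 20] = 1.0
sal = saliency_map(scene)
peak = np.unravel_index(np.argmax(sal), sal.shape)
print(peak)  # → (10, 20)
```

A map like this predicts gaze towards physically conspicuous regions; the finding above is that fixations on heads and faces override such predictions even when social information is task-irrelevant.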
Both low-level physical saliency and social information, as conveyed by human heads or bodies, are known to drive gaze behavior in free-viewing tasks. Researchers have previously used a great variety of face stimuli, ranging from photographs of real humans to schematic faces, frequently without systematically differentiating between the two. In the current study, we used a generalized linear mixed model (GLMM) approach to investigate to what extent schematic artificial faces can predict gaze when presented alone or in competition with real human faces. Relative differences in predictive power became apparent, while the GLMMs suggested substantial effects for real and artificial faces in all conditions. Artificial faces were accordingly less predictive than real human faces but still contributed significantly to gaze allocation. These results help to further our understanding of how social information guides gaze in complex naturalistic scenes.