Previous research has shown that full body ownership illusions in virtual reality (VR) can be robustly induced by congruent visual stimulation alone, with congruent tactile experiences regarded as a dispensable extension to an already established phenomenon. Here we show that visuo-tactile congruency indeed does not raise already high explicit ratings of body ownership, but does modulate movement behavior when walking in the laboratory. Specifically, participants who took ownership of a more corpulent virtual body under intact visuo-tactile congruency kept larger safety distances from the laboratory walls than participants who experienced the same illusion with deteriorated visuo-tactile congruency. This effect is consistent with the body schema adapting more readily to a more corpulent body after receiving congruent tactile information. We conclude that the action-oriented, unconscious body schema relies more heavily on tactile information than do more explicit aspects of body ownership.
Both low-level physical saliency and social information, as conveyed by human heads or bodies, are known to drive gaze behavior in free-viewing tasks. Researchers have previously used a great variety of face stimuli, ranging from photographs of real humans to schematic faces, frequently without systematically differentiating between the two. In the current study, we used a Generalized Linear Mixed Model (GLMM) approach to investigate to what extent schematic artificial faces predict gaze when presented alone or in competition with real human faces. The GLMMs suggested substantial effects of both real and artificial faces in all conditions, but relative differences in predictive power became apparent: artificial faces were less predictive than real human faces, yet still contributed significantly to gaze allocation. These results further our understanding of how social information guides gaze in complex naturalistic scenes.
Social attention is a ubiquitous, but also enigmatic and sometimes elusive phenomenon. We direct our gaze at other human beings to see what they are doing and to guess their intentions, but we may also absorb social events en passant as they unfold in the corner of the eye. We use our gaze as a discrete communication channel, sometimes conveying pieces of information which would be difficult to explicate, but we may also find ourselves avoiding eye contact with others in moments when self-disclosure is fear-laden. We experience our gaze as the most genuine expression of our will, yet research also suggests considerable levels of predictability and automaticity in our gaze behavior. The phenomenon's complexity has so far kept researchers from developing a unified framework that can conclusively accommodate all of its aspects, or even from agreeing on the most promising research methodologies.
The present work follows a multi-methods approach, taking on several aspects of the phenomenon from various directions. Participants in study 1 viewed dynamic social scenes on a computer screen. Here, low-level physical saliency (i.e., color, contrast, or motion) and human heads attracted gaze to a similar extent, providing a direct juxtaposition of two vastly different classes of gaze predictors. In study 2, participants with varying degrees of social anxiety walked through a public train station while their eye movements were tracked. With increasing levels of social anxiety, participants showed a relative avoidance of gaze at near compared to distant people. When the experiment was replicated in a laboratory situation with a matched participant group, social anxiety did not modulate gaze behavior, fueling the debate about appropriate experimental designs in the field. Study 3 employed virtual reality (VR) to investigate social gaze in a complex and immersive, yet still highly controlled situation. Here, participants exhibited gaze behavior that may be more typical of real life than of laboratory situations: they avoided gaze contact with a virtual conspecific unless she gazed at them. This study provided important insights into gaze behavior in virtual social situations, helping to better estimate the possible benefits of this new research approach. Throughout all three experiments, participants showed consistent inter-individual differences in their gaze behavior. However, the present work could not resolve whether these differences are linked to psychologically meaningful traits or whether they are instead epiphenomenal.