Human attention is strongly attracted by direct gaze and sudden-onset motion. The sudden direct-gaze effect refers to the processing advantage for targets appearing on peripheral faces that suddenly establish eye contact. Here, we investigate the necessity of social information for attention capture by (sudden onset) ostensive cues. Six experiments involving 204 participants applied (1) naturalistic faces, (2) arrows, (3) schematic eyes, (4) naturalistic eyes, or schematic facial configurations (5) without or (6) with head turn to an attention-capture paradigm. Trials started with two stimuli oriented towards the observer and two stimuli pointing into the periphery. Simultaneously with target presentation, one direct stimulus changed to averted and one averted stimulus changed to direct, yielding a 2 × 2 factorial design with direction and motion cues being either absent or present (see the sketch below). We replicated the (sudden) direct-gaze effect for photographic faces but found no corresponding effects in Experiments 2-6. Hence, a holistic and socially meaningful facial context seems vital for attention capture by direct gaze. STATEMENT OF SIGNIFICANCE: The present study highlights the significance of context information for social attention. Our findings demonstrate that the direct-gaze effect, that is, the prioritization of direct gaze over averted gaze, critically relies on the presentation of a meaningful, holistic, and naturalistic facial context. This pattern of results provides evidence for early effects of surrounding social information on attention capture by direct gaze.
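The resulting 2 × 2 cell structure can be made concrete with a short sketch. The following Python snippet (with hypothetical condition labels; not part of the original study materials) simply enumerates the four cells produced by crossing the direction cue with the motion cue:

    # Minimal sketch of the 2 x 2 design described above (hypothetical labels,
    # not the authors' experiment code). The stimulus at the target location is
    # defined by two crossed factors:
    #   direction cue: oriented toward the observer (direct) or away (averted)
    #   motion cue:    orientation just changed (sudden) or stayed fixed (static)
    from itertools import product

    direction_cue = ["direct", "averted"]
    motion_cue = ["sudden_change", "static"]

    for direction, motion in product(direction_cue, motion_cue):
        print(f"target on {direction} / {motion} stimulus")
    # -> direct/sudden, direct/static, averted/sudden, averted/static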
Pupil dilation is known to be affected by a variety of factors, including physical (e.g., light) and cognitive sources of influence (e.g., mental load due to working memory demands, stimulus/response competition, etc.). In the present experiment, we tested the extent to which vocal demands (speaking) can affect pupil dilation. Based on corresponding preliminary evidence found in a reanalysis of an existing data set from our lab, we set up a new experiment that systematically investigated vocal response-related effects compared with mere jaw/lip movements and button-press responses. Conditions changed on a trial-by-trial basis while participants were instructed to keep fixating a central cross on a screen throughout. In line with our prediction (and previous observation), speaking caused the pupils to dilate most strongly, followed by nonvocal movements and, finally, a baseline condition without any vocal or muscular demands. An additional analysis of blink rates showed no difference in blink frequency between vocal and baseline conditions, but different blink dynamics. Finally, simultaneously recorded electromyographic activity showed that muscle activity may contribute to some (but not all) aspects of the observed effects on pupil size. The results are discussed in the context of other recent research indicating effects of perceived (rather than executed) vocal action on pupil dynamics.
Human eye gaze conveys an enormous amount of socially relevant information, and the rapid assessment of gaze direction is particularly relevant for adapting behavior accordingly. Specifically, previous research has provided evidence for an advantage in processing direct (vs. averted) gaze. The present study examined discrimination performance for gaze direction (direct vs. averted) under controlled presentation conditions: Using a backward-masking gaze-discrimination task, photographs of faces with direct and averted gaze were briefly presented, followed by a mask stimulus. Additionally, effects of facial context on gaze discrimination were assessed by presenting gaze direction either in isolation (i.e., by only showing the eye region) or in the context of an upright or inverted face. Across three experiments, we consistently observed a facial context effect, with the highest discrimination performance for faces presented in upright position, lower performance for inverted faces, and the lowest performance for eyes presented in isolation. Additionally, averted gaze was generally responded to faster and with higher accuracy than direct gaze, indicating an averted-gaze advantage. Overall, the results suggest that direct gaze is not generally associated with processing advantages, thereby highlighting the important role of presentation conditions and task demands in gaze perception.
When the processing of two tasks overlaps, performance is known to suffer. In the well-established psychological refractory period (PRP) paradigm, tasks are triggered by two stimuli with a short temporal delay (stimulus onset asynchrony; SOA), thereby allowing control over the degree of task overlap. A decrease of the SOA reliably yields longer RTs in the task associated with the second stimulus (Task 2), while performance in the other task (Task 1) remains largely unaffected. This Task 2-specific SOA effect is usually interpreted in terms of central capacity limitations: response selection in Task 2 is assumed to be delayed because it receives less capacity until the corresponding process has been completed in Task 1 (see the sketch below). Recently, another important factor determining task prioritization has been proposed, namely, the particular effector systems associated with the tasks. Here, we study both sources of task prioritization simultaneously by systematically combining three different effector systems (pairwise combinations of oculomotor, vocal, and manual responses) in the PRP paradigm. Specifically, we asked whether task-order-based prioritization (the SOA effect) is modulated as a function of the Task 2 effector system. The results indicate a modulation of SOA effects when the same (oculomotor) Task 1 is combined with a vocal versus a manual Task 2. This is incompatible with the assumption that SOA effects are solely determined by the duration of Task 1 response selection. Instead, they support the view that dual-task processing bottlenecks are resolved by establishing a capacity allocation scheme fed by multiple input factors, including attentional weights associated with particular effector systems.
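The capacity-limitation account summarized above can be illustrated with the textbook serial-bottleneck formulation. The following Python sketch uses hypothetical stage durations and is a generic illustration of that account, not the authors' model or analysis code:

    # Serial response-selection bottleneck: Task 2 central processing cannot
    # start before Task 1 central processing has finished.
    # Stage durations in ms (hypothetical): perceptual P, central C, motor M.
    P1, C1, M1 = 100, 200, 100  # Task 1
    P2, C2, M2 = 100, 200, 100  # Task 2

    for soa in (0, 100, 200, 300, 400, 500):
        # Task 2's central stage starts (relative to S2 onset) once both its
        # own perceptual stage and the Task 1 central stage are done:
        central_start = max(P2, P1 + C1 - soa)
        rt2 = central_start + C2 + M2
        rt1 = P1 + C1 + M1  # unaffected by SOA under this account
        print(f"SOA {soa:3d} ms -> RT1 {rt1} ms, RT2 {rt2} ms")
    # RT2 shrinks one-for-one as SOA grows until the bottleneck no longer
    # binds, then levels off at P2 + C2 + M2: the classic PRP signature.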
Data analytics as a field is currently at a crucial point in its development, as commoditization takes place in the context of increasing amounts of data, greater user diversity, and automated analysis solutions, the latter potentially eliminating the need for expert analysts. A central hypothesis of the present paper is that data visualizations should be adapted to both the user and the context. This idea was initially addressed in Study 1, which demonstrated substantial interindividual variability among a group of experts when they freely chose how to visualize data sets. To lay the theoretical groundwork for a systematic, taxonomic approach, a user model combining user traits, states, strategies, and actions was proposed and then evaluated empirically in Studies 2 and 3. The results implied that, for adapting to user traits, statistical expertise is a relevant dimension that should be considered, and that, for adapting to user states, different user intentions, such as monitoring and analysis, should be accounted for. These results were used to develop a taxonomy that adapts visualization recommendations to these (and other) factors. A preliminary attempt to validate the taxonomy in Study 4 tested its visualization recommendations with a group of experts. While the corresponding results were somewhat ambiguous overall, some aspects nevertheless supported the claim that a user-adaptive data visualization approach based on the principles outlined in the taxonomy can indeed be useful. While the present approach to user adaptivity is still in its infancy and should be extended (e.g., by testing more participants), the general approach appears promising.
Background
While the coordination of oculomotor and manual behavior is essential for driving a car, surprisingly little is known about this interaction, especially in situations requiring a quick steering reaction. In the present study, we analyzed oculomotor gaze and manual steering behavior in approach and avoidance tasks. Three task blocks were implemented within a dynamic simulated driving environment, requiring the driver either to steer away from or toward a visual stimulus, or to switch between both tasks.
Results
Task blocks requiring task switches were associated with higher manual response times and increased error rates. Manual response times did not significantly differ depending on whether drivers had to steer away from versus toward a stimulus, whereas oculomotor response times and gaze pattern variability were increased when drivers had to steer away from a stimulus compared with steering toward it.
Conclusion
The increased manual response times and error rates in mixed tasks indicate performance costs associated with cognitive flexibility, while the increased oculomotor response times and gaze pattern variability indicate a parsimonious cross-modal action control strategy (avoiding stimulus fixation prior to steering away from it) for the avoidance scenario. Several discrepancies between these results and typical eye–hand interaction patterns in basic laboratory research suggest that the specific goals and complex perceptual affordances associated with driving a vehicle strongly shape cross-modal control of behavior.
Models of eye-movement control distinguish between different control levels, ranging from automatic (bottom-up, stimulus-driven selection) and automatized (based on well-learned routines) to voluntary (top-down, goal-driven selection, e.g., based on instructions). However, one type of voluntary control has so far been examined only in the manual domain, not in the oculomotor domain: free-choice selection among arbitrary targets, that is, targets that are of equal interest from both a bottom-up and a top-down processing perspective. Here, we ask which features of targets (identity- or location-related) are used to determine such oculomotor free-choice behavior. In two experiments, participants executed a saccade to one of four peripheral targets in three different choice conditions: unconstrained free choice, constrained free choice based on target identity (color), and constrained free choice based on target location. The analysis of choice frequencies revealed that unconstrained free-choice selection closely resembled constrained choice based on target location. The results suggest that free-choice oculomotor control is mainly guided by spatial (location-based) target characteristics. We explain these results by assuming that participants tend to avoid the less parsimonious recoding of target-identity representations into spatial codes, the latter being a necessary prerequisite for configuring oculomotor commands.
Cognitive theories on the causes of developmental dyslexia can be divided into language-specific and general accounts. While the former assume that words are special in that the associated processing problems are rooted in deficits of language-related cognition (e.g., phonology), the latter propose that dyslexia is instead rooted in a general impairment of cognitive (e.g., visual and/or auditory) processing streams. In the present study, we examined to what extent dyslexia (typically characterized by poor orthographic representations) may be associated with a general deficit in visual long-term memory (LTM) for details. We compared object- and detail-related visual LTM performance (and phonological skills) between dyslexic primary school children and IQ-, age-, and gender-matched controls. The results revealed that while the overall number of LTM errors was comparable between groups, dyslexic children exhibited a greater proportion of detail-related errors. The results suggest that not only phonological but also general visual resolution deficits in LTM may play an important role in developmental dyslexia.
Oculomotor dominance in multitasking: Mechanisms of conflict resolution in cross-modal action
(2014)
In daily life, eye movement control usually occurs in the context of concurrent action demands in other effector domains. However, little research has focused on understanding how such cross-modal action demands are coordinated, especially when conflicting information must be processed conjointly in different action modalities. In two experiments, we address this issue by studying vocal responses in the context of spatially conflicting eye movements (Experiment 1) and in the context of spatially conflicting manual actions (Experiment 2, under controlled eye-fixation conditions). Crucially, a comparison across experiments allows us to assess resource scheduling priorities among the three effector systems by comparing the same (vocal) response demands in the context of eye movements versus manual responses. The results indicate that in situations involving response conflict, eye movements are prioritized over concurrent action demands in another effector system. This oculomotor dominance effect corroborates previous observations in the context of multiple action demands without spatial response conflict. Furthermore, and in line with recent theoretical accounts of parallel multiple action control, resource scheduling patterns appear to be flexibly adjustable based on the temporal proximity of the two actions that need to be performed.
Cross-modal Action Complexity: Action- and Rule-related Memory Retrieval in Dual-response Control
(2017)
Normally, we do not act within a single effector system only, but rather coordinate actions across several output modules (cross-modal action). Such cross-modal action demands can vary substantially in complexity, in terms of both the number of task-relevant response combinations and the number of to-be-retrieved stimulus-response (S-R) mapping rules. In the present study, we examine the impact of these two types of cross-modal action complexity on dual-response costs (i.e., performance differences between single- and dual-action demands). In Experiment 1, we combined a manual and an oculomotor task, each involving four response alternatives. Crucially, one (unconstrained) condition involved all 16 possible combinations of response alternatives, whereas a constrained condition involved only a subset of possible response combinations. The results revealed that preparing for a larger number of response combinations yielded a significant but moderate increase in dual-response costs. In Experiment 2, we used one common lateralized auditory (e.g., left) stimulus to trigger incompatible response compounds (e.g., left saccade and right key press, or vice versa). While one condition involved only one set of task-relevant S-R rules, another condition involved two sets of task-relevant rules (coded by stimulus type: noise/tone), with the number of task-relevant response combinations being the same in both conditions. Here, an increase in the number of to-be-retrieved S-R rules was associated with a substantial increase in dual-response costs, which were also modulated on a trial-by-trial basis when switching between rules. Taken together, the results shed further light on the dependence of cross-modal action control on both action- and rule-related memory retrieval processes.