Cross-modal Action Complexity: Action- and Rule-related Memory Retrieval in Dual-response Control
(2017)
Normally, we do not act within a single effector system only, but rather coordinate actions across several output modules (cross-modal action). Such cross-modal action demands can vary substantially in complexity, both in the number of task-relevant response combinations and in the number of to-be-retrieved stimulus-response (S-R) mapping rules. In the present study, we examined the impact of these two types of cross-modal action complexity on dual-response costs (i.e., performance differences between single- and dual-action demands). In Experiment 1, we combined a manual and an oculomotor task, each involving four response alternatives. Crucially, one (unconstrained) condition involved all 16 possible combinations of response alternatives, whereas a constrained condition involved only a subset of possible response combinations. The results revealed that preparing for a larger number of response combinations yielded a significant but moderate increase in dual-response costs. In Experiment 2, we utilized a single lateralized auditory stimulus (e.g., left) to trigger incompatible response compounds (e.g., left saccade and right key press, or vice versa). While one condition involved only one set of task-relevant S-R rules, another condition involved two sets of task-relevant rules (coded by stimulus type: noise/tone), with the number of task-relevant response combinations held constant across conditions. Here, an increase in the number of to-be-retrieved S-R rules was associated with a substantial increase in dual-response costs, which were also modulated on a trial-by-trial basis when switching between rules. Taken together, the results shed further light on the dependency of cross-modal action control on both action- and rule-related memory retrieval processes.
Out of the corner of the driver's eye: Peripheral processing of hazards in static traffic scenes
(2016)
Effective gaze control in traffic, based on peripheral visual information, is important for avoiding hazards. Whereas previous hazard perception research mainly focused on skill-component development (e.g., orientation and hazard processing), little is known about the role and dynamics of peripheral vision in hazard perception. We analyzed eye movement data from a study in which participants scanned static traffic scenes containing medium-level versus dangerous hazards, focusing on characteristics of fixations prior to entering the hazard region. We found that initial saccade amplitudes into the hazard region were substantially longer for dangerous (vs. medium-level) hazards, irrespective of participants' driving expertise. An analysis of the temporal dynamics of this hazard-level-dependent saccade targeting distance effect revealed that peripheral hazard-level processing occurred around 200–400 ms into the fixation prior to entering the hazard region. An additional psychophysical hazard detection experiment, in which hazard eccentricity was manipulated, revealed better detection for dangerous (vs. medium-level) hazards in both central and peripheral vision. Furthermore, we observed a significant perceptual decline from center to periphery for medium-level (but not for highly dangerous) hazards. Overall, the results suggest that hazard processing is remarkably effective in peripheral vision and is utilized to guide the eyes toward potential hazards.
Cognitive theories on the causes of developmental dyslexia can be divided into language-specific and general accounts. While the former assume that words are special in that the associated processing problems are rooted in deficits of language-related cognition (e.g., phonology), the latter propose that dyslexia is instead rooted in a general impairment of cognitive (e.g., visual and/or auditory) processing streams. In the present study, we examined to what extent dyslexia (typically characterized by poor orthographic representations) may be associated with a general deficit in visual long-term memory (LTM) for details. We compared object- and detail-related visual LTM performance (and phonological skills) between dyslexic primary school children and IQ-, age-, and gender-matched controls. The results revealed that while the overall number of LTM errors was comparable between groups, dyslexic children exhibited a greater proportion of detail-related errors. The results suggest that not only phonological but also general visual resolution deficits in LTM may play an important role in developmental dyslexia.
Oculomotor dominance in multitasking: Mechanisms of conflict resolution in cross-modal action
(2014)
In daily life, eye movement control usually occurs in the context of concurrent action demands in other effector domains. However, little research has focused on understanding how such cross-modal action demands are coordinated, especially when conflicting information needs to be processed conjunctly in different action modalities. In two experiments, we address this issue by studying vocal responses in the context of spatially conflicting eye movements (Experiment 1) and in the context of spatially conflicting manual actions (Experiment 2, under controlled eye fixation conditions). Crucially, a comparison across experiments allows us to assess resource scheduling priorities among the three effector systems by comparing the same (vocal) response demands in the context of eye movements in contrast to manual responses. The results indicate that in situations involving response conflict, eye movements are prioritized over concurrent action demands in another effector system. This oculomotor dominance effect corroborates previous observations in the context of multiple action demands without spatial response conflict. Furthermore, and in line with recent theoretical accounts of parallel multiple action control, resource scheduling patterns appear to be flexibly adjustable based on the temporal proximity of the two actions that need to be performed.
When processing of two tasks overlaps, performance is known to suffer. In the well-established psychological refractory period (PRP) paradigm, tasks are triggered by two stimuli with a short temporal delay (stimulus onset asynchrony; SOA), thereby allowing control of the degree of task overlap. A decrease of the SOA reliably yields longer reaction times (RTs) in the task associated with the second stimulus (Task 2), while performance in the other task (Task 1) remains largely unaffected. This Task 2-specific SOA effect is usually interpreted in terms of central capacity limitations. Specifically, it has been assumed that response selection in Task 2 is delayed, due to the allocation of less capacity, until this process has been completed in Task 1. Recently, another important factor determining task prioritization has been proposed—namely, the particular effector systems associated with tasks. Here, we studied both sources of task prioritization simultaneously by systematically combining three different effector systems (pairwise combinations of oculomotor, vocal, and manual responses) in the PRP paradigm. Specifically, we asked whether task order-based task prioritization (the SOA effect) is modulated as a function of the Task 2 effector system. The results indicate a modulation of SOA effects when the same (oculomotor) Task 1 is combined with a vocal versus a manual Task 2. This is incompatible with the assumption that SOA effects are solely determined by Task 1 response selection duration. Instead, the results support the view that dual-task processing bottlenecks are resolved by establishing a capacity allocation scheme fed by multiple input factors, including attentional weights associated with particular effector systems.
Previous research has shown that the simultaneous execution of two actions (instead of only one) is not necessarily more difficult but can actually be easier (less error-prone), in particular when executing one action requires the simultaneous inhibition of another action. Corresponding inhibitory demands are particularly challenging when the to-be-inhibited action is highly prepotent (i.e., characterized by a strong urge to be executed). Here, we study a range of important potential sources of such prepotency. Building on a previously established paradigm to elicit dual-action benefits, participants responded to stimuli with single actions (either manual button press or saccade) or dual actions (button press and saccade). Crucially, we compared blocks in which these response demands were randomly intermixed (mixed blocks) with pure blocks involving only one type of response demand. The results highlight the impact of global (action-inherent) sources of action prepotency, as reflected in more pronounced inhibitory failures in saccade vs. manual control, but also more local (transient) sources of influence, as reflected in a greater probability of inhibition failures following trials that required the to-be-inhibited type of action. In addition, sequential analyses revealed that inhibitory control (including its failure) is exerted at the level of response modality representations, not at the level of fully specified response representations. In sum, the study highlights important preconditions and mechanisms underlying the observation of dual-action benefits.
In task-switching studies, performance is typically worse in task-switch trials than in task-repetition trials. These switch costs are often asymmetrical, a phenomenon that has been explained by referring to a dominance of one task over the other. Previous studies also indicated that response modalities associated with two tasks may be considered as integral components for defining a task set. However, a systematic assessment of the role of response modalities in task switching is still lacking: Are some response modalities harder to switch to than others? The present study systematically examined switch costs when combining tasks that differ only with respect to their associated effector systems. In Experiment 1, 16 participants switched (in unpredictable sequence) between oculomotor and vocal tasks. In Experiment 2, 72 participants switched (in pairwise combinations) between oculomotor, vocal, and manual tasks. We observed systematic performance costs when switching between response modalities under otherwise constant task features and could thereby replicate previous observations of response modality switch costs. However, we did not observe any substantial switch-cost asymmetries. As previous studies using temporally overlapping dual-task paradigms found substantial prioritization effects (in terms of asymmetric costs) especially for oculomotor tasks, the present results suggest different underlying processes in sequential task switching than in simultaneous multitasking. While more research is needed to further substantiate a lack of response modality switch-cost asymmetries in a broader range of task switching situations, we suggest that task-set representations related to specific response modalities may exhibit rapid decay.
Data analytics as a field is currently at a crucial point in its development: commoditization is taking place in the context of increasing amounts of data, greater user diversity, and automated analysis solutions, the latter potentially eliminating the need for expert analysts. A central hypothesis of the present paper is that data visualizations should be adapted to both the user and the context. This idea was initially addressed in Study 1, which demonstrated substantial interindividual variability among a group of experts when freely choosing an option to visualize data sets. To lay the theoretical groundwork for a systematic, taxonomic approach, a user model combining user traits, states, strategies, and actions was proposed and further evaluated empirically in Studies 2 and 3. The results implied that, for adapting to user traits, statistical expertise is a relevant dimension that should be considered. Additionally, for adapting to user states, different user intentions, such as monitoring and analysis, should be accounted for. These results were used to develop a taxonomy that adapts visualization recommendations to these (and other) factors. A preliminary attempt to validate the taxonomy in Study 4 tested its visualization recommendations with a group of experts. While the corresponding results were somewhat ambiguous overall, some aspects nevertheless supported the claim that a user-adaptive data visualization approach based on the principles outlined in the taxonomy can indeed be useful. While the present approach to user adaptivity is still in its infancy and should be extended (e.g., by testing more participants), the general approach appears to be very promising.
This study investigates the sense of agency (SoA) for saccades with implicit and explicit agency measures. In two eye tracking experiments, participants moved their eyes towards on-screen stimuli that subsequently changed color. Participants then either reproduced the temporal interval between saccade and color change (Experiment 1) or reported the time points of these events with an auditory Libet clock (Experiment 2) to measure temporal binding effects as implicit indices of SoA. Participants were either led to believe that they exerted control over the color change or not (agency manipulation). Explicit ratings indicated that the manipulation of causal beliefs, and hence agency, was successful. However, temporal binding was only evident for caused effects, and only when a sufficiently sensitive procedure was used (the auditory Libet clock). This suggests a weaker link between temporal binding and SoA than previously proposed. The results also provide evidence for a relatively fast acquisition of a sense of agency for previously never-experienced types of action-effect associations. This indicates that the underlying processes of action control may be rooted in more intricate and adaptable cognitive models than previously thought. Oculomotor SoA, as addressed in the present study, presumably represents an important cognitive foundation of gaze-based social interaction (social sense of agency) and of gaze-based human-machine interaction scenarios.
Public significance statement: In this study, the sense of agency for eye movements in the non-social domain is investigated in detail, using both explicit and implicit measures. It thereby offers novel and specific insights into the sense of agency for effects induced by eye movements, as well as broader insights into agency pertaining to entirely newly acquired types of action-effect associations. Oculomotor sense of agency presumably represents an important cognitive foundation of gaze-based social interaction (social agency) and of gaze-based human-machine interaction scenarios. Due to peculiarities of the oculomotor domain, such as the varying degree of volitional control, eye movements could provide new information regarding more general theories of sense of agency in future research.
An important cognitive requirement in multitasking is deciding how multiple tasks should be temporally scheduled (task order control). Specifically, task order switches (vs. repetitions) yield performance costs (i.e., task order switch costs), suggesting that task order scheduling is a vital part of configuring a task set. Recently, it has been shown that this process takes specific task-related characteristics into account: task order switches were easier when switching to a preferred (vs. non-preferred) task order. Here, we ask whether another determinant of task order control, namely the phenomenon that a task order switch in a previous trial facilitates a task order switch in the current trial (i.e., a sequential modulation of task order switch effects), also takes task-specific characteristics into account. Across three experiments involving task order switches between a preferred (dominant oculomotor task prior to non-dominant manual/pedal task) and a non-preferred (vice versa) order, we replicated the finding that task order switching (in Trial N) is facilitated after a previous switch (vs. repetition) in task order (in Trial N - 1). There was no substantial evidence for a difference between switching to the preferred versus the non-preferred order, either overall or in separate analyses of the dominant oculomotor task and the non-dominant manual task. This indicates different mechanisms underlying the control of immediate task order configuration (indexed by task order switch costs) and the sequential modulation of these costs based on the task order transition type in the previous trial.