Filtered results: 20 publications from the Institut für Psychologie (full text available, all in English).
When the processing of two tasks overlaps, performance is known to suffer. In the well-established psychological refractory period (PRP) paradigm, tasks are triggered by two stimuli with a short temporal delay (stimulus onset asynchrony; SOA), thereby allowing control over the degree of task overlap. A decrease in SOA reliably yields longer response times (RTs) for the task associated with the second stimulus (Task 2), while performance in the other task (Task 1) remains largely unaffected. This Task 2-specific SOA effect is usually interpreted in terms of central capacity limitations. In particular, it has been assumed that response selection in Task 2 is delayed because it is allocated less capacity until the corresponding process in Task 1 has been completed. Recently, another important factor determining task prioritization has been proposed: the particular effector systems associated with the tasks. Here, we study both sources of task prioritization simultaneously by systematically combining three different effector systems (pairwise combinations of oculomotor, vocal, and manual responses) in the PRP paradigm. Specifically, we asked whether task-order-based prioritization (the SOA effect) is modulated as a function of the Task 2 effector system. The results indicate a modulation of SOA effects when the same (oculomotor) Task 1 is combined with a vocal versus a manual Task 2. This is incompatible with the assumption that SOA effects are solely determined by the duration of Task 1 response selection. Instead, the results support the view that dual-task processing bottlenecks are resolved by establishing a capacity allocation scheme fed by multiple input factors, including attentional weights associated with particular effector systems.
Data analytics as a field is currently at a crucial point in its development: commoditization is taking place in the context of increasing amounts of data, greater user diversity, and automated analysis solutions, the latter potentially eliminating the need for expert analysts. A central hypothesis of the present paper is that data visualizations should be adapted to both the user and the context. This idea was initially addressed in Study 1, which demonstrated substantial interindividual variability among a group of experts freely choosing how to visualize data sets. To lay the theoretical groundwork for a systematic, taxonomic approach, a user model combining user traits, states, strategies, and actions was proposed and evaluated empirically in Studies 2 and 3. The results implied that, for adapting to user traits, statistical expertise is a relevant dimension that should be considered; additionally, for adapting to user states, different user intentions, such as monitoring and analysis, should be accounted for. These results were used to develop a taxonomy that adapts visualization recommendations to these (and other) factors. A preliminary attempt to validate the taxonomy in Study 4 tested its visualization recommendations with a group of experts. While the corresponding results were somewhat ambiguous overall, some aspects nevertheless supported the claim that a user-adaptive data visualization approach based on the principles outlined in the taxonomy can indeed be useful. Although the present approach to user adaptivity is still in its infancy and should be extended (e.g., by testing more participants), the general approach appears to be very promising.
Interpreting gaze behavior is essential in evaluating interaction partners, yet the ‘semantics of gaze’ in dynamic interactions are still poorly understood. We aimed to comprehensively investigate effects of gaze behavior patterns in different conversation contexts, using a two-step, qualitative-quantitative procedure. Participants watched video clips of single persons listening to autobiographic narrations by another (invisible) person. The listener’s gaze behavior was manipulated in terms of gaze direction, frequency and direction of gaze shifts, and blink frequency; emotional context was manipulated through the valence of the narration (neutral/negative). In Experiment 1 (qualitative-exploratory), participants freely described which states and traits they attributed to the listener in each condition, allowing us to identify relevant aspects of person perception and to construct distinct rating scales that were implemented in Experiment 2 (quantitative-confirmatory). Results revealed systematic and differential meanings ascribed to the listener’s gaze behavior. For example, rapid blinking and fast gaze shifts were rated more negatively (e.g., restless and unnatural) than slower gaze behavior; downward gaze was evaluated more favorably (e.g., empathetic) than other gaze aversion types, especially in the emotionally negative context. Overall, our study contributes to a more systematic understanding of flexible gaze semantics in social interaction.
In this work, we evaluate the status of both theory and empirical evidence in the field of experimental rest-break research, based on a framework that combines mental-chronometry and psychometric-measurement theory. To this end, we (1) provide a taxonomy of rest breaks according to which empirical studies can be classified (e.g., by differentiating between long, short, and micro-rest breaks based on context and temporal properties). We then (2) evaluate the theorizing in both the basic and applied fields of research and explain how popular concepts (e.g., the ego depletion model, opportunity cost theory, attention restoration theory, action readiness, etc.) relate to each other in contemporary theoretical debates. Here, we highlight differences between these models in light of two symbolic categories, termed the resource-based and the satiation-based model, including aspects related to the dynamics and the control (strategic or non-strategic) mechanisms at work. Based on a critical assessment of existing methodological and theoretical approaches, we finally (3) provide a set of guidelines for both theory building and future empirical approaches to the experimental study of rest breaks. We conclude that psychometrically advanced and theoretically focused research on rest and recovery has the potential to provide a sound scientific basis for mitigating the adverse effects of ever-increasing task demands on performance and well-being in a multitasking world at work and leisure.
This pilot study examined the effect of cell-phone conversation on cognition using a continuous multitasking paradigm. Current theorizing argues that phone conversation affects behavior (e.g., driving) by interfering at the level of cognitive processes (not peripheral activity), implying an attentional-failure account. Within the framework of an intermittent spare–utilized capacity threading model, we examined the effect of aspects of (secondary-task) phone conversation on (primary-task) continuous arithmetic performance, asking whether phone use makes components of automatic and controlled information processing (i.e., easy vs. hard mental arithmetic) run more slowly or, alternatively, makes processing run less reliably albeit at the same speed. The results can be summarized as follows: While neither expecting a text message nor expecting an impending phone call had any detrimental effect on performance, active phone conversation was clearly detrimental to primary-task performance. Crucially, the decrement imposed by the secondary task (conversation) was not due to a constant slowdown but is better characterized as an occasional breakdown of information processing, which differentially affected automatic and controlled components of primary-task processing. In conclusion, these findings support the notion that phone conversation does not make individuals constantly slower but rather more vulnerable to attentional failures, and in this way hampers the stability of (primary-task) information processing.
Out of the corner of the driver's eye: Peripheral processing of hazards in static traffic scenes
(2016)
Effective gaze control in traffic, based on peripheral visual information, is important for avoiding hazards. Whereas previous hazard perception research mainly focused on skill-component development (e.g., orientation and hazard processing), little is known about the role and dynamics of peripheral vision in hazard perception. We analyzed eye movement data from a study in which participants scanned static traffic scenes containing medium-level versus dangerous hazards, focusing on characteristics of fixations prior to entering the hazard region. We found that initial saccade amplitudes into the hazard region were substantially longer for dangerous (vs. medium-level) hazards, irrespective of participants' driving expertise. An analysis of the temporal dynamics of this hazard-level-dependent saccade-targeting distance effect revealed that peripheral hazard-level processing occurred around 200–400 ms into the fixation prior to entering the hazard region. An additional psychophysical hazard detection experiment, in which hazard eccentricity was manipulated, revealed better detection of dangerous (vs. medium-level) hazards in both central and peripheral vision. Furthermore, we observed a significant perceptual decline from center to periphery for medium-level (but not for highly) dangerous hazards. Overall, the results suggest that hazard processing is remarkably effective in peripheral vision and is utilized to guide the eyes toward potential hazards.
Oculomotor dominance in multitasking: Mechanisms of conflict resolution in cross-modal action
(2014)
In daily life, eye movement control usually occurs in the context of concurrent action demands in other effector domains. However, little research has focused on understanding how such cross-modal action demands are coordinated, especially when conflicting information needs to be processed conjunctly in different action modalities. In two experiments, we address this issue by studying vocal responses in the context of spatially conflicting eye movements (Experiment 1) and in the context of spatially conflicting manual actions (Experiment 2, under controlled eye fixation conditions). Crucially, a comparison across experiments allows us to assess resource scheduling priorities among the three effector systems by comparing the same (vocal) response demands in the context of eye movements in contrast to manual responses. The results indicate that in situations involving response conflict, eye movements are prioritized over concurrent action demands in another effector system. This oculomotor dominance effect corroborates previous observations in the context of multiple action demands without spatial response conflict. Furthermore, and in line with recent theoretical accounts of parallel multiple action control, resource scheduling patterns appear to be flexibly adjustable based on the temporal proximity of the two actions that need to be performed.
A negative mood-congruent attention bias has been consistently observed, for example, in clinical studies on major depression. This bias is assumed to be dysfunctional in that it helps maintain a sad mood, whereas a potentially adaptive role has largely been neglected. Previous experiments involving sad mood induction techniques found a negative mood-congruent attention bias specifically for young individuals, explained by an adaptive need for information transfer in the service of mood regulation. In the present study, we investigated this attentional bias in typically developing children (aged 6–12 years) when happy and sad moods were induced. Crucially, we manipulated the age (adult vs. child) of the displayed pairs of facial expressions depicting sadness, anger, fear, and happiness. The results indicate that sad children indeed exhibited a mood-specific attention bias toward sad facial expressions. Additionally, this bias was more pronounced for adult faces. The results are discussed in the context of an information gain that should be stronger when looking at adult faces, owing to adults' more extensive life experience. These findings bear implications for both research methods and future interventions.
Background
While the coordination of oculomotor and manual behavior is essential for driving a car, surprisingly little is known about this interaction, especially in situations requiring a quick steering reaction. In the present study, we analyzed oculomotor gaze and manual steering behavior in approach and avoidance tasks. Three task blocks were implemented within a dynamic simulated driving environment requiring the driver either to steer away from/toward a visual stimulus or to switch between both tasks.
Results
Task blocks requiring task switches were associated with higher manual response times and increased error rates. Manual response times did not significantly differ depending on whether drivers had to steer away from vs. toward a stimulus, whereas oculomotor response times and gaze pattern variability were increased when drivers had to steer away from a stimulus compared to steering toward it.
Conclusion
The increased manual response times and error rates in mixed tasks indicate performance costs associated with cognitive flexibility, while the increased oculomotor response times and gaze pattern variability indicate a parsimonious cross-modal action control strategy (avoiding stimulus fixation prior to steering away from it) for the avoidance scenario. Several discrepancies between these results and typical eye–hand interaction patterns in basic laboratory research suggest that the specific goals and complex perceptual affordances associated with driving a vehicle strongly shape cross-modal control of behavior.
Human eye gaze conveys an enormous amount of socially relevant information, and the rapid assessment of gaze direction is of particular relevance in order to adapt behavior accordingly. Specifically, previous research demonstrated evidence for an advantage of processing direct (vs. averted) gaze. The present study examined discrimination performance for gaze direction (direct vs. averted) under controlled presentation conditions: Using a backward-masking gaze-discrimination task, photographs of faces with direct and averted gaze were briefly presented, followed by a mask stimulus. Additionally, effects of facial context on gaze discrimination were assessed by either presenting gaze direction in isolation (i.e., by only showing the eye region) or in the context of an upright or inverted face. Across three experiments, we consistently observed a facial context effect with highest discrimination performance for faces presented in upright position, lower performance for inverted faces, and lowest performance for eyes presented in isolation. Additionally, averted gaze was generally responded to faster and with higher accuracy than direct gaze, indicating an averted-gaze advantage. Overall, the results suggest that direct gaze is not generally associated with processing advantages, thereby highlighting the important role of presentation conditions and task demands in gaze perception.