In contrast to classical theories of cognitive control, recent evidence suggests that cognitive control and unconscious automatic processing influence each other. First, masked semantic priming, an index of unconscious automatic processing, depends on attention to semantics induced by a previously executed task. Second, cognitive control operations (e.g., implementation of task sets indicating how to process a particular stimulus) can be activated by masked task cues presented outside awareness. In this study, we combined both lines of research. In three experiments, we investigated whether induction tasks and the presentation of visible or masked task cues, which signal subsequent semantic or perceptual tasks but do not require execution of the induction task, comparably modulate masked semantic priming. In line with previous research, priming was consistently larger following execution of a semantic rather than a perceptual induction task. However, in Experiment 1 (masked letter cues) we observed a reversed priming pattern following task cues (larger priming following cues signaling perceptual tasks) compared to induction tasks. Experiment 2 (visible letter cues) and Experiment 3 (visible color cues) showed that this reversed priming pattern depended only on a priori associations between task cues and task elements (task-set dominance), but neither on awareness nor on the verbal or non-verbal format of the cues. These results indicate that task cues can modulate subsequent masked semantic priming through attentional mechanisms. Task-set dominance conceivably affects the time course of task-set activation and inhibition in response to task cues and thus the direction of their modulatory effects on priming.
Models of eye-movement control distinguish between different control levels, ranging from automatic (bottom-up, stimulus-driven selection) and automatized (based on well-learned routines) to voluntary (top-down, goal-driven selection, e.g., based on instructions). However, one type of voluntary control has so far been examined only in the manual domain and not in the oculomotor domain: free-choice selection among arbitrary targets, that is, targets that are of equal interest from both a bottom-up and a top-down processing perspective. Here, we ask which target features (identity- or location-related) are used to determine such oculomotor free-choice behavior. In two experiments, participants executed a saccade to one of four peripheral targets in three different choice conditions: unconstrained free choice, constrained free choice based on target identity (color), and constrained free choice based on target location. The analysis of choice frequencies revealed that unconstrained free-choice selection closely resembled constrained choice based on target location. The results suggest that free-choice oculomotor control is guided mainly by spatial (location-based) target characteristics. We explain these results by assuming that participants tend to avoid the less parsimonious recoding of target-identity representations into spatial codes, the latter being a necessary prerequisite for configuring oculomotor commands.