Arrow cues and other overlearned spatial symbols automatically orient attention according to their spatial meaning. This renders them similar to exogenous cues that occur at the stimulus location. Exogenous cues trigger shifts of attention even when they are presented subliminally. Here, we investigate to what extent the mechanisms underlying the orienting of attention by exogenous cues and by arrow cues are comparable by analyzing the effects of visible and masked arrow cues on attention. In Experiment 1, we presented arrow cues with an overall validity of 50%. Visible cues, but not masked cues, led to shifts of attention. In Experiment 2, the arrow cues had an overall validity of 80%. Now both visible and masked arrows led to shifts of attention. This is in line with findings that subliminal exogenous cues capture attention only in a top-down contingent manner, that is, when the cues fit the observer’s intentions.
We examined whether movement costs, as defined by movement magnitude, have an impact on distance perception in near space. In Experiment 1, participants were given a numerical cue regarding the amplitude of a hand movement to be carried out. Before executing the movement, they had to judge the length of a visual distance. The larger the amplitude of the concurrently prepared hand movement, the larger these visual distances were judged to be. In Experiment 2, in which numerical cues were merely memorized without concurrent movement planning, this general increase of judged distance with cue size was not observed. The results of these experiments indicate that visual perception of near space is specifically affected by the costs of planned hand movements.
Anticipating where an event will occur enables us to respond instantaneously to events that occur at the expected location. Here we investigated whether such spatial anticipations can be triggered by symbolic information that participants cannot consciously see. In two experiments involving a Posner cueing task and a visual search task, a central cue informed participants about the likely location of the next target stimulus. In half of the trials, this cue was rendered invisible by pattern masking. In both experiments, visible cues led to cueing effects, that is, faster responses after valid compared to invalid cues. Importantly, even masked cues caused cueing effects, though to a lesser extent. Additionally, we analyzed effects on attention that persist from one trial to the subsequent trial. We found that spatial anticipations are able to interfere with newly formed spatial anticipations and influence orienting of attention in the subsequent trial. When the preceding cue was visible, the corresponding spatial anticipation persisted to an extent that prevented a noticeable effect of masked cues. The effects of visible cues were likewise modulated by previous spatial anticipations, but were strong enough to also exert an impact on attention themselves. Altogether, the results suggest that spatial anticipations can be formed on the basis of unconscious stimuli, but that interfering influences like still-active spatial anticipations can suppress this effect.
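The cueing effect reported in this abstract is simply the mean reaction-time difference between invalid and valid trials. A minimal sketch of that computation, with entirely hypothetical reaction-time data (the abstract reports no raw values):

```python
# Hypothetical reaction times (ms) from a Posner-style cueing task.
# A positive cueing effect means faster responses after valid cues.
valid_rts = [412, 398, 405, 391, 420]    # target appeared at the cued location
invalid_rts = [455, 447, 462, 439, 451]  # target appeared at an uncued location

def mean(xs):
    """Arithmetic mean of a list of numbers."""
    return sum(xs) / len(xs)

# Cueing effect = mean RT on invalid trials minus mean RT on valid trials.
cueing_effect = mean(invalid_rts) - mean(valid_rts)
print(f"Cueing effect: {cueing_effect:.1f} ms")
```

With these made-up numbers the effect is 45.6 ms; in the reported experiments the same difference score would simply be smaller for masked than for visible cues.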
We used a new methodological approach to investigate whether top-down influences like expertise determine the extent of unconscious processing. This approach does not rely on preexisting differences between experts and novices, but instructs essentially the same task in a way that either addresses a domain of expertise or not. Participants were instructed either to perform a lexical decision task (expert task) or to respond to a combination of single features of word and non-word stimuli (novel task). The stimuli, and importantly also the mapping of responses to those stimuli, were exactly the same in both groups. We analyzed congruency effects of masked primes depending on the instructed task. Participants performing the expert task responded faster and made fewer errors when the prime was response congruent rather than incongruent. This effect was significantly reduced in the novel task, and even reversed when identical prime-target pairs were excluded. This indicates that the primes in the novel task had an effect on a perceptual level, but could not impact response activation. Overall, these results demonstrate an expertise-based top-down modulation of unconscious processing that cannot be explained by confounds that are otherwise inherent in comparisons between novices and experts.
Action feedback affects the perception of action-related objects beyond actual action success
(2014)
Successful object-oriented action typically increases the perceived size of aimed-at target objects. This phenomenon has been assumed to reflect an impact of an actor's current action ability on visual perception. The actual action ability and the explicit knowledge of the action outcome, however, were confounded in previous studies. The present experiments aimed at disentangling these two factors. Participants repeatedly tried to hit a circular target varying in size with a stylus movement under restricted feedback conditions. After each movement they were explicitly informed about their success in hitting the target and were then asked to judge target size. The explicit feedback regarding movement success was manipulated orthogonally to actual movement success. The results of three experiments indicated a bias to judge relatively small targets as larger, and relatively large targets as smaller, after explicit feedback of failure than after explicit feedback of success. This pattern was independent of actual motor performance, suggesting that actors' evaluations of their motor actions may by themselves bias the perception of target objects.
Recent research revealed that action video game players outperform non-players in a wide range of attentional, perceptual and cognitive tasks. Here we tested whether expertise in action video games is related to differences in the potential of briefly presented stimuli to bias behavior. In a response priming paradigm, participants classified four animal pictures functioning as targets as being smaller or larger than a reference frame. Before each target, one of the same four animal pictures was presented as a masked prime to influence participants' responses in a congruent or incongruent way. Masked primes induced congruence effects, that is, faster responses in congruent compared to incongruent conditions, indicating processing of hardly visible primes. Results also suggested that action video game players showed a larger congruence effect than non-players for 20 ms primes, whereas there was no group difference for 60 ms primes. In addition, there was a tendency for action video game players to detect masked primes better than non-players for some prime durations. Thus, action video game expertise may be accompanied by faster and more efficient processing of briefly presented visual stimuli.
It has been argued that several reported non-visual influences on perception cannot be truly perceptual. If they were, they should affect the perception of target objects and of the reference objects used to express perceptual judgments, and thus cancel each other out. This reasoning presumes that non-visual manipulations impact target objects and comparison objects equally. In the present study we show that equalizing a body-related manipulation between target objects and reference objects essentially abolishes the impact of that manipulation, as it should if the manipulation actually altered perception. Moreover, the manipulation has an impact on judgments when applied to only the target object but not to the reference object, and that impact reverses when applied to only the reference object but not to the target object. A perceptual explanation predicts this reversal, whereas explanations in terms of post-perceptual response biases or demand effects do not. Altogether these results suggest that body-related influences on perception cannot as a whole be attributed to extra-perceptual factors.
Action planning can be construed as the temporary binding of features of perceptual action effects. While previous research demonstrated binding for task-relevant, body-related effect features, the role of task-irrelevant or environment-related effect features in action planning is less clear. Here, we studied whether task-relevance or body-relatedness determines feature binding in action planning. Participants planned an action A, but before executing it initiated an intermediate action B. Each action relied on a body-related effect feature (index vs. middle finger movement) and an environment-related effect feature (cursor movement towards vs. away from a reference object). In Experiments 1 and 2, both effects were task-relevant. Performance in action B suffered from partial feature overlap with action A compared to full feature repetition or alternation, which is in line with binding of both features while planning action A. Importantly, this cost disappeared when all features were available but only body-related features were task-relevant (Experiment 3). When only the environment-related effect of action A was known in advance, action B benefitted when it aimed at the same (vs. a different) environment-related effect (Experiment 4). Consequently, the present results support the idea that task relevance determines whether binding of body-related and environment-related effect features takes place, whereas pre-activation of environment-related features without binding them primes feature-overlapping actions.
It has been proposed that statistical integration of multisensory cues may be a suitable framework to explain temporal binding, that is, the finding that causally related events such as an action and its effect are perceived to be shifted towards each other in time. A multisensory approach to temporal binding construes actions and effects as individual sensory signals, which are each perceived with a specific temporal precision. When they are integrated into one multimodal event, like an action-effect chain, the extent to which they affect this event's perception depends on their relative reliability. We test whether this assumption holds true in a temporal binding task by manipulating certainty of actions and effects. Two experiments suggest that a relatively uncertain sensory signal in such action-effect sequences is shifted more towards its counterpart than a relatively certain one. This was especially pronounced for temporal binding of the action towards its effect but could also be shown for effect binding. Other conceptual approaches to temporal binding cannot easily explain these results, and the study therefore adds to the growing body of evidence endorsing a multisensory approach to temporal binding.
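On the statistical account sketched in this abstract, each signal's influence on the integrated percept is proportional to its reliability. A standard way to formalize this (inverse-variance weighting, a common model in the cue-integration literature, not taken from this abstract) shows why the less certain signal shifts more toward its counterpart. All times and variances below are hypothetical:

```python
def integrate(est_a, var_a, est_b, var_b):
    """Inverse-variance (reliability-weighted) combination of two cues.

    Each cue's weight is the reciprocal of its variance, so the noisier
    cue contributes less to the fused estimate.
    """
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    return (w_a * est_a + w_b * est_b) / (w_a + w_b)

# Hypothetical perceived onsets (ms): an action at 0 ms, its tone effect at 250 ms.
action, effect = 0.0, 250.0

# Uncertain action (high variance), certain effect: the fused estimate lies
# close to the effect, i.e. the action signal is shifted strongly toward it.
fused_noisy_action = integrate(action, 400.0, effect, 100.0)  # -> 200.0

# Certain action, uncertain effect: the pattern reverses.
fused_noisy_effect = integrate(action, 100.0, effect, 400.0)  # -> 50.0
```

This is only a toy illustration of the weighting principle, not a full model of temporal binding; it shows the core prediction the experiments test, namely that the relatively uncertain signal is pulled further toward the other.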