Action binding refers to the observation that the perceived time of an action (e.g., a keypress) is shifted towards the distal sensory feedback (usually a sound) triggered by that action. Surprisingly, the role of somatosensory feedback in this phenomenon has been largely ignored. We fill this gap by showing that somatosensory feedback, indexed by keypress peak force, is functional in judging keypress time. Specifically, the strength of somatosensory feedback is positively correlated with reported keypress time when the keypress is not associated with auditory feedback, and negatively correlated when the keypress triggers auditory feedback. This result is consistent with the view that reported keypress time is shaped by sensory information from different modalities. Moreover, individual differences in action binding can be explained by the weighting of sensory information between somatosensory and auditory feedback. At the group level, increasing the strength of somatosensory feedback can reduce action binding to a level that is no longer statistically detectable. Therefore, a multisensory information integration account (between somatosensory and auditory inputs) explains action binding at both the group and the individual level.
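The weighting account described above can be illustrated with the standard reliability-weighted (maximum-likelihood) cue-integration scheme, in which each timing cue is weighted by its inverse variance. The sketch below is illustrative only: the function name and all numbers are hypothetical and not taken from the study; it simply shows how stronger (less variable) somatosensory feedback pulls the fused keypress-time estimate away from the later auditory feedback, shrinking action binding.

```python
def integrate(t_somato, var_somato, t_audio, var_audio):
    """Reliability-weighted fusion of two timing cues (hypothetical sketch).

    Each cue is weighted by its inverse variance; the fused estimate is
    more precise than either cue alone.
    """
    w_somato = (1 / var_somato) / (1 / var_somato + 1 / var_audio)
    w_audio = 1 - w_somato
    t_fused = w_somato * t_somato + w_audio * t_audio
    var_fused = 1 / (1 / var_somato + 1 / var_audio)
    return t_fused, var_fused

# Illustrative values (ms): the keypress occurs at 0, the tone at 250.
t_press, t_tone = 0.0, 250.0
# Weak press: noisy somatosensory cue, reported time shifts far toward the tone.
t_weak, _ = integrate(t_press, 80.0**2, t_tone, 40.0**2)    # -> 200.0
# Strong press: reliable somatosensory cue, much smaller shift (less binding).
t_strong, _ = integrate(t_press, 20.0**2, t_tone, 40.0**2)  # -> 50.0
```

Under this scheme, individual differences in action binding fall out of individual cue reliabilities, matching the abstract's claim that a somatosensory–auditory weighting explains binding at both levels.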
A commentary on: Feeling the Conflict: The Crucial Role of Conflict Experience in Adaptation, by Desender, K., Van Opstal, F., and Van den Bussche, E. (2014). Psychol. Sci. 25, 675–683. doi: 10.1177/0956797613511468
Conflict adaptation in masked priming has recently been proposed to rely not on successful conflict resolution but rather on conflict experience (Desender et al., 2014). We re-assessed this proposal in a direct replication and also tested a potential confound due to conflict strength. The data supported this alternative view, but also failed to replicate basic conflict adaptation effects of the original study despite considerable power.
When telling a lie, humans might engage in stronger monitoring of their behavior than when telling the truth. Initial evidence has indeed pointed towards a stronger recruitment of capacity-limited monitoring processes in dishonest than in honest responding, conceivably resulting from the necessity to overcome automatic tendencies to respond honestly. Previous results suggested that monitoring is confined to response execution, however, whereas the current study goes beyond these findings by specifically probing for post-execution monitoring. Participants responded (dis)honestly to simple yes/no questions in a first task and switched to an unrelated second task after a response–stimulus interval of 0 ms or 1000 ms. Dishonest responses prolonged response times not only in Task 1, but also in Task 2 when the response–stimulus interval was short. These findings support the assumption that increased monitoring of dishonest responses extends beyond mere response execution, a mechanism that is possibly tuned to assessing the successful completion of a dishonest act.
Little is known about the cognitive background of unconscious visuomotor control of complex sports movements. Therefore, we investigated the extent to which novices and skilled high-jump athletes are able to identify visually presented body postures of the high jump unconsciously. We also asked whether the manner of processing differs (qualitatively or quantitatively) between these groups as a function of their motor expertise. A priming experiment with stimuli that could not be consciously perceived was designed to determine whether subliminal priming of movement phases (same vs. different movement phases) or of temporal order (i.e., natural vs. reversed movement order) affects target processing. Participants had to decide which phase of the high jump (approach vs. flight phase) a target photograph was taken from. We found a main effect of temporal order for skilled athletes, that is, faster reaction times for prime-target pairs that reflected the natural movement order as opposed to the reversed movement order. This result indicates that temporal-order information pertaining to the domain of expertise plays a critical role in athletes’ perceptual capacities. For novices, data analyses revealed an interaction between temporal order and movement phases: only the reversed movement order of flight-approach pictures increased processing time. Taken together, the results suggest that the structure of cognitive movement representation modulates unconscious processing of movement pictures and point to a functional role of motor representations in visual perception.
Pointing is a ubiquitous means of communication. Nevertheless, observers systematically misinterpret the location indicated by pointers. We examined whether these misunderstandings result from the typically different viewpoints of pointers and observers. Participants either pointed themselves or interpreted points while assuming the pointer’s or a typical observer’s perspective in a virtual reality environment. Perspective had a strong effect on the relationship between pointing gestures and referents, whereas the task had only a minor influence. This suggests that misunderstandings between pointers and observers primarily result from their typically different viewpoints.
Models of eye-movement control distinguish between different control levels, ranging from automatic (bottom-up, stimulus-driven selection) and automatized (based on well-learned routines) to voluntary (top-down, goal-driven selection, e.g., based on instructions). However, one type of voluntary control has so far been examined only in the manual and not in the oculomotor domain, namely free-choice selection among arbitrary targets, that is, targets that are of equal interest from both a bottom-up and a top-down processing perspective. Here, we ask which target features (identity- or location-related) are used to determine such oculomotor free-choice behavior. In two experiments, participants executed a saccade to one of four peripheral targets in three different choice conditions: unconstrained free choice, constrained free choice based on target identity (color), and constrained free choice based on target location. The analysis of choice frequencies revealed that unconstrained free-choice selection closely resembled constrained choice based on target location. The results suggest that free-choice oculomotor control is mainly guided by spatial (location-based) target characteristics. We explain these results by assuming that participants tend to avoid the less parsimonious recoding of target-identity representations into spatial codes, the latter being a necessary prerequisite for configuring oculomotor commands.
We examined whether movement costs, as defined by movement magnitude, have an impact on distance perception in near space. In Experiment 1, participants were given a numerical cue regarding the amplitude of a hand movement to be carried out. Before executing the movement, they had to judge the length of a visual distance. Judged visual distance increased with the amplitude of the concurrently prepared hand movement. In Experiment 2, in which numerical cues were merely memorized without concurrent movement planning, this general increase of judged distance with cue size was not observed. The results of these experiments indicate that visual perception of near space is specifically affected by the costs of planned hand movements.
The present study explored the origin of perceptual changes repeatedly observed in the context of actions. In Experiment 1, participants tried to hit a circular target with a stylus movement under restricted feedback conditions. We measured the perception of target size during action planning and observed larger estimates for larger movement distances. In Experiment 2, we then tested the hypothesis that this action-specific influence on perception is due to changes in the allocation of spatial attention. For this purpose, we replaced the hitting task with conditions of focused and distributed attention and measured the perception of the former target stimulus. The results revealed changes in perceived stimulus size very similar to those observed in Experiment 1. These results indicate that action’s effects on perception are rooted in changes of spatial attention.
The present study examined the perceptual consequences of learning arbitrary mappings between visual stimuli and hand movements. Participants moved a small cursor with their unseen hand twice to a large visual target object and then judged either the relative distance of the hand movements (Exp. 1) or the relative number of dots that appeared in the two consecutive target objects (Exp. 2) using a two-alternative forced-choice method. During a learning phase, the number of dots that appeared in the target object was correlated with hand movement distance. In Exp. 1, we observed that after participants were trained to expect many dots with larger hand movements, they judged movements made to targets with many dots as being longer than the same movements made to targets with few dots. In Exp. 2, another group of participants who received the same training judged the same number of dots as smaller when larger rather than smaller hand movements were executed. When many dots were paired with smaller hand movements during the learning phase of both experiments, no significant changes in the perception of movements or of visual stimuli were observed. These results suggest that changes in the perception of body states and of external objects can arise when certain body characteristics co-occur with certain characteristics of the environment. They also indicate that the (dis)integration of multimodal perceptual signals depends not only on the physical or statistical relation between these signals, but also on which signal is currently attended.