Perceptual changes that an agent produces by efferent activity can become part of the agent’s minimal self. Yet, in human agents, efferent activities produce perceptual changes in various sensory modalities and at various temporal and spatial proximities. Some of these changes occur at the “biological” body, and they are to some extent conveyed by “private” sensory signals, whereas other changes occur in the environment of that biological body and are conveyed by “public” sensory signals. We discuss commonalities and differences of these signals for generating selfhood. We argue that despite considerable functional overlap of these sensory signals in generating self-experience, there are reasons to tell them apart in theorizing and empirical research on the development of the self.
The present study explored the origin of perceptual changes repeatedly observed in the context of actions. In Experiment 1, participants tried to hit a circular target with a stylus movement under restricted feedback conditions. We measured the perception of target size during action planning and observed larger estimates for larger movement distances. In Experiment 2, we then tested the hypothesis that this action-specific influence on perception is due to changes in the allocation of spatial attention. For this purpose, we replaced the hitting task with conditions of focused and distributed attention and measured the perception of the former target stimulus. The results revealed changes in perceived stimulus size very similar to those observed in Experiment 1. These results indicate that action's effects on perception are rooted in changes of spatial attention.
It has been proposed that statistical integration of multisensory cues may be a suitable framework to explain temporal binding, that is, the finding that causally related events such as an action and its effect are perceived to be shifted towards each other in time. A multisensory approach to temporal binding construes actions and effects as individual sensory signals, which are each perceived with a specific temporal precision. When they are integrated into one multimodal event, like an action-effect chain, the extent to which they affect this event's perception depends on their relative reliability. We test whether this assumption holds true in a temporal binding task by manipulating certainty of actions and effects. Two experiments suggest that a relatively uncertain sensory signal in such action-effect sequences is shifted more towards its counterpart than a relatively certain one. This was especially pronounced for temporal binding of the action towards its effect but could also be shown for effect binding. Other conceptual approaches to temporal binding cannot easily explain these results, and the study therefore adds to the growing body of evidence endorsing a multisensory approach to temporal binding.
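In such accounts, the integrated temporal estimate is typically modeled as a reliability-weighted average in the spirit of maximum-likelihood cue combination. A minimal sketch of that scheme (the notation is chosen here for illustration and is not taken from the study):

\[
\hat{t} = w_A t_A + w_E t_E, \qquad
w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_E^2}, \qquad
w_E = 1 - w_A,
\]

where \(t_A\) and \(t_E\) are the unimodal temporal estimates of the action and the effect, and \(\sigma_A^2\) and \(\sigma_E^2\) their variances. Because each component is attracted towards the integrated estimate \(\hat{t}\), the less reliable signal (larger variance, smaller weight) shifts more towards its counterpart, which is the pattern the two experiments report.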
Previous research has revealed changes in the perception of objects due to changes in object-oriented actions. In the present study, we varied arm and finger postures in the context of a virtual reaching and grasping task and tested whether this manipulation can simultaneously affect the perceived size and distance of external objects. Participants manually controlled visual cursors, aiming to reach and enclose a distant target object, and judged the size and distance of this object. We observed that a visual-proprioceptive discrepancy introduced during the reaching part of the action simultaneously affected judgments of target distance and of target size (Experiment 1). A related variation applied to the grasping part of the action affected judgments of size, but not of distance, of the target (Experiment 2). These results indicate that perceptual effects observed in the context of actions can arise directly through sensory integration of multimodal redundant signals and indirectly through perceptual constancy mechanisms.
Movements of a tool typically diverge from the movements of the hand manipulating that tool, such as when operating a pivoting lever where tool and hand move in opposite directions. Previous studies suggest that humans are often unaware of the position or movements of their effective body part (mostly the hand) in such situations. This has been attributed to a "haptic neglect" of bodily sensations, which decreases the interference between representations of body and tool movements. In principle, however, this interference could also be decreased by neglecting sensations regarding the tool and focusing on body movements instead. In most tool-use situations, the tool-related action effects are task-relevant, so suppressing body-related rather than tool-related sensations is more beneficial for successful goal achievement; we therefore manipulated this task-relevance in a controlled experiment. The results showed that visual, tool-related effect representations can be suppressed just like proprioceptive, body-related ones in situations where effect representations interfere, provided that the task-relevance of body-related effects is increased relative to tool-related ones.
Action planning can be construed as the temporary binding of features of perceptual action effects. While previous research demonstrated binding for task-relevant, body-related effect features, the role of task-irrelevant or environment-related effect features in action planning is less clear. Here, we studied whether task-relevance or body-relatedness determines feature binding in action planning. Participants planned an action A but, before executing it, initiated an intermediate action B. Each action relied on a body-related effect feature (index vs. middle finger movement) and an environment-related effect feature (cursor movement towards vs. away from a reference object). In Experiments 1 and 2, both effects were task-relevant. Performance in action B suffered from partial feature overlap with action A compared to full feature repetition or alternation, which is in line with binding of both features while planning action A. Importantly, this cost disappeared when all features were available but only body-related features were task-relevant (Experiment 3). When only the environment-related effect of action A was known in advance, action B benefitted when it aimed at the same (vs. a different) environment-related effect (Experiment 4). Consequently, the present results support the idea that task relevance determines whether binding of body-related and environment-related effect features takes place, whereas pre-activation of environment-related features without binding primes feature-overlapping actions.
Spatial action–effect binding denotes the mutual attraction between the perceived position of an effector (e.g., one’s own hand) and a distal object that is controlled by this effector. Such spatial binding can be construed as an implicit measure of object ownership, that is, the extent to which a controlled object is experienced as belonging to one’s own body. The current study investigated how different transformations of hand movements (body-internal action component) into movements of a visual object (body-external action component) affect spatial action–effect binding, and thus implicit object ownership. In brief, participants had to bring a cursor on the computer screen into a predefined target position by moving their occluded hand on a tablet and had to estimate their final hand position. In Experiment 1, we found a significantly lower drift of the proprioceptive position of the hand towards the visual object when hand movements were transformed into laterally inverted cursor movements, rather than cursor movements in the same direction. Experiment 2 showed that this reduction reflected an elimination of spatial action–effect binding in the inverted condition. The results are discussed with respect to the prerequisites for an experience of ownership over artificial, noncorporeal objects. Our results show that predictability of an object movement alone is not a sufficient condition for ownership because, depending on the type of transformation, integration of the effector and a distal object can be fully abolished even under conditions of full controllability.
Action binding refers to the observation that the perceived time of an action (e.g., a keypress) is shifted towards the distal sensory feedback (usually a sound) triggered by that action. Surprisingly, the role of somatosensory feedback for this phenomenon has been largely ignored. We fill this gap by showing that somatosensory feedback, indexed by keypress peak force, is functional in judging keypress time. Specifically, the strength of somatosensory feedback is positively correlated with reported keypress time when the keypress is not associated with auditory feedback and negatively correlated when the keypress triggers auditory feedback. This result is consistent with the view that the reported keypress time is shaped by sensory information from different modalities. Moreover, individual differences in action binding can be explained by the weighting of sensory information between somatosensory and auditory feedback. At the group level, increasing the strength of somatosensory feedback can decrease action binding to a level that is no longer statistically detectable. A multisensory information integration account (between somatosensory and auditory inputs) therefore explains action binding at both the group and the individual level.
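To make the weighting account concrete, here is a minimal sketch in Python. All numbers are hypothetical; sigma_soma and sigma_aud are assumed stand-ins for the temporal imprecision of the somatosensory and auditory signals, not values from the study:

import math  # not strictly needed; kept for clarity that this is plain Python

# Reliability-weighted estimate of reported keypress time (ms), in the
# spirit of maximum-likelihood cue combination. All values are illustrative.
def perceived_keypress_time(t_press, t_tone, sigma_soma, sigma_aud):
    # The weight of the somatosensory estimate grows with its precision.
    w_soma = (1 / sigma_soma**2) / (1 / sigma_soma**2 + 1 / sigma_aud**2)
    return w_soma * t_press + (1 - w_soma) * t_tone

t_press, t_tone = 0.0, 250.0  # tone follows the keypress by 250 ms

# Weak somatosensory feedback (low peak force): strong shift towards the tone.
print(perceived_keypress_time(t_press, t_tone, sigma_soma=80.0, sigma_aud=40.0))  # 200.0

# Strong somatosensory feedback (high peak force): action binding shrinks.
print(perceived_keypress_time(t_press, t_tone, sigma_soma=40.0, sigma_aud=80.0))  # 50.0

Under this scheme, strengthening the somatosensory signal increases its weight and pulls the reported keypress time back towards the physical keypress, mirroring the group-level reduction in action binding described above.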
Pointing is a ubiquitous means of communication. Nevertheless, observers systematically misinterpret the locations indicated by pointers. We examined whether these misunderstandings result from the typically different viewpoints of pointers and observers. In a virtual reality environment, participants either pointed themselves or interpreted pointing gestures while assuming either the pointer’s or a typical observer’s perspective. The perspective had a strong effect on the relationship between pointing gestures and referents, whereas the task had only a minor influence. This suggests that misunderstandings between pointers and observers primarily result from their typically different viewpoints.
Design choices: Empirical recommendations for designing two-dimensional finger-tracking experiments (2020)
The continuous tracking of mouse or finger movements has become an increasingly popular research method for investigating cognitive and motivational processes such as decision-making, action planning, and executive functions. In the present paper, we evaluate and discuss how seemingly trivial design choices of researchers may impact participants’ behavior and, consequently, a study’s results. We first provide a thorough comparison of mouse- and finger-tracking setups on the basis of a Simon task. We then vary a comprehensive set of design factors, including spatial layout, movement extent, time of stimulus onset, size of the target areas, and hit detection in a finger-tracking variant of this task. We explore the impact of these variations on a broad spectrum of movement parameters that are typically used to describe movement trajectories. Based on our findings, we suggest several recommendations for best practice that avoid some of the pitfalls of the methodology. Keeping these recommendations in mind will allow for informed decisions when planning and conducting future tracking experiments.
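For readers new to the method: two of the movement parameters most commonly derived from such trajectories are the signed maximum absolute deviation (MAD) and the area under the curve (AUC), both taken relative to the straight line from start to target. The following is a generic sketch of these computations (Python with NumPy; illustrative, not code from the paper):

import numpy as np

def trajectory_measures(xy):
    """Compute MAD and AUC for one trajectory.

    xy: (n, 2) array of time-ordered finger or cursor positions.
    Both measures are signed and taken relative to the straight
    line from the first to the last sample.
    """
    xy = np.asarray(xy, dtype=float)
    start, end = xy[0], xy[-1]
    line = end - start
    line_len = np.linalg.norm(line)
    rel = xy - start
    # Signed perpendicular distance of each sample from the ideal line
    # (z-component of the 2-D cross product, scaled to unit line length).
    dev = (line[0] * rel[:, 1] - line[1] * rel[:, 0]) / line_len
    mad = dev[np.abs(dev).argmax()]   # signed maximum absolute deviation
    prog = rel @ line / line_len      # progress along the ideal line
    auc = np.trapz(dev, prog)         # signed area under the curve
    return mad, auc

In practice, trajectories are usually time-normalized (e.g., resampled to a fixed number of steps) before such measures are aggregated across trials and participants.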