Action binding refers to the observation that the perceived time of an action (e.g., a keypress) is shifted towards the distal sensory feedback (usually a sound) triggered by that action. Surprisingly, the role of somatosensory feedback in this phenomenon has been largely ignored. We fill this gap by showing that somatosensory feedback, indexed by keypress peak force, is functional in judging keypress time. Specifically, the strength of somatosensory feedback is positively correlated with reported keypress time when the keypress is not associated with auditory feedback, and negatively correlated when the keypress triggers auditory feedback. This result is consistent with the view that reported keypress time is shaped by sensory information from different modalities. Moreover, individual differences in action binding can be explained by the weighting of sensory information between somatosensory and auditory feedback. At the group level, increasing the strength of somatosensory feedback can decrease action binding to a level that is no longer statistically detectable. Therefore, a multisensory information integration account (between somatosensory and auditory inputs) explains action binding at both the group level and the individual level.
A commentary on: Feeling the Conflict: The Crucial Role of Conflict Experience in Adaptation, by Desender, K., Van Opstal, F., and Van den Bussche, E. (2014). Psychol. Sci. 25, 675–683. doi: 10.1177/0956797613511468
Conflict adaptation in masked priming has recently been proposed to rely not on successful conflict resolution but rather on conflict experience (Desender et al., 2014). We re-assessed this proposal in a direct replication and also tested a potential confound due to conflict strength. The data supported this alternative view, but also failed to replicate the basic conflict adaptation effects of the original study despite considerable power.
It has been argued that several reported non-visual influences on perception cannot be truly perceptual. If they were, they should affect the perception of target objects and of the reference objects used to express perceptual judgments, and thus cancel each other out. This reasoning presumes that non-visual manipulations impact target objects and comparison objects equally. In the present study we show that equalizing a body-related manipulation between target objects and reference objects essentially abolishes the impact of that manipulation, as it should if that manipulation actually altered perception. Moreover, the manipulation affects judgements when applied only to the target object but not to the reference object, and that impact reverses when it is applied only to the reference object but not to the target object. A perceptual explanation predicts this reversal, whereas explanations in terms of post-perceptual response biases or demand effects do not. Altogether, these results suggest that body-related influences on perception cannot as a whole be attributed to extra-perceptual factors.
Pointing is a ubiquitous means of communication. Nevertheless, observers systematically misinterpret the location indicated by pointers. We examined whether these misunderstandings result from the typically different viewpoints of pointers and observers. Participants either pointed themselves or interpreted points while assuming the pointer's perspective or that of a typical observer in a virtual reality environment. The perspective had a strong effect on the relationship between pointing gestures and referents, whereas the task had only a minor influence. This suggests that misunderstandings between pointers and observers primarily result from their typically different viewpoints.
Action planning can be construed as the temporary binding of features of perceptual action effects. While previous research demonstrated binding for task-relevant, body-related effect features, the role of task-irrelevant or environment-related effect features in action planning is less clear. Here, we studied whether task relevance or body-relatedness determines feature binding in action planning. Participants planned an action A but, before executing it, initiated an intermediate action B. Each action relied on a body-related effect feature (index vs. middle finger movement) and an environment-related effect feature (cursor movement towards vs. away from a reference object). In Experiments 1 and 2, both effects were task-relevant. Performance in action B suffered from partial feature overlap with action A compared to full feature repetition or alternation, which is in line with binding of both features while planning action A. Importantly, this cost disappeared when all features were available but only body-related features were task-relevant (Experiment 3). When only the environment-related effect of action A was known in advance, action B benefitted when it aimed at the same (vs. a different) environment-related effect (Experiment 4). Consequently, the present results support the idea that task relevance determines whether body-related and environment-related effect features are bound during action planning, whereas pre-activation of environment-related features without binding them primes feature-overlapping actions.
It has been proposed that statistical integration of multisensory cues may be a suitable framework to explain temporal binding, that is, the finding that causally related events such as an action and its effect are perceived to be shifted towards each other in time. A multisensory approach to temporal binding construes actions and effects as individual sensory signals, which are each perceived with a specific temporal precision. When they are integrated into one multimodal event, like an action-effect chain, the extent to which they affect this event's perception depends on their relative reliability. We test whether this assumption holds true in a temporal binding task by manipulating certainty of actions and effects. Two experiments suggest that a relatively uncertain sensory signal in such action-effect sequences is shifted more towards its counterpart than a relatively certain one. This was especially pronounced for temporal binding of the action towards its effect but could also be shown for effect binding. Other conceptual approaches to temporal binding cannot easily explain these results, and the study therefore adds to the growing body of evidence endorsing a multisensory approach to temporal binding.
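The reliability-weighted integration invoked here can be sketched with the standard maximum-likelihood cue-combination formula (our illustration; the abstract itself does not spell it out). Assuming an action perceived at time $t_a$ with variance $\sigma_a^2$ and its effect perceived at time $t_e$ with variance $\sigma_e^2$, the integrated estimate is

```latex
\hat{t} = w_a t_a + w_e t_e,
\qquad
w_a = \frac{\sigma_e^{2}}{\sigma_a^{2} + \sigma_e^{2}},
\qquad
w_e = \frac{\sigma_a^{2}}{\sigma_a^{2} + \sigma_e^{2}}
```

Under this scheme, the less reliable signal (the one with the larger variance) receives the smaller weight, so its perceived timing is pulled more strongly towards its counterpart, which matches the pattern the abstract reports for relatively uncertain action and effect signals.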
Movements of a tool typically diverge from the movements of the hand manipulating that tool, such as when operating a pivotal lever where tool and hand move in opposite directions. Previous studies suggest that humans are often unaware of the position or movements of their effective body part (mostly the hand) in such situations. It has been suggested that this might be due to a "haptic neglect" of bodily sensations that decreases the interference between representations of body and tool movements. In principle, however, this interference could also be decreased by neglecting sensations regarding the tool and focusing instead on body movements. In most tool-use situations the tool-related action effects are task-relevant, so suppressing body-related rather than tool-related sensations is more beneficial for successful goal achievement; here, we manipulated this task relevance in a controlled experiment. The results showed that, in situations where effect representations interfere, visual, tool-related effect representations can be suppressed just as proprioceptive, body-related ones can, given that the task relevance of body-related effects is increased relative to tool-related ones.
The present study examined the perceptual consequences of learning arbitrary mappings between visual stimuli and hand movements. Participants moved a small cursor with their unseen hand twice to a large visual target object and then judged either the relative distance of the hand movements (Exp. 1) or the relative number of dots that appeared in the two consecutive target objects (Exp. 2), using a two-alternative forced-choice method. During a learning phase, the number of dots that appeared in the target object was correlated with the hand movement distance. In Exp. 1, we observed that after participants were trained to expect many dots with larger hand movements, they judged movements made to targets with many dots as longer than the same movements made to targets with few dots. In Exp. 2, another group of participants who received the same training judged the same number of dots as smaller when larger rather than smaller hand movements were executed. When many dots were paired with smaller hand movements during the learning phase of both experiments, no significant changes in the perception of movements or of visual stimuli were observed. These results suggest that changes in the perception of body states and of external objects can arise when certain body characteristics co-occur with certain characteristics of the environment. They also indicate that the (dis)integration of multimodal perceptual signals depends not only on the physical or statistical relation between these signals, but also on which signal is currently attended.
Perceptual changes that an agent produces by efferent activity can become part of the agent's minimal self. Yet, in human agents, efferent activities produce perceptual changes in various sensory modalities and in various temporal and spatial proximities. Some of these changes occur at the "biological" body, and they are to some extent conveyed by "private" sensory signals, whereas other changes occur in the environment of that biological body and are conveyed by "public" sensory signals. We discuss commonalities and differences of these signals for generating selfhood. We argue that, despite considerable functional overlap of these sensory signals in generating self-experience, there are reasons to tell them apart in theorizing and empirical research about the development of the self.
Visual perception of surfaces is of utmost importance in everyday life. It is therefore unsurprising that different surface structures evoke different visual impressions in the viewer, even if the material underlying these surface structures is the same. This issue is especially pressing for manufacturing processes in which more than one stakeholder is involved but the final product needs to meet certain criteria. A common practice for addressing such slight but perceivable differences in the visual appearance of structured surfaces is to have trained evaluators assess the samples and assign a pass or fail. However, this process is both time consuming and cost intensive. We therefore conducted two studies to analyze the relationship between physical surface structure parameters and participants' visual assessment of the samples. The first experiment aimed at uncovering a relationship between physical roughness parameters and visual lightness perception, while the second experiment was designed to test participants' discrimination sensitivity across the range of stimuli. Perceived lightness and the measured surface roughness were nonlinearly related to the surface structure. Additionally, we found a linear relationship between the engraving parameter and physical brightness. Surface structure was an ideal predictor of perceived lightness, and participants discriminated equally well across the entire range of surface structures.