When telling a lie, humans might engage in stronger monitoring of their behavior than when telling the truth. Initial evidence has indeed pointed towards a stronger recruitment of capacity-limited monitoring processes in dishonest than in honest responding, conceivably resulting from the necessity to overcome automatic tendencies to respond honestly. Previous results suggested, however, that monitoring is confined to response execution; the current study goes beyond these findings by specifically probing for post-execution monitoring. Participants responded (dis)honestly to simple yes/no questions in a first task and switched to an unrelated second task after a response–stimulus interval of 0 ms or 1000 ms. Dishonest responses prolonged response times not only in Task 1 but also in Task 2 when the response–stimulus interval was short. These findings support the assumption that increased monitoring for dishonest responses extends beyond mere response execution, a mechanism that is possibly tuned to assess the successful completion of a dishonest act.
With ubiquitous computing, problems can be solved using more strategies than ever, though many strategies yield subpar performance. Here, we explored whether and how simple advice regarding when to use which strategy can improve performance. Specifically, we presented unfamiliar alphanumeric equations (e.g., A + 5 = F) and asked whether counting up the alphabet from the left-hand letter by the indicated number yields the right-hand letter. In an initial choice block, participants could engage in one of three cognitive strategies: (a) internal counting, (b) internal retrieval of previously generated solutions, or (c) computer-mediated external retrieval of solutions. Participants belonged to one of two groups: they were either instructed to try internal retrieval first before using external retrieval, or they received no specific use instructions. In a subsequent internal block with identical instructions for both groups, external retrieval was made unavailable. The ‘try internal retrieval first’ instruction in the choice block led to pronounced benefits (d = .76) in the internal block. Benefits were due to facilitated creation and retrieval of internal memory traces and possibly also due to improved strategy choice. These results showcase how simple strategy advice can greatly help users navigate cognitive environments. More generally, our results also imply that uninformed use of external tools (i.e., technology) carries the risk of failing to develop and use superior internal processing strategies.
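As a concrete illustration of the verification task described above, the following minimal Python sketch checks an alphanumeric equation by counting up the alphabet from the left-hand letter, mirroring the internal counting strategy; the function name and example calls are illustrative assumptions rather than material from the study.

```python
import string

def equation_is_true(left: str, addend: int, right: str) -> bool:
    """Check whether counting `addend` letters up from `left` yields `right`."""
    alphabet = string.ascii_uppercase
    target_index = alphabet.index(left.upper()) + addend
    return target_index < len(alphabet) and alphabet[target_index] == right.upper()

# Hypothetical example equations: "A + 5 = F" is true, "B + 3 = G" is false.
print(equation_is_true("A", 5, "F"))  # True  (A -> B, C, D, E, F)
print(equation_is_true("B", 3, "G"))  # False (B + 3 = E)
```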
We assessed the relation between creativity and unethical behaviour by manipulating the thinking style of participants (N = 450 adults) and measuring the impact of this manipulation on the prevalence of dishonest behaviour. Participants performed one of three inducer tasks: the alternative uses task to promote divergent thinking, the remote associates task to promote convergent thinking, or a simple classification task for rule-based thinking. Before and after this manipulation, participants played the mind game as a straightforward measure of dishonesty. Dishonest behaviour increased from before to after the intervention, but we found no credible evidence that this increase differed between induced mindsets. Exploratory analyses did not support any relation between trait creativity and dishonesty either. We conclude that the influence of creative thinking on unethical behaviour seems to be more ambiguous than assumed in earlier research or might be restricted to specific populations or contexts.
Repeatedly encountering a stimulus biases the observer’s affective response to, and evaluation of, that stimulus. Here we provide evidence for a causal link between mere exposure to fictitious news reports and subsequent voting behavior. In four pre-registered online experiments, participants browsed through newspaper webpages and were tacitly exposed to names of fictitious politicians. Exposure predicted voting behavior in a subsequent mock election, with a consistent preference for frequent over infrequent names, except when news items were decidedly negative. Follow-up analyses indicated that mere media presence fuels implicit personality theories regarding a candidate’s vigor in political contexts. News outlets should therefore be mindful to cover political candidates as evenly as possible.
Neuroanatomical variations across the visual field of human observers go along with corresponding variations of the perceived coarseness of visual stimuli. Here we show that horizontal gratings are perceived as having lower spatial frequency than vertical gratings when occurring along the horizontal meridian of the visual field, whereas gratings occurring along the vertical meridian show the exact opposite effect. This finding indicates a new peculiarity of processes operating along the cardinal axes of the visual field.
Changes in body perception often arise when observers are confronted with related yet discrepant multisensory signals. Some of these effects are interpreted as outcomes of sensory integration of various signals, whereas related biases are ascribed to learning-dependent recalibration of the coding of individual signals. The present study explored whether the same sensorimotor experience entails changes in body perception that are indicative of multisensory integration and those that indicate recalibration. Participants enclosed visual objects with a pair of visual cursors controlled by finger movements. They then either judged their perceived finger posture (indicating multisensory integration) or produced a certain finger posture (indicating recalibration). An experimental variation of the size of the visual object resulted in systematic and opposite biases of the perceived and produced finger distances. This pattern of results is consistent with the assumption that multisensory integration and recalibration had a common origin in the task we used.
The sociomotor framework outlines a possible role of social action effects in human action control, suggesting that anticipated partner reactions are a major cue to represent, select, and initiate one’s own body movements. Here, we review studies that elucidate the actual content of social action representations and that explore factors that can distinguish action control processes involving social and inanimate action effects. Specifically, we address two hypotheses on how the social context can influence effect-based action control: first, by providing unique social features such as body-related, anatomical codes, and second, by orienting attention towards any relevant feature dimensions of the action effects. The reviewed empirical work presents a surprisingly mixed picture: while there is indirect evidence for both accounts, previous studies that directly addressed the anatomical account showed no signs of the involvement of genuinely social features in sociomotor action control. Furthermore, several studies show evidence against the differentiation of social and non-social action effect processing, portraying sociomotor action representations as remarkably non-social. A focus on enhancing the social experience in future studies should, therefore, complement the current database to establish whether such settings give rise to the hypothesized influence of social context.
The present study examined the perceptual consequences of learning arbitrary mappings between visual stimuli and hand movements. Participants moved a small cursor with their unseen hand twice to a large visual target object and then judged either the relative distance of the hand movements (Exp. 1) or the relative number of dots that appeared in the two consecutive target objects (Exp. 2) using a two-alternative forced-choice method. During a learning phase, the numbers of dots that appeared in the target object were correlated with the hand movement distance. In Exp. 1, we observed that after participants were trained to expect many dots with larger hand movements, they judged movements made to targets with many dots as being longer than the same movements made to targets with few dots. In Exp. 2, another group of participants who received the same training judged the same number of dots as smaller when larger rather than smaller hand movements were executed. When many dots were paired with smaller hand movements during the learning phase of both experiments, no significant changes in the perception of movements and of visual stimuli were observed. These results suggest that changes in the perception of body states and of external objects can arise when certain body characteristics co-occur with certain characteristics of the environment. They also indicate that the (dis)integration of multimodal perceptual signals depends not only on the physical or statistical relation between these signals, but also on which signal is currently attended.
Visual perception of surfaces is of utmost importance in everyday life. It is therefore unsurprising that different surface structures evoke different visual impressions in the viewer even if the underlying material is the same. This issue is especially pressing for manufacturing processes in which more than one stakeholder is involved but where the final product needs to meet certain criteria. A common practice to address such slight but perceivable differences in the visual appearance of structured surfaces is to have trained evaluators assess the samples and assign a pass or fail. However, this process is both time-consuming and cost-intensive. We therefore conducted two studies to analyze the relationship between physical surface structure parameters and participants' visual assessment of the samples. The first experiment aimed at uncovering a relationship between physical roughness parameters and visual lightness perception, while the second experiment tested participants' discrimination sensitivity across the range of stimuli. Perceived lightness and the measured surface roughness were nonlinearly related to the surface structure. Additionally, we found a linear relationship between the engraving parameter and physical brightness. Surface structure was an ideal predictor of perceived lightness, and participants discriminated equally well across the entire range of surface structures.
Objects that a human agent controls by efferent activities (such as real or virtual tools) can be perceived by the agent as belonging to his or her body. This suggests that what an agent counts as “body” is plastic, depending on what he or she controls. Yet there are possible limitations to such momentary plasticity. One of these limitations is that sensations stemming from the body (e.g., proprioception) and sensations stemming from objects outside the body (e.g., vision) are not integrated if they do not sufficiently “match”. What “matches” and what does not is conceivably determined by long-term experience with the perceptual changes that body movements typically produce. Children have accumulated less sensorimotor experience than adults have. Consequently, as suggested by rubber hand illusion studies, they show higher flexibility in integrating body-internal and body-external signals, independent of their “match”. However, children’s motor performance in tool use is more affected by mismatching body-internal and body-external action effects than that of adults, possibly because of less developed means to overcome such mismatches. We review research on perception-action interactions, multisensory integration, and developmental psychology to build bridges between these research fields. By doing so, we account for the flexibility of the sense of body ownership for actively controlled events and its development through ontogeny. This gives us the opportunity to validate the suggested mechanisms for generating ownership by investigating their effects at still developing, incomplete stages in children. We suggest testable predictions for future studies investigating both body ownership and motor skills throughout the lifespan.
Pointing is a ubiquitous means of communication. Nevertheless, observers systematically misinterpret the location indicated by pointers. We examined whether these misunderstandings result from the typically different viewpoints of pointers and observers. Participants either pointed themselves or interpreted points while assuming the pointer’s or a typical observer’s perspective in a virtual reality environment. The perspective had a strong effect on the relationship between pointing gestures and referents, whereas the task had only a minor influence. This suggests that misunderstandings between pointers and observers primarily result from their typically different viewpoints.
It has been proposed that statistical integration of multisensory cues may be a suitable framework to explain temporal binding, that is, the finding that causally related events such as an action and its effect are perceived to be shifted towards each other in time. A multisensory approach to temporal binding construes actions and effects as individual sensory signals, which are each perceived with a specific temporal precision. When they are integrated into one multimodal event, like an action-effect chain, the extent to which they affect this event's perception depends on their relative reliability. We test whether this assumption holds true in a temporal binding task by manipulating certainty of actions and effects. Two experiments suggest that a relatively uncertain sensory signal in such action-effect sequences is shifted more towards its counterpart than a relatively certain one. This was especially pronounced for temporal binding of the action towards its effect but could also be shown for effect binding. Other conceptual approaches to temporal binding cannot easily explain these results, and the study therefore adds to the growing body of evidence endorsing a multisensory approach to temporal binding.
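The reliability weighting invoked above is commonly formalized with the standard maximum-likelihood cue-combination equations; the abstract does not spell out the authors' exact model, so the following LaTeX sketch is only an illustrative rendering in which the subscripts A and E denote the action and effect signals.

```latex
% Reliability-weighted integration of an action signal (A) and an effect signal (E):
% each perceived time is weighted by its relative reliability (inverse variance),
% so the less reliable signal is shifted more towards its counterpart.
\hat{t} = w_A \hat{t}_A + w_E \hat{t}_E, \qquad
w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_E^2}, \qquad
w_E = \frac{1/\sigma_E^2}{1/\sigma_A^2 + 1/\sigma_E^2}.
```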
Perceptual changes that an agent produces by efferent activity can become part of the agent’s minimal self. Yet, in human agents, efferent activities produce perceptual changes in various sensory modalities and in various temporal and spatial proximities. Some of these changes occur at the “biological” body, and they are to some extent conveyed by “private” sensory signals, whereas other changes occur in the environment of that biological body and are conveyed by “public” sensory signals. We discuss commonalities and differences of these signals for generating selfhood. We argue that despite considerable functional overlap of these sensory signals in generating self-experience, there are reasons to tell them apart in theorizing and empirical research about the development of the self.
Previous research has revealed changes in the perception of objects due to changes of object-oriented actions. In the present study, we varied the arm and finger postures in the context of a virtual reaching and grasping task and tested whether this manipulation can simultaneously affect the perceived size and distance of external objects. Participants manually controlled visual cursors, aiming at reaching and enclosing a distant target object, and judged the size and distance of this object. We observed that a visual-proprioceptive discrepancy introduced during the reaching part of the action simultaneously affected the judgments of target distance and of target size (Experiment 1). A related variation applied to the grasping part of the action affected the judgments of size, but not of distance, of the target (Experiment 2). These results indicate that perceptual effects observed in the context of actions can arise directly through sensory integration of redundant multimodal signals and indirectly through perceptual constancy mechanisms.
The present study explored the origin of perceptual changes repeatedly observed in the context of actions. In Experiment 1, participants tried to hit a circular target with a stylus movement under restricted feedback conditions. We measured the perception of target size during action planning and observed larger estimates for larger movement distances. In Experiment 2, we then tested the hypothesis that this action-specific influence on perception is due to changes in the allocation of spatial attention. For this purpose, we replaced the hitting task with conditions of focused and distributed attention and measured the perception of the former target stimulus. The results revealed changes in the perceived stimulus size very similar to those observed in Experiment 1. These results indicate that the effects of action on perception are rooted in changes of spatial attention.
Spatial action–effect binding denotes the mutual attraction between the perceived position of an effector (e.g., one’s own hand) and a distal object that is controlled by this effector. Such spatial binding can be construed as an implicit measure of object ownership, that is, of the degree to which a controlled object is perceived as belonging to one’s own body. The current study investigated how different transformations of hand movements (body-internal action component) into movements of a visual object (body-external action component) affect spatial action–effect binding, and thus implicit object ownership. In brief, participants had to bring a cursor on the computer screen into a predefined target position by moving their occluded hand on a tablet and had to estimate their final hand position. In Experiment 1, we found a significantly lower drift of the proprioceptive position of the hand towards the visual object when hand movements were transformed into laterally inverted cursor movements rather than cursor movements in the same direction. Experiment 2 showed that this reduction reflected an elimination of spatial action–effect binding in the inverted condition. The results are discussed with respect to the prerequisites for an experience of ownership over artificial, noncorporeal objects. Our results show that predictability of an object movement alone is not a sufficient condition for ownership because, depending on the type of transformation, integration of the effector and a distal object can be fully abolished even under conditions of full controllability.
Design choices: Empirical recommendations for designing two-dimensional finger-tracking experiments (2020)
The continuous tracking of mouse or finger movements has become an increasingly popular research method for investigating cognitive and motivational processes such as decision-making, action-planning, and executive functions. In the present paper, we evaluate and discuss how apparently trivial design choices of researchers may impact participants’ behavior and, consequently, a study’s results. We first provide a thorough comparison of mouse- and finger-tracking setups on the basis of a Simon task. We then vary a comprehensive set of design factors, including spatial layout, movement extent, time of stimulus onset, size of the target areas, and hit detection in a finger-tracking variant of this task. We explore the impact of these variations on a broad spectrum of movement parameters that are typically used to describe movement trajectories. Based on our findings, we suggest several recommendations for best practice that avoid some of the pitfalls of the methodology. Keeping these recommendations in mind will allow for informed decisions when planning and conducting future tracking experiments.
Action binding refers to the observation that the perceived time of an action (e.g., a keypress) is shifted towards the distal sensory feedback (usually a sound) triggered by that action. Surprisingly, the role of somatosensory feedback for this phenomenon has been largely ignored. We fill this gap by showing that the somatosensory feedback, indexed by keypress peak force, is functional in judging keypress time. Specifically, the strength of somatosensory feedback is positively correlated with reported keypress time when the keypress is not associated with auditory feedback and negatively correlated when the keypress triggers auditory feedback. This result is consistent with the view that the reported keypress time is shaped by sensory information from different modalities. Moreover, individual differences in action binding can be explained by a weighting of sensory information between somatosensory and auditory feedback. At the group level, increasing the strength of somatosensory feedback can decrease action binding to a level that is no longer statistically detectable. Therefore, a multisensory information integration account (between somatosensory and auditory inputs) explains action binding at both the group and the individual level.
Action planning can be construed as the temporary binding of features of perceptual action effects. While previous research demonstrated binding for task-relevant, body-related effect features, the role of task-irrelevant or environment-related effect features in action planning is less clear. Here, we studied whether task-relevance or body-relatedness determines feature binding in action planning. Participants planned an action A but, before executing it, initiated an intermediate action B. Each action relied on a body-related effect feature (index vs. middle finger movement) and an environment-related effect feature (cursor movement towards vs. away from a reference object). In Experiments 1 and 2, both effects were task-relevant. Performance in action B suffered from partial feature overlap with action A compared to full feature repetition or alternation, which is in line with binding of both features while planning action A. Importantly, this cost disappeared when all features were available but only body-related features were task-relevant (Experiment 3). When only the environment-related effect of action A was known in advance, action B benefitted when it aimed at the same (vs. a different) environment-related effect (Experiment 4). Consequently, the present results support the idea that task relevance determines whether binding of body-related and environment-related effect features takes place, whereas pre-activation of environment-related features without binding them primes feature-overlapping actions.
Movements of a tool typically diverge from the movements of the hand manipulating that tool, such as when operating a pivotal lever where tool and hand move in opposite directions. Previous studies suggest that humans are often unaware of the position or movements of their effective body part (mostly the hand) in such situations. It has been suggested that this might be due to a "haptic neglect" of bodily sensations that decreases the interference between representations of body and tool movements. In principle, however, this interference could also be decreased by neglecting sensations regarding the tool and focusing instead on body movements. Because in most tool-use situations the tool-related action effects are task-relevant, suppressing body-related rather than tool-related sensations is usually more beneficial for successful goal achievement; we therefore manipulated this task relevance in a controlled experiment. The results showed that visual, tool-related effect representations can be suppressed just like proprioceptive, body-related ones in situations where effect representations interfere, provided that the task relevance of body-related effects is increased relative to tool-related ones.