Refine
Has Fulltext
- yes (35)
Is part of the Bibliography
- yes (35)
Year of publication
- 2019 (35)
Document Type
- Journal article (26)
- Doctoral Thesis (8)
- Preprint (1)
Language
- English (35)
Keywords
- virtual reality (4)
- social perception (3)
- attention (2)
- cognition (2)
- human behaviour (2)
- social attention (2)
- Adolescent (1)
- Adult (1)
- action research (1)
- Alcohol dependence (1)
Institute
- Institut für Psychologie (35)
Other participating institutions
EU-Project number / Contract (GA) number
- 230331-PROPEREMO (1)
- 336305 (1)
- 677819 (1)
A stimulus (conditioned stimulus, CS) associated with an appetitive unconditioned stimulus (US) acquires positive properties and elicits appetitive conditioned responses (CR). Such associative learning has been examined extensively in animals with food as the US, and results are used to explain psychopathologies (e.g., substance-related disorders or obesity). Human studies on appetitive conditioning exist, too, but we still know little about generalization processes. Understanding these processes may explain why stimuli not associated with a drug, for instance, can elicit craving. Forty-seven hungry participants underwent an appetitive conditioning protocol during which one of two circles with different diameters (CS+) became associated with an appetitive US (chocolate or salty pretzel, according to participants’ preference) but never the other circle (CS−). During generalization, US were delivered twice and the two CS were presented again plus four circles (generalization stimuli, GS) with gradually increasing diameters from CS− to CS+. We found successful appetitive conditioning as reflected in appetitive subjective ratings (positive valence, higher contingency) and physiological responses (startle attenuation and larger skin conductance responses) to CS+ versus CS−, and, importantly, both measures confirmed generalization as indicated by generalization gradients. Small changes in CS-US contingency during generalization may have weakened generalization processes on the physiological level. Considering that appetitive conditioned responses can be generalized to non-US-associated stimuli, a next important step would be to investigate risk factors that mediate overgeneralization.
Models of eye-movement control distinguish between different control levels, ranging from automatic (bottom-up, stimulus-driven selection) and automatized (based on well-learned routines) to voluntary (top-down, goal-driven selection, e.g., based on instructions). However, one type of voluntary control has so far been examined only in the manual domain, not the oculomotor domain: free-choice selection among arbitrary targets, that is, targets that are of equal interest from both a bottom-up and a top-down processing perspective. Here, we ask which target features (identity- or location-related) are used to determine such oculomotor free-choice behavior. In two experiments, participants executed a saccade to one of four peripheral targets in three different choice conditions: unconstrained free choice, constrained free choice based on target identity (color), and constrained free choice based on target location. The analysis of choice frequencies revealed that unconstrained free-choice selection closely resembled constrained choice based on target location. The results suggest that free-choice oculomotor control is mainly guided by spatial (location-based) target characteristics. We explain these results by assuming that participants tend to avoid the less parsimonious recoding of target-identity representations into spatial codes, the latter being a necessary prerequisite for configuring oculomotor commands.
A hallmark of habitual actions is that, once they are established, they become insensitive to changes in the values of action outcomes. In this article, we review empirical research that examined effects of posttraining changes in outcome values in outcome-selective Pavlovian-to-instrumental transfer (PIT) tasks. This review suggests that cue-instigated action tendencies in these tasks are not affected by weak and/or incomplete revaluation procedures (e.g., selective satiety) and substantially disrupted by a strong and complete devaluation of reinforcers. In a second part, we discuss two alternative models of a motivational control of habitual action: a default-interventionist framework and expected value of control theory. It is argued that the default-interventionist framework cannot solve the problem of an infinite regress (i.e., what controls the controller?). In contrast, expected value of control can explain control of habitual actions with local computations and feedback loops without (implicit) references to control homunculi. It is argued that insensitivity to changes in action outcomes is not an intrinsic design feature of habits but, rather, a function of the cognitive system that controls habitual action tendencies.
It is one of the primary goals of medical care to secure good quality of life (QoL) while prolonging survival. This is a major challenge in severe medical conditions with a poor prognosis, such as amyotrophic lateral sclerosis (ALS). Further, the definition of QoL, and the question whether survival in this severe condition is compatible with a good QoL, is a matter of subjective and culture-specific debate. Some people without neurodegenerative conditions believe that physical decline is incompatible with a satisfactory QoL. Current data provide extensive evidence that psychosocial adaptation in ALS is possible, as indicated by a satisfactory QoL. Thus, there is no fatalistic link between declining physical health and loss of QoL. Intrinsic and extrinsic factors have been shown to successfully facilitate and secure QoL in ALS; these are reviewed in the following article along the four ethical principles of (1) Beneficence, (2) Non-maleficence, (3) Autonomy and (4) Justice, which are regarded as key elements of patient-centered medical care according to Beauchamp and Childress. This is JPND-funded work summarizing findings of the project NEEDSinALS (www.NEEDSinALS.com), which highlights subjective perspectives and preferences in medical decision making in ALS.
Research on facial emotion expression has mostly focused on emotion recognition, assuming that a small number of discrete emotions is elicited and expressed via prototypical facial muscle configurations as captured in still photographs. These are expected to be recognized by observers, presumably via template matching. In contrast, appraisal theories of emotion propose a more dynamic approach, suggesting that specific elements of facial expressions are directly produced by the result of certain appraisals and predicting the facial patterns to be expected for certain appraisal configurations. This approach has recently been extended to emotion perception, claiming that observers first infer individual appraisals and only then make categorical emotion judgments based on the estimated appraisal patterns, using inference rules. Here, we report two related studies to empirically investigate the facial action unit configurations that are used by actors to convey specific emotions in short affect bursts and to examine to what extent observers can infer a person's emotions from the predicted facial expression configurations. The results show that (1) professional actors use many of the predicted facial action unit patterns to enact systematically specified appraisal outcomes in a realistic scenario setting, and (2) naïve observers infer the respective emotions based on highly similar facial movement configurations with a degree of accuracy comparable to earlier research findings. Based on estimates of underlying appraisal criteria for the different emotions we conclude that the patterns of facial action units identified in this research correspond largely to prior predictions and encourage further research on appraisal-driven expression and inference.
Research with adults in laboratory settings has shown that distributed rereading is a beneficial learning strategy, but its effects depend on the time of test. When learning outcomes are measured immediately after rereading, distributed rereading yields no benefits or even detrimental effects on learning, but beneficial effects emerge two days later. In a preregistered experiment, the effects of distributed rereading were investigated in a classroom setting with school students. Seventh-graders (N = 191) reread a text either immediately or after 1 week. Learning outcomes were measured after 4 min or 1 week. Participants in the distributed rereading condition reread the text more slowly, predicted their learning success to be lower, and reported a lower on-task focus. At the shorter retention interval, massed rereading outperformed distributed rereading in terms of learning outcomes. In contrast to students in the massed condition, students in the distributed condition showed no forgetting from the short to the long retention interval. As a result, they performed as well as the students in the massed condition at the longer retention interval. Our results indicate that distributed rereading makes learning more demanding and difficult and leads to higher effort during rereading. Its effects on learning depend on the time of test, but no beneficial effects were found, not even at the delayed test.
Continuous norming methods have seldom been subjected to scientific review. In this simulation study, we compared parametric with semi-parametric continuous norming methods in psychometric tests by constructing a fictitious population model within which a latent ability increases with age across seven age groups. We drew samples of different sizes (n = 50, 75, 100, 150, 250, 500 and 1,000 per age group) and simulated the results of an easy, medium, and difficult test scale based on Item Response Theory (IRT). We subjected the resulting data to different continuous norming methods and compared the data fit under the different test conditions with a representative cross-validation dataset of n = 10,000 per age group. The most significant differences were found in suboptimal (i.e., too easy or too difficult) test scales and in ability levels that were far from the population mean. We discuss the results with regard to the selection of the appropriate modeling techniques in psychometric test construction, the required sample sizes, and the requirement to report appropriate quantitative and qualitative test quality criteria for continuous norming methods in test manuals.
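The simulation design described in the norming study can be sketched roughly as follows. All concrete values (group ability means, item counts, 2PL item parameters, the use of only an easy and a difficult scale) are illustrative assumptions, not the parameters of the original study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: 7 age groups, latent ability increasing with age.
n_per_group = 100
group_means = np.linspace(-1.5, 1.5, 7)          # ability rises across age groups

# 20 items per scale; a difficulty shift makes the whole scale easier or harder.
n_items = 20
discrimination = rng.uniform(0.8, 2.0, n_items)   # 2PL discrimination parameters

def simulate_scale(difficulty_shift):
    """Simulate raw scores for one test scale under a 2PL IRT model."""
    difficulty = rng.normal(difficulty_shift, 1.0, n_items)
    scores = []
    for mu in group_means:
        theta = rng.normal(mu, 1.0, n_per_group)  # abilities in this age group
        # P(correct) = logistic(a * (theta - b)), per person x item
        logits = discrimination * (theta[:, None] - difficulty[None, :])
        p = 1.0 / (1.0 + np.exp(-logits))
        responses = rng.random((n_per_group, n_items)) < p
        scores.append(responses.sum(axis=1))      # raw score = number correct
    return np.array(scores)                       # shape: (groups, persons)

easy = simulate_scale(-1.0)    # easy scale: low item difficulties
hard = simulate_scale(+1.0)    # difficult scale: high item difficulties
```

Raw-score distributions generated this way can then be fed to competing continuous norming procedures and compared against a large cross-validation sample drawn from the same model, which is the logic of the study's design.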
Context conditioning is characterized by unpredictable threat, and its generalization may constitute a risk factor for panic disorder (PD). Therefore, we examined differences between individuals with panic attacks (PA; N = 21) and healthy controls (HC; N = 22) in contextual learning and context generalization using a virtual reality (VR) paradigm. Successful context conditioning was indicated in both groups by higher arousal, anxiety and contingency ratings, and by increased startle responses and skin conductance levels (SCLs), in an anxiety context (CTX+) where an aversive unconditioned stimulus (US) occurred unpredictably vs. a safety context (CTX−). PA compared to HC exhibited increased differential responding to CTX+ vs. CTX− and overgeneralization of contextual anxiety on an evaluative verbal level, but not on a physiological level. We conclude that increased contextual conditioning and contextual generalization may constitute risk factors for PD or agoraphobia, contributing to the characteristic avoidance of anxiety contexts and withdrawal to safety contexts, and that evaluative cognitive processes may play a major role.
The present thesis addresses the cognitive processing of voice information. Based on general theoretical concepts regarding mental processes, it differentiates between modular, abstract information-processing approaches to cognition and interactive, embodied ideas of mental processing. These general concepts are then transferred to the processing of voice-related information in the context of parallel face-related processing streams. One central issue here is whether and to what extent cognitive voice processing can occur independently, that is, encapsulated from the simultaneous processing of visual person-related information (and vice versa). In Study 1 (Huestegge & Raettig, in press), participants were presented with audio-visual stimuli displaying faces uttering digits.
Audiovisual gender congruency was manipulated: there were male and female faces, each uttering digits with either a male or a female voice (all stimuli were AV-synchronized). Participants were asked to categorize the gender of either the face or the voice by pressing one of two keys in each trial. A central result was that audio-visual gender congruency affected performance: incongruent stimuli were categorized more slowly and less accurately, suggesting a strong cross-modal interaction of the underlying visual and auditory processing routes. Additionally, the effect of incongruent visual information on auditory classification was stronger than the effect of incongruent auditory information on visual categorization, suggesting visual dominance over auditory processing in the context of gender classification. A gender congruency effect was also present under high cognitive load. Study 2 (Huestegge, Raettig, & Huestegge, in press) utilized the same (gender-congruent and -incongruent) stimuli but different tasks, namely categorizing the spoken digits (as odd/even or smaller/larger than 5). This should effectively direct attention away from gender information, which was no longer task-relevant. Nevertheless, congruency effects were still observed in this study. This suggests a relatively automatic processing of cross-modal gender information, which
eventually affects basic speech-based information processing. Study 3 (Huestegge, subm.) focused on the ability of participants to match unfamiliar voices to (either static or dynamic) faces. One result was that participants were indeed able to match voices to faces. Moreover, there was no evidence for any performance increase when dynamic (vs. mere static) faces had to be matched to concurrent voices. The results support the idea that common person-related source information affects both vocal and facial features, and implicit corresponding knowledge appears to be used by participants to successfully complete face-voice matching. Taken together, the three studies (Huestegge, subm.; Huestegge & Raettig, in press; Huestegge et al., in press) provided information to further develop current theories of voice processing (in the context of face processing). On a general level, the results of all three studies are not in line with an abstract, modular view of cognition, but rather lend further support to interactive, embodied accounts of mental processing.
Virtual reality plays an increasingly important role in research on and therapy of pathological fear. However, the mechanisms by which virtual environments elicit and modify fear responses are not yet fully understood. Presence, a psychological construct referring to the ‘sense of being there’ in a virtual environment, is widely assumed to crucially influence the strength of the elicited fear responses; however, causality is still under debate. The present study is the first to experimentally manipulate both variables to unravel the causal link between presence and fear responses. Height-fearful participants (N = 49) were immersed in a virtual height situation and a neutral control situation (fear manipulation) with either high or low sensory realism (presence manipulation). Ratings of presence as well as verbal and physiological (skin conductance, heart rate) fear responses were recorded. Results revealed an effect of the fear manipulation on presence, i.e., higher presence ratings in the height situation than in the neutral control situation, but no effect of the presence manipulation on fear responses. However, presence ratings during the first exposure to the high-quality neutral environment were predictive of later fear responses in the height situation. Our findings support the hypothesis that experiencing emotional responses in a virtual environment leads to a stronger feeling of being there, i.e., increased presence. In contrast, the effects of presence on fear seem to be more complex: on the one hand, increased presence due to the quality of the virtual environment did not influence fear; on the other hand, presence variability that likely stemmed from differences in user characteristics did predict later fear responses. These findings underscore the importance of user characteristics in the emergence of presence.
In contrast to classical theories of cognitive control, recent evidence suggests that cognitive control and unconscious automatic processing influence each other. First, masked semantic priming, an index of unconscious automatic processing, depends on attention to semantics induced by a previously executed task. Second, cognitive control operations (e.g., implementation of task sets indicating how to process a particular stimulus) can be activated by masked task cues presented outside awareness. In this study, we combined both lines of research. We investigated in three experiments whether induction tasks and the presentation of visible or masked task cues, which signal subsequent semantic or perceptual tasks but do not require induction task execution, comparably modulate masked semantic priming. In line with previous research, priming was consistently larger following execution of a semantic rather than a perceptual induction task. However, in Experiment 1 (masked letter cues) we observed a reversed priming pattern following task cues (larger priming following cues signaling perceptual tasks) compared to induction tasks. Experiment 2 (visible letter cues) and Experiment 3 (visible color cues) showed that this reversed priming pattern depended only on a priori associations between task cues and task elements (task-set dominance), but neither on awareness nor on the verbal or non-verbal format of the cues. These results indicate that task cues have the power to modulate subsequent masked semantic priming through attentional mechanisms. Task-set dominance conceivably affects the time course of task-set activation and inhibition in response to task cues and thus the direction of their modulatory effects on priming.
Both low-level physical saliency and social information, such as human heads or bodies, are known to drive gaze behavior in free-viewing tasks. Researchers have previously used a great variety of face stimuli, ranging from photographs of real humans to schematic faces, frequently without systematically differentiating between the two. In the current study, we used a Generalized Linear Mixed Model (GLMM) approach to investigate to what extent schematic artificial faces can predict gaze when they are presented alone or in competition with real human faces. Relative differences in predictive power became apparent, although the GLMMs suggested substantial effects of both real and artificial faces in all conditions. Artificial faces were accordingly less predictive than real human faces but still contributed significantly to gaze allocation. These results help to further our understanding of how social information guides gaze in complex naturalistic scenes.
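A GLMM of the kind used in this study can be sketched as follows. The data are simulated, all effect sizes and variable names (saliency, real_face, artificial_face) are invented for illustration, and statsmodels' variational-Bayes binomial mixed GLM stands in for whatever software the authors actually used:

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(0)

# Fabricated free-viewing data: per subject and scene region, was it fixated?
n_subj, n_regions = 20, 60
rows = []
for subj in range(n_subj):
    u = rng.normal(0, 0.5)                        # random intercept per subject
    for _ in range(n_regions):
        saliency = rng.random()                   # low-level physical saliency
        real_face = int(rng.integers(0, 2))
        artificial_face = 0 if real_face else int(rng.integers(0, 2))
        # Assumed effects: real faces attract gaze more than artificial ones.
        logit = -1.0 + 1.0 * saliency + 2.0 * real_face + 1.0 * artificial_face + u
        fixated = int(rng.random() < 1 / (1 + np.exp(-logit)))
        rows.append((subj, saliency, real_face, artificial_face, fixated))

df = pd.DataFrame(
    rows, columns=["subj", "saliency", "real_face", "artificial_face", "fixated"]
)

# Binomial GLMM: fixed effects for saliency and face type,
# random intercepts for subjects.
model = BinomialBayesMixedGLM.from_formula(
    "fixated ~ saliency + real_face + artificial_face",
    {"subj": "0 + C(subj)"},
    df,
)
result = model.fit_vb()            # variational Bayes fit
print(result.summary())
```

The fixed-effect posterior means (`result.fe_mean`) then quantify how much each face type contributes to gaze allocation over and above saliency, which mirrors the comparison reported in the abstract.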
Depending on the point of view, conceptions of greed range from a desirable and inevitable feature of a well-regulated, well-balanced economy to the root of all evil - radix omnium malorum avaritia (1 Tim 6:10). Regarding the latter, it has been proposed that greedy individuals strive to obtain desired goods at all costs. Here, we show that trait greed predicts selfish economic decisions that come at the expense of others in a resource dilemma. This effect was amplified when individuals strived to obtain real money, as compared to points, and when their revenue came at the expense of another person, as compared to a computer. On the neural level, we show that individuals high in trait greed, compared to those low in trait greed, showed a characteristic signature in the EEG, a reduced P3 effect to positive compared to negative feedback, indicating that they may lack the sensitivity to adjust behavior according to positive and negative stimuli from the environment. Brain-behavior relations further confirmed this lack of sensitivity in behavior adjustment as a potential underlying neuro-cognitive mechanism that explains selfish and reckless behavior that may come at the expense of others.
Major depressive disorder and the anxiety disorders are highly prevalent, disabling and moderately heritable. Depression and anxiety are also highly comorbid and have a strong genetic correlation (rg ≈ 1). Cognitive behavioural therapy is a leading evidence-based treatment but has variable outcomes. Currently, there are no strong predictors of outcome. Therapygenetics research aims to identify genetic predictors of prognosis following therapy. We performed genome-wide association meta-analyses of symptoms following cognitive behavioural therapy in adults with anxiety disorders (n = 972), adults with major depressive disorder (n = 832) and children with anxiety disorders (n = 920; meta-analysis n = 2724). We estimated the variance in therapy outcomes that could be explained by common genetic variants (h²SNP), and polygenic scoring was used to examine genetic associations between therapy outcomes and psychopathology, personality and learning. No single nucleotide polymorphisms were strongly associated with treatment outcomes. No significant estimate of h²SNP could be obtained, suggesting the heritability of therapy outcome is smaller than our analysis was powered to detect. Polygenic scoring failed to detect genetic overlap between therapy outcome and psychopathology, personality or learning. This study is the largest therapygenetics study to date. Results are consistent with previous, similarly powered genome-wide association studies of complex traits.
Chronic alcohol use leads to specific neurobiological alterations in the dopaminergic brain reward system, which probably result in a reward deficiency syndrome in alcohol dependence. The purpose of our study was to examine the effects of such hypothesized neurobiological alterations on the behavioral level, and more precisely on implicit and explicit reward learning. Alcohol users were classified as dependent drinkers (using the DSM-IV criteria), binge drinkers (using criteria of the USA National Institute on Alcohol Abuse and Alcoholism) or low-risk drinkers (following recommendations of the scientific board of trustees of the German Health Ministry). The final sample (n = 94) consisted of 36 low-risk alcohol users, 37 binge drinkers and 21 abstinent alcohol-dependent patients. Participants were administered a probabilistic implicit reward learning task and an explicit reward- and punishment-based trial-and-error learning task. Alcohol-dependent patients showed lower performance in implicit and explicit reward learning than low-risk drinkers. Binge drinkers learned less than low-risk drinkers in the implicit learning task. The results support the assumption that binge drinking and alcohol dependence are related to a chronic reward deficit. Binge drinking accompanied by implicit reward learning deficits could increase the risk of developing alcohol dependence.