These pre-registered studies shed light on the cues that individuals use to identify rich people. In two studies (N = 598), we first developed a factor-analytical model that describes the content and the mental structure of 24 wealth cues. A third within-subject study (N = 89) then assessed the perception of rich subgroups based on this model of wealth cues. Participants evaluated the extent to which the wealth cues applied to two distinct subgroups of rich people. The results show that German and US-American participants think that rich people can be identified based on the same set of cues, which can be grouped along the following dimensions: luxury consumption, expensive hobbies, spontaneous spending, greedy behavior, charismatic behavior, self-presentation, and specific possessions. However, Germans and US-Americans relied on these cues to different degrees to diagnose wealth in others. Moreover, we found evidence for subgroup-specific wealth cue profiles insofar as target individuals who acquired their wealth via internal (e.g., hard work) compared to external means (e.g., lottery winnings) were evaluated differently on these wealth cues, presumably because of their perceived differences in valence and competence. Together, this research provides new insights into the cognitive representation of the latent construct of wealth. Practical implications for research on the perception of affluence, and implications for political decision makers, are discussed in the last section.
When telling a lie, humans might engage in stronger monitoring of their behavior than when telling the truth. Initial evidence has indeed pointed towards a stronger recruitment of capacity-limited monitoring processes in dishonest than honest responding, conceivably resulting from the necessity to overcome automatic tendencies to respond honestly. Previous results suggested monitoring to be confined to response execution, however, whereas the current study goes beyond these findings by specifically probing for post-execution monitoring. Participants responded (dis)honestly to simple yes/no questions in a first task and switched to an unrelated second task after a response–stimulus interval of 0 ms or 1000 ms. Dishonest responses prolonged response times not only in Task 1, but also in Task 2 with a short response–stimulus interval. These findings support the assumption that increased monitoring for dishonest responses extends beyond mere response execution, a mechanism that is possibly tuned to assess the successful completion of a dishonest act.
In three experiments, we examined the cognitive underpinnings of self-serving dishonesty by manipulating cognitive load under different incentive structures. Participants could increase a financial bonus by misreporting outcomes of private die rolls without any risk of detection. At the same time, they had to remember letter strings of varying length. If honesty is the automatic response tendency and dishonesty is cognitively demanding, lying behavior should be less evident under high cognitive load. This hypothesis was supported by the outcome of two out of three experiments. We further manipulated whether all trials or only one random trial determined payoff to modulate reward adaptation over time (Experiment 2) and whether payoff was framed as a financial gain or loss (Experiment 3). The payoff scheme of one random or all trials did not affect lying behavior and, discordant with earlier research, facing losses instead of gains did not increase lying behavior. Finally, cognitive load and incentive frame interacted significantly, but contrary to our assumption, gains increased lying under low cognitive load. While the impact of cognitive load on dishonesty appears to be comparably robust, motivational influences seem to be more elusive than commonly assumed in current theorizing.
In task-switching studies, performance is typically worse in task-switch trials than in task-repetition trials. These switch costs are often asymmetrical, a phenomenon that has been explained by referring to a dominance of one task over the other. Previous studies also indicated that response modalities associated with two tasks may be considered as integral components for defining a task set. However, a systematic assessment of the role of response modalities in task switching is still lacking: Are some response modalities harder to switch to than others? The present study systematically examined switch costs when combining tasks that differ only with respect to their associated effector systems. In Experiment 1, 16 participants switched (in unpredictable sequence) between oculomotor and vocal tasks. In Experiment 2, 72 participants switched (in pairwise combinations) between oculomotor, vocal, and manual tasks. We observed systematic performance costs when switching between response modalities under otherwise constant task features and could thereby replicate previous observations of response modality switch costs. However, we did not observe any substantial switch-cost asymmetries. As previous studies using temporally overlapping dual-task paradigms found substantial prioritization effects (in terms of asymmetric costs) especially for oculomotor tasks, the present results suggest different underlying processes in sequential task switching than in simultaneous multitasking. While more research is needed to further substantiate a lack of response modality switch-cost asymmetries in a broader range of task switching situations, we suggest that task-set representations related to specific response modalities may exhibit rapid decay.
Evidence from multisensory body illusions suggests that body representations may be malleable, for instance, by embodying external objects. However, adjusting body representations to current task demands also implies that external objects become disembodied from the body representation if they are no longer required. In the current web-based study, we induced the embodiment of a two-dimensional (2D) virtual hand that could be controlled by active movements of a computer mouse or on a touchpad. Following initial embodiment, we probed for disembodiment by comparing two conditions: Participants either continued moving the virtual hand or they stopped moving and kept the hand still. Based on theoretical accounts that conceptualize body representations as a set of multisensory bindings, we expected gradual disembodiment of the virtual hand if the body representations are no longer updated through correlated visuomotor signals. In contrast to our prediction, the virtual hand was instantly disembodied as soon as participants stopped moving it. This result was replicated in two follow-up experiments. The observed instantaneous disembodiment might suggest that humans are sensitive to the rapid changes that characterize action and body in virtual environments, and hence adjust corresponding body representations particularly swiftly.
Previous research has shown that the simultaneous execution of two actions (instead of only one) is not necessarily more difficult but can actually be easier (less error-prone), in particular when executing one action requires the simultaneous inhibition of another action. Corresponding inhibitory demands are particularly challenging when the to-be-inhibited action is highly prepotent (i.e., characterized by a strong urge to be executed). Here, we study a range of important potential sources of such prepotency. Building on a previously established paradigm to elicit dual-action benefits, participants responded to stimuli with single actions (either manual button press or saccade) or dual actions (button press and saccade). Crucially, we compared blocks in which these response demands were randomly intermixed (mixed blocks) with pure blocks involving only one type of response demand. The results highlight the impact of global (action-inherent) sources of action prepotency, as reflected in more pronounced inhibitory failures in saccade vs. manual control, but also more local (transient) sources of influence, as reflected in a greater probability of inhibition failures following trials that required the to-be-inhibited type of action. In addition, sequential analyses revealed that inhibitory control (including its failure) is exerted at the level of response modality representations, not at the level of fully specified response representations. In sum, the study highlights important preconditions and mechanisms underlying the observation of dual-action benefits.
In recent years, a general consensus has emerged that actions change our time perception. Performing an action to elicit a specific event seems to lead to a systematic underestimation of the interval between action and effect, a phenomenon termed temporal (or previously intentional) binding. Temporal binding has been closely associated with sense of agency, our perceived control over our actions and our environment, and because of its robust behavioral effects has indeed been widely utilized as an implicit correlate of sense of agency. The most robust and clear temporal binding effects are typically found via Libet clock paradigms. In the present study, we investigate a crucial methodological confound in these paradigms that provides an alternative explanation for temporal binding effects: a redirection of attentional resources in two-event sequences (as in classical operant conditions) versus singular events (as in classical baseline conditions). Our results indicate that binding effects in Libet clock paradigms may be based to a large degree on such attentional processes, irrespective of intention or action-effect sequences. Thus, these findings challenge many of the previously drawn conclusions and interpretations with regard to actions and time perception.
Psychopathology, protective factors, and COVID-19 among adolescents: a structural equation model
(2023)
Since the outbreak of the COVID-19 pandemic in December 2019 and the associated restrictions, mental health in children and adolescents has been increasingly discussed in the media. Negative impacts of the pandemic, including a sharp increase in psychopathology and, consequently, reduced quality of life, appear to have particularly affected children and young people, who may be especially vulnerable to the adverse effects of isolation. Nevertheless, many children and adolescents have managed to cope well with the restrictions, without deterioration of their mental health. The present study therefore explored the links between COVID-19 infection (in oneself or a family member, as well as the death of a family member due to the virus), protective factors such as self-efficacy, resilience, self-esteem, and health-related quality of life, and measures of psychopathology such as depression scores, internalizing/externalizing problems, emotion dysregulation, and victimization. For this purpose, we examined data from 2129 adolescents (mean age = 12.31, SD = 0.67; 51% male; 6% born outside of Germany) using a structural equation model. We found medium to high loadings of the manifest variables with the latent variables (COVID-19, protective factors, and psychopathology). Protective factors showed a significant negative correlation with psychopathology. However, COVID-19 had a weak connection with psychopathology in our sample. External pandemic-related factors (e.g., restrictions) and their interaction with existing psychopathology or individual protective factors appear to have a greater influence on young people’s mental health than the impact of the virus per se. Sociopolitical efforts should be undertaken to foster prevention and promote individual resilience, especially in adolescence.
Previous EEG research has only investigated one-stage ultimatum games (UGs). We investigated the influence of a second bargaining stage in a UG on behavioral responses, electrocortical correlates, and their moderation by the traits altruism, anger, anxiety, and greed in 92 participants. We found that an additional stage led to more rejection in the 2-stage UG (2SUG) and that increasing offers in the second stage compared to the first stage led to more acceptance. The FRN during a trial was linked to expectancy evaluation concerning the fairness of the offers, while midfrontal theta was a marker for the cognitive control needed to overcome the respective default behavioral pattern. The FRN responses to unfair offers were more negative for either low or high altruism in the UG, while high trait anxiety led to more negative FRN responses in the first stage of 2SUG, indicating higher sensitivity to unfairness. Accordingly, the mean FRN response, representing the trait-like general electrocortical reactivity to unfairness, predicted rejection in the first stage of 2SUG. Additionally, we found that high trait anger led to more rejections of unfair offers in 2SUG in general, while trait altruism led to more rejection of unimproved unfair offers in the second stage of 2SUG. In contrast, trait anxiety led to more acceptance in the second stage of 2SUG, while trait greed even led to more acceptance if the offer was worse than in the stage before. These findings suggest that 2SUG creates a trait-activation situation compared to the UG.
Objective
Alzheimer’s disease (AD) is a growing challenge worldwide, which is why the search for early predictors should begin as soon as possible. Longitudinal studies that track the course of neuropsychological and other variables screen for such predictors of mild cognitive impairment (MCI). However, one often neglected issue in the analysis of such studies is measurement invariance (MI), which is frequently assumed but not tested. This study uses the absence of MI (non-MI) and latent factor scores instead of composite variables to assess properties of cognitive domains, compensation mechanisms, and their predictability, in order to establish a method for a more comprehensive understanding of pathological cognitive decline.
Methods
An exploratory factor analysis (EFA) and a set of increasingly restricted confirmatory factor analyses (CFAs) were conducted to find latent factors, to compare them with the composite approach, and to test for longitudinal (partial) MI in a neuropsychiatric test battery consisting of 14 test variables. A total of 330 elderly participants (mean age: 73.78 ± 1.52 years at baseline) were assessed twice, 3 years apart.
Results
EFA revealed a four-factor model representing declarative memory, attention, working memory, and visual–spatial processing. Based on CFA, an accurate model was estimated across both measurement timepoints. Partial non-MI was found for parameters such as loadings, test and latent factor intercepts, and latent factor variances. The latent factor approach was preferable to the composite approach.
Conclusion
The overall assessment of non-MI latent factors may offer a promising target for this field of research. Specifically, non-MI of variances indicated variables that are especially suited for predicting pathological cognitive decline, while non-MI of intercepts indicated general aging-related decline. As a result, the assessment of MI alone may help distinguish pathological from normative aging processes and may additionally reveal compensatory neuropsychological mechanisms.
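The latent-factor extraction step described above can be illustrated with a minimal sketch. This is not the study's actual SEM workflow (which used increasingly restricted CFAs); it is a hypothetical Python analogue of the EFA stage, using scikit-learn on simulated data with the same dimensions (330 participants, 14 test variables, 4 latent factors).

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical illustration: extract latent factors from a synthetic
# battery of 14 test variables, loosely analogous to the EFA step above.
rng = np.random.default_rng(0)
n_subjects, n_tests, n_factors = 330, 14, 4

# Simulate test scores driven by 4 latent factors plus noise.
latent = rng.normal(size=(n_subjects, n_factors))
loadings = rng.normal(size=(n_factors, n_tests))
scores = latent @ loadings + rng.normal(scale=0.5, size=(n_subjects, n_tests))

fa = FactorAnalysis(n_components=n_factors, random_state=0)
factor_scores = fa.fit_transform(scores)
print(factor_scores.shape)  # (330, 4): one score per subject per latent factor
```

Working with `factor_scores` rather than unweighted composites is the design choice the abstract argues for: it preserves the differential loadings that composite averaging discards.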
In this article, we explain and demonstrate how to model norm scores with the cNORM package in R. This package is designed specifically to determine norm scores when the latent ability to be measured covaries with age or other explanatory variables such as grade level. The mathematical method used in this package draws on polynomial regression to model a three-dimensional hyperplane that smoothly and continuously captures the relation between raw scores, norm scores, and the explanatory variable. By doing so, it overcomes the typical problems of classical norming methods, such as overly large age intervals, missing norm scores, large sampling error in the subsamples, or demanding sample-size requirements. After a brief introduction to the mathematics of the model, we describe the individual methods of the package. We close the article with a practical example using data from a real reading comprehension test.
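The core idea behind this continuous-norming approach can be sketched compactly. The following is not the cNORM implementation (which is in R and uses a more refined model-selection procedure); it is a hedged Python illustration, under simplified assumptions, of fitting a polynomial surface that maps percentile rank and age onto expected raw scores.

```python
import numpy as np
from itertools import product

# Hedged sketch of continuous norming (not the actual cNORM algorithm):
# model raw scores as a polynomial hyperplane over the within-age
# percentile rank l and an explanatory variable such as age a.
rng = np.random.default_rng(1)
n = 1000
age = rng.uniform(6, 12, n)              # explanatory variable (years)
percentile = rng.uniform(0.01, 0.99, n)  # within-age percentile rank
# Simulated raw scores that increase with both age and percentile rank.
raw = 10 * age + 15 * percentile + rng.normal(scale=1.0, size=n)

# Design matrix with all power combinations l^i * a^j up to degree 2 each.
degree = 2
terms = [(i, j) for i, j in product(range(degree + 1), repeat=2)]
X = np.column_stack([percentile**i * age**j for i, j in terms])

# Least-squares fit of the polynomial surface.
coef, *_ = np.linalg.lstsq(X, raw, rcond=None)

# Predicted raw score at the 50th percentile for a 9-year-old.
x_new = np.array([0.5**i * 9.0**j for i, j in terms])
pred = x_new @ coef
```

Because the surface is continuous in age, norm scores can be read off for any age value rather than only for discrete age brackets, which is how the method avoids the overly large age intervals mentioned above.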
Anxiety is characterized by anxious anticipation and heightened vigilance to uncertain threat. However, if threat is not reliably indicated by a specific cue, the context in which threat was previously experienced becomes its best predictor, leading to anxiety. A suitable means to induce anxiety experimentally is context conditioning: In one context (CTX+), an unpredictable aversive stimulus (US) is repeatedly presented, in contrast to a second context (CTX−), in which no US is ever presented. In this EEG study, we investigated attentional mechanisms during acquisition and extinction learning in 38 participants, who underwent a context conditioning protocol. Flickering video stimuli (32 s clips depicting virtual offices representing CTX+/−) were used to evoke steady‐state visual evoked potentials (ssVEPs) as an index of visuocortical engagement with the contexts. Analyses of the electrocortical responses suggest a successful induction of the ssVEP signal by video presentation in flicker mode. Furthermore, we found clear indices of context conditioning and extinction learning on a subjective level, while cortical processing of the CTX+ was unexpectedly reduced during video presentation. The differences between CTX+ and CTX− diminished during extinction learning. Together, these results indicate that the dynamic sensory input of the video presentation leads to disruptions in the ssVEP signal, which is greater for motivationally significant, threatening contexts.
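The ssVEP index used above rests on frequency tagging: a stimulus flickered at a known rate drives visuocortical activity at exactly that rate, which can be recovered from the EEG spectrum. The following is a minimal, hypothetical sketch of that quantification step; the sampling rate and flicker frequency are assumptions for illustration, not the study's parameters.

```python
import numpy as np

# Hedged sketch: quantify an ssVEP as the spectral peak at the flicker
# frequency (sampling rate and frequency here are illustrative choices).
fs, flicker_hz, dur = 500, 15.0, 32.0      # Hz, Hz, seconds (32 s clips)
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(3)
# Simulated EEG: an oscillation tagged at the flicker frequency plus noise.
eeg = 2.0 * np.sin(2 * np.pi * flicker_hz * t) + rng.normal(scale=1.0, size=t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
peak_freq = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC component
print(peak_freq)  # should recover ~15 Hz, the tagging frequency
```

The amplitude of that spectral peak, tracked across conditions, is the "visuocortical engagement" measure: a disrupted or reduced ssVEP for CTX+ corresponds to a smaller peak at the tagging frequency.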
The effect of inherently threatening contexts on visuocortical engagement to conditioned threat
(2023)
Fear and anxiety are crucial for adaptive responding in life‐threatening situations. Whereas fear is a phasic response to an acute threat accompanied by selective attention, anxiety is characterized by a sustained feeling of apprehension and hypervigilance during situations of potential threat. In the current literature, fear and anxiety are usually considered mutually exclusive, with partially separated neural underpinnings. However, there is accumulating evidence that challenges this distinction between fear and anxiety, and simultaneous activation of fear and anxiety networks has been reported. Therefore, the current study experimentally tested potential interactions between fear and anxiety. Fifty‐two healthy participants completed a differential fear conditioning paradigm followed by a test phase in which the conditioned stimuli were presented in front of threatening or neutral contextual images. To capture defense system activation, we recorded subjective (threat, US‐expectancy), physiological (skin conductance, heart rate) and visuocortical (steady‐state visual evoked potentials) responses to the conditioned stimuli as a function of contextual threat. Results demonstrated successful fear conditioning in all measures. In addition, threat and US‐expectancy ratings, cardiac deceleration, and visuocortical activity were enhanced for fear cues presented in threatening compared with neutral contexts. These results are in line with an additive or interactive rather than an exclusive model of fear and anxiety, indicating facilitated defensive behavior to imminent danger in situations of potential threat.
When trying to conceal one's knowledge, various ocular changes occur. However, which cognitive mechanisms drive these changes? Do orienting or inhibition—two processes previously associated with autonomic changes—play a role? To answer this question, we used a Concealed Information Test (CIT) in which participants were either motivated to conceal (orienting + inhibition) or reveal (orienting only) their knowledge. While pupil size increased in both motivational conditions, the fixation and blink CIT effects were confined to the conceal condition. These results were mirrored in autonomic changes, with skin conductance increasing in both conditions while heart rate decreased solely under motivation to conceal. Thus, different cognitive mechanisms seem to drive ocular responses. Pupil size appears to be linked to the orienting of attention (akin to skin conductance changes), while fixations and blinks rather seem to reflect arousal inhibition (comparable to heart rate changes). This knowledge strengthens CIT theory and illuminates the relationship between ocular and autonomic activity.
Although most protective behaviors related to the COVID‐19 pandemic come with personal costs, they will produce the largest benefit if everybody cooperates. This study explores two interacting factors that drive cooperation in this tension between private and collective interests. A preregistered experiment (N = 299) examined (a) how the quality of the relation among interacting partners (social proximity), and (b) how focusing on the risk of self‐infection versus onward transmission affected intentions to engage in protective behaviors. The results suggested that risk focus was an important moderator of the relation between social proximity and protection intentions. Specifically, participants were more willing to accept the risk of self‐infection from close others than from strangers, resulting in less caution toward a friend than toward a distant other. However, when onward transmission was the primary concern, participants were more reluctant to effect transmission to close others, resulting in more caution toward friends than strangers. These findings inform the debate about effective nonclinical measures against the pandemic. Practical implications for risk communication are discussed.
How do people estimate the income that is needed to be rich? Two correlational survey studies (Studies 1 and 2, N = 568) and one registered experimental study (Study 3, N = 500) examined the cognitive mechanisms that are used to derive an answer to this question. We tested whether individuals use their personal income (PI) as a self‐generated anchor to derive an estimate of the income needed to be rich (= income wealth threshold estimation, IWTE). On a bivariate level, we found the expected positive relationship between one's PI and IWTE and, in line with previous findings, we found that people do not consider themselves rich. Furthermore, we predicted that individuals additionally use information about their social status within their social circles to make an IWTE. The findings from Study 2 support this notion and show that only self‐reported high‐income individuals show different IWTEs depending on relative social status: Individuals in this group who self‐reported a high status produced higher IWTEs than individuals who self‐reported low status. The registered experimental study could not replicate this pattern robustly, although the results trended non‐significantly in the same direction. Together, the findings revealed that the income of individuals as well as the social environment are used as sources of information to make IWTE judgements, although they are likely not the only important predictors.
Despite high levels of distress, family caregivers of patients with cancer rarely seek psychosocial support and Internet-based interventions (IBIs) are a promising approach to reduce some access barriers. Therefore, we developed a self-guided IBI for family caregivers of patients with cancer (OAse), which, in addition to patients' spouses, also addresses other family members (e.g., adult children, parents). This study aimed to determine the feasibility of OAse (recruitment, dropout, adherence, participant satisfaction). Secondary outcomes were caregivers’ self-efficacy, emotional state, and supportive care needs. N = 41 family caregivers participated in the study (female: 65%), mostly spouses (71%), followed by children (20%), parents (7%), and friends (2%). Recruitment (47%), retention (68%), and adherence rates (76% completed at least 4 of 6 lessons) support the feasibility of OAse. Overall, the results showed a high degree of overall participant satisfaction (96%). There were no significant pre-post differences in secondary outcome criteria, but a trend toward improvement in managing difficult interactions/emotions (p = .06) and depression/anxiety (p = .06). Although the efficacy of the intervention remains to be investigated, our results suggest that OAse can be well implemented in caregivers’ daily lives and has the potential to improve family caregivers’ coping strategies.
The Coronavirus disease 2019 (COVID-19) has not only had negative effects on employees' health, but also on their prospects to gain and maintain employment. Using a longitudinal research design with two measurement points, we investigated the ramifications of various psychological and organizational resources on employees' careers during the COVID-19 pandemic. Specifically, in a sample of German employees (N = 305), we investigated the role of psychological capital (PsyCap) for four career-related outcomes: career satisfaction, career engagement, coping with changes in career due to COVID-19, and career-related COVID-19 worries. We also employed leader–member exchange (LMX) as a moderator and career adaptability as a mediating variable in these relationships. Results from path analyses revealed a positive association between PsyCap and career satisfaction and career coping. Furthermore, PsyCap was indirectly related to career engagement through career adaptability. However, moderation analysis showed no moderating role of LMX on the link between PsyCap and career adaptability. Our study contributes to the systematic research concerning the role of psychological and organizational resources for employees' careers and well-being, especially for crisis contexts.
Spontaneous brain activity builds the foundation for human cognitive processing during external demands. Neuroimaging studies based on functional magnetic resonance imaging (fMRI) identified specific characteristics of spontaneous (intrinsic) brain dynamics to be associated with individual differences in general cognitive ability, i.e., intelligence. However, fMRI research is inherently limited by low temporal resolution, thus, preventing conclusions about neural fluctuations within the range of milliseconds. Here, we used resting-state electroencephalographical (EEG) recordings from 144 healthy adults to test whether individual differences in intelligence (Raven’s Advanced Progressive Matrices scores) can be predicted from the complexity of temporally highly resolved intrinsic brain signals. We compared different operationalizations of brain signal complexity (multiscale entropy, Shannon entropy, Fuzzy entropy, and specific characteristics of microstates) regarding their relation to intelligence. The results indicate that associations between brain signal complexity measures and intelligence are of small effect sizes (r ∼ 0.20) and vary across different spatial and temporal scales. Specifically, higher intelligence scores were associated with lower complexity in local aspects of neural processing, and less activity in task-negative brain regions belonging to the default-mode network. Finally, we combined multiple measures of brain signal complexity to show that individual intelligence scores can be significantly predicted with a multimodal model within the sample (10-fold cross-validation) as well as in an independent sample (external replication, N = 57). In sum, our results highlight the temporal and spatial dependency of associations between intelligence and intrinsic brain dynamics, proposing multimodal approaches as promising means for future neuroscientific research on complex human traits.
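Two of the ingredients named above, a brain-signal complexity measure and cross-validated prediction, can be sketched briefly. This is a hedged, synthetic illustration, not the study's pipeline: Shannon entropy is computed here from a binned amplitude distribution (the binning choice is an assumption), and the prediction step uses a generic ridge regression with 10-fold cross-validation on simulated data matching the sample size (N = 144).

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def shannon_entropy(signal, bins=32):
    """Shannon entropy of a signal's amplitude distribution, one of the
    complexity measures named above (bin count is an assumption)."""
    counts, _ = np.histogram(signal, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Synthetic stand-in for the analysis: one complexity feature per channel
# per "participant", then 10-fold cross-validated trait prediction.
rng = np.random.default_rng(2)
n_subjects, n_channels = 144, 8
signals = rng.normal(size=(n_subjects, n_channels, 1000))
features = np.array([[shannon_entropy(ch) for ch in subj] for subj in signals])

# Simulated criterion weakly related to the features (small effects, as above).
target = features.mean(axis=1) + rng.normal(scale=1.0, size=n_subjects)

scores = cross_val_score(Ridge(alpha=1.0), features, target, cv=10, scoring="r2")
print(round(scores.mean(), 2))
```

The 10-fold cross-validation mirrors the within-sample validation described in the abstract; the external-replication step would correspond to fitting on this sample and scoring on an entirely separate one.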
To slow down the spread of the SARS-CoV-2 virus, countries worldwide severely restricted public and social life. In addition to the physical threat posed by the viral disease (COVID-19), the pandemic also has implications for psychological well-being. Using a small sample (N = 51), we examined how Big Five personality traits relate to coping with contact restrictions during three consecutive weeks in the first wave of the COVID-19 pandemic in Germany. We showed that extraversion was associated with suffering from severe contact restrictions and with benefiting from their relaxation. Individuals with high neuroticism did not show a change in their relatively poor coping with the restrictions over time, whereas conscientious individuals seemed to experience no discomfort and even positive feelings during the period of contact restrictions. Our results support the assumption that neuroticism is a vulnerability factor in relation to psychological well-being but also show an influence of contact restrictions on extraverted individuals.