Background
While the coordination of oculomotor and manual behavior is essential for driving a car, surprisingly little is known about this interaction, especially in situations requiring a quick steering reaction. In the present study, we analyzed oculomotor gaze and manual steering behavior in approach and avoidance tasks. Three task blocks were implemented within a dynamic simulated driving environment requiring the driver either to steer away from/toward a visual stimulus or to switch between both tasks.
Results
Task blocks requiring task switches were associated with higher manual response times and increased error rates. Manual response times did not significantly differ depending on whether drivers had to steer away from versus toward a stimulus, whereas oculomotor response times and gaze pattern variability were increased when drivers had to steer away from a stimulus compared to steering toward a stimulus.
Conclusion
The increased manual response times and error rates in mixed tasks indicate performance costs associated with cognitive flexibility, while the increased oculomotor response times and gaze pattern variability indicate a parsimonious cross-modal action control strategy (avoiding stimulus fixation prior to steering away from it) for the avoidance scenario. Several discrepancies between these results and typical eye–hand interaction patterns in basic laboratory research suggest that the specific goals and complex perceptual affordances associated with driving a vehicle strongly shape cross-modal control of behavior.
Investigating approach-avoidance behavior toward affective stimuli is important for broadening our understanding of one of the most common psychiatric disorders, social anxiety disorder. Many studies in this field rely on approach-avoidance tasks, which mainly assess hand movements, or on interpersonal distance measures; both return inconsistent results and lack ecological validity. The present study therefore introduces a virtual reality task that captures avoidance parameters (movement time and speed, distance to the social stimulus, gaze behavior) during whole-body movements. Such movements represent the most ecologically valid form of approach and avoidance behavior and lie at the core of complex, natural social behavior. With this newly developed task, the present study examined whether highly socially anxious individuals differ in avoidance behavior when bypassing another person, here virtual humans with neutral or angry facial expressions. Results showed that virtual bystanders displaying angry facial expressions were generally avoided by all participants. In addition, highly socially anxious participants displayed generally enhanced avoidance behavior towards virtual people, but no specifically exaggerated avoidance of virtual people with a negative facial expression. The newly developed virtual reality task proved to be an ecologically valid tool for research on complex approach-avoidance behavior in social situations. These first results reveal that whole-body approach-avoidance behavior relative to passive bystanders is modulated by their emotional facial expressions and that social anxiety generally amplifies such avoidance.
A hallmark of habitual actions is that, once they are established, they become insensitive to changes in the values of action outcomes. In this article, we review empirical research that examined effects of posttraining changes in outcome values in outcome-selective Pavlovian-to-instrumental transfer (PIT) tasks. This review suggests that cue-instigated action tendencies in these tasks are not affected by weak and/or incomplete revaluation procedures (e.g., selective satiety) but are substantially disrupted by a strong and complete devaluation of reinforcers. In a second part, we discuss two alternative models of the motivational control of habitual action: a default-interventionist framework and expected value of control theory. It is argued that the default-interventionist framework cannot solve the problem of an infinite regress (i.e., what controls the controller?). In contrast, expected value of control can explain control of habitual actions with local computations and feedback loops, without (implicit) references to control homunculi. It is argued that insensitivity to changes in action outcomes is not an intrinsic design feature of habits but, rather, a function of the cognitive system that controls habitual action tendencies.
According to the motivational priming hypothesis, unpleasant stimuli activate the motivational defense system, which in turn promotes congruent affective states such as negative emotions and pain. The question arises to what degree this bottom-up impact of emotions on pain is susceptible to a manipulation of top-down expectations. To this end, we investigated whether verbal instructions implying pain potentiation vs. reduction (placebo or nocebo expectations), later confirmed by corresponding experiences (placebo or nocebo conditioning), might alter behavioral and neurophysiological correlates of pain modulation by unpleasant pictures. We compared two groups, which underwent three experimental phases: first, participants were instructed either that watching unpleasant affective pictures would increase pain (nocebo group) or that watching unpleasant pictures would decrease pain (placebo group) relative to neutral pictures. During the following placebo/nocebo-conditioning phase, pictures were presented together with electrical pain stimuli of different intensities, reinforcing the instructions. In the subsequent test phase, all pictures were presented again, combined with identical pain stimuli. The electroencephalogram was recorded in order to analyze neurophysiological responses of pain (somatosensory evoked potential) and picture processing (visually evoked late positive potential, LPP), in addition to pain ratings. In the test phase, ratings of pain stimuli administered while watching unpleasant relative to neutral pictures were significantly higher in the nocebo group, thus confirming the motivational priming effect for pain perception. In the placebo group, this effect was reversed such that unpleasant compared with neutral pictures led to significantly lower pain ratings. Similarly, somatosensory evoked potentials were decreased during unpleasant compared with neutral pictures, in the placebo group only.
LPPs of the placebo group failed to discriminate between unpleasant and neutral pictures, while the LPPs of the nocebo group showed a clear differentiation. We conclude that the placebo manipulation already affected the processing of the emotional stimuli and, in consequence, the processing of the pain stimuli. In summary, the study revealed that the modulation of pain by emotions, albeit a reliable and well-established finding, is further tuned by reinforced expectations—known to induce placebo/nocebo effects—which should be addressed in future research and considered in clinical applications.
Research on facial emotion expression has mostly focused on emotion recognition, assuming that a small number of discrete emotions is elicited and expressed via prototypical facial muscle configurations as captured in still photographs. These are expected to be recognized by observers, presumably via template matching. In contrast, appraisal theories of emotion propose a more dynamic approach, suggesting that specific elements of facial expressions are directly produced by the result of certain appraisals and predicting the facial patterns to be expected for certain appraisal configurations. This approach has recently been extended to emotion perception, claiming that observers first infer individual appraisals and only then make categorical emotion judgments based on the estimated appraisal patterns, using inference rules. Here, we report two related studies to empirically investigate the facial action unit configurations that are used by actors to convey specific emotions in short affect bursts and to examine to what extent observers can infer a person's emotions from the predicted facial expression configurations. The results show that (1) professional actors use many of the predicted facial action unit patterns to enact systematically specified appraisal outcomes in a realistic scenario setting, and (2) naïve observers infer the respective emotions based on highly similar facial movement configurations with a degree of accuracy comparable to earlier research findings. Based on estimates of underlying appraisal criteria for the different emotions we conclude that the patterns of facial action units identified in this research correspond largely to prior predictions and encourage further research on appraisal-driven expression and inference.
Virtual reality plays an increasingly important role in the research and therapy of pathological fear. However, the mechanisms by which virtual environments elicit and modify fear responses are not yet fully understood. Presence, a psychological construct referring to the ‘sense of being there’ in a virtual environment, is widely assumed to crucially influence the strength of elicited fear responses; however, the direction of causality is still under debate. The present study is the first to experimentally manipulate both variables in order to unravel the causal link between presence and fear responses. Height-fearful participants (N = 49) were immersed in a virtual height situation and a neutral control situation (fear manipulation), each with either high or low sensory realism (presence manipulation). Ratings of presence as well as verbal and physiological (skin conductance, heart rate) fear responses were recorded. Results revealed an effect of the fear manipulation on presence, i.e., higher presence ratings in the height situation than in the neutral control situation, but no effect of the presence manipulation on fear responses. However, presence ratings during the first exposure to the high-quality neutral environment were predictive of later fear responses in the height situation. Our findings support the hypothesis that experiencing emotional responses in a virtual environment leads to a stronger feeling of being there, i.e., increased presence. In contrast, the effects of presence on fear seem to be more complex: on the one hand, increased presence due to the quality of the virtual environment did not influence fear; on the other hand, presence variability that likely stemmed from differences in user characteristics did predict later fear responses. These findings underscore the importance of user characteristics in the emergence of presence.
It is one of the primary goals of medical care to secure good quality of life (QoL) while prolonging survival. This is a major challenge in severe medical conditions with a poor prognosis, such as amyotrophic lateral sclerosis (ALS). Further, the definition of QoL, and the question whether survival in this severe condition is compatible with a good QoL, is a matter of subjective and culture-specific debate. Some people without neurodegenerative conditions believe that physical decline is incompatible with a satisfactory QoL. Current data provide extensive evidence that psychosocial adaptation in ALS is possible, as indicated by a satisfactory QoL. Thus, there is no fatalistic link between declining physical health and loss of QoL. Intrinsic and extrinsic factors have been shown to successfully facilitate and secure QoL in ALS; these are reviewed in the following article along the four ethical principles of (1) Beneficence, (2) Non-maleficence, (3) Autonomy, and (4) Justice, which are regarded as key elements of patient-centered medical care according to Beauchamp and Childress. This is a JPND-funded work summarizing findings of the project NEEDSinALS (www.NEEDSinALS.com), which highlights subjective perspectives and preferences in medical decision making in ALS.
Preclinical studies point to a pivotal role of the orexin 1 (OX1) receptor in arousal and fear learning and therefore suggest the HCRTR1 gene as a prime candidate in panic disorder (PD) with/without agoraphobia (AG), PD/AG treatment response, and PD/AG-related intermediate phenotypes. Here, a multilevel approach was applied to test the non-synonymous HCRTR1 C/T Ile408Val gene variant (rs2271933) for association with PD/AG in two independent case-control samples (total n = 613 cases, 1839 healthy subjects), as an outcome predictor of a six-week exposure-based cognitive behavioral therapy (CBT) in PD/AG patients (n = 189), as well as with respect to agoraphobic cognitions (ACQ) (n = 483 patients, n = 2382 healthy subjects), fMRI alerting network activation in healthy subjects (n = 94), and a behavioral avoidance task in PD/AG pre- and post-CBT (n = 271). The HCRTR1 rs2271933 T allele was associated with PD/AG in both samples independently, and in their meta-analysis (p = 4.2 × 10−7), particularly in the female subsample (p = 9.8 × 10−9). T allele carriers displayed a significantly poorer CBT outcome (e.g., Hamilton anxiety rating scale: p = 7.5 × 10−4). The T allele count was linked to higher ACQ scores in PD/AG patients and healthy subjects, and to decreased inferior frontal gyrus and increased locus coeruleus activation in the alerting network. Finally, the T allele count was associated with increased pre-CBT exposure avoidance and autonomic arousal as well as decreased post-CBT improvement. In sum, the present results provide converging evidence for an involvement of HCRTR1 gene variation in the etiology of PD/AG and PD/AG-related traits as well as treatment response to CBT, supporting future therapeutic approaches targeting the orexin-related arousal system.
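The abstract reports a meta-analysis across the two independent case-control samples. The exact meta-analytic procedure is not specified there; as a purely illustrative sketch, one standard way to combine evidence from independent tests is Fisher's method, shown here with invented p-values (not the study's data):

```python
# Illustrative only: combining p-values from independent samples with
# Fisher's method, a common approach in two-sample association
# meta-analyses. The input p-values below are made up for this sketch.
from scipy.stats import combine_pvalues

p_sample1, p_sample2 = 0.01, 0.005  # hypothetical per-sample p-values
stat, p_combined = combine_pvalues([p_sample1, p_sample2], method="fisher")
print(f"combined p = {p_combined:.2e}")
```

Fisher's statistic is -2 times the sum of the log p-values, compared against a chi-squared distribution with 2k degrees of freedom for k samples; the combined p-value can be considerably smaller than either individual one when both samples point the same way.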
Examining the testing effect in university teaching: retrievability and question format matter
(2018)
Review of learned material is crucial for the learning process. One approach that promises to increase the effectiveness of reviewing during learning is to answer questions about the learning content rather than restudying the material (testing effect). This effect is well established in lab experiments. However, existing research in educational contexts has often combined testing with additional didactic measures, which hampers the interpretation of testing effects. We aimed to examine the testing effect in its pure form by implementing a minimal intervention design in a university lecture (N = 92). The last 10 min of each lecture session were used for reviewing the lecture content by either answering short-answer questions, answering multiple-choice questions, or reading summarizing statements about core lecture content. Three unannounced criterial tests measured the retention of learning content at different times (1, 12, and 23 weeks after the last lecture). A positive testing effect emerged for short-answer questions that targeted information that participants could retrieve from memory. This effect was independent of the time of test. The results indicated no testing effect for multiple-choice testing. These results suggest that short-answer testing, but not multiple-choice testing, may benefit learning in higher education contexts.
In three studies, we investigated whether and how different modes of presentation - written, auditory, audiovisual (auditory combined with pictures) - affect comprehension of semantically identical materials. Children from the age of 7 as well as adults were included in the studies. A large number of studies have shown that pictures can facilitate text comprehension (e.g. Carney & Levin, 2002).
Unlike the majority of these previous studies, we assessed text comprehension with methods that we assume allow more differentiated insights into the cognitive processes that - according to current theories - underlie text comprehension. Text comprehension involves at least three levels of mental representation (see Kintsch, 1998). Moreover, text comprehension means constructing a locally and globally coherent mental representation of the text content.
Using a sentence recognition task (see Schmalhofer & Glavanov, 1986), we examined whether the memory of the text surface, the text base, and the situation model differs between written, auditory, and audiovisual text presentation in a sample of 103 8- and 10-year-olds and adults (Study I), and between auditory and audiovisual text presentation in a sample of 106 7-, 9-, and 11-year-olds (Study II). Furthermore, we examined with 155 9- and 11-year-olds, whether the ability to draw inferences to establish local and global coherence differs between written, auditory, and audiovisual text presentation. These inferences were indicated by reaction times to words associated with a protagonist's super- (global) or subordinate (local) goal.
Taken together, the results of these three studies indicate that children up to age 11 have better memory not only of the text surface, but also of the situation model, when pictures are added to an auditory text. This effect became apparent in comparison with both auditory and written texts. For the adults, in contrast, we did not find an effect of presentation mode. Furthermore, both 9- and 11-year-olds were better at establishing global coherence with audiovisual compared to auditory text presentation. Written presentation turned out to be superior to auditory presentation in terms of both local and global coherence.
The rise of automated driving will fundamentally change our mobility in the near future. This thesis specifically considers the stage of so-called highly automated driving (Level 3, SAE International, 2014). At this level, a system carries out vehicle guidance in specific application areas, e.g. on highway roads. The driver may temporarily suspend monitoring of the driving task and use the time to engage in so-called non-driving related tasks (NDR-tasks). However, the driver remains responsible for resuming vehicle control when prompted by the system. This new role of the driver has to be critically examined from a human factors perspective.
The main aim of this thesis was to systematically investigate the impact of different NDR-tasks on driver behavior and take-over performance. Wickens’ (2008) architecture of multiple resource theory was chosen as theoretical framework, with the building blocks of multiplicity (task interference due to resource overlap), mental workload (task demands), and aspects of executive control or self-regulation. Specific adaptations and extensions of the theory were discussed to account for the context of NDR-task interactions in highly automated driving.
Overall, four driving simulator studies were carried out to investigate the role of these theoretical components. Study 1 showed that drivers concentrated NDR-task engagement on sections of highly automated rather than manual driving. In addition, drivers avoided task engagement prior to predictable take-over situations. These results indicate that self-regulatory behavior, as reported for manual driving, also takes place in the context of highly automated driving. Study 2 specifically addressed the impact of NDR-tasks’ stimulus and response modalities on take-over performance. Results showed that particularly visual-manual tasks with high motoric load (including the need to put away a handheld object) had detrimental effects. However, drivers seemed to be aware of task-specific distraction in take-over situations and consistently canceled visual-manual tasks, in contrast to a less impairing auditory-vocal task. Study 3 revealed that the mental demand of NDR-tasks should also be considered with respect to drivers’ take-over performance. Finally, different human-machine interfaces were developed and evaluated in simulator Study 4. Concepts including an explicit pre-alert (“notification”) clearly supported drivers’ self-regulation and achieved high usability and acceptance ratings.
Overall, this thesis indicates that the architecture of multiple resource theory provides a useful framework for research in this field. Practical implications arise regarding the potential legal regulation of NDR-tasks as well as the design of elaborated human-machine interfaces.
The abilities to comprehend and critically evaluate scientific texts and the various arguments stated in these texts are an important aspect of scientific literacy, but these competences are usually not formally taught to students. Previous research indicates that, although undergraduate students evaluate the claims and evidence they find in scientific documents to some extent, these evaluations usually fail to meet normative standards. In addition, students’ use of source information for evaluation is often insufficient. The rise of the internet and the increased accessibility of information have yielded additional challenges that highlight the importance of adequate training and instruction.
The aim of the present work was to further examine introductory students’ competences to systematically and heuristically evaluate scientific information, to identify relevant strategies that are involved in successful evaluation, and to use this knowledge to design appropriate interventions for fostering epistemic competences in university students.
To this end, a number of computer-based studies, including both quantitative and qualitative data as well as experimental designs, were developed. The first two studies were designed to specify educational needs and to reveal helpful processing strategies that are required in different tasks and situations. Two expert-novice comparisons were conducted, whereby the performance of German students of psychology (novices) was compared to the performance of scientists from the domain of psychology (experts) in a number of different tasks, such as systematic plausibility evaluations of informal arguments (Study 1) or heuristic evaluations of the credibility of multiple scientific documents (Study 2). A think-aloud procedure was used to identify specific strategies that were applied in both groups during task completion, and that possibly mediated performance differences between students and scientists. In addition, relationships between different strategies and between strategy use and relevant conceptual knowledge were examined. Based on the results of the expert-novice comparisons, an intervention study, consisting of two training experiments, was constructed to foster those competences that proved to be particularly deficient in the comparisons (Study 3).
Study 1 examined introductory students’ abilities to accurately judge the plausibility of informal arguments according to normative standards, to recognise common argumentation fallacies, and to identify different structural components of arguments. The results from Study 1 indicate that many students, compared to scientists, lack relevant knowledge about the structure of arguments, and that normatively accurate evaluations of argument plausibility are challenging for this group. Common argumentation fallacies were often not identified correctly. Importantly, these deficits were partly mediated by differences in strategy use: it was especially difficult for students to pay sufficient attention to the relationship between argument components when forming their judgements. Moreover, they frequently relied on their intuition or opinion as a criterion for evaluation, whereas scientists predominantly determined the quality of arguments based on their internal consistency.
In addition to students’ evaluation of the plausibility of informal arguments, Study 2 examined introductory students’ competences to evaluate the credibility of multiple scientific texts and to use source characteristics for evaluation. The results show that students struggled not only to judge the plausibility of arguments correctly, but also to heuristically judge the credibility of science texts, and these deficits were fully mediated by their insufficient use of source information. In contrast, scientists were able to apply different strategies in a flexible manner. When the conditions for evaluation did not allow systematic processing (i.e. under a time limit), they primarily used source characteristics for their evaluations. However, when systematic evaluations were possible (i.e. without a time limit), they used more sophisticated normative criteria, such as paying attention to the internal consistency of arguments (cf. Study 1). Results also showed that students, in contrast to experts, lacked relevant knowledge about different publication types, and this was related to their ability to correctly determine document credibility. The results from the expert-novice comparisons further suggest that the competences assessed in both tasks might develop as a result of a more fundamental form of scientific literacy and discipline expertise; performances in all tasks were positively related.
On the basis of these results, two training experiments were developed that aimed at fostering university students’ competences to understand and evaluate informal arguments (Study 3). Experiment 1 describes an intervention approach in which students were familiarised with the formal structure of arguments based on Toulmin’s (1958) argumentation model. The performance of the experimental group in identifying the structural components of this model was compared to the performance of a control group in which speed-reading skills were practised, using a pre-post-follow-up design. Results show that, compared to the control group, the training successfully improved the comprehension of more complex arguments and of relational aspects between key components in the posttest. Moreover, an interaction effect with study performance was found: high-achieving students with above-average grades profited the most from the training intervention. Experiment 2 showed that training in plausibility, normative criteria of argument evaluation, and argumentation fallacies improved students’ abilities to evaluate the plausibility of arguments and, in addition, their competences to recognise structural components of arguments, compared to a speed-reading control group. These results have important implications for education and practice, which are discussed in detail in this dissertation.
Regulatory focus (RF) theory (Higgins, 1997) states that individuals follow different strategic concerns when focusing on gains (promotion) rather than losses (prevention). Applying the Reflective-Impulsive Model (RIM, Strack & Deutsch, 2004), this dissertation investigates RF’s influence on basic information processing, specifically semantic processing (Study 1), semantic (Study 2) and affective (Study 3) associative priming, and basic reflective operations (Studies 4-7). Study 1 showed no effect of RF on pre-activation of RF-related semantic concepts in a lexical decision task (LDT). Study 2 indicated that primes fitting a promotion focus improve performance in a LDT for chronically promotion-focused individuals, but not chronically prevention-focused individuals. However, the latter performed better when targets fit their focus. Stronger affect and arousal after processing valent words fitting an RF may explain this pattern. Study 3 showed some evidence for stronger priming effects for negative primes in a bona fide pipeline task (Fazio et al., 1995) for chronically prevention-focused participants, while also providing evidence that a situational prevention focus insulates individuals from misattributing the valence of simple primes. Studies 4-7 showed that a strong chronic prevention focus leads to greater negation effects for valent primes in an Affect Misattribution Procedure (Payne et al., 2005), especially when it fits the situation. Furthermore, Study 6 showed that these effects result from stronger weighting of negated valence rather than greater ease in negation. Study 7 showed that the increased negation effect is independent of time pressure. Broad implications are discussed, including how RF effects on basic processing may explain higher-order RF effects.
Training programs and their use in preschool, school, and out-of-school educational contexts enjoy increasing popularity. Broad and unabated interest in research and practice is devoted in particular to preschool training concepts, which are credited with the potential to effectively prevent later difficulties in the acquisition of written language.
The Würzburg training program »Hören, lauschen, lernen« represents a training approach that is conceptually grounded in theoretical assumptions about written language acquisition and has been tested in several evaluation studies. It aims to make learning to read and write easier for children. Its claim to effectively prevent later reading and spelling difficulties rests on the preschool promotion of domain-specific precursor skills of literacy acquisition, in particular phonological awareness. This promotion is exploited optimally when recommendations for high-quality implementation are followed, specified as fidelity to the manual, intensity of delivery, program differentiation, program complexity, implementation strategies, quality of instruction, and participant responsiveness.
In training research, criteria of practical suitability are increasingly being discussed alongside the theoretical foundation of program approaches and the empirical evidence they must provide. The present work therefore addresses the question of program robustness against trainer effects. 300 children took part in the Würzburg training program and were compared with 64 children who followed the regular kindergarten program. Guided by the nursery staff, the 5-month training took place within the preschool year. The children's development in the domain-specific skills of phonological awareness and grapheme-phoneme correspondence was assessed before and after the training and at the transition to school, and their spelling and reading skills were assessed at the end of the first school year. Immediate and long-term effects of the training program could be demonstrated; however, no transfer effect emerged.
The exploration of trainer effects was based on an examination of the program's practical suitability in terms of its implementation by the supervising nursery staff. From the original data base of 300 children from 44 participating kindergartens, three subgroups totaling 174 children from 17 kindergartens were identified that showed clear discrepancies with respect to the immediate, long-term, and transfer effects of the training program. Differences in delivery were explored in order to draw conclusions about qualitative aspects of program implementation. The findings of this extreme-group comparison suggested that aspects of manual fidelity and delivery intensity were less decisive for program effectiveness; rather, the way in which the training content was conveyed to the children by the nursery staff appeared to be crucial for the program's effectiveness. Findings on participant responsiveness, which point to differential training effects, highlighted the effectiveness of the training particularly for children who were prognostically at risk of developing later difficulties with written language. Furthermore, it emerged that, in addition to the quality of program implementation, differences in the school's instructional method for reading and writing apparently also exerted a leveling influence on the program's transfer success. Theoretical and practical implications for the use of the training program are discussed.
Previous research has shown that low-level visual features (i.e., low-level visual saliency) as well as socially relevant information predict gaze allocation in free viewing conditions. However, these studies mainly used static and highly controlled stimulus material, thus revealing little about the robustness of attentional processes across diverging situations. Moreover, the influence of affective stimulus characteristics on visual exploration patterns remains poorly understood. Participants in the present study freely viewed a set of naturalistic, contextually rich video clips from a variety of settings that were capable of eliciting different moods. Using recordings of eye movements, we quantified, with generalized linear mixed models, the degree to which social information, emotional valence, and low-level visual features influenced gaze allocation. We found substantial and similarly large regression weights for low-level saliency and social information, affirming the importance of both predictor classes under ecologically more valid dynamic stimulation conditions. Differences in predictor strength between individuals were large and highly stable across videos. Additionally, low-level saliency was less important for fixation selection in videos containing persons than in videos not containing persons, and less important for videos perceived as negative. We discuss the generalizability of these findings and the feasibility of applying this research paradigm to patient groups.
Epigenetic mechanisms have been proposed to mediate fear extinction in animal models. Here, MAOA methylation was analyzed via direct sequencing of sodium bisulfite-treated DNA extracted from blood cells before and after a 2-week exposure therapy in a sample of n = 28 female patients with acrophobia as well as in n = 28 matched healthy female controls. Clinical response was measured using the Acrophobia Questionnaire and the Attitude Towards Heights Questionnaire. The functional relevance of altered MAOA methylation was investigated by luciferase-based reporter gene assays. MAOA methylation was found to be significantly decreased in patients with acrophobia compared with healthy controls. Furthermore, MAOA methylation levels were shown to significantly increase after treatment and to correlate with treatment response as reflected by decreasing Acrophobia Questionnaire/Attitude Towards Heights Questionnaire scores. Functional analyses revealed decreased reporter gene activity in the presence of methylated compared with unmethylated pCpGfree_MAOA reporter gene vector constructs. The present proof-of-concept psychotherapy-epigenetic study suggests, for the first time, functional MAOA methylation changes as a potential epigenetic correlate of treatment response in acrophobia and fosters further investigation into the notion of epigenetic mechanisms underlying fear extinction.
We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = 0.05 to p = 0.005, is deleterious for the finding of new discoveries and the progress of science. Given that both blanket and variable alpha levels are problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and sample size much more directly than significance testing does, but none of these statistical tools should be taken as a new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, and implications for applications. To boil all this down to a binary decision based on a p-value threshold of 0.05, 0.01, 0.005, or anything else, is not acceptable.
Acrophobia is characterized by intense fear in height situations. Virtual reality (VR) can be used to trigger such phobic fear, and VR exposure therapy (VRET) has proven effective for the treatment of phobias, although it remains important to further elucidate the factors that modulate and mediate the fear responses triggered in VR. The present study assessed verbal and behavioral fear responses triggered by a height simulation in a 5-sided cave automatic virtual environment (CAVE) with visual and acoustic simulation, and further investigated how fear responses are modulated by immersion, i.e., an additional wind simulation, and presence, i.e., the feeling of being present in the virtual environment (VE). Results revealed a high validity of the CAVE and VE in provoking height-related self-reported fear and avoidance behavior in accordance with a trait measure of acrophobic fear. Increasing immersion significantly increased fear responses in high height anxious (HHA) participants, but did not affect presence. Nevertheless, presence was found to be an important predictor of fear responses. We conclude that a CAVE system can be used to elicit valid fear responses, which might be further enhanced by immersion manipulations independent of presence. These results may help to improve VRET efficacy and its transfer to real situations.
Altruistic punishment is connected to trait anger, not trait altruism, if compensation is available
(2018)
Altruistic punishment and altruistic compensation are important concepts that are used to investigate altruism. However, altruistic punishment has been found to be correlated with anger. We were interested in whether altruistic punishment and altruistic compensation are both driven by trait altruism and trait anger, or whether the influence of these two traits is more specific to one of the behavioral options. We found that when participants were able to apply altruistic compensation and altruistic punishment together in one paradigm, trait anger predicted only altruistic punishment and trait altruism predicted only altruistic compensation. Interestingly, these relations are disguised in classical altruistic punishment and altruistic compensation paradigms in which participants can only punish or only compensate. Hence, altruistic punishment and altruistic compensation paradigms should be merged if one is interested in trait altruism without the confounding influence of trait anger.
The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance-congruity and imbalance-dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into the cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes change depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models of the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias.