Major depressive disorder and the anxiety disorders are highly prevalent, disabling and moderately heritable. Depression and anxiety are also highly comorbid and have a strong genetic correlation (rg ≈ 1). Cognitive behavioural therapy is a leading evidence-based treatment but has variable outcomes. Currently, there are no strong predictors of outcome. Therapygenetics research aims to identify genetic predictors of prognosis following therapy. We performed genome-wide association meta-analyses of symptoms following cognitive behavioural therapy in adults with anxiety disorders (n = 972), adults with major depressive disorder (n = 832) and children with anxiety disorders (n = 920; meta-analysis n = 2724). We estimated the variance in therapy outcomes that could be explained by common genetic variants (h²SNP) and used polygenic scoring to examine genetic associations between therapy outcomes and psychopathology, personality and learning. No single nucleotide polymorphisms were strongly associated with treatment outcomes. No significant estimate of h²SNP could be obtained, suggesting the heritability of therapy outcome is smaller than our analysis was powered to detect. Polygenic scoring failed to detect genetic overlap between therapy outcome and psychopathology, personality or learning. This study is the largest therapygenetics study to date. Results are consistent with previous, similarly powered genome-wide association studies of complex traits.
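The polygenic scoring mentioned in this abstract reduces to a simple computation: each person's score is the sum of their risk-allele counts weighted by per-SNP effect sizes from a discovery GWAS. A minimal sketch with simulated data — the SNP count, effect-size distribution, and genotype coding below are illustrative assumptions, not details from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical GWAS summary statistics: one effect size (beta) per SNP.
n_people, n_snps = 100, 500
betas = rng.normal(0.0, 0.05, n_snps)

# Genotypes coded as risk-allele counts (0, 1, or 2) per person and SNP.
genotypes = rng.integers(0, 3, size=(n_people, n_snps))

# A polygenic score is the beta-weighted sum of a person's risk-allele counts.
prs = genotypes @ betas  # shape: (n_people,)

# Scores are typically standardized before being tested as outcome predictors.
prs_z = (prs - prs.mean()) / prs.std()
```

In practice the betas come from an independent discovery sample and SNPs are filtered by significance threshold and linkage disequilibrium before summing; the weighted-sum step itself is as above.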
Previous research showed that full body ownership illusions in virtual reality (VR) can be robustly induced by providing congruent visual stimulation, and that congruent tactile experiences provide a dispensable extension to an already established phenomenon. Here we show that visuo-tactile congruency indeed does not add to already high measures for body ownership on explicit measures, but does modulate movement behavior when walking in the laboratory. Specifically, participants who took ownership over a more corpulent virtual body with intact visuo-tactile congruency increased safety distances towards the laboratory's walls compared to participants who experienced the same illusion with deteriorated visuo-tactile congruency. This effect is in line with the body schema more readily adapting to a more corpulent body after receiving congruent tactile information. We conclude that the action-oriented, unconscious body schema relies more heavily on tactile information compared to more explicit aspects of body ownership.
Investigating approach-avoidance behavior regarding affective stimuli is important in broadening the understanding of one of the most common psychiatric disorders, social anxiety disorder. Many studies in this field rely on approach-avoidance tasks, which mainly assess hand movements, or on interpersonal distance measures, which return inconsistent results and lack ecological validity. Therefore, the present study introduces a virtual reality task that captures avoidance parameters (movement time and speed, distance to the social stimulus, gaze behavior) during whole-body movements. These complex movements, which are at the core of natural social behavior, represent the most ecologically valid form of approach and avoidance behavior. With this newly developed task, the present study examined whether high socially anxious individuals differ in avoidance behavior when bypassing another person, here virtual humans with neutral or angry facial expressions. Results showed that virtual bystanders displaying angry facial expressions were generally avoided by all participants. In addition, high socially anxious participants displayed generally enhanced avoidance behavior towards virtual people, but no specifically exaggerated avoidance of virtual people with a negative facial expression. The newly developed virtual reality task proved to be an ecologically valid tool for research on complex approach-avoidance behavior in social situations. These first results reveal that whole-body approach-avoidance behavior relative to passive bystanders is modulated by their emotional facial expressions and that social anxiety generally amplifies such avoidance.
A hallmark of habitual actions is that, once they are established, they become insensitive to changes in the values of action outcomes. In this article, we review empirical research that examined effects of posttraining changes in outcome values in outcome-selective Pavlovian-to-instrumental transfer (PIT) tasks. This review suggests that cue-instigated action tendencies in these tasks are not affected by weak and/or incomplete revaluation procedures (e.g., selective satiety) but are substantially disrupted by a strong and complete devaluation of reinforcers. In a second part, we discuss two alternative models of motivational control of habitual action: a default-interventionist framework and expected value of control theory. It is argued that the default-interventionist framework cannot solve the problem of an infinite regress (i.e., what controls the controller?). In contrast, expected value of control theory can explain control of habitual actions with local computations and feedback loops, without (implicit) reference to control homunculi. It is argued that insensitivity to changes in action outcomes is not an intrinsic design feature of habits but, rather, a function of the cognitive system that controls habitual action tendencies.
According to the motivational priming hypothesis, unpleasant stimuli activate the motivational defense system, which in turn promotes congruent affective states such as negative emotions and pain. The question arises to what degree this bottom-up impact of emotions on pain is susceptible to a manipulation of top-down-driven expectations. To this end, we investigated whether verbal instructions implying pain potentiation vs. reduction (placebo or nocebo expectations)—later on confirmed by corresponding experiences (placebo or nocebo conditioning)—might alter behavioral and neurophysiological correlates of pain modulation by unpleasant pictures. We compared two groups, which underwent three experimental phases: first, participants were either instructed that watching unpleasant affective pictures would increase pain (nocebo group) or that watching unpleasant pictures would decrease pain (placebo group) relative to neutral pictures. During the following placebo/nocebo-conditioning phase, pictures were presented together with electrical pain stimuli of different intensities, reinforcing the instructions. In the subsequent test phase, all pictures were presented again combined with identical pain stimuli. The electroencephalogram was recorded in order to analyze neurophysiological responses of pain (somatosensory evoked potential) and picture processing [visually evoked late positive potential (LPP)], in addition to pain ratings. In the test phase, ratings of pain stimuli administered while watching unpleasant relative to neutral pictures were significantly higher in the nocebo group, thus confirming the motivational priming effect for pain perception. In the placebo group, this effect was reversed such that unpleasant compared with neutral pictures led to significantly lower pain ratings. Similarly, somatosensory evoked potentials were decreased during unpleasant compared with neutral pictures, in the placebo group only.
LPPs of the placebo group failed to discriminate between unpleasant and neutral pictures, while the LPPs of the nocebo group showed a clear differentiation. We conclude that the placebo manipulation already affected the processing of the emotional stimuli and, in consequence, the processing of the pain stimuli. In summary, the study revealed that the modulation of pain by emotions, albeit a reliable and well-established finding, is further tuned by reinforced expectations—known to induce placebo/nocebo effects—which should be addressed in future research and considered in clinical applications.
Research on facial emotion expression has mostly focused on emotion recognition, assuming that a small number of discrete emotions is elicited and expressed via prototypical facial muscle configurations as captured in still photographs. These are expected to be recognized by observers, presumably via template matching. In contrast, appraisal theories of emotion propose a more dynamic approach, suggesting that specific elements of facial expressions are directly produced by the result of certain appraisals and predicting the facial patterns to be expected for certain appraisal configurations. This approach has recently been extended to emotion perception, claiming that observers first infer individual appraisals and only then make categorical emotion judgments based on the estimated appraisal patterns, using inference rules. Here, we report two related studies to empirically investigate the facial action unit configurations that are used by actors to convey specific emotions in short affect bursts and to examine to what extent observers can infer a person's emotions from the predicted facial expression configurations. The results show that (1) professional actors use many of the predicted facial action unit patterns to enact systematically specified appraisal outcomes in a realistic scenario setting, and (2) naïve observers infer the respective emotions based on highly similar facial movement configurations with a degree of accuracy comparable to earlier research findings. Based on estimates of underlying appraisal criteria for the different emotions we conclude that the patterns of facial action units identified in this research correspond largely to prior predictions and encourage further research on appraisal-driven expression and inference.
Virtual reality plays an increasingly important role in research on and therapy of pathological fear. However, the mechanisms by which virtual environments elicit and modify fear responses are not yet fully understood. Presence, a psychological construct referring to the ‘sense of being there’ in a virtual environment, is widely assumed to crucially influence the strength of the elicited fear responses; however, causality is still under debate. The present study is the first to experimentally manipulate both variables to unravel the causal link between presence and fear responses. Height-fearful participants (N = 49) were immersed in a virtual height situation and a neutral control situation (fear manipulation) with either high or low sensory realism (presence manipulation). Ratings of presence and verbal and physiological (skin conductance, heart rate) fear responses were recorded. Results revealed an effect of the fear manipulation on presence, i.e., higher presence ratings in the height situation than in the neutral control situation, but no effect of the presence manipulation on fear responses. However, presence ratings during the first exposure to the high-quality neutral environment were predictive of later fear responses in the height situation. Our findings support the hypothesis that experiencing emotional responses in a virtual environment leads to a stronger feeling of being there, i.e., increased presence. In contrast, the effects of presence on fear seem to be more complex: on the one hand, increased presence due to the quality of the virtual environment did not influence fear; on the other hand, presence variability that likely stemmed from differences in user characteristics did predict later fear responses. These findings underscore the importance of user characteristics in the emergence of presence.
It is one of the primary goals of medical care to secure good quality of life (QoL) while prolonging survival. This is a major challenge in severe medical conditions with a poor prognosis such as amyotrophic lateral sclerosis (ALS). Further, the definition of QoL and the question whether survival in this severe condition is compatible with a good QoL are matters of subjective and culture-specific debate. Some people without neurodegenerative conditions believe that physical decline is incompatible with satisfactory QoL. Current data provide extensive evidence that psychosocial adaptation in ALS is possible, indicated by a satisfactory QoL. Thus, there is no fatalistic link between declining physical health and loss of QoL. Intrinsic and extrinsic factors that have been shown to successfully facilitate and secure QoL in ALS are reviewed in this article along the four ethical principles (1) Beneficence, (2) Non-maleficence, (3) Autonomy and (4) Justice, which are regarded as key elements of patient-centered medical care according to Beauchamp and Childress. This is a JPND-funded work summarizing findings of the project NEEDSinALS (www.NEEDSinALS.com), which highlights subjective perspectives and preferences in medical decision making in ALS.
Immersive virtual reality is a powerful method to modify the environment and thereby influence experience. The present study used a virtual hand illusion and context manipulation in immersive virtual reality to examine top-down modulation of pain. Participants received painful heat stimuli on their forearm and placed an embodied virtual hand (co-located with their real one) under a virtual water tap, which dispensed virtual water under different experimental conditions. We aimed to induce a temperature illusion by a red, blue or white light suggesting warm, cold or no virtual water. In addition, the sense of agency was manipulated by allowing participants to have high or low control over the virtual hand’s movements. Most participants experienced a thermal sensation in response to the virtual water and associated the blue and red light with cool/cold or warm/hot temperatures, respectively. Importantly, the blue light condition reduced and the red light condition increased pain intensity and unpleasantness, both compared to the control condition. The control manipulation influenced the sense of agency, but did not influence pain ratings. The large effects revealed in our study suggest that context effects within an embodied setting in an immersive virtual environment should be considered within VR-based pain therapy.
Models of eye-movement control distinguish between different control levels, ranging from automatic (bottom-up, stimulus-driven selection) and automatized (based on well-learned routines) to voluntary (top-down, goal-driven selection, e.g., based on instructions). However, one type of voluntary control has yet only been examined in the manual and not in the oculomotor domain, namely free-choice selection among arbitrary targets, that is, targets that are of equal interest from both a bottom-up and top-down processing perspective. Here, we ask which features of targets (identity- or location-related) are used to determine such oculomotor free-choice behavior. In two experiments, participants executed a saccade to one of four peripheral targets in three different choice conditions: unconstrained free choice, constrained free choice based on target identity (color), and constrained free choice based on target location. The analysis of choice frequencies revealed that unconstrained free-choice selection closely resembled constrained choice based on target location. The results suggest that free-choice oculomotor control is mainly guided by spatial (location-based) target characteristics. We explain these results by assuming that participants tend to avoid less parsimonious recoding of target-identity representations into spatial codes, the latter being a necessary prerequisite to configure oculomotor commands.
Detecting whether a suspect possesses incriminating (e.g., crime-related) information can provide valuable decision aids in court. To this end, the Concealed Information Test (CIT) has been developed and is currently applied on a regular basis in Japan. But whereas research has revealed a high validity of the CIT in student and normal populations, research investigating its validity in forensic samples is scarce. This applies even more to the reaction time-based CIT (RT-CIT), for which no such research is available so far. The current study tested the application of the RT-CIT for an imaginary mock crime scenario both in a sample of prisoners (n = 27) and in a matched control group (n = 25). Results revealed a high validity of the RT-CIT for discriminating between crime-related and crime-unrelated information, visible in medium to very high effect sizes for error rates and reaction times. Interestingly, in accordance with theories that criminal offenders may have worse response inhibition capacities and that response inhibition plays a crucial role in the RT-CIT, CIT effects in the error rates were even elevated in the prisoners compared to the control group. No support for this hypothesis could, however, be found in reaction time CIT effects. Also, performance in a standard Stroop task, which was conducted to measure executive functioning, did not differ between the groups, and no correlation was found between Stroop task performance and performance in the RT-CIT. Despite frequently raised concerns that the RT-CIT may not be applicable in non-student and forensic populations, our results suggest that such a use may be possible and that effects seem to be quite large. Future research should build on these findings by increasing the realism of the crime and interrogation situation and by further investigating the replicability and theoretical substantiation of increased effects in non-student and forensic samples.
Background
While the coordination of oculomotor and manual behavior is essential for driving a car, surprisingly little is known about this interaction, especially in situations requiring a quick steering reaction. In the present study, we analyzed oculomotor gaze and manual steering behavior in approach and avoidance tasks. Three task blocks were implemented within a dynamic simulated driving environment requiring the driver either to steer away from/toward a visual stimulus or to switch between both tasks.
Results
Task blocks requiring task switches were associated with higher manual response times and increased error rates. Manual response times did not significantly differ depending on whether drivers had to steer away from vs toward a stimulus, whereas oculomotor response times and gaze pattern variability were increased when drivers had to steer away from a stimulus compared to steering toward a stimulus.
Conclusion
The increased manual response times and error rates in mixed tasks indicate performance costs associated with cognitive flexibility, while the increased oculomotor response times and gaze pattern variability indicate a parsimonious cross-modal action control strategy (avoiding stimulus fixation prior to steering away from it) for the avoidance scenario. Several discrepancies between these results and typical eye–hand interaction patterns in basic laboratory research suggest that the specific goals and complex perceptual affordances associated with driving a vehicle strongly shape cross-modal control of behavior.
Promising initial insights show that offices designed to permit physical activity (PA) may reduce workplace sitting time. Biophilic approaches are intended to introduce natural surroundings into the workplace, and preliminary data show positive effects on stress reduction and elevated productivity within the workplace. The primary aim of this pilot study was to analyze changes in workplace sitting time and self-reported habit strength concerning uninterrupted sitting and PA during work, when relocating from a traditional office setting to “active” biophilic-designed surroundings. The secondary aim was to assess possible changes in work-associated factors such as satisfaction with the office environment, work engagement, and work performance, among office staff. In a pre-post designed field study, we collected data through an online survey on health behavior at work. Twelve participants completed the survey before (one-month pre-relocation, T1) and twice after the office relocation (three months (T2) and seven months post-relocation (T3)). Standing time per day during office hours increased from T1 to T3 by about 40 min per day (p < 0.01). Other outcomes remained unaltered. The results suggest that changing office surroundings to an active-permissive biophilic design increased standing time during working hours. Future larger-scale controlled studies are warranted to investigate the influence of office design on sitting time and work-associated factors during working hours in depth.
Responding in the presence of stimuli leads to an integration of stimulus features and response features into event files, which can later be retrieved to assist action control. This integration mechanism is not limited to target stimuli, but can also include distractors (distractor-response binding). A recurring research question is which factors determine whether or not distractors are integrated. One suggested candidate factor is target-distractor congruency: distractor-response binding effects were reported to be stronger for congruent than for incongruent target-distractor pairs. Here, we discuss a general problem with including the factor of congruency in typical analyses used to study distractor-based binding effects. Integrating this factor leads to a confound, such that any differences between distractor-response binding effects of congruent and incongruent distractors may be explained by a simple congruency effect. Simulation data confirmed this argument. We propose to interpret previous data cautiously and discuss potential avenues to circumvent this problem in the future.
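The confound described in this abstract can be made concrete with a small simulation. Assuming a two-choice task in which a repeated distractor stays mapped to its prime response (so that probe congruency is forced by the design cells), a pure probe-congruency cost produces opposite apparent "binding" effects for congruent and incongruent target-distractor pairs even though no binding is simulated at all. All numbers (30 ms cost, noise level) are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

BASE, CONGRUENCY_COST, NOISE = 500.0, 30.0, 20.0  # ms; illustrative values

def probe_congruent(prime_congruent, distractor_repeats, response_repeats):
    """Probe congruency forced by a two-choice design: a repeated distractor
    stays mapped to the prime response, so its congruency flips whenever
    the response changes."""
    if distractor_repeats:
        return prime_congruent == response_repeats
    # A new distractor is chosen to keep pair congruency constant
    # within the condition.
    return prime_congruent

def mean_rt(prime_congruent, dr, rr, n=4000):
    # Simulated RTs contain ONLY a probe-congruency cost -- no binding at all.
    cost = 0.0 if probe_congruent(prime_congruent, dr, rr) else CONGRUENCY_COST
    return (BASE + cost + rng.normal(0.0, NOISE, n)).mean()

def binding_effect(prime_congruent):
    # Standard interaction score: distractor-repetition effect for
    # response changes minus that for response repetitions.
    return ((mean_rt(prime_congruent, dr=True, rr=False)
             - mean_rt(prime_congruent, dr=False, rr=False))
            - (mean_rt(prime_congruent, dr=True, rr=True)
               - mean_rt(prime_congruent, dr=False, rr=True)))

b_congruent = binding_effect(True)     # comes out near +30 ms
b_incongruent = binding_effect(False)  # comes out near -30 ms
```

Under these assumptions, the congruent condition shows a spurious positive "binding" effect and the incongruent condition a spurious negative one, their difference equal to twice the congruency cost — exactly the pattern a congruency main effect alone can mimic.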
Chronic alcohol use leads to specific neurobiological alterations in the dopaminergic brain reward system, which probably lead to a reward deficiency syndrome in alcohol dependence. The purpose of our study was to examine the effects of such hypothesized neurobiological alterations on the behavioral level, more precisely on implicit and explicit reward learning. Alcohol users were classified as dependent drinkers (using the DSM-IV criteria), binge drinkers (using criteria of the US National Institute on Alcohol Abuse and Alcoholism) or low-risk drinkers (following recommendations of the Scientific Board of Trustees of the German Health Ministry). The final sample (n = 94) consisted of 36 low-risk alcohol users, 37 binge drinkers and 21 abstinent alcohol-dependent patients. Participants were administered a probabilistic implicit reward learning task and an explicit reward- and punishment-based trial-and-error learning task. Alcohol-dependent patients showed a lower performance in implicit and explicit reward learning than low-risk drinkers. Binge drinkers learned less than low-risk drinkers in the implicit learning task. The results support the assumption that binge drinking and alcohol dependence are related to a chronic reward deficit. Binge drinking accompanied by implicit reward learning deficits could increase the risk for the development of alcohol dependence.
Continuous norming methods have seldom been subjected to scientific review. In this simulation study, we compared parametric with semi-parametric continuous norming methods in psychometric tests by constructing a fictitious population model within which a latent ability increases with age across seven age groups. We drew samples of different sizes (n = 50, 75, 100, 150, 250, 500 and 1,000 per age group) and simulated the results of an easy, medium, and difficult test scale based on Item Response Theory (IRT). We subjected the resulting data to different continuous norming methods and compared the data fit under the different test conditions with a representative cross-validation dataset of n = 10,000 per age group. The most significant differences were found in suboptimal (i.e., too easy or too difficult) test scales and in ability levels that were far from the population mean. We discuss the results with regard to the selection of the appropriate modeling techniques in psychometric test construction, the required sample sizes, and the requirement to report appropriate quantitative and qualitative test quality criteria for continuous norming methods in test manuals.
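The data-generation step described in this abstract can be sketched as follows. Assuming a 1-PL (Rasch) model, latent abilities that increase linearly across the seven age groups, and illustrative item counts and difficulty values (not the study's actual design), simulated mean raw scores rise with age on easy, medium, and difficult scales:

```python
import numpy as np

rng = np.random.default_rng(7)

N_PER_GROUP, N_ITEMS = 2000, 20
group_means = np.linspace(-1.0, 1.0, 7)  # latent ability grows across 7 age groups
difficulties = {"easy": -1.5, "medium": 0.0, "difficult": 1.5}

def mean_raw_score(mu, b):
    """Simulate one age group on a scale of N_ITEMS Rasch items of
    difficulty b and return the group's mean raw score."""
    theta = rng.normal(mu, 1.0, N_PER_GROUP)            # person abilities
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b)))     # 1-PL success probability
    responses = rng.random((N_PER_GROUP, N_ITEMS)) < p  # Bernoulli item responses
    return responses.sum(axis=1).mean()                 # mean raw score (0..20)

mean_scores = {scale: [mean_raw_score(mu, b) for mu in group_means]
               for scale, b in difficulties.items()}
```

A norming method is then fitted to such raw-score distributions per age group; the point of the simulation design is that the true population model is known, so the fitted norms can be compared against a large cross-validation sample.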
Human actions are generally not determined by external stimuli, but by internal goals and by the urge to evoke desired effects in the environment. To reach these effects, humans typically have to act. But at times, deciding not to act can be better suited or even the only way to reach a desired effect. What mental processes are involved when people decide not to act to reach certain effects? From the outside it may seem that nothing remarkable is happening, because no action can be observed. However, I present three studies which disclose the cognitive processes that control nonactions.
The present experiments address situations where people intentionally decide to omit certain actions in order to produce a predictable effect in the environment. These experiments are based on the ideomotor hypothesis, which suggests that bidirectional associations can be formed between actions and the resulting effects. Because of these associations, anticipating the effects can in turn activate the respective action. The results of the present experiments show that associations can be formed between nonactions (i.e., the intentional decision not to act) and the resulting effects. Due to these associations, perceiving the nonaction effects encourages not acting (Exp. 1–3). What is more, planning a nonaction seems to come with an activation of the effects that inevitably follow the nonaction (Exp. 4–5). These results suggest that the ideomotor hypothesis can be expanded to nonactions and that nonactions are cognitively represented in terms of their sensory effects. Furthermore, nonaction effects can elicit a sense of agency (Exp. 6–8). That is, even though people refrain from acting, the resulting nonaction effects are perceived as self-produced effects.
In a nutshell, these findings demonstrate that intentional nonactions include specific mechanisms and processes, which are involved, for instance, in effect anticipation and the sense of agency. This means that, while it may seem that nothing remarkable is happening when people decide not to act, complex processes run on the inside, which are also involved in intentional actions.
In most foreign language learning contexts, there are only rare chances for contact with native speakers of the target language. In such a situation, reading plays an important role in language acquisition as well as in gaining cultural information about the target language and its speakers.
Previous research indicated that reading in a foreign language is a complex process influenced by various linguistic, cognitive and affective factors. The aim of the present study was to test two structural models of the relationship between reading comprehension in the native language (L1), English (L2) reading motivation, metacognitive awareness of L2 reading strategies, and reading comprehension of English as a foreign language in two samples. Furthermore, the current study aimed to examine differences between Egyptian and German students in their perceived use of reading strategies while reading English texts, as well as to explore the pattern of their motivation toward reading English texts. For this purpose, 401 students were recruited from Germany (n = 200) and Egypt (n = 201). Metacognitive awareness of reading strategies was assessed with a self-report questionnaire (SORS) developed by Mokhtari and Sheorey (2002). The L2 reading motivation variable was measured by a reading motivation survey (L2RMQ) based on previous reading motivation research. In addition, two reading tests were administered, one measuring reading comprehension in the native language (German/Arabic) and the other measuring English reading comprehension.
To analyze the collected data, descriptive statistics and independent t-tests were performed. In addition, further analysis using structural equation modeling was applied to test the strength of relationships between the variables under study.
The results from the current research revealed that L1 reading comprehension, whether in German or Arabic, had the strongest relationship with L2 reading comprehension. However, the relationship between L2 intrinsic reading motivation and L2 reading comprehension was not significant in either the German or the Egyptian model. On the other hand, the relationships between L2 extrinsic reading motivation, metacognitive awareness of reading strategies, and L2 reading comprehension were significant only in the German sample. These results and their pedagogical implications for education and practice are discussed.
Research on the deployment and use of technology to assist learning has seen a significant rise over the last decades (Aparicio et al., 2017). The focus on course quality, technology, learning outcome and learner satisfaction in e-learning has led to insufficient attention by researchers to individual characteristics of learners (Cidral et al., 2017; Hsu et al., 2013). The current work aims to bridge this gap by investigating characteristics identified by previous works and backed by theory as influential individual differences in e-learning. These learner characteristics have been suggested as motivational factors (Edmunds et al., 2012) in decisions by learners to interact and exchange information (Luo et al., 2017).
In this work, e-learning is defined as interaction-dependent information seeking and sharing enabled by technology, approached primarily from a media psychology perspective. The roles of learner characteristics, namely beliefs about the source of knowledge (Schommer, 1990), learning styles (Felder & Silverman, 1988), need for affect (Maio & Esses, 2001), need for cognition (Cacioppo & Petty, 1982) and power distance (Hofstede, 1980), in interactions to seek and share information in e-learning are investigated. These investigations were shaped by theory and empirical lessons, as briefly outlined in the next paragraphs. Theoretical support is derived from the technology acceptance model (TAM) by psychologist Davis (1989) and the hyperpersonal model by communication scientist Walther (1996). The TAM was used to describe the influence of learner characteristics on decisions to use e-learning systems (Stantchev et al., 2014). The hyperpersonal model described why computer-mediated communication thrives in e-learning (Kaye et al., 2016) and how learners interpret messages exchanged online (Hansen et al., 2015). This theoretical framework was followed by empirical reviews, which justified the use of interaction and information seeking-sharing as key components of e-learning as well as the selection of learner characteristics. The reviews provided suggestions for the measurement of variables (Kühl et al., 2014) and the investigation design (Dascalau et al., 2015). Investigations were designed and implemented through surveys and quasi-experiments, which were used for three preliminary studies and two main studies. Samples were selected from Germany and Ghana, with the same variables tested in both countries. Hypotheses were tested with interaction and information seeking-sharing as dependent variables, while beliefs about the source of knowledge, learning styles, need for affect, need for cognition and power distance were independent variables.
Firstly, using analyses of variance, the influence of beliefs about the source of knowledge on interaction choices of learners was supported. Secondly, the role of need for cognition on interaction choices of learners was supported by results from a logistic regression. Thirdly, results from multiple linear regressions backed the influence of need for cognition and power distance on information seeking-sharing behaviour of learners. Fourthly, the relationship between need for affect and need for cognition
was supported. The findings may have implications for media psychology research, the theories used in this work, research on e-learning, the measurement of learner characteristics, and the design of e-learning platforms. The findings suggest that the beliefs learners hold about the source of knowledge, their need for cognition, and their power distance can influence decisions to interact and to seek or share information. The outlook from the reviews and findings in this work predicts more research on learner characteristics and a corresponding increase in the use of e-learning by individuals. It is suggested that future studies investigate the relationship between learner autonomy and power distance. Studies on inter-cultural similarities amongst e-learners in different populations are also suggested.
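To make the first of the analyses described above concrete, the following is a minimal sketch of a one-way ANOVA F-statistic, of the kind used to test whether interaction choices differ across belief groups. The group names and scores are invented illustrative data, not the study's data.

```python
# Hypothetical sketch: one-way ANOVA F-statistic over invented data.

def one_way_anova_f(groups):
    """Compute the F-statistic for a one-way ANOVA over a list of samples."""
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

# Illustrative: interaction scores for three hypothetical belief groups
naive = [2.1, 2.4, 2.0, 2.3]
mixed = [3.0, 3.2, 2.9, 3.1]
sophisticated = [4.0, 4.2, 3.9, 4.1]
f = one_way_anova_f([naive, mixed, sophisticated])  # f ≈ 154.35
```

A large F relative to the F(k-1, n-k) distribution would, in such a design, support group differences; in practice a statistics package (e.g. SciPy's `f_oneway`) would also supply the p-value.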
Social Cueing of Numerical Magnitude: Observed Head Orientation Influences Number Processing
(2019)
In many parts of the modern world, numbers are used as tools to describe spatial relationships, be it heights, latitudes, or distances. However, this connection goes deeper as a myriad of studies showed that number representations are rooted in space (vertical, horizontal, and/or radial). For instance, numbers were shown to affect spatial perception and, conversely, perceptions or movements in space were shown to affect number estimations. This bidirectional link has already found didactic application in the classroom when children are taught the meaning of numbers. However, our knowledge about the cognitive (and neuropsychological) processes underlying the numerical magnitude operations is still very limited.
Several authors indicated that the processing within peripersonal space (i.e. the space surrounding the body in reaching distance) and numerical magnitude operations are functionally equivalent. This assumption has several implications that the present work aims at describing. For instance, vision and visuospatial attention orienting play a prominent role for processing within peripersonal space. Indeed, both neuropsychological and behavioral studies also suggested a similar role of vision and visuospatial attention orienting for number processing. Moreover, social cognition research showed that movements, posture and gestures affect not only the representation of one's own peripersonal space, but also the visuospatial attention behavior of an observer. Against this background, the current work tests the specific implication of the functional equivalence assumption that the spatial attention response to an observed person’s posture should extend to the observer’s numerical magnitude operations.
The empirical part of the present work tests the spatial attention response of observers to vertical head postures (with continuing eye contact with the observer) in both perceptual and numerical space. Two experimental series are presented that follow both steps, from the observation of another person's vertical head orientation (within his/her peripersonal space) to the observer's attention orienting response (Experimental Series A), and from there to the observer's magnitude operations with numbers (Experimental Series B). Results show that the observation of a movement from a neutral to a vertical head orientation (Experiment 1), as well as the observation of the vertical head orientation alone (Experiment 3), shifted the observer's spatial attention in correspondence with the direction information of the observed head (up vs. down). Movement from a vertical to a neutral end position, however, had no effect on the observer's spatial attention orienting response (Experiment 2). Furthermore, following a down-tilted head posture (relative to an up- or non-tilted head orientation), observers generated smaller numbers in a random number generation task (range 1-9, Experiment 4), gave smaller estimates to numerical trivia questions (mostly multi-digit numbers, Experiment 5), and, in a free choice task, less frequently chose response keys that were associated with larger numerical magnitudes in an intermixed numerical magnitude task.
Experimental Series A served as groundwork for Experimental Series B, as it demonstrated that observing another person’s head orientation indeed triggered the expected directional attention orienting response in the observer. Based on this preliminary work, the results of Experimental Series B lend support to the assumption that numerical magnitude operations are grounded in visuospatial processing of peripersonal space. Thus, the present studies brought together numerical and social cognition as well as peripersonal space research. Moreover, the Empirical Part of the present work provides the basis for elaborating on the role of processing within peripersonal space in terms of Walsh’s (2003, 2013) Theory of Magnitude. In this context, a specification of the Theory of Magnitude was staked out in a processing model that stresses the pivotal role of spatial attention orienting. Implications for mental magnitude operations are discussed. Possible applications in the classroom and beyond are described.
This dissertation highlights various aspects of basic social attention by choosing versatile approaches to disentangle the precise mechanisms underlying the preference to focus on other human beings. The progressive examination of different social processes contrasted with aspects of previously adopted principles of general attention. Recent research investigating eye movements during free exploration revealed a clear and robust social bias, especially for the faces of depicted human beings in a naturalistic scene. However, free viewing implies a combination of mechanisms, namely automatic attention (bottom-up), goal-driven allocation (top-down), or contextual cues, and requires consideration of overt orienting (open exploration using the eyes) as well as covert orienting (peripheral attention without eye movement). Within the scope of this dissertation, all of these aspects have been disentangled in three studies to provide a thorough investigation of different influences on social attention mechanisms.
In the first study (section 2.1), we implemented top-down manipulations targeting non-social features in a social scene to test competing resources. Interestingly, attention towards social aspects prevailed, even though this was detrimental to completing the task requirements. Furthermore, the tendency of this bias was evident for overall fixation patterns, as well as for fixations occurring directly after stimulus onset, suggesting sustained as well as early preferential processing of social features. Although the introduction of tasks generally changes gaze patterns, our results imply only subtle variance when stimuli are social. In conclusion, this experiment indicates that attention towards social aspects remains preferential even in light of top-down demands.
The second study (section 2.2) comprised two separate experiments: one in which we investigated reflexive covert attention, and another in which we tested reflexive as well as sustained overt attention for images in which a human being was unilaterally located on either the left or right half of the scene. The first experiment used a modified dot-probe paradigm, in which peripheral probes were presented either congruently on the side of the social aspect or incongruently on the non-social side. This was based on the assumption that social features would act similarly to cues in traditional spatial cueing paradigms, thereby facilitating reaction times for probes presented on the social half as opposed to the non-social half. Indeed, results reflected such a congruency effect. The second experiment investigated these reflexive mechanisms by monitoring eye movements and specifying the locations of saccades and fixations for short as well as long presentation times. Again, we found the majority of initial saccades to be congruently directed to the social side of the stimulus. Furthermore, we replicated findings for sustained attention processes, with the highest fixation densities for the head region of the displayed human being.
The third study (section 2.3) tackled the other mechanism proposed in the attention dichotomy, the bottom-up influence. Specifically, we reduced the available contextual information of a scene by using a gaze-contingent display, in which only the currently fixated region was visible to the viewer while the rest of the image remained masked. Thereby, participants had to voluntarily shift their gaze in order to explore the stimulus. First, results replicated the social bias in free-viewing displays. Second, the preference to select social features was also evident in gaze-contingent displays. Third, we found more recurrent gaze patterns for social images compared to non-social ones in both viewing modalities. Taken together, these findings imply a top-down driven preference for social features that is largely independent of contextual information.
Importantly, for all experiments, we took the saliency predictions of different computational algorithms into consideration to ensure that the observed social bias was not a result of high physical saliency within these areas. For our second experiment, we even reduced the stimulus set to those images that yielded lower mean and peak saliency for the side of the stimulus containing the social information, considering algorithms based on low-level features as well as pre-trained high-level features incorporated in deep learning algorithms.
Our experiments offer new insights into single attentional mechanisms with regard to static social naturalistic scenes and enable a further understanding of basic social processing as distinct from non-social attention. The replicability and consistency of our findings across experiments speak for a robust effect, attributing to social attention an exceptional role within the general attention construct, not only behaviorally but potentially also on a neuronal level, and further allowing implications for clinical populations with impaired social functioning.
Biological Markers of Attentional Biases in Social Anxiety and Their Modification
(2019)
This dissertation addresses biological correlates of attentional biases and explores their modification in a longitudinal experiment. For this purpose, more than 100 socially anxious participants were recruited via a screening procedure and examined with respect to an event-related lateralization known as the "N2pc".
During the first laboratory session, the N2pc indicated a medium-sized, statistically highly significant attentional bias towards angry compared to neutral faces during a dot-probe paradigm. The classically used measure of reaction time differences, in contrast, failed to capture this attentional bias. Furthermore, neither the electrophysiological nor the behavioral measure was related to social anxiety questionnaires, which can partly be attributed to a lack of internal consistency.
Subsequently, the predominantly female participants completed almost 7000 trials of an attentional bias modification training or an active control procedure in eight separate sessions over two to four weeks. Afterwards, the event-related lateralization was extinguished, albeit in a later time window than expected. This disappearance of the attentional bias remained stable up to eleven weeks after the end of the training procedure. Moreover, the same modification also occurred in the control group. While the self-reported severity of symptoms did not change, a reduction of the personality trait neuroticism was observed, a trait that is conceptually closely intertwined with the concept of anxiety.
Exploratory follow-up analyses revealed a stronger modulation of the right cerebral hemisphere, that is, by stimuli in the left visual hemifield. Recomputing the attentional bias separately for each hemisphere therefore seems advisable for future studies as well. Furthermore, a change in the hyperpolarization following the N2 component was identified as the carrier of the modification over time. Whether a modulation of an earlier event-related component can be achieved by adapting the procedure remains unanswered at present.
This EEG study investigated the influence of previously presented sequences of angry and neutral facial expressions on the neurocognitive processing of a currently perceived face, taking into account the modulating effects of individual anxiety, a socially stressful context, and increased cognitive load.
The results provided evidence for parallel processing and integration of structural and emotional facial information already at the level of basic visual face analysis. Moreover, a general contextual influence of face sequences on cognitive face processing was demonstrated already in this early phase, an influence that even increased in later phases of cognitive processing.
This demonstrated that temporal integration, i.e., the specific sequence of perceived faces, plays an important role in the cognitive evaluation of the currently perceived face. These results were furthermore situated within a revision of the face processing model by Haxby and colleagues and illustrated in an sLORETA analysis.
In addition, the findings on individual anxiety and cognitive load supported Attentional Control Theory and the Dual Mechanisms of Control model.
Context conditioning is characterized by unpredictable threat, and its generalization may constitute a risk factor for panic disorder (PD). Therefore, we examined differences between individuals with panic attacks (PA; N = 21) and healthy controls (HC, N = 22) in contextual learning and context generalization using a virtual reality (VR) paradigm. Successful context conditioning was indicated in both groups by higher arousal, anxiety and contingency ratings, and increased startle responses and skin conductance levels (SCLs) in an anxiety context (CTX+) where an aversive unconditioned stimulus (US) occurred unpredictably vs. a safety context (CTX−). PA compared to HC exhibited increased differential responding to CTX+ vs. CTX− and overgeneralization of contextual anxiety on an evaluative verbal level, but not on a physiological level. We conclude that increased contextual conditioning and contextual generalization may constitute risk factors for PD or agoraphobia, contributing to the characteristic avoidance of anxiety contexts and withdrawal to safety contexts, and that evaluative cognitive processes may play a major role.
Forward Collision Alarms (FCA) are intended to signal hazardous traffic situations and the need for an immediate corrective driver response. However, data from naturalistic driving studies revealed that approximately half of all alarms activated by conventional FCA systems were unnecessary. In these situations, the alarm activation was correct according to the implemented algorithm, yet the alarms led to no or only minimal driver responses. Psychological research can make an important contribution to understanding drivers' needs when interacting with driver assistance systems.
The overarching objective of this thesis was to gain a systematic understanding of psychological factors and processes that influence drivers’ perceived need for assistance in potential collision situations. To elucidate under which conditions drivers perceive alarms as unnecessary, a theoretical framework of drivers’ subjective alarm evaluation was developed. A further goal was to investigate the impact of unnecessary alarms on drivers’ responses and acceptance. Four driving simulator studies were carried out to examine the outlined research questions.
In line with the hypotheses derived from the theoretical framework, the results suggest that drivers' perceived need for assistance is determined by their retrospective subjective hazard perception. While predictions of conventional FCA systems are exclusively based on physical measurements resulting in a time to collision, human drivers additionally consider their own manoeuvre intentions and those attributed to other road users to anticipate the further course of a potentially critical situation. When drivers anticipate a dissolving outcome of a potential conflict, they perceive the situation as less hazardous than the system does. Based on this discrepancy, the system would activate an alarm while drivers' perceived need for assistance is low. To sum up, the described factors and processes cause drivers to perceive certain alarms as unnecessary. Although drivers accept unnecessary alarms less than useful alarms, unnecessary alarms do not reduce their overall system acceptance. While unnecessary alarms cause moderate driver responses in the short term, the intensity of responses decreases with multiple exposures to unnecessary alarms. However, overall, the effects of unnecessary alarms on drivers' alarm responses and acceptance seem to be rather uncritical.
This thesis provides insights into human factors that explain when FCAs are perceived as unnecessary. These factors might contribute to design FCA systems tailored to drivers’ needs.
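Since conventional FCA systems, as described above, base alarms exclusively on physical measurements yielding a time to collision, the core alarm criterion can be sketched as follows. The threshold and kinematic values are illustrative assumptions, not parameters of any real system.

```python
# Hedged sketch of a time-to-collision (TTC) alarm criterion.
# Threshold and kinematic values below are invented for illustration.

def time_to_collision(gap_m, relative_speed_mps):
    """TTC in seconds; infinite if the gap is not closing."""
    if relative_speed_mps <= 0:
        return float("inf")
    return gap_m / relative_speed_mps

def fca_alarm(gap_m, relative_speed_mps, threshold_s=2.5):
    """Alarm fires purely on physics: it cannot see the driver's
    manoeuvre intentions, which is why some alarms feel unnecessary."""
    return time_to_collision(gap_m, relative_speed_mps) < threshold_s

# Lead vehicle 30 m ahead, closing at 15 m/s: TTC = 2.0 s -> alarm,
# even if the driver already intends to change lanes.
print(fca_alarm(30.0, 15.0))
```

The mismatch the thesis describes falls out of this design: the function has no input for anticipated manoeuvres, so a dissolving conflict and a genuine emergency with the same kinematics trigger the same alarm.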
Both low-level physical saliency and social information, as presented by human heads or bodies, are known to drive gaze behavior in free-viewing tasks. Researchers have previously made use of a great variety of face stimuli, ranging from photographs of real humans to schematic faces, frequently without systematically differentiating between the two. In the current study, we used a Generalized Linear Mixed Model (GLMM) approach to investigate to what extent schematic artificial faces can predict gaze when they are presented alone or in competition with real human faces. Relative differences in predictive power became apparent, while GLMMs suggest substantial effects for real and artificial faces in all conditions. Artificial faces were accordingly less predictive than real human faces but still contributed significantly to gaze allocation. These results help to further our understanding of how social information guides gaze in complex naturalistic scenes.
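The study above fitted GLMMs; random effects require a dedicated library (e.g. lme4 or statsmodels' MixedLM), so the following pure-Python sketch covers only the fixed-effect part: a logistic GLM fitted by gradient descent on synthetic data. The predictors (face type: real vs. artificial, plus local saliency) and all data are invented to mirror the study's question, not taken from it.

```python
import math
import random

def fit_logistic(xs, ys, lr=0.05, epochs=500):
    """Stochastic gradient-descent fit of a logistic model p = sigmoid(w.x + b)."""
    w = [0.0] * len(xs[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            err = y - p
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Synthetic data: does a fixation land on the face region?
random.seed(0)
xs, ys = [], []
for _ in range(400):
    real = 1.0 if random.random() < 0.5 else 0.0  # 1 = real face, 0 = artificial
    sal = random.random()                          # local saliency in [0, 1]
    # invented ground truth: real faces attract gaze more strongly
    logit = 2.0 * real + 1.0 * sal - 1.5
    ys.append(1 if random.random() < 1.0 / (1.0 + math.exp(-logit)) else 0)
    xs.append([real, sal])

w, b = fit_logistic(xs, ys)  # w[0] recovers a positive real-face effect
```

A full GLMM would add random intercepts per participant and per image on top of these fixed effects, which is exactly what lets it separate the face-type effect from idiosyncratic viewers and stimuli.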
Research with adults in laboratory settings has shown that distributed rereading is a beneficial learning strategy but its effects depend on time of test. When learning outcomes are measured immediately after rereading, distributed rereading yields no benefits or even detrimental effects on learning, but the beneficial effects emerge two days later. In a preregistered experiment, the effects of distributed rereading were investigated in a classroom setting with school students. Seventh-graders (N = 191) reread a text either immediately or after 1 week. Learning outcomes were measured after 4 min or 1 week. Participants in the distributed rereading condition reread the text more slowly, predicted their learning success to be lower, and reported a lower on-task focus. At the shorter retention interval, massed rereading outperformed distributed rereading in terms of learning outcomes. Contrary to students in the massed condition, students in the distributed condition showed no forgetting from the short to the long retention interval. As a result, they performed equally well as the students in the massed condition at the longer retention interval. Our results indicate that distributed rereading makes learning more demanding and difficult and leads to higher effort during rereading. Its effects on learning depend on time of test, but no beneficial effects were found, not even at the delayed test.
Animals, just like humans, can freely move. They do so for various important reasons, such as finding food and escaping predators. Observing these behaviors can inform us about the underlying cognitive processes. In addition, while humans can convey complicated information easily through speaking, animals need to move their bodies to communicate. This has prompted many creative solutions by animal neuroscientists to enable studying the brain during movement. In this review, we first summarize how animal researchers record from the brain while an animal is moving, by describing the most common neural recording techniques in animals and how they were adapted to record during movement. We further discuss the challenge of controlling or monitoring sensory input during free movement.
However, free movement is not only necessary for the outcomes of certain internal cognitive processes to be reflected in behavior; it is also a fascinating field of research, since certain crucial behavioral patterns can only be observed and studied during free movement. Therefore, in the second part of the review, we focus on some key findings in animal research that specifically address the interaction between free movement and brain activity. First, focusing on walking as a fundamental form of free movement, we discuss how important such intentional movements are for understanding processes as diverse as spatial navigation, active sensing, and complex motor planning. Second, we propose the idea of regarding free movement as the expression of a behavioral state. This view can help to understand the general influence of movement on brain function.
Together, the technological advancements towards recording from the brain during movement, and the scientific questions asked about the brain engaged in movement, make animal research highly valuable to research into the human “moving brain”.
Preclinical studies point to a pivotal role of the orexin 1 (OX1) receptor in arousal and fear learning and therefore suggest the HCRTR1 gene as a prime candidate in panic disorder (PD) with/without agoraphobia (AG), PD/AG treatment response, and PD/AG-related intermediate phenotypes. Here, a multilevel approach was applied to test the non-synonymous HCRTR1 C/T Ile408Val gene variant (rs2271933) for association with PD/AG in two independent case-control samples (total n = 613 cases, 1839 healthy subjects), as an outcome predictor of a six-week exposure-based cognitive behavioral therapy (CBT) in PD/AG patients (n = 189), as well as with respect to agoraphobic cognitions (ACQ) (n = 483 patients, n = 2382 healthy subjects), fMRI alerting network activation in healthy subjects (n = 94), and a behavioral avoidance task in PD/AG pre- and post-CBT (n = 271). The HCRTR1 rs2271933 T allele was associated with PD/AG in both samples independently, and in their meta-analysis (p = 4.2 × 10−7), particularly in the female subsample (p = 9.8 × 10−9). T allele carriers displayed a significantly poorer CBT outcome (e.g., Hamilton anxiety rating scale: p = 7.5 × 10−4). The T allele count was linked to higher ACQ scores in PD/AG and healthy subjects, decreased inferior frontal gyrus and increased locus coeruleus activation in the alerting network. Finally, the T allele count was associated with increased pre-CBT exposure avoidance and autonomic arousal as well as decreased post-CBT improvement. In sum, the present results provide converging evidence for an involvement of HCRTR1 gene variation in the etiology of PD/AG and PD/AG-related traits as well as treatment response to CBT, supporting future therapeutic approaches targeting the orexin-related arousal system.
Depending on the point of view, conceptions of greed range from a desirable and inevitable feature of a well-regulated, well-balanced economy to the root of all evil - radix omnium malorum avaritia (Tim 6.10). Regarding the latter, it has been proposed that greedy individuals strive to obtain desired goods at all costs. Here, we show that trait greed predicts selfish economic decisions that come at the expense of others in a resource dilemma. This effect was amplified when individuals strived to obtain real money, as compared to points, and when their revenue came at the expense of another person, as compared to a computer. On the neural level, we show that individuals high in trait greed, compared to those low in trait greed, showed a characteristic signature in the EEG, a reduced P3 effect to positive compared to negative feedback, indicating that they may lack the sensitivity to adjust behavior according to positive and negative stimuli from the environment. Brain-behavior relations further confirmed this lack of sensitivity in behavior adjustment as a potential underlying neuro-cognitive mechanism that explains selfish and reckless behavior at the expense of others.
Sensory processing and attention allocation are shaped by threat, but the role of trait-anxiety in sensory processing as a function of threat predictability remains incompletely understood. Therefore, we measured steady-state visual evoked potentials (ssVEPs) as an index of sensory processing of predictable and unpredictable threat cues in 29 low (LA) and 29 high (HA) trait-anxious participants during a modified NPU-paradigm followed by an extinction phase. Three different contextual cues indicated safety (N), predictable (P) or unpredictable threat (U), while foreground cues signalled shocks in the P-condition only. All participants allocated increased attentional resources to the central P-threat cue, replicating previous findings. Importantly, LA individuals exhibited larger ssVEP amplitudes to contextual threat (U and P) than to contextual safety cues, while HA individuals did not differentiate among contextual cues in general. Further, HA exhibited higher aversive ratings of all contexts compared to LA. These results suggest that high trait-anxious individuals might be worse at discriminating contextual threat stimuli and accordingly overestimate the probability and aversiveness of unpredictable threat. These findings support the notion of aberrant sensory processing of unpredictable threat in anxiety disorders, as this processing pattern is already evident in individuals at risk of these disorders.
Positive effects of shared reading for children’s language development are boosted by including instruction of word meanings and by increasing interactivity. The effects of engaging children as storytellers on vocabulary development have been less well studied. We developed an approach termed Interactive Elaborative Storytelling (IES), which employs both word-learning techniques and children’s storytelling in a shared-reading setting. To systematically investigate potential benefits of children as storytellers, we contrasted this approach to two experimental groups, an Elaborative Storytelling group employing word-learning techniques but no storytelling by children and a Read-Aloud group, excluding any additional techniques. The study was a 3 × 2 pre-posttest randomized design with 126 preschoolers spanning 1 week. Measured outcomes were receptive and expressive target vocabulary, story memory, and children’s behavior during story sessions. All three experimental groups made comparable gains on target words from pre- to posttest and there was no difference between groups in story memory. However, in the Elaborative Storytelling group, children were the least restless. Findings are discussed in terms of their contribution to optimizing shared reading as a method of fostering language.
Examining the testing effect in university teaching: retrievability and question format matter
(2018)
Review of learned material is crucial for the learning process. One approach that promises to increase the effectiveness of reviewing during learning is to answer questions about the learning content rather than restudying the material (testing effect). This effect is well established in lab experiments. However, existing research in educational contexts has often combined testing with additional didactic measures, which hampers the interpretation of testing effects. We aimed to examine the testing effect in its pure form by implementing a minimal intervention design in a university lecture (N = 92). The last 10 min of each lecture session were used for reviewing the lecture content by either answering short-answer questions, answering multiple-choice questions, or reading summarizing statements about core lecture content. Three unannounced criterial tests measured the retention of learning content at different times (1, 12, and 23 weeks after the last lecture). A positive testing effect emerged for short-answer questions that targeted information that participants could retrieve from memory. This effect was independent of the time of test. The results indicated no testing effect for multiple-choice testing. These results suggest that short-answer testing, but not multiple-choice testing, may benefit learning in higher education contexts.
In three studies, we investigated whether and how different modes of presentation - written, auditory, and audiovisual (auditory combined with pictures) - affect comprehension of semantically identical materials. Children from the age of 7 and adults were included in the studies. A large number of studies have shown that pictures can facilitate text comprehension (e.g. Carney & Levin, 2002).
Unlike the majority of these previous studies, we assessed text comprehension with methods that we assume allow more differentiated insights into the cognitive processes that - according to current theories - underlie text comprehension. Text comprehension involves at least three levels of mental representation (see Kintsch, 1998). Moreover, text comprehension means constructing a locally and globally coherent mental representation of the text content.
Using a sentence recognition task (see Schmalhofer & Glavanov, 1986), we examined whether memory of the text surface, the text base, and the situation model differs between written, auditory, and audiovisual text presentation in a sample of 103 8- and 10-year-olds and adults (Study I), and between auditory and audiovisual text presentation in a sample of 106 7-, 9-, and 11-year-olds (Study II). Furthermore, with 155 9- and 11-year-olds, we examined whether the ability to draw inferences to establish local and global coherence differs between written, auditory, and audiovisual text presentation. These inferences were indicated by reaction times to words associated with a protagonist's superordinate (global) or subordinate (local) goal.
Overall, the results of these three studies taken together indicate that children up to age 11 have better memory not only of the text surface but also of the situation model when pictures are added to an auditory text. This effect became apparent in comparison with both auditory and written texts. For the adults, in contrast, we did not find an effect of presentation mode. Furthermore, both 9- and 11-year-olds were better at establishing global coherence with audiovisual compared to auditory text presentation. Written presentation turned out to be superior to auditory presentation in terms of both local and global coherence.
The rise of automated driving will fundamentally change our mobility in the near future. This thesis specifically considers the stage of so-called highly automated driving (Level 3, SAE International, 2014). At this level, a system carries out vehicle guidance in specific application areas, e.g. on highway roads. The driver can temporarily suspend monitoring of the driving task and might use the time to engage in so-called non-driving related tasks (NDR-tasks). However, the driver is still required to resume vehicle control when prompted by the system. This new role of the driver has to be critically examined from a human factors perspective.
The main aim of this thesis was to systematically investigate the impact of different NDR-tasks on driver behavior and take-over performance. Wickens’ (2008) architecture of multiple resource theory was chosen as the theoretical framework, with the building blocks of multiplicity (task interference due to resource overlap), mental workload (task demands), and aspects of executive control or self-regulation. Specific adaptations and extensions of the theory were discussed to account for the context of NDR-task interactions in highly automated driving.
Overall, four driving simulator studies were carried out to investigate the role of these theoretical components. Study 1 showed that drivers concentrated NDR-task engagement in sections of highly automated rather than manual driving. In addition, drivers avoided task engagement prior to predictable take-over situations. These results indicate that self-regulatory behavior, as reported for manual driving, also takes place in the context of highly automated driving. Study 2 specifically addressed the impact of NDR-tasks’ stimulus and response modalities on take-over performance. Results showed that visual-manual tasks with high motoric load (including the need to put away a handheld object) had particularly detrimental effects. However, drivers seemed to be aware of task-specific distraction in take-over situations and canceled visual-manual tasks more strictly than a less impairing auditory-vocal task. Study 3 revealed that the mental demand of NDR-tasks should also be considered for drivers’ take-over performance. Finally, different human-machine interfaces were developed and evaluated in driving simulator Study 4. Concepts including an explicit pre-alert (“notification”) clearly supported drivers’ self-regulation and achieved high usability and acceptance ratings.
Overall, this thesis indicates that the architecture of multiple resource theory provides a useful framework for research in this field. Practical implications arise regarding the potential legal regulation of NDR-tasks as well as the design of elaborated human-machine interfaces.
The abilities to comprehend and critically evaluate scientific texts and the various arguments stated in these texts are important aspects of scientific literacy, but these competences are usually not formally taught to students. Previous research indicates that, although undergraduate students evaluate the claims and evidence they find in scientific
documents to some extent, these evaluations usually fail to meet normative standards. In addition, students’ use of source information for evaluation is often insufficient. The rise of the internet and the increased accessibility of information have yielded additional challenges that highlight the importance of adequate training and instruction. The aim of the present work was to further examine introductory students’ competences to systematically and heuristically evaluate scientific information, to identify relevant strategies that are involved in successful evaluation, and to use this knowledge to design appropriate interventions for fostering epistemic competences in university students. To this end, a number of computer-based studies, including both quantitative and qualitative data as well as experimental designs, were developed. The first two studies were designed to specify educational needs and to reveal helpful processing strategies that are required in different tasks and situations. Two expert-novice comparisons were conducted, in which the performance of German students of psychology (novices) was compared to the performance of scientists from the domain of psychology (experts) on a number of different tasks, such as systematic plausibility evaluations of informal arguments (Study 1) or heuristic evaluations of the credibility of multiple scientific documents (Study 2). A think-aloud procedure was used
to identify specific strategies that were applied in both groups during task completion and that possibly mediated performance differences between students and scientists. In addition, relationships between different strategies, and between strategy use and relevant conceptual knowledge, were examined. Based on the results of the expert-novice comparisons, an intervention study consisting of two training experiments was constructed to foster some
competences that proved to be particularly deficient in the comparisons (Study 3). Study 1 examined introductory students’ abilities to accurately judge the plausibility of informal arguments according to normative standards, to recognise common argumentation fallacies, and to identify different structural components of arguments. The results from Study 1 indicate that many students, compared to scientists, lack relevant knowledge about the structure of arguments, and that normatively accurate evaluations of plausibility seem to be challenging for this group. Common argumentation fallacies were often not identified correctly. Importantly, these deficits were partly mediated by differences in strategy use: it was especially difficult for students to pay sufficient attention to the relationship between argument components when forming their judgements. Moreover, they frequently relied on their intuition or opinion as a criterion for evaluation, whereas scientists predominantly determined the quality of arguments based on their internal consistency.
In addition to students’ evaluation of the plausibility of informal arguments, Study 2 examined introductory students’ competences to evaluate the credibility of multiple scientific texts and to use source characteristics for evaluation. The results show that students struggled not only to judge the plausibility of arguments correctly, but also to heuristically judge the credibility of science texts, and these deficits were fully mediated by their insufficient use of source information. In contrast, scientists were able to apply different strategies in a flexible manner. When the conditions for evaluation did not allow systematic processing (i.e. under a time limit), they primarily used source characteristics for their evaluations. However, when
systematic evaluations were possible (i.e. no time limit), they used more sophisticated normative criteria for their evaluations, such as paying attention to the internal consistency of arguments (cf. Study 1). Results also showed that students, in contrast to experts, lacked relevant knowledge about different publication types, and this was related to their ability to correctly determine document credibility. The results from the expert-novice comparisons also suggest that the competences assessed in both tasks might develop as a result of a more fundamental form of scientific literacy and discipline expertise: performances in all tasks were positively related. On the basis of these results, two training experiments were developed that aimed at fostering university students’ competences to understand and evaluate informal arguments (Study 3). Experiment 1 describes an intervention approach in which students were familiarised with the formal structure of arguments based on Toulmin’s (1958) argumentation model. The ability of the experimental group to identify the structural components of this model was compared to that of a control group in which speed-reading skills were practiced, using a pre-post-follow-up design. Results show that, compared to the control group, the training was successful in improving the comprehension of more complex arguments and of relational aspects between key components in the posttest. Moreover, an interaction effect with study performance was found: high-achieving students with above-average grades profited the most from the training intervention. Experiment 2 showed that
training in plausibility, normative criteria of argument evaluation, and argumentation fallacies improved students’ abilities to evaluate the plausibility of arguments and, in addition, their competences to recognise structural components of arguments, compared to a speed-reading control group. These results have important implications for education and practice, which are discussed in detail in this dissertation.
Regulatory focus (RF) theory (Higgins, 1997) states that individuals follow different strategic concerns when focusing on gains (promotion) rather than losses (prevention). Applying the Reflective-Impulsive Model (RIM, Strack & Deutsch, 2004), this dissertation investigates RF’s influence on basic information processing, specifically semantic processing (Study 1), semantic (Study 2) and affective (Study 3) associative priming, and basic reflective operations (Studies 4-7). Study 1 showed no effect of RF on the pre-activation of RF-related semantic concepts in a lexical decision task (LDT). Study 2 indicated that primes fitting a promotion focus improve performance in an LDT for chronically promotion-focused individuals, but not for chronically prevention-focused individuals; the latter, however, performed better when targets fit their focus. Stronger affect and arousal after processing valent words fitting an RF may explain this pattern. Study 3 showed some evidence for stronger priming effects of negative primes in a bona-fide pipeline task (Fazio et al., 1995) for chronically prevention-focused participants, while also providing evidence that a situational prevention focus insulates individuals from misattributing the valence of simple primes. Studies 4-7 showed that a strong chronic prevention focus leads to greater negation effects for valent primes in an Affect Misattribution Procedure (Payne et al., 2005), especially when it fits the situation. Furthermore, Study 6 showed that these effects result from a stronger weighting of negated valence rather than greater ease of negation. Study 7 showed that the increased negation effect is independent of time pressure. Broad implications are discussed, including how RF effects on basic processing may explain higher-order RF effects.
Program-based approaches and their use in preschool, school, and out-of-school educational contexts enjoy increasing popularity. Broad and unabated interest in research and practice is directed in particular at preschool training concepts, which are credited with the potential to effectively prevent later difficulties in the acquisition of written language.
The Würzburg training program »Hören, lauschen, lernen« is a training approach that is conceptually grounded in theories of written language acquisition and has been tested in several evaluation studies. Its purpose is to facilitate children’s acquisition of reading and writing. The claim of effectively preventing later reading and spelling difficulties rests on the preschool promotion of domain-specific competences of written language acquisition, in particular phonological awareness. This promotion is exploited optimally when recommendations for high-quality implementation are followed, specified as fidelity to the manual, intensity of delivery, program differentiation, program complexity, implementation strategies, quality of instruction, and participant responsiveness.
In training research, criteria of practical suitability are increasingly discussed alongside the theoretical grounding of program approaches and the empirical evidence they must provide. This thesis therefore addresses the question of program robustness against trainer effects. A total of 300 children took part in the Würzburg training program and were compared with 64 children who followed the regular kindergarten program. Guided by the kindergarten staff, the five-month training took place within the preschool year. The children’s development in the domain-specific competences of phonological awareness and grapheme-phoneme correspondence was assessed before and after the training and at the transition to school, and their spelling and reading competences were assessed at the end of the first school year. Immediate and long-term effects of the training program were demonstrated; however, no transfer effect emerged.
The exploration of trainer effects was based on an examination of the program’s practical suitability in terms of its implementation by the supervising kindergarten staff. From the original data base of 300 children from 44 participating kindergartens, three subgroups comprising a total of 174 children from 17 kindergartens were identified in which marked discrepancies from the immediate, long-term, and transfer effects of the training program occurred. Differences in delivery were explored in order to draw conclusions about qualitative aspects of program implementation. The findings of this extreme-group comparison suggested that fidelity to the manual and intensity of delivery were less decisive for program effectiveness; rather, what seemed decisive was the way in which the training contents were conveyed to the children by the kindergarten staff. Findings on participant responsiveness, which point to differential training effects, highlighted the program’s effectiveness especially for children considered at risk of developing later difficulties with written language. Furthermore, it emerged that, in addition to the quality of program implementation, differences in the school’s instructional method for reading and writing apparently also exerted a leveling influence on the program’s transfer success. Theoretical and practical implications for the use of the training program are discussed.
The Concealed Information Test (CIT) is a well-validated means to detect whether someone possesses certain (e.g., crime-relevant) information. The current study investigated whether alcohol intoxication during CIT administration influences reaction-time (RT) CIT-effects. Two opposing predictions can be made. First, by decreasing attention to critical information, alcohol intoxication could diminish CIT-effects. Second, by hampering the inhibition of truthful responses, alcohol intoxication could increase CIT-effects. A correlational field design was employed. Participants (n = 42) were recruited and tested at a bar, where alcohol consumption was voluntary and incidental. Participants completed a CIT in which they were instructed to hide knowledge of their true identity. Blood alcohol concentration (BAC) was estimated from breath alcohol measurements. Results revealed that higher BAC levels were correlated with larger CIT-effects. Our results demonstrate that robust CIT-effects can be obtained even when testing conditions differ from typical laboratory settings, and they strengthen the idea that response inhibition contributes to the RT-CIT effect.
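The analysis described in this abstract can be sketched in a few lines: the RT CIT-effect is commonly quantified as the mean reaction-time difference between probe items (e.g., the participant's true name) and irrelevant items, which can then be correlated with estimated BAC across participants. Below is a minimal, stdlib-only sketch; all function names and numbers are invented for illustration and are not taken from the study.

```python
from statistics import mean

def cit_effect(probe_rts, irrelevant_rts):
    """Mean RT to probe items minus mean RT to irrelevant items (ms)."""
    return mean(probe_rts) - mean(irrelevant_rts)

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient, stdlib only."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

# Invented per-participant data: estimated BAC and RTs (ms) for
# probe vs. irrelevant items in the identity CIT.
bac = [0.02, 0.05, 0.08, 0.11]
effects = [
    cit_effect([520, 540, 530], [480, 470, 490]),
    cit_effect([560, 555, 565], [480, 475, 485]),
    cit_effect([600, 610, 590], [490, 485, 495]),
    cit_effect([650, 640, 660], [495, 500, 490]),
]
r = pearson_r(bac, effects)
print(effects, round(r, 3))
```

In this toy data, larger CIT-effects accompany higher BAC, mirroring the direction of the reported correlation; the real analysis of course rests on the study's measured data.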
Pictorial stimuli can vary on many dimensions, several aspects of which are captured by the term ‘visual complexity.’ Visual complexity can be described by example: a picture of a few objects, colors, or structures would be less complex than a very colorful picture of many objects that is composed of several components. Prior studies have reported a relationship between affect and visual complexity, where complex pictures are rated as more pleasant and arousing. However, a relationship in the opposite direction, an effect of affect on perceived visual complexity, is also possible; emotional arousal and valence are known to influence selective attention and visual processing. In a series of experiments, we found that ratings of visual complexity correlated with affective ratings, and independently also with computational measures of visual complexity. These computational measures did not correlate with affect, suggesting that complexity ratings are separately related to two distinct factors. We investigated the relationship between affect and ratings of visual complexity and found this ‘arousal-complexity bias’ to be a robust phenomenon. Moreover, the bias could be attenuated when it was explicitly pointed out, but it did not correlate with inter-individual difference measures of affective processing and was largely unrelated to cognitive and eyetracking measures. Taken together, the arousal-complexity bias seems to be caused by a relationship between arousal and visual processing, as has been described for the greater vividness of arousing pictures. The arousal-complexity bias is also of relevance from an experimental perspective, because visual complexity is often considered a variable to control for when using pictorial stimuli.
In today’s world of work, networking behaviors are an important and viable strategy to enhance success in work and career domains. Concerning personality as an antecedent of networking behaviors, prior studies have exclusively relied on trait perspectives that focus on how people feel, think, and act. Adopting a motivational perspective on personality, we enlarge this focus and argue that, beyond traits predominantly tapping social content, motives shed further light on instrumental aspects of networking, or why people network. We use McClelland’s implicit motives framework of need for power (nPow), need for achievement (nAch), and need for affiliation (nAff) to examine instrumental determinants of networking. Using a facet-theoretical approach to networking behaviors, we predict differential relations of these three motives with facets of (1) internal vs. external networking and (2) building, maintaining, and using contacts. We conducted an online study with temporally separated measures (N = 539 employed individuals) to examine our hypotheses. Using multivariate latent regression, we show that nAch is related to networking in general. In line with theoretical differences between networking facets, we find that nAff is positively related to building contacts, whereas nPow is positively related to using internal contacts. In sum, this study shows that networking is not only driven by social factors (i.e., nAff); instead, the achievement motive is the most important driver of networking behaviors.
Although questionable research practices (QRPs) and p-hacking have received attention in recent years, little research has focused on their prevalence and acceptance among students. Students are the researchers of tomorrow and will come to represent the field. Therefore, they should not learn to use and accept QRPs, which would reduce their ability to produce and evaluate meaningful research. A total of 207 psychology students and recent graduates provided self-report data on the prevalence and predictors of QRPs. Attitudes towards QRPs, the belief that significant results constitute better science or lead to better grades, motivation, and stress levels served as predictors. Furthermore, we assessed perceived supervisor attitudes towards QRPs as an important predictive factor. The results were in line with estimates of QRP prevalence from academia. The best predictor of QRP use was students’ own QRP attitudes. Perceived supervisor attitudes exerted both a direct effect and an indirect effect via student attitudes. Motivation to write a good thesis was a protective factor, whereas stress had no effect. Students in this sample did not subscribe to beliefs that significant results were better for science or their grades, and such beliefs did not impact QRP attitudes or use. Finally, students engaged in more QRPs pertaining to reporting and analysis than to study design. We conclude that supervisors have an important function in shaping students’ attitudes towards QRPs and can improve their research practices by motivating them well. Furthermore, this research provides some impetus towards identifying predictors of QRP use in academia.
Previous research has shown that low-level visual features (i.e., low-level visual saliency) as well as socially relevant information predict gaze allocation under free-viewing conditions. However, these studies mainly used static and highly controlled stimulus material, thus revealing little about the robustness of attentional processes across diverging situations. Moreover, the influence of affective stimulus characteristics on visual exploration patterns remains poorly understood. Participants in the present study freely viewed a set of naturalistic, contextually rich video clips from a variety of settings that were capable of eliciting different moods. Using recordings of eye movements, we quantified to what degree social information, emotional valence, and low-level visual features influenced gaze allocation, using generalized linear mixed models. We found substantial and similarly large regression weights for low-level saliency and social information, affirming the importance of both predictor classes under ecologically more valid dynamic stimulation conditions. Differences in predictor strength between individuals were large and highly stable across videos. Additionally, low-level saliency was less important for fixation selection in videos containing persons than in videos without persons, and less important in videos perceived as negative. We discuss the generalizability of these findings and the feasibility of applying this research paradigm to patient groups.
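The modeling idea behind this kind of analysis can be illustrated in a simplified form. The study fit generalized linear mixed models; the stdlib-only sketch below drops the per-participant random effects and fits a plain logistic regression by batch gradient ascent on invented data, modeling whether a location was fixated (1) or not (0) from a saliency score and a social-information score. Variable names and all numbers are assumptions for illustration, not the study's data or code.

```python
import math
import random

random.seed(1)

def simulate(n=2000, b_sal=1.0, b_soc=1.0):
    """Invented data: both predictors raise fixation probability."""
    data = []
    for _ in range(n):
        sal, soc = random.random(), random.random()
        p = 1 / (1 + math.exp(-(-1.0 + b_sal * sal + b_soc * soc)))
        data.append((sal, soc, 1 if random.random() < p else 0))
    return data

def fit_logistic(data, lr=2.0, epochs=400):
    """Batch gradient ascent on the Bernoulli log-likelihood."""
    w0 = w_sal = w_soc = 0.0
    n = len(data)
    for _ in range(epochs):
        g0 = g_sal = g_soc = 0.0
        for sal, soc, y in data:
            p = 1 / (1 + math.exp(-(w0 + w_sal * sal + w_soc * soc)))
            err = y - p  # gradient of the log-likelihood w.r.t. the logit
            g0 += err
            g_sal += err * sal
            g_soc += err * soc
        w0 += lr * g0 / n
        w_sal += lr * g_sal / n
        w_soc += lr * g_soc / n
    return w0, w_sal, w_soc

w0, w_sal, w_soc = fit_logistic(simulate())
print(round(w0, 2), round(w_sal, 2), round(w_soc, 2))
```

Comparing the fitted weights for the two predictors mirrors the abstract's comparison of regression weights for saliency and social information; the actual GLMMs additionally estimate how strongly these weights vary between participants.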
Epigenetic mechanisms have been proposed to mediate fear extinction in animal models. Here, MAOA methylation was analyzed via direct sequencing of sodium bisulfite-treated DNA extracted from blood cells before and after a 2-week exposure therapy in a sample of n = 28 female patients with acrophobia as well as in n = 28 matched healthy female controls. Clinical response was measured using the Acrophobia Questionnaire and the Attitude Towards Heights Questionnaire. The functional relevance of altered MAOA methylation was investigated by luciferase-based reporter gene assays. MAOA methylation was found to be significantly decreased in patients with acrophobia compared with healthy controls. Furthermore, MAOA methylation levels were shown to significantly increase after treatment and to correlate with treatment response as reflected by decreasing Acrophobia Questionnaire/Attitude Towards Heights Questionnaire scores. Functional analyses revealed decreased reporter gene activity in the presence of methylated compared with unmethylated pCpGfree_MAOA reporter gene vector constructs. This proof-of-concept psychotherapy-epigenetic study suggests, for the first time, functional MAOA methylation changes as a potential epigenetic correlate of treatment response in acrophobia and fosters further investigation into the notion of epigenetic mechanisms underlying fear extinction.
We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = 0.05 to p = 0.005, is deleterious for new discoveries and the progress of science. Given that blanket and variable alpha levels are both problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and sample size much more directly than significance testing does, but none of these statistical tools should be taken as a new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, and implications for applications. To boil all this down to a binary decision based on a p-value threshold of 0.05, 0.01, 0.005, or any other value is not acceptable.
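The argument for cumulative evidence over single-study verdicts can be made concrete with a standard fixed-effect (inverse-variance) meta-analysis, which pools effect estimates across studies instead of thresholding each study's p-value. The sketch below uses invented effect sizes and standard errors; it is an illustration of the general technique, not an analysis from the article.

```python
import math

def fixed_effect_meta(effects, ses):
    """Inverse-variance weighted pooled effect and its standard error."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Three invented studies: no single estimate exceeds ~1.96 standard
# errors, yet the pooled estimate is considerably more precise than
# any individual study.
effects = [0.30, 0.22, 0.28]
ses = [0.18, 0.16, 0.20]
pooled, pooled_se = fixed_effect_meta(effects, ses)
print(round(pooled, 3), round(pooled_se, 3))
```

Under a binary 0.05 rule each of these hypothetical studies would be dismissed as a "null result", while the combined evidence points clearly in one direction, which is exactly the kind of information loss the authors criticize.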
Acrophobia is characterized by intense fear in height situations. Virtual reality (VR) can be used to trigger such phobic fear, and VR exposure therapy (VRET) has proven effective for the treatment of phobias, although it remains important to further elucidate the factors that modulate and mediate the fear responses triggered in VR. The present study assessed verbal and behavioral fear responses triggered by a height simulation in a 5-sided cave automatic virtual environment (CAVE) with visual and acoustic simulation, and further investigated how fear responses are modulated by immersion, i.e., an additional wind simulation, and presence, i.e., the feeling of being present in the virtual environment (VE). Results revealed a high validity of the CAVE and the VE in provoking height-related self-reported fear and avoidance behavior in accordance with a trait measure of acrophobic fear. Increasing immersion significantly increased fear responses in high height-anxious (HHA) participants, but did not affect presence. Nevertheless, presence was found to be an important predictor of fear responses. We conclude that a CAVE system can be used to elicit valid fear responses, which might be further enhanced by immersion manipulations independent of presence. These results may help to improve VRET efficacy and its transfer to real situations.
Altruistic punishment is connected to trait anger, not trait altruism, if compensation is available
(2018)
Altruistic punishment and altruistic compensation are important concepts that are used to investigate altruism. However, altruistic punishment has been found to be correlated with anger. We were interested in whether altruistic punishment and altruistic compensation are both driven by trait altruism and trait anger, or whether the influence of these two traits is specific to one of the behavioral options. We found that if participants can apply altruistic compensation and altruistic punishment together in one paradigm, trait anger only predicts altruistic punishment and trait altruism only predicts altruistic compensation. Interestingly, these relations are disguised in classical altruistic punishment and altruistic compensation paradigms, where participants can either only punish or only compensate. Hence, altruistic punishment and altruistic compensation paradigms should be merged if one is interested in trait altruism without the confounding influence of trait anger.
The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance-congruity and imbalance-dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into the cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes change depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models of the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration, and thus contribute to reducing or increasing intergroup bias.
Brain-computer interfaces based on sensorimotor rhythm modulation (SMR-BCIs) allow people to issue commands to an interface by imagining right-hand, left-hand, or feet movements. The neurophysiological activation associated with these specific mental imageries can be measured by electroencephalography and detected by machine learning algorithms. Improvements in SMR-BCI accuracy over the last 30 years seem to have reached a limit. The main current issue with SMR-BCIs is that between 15% and 30% of users cannot operate the BCI, which is called the "BCI inefficiency" issue. As an alternative to hardware and software improvements, investigating the individual characteristics of BCI users has become an interesting approach to overcoming BCI inefficiency. In this dissertation, I reviewed the existing literature concerning individual sources of variation in SMR-BCI accuracy and identified generic individual characteristics. In the empirical investigation, attention and motor dexterity predictors of SMR-BCI performance were implemented in trainings intended to manipulate these predictors and thereby increase SMR-BCI accuracy. These predictors were identified by Hammer et al. (2012) as the ability to concentrate (associated with relaxation levels) and the "mean error duration" in a two-hand visuo-motor coordination (VMC) task. Prior to an SMR-BCI session, a total of n = 154 participants at two locations took part in 23-min sessions of either Jacobson’s Progressive Muscle Relaxation (PMR), a VMC session, or a control group (CG). No effect of the PMR or VMC manipulation was found, but the manipulation checks did not consistently confirm whether PMR had an effect on relaxation levels and VMC on "mean error duration". In this first study, correlations between relaxation levels or "mean error duration" and accuracy were found, but not at both locations. A second study, involving n = 39 participants, intensified the training to four sessions on four consecutive days of either PMR, VMC, or CG.
The effect of the manipulation was assessed in terms of a causal relationship by using a pre-post study design. The manipulation checks of this second study validated the positive effect of training on both relaxation and "mean error duration", but the manipulation did not yield a specific effect on BCI accuracy. The predictors were not replicated, displaying the instability of relaxation levels and "mean error duration" as correlates of BCI performance. An effect of time on BCI accuracy was found, and a correlation between the State Mindfulness Scale and accuracy was observed. The results indicated that a short training of PMR or VMC was insufficient to increase SMR-BCI accuracy. This study contrasts with studies that succeeded in increasing SMR-BCI accuracy (Tan et al., 2009, 2014) in the shortness of its training and in a relaxation training that did not include mindfulness. It also stands out for its manipulation checks and its comprehensive experimental approach, which attempted to replicate existing predictors of and correlates with SMR-BCI accuracy. The prediction of BCI accuracy from individual characteristics is receiving increased attention, but requires replication studies and a comprehensive approach in order to contribute to the growing base of evidence on predictors of SMR-BCI accuracy. While short PMR and VMC trainings could not yield an effect on BCI performance, mindfulness meditation training might be beneficial for SMR-BCI accuracy. Moreover, it could be implemented for people with locked-in syndrome, making it possible to reach the end-users most in need of improvements in BCI performance.