Is part of the Bibliography
- yes (492)
Document Type
- Journal article (358)
- Doctoral Thesis (111)
- Book article / Book chapter (8)
- Conference Proceeding (4)
- Review (4)
- Book (2)
- Other (2)
- Preprint (2)
- Report (1)
Language
- English (492)
Keywords
- Psychologie (56)
- EEG (20)
- virtual reality (19)
- attention (17)
- P300 (14)
- anxiety (13)
- psychology (13)
- Kognition (12)
- emotion (12)
- event-related potentials (11)
Institute
- Institut für Psychologie (492)
Other participating institutions
- Adam Opel AG (1)
- BMBF (1)
- Blindeninstitut, Ohmstr. 7, 97076, Wuerzburg, Germany (1)
- Deutsches Zentrum für Präventionsforschung Psychische Gesundheit (DZPP) (1)
- Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society (ESI) (1)
- Evangelisches Studienwerk e.V. (1)
- Forschungsverbund ForChange des Bayrischen Kultusministeriums (1)
- Klinik für Psychiatrie und Psychotherapie, Universität Würzburg (1)
- Opel Automobile GmbH (1)
- Technische Universität Dresden (1)
Experiments in animal models have shown that running increases neuronal activity in early visual areas in light as well as in darkness. This suggests that visual processing is influenced by locomotion independently of visual input. Combining mobile electroencephalography, motion tracking, and eye tracking, we investigated the influence of overground free walking on cortical alpha activity (~10 Hz) and eye movements in healthy humans. Alpha activity has been considered a valuable marker of inhibition of sensory processing and has been shown to correlate negatively with neuronal firing rates. We found that walking led to a decrease in alpha activity over occipital cortex compared to standing. This decrease was present during walking in darkness as well as in light. Importantly, eye movements could not explain the change in alpha activity. Nevertheless, we found that walking and eye-related movements were linked. While the blink rate increased with increasing walking speed independent of light or darkness, saccade rate was only significantly linked to walking speed in the light. Pupil size, on the other hand, was larger during darkness than during light, but only showed a modulation by walking in darkness. Analyzing the effect of walking with respect to the stride cycle, we further found that blinks and saccades preferentially occurred during the double support phase of walking. Alpha power, as shown previously, was lower during the swing phase than during the double support phase. We could, however, exclude the possibility that the alpha modulation was introduced by a change in electrode impedance induced by the walking movement. Overall, our work indicates that the human visual system is influenced by the current locomotion state of the body. This influence affects eye movement patterns as well as neuronal activity in sensory areas and might form part of an implicit strategy to optimally extract sensory information during locomotion.
According to the Selective Accessibility Model of anchoring, the comparison question in the standard anchoring paradigm activates information that is congruent with an anchor. As a consequence, this information is more likely to become the basis for the absolute judgment, which will therefore be assimilated toward the anchor. However, if the activated information overlaps with information that is elicited by the absolute judgment itself, the preceding comparative judgment should not exert an incremental effect and should fail to produce an anchoring effect. The present studies find this result when the comparative judgment refers to a general category and the absolute judgment refers to a subset of the general category that was activated by the anchor value. For example, participants comparing the average annual temperature in New York City to a high anchor of 102 °F judged the average winter temperature, but not the average summer temperature, to be higher than did participants making no comparison. Conversely, participants comparing the annual temperature to a low anchor of –4 °F judged the average summer temperature, but not the average winter temperature, to be lower than control participants. This pattern of results was also shown in another content domain. It is consistent with the Selective Accessibility Model but difficult to reconcile with other main explanations of the anchoring effect.
We investigated the influence of social status on behavior in a modified dictator game (DG). Since the DG contains an inherent dominance gradient, we examined the relationship between dictator decisions and recipient status, which was operationalized by three social identities and an artificial intelligence (AI). Additionally, we examined the predictive value of social dominance orientation (SDO) for the behavior of dictators toward the different social and non-social hierarchical recipients. A multilevel model analysis showed that recipients with the same status as the dictator benefited the most and the AI the least. Furthermore, SDO, regardless of social status, predicted behavior toward recipients such that higher dominance was associated with lower dictator offers. In summary, participants treated other persons of higher and lower status equally, treated those of equal status better, and treated the algorithm worst of all. The large proportion of female participants and the limited variance of SDO should be taken into account when interpreting the results on individual differences in SDO.
Acceptance-based regulation of pain, which focuses on allowing pain and pain-related thoughts and emotions, has been found to modulate pain. However, results so far are inconsistent across pain modalities and indices. Moreover, previous studies often lack a suitable control condition, focus on behavioral pain measures rather than physiological correlates, and often use between-subject designs, all of which potentially impede the evaluation of the effectiveness of these strategies. Therefore, we investigated whether acceptance-based strategies can reduce subjective and physiological markers of acute pain in comparison to a control condition in a within-subject design. To this end, participants (N = 30) completed 24 trials comprising 10 s of heat pain stimulation. Each trial started with a cue instructing participants to welcome and experience the pain (acceptance trials) or to react to the pain as it is without employing any regulation strategies (control trials). In addition to pain intensity and unpleasantness ratings, heart rate (HR) and skin conductance (SC) were recorded. Results showed significantly decreased pain intensity and unpleasantness ratings for acceptance compared to control trials. Additionally, HR was significantly lower during acceptance compared to control trials, whereas SC revealed no significant differences. These results demonstrate the effectiveness of acceptance-based strategies in reducing subjective and physiological pain responses relative to a control condition, even after short training. Therefore, a systematic investigation of acceptance across pain modalities in healthy individuals and chronic pain patients is warranted.
Tactile stimulation is less frequently used than visual stimulation for brain-computer interface (BCI) control, partly because of limitations in speed and accuracy. Non-visual BCI paradigms, however, may be required for patients who struggle with vision-dependent BCIs because of a loss of gaze control. With the present study, we attempted to replicate earlier results by Herweg et al. (2016), with several minor adjustments and a focus on training effects and usability. We invited 16 healthy participants and trained them with a 4-class tactile P300-based BCI in five sessions. Their main task was to navigate a virtual wheelchair through a 3D apartment using the BCI. We found significant training effects on information transfer rate (ITR), which increased from a mean of 3.10 to 9.50 bits/min. Further, both online and offline accuracies significantly increased with training, from 65% to 86% and from 70% to 95%, respectively. We found only a descriptive increase of P300 amplitudes at Fz and Cz with training. Furthermore, we report subjective data from questionnaires, which indicated a relatively high workload and moderate to high satisfaction. Although our participants did not achieve the same high performance as in the Herweg et al. (2016) study, we provide evidence for training effects on performance with a tactile BCI and confirm the feasibility of the paradigm.
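The abstract reports ITR in bits per minute. A common way to compute ITR for a multi-class BCI, though not necessarily the exact method used in this study, is the Wolpaw formula, which converts accuracy and number of classes into bits per selection and scales by selection rate. A minimal sketch (function names are illustrative):

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float) -> float:
    """Bits per selection by the Wolpaw formula:
    B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    if not 0.0 <= accuracy <= 1.0:
        raise ValueError("accuracy must lie in [0, 1]")
    if accuracy <= 1.0 / n_classes:
        return 0.0  # at or below chance level, no information is transferred
    bits = math.log2(n_classes) + accuracy * math.log2(accuracy)
    if accuracy < 1.0:
        bits += (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_classes - 1))
    return bits

def itr_bits_per_min(n_classes: int, accuracy: float,
                     selections_per_min: float) -> float:
    """Scale bits per selection to bits per minute."""
    return wolpaw_itr(n_classes, accuracy) * selections_per_min

# Example: a 4-class paradigm at 86% accuracy yields about 1.19 bits
# per selection; bits/min then depends on the selection rate.
```

Under this formula, the reported rise from 3.10 to 9.50 bits/min can reflect gains in accuracy, selection speed, or both.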
Data analytics as a field is currently at a crucial point in its development, as commoditization takes place in the context of increasing amounts of data, greater user diversity, and automated analysis solutions, the latter potentially eliminating the need for expert analysts. A central hypothesis of the present paper is that data visualizations should be adapted to both the user and the context. This idea was initially addressed in Study 1, which demonstrated substantial interindividual variability among a group of experts when freely choosing an option to visualize data sets. To lay the theoretical groundwork for a systematic, taxonomic approach, a user model combining user traits, states, strategies, and actions was proposed and then evaluated empirically in Studies 2 and 3. The results implied that, for adapting to user traits, statistical expertise is a relevant dimension that should be considered. Additionally, for adapting to user states, different user intentions, such as monitoring and analysis, should be accounted for. These results were used to develop a taxonomy that adapts visualization recommendations to these (and other) factors. A preliminary attempt to validate the taxonomy in Study 4 tested its visualization recommendations with a group of experts. While the corresponding results were somewhat ambiguous overall, some aspects nevertheless supported the claim that a user-adaptive data visualization approach based on the principles outlined in the taxonomy can indeed be useful. While the present approach to user adaptivity is still in its infancy and should be extended (e.g., by testing more participants), the general approach appears to be very promising.
In the current study, electroencephalography (EEG) was recorded simultaneously with facial electromyography (fEMG) to determine whether emotional faces and emotional scenes are processed differently at the neural level. In addition, it was investigated whether these differences can be observed at the behavioural level via spontaneous facial muscle activity. The emotional content of the stimuli did not affect early P1 activity. Emotional faces elicited enhanced amplitudes of the face-sensitive N170 component, while its counterpart, the scene-related N100, was not sensitive to the emotional content of scenes. At 220-280 ms, the early posterior negativity (EPN) was enhanced only slightly for fearful as compared to neutral or happy faces. However, its amplitudes were significantly enhanced during processing of scenes with positive content, particularly over the right hemisphere. Scenes of positive content also elicited enhanced spontaneous zygomatic activity from 500-750 ms onwards, while happy faces elicited no such changes. In contrast, both fearful faces and negative scenes elicited enhanced spontaneous corrugator activity at 500-750 ms after stimulus onset. However, relative to baseline, EMG changes occurred earlier for faces (250 ms) than for scenes (500 ms), whereas for scenes the activity changes were more pronounced over the whole viewing period. Taking all effects into account, the data suggest that emotional facial expressions evoke faster attentional orienting, but weaker affective neural activity and weaker emotional behavioural responses, compared to emotional scenes.
Expectation and previous experience are both well established key mediators of placebo and nocebo effects. However, the investigation of their respective contributions to placebo and nocebo responses is rather difficult because most placebo and nocebo manipulations are contaminated by pre-existing treatment expectancies resulting from a learning history of previous medical interventions. To circumvent any resemblance to classical treatments, a purely psychological placebo-nocebo manipulation was established, namely the "visual stripe pattern induced modulation of pain." To this end, experience and expectation regarding the effects of different visual cues (stripe patterns) on pain were varied across three groups, with either only placebo instruction (expectation), placebo conditioning (experience), or both (expectation + experience) applied. Only the combined manipulation (expectation + experience) revealed significant behavioral and physiological placebo-nocebo effects on pain. Two subsequent experiments, which, in addition to placebo and nocebo cues, included a neutral control condition, further showed that nocebo responses in particular were more easily induced by this psychological placebo and nocebo manipulation. The results emphasize the strong influence of psychological processes on placebo and nocebo effects. In particular, nocebo effects should be addressed more thoroughly and carefully considered in clinical practice to prevent the accidental induction of side effects.
Cognitive control is what makes goal-directed actions possible. Whenever the environment or our impulses strongly suggest a response that is incompatible with our goals, conflict arises. Such conflicts are believed to cause negative affect. Aversive consequences of conflict may be registered in a conflict monitoring module, which subsequently initiates attentional changes and action tendencies to reduce negative affect. This association suggests that behavioral adaptation might be a reflection of emotion regulation. The theoretical cornerstone of current research on emotion regulation is the process model of emotion regulation, which postulates the regulation strategies of situation selection, situation modification, attentional deployment, cognitive change, and response modulation. Under the assumption that conflict adaptation and affect regulation share common mechanisms, I derived several predictions regarding cognitive control from the process model of emotion regulation and tested them in 11 experiments (N = 509). Participants engaged in situation selection towards conflict, but only when action-outcome contingencies were explicitly pointed out to them (Experiments 1 to 3). I found support for a mechanism resembling situation modification, but no evidence for a role of affect (Experiments 4 to 10). Changing the evaluation of conflict had no impact on the extent of conflict adaptation (Experiment 11). Overall, there was evidence for an explicit aversiveness of cognitive conflict, but less evidence for implicit aversiveness, suggesting that conflict may trigger affect regulation processes particularly when people explicitly have affect regulation goals in mind.
Traditionally, adversity was defined as the accumulation of environmental events (allostatic load). Recently, however, a mismatch between the early and the later (adult) environment (mismatch) has been hypothesized to be critical for disease development, a hypothesis that has not yet been tested explicitly in humans. We explored the impact of the timing of life adversity (childhood and past year) on anxiety and depression levels (N = 833) and brain morphology (N = 129). Both remote (childhood) and proximal (recent) adversities were differentially mirrored in morphometric changes in areas critically involved in emotional processing (i.e. amygdala/hippocampus and dorsal anterior cingulate cortex, respectively). The effect of adversity on affect acted in an additive way, with no evidence for interactions (mismatch). Structural equation modeling demonstrated a direct effect of adversity on morphometric estimates and anxiety/depression without evidence of brain morphology functioning as a mediator. Our results highlight that adversity manifests as pronounced changes in brain morphometry and affective temperament, even though these seem to represent distinct mechanistic pathways. A major goal of future studies should be to define critical time periods for the impact of adversity and strategies for intervening to prevent or reverse the effects of adverse childhood life experiences.
It has been argued that several reported non-visual influences on perception cannot be truly perceptual. If they were, they should affect the perception of target objects and of the reference objects used to express perceptual judgments, and thus cancel each other out. This reasoning presumes that non-visual manipulations impact target objects and comparison objects equally. In the present study we show that equalizing a body-related manipulation between target objects and reference objects essentially abolishes the impact of that manipulation, as it should if that manipulation actually altered perception. Moreover, the manipulation had an impact on judgments when applied only to the target object but not to the reference object, and that impact reversed when it was applied only to the reference object but not to the target object. A perceptual explanation predicts this reversal, whereas explanations in terms of post-perceptual response biases or demand effects do not. Altogether, these results suggest that body-related influences on perception cannot as a whole be attributed to extra-perceptual factors.
Examining the testing effect in university teaching: retrievability and question format matter
(2018)
Review of learned material is crucial for the learning process. One approach that promises to increase the effectiveness of reviewing during learning is to answer questions about the learning content rather than restudying the material (testing effect). This effect is well established in lab experiments. However, existing research in educational contexts has often combined testing with additional didactical measures, which hampers the interpretation of testing effects. We aimed to examine the testing effect in its pure form by implementing a minimal intervention design in a university lecture (N = 92). The last 10 min of each lecture session were used for reviewing the lecture content by either answering short-answer questions, answering multiple-choice questions, or reading summarizing statements about core lecture content. Three unannounced criterial tests measured the retention of learning content at different times (1, 12, and 23 weeks after the last lecture). A positive testing effect emerged for short-answer questions that targeted information that participants could retrieve from memory. This effect was independent of the time of test. The results indicated no testing effect for multiple-choice testing. These results suggest that short-answer testing, but not multiple-choice testing, may benefit learning in higher education contexts.
The psychology of eating
(2013)
Most research on human fear conditioning and its generalization has focused on adults, whereas little is known about these processes in children. Direct comparisons between child and adult populations are needed to determine developmental risk markers of fear and anxiety. We compared 267 children and 285 adults in a differential fear conditioning paradigm and generalization test. Skin conductance responses (SCR) and ratings of valence and arousal were obtained as indices of fear learning. Both groups displayed robust and similar differential conditioning on the subjective and physiological levels. However, children showed heightened fear generalization compared to adults, as indexed by higher arousal ratings and SCR to the generalization stimuli. Results indicate overgeneralization of conditioned fear as a developmental correlate of fear learning. The developmental change from a shallow to a steeper generalization gradient is likely related to the maturation of brain structures that modulate efficient discrimination between danger and (ambiguous) safety cues.
It is one of the primary goals of medical care to secure good quality of life (QoL) while prolonging survival. This is a major challenge in severe medical conditions with a poor prognosis, such as amyotrophic lateral sclerosis (ALS). Further, the definition of QoL, and the question of whether survival in this severe condition is compatible with a good QoL, is a matter of subjective and culture-specific debate. Some people without neurodegenerative conditions believe that physical decline is incompatible with a satisfactory QoL. Current data provide extensive evidence that psychosocial adaptation in ALS is possible, indicated by a satisfactory QoL. Thus, there is no fatalistic link between declining physical health and loss of QoL. There are intrinsic and extrinsic factors that have been shown to successfully facilitate and secure QoL in ALS, which are reviewed in this article along the lines of the four ethical principles (1) Beneficence, (2) Non-maleficence, (3) Autonomy, and (4) Justice, which are regarded as key elements of patient-centered medical care according to Beauchamp and Childress. This is a JPND-funded work summarizing findings of the project NEEDSinALS (www.NEEDSinALS.com), which highlights subjective perspectives and preferences in medical decision making in ALS.
Previous studies of social phobia have reported an increased vigilance to social threat cues but also an avoidance of socially relevant stimuli such as eye gaze. The primary aim of this study was to examine attentional mechanisms relevant for perceiving social cues by means of abnormalities in scanning of facial features in patients with social phobia. In two novel experimental paradigms, patients with social phobia and healthy controls matched on age, gender and education were compared regarding their gazing behavior towards facial cues. The first experiment was an emotion classification paradigm which allowed for differentiating reflexive attentional shifts from sustained attention towards diagnostically relevant facial features. In the second experiment, attentional orienting by gaze direction was assessed in a gaze-cueing paradigm in which non-predictive gaze cues shifted attention towards or away from subsequently presented targets. We found that patients as compared to controls reflexively oriented their attention more frequently towards the eyes of emotional faces in the emotion classification paradigm. This initial hypervigilance for the eye region was observed at very early attentional stages when faces were presented for 150 ms, and persisted when facial stimuli were shown for 3 s. Moreover, a delayed attentional orienting into the direction of eye gaze was observed in individuals with social phobia suggesting a differential time course of eye gaze processing in patients and controls. Our findings suggest that basic mechanisms of early attentional exploration of social cues are biased in social phobia and might contribute to the development and maintenance of the disorder.
Research on facial emotion expression has mostly focused on emotion recognition, assuming that a small number of discrete emotions is elicited and expressed via prototypical facial muscle configurations as captured in still photographs. These are expected to be recognized by observers, presumably via template matching. In contrast, appraisal theories of emotion propose a more dynamic approach, suggesting that specific elements of facial expressions are directly produced by the result of certain appraisals and predicting the facial patterns to be expected for certain appraisal configurations. This approach has recently been extended to emotion perception, claiming that observers first infer individual appraisals and only then make categorical emotion judgments based on the estimated appraisal patterns, using inference rules. Here, we report two related studies to empirically investigate the facial action unit configurations that are used by actors to convey specific emotions in short affect bursts and to examine to what extent observers can infer a person's emotions from the predicted facial expression configurations. The results show that (1) professional actors use many of the predicted facial action unit patterns to enact systematically specified appraisal outcomes in a realistic scenario setting, and (2) naïve observers infer the respective emotions based on highly similar facial movement configurations with a degree of accuracy comparable to earlier research findings. Based on estimates of underlying appraisal criteria for the different emotions we conclude that the patterns of facial action units identified in this research correspond largely to prior predictions and encourage further research on appraisal-driven expression and inference.
Acrophobia is characterized by intense fear in height situations. Virtual reality (VR) can be used to trigger such phobic fear, and VR exposure therapy (VRET) has proven effective for the treatment of phobias, although it remains important to further elucidate the factors that modulate and mediate the fear responses triggered in VR. The present study assessed verbal and behavioral fear responses triggered by a height simulation in a 5-sided cave automatic virtual environment (CAVE) with visual and acoustic simulation, and further investigated how fear responses are modulated by immersion, i.e., an additional wind simulation, and by presence, i.e., the feeling of being present in the virtual environment (VE). Results revealed a high validity of the CAVE and VE in provoking height-related self-reported fear and avoidance behavior in accordance with a trait measure of acrophobic fear. Increasing immersion significantly increased fear responses in high height anxious (HHA) participants but did not affect presence. Nevertheless, presence was found to be an important predictor of fear responses. We conclude that a CAVE system can be used to elicit valid fear responses, which might be further enhanced by immersion manipulations independent of presence. These results may help to improve VRET efficacy and its transfer to real situations.
Recent research suggests that the P3b may be closely related to the activation of the locus coeruleus-norepinephrine (LC-NE) system. To further study this potential association, we applied a novel technique, non-invasive transcutaneous vagus nerve stimulation (tVNS), which is thought to increase noradrenaline levels. Using a within-subject cross-over design, 20 healthy participants received continuous tVNS and sham stimulation on two consecutive days (stimulation counterbalanced across participants) while performing a visual oddball task. During stimulation, oval non-targets (standard), normal-head (easy) and rotated-head (difficult) targets, as well as novel stimuli (scenes), were presented. As an indirect marker of noradrenergic activation, we also collected salivary alpha-amylase (sAA) before and after stimulation. Results showed larger P3b amplitudes for target relative to standard stimuli, irrespective of stimulation condition. Exploratory post hoc analyses, however, revealed that, in comparison to standard stimuli, easy (but not difficult) targets produced larger P3b (but not P3a) amplitudes during active tVNS compared to sham stimulation. For sAA levels, although the main analyses did not show differential effects of stimulation, direct testing revealed that tVNS (but not sham stimulation) increased sAA levels after stimulation. Additionally, larger differences between tVNS and sham stimulation in P3b magnitudes for easy targets were associated with a larger increase in sAA levels after tVNS, but not after sham stimulation. Despite this preliminary evidence for a modulatory influence of tVNS on the P3b, which may be partly mediated by activation of the noradrenergic system, additional research in this field is clearly warranted. Future studies need to clarify whether tVNS also facilitates other processes, such as learning and memory, and whether tVNS can be used as a therapeutic tool.
Research with adults in laboratory settings has shown that distributed rereading is a beneficial learning strategy but its effects depend on time of test. When learning outcomes are measured immediately after rereading, distributed rereading yields no benefits or even detrimental effects on learning, but the beneficial effects emerge two days later. In a preregistered experiment, the effects of distributed rereading were investigated in a classroom setting with school students. Seventh-graders (N = 191) reread a text either immediately or after 1 week. Learning outcomes were measured after 4 min or 1 week. Participants in the distributed rereading condition reread the text more slowly, predicted their learning success to be lower, and reported a lower on-task focus. At the shorter retention interval, massed rereading outperformed distributed rereading in terms of learning outcomes. Contrary to students in the massed condition, students in the distributed condition showed no forgetting from the short to the long retention interval. As a result, they performed equally well as the students in the massed condition at the longer retention interval. Our results indicate that distributed rereading makes learning more demanding and difficult and leads to higher effort during rereading. Its effects on learning depend on time of test, but no beneficial effects were found, not even at the delayed test.
People who suffer from Social Anxiety Disorder (SAD) are under substantial personal distress and endure impaired functioning in at least some parts of everyday life. Beyond the personal suffering, there are also immense public health costs to consider, as SAD is the most common anxiety disorder and thereby one of the major psychiatric disorders in general. Over the last years, fundamental research has identified cognitive factors as essential components in the development and maintenance of social fears. Following leading cognitive models, avoidance behaviors are thought to be an important factor in maintaining the developed social anxieties. Therefore, this thesis aims to deepen the knowledge of avoidance behaviors exhibited in social anxiety, which allows for a better understanding of how SAD is maintained.
To reach this goal, three studies were conducted, each using a different research approach. In the first study, cutting-edge Virtual Reality (VR) equipment was used to immerse participants in a virtual environment. In this virtual setting, High Socially Anxious (HSA) individuals and matched controls had to execute a social Approach-Avoidance Task (AAT). In the task, participants had to pass a virtual person displaying neutral or angry facial expressions. By using a highly immersive VR apparatus, this first study took the initial step in establishing a new VR task for the implicit study of social approach-avoidance behaviors. By moving freely through a VR environment, participants experienced near real-life social situations. By tracking body and head movements, physical and attentional approach-avoidance processes were studied.
The second study looked at differences in attention shifts initiated by gaze cues of neutral or emotional faces. Comparing HSA individuals and controls enabled a closer look at attention re-allocation with a special focus on social stimuli. Further, context conditioning was used to compare task performance in a safe and in a threatening environment. In addition to behavioral performance, the study also investigated neural activity using electroencephalography (EEG), primarily looking at the N2pc component.
In the third study, the eye movements of HSA and Low Socially Anxious (LSA) individuals were analyzed using an eye-tracking apparatus while participants executed a computer task. The participants' tasks consisted of detecting either social or non-social stimuli in complex visual settings. The study intended to compare attention shifts towards social components between these two tasks and to examine how high levels of social anxiety influence them. In other words, the measurement of eye movements enabled investigating to what extent social attention is task-dependent and how it is influenced by social anxiety.
With the three described studies, three different approaches were used to gain an in-depth understanding of which avoidance behaviors occur in SAD and to what extent they are exhibited. Overall, the results showed that HSA individuals exhibited exaggerated physical and attentional avoidance behavior. Furthermore, the results highlighted that the task profoundly influences attention allocation. Finally, all evidence indicates that avoidance behaviors in SAD are exceedingly complex: they are not merely based on the fear of a particular stimulus, but rather involve composite cognitive processes that surpass the simple avoidance of threatening stimuli. To conclude, it is essential that further research be conducted with a special focus on SAD, its maintaining factors, and the influence of the chosen research task and method.
The Role of Attentional Control and Fear Acquisition and Generalization in Social Anxiety Disorder
(2020)
Although Social Anxiety Disorder (SAD) is one of the most prevalent mental disorders, still little is known about its development and maintenance. Cognitive models assume that deviations in attentional as well as associative learning processes play a role in the etiology of SAD. Amongst others, deficits in inhibitory attentional control as well as aberrations during fear generalization, both of which have already been observed in other anxiety disorders, are two candidate mechanisms that might contribute to the onset and maintenance of SAD. However, a review of the literature shows a lack of research on these topics. Thus, the aim of the present thesis was to examine in which ways individuals with SAD differ from healthy controls regarding attentional control and the generalization of acquired fear during the processing of social stimuli.
Study 1 tested whether impairment in the inhibitory control of attention is a feature of SAD, and how it might be influenced by the emotional expression and gaze direction of an interactional partner. For this purpose, individuals with SAD and healthy controls (HC) participated in an antisaccade task with faces displaying different emotional expressions (angry, neutral and happy) and gaze directions (direct and averted) serving as target stimuli. While the participants performed either pro- or antisaccades in response to the peripherally presented faces, their gaze behavior was recorded via eye-tracking, and ratings of valence and arousal were obtained. Results revealed that both groups showed prolonged latencies and increased error rates in trials with correct anti- compared to prosaccades. However, there were no differences between groups with regard to response latency or error rates, indicating that SAD patients did not exhibit impairment in inhibitory attentional control in comparison to HC on this task. Possible explanations for this finding could be that reduced inhibitory attentional control in SAD only occurs under certain circumstances, for example, when these individuals currently run the risk of being negatively evaluated by others rather than in the mere presence of phobic stimuli, or when the cognitive load of a task is so high that it cannot be offset by compensatory strategies, such as putting more effort into the task.
As not only deviations in attentional, but also in associative learning processes might be pathogenic markers of SAD, these mechanisms were addressed in the following experiments. Study 2 is the first to attempt to investigate the generalization of conditioned fear in patients with SAD. To this end, patients with SAD and HC were conditioned to two neutral female faces serving as conditioned stimuli (CS+: reinforced; CS-: non-reinforced) and a fearful face paired with a loud scream serving as unconditioned stimulus (US). Fear generalization was tested by presenting morphs of the two faces (GS: generalization stimuli), which varied in their similarity to the original faces. Throughout the experiment, self-report ratings, heart rate (HR) and skin conductance responses (SCR) were recorded. Results demonstrated that SAD patients rated all stimuli as less pleasant and more arousing, and overestimated the occurrence of the US, compared to HC, indicating a general hyperarousal in individuals with SAD. In addition, ratings and SCR indicated that both groups generalized their acquired fear from the CS+ to intermediate GSs as a function of their similarity to the CS+. However, except for the HR data, which indicated that only SAD patients but not HC displayed a generalization response in this measure, most of the results did not support the hypothesis that SAD is characterized by overgeneralization. A plausible reason for this finding could be that overgeneralization is a key characteristic of only some anxiety disorders, and SAD is not one of them. Still, other factors, such as comorbidities in the individuals with SAD, could also have influenced the results, which is why overgeneralization was further examined in study 3.
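The morph-based design can be sketched in a few lines of code. The following Python snippet generates a CS+/CS- similarity continuum and an idealized generalization gradient; all values (number of morph steps, gradient steepness) are hypothetical illustrations, not the study’s actual stimulus or response parameters.

```python
import numpy as np

def morph_continuum(n_steps=7):
    """Blend weights for morphing CS+ and CS- faces; 1.0 = pure CS+, 0.0 = pure CS-."""
    return np.linspace(1.0, 0.0, n_steps)

def generalization_gradient(weights, steepness=3.0):
    """Idealized fear response: maximal to the CS+ and declining
    exponentially with perceptual distance from it."""
    return np.exp(-steepness * (1.0 - weights))

weights = morph_continuum()
gradient = generalization_gradient(weights)
# The response peaks at the CS+ (weight 1.0) and falls off monotonically
# toward the CS-, mirroring the generalization gradients reported above.
```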
The aim of study 3 was to investigate fear generalization on a neuronal level. Hence, high (HSA) and low socially anxious participants (LSA) underwent a conditioning paradigm, which was an adaptation of the experimental design used in study 2 for EEG. During the experiment, steady-state visually evoked potentials (ssVEPs) and ratings of valence and arousal were recorded. Analyses revealed significant generalization gradients in all ratings, with the highest fear responses to the CS+ and a progressive decline of these reactions with increasing similarity to the CS-. In contrast, the generalization gradient on a neuronal level showed the highest amplitudes for the CS+ and a reduction in amplitude for the most proximal, but not distal, GSs in the ssVEP signal, which might be interpreted as lateral inhibition in the visual cortex. The observed dissociation between explicit and implicit measures points to different functions of behavioral and sensory cortical processes during fear generalization: While the ratings might reflect an individual’s consciously increased readiness to react to threat, the lateral inhibition pattern in the occipital cortex might serve to maximize the contrast between stimuli with and without affective value and thereby improve adaptive behavior. As no group differences could be observed, the finding of study 2 that overgeneralization does not seem to be a marker of SAD is further consolidated.
In sum, the conducted experiments suggest that individuals with SAD are characterized by a general hyperarousal during exposure to disorder-relevant stimuli, as indicated by enhanced arousal and reduced valence ratings of the stimuli compared to HC. However, the hypotheses that reduced inhibitory attentional control and overgeneralization of conditioned fear are markers of SAD were mostly not confirmed. Further research is required to elucidate whether they only occur under certain circumstances, such as high cognitive load (e.g., handling two tasks simultaneously) or social stress (e.g., before giving a speech), or whether they are not characteristics of SAD at all. With the help of these findings, new interventions for the treatment of SAD can be developed, such as attentional bias modification or discrimination learning.
Honest actions predominate human behavior. From time to time, this general preference must yield to dishonest actions, which require an effortful process of overcoming the initial honest response activation. This thesis presents three experimental series to elucidate this tug-of-war between honest and dishonest response tendencies in overtly committed lies, thereby joining recent efforts to move from a sheer phenomenological perspective on dishonest responding as being more difficult than honest responding to a precise description of the underlying cognitive processes. The consideration of cognitive theories, empirical evidence, and paradigms from different research fields – dishonesty, cognitive control and sensorimotor stage models of information processing – lays the groundwork for the research questions and methodological approach of this thesis.
The experiments pinpoint the underlying conflict of dishonest responding in the central, capacity-limited stage of information processing (Experiments 1 to 4), but they also demonstrate that cognitive control processes (Experiments 5 to 7) and the internalization of false alibis (Experiments 8 to 11) can reduce or even completely eliminate this conflict. The data reveal great flexibility at the cognitive basis of dishonest responding: On the one hand, dishonest responding appears to rely heavily on capacity-limited processes of response selection to overcome honest response tendencies, alongside up- and downstream consequences of response activation and monitoring. On the other hand, agents have powerful tools to mitigate these effortful processes through control adaptation and false alibis. These results support and expand current theorizing on the cognitive underpinnings of dishonest responding. Furthermore, they are alarming from an applied perspective on the detection of lies, especially when considering the flexibility of even basic cognitive processes in the face of false alibis. A promising way forward would be a fine-grained discrimination of response activation, passive decay and active inhibition of honest representations in dishonest responding, and an assessment of the adaptiveness of these processes.
The present dissertation aims to shed light on different mechanisms of socio-emotional feedback in social decision-making situations. The objective is to evaluate emotional facial expressions as feedback stimuli, i.e., responses of interaction partners to certain social decisions. In addition to human faces, artificial emojis are also examined due to their relevance for modern digital communication. Previous research on the influence of emotional feedback suggests that a person's behavior can be effectively reinforced by rewarding stimuli. In the context of this dissertation, the differences in the feedback processing of human photographs and emojis, but also the evaluation of socially expected versus socially unexpected feedback were examined in detail in four studies. In addition to behavioral data, we used the electroencephalogram (EEG) in all studies to investigate neural correlates of social decision-making and emotional feedback.
As the central paradigm, all studies were based on a modified ultimatum game. The game is structured as follows: there is a so-called proposer who holds a specific amount of money (e.g., 10 cents) and offers the responder a certain amount (e.g., 3 cents). The responder then decides whether to accept or reject the offer. In the version of the ultimatum game presented here, different types of proposers are introduced. After the participants have accepted or rejected in the role of the responder, the different proposers react to the participant’s decision with specific emotional facial expressions. Different feedback patterns are used for the individual experiments conducted in the course of this dissertation.
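For illustration, the trial structure described above can be sketched as code. Everything in this snippet — the proposer labels, the feedback mapping, and the amounts — is a hypothetical simplification of the paradigm, not the actual experimental implementation.

```python
import random

# Each proposer identity reacts to the responder's decision with a
# specific emotional facial expression (assumed mapping for illustration).
FEEDBACK_RULES = {
    "smile_on_accept": {"accept": "happy", "reject": "neutral"},
    "sad_on_reject": {"accept": "neutral", "reject": "sad"},
    "control": {"accept": "neutral", "reject": "neutral"},
}

def run_trial(proposer, offer, total=10, respond=None):
    """One trial: the responder accepts or rejects the offer, then the
    proposer reacts with an emotional facial expression as feedback."""
    accepted = respond(offer, total) if respond else random.random() < 0.5
    payoff = offer if accepted else 0  # rejection forfeits the offered cents
    feedback = FEEDBACK_RULES[proposer]["accept" if accepted else "reject"]
    return {"accepted": accepted, "payoff": payoff, "feedback": feedback}

# Example: accepting 3 of 10 cents from the proposer who smiles on acceptance
trial = run_trial("smile_on_accept", offer=3, respond=lambda o, t: True)
# → {'accepted': True, 'payoff': 3, 'feedback': 'happy'}
```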
In the first study, we investigated the influence of emotional feedback on decision-making in the modified version of the ultimatum game. We were able to show that a proposer who responds to the acceptance of an offer with a smiling face achieves more accepted offers overall than a control proposer who responds to both accepted and rejected offers with a neutral facial expression. Consequently, the smile served as a positive reinforcement. Similarly, a sad expression in response to a rejected offer also resulted in higher acceptance rates as compared to the control identity, which could be considered an expression of compassion for that proposer. On a neuronal level, we could show that there are differences between simply looking at negative emotional stimuli (i.e., sad and angry faces) and their appearance as feedback stimuli after rejected offers in the modified ultimatum game. The so-called feedback-related negativity was reduced (i.e., more positive) when negative emotions appeared as feedback from the proposers. We argued that these findings might show that the participants wanted to punish the proposers by rejecting an offer for its unfairness and therefore the negative feedback met their expectations. The altered processing of negative emotional facial expressions in the ultimatum game could therefore indicate that the punishment is interpreted as successful. This includes the expectation that the interaction partner will change his behavior in the future and eventually make fairer offers.
In the second study, we aimed to show that smiling and sad emojis as feedback stimuli in the modified ultimatum game can also lead to increased acceptance rates. Contrary to our assumptions, this effect could not be observed. At the neural level as well, the findings did not correspond to our assumptions and differed strongly from those of the first study. One finding, however, was that the P3 component indicated that certain proposer types are particularly characterized by their emoji feedback: the P3 was increased both for the proposer who rewards an acceptance with a smile and for the proposer who reacts to rejection with a sad emoji, compared to the neutral control proposer.
The third study examined the discrepancy between the findings of the first and second study. Accordingly, both humans and emojis representing the different proposers were presented in the ultimatum game. In addition, emojis were selected that bore a higher similarity to familiar emojis from common messenger services than those used in the second study. We were able to replicate that proposers who reward an acceptance of the offer with a smile achieve an increased acceptance rate compared to the neutral control proposers. This difference is independent of whether the proposers are represented by emojis or human faces. With regard to the neural correlates, we were able to demonstrate that emojis and human faces differ strongly in their neural processing. Emojis elicited stronger activation than human faces in the face-sensitive N170 component, the feedback-related negativity and the P3 component. We concluded that the results for the N170 and the feedback-related negativity could reflect a signal of missing social information in emojis compared to faces. The increased P3 amplitude for emojis might imply that, compared to human faces, emojis appear unexpectedly as reward stimuli in a social decision task.
The last study of this project dealt with socially unexpected feedback. In comparison to the first three studies, new proposer identities were implemented. In particular, the focus was on a proposer who unexpectedly reacted to the rejection of an offer with a smile and to the acceptance with a neutral facial expression. According to the results, participants responded to this unexpected smile with increased rejection, even though this was accompanied by financial loss. In addition, as reported in studies one and three, we were able to show that proposers who respond to the acceptance of an offer with a smiling face, and thus meet the expectations of the participants, have higher offer acceptance rates than the control proposer. At the neuronal level, especially the feedback from the socially unexpected proposer led to an increased P3 amplitude, which indicates that smiling after rejection is attributed a special subjective importance.
The experiments provide new insights into the social influence through emotional feedback and the processing of relevant social cues. Due to the conceptual similarity of the studies, it was possible to differentiate between stable findings and potentially stimulus-dependent deviations, thus creating a well-founded contribution to the current research. Therefore, the novel paradigm presented here, and the knowledge gained from it could also play an important role in the future for clinical questions dealing with limited social competencies.
This thesis aims for a better understanding of the mechanisms underlying anxiety as well as trauma- and stressor-related disorders and the development of new therapeutic approaches. I was first interested in the associative learning mechanisms involved in the etiology of anxiety disorders. Second, I explored the therapeutic effects of transcutaneous vagus nerve stimulation (tVNS) as a promising new method to accelerate and stabilize extinction learning in humans.
For these purposes, I applied differential anxiety conditioning protocols implemented in virtual reality (VR). Here, a formerly neutral virtual context (anxiety context, CTX+) is presented in which participants unpredictably receive mildly aversive electric stimuli (unconditioned stimulus, US). Another virtual context (safety context, CTX-) is never associated with the US. Moreover, extinction of conditioned anxiety can be modeled by presenting the same contexts without US delivery. When unannounced USs are administered after extinction (i.e., reinstatement), the strength of the “returned” conditioned anxiety provides information on the stability of the extinction memory.
In Study 1, I disentangled the roles of elemental and conjunctive context representations in the acquisition of conditioned anxiety. Sequential screenshots of two virtual offices were presented like a flip-book to elicit the impression of walking through the contexts. Some screenshots of CTX+ were paired with the US (threat elements), whereas other screenshots of the same context (non-threat elements) and the screenshots depicting CTX- (safety elements) were never paired with it. Higher contingency ratings for threat compared to non-threat elements revealed elemental representation. Electro-cortical responses showed larger P100 and early posterior negativity amplitudes elicited by screenshots depicting CTX+ compared to CTX-, suggesting conjunctive representation. These results support a dual context representation in anxiety acquisition in healthy individuals.
Study 2 addressed the effects of tVNS on the stabilization of extinction learning using a context conditioning paradigm. Potentiated startle responses as well as more aversive ratings in CTX+ compared to CTX- indicated successful anxiety conditioning. Complete extinction was found in startle responses and valence ratings, as indicated by the absence of differentiation between CTX+ and CTX-. tVNS affected neither extinction nor reinstatement of anxiety, which may be related to the limited transferability of stimulation parameters that were successful in epilepsy patients to healthy participants during anxiety extinction.
Therefore, in Study 3 I attempted to replicate the modulatory effects of tVNS on heart rate and pain perception using the previously applied parameters. However, no effects of tVNS were observed on subjective pain ratings, pain tolerance, or heart rate. This led to the conclusion that a modification of the stimulation parameters is necessary for a successful acceleration of anxiety extinction in humans.
In Study 4, I prolonged the tVNS and, considering previous tVNS studies, applied a cue conditioning paradigm in VR. During acquisition, a cue (CS+) presented in CTX+ predicted the US, but another cue (CS-) did not. Both cues were also presented in a second context (CTX-) and never paired with the US. Afterward, participants received either tVNS or sham stimulation and underwent extinction learning. I found context-dependent cue conditioning only in valence ratings, as indicated by lower valence for CS+ compared to CS- in CTX+, but no differential ratings in CTX-. Successful extinction was indicated by equal responses to CS+ and CS-. Interestingly, I found reinstatement of conditioned fear in a context-dependent manner, meaning that the startle response was potentiated for CS+ compared to CS- only in the anxiety context. Importantly, even the prolonged tVNS had no effect on either extinction or reinstatement of context-dependent cue conditioning. However, I found first evidence for accelerated physiological contextual extinction, as indicated by less differentiation between startle responses in CTX+ and CTX- in the tVNS group than in the sham-stimulated group.
In sum, this thesis first confirms the dual representation of a context in an elemental and a conjunctive manner. Second, although the anxiety conditioning and context-dependent cue conditioning paradigms worked well, the translation of tVNS-accelerated extinction from rats to humans needs to be further developed, especially regarding the stimulation parameters. Nevertheless, tVNS remains a very promising approach to memory enhancement, which could be particularly auspicious in clinical settings.
Sharing stories has become increasingly popular as a means to foster young children’s vocabulary development and to target early vocabulary gaps between disadvantaged children and their better-equipped peers. Although, in general, the beneficial effects of story interventions have been demonstrated (Marulis & Neuman, 2010, 2013), many factors possibly moderating those effects – including method of story delivery as well as questioning style – merit further examination (R. L. Walsh & Hodge, 2018).
The aim of the present doctoral thesis was to test predictions from different theories on methods of story delivery and questioning styles regarding their influence on children’s vocabulary learning from listening to stories. Method of story delivery refers to the general way of how stories can be conveyed, with reading aloud and free-telling of stories (i.e., the narrator telling stories without reading from text) representing different approaches that are assumed to differ regarding narrator behavior and linguistic complexity. Questioning styles refer to different combinations of questions’ cognitive demand level (low vs. high vs. scaffolding-like increasing from low to high) and/or placement (within the story vs. after the story) during story sessions.
In the present doctoral thesis, the first two studies (Studies 1 and 2) compared reading aloud and free-telling of stories as different methods of story delivery. Study 1 consisted of two experiments utilizing a within-subjects design with 3- to 6-year-old preschool children (N = 83 in Experiment 1; N = 48 in Experiment 2) listening to stories once, either presented read aloud or freely told. Study 2 extended the first study by examining effects on story comprehension and additionally including audiotaped versions of both story-delivery methods as experimental conditions, which allowed separating narrator behavior from linguistic complexity. With the second study being conducted as a between-subjects design, 4- to 6-year-old preschool children (N = 60) heard each of the stories twice, but listened to only one type of story delivery. The results of Study 1 indicated that no differences between methods of story delivery regarding word learning and child engagement were observable when narrator behavior in terms of eye contact and gesticulation was similar. However, in Study 2, when free-telling was operationalized in a more naturalistic way, marked by higher rates of eye contact and gesticulation, it resulted in better child engagement, greater vocabulary learning, and better story comprehension than reading aloud. In contrast, as indicated by both studies, differences in linguistic complexity had no short-term impact on learning and comprehension. The studies, however, could not isolate the influence of eye contact versus gesture usage and could not distinguish between different types of gestures.
The second set of studies (Studies 3 and 4) contrasted the effects of different types of question demand level (low vs. high vs. scaffolding-like increasing from low to high) and placement (within the story vs. after the story) and examined potential interactions with children’s cognitive skills. In one-to-one reading sessions (Study 3; N = 86) or small-group reading sessions (Study 4; N = 91), 4- to 6-year-old preschool children heard stories three times, marked by different types of question demand level and placement, or simply read aloud without questions. The adult narrators encouraged the children to reflect on and answer questions (Study 3) and to give feedback on other children’s comments (Study 4), but in both studies, to ensure fidelity of the experimental conditions, the adult narrators did not provide corrective feedback or elaborate on the children’s answers. Results on measures of different facets of word learning indicated that asking questions resulted in better vocabulary learning than simply reading the stories aloud. However, in contrast to the proposed hypotheses and across both studies, the different types of question demand level and placement did not exert differential effects, and they did not interact with children’s general vocabulary knowledge or memory skills. Thus, both studies suggest that these two question features have no impact on children’s vocabulary learning if questions are not followed up by narrator feedback and elaborations. However, whether different types of question placement and demand level produce differential learning gains through adult-child discussion following different questioning styles remains to be determined.
Taken together, the four studies of the present doctoral thesis underline the central role that adults play for successful story sessions with young children not only for engaging children in the story but also for extending and for correcting their utterances. Although the presented studies extend existing knowledge about methods of story delivery and questioning styles during story sessions, further research needs to examine the impact of questioning styles on word learning through subsequent adult-child discussion and to gain a better understanding of the role of nonverbal narrator behavior during story delivery.
BACKGROUND:
Thigmotaxis refers to a specific behavior of animals (i.e., to stay close to walls when exploring an open space). Such behavior can be assessed with the open field test (OFT), which is a well-established indicator of animal fear. The detection of similar open field behavior in humans may verify the translational validity of this paradigm. Enhanced thigmotaxis related to anxiety may suggest the relevance of such behavior for anxiety disorders, especially agoraphobia.
METHODS:
A global positioning system was used to analyze the behavior of 16 patients with agoraphobia and 18 healthy individuals with a risk for agoraphobia (i.e., high anxiety sensitivity) during a human OFT and compare it with appropriate control groups (n = 16 and n = 19). We also tracked 17 patients with agoraphobia and 17 control participants during a city walk that involved walking through an open market square.
RESULTS:
Our human OFT triggered thigmotaxis in participants; patients with agoraphobia and participants with high anxiety sensitivity exhibited enhanced thigmotaxis. This behavior was evident in increased movement lengths along the wall of the natural open field and fewer entries into the center of the field despite normal movement speed and length. Furthermore, participants avoided passing through the market square during the city walk, indicating again that thigmotaxis is related to agoraphobia.
CONCLUSIONS:
This study is the first to our knowledge to verify the translational validity of the OFT and to reveal that thigmotaxis, an evolutionarily adaptive behavior shown by most species, is related to agoraphobia, a pathologic fear of open spaces, and anxiety sensitivity, a risk factor for agoraphobia.
Fear is elicited by imminent threat and leads to phasic fear responses with selective attention, whereas anxiety is characterized by a sustained state of heightened vigilance due to uncertain danger. In the present study, we investigated attention mechanisms in fear and anxiety by adapting the NPU-threat test to measure steady-state visual evoked potentials (ssVEPs). We investigated ssVEPs across no aversive events (N), predictable aversive events (P), and unpredictable aversive events (U), signaled by four-object arrays (30 s). In addition, central cues were presented during all conditions but predictably signaled imminent threat only during the P condition. Importantly, cues and context events were flickered at different frequencies (15 Hz vs. 20 Hz) in order to disentangle respective electrocortical responses. The onset of the context elicited larger electrocortical responses for U compared to P context. Conversely, P cues elicited larger electrocortical responses compared to N cues. Interestingly, during the presence of the P cue, visuocortical processing of the concurrent context was also enhanced. The results support the notion of enhanced initial hypervigilance to unpredictable compared to predictable threat contexts, while predictable cues show electrocortical enhancement of the cues themselves but additionally a boost of context processing.
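The logic of frequency tagging — separating cue- and context-driven responses by their distinct flicker rates — can be illustrated with a small simulation. In the Python sketch below, the signal amplitudes, sampling rate, and noise level are illustrative assumptions, not the study’s recording parameters; the point is only that two responses tagged at 15 Hz and 20 Hz remain separable in the spectrum of a single mixed signal.

```python
import numpy as np

fs = 500                        # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)     # 4 s of data -> 0.25 Hz frequency resolution
rng = np.random.default_rng(0)

# Simulated EEG: a context response flickered at 15 Hz, a cue response
# flickered at 20 Hz, plus broadband noise (amplitudes are arbitrary).
eeg = (1.5 * np.sin(2 * np.pi * 15 * t)
       + 1.0 * np.sin(2 * np.pi * 20 * t)
       + 0.5 * rng.standard_normal(t.size))

# Amplitude spectrum; a sine of amplitude A appears as ~A/2 at its bin.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def power_at(f):
    """Spectral amplitude at the bin closest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

# The two tagged responses are recoverable at their own frequencies,
# which is what allows cue and context processing to be disentangled.
context_power, cue_power = power_at(15.0), power_at(20.0)
```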
Immersive virtual reality is a powerful method to modify the environment and thereby influence experience. The present study used a virtual hand illusion and context manipulation in immersive virtual reality to examine top-down modulation of pain. Participants received painful heat stimuli on their forearm and placed an embodied virtual hand (co-located with their real one) under a virtual water tap, which dispensed virtual water under different experimental conditions. We aimed to induce a temperature illusion by a red, blue or white light suggesting warm, cold or no virtual water. In addition, the sense of agency was manipulated by allowing participants to have high or low control over the virtual hand’s movements. Most participants experienced a thermal sensation in response to the virtual water and associated the blue and red light with cool/cold or warm/hot temperatures, respectively. Importantly, the blue light condition reduced and the red light condition increased pain intensity and unpleasantness, both compared to the control condition. The control manipulation influenced the sense of agency but did not influence pain ratings. The large effects revealed in our study suggest that context effects within an embodied setting in an immersive virtual environment should be considered within VR-based pain therapy.
Models of eye-movement control distinguish between different control levels, ranging from automatic (bottom-up, stimulus-driven selection) and automatized (based on well-learned routines) to voluntary (top-down, goal-driven selection, e.g., based on instructions). However, one type of voluntary control has yet only been examined in the manual and not in the oculomotor domain, namely free-choice selection among arbitrary targets, that is, targets that are of equal interest from both a bottom-up and top-down processing perspective. Here, we ask which features of targets (identity- or location-related) are used to determine such oculomotor free-choice behavior. In two experiments, participants executed a saccade to one of four peripheral targets in three different choice conditions: unconstrained free choice, constrained free choice based on target identity (color), and constrained free choice based on target location. The analysis of choice frequencies revealed that unconstrained free-choice selection closely resembled constrained choice based on target location. The results suggest that free-choice oculomotor control is mainly guided by spatial (location-based) target characteristics. We explain these results by assuming that participants tend to avoid less parsimonious recoding of target-identity representations into spatial codes, the latter being a necessary prerequisite to configure oculomotor commands.
Objectives
Virtual reality exposure therapy (VRET) is a promising treatment for patients with fear of driving. The present pilot study is the first one focusing on behavioral effects of VRET on patients with fear of driving as measured by a post-treatment driving test in real traffic.
Methods
The therapy followed a standardized manual including psychotherapeutic and medical examination, two preparative psychotherapy sessions, five virtual reality exposure sessions, a final behavioral avoidance test (BAT) in real traffic, a closing session, and two follow-up phone assessments after six and twelve weeks. Virtual reality exposure was conducted in a driving simulator with a fully equipped mockup. The exposure scenarios were individually tailored to the patients’ anxiety hierarchy. A total of 14 patients were treated. Parameters on the verbal, behavioral and physiological level were assessed.
Results
The treatment helped patients overcome driving fear and avoidance. In the final BAT, all patients mastered driving tasks they had previously avoided, 71% showed adequate driving behavior as assessed by the driving instructor, and 93% maintained their treatment success until the second follow-up phone call. Further analyses suggest that the treatment reduces avoidance behavior as well as symptoms of posttraumatic stress disorder as measured by standardized questionnaires (Avoidance and Fusion Questionnaire: p < .10, PTSD Symptom Scale–Self Report: p < .05).
Conclusions
VRET in a driving simulator is a promising treatment for driving fear. Further research with randomized controlled trials is needed to verify its efficacy. Moreover, simulators with simpler configurations should be tested to make the approach broadly available in psychotherapy.
Chronic alcohol use leads to specific neurobiological alterations in the dopaminergic brain reward system, which probably lead to a reward deficiency syndrome in alcohol dependence. The purpose of our study was to examine the effects of such hypothesized neurobiological alterations on the behavioral level, more precisely on implicit and explicit reward learning. Alcohol users were classified as dependent drinkers (using the DSM-IV criteria), binge drinkers (using criteria of the US National Institute on Alcohol Abuse and Alcoholism) or low-risk drinkers (following recommendations of the Scientific board of trustees of the German Health Ministry). The final sample (n = 94) consisted of 36 low-risk alcohol users, 37 binge drinkers and 21 abstinent alcohol-dependent patients. Participants were administered a probabilistic implicit reward learning task and an explicit reward- and punishment-based trial-and-error learning task. Alcohol-dependent patients showed a lower performance in implicit and explicit reward learning than low-risk drinkers. Binge drinkers learned less than low-risk drinkers in the implicit learning task. The results support the assumption that binge drinking and alcohol dependence are related to a chronic reward deficit. Binge drinking accompanied by implicit reward learning deficits could increase the risk for the development of alcohol dependence.
The affective dimensions of emotional valence and emotional arousal affect processing of verbal and pictorial stimuli. Traditional emotional theories assume a linear relationship between these dimensions, with valence determining the direction of a behavior (approach vs. withdrawal) and arousal its intensity or strength. In contrast, according to the valence-arousal conflict theory, both dimensions are interactively related: positive valence and low arousal (PL) are associated with an implicit tendency to approach a stimulus, whereas negative valence and high arousal (NH) are associated with withdrawal. Hence, positive, high-arousal (PH) and negative, low-arousal (NL) stimuli elicit conflicting action tendencies. By extending previous research that used several tasks and methods, the present study investigated whether and how emotional valence and arousal affect subjective approach vs. withdrawal tendencies toward emotional words during two novel tasks. In Study 1, participants had to decide whether they would approach or withdraw from concepts expressed by written words. In Studies 2 and 3 participants had to respond to each word by pressing one of two keys labeled with an arrow pointing upward or downward. Across experiments, positive and negative words, high or low in arousal, were presented. In Study 1 (explicit task), in line with the valence-arousal conflict theory, PH and NL words were responded to more slowly than PL and NH words. In addition, participants decided to approach positive words more often than negative words. In Studies 2 and 3, participants responded faster to positive than negative words, irrespective of their level of arousal. Furthermore, positive words were significantly more often associated with “up” responses than negative words, thus supporting the existence of implicit associations between stimulus valence and response coding (positive is up and negative is down). 
Hence, in contexts in which participants' spontaneous responses are based on implicit associations between stimulus valence and response, there is no influence of arousal. In line with the valence-arousal conflict theory, arousal seems to affect participants' approach-withdrawal tendencies only when such tendencies are made explicit by the task, and a minimal degree of processing depth is required.
Embodiment (i.e., the involvement of a bodily representation) is thought to be relevant in emotional experiences. Virtual reality (VR) is a capable means of activating phobic fear in patients. The representation of the patient’s body (e.g., the right hand) in VR enhances immersion and increases presence, but its effect on phobic fear is still unknown. We analyzed the influence of the presentation of the participant’s hand in VR on presence and fear responses in 32 women with spider phobia and 32 matched controls. Participants sat in front of a table with an acrylic glass container within reaching distance. During the experiment this setup was concealed by a head-mounted display (HMD). The VR scenario presented via HMD showed the same setup, i.e., a table with an acrylic glass container. Participants were randomly assigned to one of two experimental groups. In one group, fear responses were triggered by fear-relevant visual input in VR (virtual spider in the virtual acrylic glass container), while information about a real but unseen neutral control animal (living snake in the acrylic glass container) was given. The second group received fear-relevant information about the real but unseen situation (living spider in the acrylic glass container), but visual input in VR was kept neutral (virtual snake in the virtual acrylic glass container). Participants were instructed to touch the acrylic glass container with their right hand in 20 consecutive trials. Visibility of the hand was varied randomly in a within-subjects design. We found for all participants that visibility of the participant’s hand increased presence independently of the fear trigger. However, in patients, the influence of the virtual hand on fear depended on the fear trigger. When fear was triggered perceptually, i.e., by a virtual spider, the virtual hand increased fear. When fear was triggered by information about a real spider, the virtual hand had no effect on fear.
Our results shed light on the significance of different fear triggers (visual, conceptual) in interaction with body representations.
Are there emotional reactions towards social robots? Could you love a robot? Or, put the other way round: Could you mistreat a robot, tear it apart and sell it? Media reports describe people honoring military robots with funerals, mourning the “death” of a robotic dog, and granting the humanoid robot Sophia citizenship. But how profound are these reactions? Three experiments take a closer look at emotional reactions towards social robots by investigating the subjective experience of people as well as the motor expressive level. Contexts of varying degrees of Human-Robot Interaction (HRI) sketch a nuanced picture of emotions towards social robots that encompasses conscious as well as unconscious reactions. The findings advance the understanding of affective experiences in HRI. They also turn the initial question into a new one: Can emotional reactions towards social robots even be avoided?
Learning with digital media has become a substantial part of formal and informal educational processes and is gaining more and more importance. Technological progress has brought overwhelming opportunities for learners, but challenges them at the same time. Learners have to regulate their learning process to a much greater extent than in traditional learning situations in which teachers support them through external regulation. This means that learners must plan their learning process themselves, apply appropriate learning strategies, and monitor, control and evaluate it. These requirements are taken into account in various models of self-regulated learning (SRL). Although the roots of research on SRL go back to the 1980s, the measurement and adequate support of SRL in technology-enhanced learning environments is still not solved in a satisfactory way. An important obstacle is the set of data sources used to operationalize SRL processes. In order to support SRL in adaptive learning systems and to validate theoretical models, instruments are needed which meet the classical quality criteria and also fulfil additional requirements. Suitable data channels must be measurable "online", i.e., they must be available in real time during learning for analyses or the individual adaptation of interventions. Researchers no longer only have an interest in the final results of questionnaires or tasks, but also need to examine process data from interactions between learners and learning environments in order to advance the development of theories and interventions. In addition, data sources should not be obtrusive so that the learning process is not interrupted or disturbed. Measurements of physiological data, for example, require learners to wear measuring devices. Moreover, measurements should not be reactive. This means that other variables such as learning outcomes should not be influenced by the measurement.
Different data sources that are already used to study and support SRL processes, such as protocols on thinking aloud, screen recording, eye tracking, log files, video observations or physiological sensors, meet these criteria to varying degrees. One data channel that has received little attention in research on educational psychology, but is non-obtrusive, non-reactive, objective and available online, is the detailed, temporally high-resolution data on observable interactions of learners in online learning environments. This data channel is introduced in this thesis as "peripheral data". It records both the content of learning environments as context, and related actions of learners triggered by mouse and keyboard, as well as the reactions of learning environments, such as structural or content changes. Although the above criteria for the use of the data are met, it is unclear whether this data can be interpreted reliably and validly with regard to relevant variables and behavior.
Therefore, the aim of this dissertation is to examine this data channel from the perspective of SRL and thus further close the existing research gap. One development project and four research projects were carried out and documented in this thesis.
Studies with the retro-cue paradigm have shown that validly cueing objects in visual working memory long after encoding can still benefit performance on subsequent change detection tasks. With regard to the effects of invalid cues, the literature is less clear. Some studies reported costs, others did not. Here, we revisit two recent studies that made interesting suggestions concerning invalid retro-cues: one study suggested that costs only occur for larger set sizes, and another suggested that the inclusion of invalid retro-cues diminishes the retro-cue benefit. New data from one experiment and a reanalysis of published data are provided to address these conclusions. The new data clearly show costs (and benefits) that were independent of set size, and the reanalysis suggests no influence of the inclusion of invalid retro-cues on the retro-cue benefit. Thus, previous interpretations should be treated with some caution at present.
The present study examined the developmental trajectories of motor planning and executive functioning in children. To this end, we tested 217 participants with three motor tasks, measuring anticipatory planning abilities (i.e., the bar-transport-task, the sword-rotation-task and the grasp-height-task), and three cognitive tasks, measuring executive functions (i.e., the Tower-of-Hanoi-task, the Mosaic-task, and the D2-attention-endurance-task). Children were aged between 3 and 10 years and were separated into age groups by 1-year bins, resulting in a total of eight groups of children and an additional group of adults. Results suggested (1) a positive developmental trajectory for each of the sub-tests, with better task performance as children get older; (2) that the performance in the separate tasks was not correlated across participants in the different age groups; and (3) that there was no relationship between performance in the motor tasks and in the cognitive tasks used in the present study when controlling for age. These results suggest that both motor planning and executive functions are rather heterogeneous domains of cognitive functioning with fewer interdependencies than often suggested.
Although recent developmental studies exploring the predictive power of intelligence and working memory (WM) for educational achievement in children have provided evidence for the importance of both variables, findings concerning the relative impact of IQ and WM on achievement have been inconsistent. Whereas IQ has been identified as the major predictor variable in a few studies, results from several other developmental investigations suggest that WM may be the stronger predictor of academic achievement. In the present study, data from the Munich Longitudinal Study on the Genesis of Individual Competencies (LOGIC) were used to explore this issue further. The secondary data analysis included data from about 200 participants whose IQ and WM were first assessed at the age of six and repeatedly measured until the ages of 18 and 23. Measures of reading, spelling, and math were also repeatedly assessed for this age range. Both regression analyses based on observed variables and latent variable structural equation modeling (SEM) were carried out to explore whether the predictive power of IQ and WM would differ as a function of time point of measurement (i.e., early vs. late assessment). As a main result of various regression analyses, IQ and WM turned out to be reliable predictors of academic achievement, both in early and later developmental stages, when previous domain knowledge was not included as an additional predictor. The latter variable accounted for most of the variance in more comprehensive regression models, reducing the impact of both IQ and WM considerably. Findings from SEM analyses basically confirmed this outcome, indicating that IQ impacts educational achievement in the early phase and illustrating the strong additional impact of previous domain knowledge on achievement at later stages of development.
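The logic of such a hierarchical regression — entering IQ first, then WM, then prior domain knowledge, and comparing how much variance each step adds — can be sketched with simulated data. All coefficients and noise levels below are invented for illustration and are not the LOGIC study's estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # roughly the LOGIC sample size

# Hypothetical data-generating model: prior domain knowledge itself depends
# on IQ and WM, and achievement depends mostly on knowledge (all assumptions).
iq = rng.normal(size=n)
wm = 0.5 * iq + rng.normal(scale=0.9, size=n)
knowledge = 0.4 * iq + 0.4 * wm + rng.normal(scale=0.8, size=n)
achieve = 0.2 * iq + 0.2 * wm + 0.6 * knowledge + rng.normal(scale=0.7, size=n)

def r_squared(y, *predictors):
    """R^2 of an OLS regression of y on the given predictors plus intercept."""
    X = np.column_stack([np.ones_like(y), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_iq = r_squared(achieve, iq)                       # step 1: IQ only
r2_iq_wm = r_squared(achieve, iq, wm)                # step 2: add WM
r2_full = r_squared(achieve, iq, wm, knowledge)      # step 3: add knowledge

# The jump from step 2 to step 3 mirrors the reported pattern: prior domain
# knowledge absorbs much of the predictive variance of IQ and WM.
```

Because the models are nested, R² can only increase at each step; the interesting quantity is the size of each increment.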
Well-developed phonological awareness skills are a core prerequisite for early literacy development. Although effective phonological awareness training programs exist, children at risk often do not reach similar levels of phonological awareness after the intervention as children with normally developed skills. Based on theoretical considerations and promising initial results, the present study explores effects of an early musical training in combination with a conventional phonological training in children with weak phonological awareness skills. Using a quasi-experimental pretest-posttest control group design and measurements across a period of 2 years, we tested the effects of two interventions: a consecutive combination of a musical and a phonological training and a phonological training alone. The design made it possible to disentangle effects of the musical training alone as well as the effects of its combination with the phonological training. The outcome measures of these groups were compared with the control group with multivariate analyses, controlling for a number of background variables. The sample included N = 424 German-speaking children aged 4–5 years at the beginning of the study. We found a positive relationship between musical abilities and phonological awareness. Yet, whereas the well-established phonological training produced the expected effects, adding a musical training did not contribute significantly to phonological awareness development. Training effects were partly dependent on the initial level of phonological awareness. Possible reasons for the lack of training effects in the musical part of the combination condition as well as practical implications for early literacy education are discussed.
Investigating approach-avoidance behavior regarding affective stimuli is important in broadening the understanding of one of the most common psychiatric disorders, social anxiety disorder. Many studies in this field rely on approach-avoidance tasks, which mainly assess hand movements, or interpersonal distance measures, which return inconsistent results and lack ecological validity. Therefore, the present study introduces a virtual reality task, looking at avoidance parameters (movement time and speed, distance to social stimulus, gaze behavior) during whole-body movements. These complex movements represent the most ecologically valid form of approach and avoidance behavior and are at the core of complex and natural social behavior. With this newly developed task, the present study examined whether high socially anxious individuals differ in avoidance behavior when bypassing another person, here virtual humans with neutral and angry facial expressions. Results showed that virtual bystanders displaying angry facial expressions were generally avoided by all participants. In addition, high socially anxious participants generally displayed enhanced avoidance behavior towards virtual people, but no specifically exaggerated avoidance behavior towards virtual people with a negative facial expression. The newly developed virtual reality task proved to be an ecologically valid tool for research on complex approach-avoidance behavior in social situations. The first results revealed that whole-body approach-avoidance behavior relative to passive bystanders is modulated by their emotional facial expressions and that social anxiety generally amplifies such avoidance.
Continuous norming methods have seldom been subjected to scientific review. In this simulation study, we compared parametric with semi-parametric continuous norming methods in psychometric tests by constructing a fictitious population model within which a latent ability increases with age across seven age groups. We drew samples of different sizes (n = 50, 75, 100, 150, 250, 500 and 1,000 per age group) and simulated the results of an easy, medium, and difficult test scale based on Item Response Theory (IRT). We subjected the resulting data to different continuous norming methods and compared the data fit under the different test conditions with a representative cross-validation dataset of n = 10,000 per age group. The most significant differences were found in suboptimal (i.e., too easy or too difficult) test scales and in ability levels that were far from the population mean. We discuss the results with regard to the selection of the appropriate modeling techniques in psychometric test construction, the required sample sizes, and the requirement to report appropriate quantitative and qualitative test quality criteria for continuous norming methods in test manuals.
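The simulation logic described above — a latent ability that increases across age groups, from which IRT-based test scores are generated — can be sketched as follows. The specific ability means, item difficulties, and the use of a Rasch (1PL) model are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative population model: latent ability rises across seven age groups.
age_means = np.linspace(-1.5, 1.5, 7)   # assumed latent mean per age group
n_per_group = 10_000                    # cross-validation sample size per group

# A "medium-difficulty" scale of 20 dichotomous items under a Rasch model,
# with item difficulties spread around the overall population mean.
difficulties = np.linspace(-2.0, 2.0, 20)

def simulate_sum_scores(mean_ability, n, b):
    """Draw abilities, then Bernoulli responses with P(correct) = logistic(theta - b)."""
    theta = rng.normal(mean_ability, 1.0, size=(n, 1))
    p = 1.0 / (1.0 + np.exp(-(theta - b)))      # IRT response probabilities
    responses = rng.random((n, len(b))) < p     # simulated item responses
    return responses.sum(axis=1)                # raw (sum) scores

scores = [simulate_sum_scores(m, n_per_group, difficulties) for m in age_means]
means = [s.mean() for s in scores]
# Mean raw scores rise monotonically with age-group ability; norming methods
# are then fitted to such samples and compared against the large dataset.
```

Shifting the difficulty range (e.g., `np.linspace(0.0, 4.0, 20)`) produces the "too difficult" scale condition, where floor effects make extreme norm scores hardest to model.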
Virtual reality plays an increasingly important role in research and therapy of pathological fear. However, the mechanisms how virtual environments elicit and modify fear responses are not yet fully understood. Presence, a psychological construct referring to the ‘sense of being there’ in a virtual environment, is widely assumed to crucially influence the strength of the elicited fear responses; however, causality is still under debate. The present study is the first that experimentally manipulated both variables to unravel the causal link between presence and fear responses. Height-fearful participants (N = 49) were immersed into a virtual height situation and a neutral control situation (fear manipulation) with either high versus low sensory realism (presence manipulation). Ratings of presence and verbal and physiological (skin conductance, heart rate) fear responses were recorded. Results revealed an effect of the fear manipulation on presence, i.e., higher presence ratings in the height situation compared to the neutral control situation, but no effect of the presence manipulation on fear responses. However, the presence ratings during the first exposure to the high quality neutral environment were predictive of later fear responses in the height situation. Our findings support the hypothesis that experiencing emotional responses in a virtual environment leads to a stronger feeling of being there, i.e., increased presence. In contrast, the effects of presence on fear seem to be more complex: on the one hand, increased presence due to the quality of the virtual environment did not influence fear; on the other hand, presence variability that likely stemmed from differences in user characteristics did predict later fear responses. These findings underscore the importance of user characteristics in the emergence of presence.
Cognitive Processing in Non-Communicative Patients: What Can Event-Related Potentials Tell Us?
(2016)
Event-related potentials (ERP) have been proposed to improve the differential diagnosis of non-responsive patients. We investigated the potential of the P300 as a reliable marker of conscious processing in patients with locked-in syndrome (LIS). Eleven chronic LIS patients and 10 healthy subjects (HS) listened to a complex-tone auditory oddball paradigm, first in a passive condition (listen to the sounds) and then in an active condition (counting the deviant tones). Seven out of nine HS displayed a P300 waveform in the passive condition and all in the active condition. HS showed statistically significant changes in peak and area amplitude between conditions. Three out of seven LIS patients showed the P3 waveform in the passive condition and five of seven in the active condition. No changes in peak amplitude and only a significant difference at one electrode in area amplitude were observed in this group between conditions. We conclude that, in spite of keeping full consciousness and intact or nearly intact cortical functions, compared to HS, LIS patients present less reliable results when testing with ERP, specifically in the passive condition. We thus strongly recommend applying ERP paradigms in an active condition when evaluating consciousness in non-responsive patients.
Effects of Background Music on Objective and Subjective Performance Measures in an Auditory BCI
(2016)
Several studies have explored brain computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory and other BCIs are typically studied without background music. Some work has explored the possibility of using polyphonic music in auditory BCI systems. However, this approach requires users with good musical skills, and has not been explored in online experiments. Our hypothesis was that an auditory BCI with background music would be preferred by subjects over a similar BCI without background music, without any difference in BCI performance. We introduce a simple paradigm (which does not require musical skill) using percussion instrument sound stimuli and background music, and evaluated it in both offline and online experiments. The results showed that subjects preferred the auditory BCI with background music. Different performance measures did not reveal any significant performance effect when comparing background music vs. no background music. Since the addition of background music does not impair BCI performance but is preferred by users, auditory (and perhaps other) BCIs should consider including it. Our study also indicates that auditory BCIs can be effective even if the auditory channel is simultaneously otherwise engaged.
Context conditioning is characterized by unpredictable threat, and together with its generalization it may constitute a risk factor for panic disorder (PD). Therefore, we examined differences between individuals with panic attacks (PA; N = 21) and healthy controls (HC, N = 22) in contextual learning and context generalization using a virtual reality (VR) paradigm. Successful context conditioning was indicated in both groups by higher arousal, anxiety and contingency ratings, and increased startle responses and skin conductance levels (SCLs) in an anxiety context (CTX+) where an aversive unconditioned stimulus (US) occurred unpredictably vs. a safety context (CTX−). PA compared to HC exhibited increased differential responding to CTX+ vs. CTX− and overgeneralization of contextual anxiety on an evaluative verbal level, but not on a physiological level. We conclude that increased contextual conditioning and contextual generalization may constitute risk factors for PD or agoraphobia, contributing to the characteristic avoidance of anxiety contexts and withdrawal to safety contexts, and that evaluative cognitive processes may play a major role.
According to the motivational priming hypothesis, unpleasant stimuli activate the motivational defense system, which in turn promotes congruent affective states such as negative emotions and pain. The question arises to what degree this bottom–up impact of emotions on pain is susceptible to a manipulation of top–down-driven expectations. To this end, we investigated whether verbal instructions implying pain potentiation vs. reduction (placebo or nocebo expectations)—later on confirmed by corresponding experiences (placebo or nocebo conditioning)—might alter behavioral and neurophysiological correlates of pain modulation by unpleasant pictures. We compared two groups, which underwent three experimental phases: first, participants were either instructed that watching unpleasant affective pictures would increase pain (nocebo group) or that watching unpleasant pictures would decrease pain (placebo group) relative to neutral pictures. During the following placebo/nocebo-conditioning phase, pictures were presented together with electrical pain stimuli of different intensities, reinforcing the instructions. In the subsequent test phase, all pictures were presented again combined with identical pain stimuli. Electroencephalogram was recorded in order to analyze neurophysiological responses of pain (somatosensory evoked potential) and picture processing [visually evoked late positive potential (LPP)], in addition to pain ratings. In the test phase, ratings of pain stimuli administered while watching unpleasant relative to neutral pictures were significantly higher in the nocebo group, thus confirming the motivational priming effect for pain perception. In the placebo group, this effect was reversed such that unpleasant compared with neutral pictures led to significantly lower pain ratings. Similarly, somatosensory evoked potentials were decreased during unpleasant compared with neutral pictures, in the placebo group only. 
LPPs of the placebo group failed to discriminate between unpleasant and neutral pictures, while the LPPs of the nocebo group showed a clear differentiation. We conclude that the placebo manipulation already affected the processing of the emotional stimuli and, in consequence, the processing of the pain stimuli. In summary, the study revealed that the modulation of pain by emotions, albeit a reliable and well-established finding, is further tuned by reinforced expectations—known to induce placebo/nocebo effects—which should be addressed in future research and considered in clinical applications.
Multitasking, defined as performing more than one task at a time, typically yields performance decrements, for instance, in processing speed and accuracy. These performance costs are often distributed asymmetrically among the involved tasks. Under suitable conditions, this can be interpreted as a marker for prioritization of one task – the one that suffers less – over the other. One source of such task prioritization is based on the use of different effector systems (e.g., oculomotor system, vocal tract, limbs) and their characteristics. The present work explores such effector system-based task prioritization by examining to which extent associated effector systems determine which task is processed with higher priority in multitasking situations. To this end, three different paradigms are used, namely the simultaneous (stimulus) onset paradigm, the psychological refractory period (PRP) paradigm, and the task switching paradigm. These paradigms invoke situations in which two (in the present studies basic spatial decision) tasks are a) initiated at exactly the same time, b) initiated with a short varying temporal distance (but still temporally overlapping), or c) in which tasks alternate randomly (without temporal overlap). The results allow for three major conclusions: 1. The assumption of effector system-based task prioritization according to an ordinal pattern (oculomotor > pedal > vocal > manual, indicating decreasing prioritization) is supported by the observed data in the simultaneous onset paradigm. This data pattern cannot be explained by a rigid “first come, first served” task scheduling principle. 2. The data from the PRP paradigm confirmed the assumption of vocal-over-manual prioritization and showed that classic PRP effects (as a marker for task order-based prioritization) can be modulated by effector system characteristics. 3.
The mere cognitive representation of task sets (that must be held active to switch between them) differing in effector systems without an actual temporal overlap in task processing, however, is not sufficient to elicit the same effector system prioritization phenomena observed for overlapping tasks. In summary, the insights obtained by the present work support the assumptions of parallel central task processing and resource sharing among tasks, as opposed to exclusively serial processing of central processing stages. Moreover, they indicate that effector systems are a crucial factor in multitasking and suggest an integration of corresponding weighting parameters in existing dual-task control frameworks.
Background
While the coordination of oculomotor and manual behavior is essential for driving a car, surprisingly little is known about this interaction, especially in situations requiring a quick steering reaction. In the present study, we analyzed oculomotor gaze and manual steering behavior in approach and avoidance tasks. Three task blocks were implemented within a dynamic simulated driving environment requiring the driver either to steer away from/toward a visual stimulus or to switch between both tasks.
Results
Task blocks requiring task switches were associated with higher manual response times and increased error rates. Manual response times did not significantly differ depending on whether drivers had to steer away from vs. toward a stimulus, whereas oculomotor response times and gaze pattern variability were increased when drivers had to steer away from a stimulus compared to steering toward it.
Conclusion
The increased manual response times and error rates in mixed tasks indicate performance costs associated with cognitive flexibility, while the increased oculomotor response times and gaze pattern variability indicate a parsimonious cross-modal action control strategy (avoiding stimulus fixation prior to steering away from it) for the avoidance scenario. Several discrepancies between these results and typical eye–hand interaction patterns in basic laboratory research suggest that the specific goals and complex perceptual affordances associated with driving a vehicle strongly shape cross-modal control of behavior.
Responding in the presence of stimuli leads to an integration of stimulus features and response features into event files, which can later be retrieved to assist action control. This integration mechanism is not limited to target stimuli, but can also include distractors (distractor-response binding). A recurring research question is which factors determine whether or not distractors are integrated. One suggested candidate factor is target-distractor congruency: distractor-response binding effects were reported to be stronger for congruent than for incongruent target-distractor pairs. Here, we discuss a general problem with including the factor of congruency in the typical analyses used to study distractor-based binding effects. Integrating this factor leads to a confound, so that any difference between the distractor-response binding effects of congruent and incongruent distractors can be explained by a simple congruency effect. Simulation data confirmed this argument. We propose interpreting previous data cautiously and discuss potential avenues for circumventing this problem in the future.
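The confound can be made concrete with a small simulation. This is a minimal sketch, not the authors' code: it assumes a two-response, flanker-style design in which a repeated distractor inherits its prime response mapping (so distractor repetition plus response repetition reproduces the prime's congruency, while distractor repetition plus response change flips it), and all numbers are illustrative. RTs depend only on probe congruency, yet the standard binding score differs between congruent and incongruent conditions:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_cell(n, probe_congruent_frac, base=500.0, cong_cost=30.0, sd=50.0):
    """Mean RT of a design cell in which a given fraction of probes is congruent.
    RT depends ONLY on probe congruency -- no binding effect is built in."""
    incong = rng.random(n) >= probe_congruent_frac
    rts = base + cong_cost * incong + rng.normal(0.0, sd, n)
    return rts.mean()

n = 20000
# Prime-congruent condition: distractor repetition + response repetition
# reproduces a congruent probe; repetition + response change yields an
# incongruent probe; distractor-change probes are balanced (50/50).
con = {
    ("rep", "rep"): simulate_cell(n, 1.0),
    ("rep", "chg"): simulate_cell(n, 0.0),
    ("chg", "rep"): simulate_cell(n, 0.5),
    ("chg", "chg"): simulate_cell(n, 0.5),
}
# Prime-incongruent condition: the inherited mapping flips.
inc = {
    ("rep", "rep"): simulate_cell(n, 0.0),
    ("rep", "chg"): simulate_cell(n, 1.0),
    ("chg", "rep"): simulate_cell(n, 0.5),
    ("chg", "chg"): simulate_cell(n, 0.5),
}

def binding_effect(cells):
    # Standard distractor-response binding score:
    # (distractor-change cost under response repetition)
    # minus (distractor-change cost under response change).
    return (cells[("chg", "rep")] - cells[("rep", "rep")]) \
         - (cells[("chg", "chg")] - cells[("rep", "chg")])

b_con, b_inc = binding_effect(con), binding_effect(inc)
print(f"'binding' for congruent distractors:   {b_con:+.1f} ms")
print(f"'binding' for incongruent distractors: {b_inc:+.1f} ms")
```

Although no binding is built into the data, the two scores come out roughly plus and minus the congruency cost, so their difference mimics a binding-by-congruency interaction of about twice that cost.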
Promising initial insights show that offices designed to permit physical activity (PA) may reduce workplace sitting time. Biophilic approaches are intended to introduce natural surroundings into the workplace, and preliminary data show positive effects on stress reduction and elevated productivity within the workplace. The primary aim of this pilot study was to analyze changes in workplace sitting time and self-reported habit strength concerning uninterrupted sitting and PA during work, when relocating from a traditional office setting to “active” biophilic-designed surroundings. The secondary aim was to assess possible changes in work-associated factors such as satisfaction with the office environment, work engagement, and work performance, among office staff. In a pre-post designed field study, we collected data through an online survey on health behavior at work. Twelve participants completed the survey before (one-month pre-relocation, T1) and twice after the office relocation (three months (T2) and seven months post-relocation (T3)). Standing time per day during office hours increased from T1 to T3 by about 40 min per day (p < 0.01). Other outcomes remained unaltered. The results suggest that changing office surroundings to an active-permissive biophilic design increased standing time during working hours. Future larger-scale controlled studies are warranted to investigate the influence of office design on sitting time and work-associated factors during working hours in depth.
A hallmark of habitual actions is that, once they are established, they become insensitive to changes in the values of action outcomes. In this article, we review empirical research that examined effects of posttraining changes in outcome values in outcome-selective Pavlovian-to-instrumental transfer (PIT) tasks. This review suggests that cue-instigated action tendencies in these tasks are not affected by weak and/or incomplete revaluation procedures (e.g., selective satiety) but are substantially disrupted by a strong and complete devaluation of reinforcers. In a second part, we discuss two alternative models of a motivational control of habitual action: a default-interventionist framework and expected value of control theory. It is argued that the default-interventionist framework cannot solve the problem of an infinite regress (i.e., what controls the controller?). In contrast, expected value of control can explain control of habitual actions with local computations and feedback loops without (implicit) references to control homunculi. It is argued that insensitivity to changes in action outcomes is not an intrinsic design feature of habits but, rather, a function of the cognitive system that controls habitual action tendencies.
Both low-level physical saliency and social information, as presented by human heads or bodies, are known to drive gaze behavior in free-viewing tasks. Researchers have previously made use of a great variety of face stimuli, ranging from photographs of real humans to schematic faces, frequently without systematically differentiating between the two. In the current study, we used a Generalized Linear Mixed Model (GLMM) approach to investigate to what extent schematic artificial faces can predict gaze when they are presented alone or in competition with real human faces. Relative differences in predictive power became apparent, while GLMMs suggested substantial effects for real and artificial faces in all conditions. Artificial faces were accordingly less predictive than real human faces but still contributed significantly to gaze allocation. These results help to further our understanding of how social information guides gaze in complex naturalistic scenes.
For the current study, the Lazarian stress-coping theory and the appendant model of psychosocial adjustment to chronic illness and disabilities (Pakenham, 1999) formed the foundation for identifying determinants of adjustment to ALS. We aimed to investigate the evolution of psychosocial adjustment to ALS and to determine its long-term predictors. A longitudinal study design with four measurement time points was therefore used to assess patients' quality of life, depression, and stress-coping-model-related aspects, such as illness characteristics, social support, cognitive appraisals, and coping strategies, over a period of 2 years. Regression analyses revealed that 55% of the variance in severity of depressive symptoms and 47% of the variance in quality of life at T2 were accounted for by all the T1 predictor variables taken together. On the level of individual contributions, protective buffering and appraisal of one's own coping potential accounted for a significant percentage of the variance in severity of depressive symptoms, whereas problem-management coping strategies explained variance in quality of life scores. Illness characteristics at T2 did not explain any variance in either adjustment outcome. Overall, the pattern of the longitudinal results indicated stable depressive symptoms and quality of life indices, reflecting a successful adjustment to the disease across four measurement time points over a period of about two years. Empirical evidence is provided for the predictive value of social support, cognitive appraisals, and coping strategies, but not of illness parameters such as severity and duration, for adaptation to ALS. The current study contributes to a better conceptualization of adjustment, allowing us to provide evidence-based support beyond medical and physical intervention for people with ALS.
No abstract available.
Maladaptive coping mechanisms influence the health-related quality of life (HRQoL) of individuals facing acute and chronic stress. Trait emotional intelligence (EI) may provide a protective shield against the debilitating effects of maladaptive coping, thus contributing to maintained HRQoL. Low trait EI, on the other hand, may predispose individuals to apply maladaptive coping, consequently resulting in lower HRQoL. The current research comprises two studies. Study 1 was designed to investigate the protective effects of trait EI and its utility for efficient coping in dealing with the stress caused by chronic heart failure (CHF) in a cross-cultural setting (Pakistan vs Germany). N = 200 CHF patients were recruited at cardiology institutes in Multan, Pakistan, and in Würzburg and Brandenburg, Germany. Path analysis confirmed the expected relation between low trait EI and low HRQoL and revealed that this association was mediated by maladaptive metacognitions and negative coping strategies in Pakistani but not German CHF patients. Interestingly, the specific coping strategies were also culture-specific. The Pakistani sample considered religious coping to be highly important, whereas the German sample focused on adopting a healthy lifestyle such as doing exercise. These findings are in line with cultural characteristics suggesting that German CHF patients have an internal locus of control as compared to an external locus of control in Pakistani CHF patients. Finally, the findings from Study 1 corroborate the culture-independent validity of the metacognitive model of generalized anxiety disorder.
In addition to low trait EI, high interoceptive accuracy (IA) may predispose individuals to interpret cardiac symptoms as threatening, thus leading to anxiety. To examine this proposition, Study 2 compared individuals with high vs low IA in dealing with a psychosocial stressor (public speaking) in an experimental lab study. In addition, a novel physiological intervention named transcutaneous vagus nerve stimulation (t-VNS) and cognitive reappraisal (CR) were applied during and after the anticipation of the speech in order to facilitate coping with stress. N = 99 healthy volunteers participated in the study. Results showed descriptive patterns that only reached trend level. They suggested a tendency of high IA individuals to perceive the situation as more threatening, as indicated by increased heart rate and reduced heart rate variability in the high-frequency spectrum, as well as high subjective anxiety during anticipation of and actual performance of the speech. This suggests a potential vulnerability of high IA individuals for developing anxiety disorders, specifically social anxiety disorder, in case negative self-focused attention and negative evaluation are applied to the (more prominently perceived) increased cardiac responding during anticipation and actual delivery of the public speech. The study did not reveal any significant protective effects of t-VNS and CR.
In summary, the current research suggested that low trait EI and high IA predicted worse psychological adjustment to chronic and acute distress. Low trait EI facilitated maladaptive metacognitive processes resulting in the use of negative coping strategies in Study 1, whereas increased IA regarding cardioceptions predicted high physiological arousal in Study 2. Finally, the German vs. the Pakistani culture greatly affected the preference for specific coping strategies. These findings have implications for caregivers to provide culture-specific treatments on the one hand. On the other hand, they highlight high IA as a possible vulnerability to be targeted for the prevention of (social) anxiety.
Forward Collision Alarms (FCA) are intended to signal hazardous traffic situations and the need for an immediate corrective driver response. However, data from naturalistic driving studies revealed that approximately half of all alarms activated by conventional FCA systems were unnecessary. In these situations, the alarm activation was correct according to the implemented algorithm, yet the alarms led to no or only minimal driver responses. Psychological research can make an important contribution to understanding drivers’ needs when interacting with driver assistance systems.
The overarching objective of this thesis was to gain a systematic understanding of psychological factors and processes that influence drivers’ perceived need for assistance in potential collision situations. To elucidate under which conditions drivers perceive alarms as unnecessary, a theoretical framework of drivers’ subjective alarm evaluation was developed. A further goal was to investigate the impact of unnecessary alarms on drivers’ responses and acceptance. Four driving simulator studies were carried out to examine the outlined research questions.
In line with the hypotheses derived from the theoretical framework, the results suggest that drivers’ perceived need for assistance is determined by their retrospective subjective hazard perception. While predictions of conventional FCA systems are exclusively based on physical measurements resulting in a time to collision, human drivers additionally consider their own manoeuvre intentions and those attributed to other road users to anticipate the further course of a potentially critical situation. When drivers anticipate a dissolving outcome of a potential conflict, they perceive the situation as less hazardous than the system does. Based on this discrepancy, the system would activate an alarm while drivers’ perceived need for assistance is low. To sum up, the described factors and processes cause drivers to perceive certain alarms as unnecessary. Although drivers accept unnecessary alarms less than useful alarms, unnecessary alarms do not reduce their overall system acceptance. While unnecessary alarms cause moderate driver responses in the short term, the intensity of these responses decreases with repeated exposure to unnecessary alarms. However, overall, effects of unnecessary alarms on drivers’ alarm responses and acceptance seem to be rather uncritical.
This thesis provides insights into the human factors that explain when FCAs are perceived as unnecessary. These insights might contribute to the design of FCA systems tailored to drivers’ needs.
A commentary on: Feeling the Conflict: The Crucial Role of Conflict Experience in Adaptation, by Desender, K., Van Opstal, F., and Van den Bussche, E. (2014). Psychol. Sci. 25, 675–683. doi: 10.1177/0956797613511468
Conflict adaptation in masked priming has recently been proposed to rely not on successful conflict resolution but rather on conflict experience (Desender et al., 2014). We re-assessed this proposal in a direct replication and also tested a potential confound due to conflict strength. The data supported this alternative view, but also failed to replicate basic conflict adaptation effects of the original study despite considerable power.
Pictorial stimuli can vary on many dimensions, several aspects of which are captured by the term ‘visual complexity.’ Visual complexity has been characterized by example: “a picture of a few objects, colors, or structures would be less complex than a very colorful picture of many objects that is composed of several components.” Prior studies have reported a relationship between affect and visual complexity, where complex pictures are rated as more pleasant and arousing. However, a relationship in the opposite direction, an effect of affect on visual complexity, is also possible; emotional arousal and valence are known to influence selective attention and visual processing. In a series of experiments, we found that ratings of visual complexity correlated with affective ratings, and independently also with computational measures of visual complexity. These computational measures did not correlate with affect, suggesting that complexity ratings are separately related to distinct factors. We investigated the relationship between affect and ratings of visual complexity, finding an ‘arousal-complexity bias’ to be a robust phenomenon. Moreover, we found this bias could be attenuated when explicitly indicated, but it did not correlate with inter-individual difference measures of affective processing and was largely unrelated to cognitive and eye-tracking measures. Taken together, the arousal-complexity bias seems to be caused by a relationship between arousal and visual processing, as has been described for the greater vividness of arousing pictures. The described arousal-complexity bias is also of relevance from an experimental perspective because visual complexity is often considered a variable to control for when using pictorial stimuli.
We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = 0.05 to p = 0.005, is deleterious for the finding of new discoveries and the progress of science. Given that blanket and variable alpha levels both are problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and sample size much more directly than significance testing does; but none of the statistical tools should be taken as the new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, and implications for applications. To boil all this down to a binary decision based on a p-value threshold of 0.05, 0.01, 0.005, or anything else, is not acceptable.
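The case for cumulative evidence over single-study thresholds can be illustrated with a small fixed-effect (inverse-variance) pooling sketch. The numbers below are invented for illustration and do not come from the article: five equally precise studies each estimate the same effect, none crosses the conventional z = 1.96 threshold on its own, yet the pooled evidence is strong.

```python
import math

# Five small studies, each estimating the same effect with the same
# standard error (illustrative numbers).
effects = [0.20] * 5
se = 0.12
z_single = effects[0] / se                # ~1.67: below the 1.96 threshold

# Fixed-effect (inverse-variance) pooling across studies.
weights = [1 / se**2] * len(effects)
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))   # equals se / sqrt(5) here
z_pooled = pooled / pooled_se             # ~3.73: strong cumulative evidence

print(f"per-study z: {z_single:.2f}, pooled z: {z_pooled:.2f}")
```

The point is not that the pooled z should then be thresholded instead; it is that the strength of evidence accumulates across independent studies in a way no single-study accept/reject decision captures.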
When More Is Better – Consumption Priming Decreases Responders’ Rejections in the Ultimatum Game
(2017)
During the past decades, economic theories of rational choice have been exposed to outcomes that were severe challenges to their claim of universal validity. For example, traditional theories cannot account for refusals to cooperate if cooperation would result in higher payoffs. A prominent illustration is responders’ rejections of positive but unequal payoffs in the Ultimatum Game. To accommodate this anomaly in a rational framework one needs to assume both a preference for higher payoffs and a preference for equal payoffs. The current set of studies shows that the relative weight of these preference components depends on external conditions and that consumption priming may decrease responders’ rejections of unequal payoffs. Specifically, we demonstrate that increasing the accessibility of consumption-related information accentuates the preference for higher payoffs. Furthermore, consumption priming increased responders’ reaction times for unequal payoffs, which suggests an increased conflict between both preference components. While these results may also be integrated into existing social preference models, we try to identify some basic psychological processes underlying economic decision making. Going beyond the Ultimatum Game, we propose that a distinction between comparative and deductive evaluations may provide a more general framework to account for various anomalies in behavioral economics.
In today’s world of work, networking behaviors are an important and viable strategy to enhance success in work and career domains. Concerning personality as an antecedent of networking behaviors, prior studies have exclusively relied on trait perspectives that focus on how people feel, think, and act. Adopting a motivational perspective on personality, we enlarge this focus and argue that beyond traits predominantly tapping social content, motives shed further light on instrumental aspects of networking – or why people network. We use McClelland’s implicit motives framework of need for power (nPow), need for achievement (nAch), and need for affiliation (nAff) to examine instrumental determinants of networking. Using a facet theoretical approach to networking behaviors, we predict differential relations of these three motives with facets of (1) internal vs. external networking and (2) building, maintaining, and using contacts. We conducted an online study with temporally separated measures (N = 539 employed individuals) to examine our hypotheses. Using multivariate latent regression, we show that nAch is related to networking in general. In line with theoretical differences between networking facets, we find that nAff is positively related to building contacts, whereas nPow is positively related to using internal contacts. In sum, this study shows that networking is not only driven by social factors (i.e., nAff), but instead the achievement motive is the most important driver of networking behaviors.
Social attention is a ubiquitous, but also enigmatic and sometimes elusive phenomenon. We direct our gaze at other human beings to see what they are doing and to guess their intentions, but we may also absorb social events en passant as they unfold in the corner of the eye. We use our gaze as a discrete communication channel, sometimes conveying pieces of information which would be difficult to explicate, but we may also find ourselves avoiding eye contact with others in moments when self-disclosure is fear-laden. We experience our gaze as the most genuine expression of our will, but research also suggests considerable levels of predictability and automaticity in our gaze behavior. The phenomenon’s complexity has hindered researchers from developing a unified framework which can conclusively accommodate all of its aspects, or from even agreeing on the most promising research methodologies.
The present work follows a multi-method approach, taking on several aspects of the phenomenon from various directions. Participants in study 1 viewed dynamic social scenes on a computer screen. Here, low-level physical saliency (i.e., color, contrast, or motion) and human heads both attracted gaze to a similar extent, providing a comparison of two vastly different classes of gaze predictors in direct juxtaposition. In study 2, participants with varying degrees of social anxiety walked in a public train station while their eye movements were tracked. With increasing levels of social anxiety, participants showed a relative avoidance of gaze at near compared to distant people. When the experiment was replicated in a laboratory situation with a matched participant group, social anxiety did not modulate gaze behavior, fueling the debate around appropriate experimental designs in the field. Study 3 employed virtual reality (VR) to investigate social gaze in a complex and immersive, but still highly controlled situation. Here, participants exhibited a gaze behavior which may be more typical for real life than for laboratory situations, as they avoided gaze contact with a virtual conspecific unless she gazed at them. This study provided important insights into gaze behavior in virtual social situations, helping to better estimate the possible benefits of this new research approach. Throughout all three experiments, participants showed consistent inter-individual differences in their gaze behavior. However, the present work could not resolve whether these differences are linked to psychologically meaningful traits or whether they instead have an epiphenomenal character.
When observing another agent performing simple actions, these actions are systematically remembered as one’s own after a brief period of time. Such observation inflation has been documented as a robust phenomenon in studies in which participants passively observed videotaped actions. Whether observation inflation also holds for direct, face-to-face interactions is an open question that we addressed in two experiments. In Experiment 1, participants commanded the experimenter to carry out certain actions, and they indeed reported false memories of self-performance in a later memory test. The effect size of this inflation effect was similar to that observed with passive observation, as confirmed in Experiment 2. These findings suggest that observation inflation might affect action memory in a broad range of real-world interactions.
Saliency-based models of visual attention postulate that, when a scene is freely viewed, attention is predominantly allocated to those elements that stand out in terms of their physical properties. However, eye-tracking studies have shown that saliency models fail to predict gaze behavior accurately when social information is included in an image. Notably, gaze pattern analyses revealed that depictions of human beings are heavily prioritized independent of their low-level physical saliency. What remains unknown, however, is whether the prioritization of such social features is a reflexive or a voluntary process. To investigate the early stages of social attention in more detail, participants viewed photographs of naturalistic scenes with and without social features (i.e., human heads or bodies) for 200 ms while their eye movements were being recorded. We observed significantly more first eye movements to regions containing social features than would be expected from a chance level distribution of saccades. Additionally, a generalized linear mixed model analysis revealed that the social content of a region better predicted first saccade direction than its saliency, suggesting that social features partially override the impact of low-level physical saliency on gaze patterns. Given the brief image presentation time that precluded visual exploration, our results provide compelling evidence for a reflexive component in social attention. Moreover, the present study emphasizes the importance of considering social influences for a more coherent understanding of human attentional selection.
Visual saliency maps reflecting locations that stand out from the background in terms of their low-level physical features have proven to be very useful for empirical research on attentional exploration and reliably predict gaze behavior. In the present study we tested these predictions for socially relevant stimuli occurring in naturalistic scenes using eye tracking. We hypothesized that social features (i.e., human faces or bodies) would be processed preferentially over non-social features (i.e., objects, animals) regardless of their low-level saliency. To challenge this notion, we included three tasks that deliberately addressed non-social attributes. In agreement with our hypothesis, social information, especially heads, was preferentially attended compared to highly salient image regions across all tasks. Social information was never required to solve a task but was regarded nevertheless. Moreover, after completing the task requirements, viewing behavior reverted back to that of free viewing, with heavy prioritization of social features. Additionally, initial eye movements, reflecting potentially automatic shifts of attention, were predominantly directed towards heads irrespective of top-down task demands. On these grounds, we suggest that social stimuli may provide exclusive access to the priority map, enabling social attention to override reflexive and controlled attentional processes. Furthermore, our results challenge the generalizability of saliency-based attention models.
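The analyses described in the gaze studies above can be sketched in outline. The original work fitted generalized linear mixed models with subject random effects; this self-contained sketch simplifies to a fixed-effects logistic regression fitted by Newton's method (iteratively reweighted least squares) on simulated fixation data, so every number and weight is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate image regions: each has a low-level saliency value and a flag
# for social content (head/body). In this toy generative model, fixation
# probability is driven mostly by social content, only weakly by saliency.
n = 5000
saliency = rng.random(n)            # normalized low-level saliency in [0, 1)
social = rng.random(n) < 0.2        # 20% of regions contain social content
logit = -2.0 + 0.5 * saliency + 2.5 * social
fixated = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit a logistic regression (intercept, saliency, social) by IRLS.
X = np.column_stack([np.ones(n), saliency, social.astype(float)])
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    # Newton step: beta += (X' W X)^-1 X' (y - p)
    beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (fixated - p))

print(f"saliency coefficient: {beta[1]:+.2f}")
print(f"social   coefficient: {beta[2]:+.2f}")
```

The fitted social coefficient dominates the saliency coefficient, which is the qualitative pattern the studies report; a full analysis would add per-subject random effects to account for repeated measures.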
Do people evaluate an open-minded midwife less positively than a caring midwife? Both open-minded and caring are generally seen as positive attributes. However, consistency varies—the attribute caring is consistent with the midwife stereotype while open-minded is not. In general, both stimulus valence and consistency can influence evaluations. Six experiments investigated the respective influence of valence and consistency on evaluative judgments in the domain of stereotyping. In an impression formation paradigm, valence and consistency of stereotypic information about target persons were manipulated orthogonally and spontaneous evaluations of these target persons were measured. Valence reliably influenced evaluations. However, for strongly valenced stereotypes, no effect of consistency was observed. Parameters possibly preventing the occurrence of consistency effects were ruled out, specifically, valence of inconsistent attributes, processing priority of category information, and impression formation instructions. However, consistency had subtle effects on evaluative judgments if the information about a target person was not strongly valenced and experimental conditions were optimal. Concluding, in principle, both stereotype valence and consistency can play a role in evaluative judgments of stereotypic target persons. However, the more subtle influence of consistency does not seem to substantially influence evaluations of stereotyped target persons. Implications for fluency research and stereotype disconfirmation are discussed.
Social Cueing of Numerical Magnitude: Observed Head Orientation Influences Number Processing
(2019)
In many parts of the modern world, numbers are used as tools to describe spatial relationships, be it heights, latitudes, or distances. However, this connection goes deeper as a myriad of studies showed that number representations are rooted in space (vertical, horizontal, and/or radial). For instance, numbers were shown to affect spatial perception and, conversely, perceptions or movements in space were shown to affect number estimations. This bidirectional link has already found didactic application in the classroom when children are taught the meaning of numbers. However, our knowledge about the cognitive (and neuropsychological) processes underlying the numerical magnitude operations is still very limited.
Several authors indicated that the processing within peripersonal space (i.e. the space surrounding the body in reaching distance) and numerical magnitude operations are functionally equivalent. This assumption has several implications that the present work aims at describing. For instance, vision and visuospatial attention orienting play a prominent role for processing within peripersonal space. Indeed, both neuropsychological and behavioral studies also suggested a similar role of vision and visuospatial attention orienting for number processing. Moreover, social cognition research showed that movements, posture and gestures affect not only the representation of one's own peripersonal space, but also the visuospatial attention behavior of an observer. Against this background, the current work tests the specific implication of the functional equivalence assumption that the spatial attention response to an observed person’s posture should extend to the observer’s numerical magnitude operations.
The empirical part of the present work tests the spatial attention response of observers to vertical head postures (with continuing eye contact to the observer) in both perceptual and numerical space. Two experimental series are presented that follow the steps from the observation of another person’s vertical head orientation (within his/her peripersonal space) to the observer’s attention orienting response (Experimental Series A), and from there to the observer’s magnitude operations with numbers (Experimental Series B). Results show that the observation of a movement from a neutral to a vertical head orientation (Experiment 1), as well as the observation of the vertical head orientation alone (Experiment 3), shifted the observer’s spatial attention in correspondence with the direction information of the observed head (up vs. down). A movement from a vertical to a neutral end position, however, had no effect on the observer’s spatial attention orienting response (Experiment 2). Furthermore, following a down-tilted head posture (relative to an up- or non-tilted head orientation), observers generated smaller numbers in a random number generation task (range 1-9, Experiment 4), gave smaller estimates to numerical trivia questions (mostly multi-digit numbers, Experiment 5), and, in a free choice task, less frequently chose response keys that were associated with larger numerical magnitudes in an intermixed numerical magnitude task.
Experimental Series A served as groundwork for Experimental Series B, as it demonstrated that observing another person’s head orientation indeed triggered the expected directional attention orienting response in the observer. Based on this preliminary work, the results of Experimental Series B lend support to the assumption that numerical magnitude operations are grounded in visuospatial processing of peripersonal space. Thus, the present studies brought together numerical and social cognition as well as peripersonal space research. Moreover, the Empirical Part of the present work provides the basis for elaborating on the role of processing within peripersonal space in terms of Walsh’s (2003, 2013) Theory of Magnitude. In this context, a specification of the Theory of Magnitude was staked out in a processing model that stresses the pivotal role of spatial attention orienting. Implications for mental magnitude operations are discussed. Possible applications in the classroom and beyond are described.
In most foreign language learning contexts, there are only rare chances for contact with native speakers of the target language. In such a situation, reading plays an important role in language acquisition as well as in gaining cultural information about the target language and its speakers.
Previous research indicates that reading in a foreign language is a complex process influenced by various linguistic, cognitive, and affective factors. The aim of the present study was to test two structural models of the relationship between reading comprehension in the native language (L1), English (L2) reading motivation, metacognitive awareness of L2 reading strategies, and reading comprehension of English as a foreign language in two samples. Furthermore, the current study aimed to examine differences between Egyptian and German students in their perceived use of reading strategies while reading English texts, as well as to explore the pattern of their motivation toward reading English texts. For this purpose, 401 students were recruited from Germany (n=200) and Egypt (n=201). Metacognitive awareness of reading strategies was assessed with the Survey of Reading Strategies (SORS), a self-report questionnaire developed by Mokhtari and Sheorey (2002). L2 reading motivation was measured by a reading motivation survey (L2RMQ) based on the reviewed reading motivation research. In addition, two reading tests were administered, one measuring reading comprehension in the native language (German/Arabic) and the other measuring English reading comprehension.
To analyze the collected data, descriptive statistics and independent t-tests were performed. In addition, further analysis using structural equation modeling was applied to test the strength of relationships between the variables under study.
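The group comparisons described above can be sketched with a Welch t-test. The sketch below uses synthetic data: the sample sizes match the study, but all scores, means, and spreads are invented for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical SORS strategy-use scores (Likert-scale means) for two samples;
# the distributions below are assumptions, not the study's actual data.
german = rng.normal(3.4, 0.6, 200)
egyptian = rng.normal(3.7, 0.6, 201)

# Welch's t-test does not assume equal variances between groups.
t, p = stats.ttest_ind(german, egyptian, equal_var=False)

# Cohen's d as a standardized effect size for the group difference.
pooled_sd = np.sqrt((german.var(ddof=1) + egyptian.var(ddof=1)) / 2)
d = (egyptian.mean() - german.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```

Descriptive statistics plus such pairwise tests address the group-difference questions; the structural relationships among the latent variables require the separate structural equation models reported below.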
The results revealed that L1 reading comprehension, whether in German or Arabic, had the strongest relationship with L2 reading comprehension. The relationship between L2 intrinsic reading motivation and L2 reading comprehension, however, was not significant in either the German or the Egyptian model. On the other hand, the relationships between L2 extrinsic reading motivation, metacognitive awareness of reading strategies, and L2 reading comprehension were significant only in the German sample. These results and their pedagogical implications for education and practice are discussed.
Since exposure therapy for anxiety disorders incorporates extinction of contextual anxiety, relapses may be due to reinstatement processes. Animal research has demonstrated more stable extinction memory and less anxiety relapse following vagus nerve stimulation (VNS). We report a valid human three-day context conditioning, extinction, and return-of-anxiety protocol, which we used to examine effects of transcutaneous VNS (tVNS). Seventy-five healthy participants received electric stimuli (unconditioned stimuli, US) during acquisition (Day 1) when guided through one virtual office (anxiety context, CTX+) but never in another (safety context, CTX−). During extinction (Day 2), participants received tVNS, sham, or no stimulation and revisited both contexts without US delivery. On Day 3, participants received three USs for reinstatement, followed by a test phase. Successful acquisition (i.e. startle potentiation and lower valence, higher arousal, anxiety, and contingency ratings in CTX+ versus CTX−), the disappearance of these effects during extinction, and successful reinstatement indicate the validity of this paradigm. Interestingly, we found generalized reinstatement in startle responses and differential reinstatement in valence ratings. Altogether, our protocol serves as a valid conditioning paradigm. The reinstatement effects indicate different anxiety networks underlying physiological versus verbal responses. However, tVNS affected neither extinction nor reinstatement, which calls for validation and improvement of the stimulation protocol.
Strong bottom-up impulses and weak top-down control may interactively lead to overeating and, consequently, weight gain. In the present study, female university freshmen were tested at the start of the first semester and again at the start of the second semester. Attentional bias toward high- or low-calorie food-cues was assessed using a dot-probe paradigm and participants completed the Barratt Impulsiveness Scale. Attentional bias and motor impulsivity interactively predicted change in body mass index: motor impulsivity positively predicted weight gain only when participants showed an attentional bias toward high-calorie food-cues. Attentional and non-planning impulsivity were unrelated to weight change. Results support findings showing that weight gain is prospectively predicted by a combination of weak top-down control (i.e. high impulsivity) and strong bottom-up impulses (i.e. high automatic motivational drive toward high-calorie food stimuli). They also highlight the fact that only specific aspects of impulsivity are relevant in eating and weight regulation.
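A minimal sketch of how a dot-probe attentional bias score is typically computed. All reaction times below are simulated, and the exact trial structure and scoring of the study may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-trial reaction times (ms) from a dot-probe task:
# "congruent"   = probe replaces the high-calorie food cue,
# "incongruent" = probe replaces the paired low-calorie/neutral cue.
rt_congruent = rng.normal(480, 40, 80)
rt_incongruent = rng.normal(500, 40, 80)

# Attentional bias score: positive values indicate attention was drawn
# toward high-calorie cues (responses are faster when the probe appears there).
bias = rt_incongruent.mean() - rt_congruent.mean()
print(f"bias toward high-calorie cues: {bias:.1f} ms")
```

A per-participant bias score of this kind, entered into an interaction with the impulsivity measure, is one common way to test the moderation pattern the abstract reports.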
The purpose of the present work is the unification of two major approaches to moral judgment. Kohlberg's well-known stage theory assumes a sequence of discrete stages that underlie all moral judgment. Stage theory recognizes the problem of integrating considerations but offers no way to accomplish such integration, even for information from any one stage. And, of course, the stage concept denies any significant integration across different stages. Thus, research on moral judgment needs to study the integration problem, which can be tested within Anderson's theory of information integration. The main purpose of the present study was to extend this unificationist approach to the issue of sexual morality. A novel task presents information from two very different stages. In contrast to discreteness, the stage informers were positively correlated in punishment judgments of both genders about consensual sex among juveniles. Furthermore, subjects integrated considerations from these very different stages, again in contrast to the hypothesis that only a single stage is operative at any time.
Large-Scale Assessment of a Fully Automatic Co-Adaptive Motor Imagery-Based Brain Computer Interface
(2016)
In recent years, Brain Computer Interface (BCI) technology has benefited from the development of sophisticated machine learning methods that let the user operate the BCI after only a few calibration trials. One remarkable example is the recent development of co-adaptive techniques, which have proved to extend the use of BCIs to people unable to achieve successful control with the standard BCI procedure. These improvements are especially essential for BCIs based on the modulation of the Sensorimotor Rhythm (SMR), since a non-negligible percentage of users is unable to operate SMR-BCIs efficiently. In this study we evaluated a fully automatic co-adaptive BCI system on a large scale for the first time. A pool of 168 participants naive to BCIs operated the co-adaptive SMR-BCI in a single session. Different psychological interventions were performed prior to the BCI session in order to investigate how motor coordination training and relaxation could influence BCI performance. A neurophysiological indicator based on the Power Spectral Density (PSD) was extracted from a recording of a few minutes of resting-state brain activity and tested as a predictor of BCI performance. Results show that the majority of participants could reach high accuracies in operating the BCI before the end of the session. BCI performance could be significantly predicted by the neurophysiological indicator, consolidating the validity of the previously developed model. Nevertheless, about 22% of users still performed significantly below the threshold of efficient BCI control at the end of the session. Since inter-subject variability remains the major problem of BCI technology, we point out crucial issues for those who did not achieve sufficient control. Finally, we propose developments to move a step forward toward the applicability of these promising co-adaptive methods.
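A PSD-based resting-state indicator of the kind described above can be illustrated as follows. This is a simplified sketch, not the study's actual predictor: the synthetic signal, sampling rate, and band limits are all assumptions.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(2)
fs = 250  # assumed sampling rate in Hz
t = np.arange(0, 120, 1 / fs)  # two minutes of simulated resting-state "EEG"

# Synthetic signal: broadband noise plus a 10 Hz sensorimotor-rhythm component.
signal = rng.normal(0, 1, t.size) + 0.8 * np.sin(2 * np.pi * 10 * t)

# Welch estimate of the power spectral density over the recording.
f, pxx = welch(signal, fs=fs, nperseg=2 * fs)

# Band power in the 8-13 Hz (mu/alpha) range relative to 1-40 Hz broadband:
# one simple way to summarize resting SMR strength as a candidate predictor.
band = (f >= 8) & (f <= 13)
broad = (f >= 1) & (f <= 40)
smr_indicator = pxx[band].sum() / pxx[broad].sum()
print(f"relative SMR band power: {smr_indicator:.2f}")
```

A scalar of this kind, computed per participant from a short resting recording, can then be correlated with later BCI accuracy.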
Research on the deployment and use of technology to assist learning has seen a significant rise over the last decades (Aparicio et al., 2017). The focus on course quality, technology, learning outcome, and learner satisfaction in e-learning has led to insufficient attention by researchers to the individual characteristics of learners (Cidral et al., 2017; Hsu et al., 2013). The current work aims to bridge this gap by investigating characteristics identified by previous works, and backed by theory, as influential individual differences in e-learning. These learner characteristics have been suggested as motivational factors (Edmunds et al., 2012) in learners' decisions to interact and exchange information (Luo et al., 2017).
In this work, e-learning is defined as interaction-dependent information seeking and sharing enabled by technology, approached primarily from a media psychology perspective. The roles of learner characteristics, namely beliefs about the source of knowledge (Schommer, 1990), learning styles (Felder & Silverman, 1988), need for affect (Maio & Esses, 2001), need for cognition (Cacioppo & Petty, 1982), and power distance (Hofstede, 1980), in interactions to seek and share information in e-learning are investigated. These investigations were shaped by theory and empirical lessons, as briefly outlined in the following. Theoretical support is derived from the technology acceptance model (TAM) by the psychologist Davis (1989) and the hyperpersonal model by the communication scientist Walther (1996). The TAM was used to describe the influence of learner characteristics on decisions to use e-learning systems (Stantchev et al., 2014). The hyperpersonal model describes why computer-mediated communication thrives in e-learning (Kaye et al., 2016) and how learners interpret messages exchanged online (Hansen et al., 2015). This theoretical framework was followed by empirical reviews, which justified the use of interaction and information seeking-sharing as key components of e-learning as well as the selection of learner characteristics. The reviews provided suggestions for the measurement of variables (Kühl et al., 2014) and the investigation design (Dascalau et al., 2015). Investigations were designed and implemented through surveys and quasi-experiments, which were used for three preliminary studies and two main studies. Samples were selected from Germany and Ghana, with the same variables tested in both countries. Hypotheses were tested with interaction and information seeking-sharing as dependent variables, while beliefs about the source of knowledge, learning styles, need for affect, need for cognition, and power distance were independent variables.
Firstly, using analyses of variance, the influence of beliefs about the source of knowledge on learners' interaction choices was supported. Secondly, the role of need for cognition in learners' interaction choices was supported by results from a logistic regression. Thirdly, results from multiple linear regressions backed the influence of need for cognition and power distance on the information seeking-sharing behaviour of learners. Fourthly, the relationship between need for affect and need for cognition was supported. The findings may have implications for media psychology research, the theories used in this work, research on e-learning, the measurement of learner characteristics, and the design of e-learning platforms. They suggest that learners' beliefs about the source of knowledge, their need for cognition, and their power distance can influence decisions to interact and to seek or share information. The outlook from the reviews and findings in this work predicts more research on learner characteristics and a corresponding increase in the use of e-learning by individuals. It is suggested that future studies investigate the relationship between learner autonomy and power distance. Studies on inter-cultural similarities amongst e-learners in different populations are also suggested.
The present thesis addresses cognitive processing of voice information. Based on general theoretical concepts regarding mental processes it will differentiate between modular, abstract information processing approaches to cognition and interactive, embodied ideas of mental processing. These general concepts will then be transferred to the context of processing voice-related information in the context of parallel face-related processing streams. One central issue here is whether and to what extent cognitive voice processing can occur independently, that is, encapsulated from the simultaneous processing of visual person-related information (and vice versa). In Study 1 (Huestegge & Raettig, in press), participants are presented with audio-visual stimuli displaying faces uttering digits.
Audiovisual gender congruency was manipulated: There were male and female faces, each uttering digits with either a male or a female voice (all stimuli were AV-synchronized). Participants were asked to categorize the gender of either the face or the voice by pressing one of two keys in each trial. A central result was that audio-visual gender congruency affected performance: Incongruent stimuli were categorized more slowly and with more errors, suggesting a strong cross-modal interaction of the underlying visual and auditory processing routes. Additionally, the effect of incongruent visual information on auditory classification was stronger than the effect of incongruent auditory information on visual categorization, suggesting visual dominance over auditory processing in the context of gender classification. A gender congruency effect was also present under high cognitive load. Study 2 (Huestegge, Raettig, & Huestegge, in press) utilized the same (gender-congruent and -incongruent) stimuli but different tasks for the participants, namely categorizing the spoken digits (into odd/even or smaller/larger than 5). This should effectively direct attention away from gender information, which was no longer task-relevant. Nevertheless, congruency effects were still observed in this study. This suggests a relatively automatic processing of cross-modal gender information, which
eventually affects basic speech-based information processing. Study 3 (Huestegge, subm.) focused on the ability of participants to match unfamiliar voices to (either static or dynamic) faces. One result was that participants were indeed able to match voices to faces. Moreover, there was no evidence for any performance increase when dynamic (vs. mere static) faces had to be matched to concurrent voices. The results support the idea that common person-related source information affects both vocal and facial features, and implicit corresponding knowledge appears to be used by participants to successfully complete face-voice matching. Taken together, the three studies (Huestegge, subm.; Huestegge & Raettig, in press; Huestegge et al., in press) provided information to further develop current theories of voice processing (in the context of face processing). On a general level, the results of all three studies are not in line with an abstract, modular view of cognition, but rather lend further support to interactive, embodied accounts of mental processing.
This dissertation highlights various aspects of basic social attention, choosing versatile approaches to disentangle the precise mechanisms underlying the preference to focus on other human beings. The progressive examination of different social processes is contrasted with aspects of previously adopted principles of general attention. Recent research investigating eye movements during free exploration revealed a clear and robust social bias, especially toward the faces of human beings depicted in naturalistic scenes. However, free viewing implies a combination of mechanisms, namely automatic attention (bottom-up), goal-driven allocation (top-down), and contextual cues, and requires consideration of overt orienting (open exploration using the eyes) as well as covert orienting (peripheral attention without eye movements). Within the scope of this dissertation, all of these aspects have been disentangled in three studies to provide a thorough investigation of the different influences on social attention mechanisms.
In the first study (section 2.1), we implemented top-down manipulations targeting non-social features in a social scene to test competing resources. Interestingly, attention towards social aspects prevailed, even though this was detrimental to completing the requirements. Furthermore, the tendency of this bias was evident for overall fixation patterns, as well as fixations occurring directly after stimulus onset, suggesting sustained as well as early preferential processing of social features. Although the introduction of tasks generally changes gaze patterns, our results imply only subtle variance when stimuli are social. Concluding, this experiment indicates that attention towards social aspects remains preferential even in light of top-down demands.
The second study (section 2.2) comprised two separate experiments: one in which we investigated reflexive covert attention, and another in which we tested reflexive as well as sustained overt attention, using images in which a human being was unilaterally located on either the left or the right half of the scene. The first experiment used a modified dot-probe paradigm, in which peripheral probes were presented either congruently on the side of the social aspect or incongruently on the non-social side. This was based on the assumption that social features would act similarly to cues in traditional spatial cueing paradigms, thereby facilitating reaction times for probes presented on the social half as opposed to the non-social half. Indeed, results reflected such a congruency effect. The second experiment investigated these reflexive mechanisms by monitoring eye movements and specifying the locations of saccades and fixations for short as well as long presentation times. Again, we found the majority of initial saccades to be congruently directed to the social side of the stimulus. Furthermore, we replicated findings for sustained attention processes, with the highest fixation densities over the head region of the displayed human being.
The third study (section 2.3) tackled the other mechanism proposed in the attention dichotomy, the bottom-up influence. Specifically, we reduced the available contextual information of a scene by using a gaze-contingent display, in which only the currently fixated region was visible to the viewer while the rest of the image remained masked. Participants thus had to voluntarily shift their gaze in order to explore the stimulus. First, results replicated the social bias in free-viewing displays. Second, the preference to select social features was also evident in gaze-contingent displays. Third, we found more recurrent gaze patterns for social images than for non-social ones in both viewing modalities. Taken together, these findings imply a top-down driven preference for social features that is largely independent of contextual information.
Importantly, for all experiments we took the saliency predictions of different computational algorithms into consideration to ensure that the observed social bias was not a result of high physical saliency within these areas. For our second experiment, we even reduced the stimulus set to those images that yielded lower mean and peak saliency for the side of the stimulus containing the social information, considering algorithms based on low-level features as well as on pre-trained high-level features incorporated in deep-learning algorithms.
Our experiments offer new insights into single attentional mechanisms with regard to static, social, naturalistic scenes and enable a further understanding of basic social processing as distinct from non-social attention. The replicability and consistency of our findings across experiments speak for a robust effect, attributing to social attention an exceptional role within the general attention construct, not only behaviorally but potentially also at the neuronal level, and further allowing implications for clinical populations with impaired social functioning.
Endogenous Testosterone and Exogenous Oxytocin Modulate Attentional Processing of Infant Faces
(2016)
Evidence indicates that hormones modulate the intensity of maternal care. Oxytocin is known for its positive influence on maternal behavior and its important role in childbirth. In contrast, testosterone promotes egocentric choices and reduces empathy. Further, testosterone decreases during parenthood, which could be an adaptation to increased parental investment. The present study investigated the interaction between testosterone and oxytocin in attentional control and their influence on attention to baby schema in women. Higher endogenous testosterone was expected to decrease selective attention to child portraits in a face-in-the-crowd paradigm, while oxytocin was expected to counteract this effect. As predicted, women with higher salivary testosterone were slower in orienting attention to infant targets in the context of adult distractors. Interestingly, reaction times to infant and adult stimuli decreased after oxytocin administration, but only in women with high endogenous testosterone. These results suggest that oxytocin may counteract the adverse effects of testosterone on a central aspect of social behavior and maternal caretaking.
Epigenetic signatures such as methylation of the monoamine oxidase A (MAOA) gene have been found to be altered in panic disorder (PD). Hypothesizing temporal plasticity of epigenetic processes as a mechanism of successful fear extinction, the present psychotherapy-epigenetic study investigated, for what we believe is the first time, MAOA methylation changes during the course of exposure-based cognitive behavioral therapy (CBT) in PD. MAOA methylation was compared between N=28 female Caucasian PD patients (discovery sample) and N=28 age- and sex-matched healthy controls via direct sequencing of sodium bisulfite-treated DNA extracted from blood cells. MAOA methylation was furthermore analyzed at baseline (T0) and after a 6-week CBT (T1) in the discovery sample, paralleled by a waiting period in healthy controls, as well as in an independent sample of female PD patients (N=20). Patients exhibited lower MAOA methylation than healthy controls (P<0.001), and baseline PD severity correlated negatively with MAOA methylation (P=0.01). In the discovery sample, MAOA methylation increased up to the level of healthy controls along with CBT response (number of panic attacks; T0-T1: +3.37±2.17%), while non-responders further decreased in methylation (-2.00±1.28%; P=0.001). In the replication sample, increases in MAOA methylation correlated with agoraphobic symptom reduction after CBT (P=0.02-0.03). The present results support previous evidence for MAOA hypomethylation as a PD risk marker and suggest the reversibility of MAOA hypomethylation as a potential epigenetic correlate of response to CBT. The emerging notion of epigenetic signatures as a mechanism of action of psychotherapeutic interventions may promote epigenetic patterns as biomarkers of lasting extinction effects.
Dyspnea is common in many cardiorespiratory diseases. Already the anticipation of this aversive symptom elicits fear in many patients resulting in unfavorable health behaviors such as activity avoidance and sedentary lifestyle. This study investigated brain mechanisms underlying these anticipatory processes. We induced dyspnea using resistive-load breathing in healthy subjects during functional magnetic resonance imaging. Blocks of severe and mild dyspnea alternated, each preceded by anticipation periods. Severe dyspnea activated a network of sensorimotor, cerebellar, and limbic areas. The left insular, parietal opercular, and cerebellar cortices showed increased activation already during dyspnea anticipation. Left insular and parietal opercular cortex showed increased connectivity with right insular and anterior cingulate cortex when severe dyspnea was anticipated, while the cerebellum showed increased connectivity with the amygdala. Notably, insular activation during dyspnea perception was positively correlated with midbrain activation during anticipation. Moreover, anticipatory fear was positively correlated with anticipatory activation in right insular and anterior cingulate cortex. The results demonstrate that dyspnea anticipation activates brain areas involved in dyspnea perception. The involvement of emotion-related areas such as insula, anterior cingulate cortex, and amygdala during dyspnea anticipation most likely reflects anticipatory fear and might underlie the development of unfavorable health behaviors in patients suffering from dyspnea.
Detecting whether a suspect possesses incriminating (e.g., crime-related) information can provide valuable decision aids in court. To this end, the Concealed Information Test (CIT) has been developed and is currently applied on a regular basis in Japan. But whereas research has revealed a high validity of the CIT in student and normal populations, research investigating its validity in forensic samples is scarce. This applies even more to the reaction-time-based CIT (RT-CIT), for which no such research is available so far. The current study tested the application of the RT-CIT for an imaginary mock crime scenario both in a sample of prisoners (n = 27) and in a matched control group (n = 25). Results revealed a high validity of the RT-CIT for discriminating between crime-related and crime-unrelated information, visible in medium to very high effect sizes for error rates and reaction times. Interestingly, in accordance with theories that criminal offenders may have worse response-inhibition capacities and that response inhibition plays a crucial role in the RT-CIT, CIT effects in the error rates were even elevated in the prisoners compared to the control group. No support for this hypothesis could, however, be found in reaction-time CIT effects. Also, performance in a standard Stroop task, conducted to measure executive functioning, did not differ between the groups, and no correlation was found between Stroop task performance and RT-CIT performance. Despite frequently raised concerns that the RT-CIT may not be applicable in non-student and forensic populations, our results suggest that such use is possible and that effects are quite large. Future research should build on these findings by increasing the realism of the crime and interrogation situation and by further investigating the replicability and theoretical substantiation of increased effects in non-student and forensic samples.
A negative mood-congruent attention bias has been consistently observed, for example, in clinical studies on major depression. This bias is assumed to be dysfunctional in that it supports maintaining a sad mood, whereas a potentially adaptive role has largely been neglected. Previous experiments involving sad mood induction techniques found a negative mood-congruent attention bias specifically for young individuals, explained by an adaptive need for information transfer in the service of mood regulation. In the present study we investigated the attentional bias in typically developing children (aged 6–12 years) when happy and sad moods were induced. Crucially, we manipulated the age (adult vs. child) of the displayed pairs of facial expressions depicting sadness, anger, fear and happiness. The results indicate that sad children indeed exhibited a mood specific attention bias toward sad facial expressions. Additionally, this bias was more pronounced for adult faces. Results are discussed in the context of an information gain which should be stronger when looking at adult faces due to their more expansive life experience. These findings bear implications for both research methods and future interventions.
Altruistic punishment is connected to trait anger, not trait altruism, if compensation is available
(2018)
Altruistic punishment and altruistic compensation are important concepts used to investigate altruism. However, altruistic punishment has been found to be correlated with anger. We were interested in whether altruistic punishment and altruistic compensation are both driven by trait altruism and trait anger, or whether the influence of those two traits is specific to one of the behavioral options. We found that when participants could apply altruistic compensation and altruistic punishment together in one paradigm, trait anger predicted only altruistic punishment and trait altruism predicted only altruistic compensation. Interestingly, these relations are disguised in classical altruistic punishment and altruistic compensation paradigms, where participants can only either punish or compensate. Hence, altruistic punishment and altruistic compensation paradigms should be merged if one is interested in trait altruism without the confounding influence of trait anger.
Although questionable research practices (QRPs) and p-hacking have received attention in recent years, little research has focused on their prevalence and acceptance among students. Students are the researchers of the future and will come to represent the field. Therefore, they should not learn to use and accept QRPs, which would reduce their ability to produce and evaluate meaningful research. A total of 207 psychology students and recent graduates provided self-report data on the prevalence and predictors of QRPs. Attitudes towards QRPs, the belief that significant results constitute better science or lead to better grades, motivation, and stress levels served as predictors. Furthermore, we assessed perceived supervisor attitudes towards QRPs as an important predictive factor. The results were in line with estimates of QRP prevalence from academia. The best predictor of QRP use was students' own QRP attitudes. Perceived supervisor attitudes exerted both a direct effect and an indirect effect via student attitudes. Motivation to write a good thesis was a protective factor, whereas stress had no effect. Students in this sample did not subscribe to the belief that significant results were better for science or for their grades, and such beliefs did not impact QRP attitudes or use. Finally, students engaged in more QRPs pertaining to reporting and analysis than to study design. We conclude that supervisors have an important function in shaping students' attitudes towards QRPs and can improve their research practices by motivating them well. Furthermore, this research provides some impetus towards identifying predictors of QRP use in academia.
Arrow cues and other overlearned spatial symbols automatically orient attention according to their spatial meaning. This renders them similar to exogenous cues that occur at the stimulus location. Exogenous cues trigger shifts of attention even when they are presented subliminally. Here, we investigate to what extent the mechanisms underlying the orienting of attention by exogenous cues and by arrow cues are comparable by analyzing the effects of visible and masked arrow cues on attention. In Experiment 1, we presented arrow cues with overall 50% validity. Visible cues, but not masked cues, led to shifts of attention. In Experiment 2, the arrow cues had an overall validity of 80%. Now both visible and masked arrows led to shifts of attention. This is in line with findings that subliminal exogenous cues capture attention only in a top-down contingent manner, that is, when the cues fit the observer’s intentions.
The present study investigated event-related brain potentials elicited by true and false negated statements to evaluate whether discrimination of the truth value of negated information relies on conscious processing and requires higher-order cognitive processing in healthy subjects across different levels of stimulus complexity. The stimulus material consisted of true and false negated sentences (sentence level) and prime-target expressions (word level). Stimuli were presented acoustically, and no overt behavioral response was required of the participants. Event-related brain potentials to target words preceded by true and false negated expressions were analyzed both within the group and at the single-subject level. Across the different processing conditions (word pairs and sentences), target words elicited a frontal negativity and a late positivity in the time window from 600-1000 msec post target-word onset. The amplitudes of both brain potentials varied as a function of the truth value of the negated expressions. Results were confirmed at the single-subject level. In sum, our results support recent suggestions according to which evaluation of the truth value of a negated expression is a time- and cognitively demanding process that cannot be solved automatically and thus requires conscious processing. Our paradigm provides insight into the higher-order processing related to language comprehension and reasoning in healthy subjects. Future studies are needed to evaluate whether our paradigm also proves sensitive for the detection of consciousness in non-responsive patients.
Background:
This study investigated the relation between social desirability and self-reported physical activity in web-based research.
Findings:
A longitudinal study (N = 5,495, 54% women) was conducted on a representative sample of the Dutch population using the Marlowe-Crowne Scale as social desirability measure and the short form of the International Physical Activity Questionnaire. Social desirability was not associated with self-reported physical activity (in MET-minutes/week), nor with its sub-behaviors (i.e., walking, moderate-intensity activity, vigorous-intensity activity, and sedentary behavior). Socio-demographics (i.e., age, sex, income, and education) did not moderate the effect of social desirability on self-reported physical activity and its sub-behaviors.
Conclusions:
This study does not cast doubt on the usefulness of the Internet as a medium to collect self-reports on physical activity.
The Concealed Information Test (CIT) is a well-validated means to detect whether someone possesses certain (e.g., crime-relevant) information. The current study investigated whether alcohol intoxication during CIT administration influences reaction time (RT) CIT-effects. Two opposing predictions can be made. First, by decreasing attention to critical information, alcohol intoxication could diminish CIT-effects. Second, by hampering the inhibition of truthful responses, alcohol intoxication could increase CIT-effects. A correlational field design was employed. Participants (n = 42) were recruited and tested at a bar, where alcohol consumption was voluntary and incidental. Participants completed a CIT in which they were instructed to hide knowledge of their true identity. Blood alcohol concentration (BAC) was estimated via a breath alcohol test. Results revealed that higher BAC levels were correlated with larger CIT-effects. Our results demonstrate that robust CIT-effects can be obtained even when testing conditions differ from typical laboratory settings, and they strengthen the idea that response inhibition contributes to the RT-CIT effect.
Our object recognition abilities, a direct product of our experience with objects, are fine-tuned to perfection. Left temporal and lateral areas along the dorsal, action-related stream, as well as left infero-temporal areas along the ventral, object-related stream, are engaged in object recognition. Here we show that expertise modulates the activity of dorsal areas in the recognition of man-made objects with clearly specified functions. Expert chess players were faster than chess novices in identifying chess objects and their functional relations. The experts’ advantage was domain-specific, as there were no differences between groups in a control task featuring geometrical shapes. The pattern of eye movements supported the notion that experts’ extensive knowledge about domain objects and their functions enabled superior recognition even when experts were not directly fixating the objects of interest. Functional magnetic resonance imaging (fMRI) exclusively related the areas along the dorsal stream to chess-specific object recognition. Besides the commonly involved left temporal and parietal lateral brain areas, we found that, only in experts, homologous areas in the right hemisphere were also engaged in chess-specific object recognition. Based on these results, we discuss whether skilled object recognition involves not only a more efficient version of the processes found in non-skilled recognition, but also qualitatively different cognitive processes that engage additional brain areas.
Human actions are generally not determined by external stimuli, but by internal goals and by the urge to evoke desired effects in the environment. To achieve these effects, humans typically have to act. But at times, deciding not to act can be better suited, or even the only way, to reach a desired effect. What mental processes are involved when people decide not to act in order to reach certain effects? From the outside it may seem that nothing remarkable is happening, because no action can be observed. However, I present three studies which disclose the cognitive processes that control nonactions.
The present experiments address situations where people intentionally decide to omit certain actions in order to produce a predictable effect in the environment. These experiments are based on the ideomotor hypothesis, which suggests that bidirectional associations can be formed between actions and the resulting effects. Because of these associations, anticipating the effects can in turn activate the respective action. The results of the present experiments show that associations can be formed between nonactions (i.e., the intentional decision not to act) and the resulting effects. Due to these associations, perceiving the nonaction effects encourages not acting (Exp. 1–3). What is more, planning a nonaction seems to come with an activation of the effects that inevitably follow the nonaction (Exp. 4–5). These results suggest that the ideomotor hypothesis can be expanded to nonactions and that nonactions are cognitively represented in terms of their sensory effects. Furthermore, nonaction effects can elicit a sense of agency (Exp. 6–8). That is, even though people refrain from acting, the resulting nonaction effects are perceived as self-produced effects.
In a nutshell, these findings demonstrate that intentional nonactions involve specific mechanisms and processes, for instance in effect anticipation and the sense of agency. This means that, while it may seem that nothing remarkable is happening when people decide not to act, complex processes run on the inside, processes that are also involved in intentional actions.
The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance-congruity and imbalance-dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes change depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models on the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias.
The limbic system, and especially the amygdala, have been identified as key structures in emotion induction and regulation. Recently, research has additionally focused on the influence of prefrontal areas on emotion processing in the limbic system and the amygdala. Results from fMRI studies indicate that the prefrontal cortex (PFC) is involved not only in emotion induction but also in emotion regulation. However, fNIRS studies have so far only reported prefrontal brain activation during emotion induction. No study has yet attempted to compare emotion induction and emotion regulation with regard to prefrontal activation measured with fNIRS, to exclude the possibility that the prefrontal brain activation reported in fNIRS studies is mainly caused by automatic emotion regulation processes. This work therefore attempted to distinguish emotion induction from regulation via fNIRS of the prefrontal cortex. Twenty healthy women viewed neutral pictures as a baseline condition, fearful pictures as an induction condition, and reappraised fearful pictures as a regulation condition, in randomized order. As predicted, the view-fearful condition led to higher arousal ratings than the view-neutral condition, with the reappraise-fearful condition in between. In the fNIRS results, the induction condition showed activation of the bilateral PFC compared to the baseline condition (viewing neutral pictures). The regulation condition showed activation only of the left PFC compared to the baseline condition, although the direct comparison between the induction and regulation conditions revealed no significant difference in brain activation. Our study therefore underscores the results of previous fNIRS studies showing prefrontal brain activation during emotion induction and rejects the hypothesis that this prefrontal brain activation might only be a result of automatic emotion regulation processes.
The rise of automated driving will fundamentally change our mobility in the near future. This thesis specifically considers the stage of so-called highly automated driving (Level 3, SAE International, 2014). At this level, a system carries out vehicle guidance in specific application areas, e.g. on highway roads. The driver can temporarily disengage from monitoring the driving task and may use the time to engage in so-called non-driving related tasks (NDR-tasks). However, the driver is still required to resume vehicle control when prompted by the system. This new role of the driver has to be critically examined from a human factors perspective.
The main aim of this thesis was to systematically investigate the impact of different NDR-tasks on driver behavior and take-over performance. Wickens’ (2008) architecture of multiple resource theory was chosen as the theoretical framework, with the building blocks of multiplicity (task interference due to resource overlap), mental workload (task demands), and aspects of executive control or self-regulation. Specific adaptations and extensions of the theory were discussed to account for the context of NDR-task interactions in highly automated driving.
Overall, four driving simulator studies were carried out to investigate the role of these theoretical components. Study 1 showed that drivers concentrated NDR-task engagement in sections of highly automated driving compared to manual driving. In addition, drivers avoided task engagement prior to predictable take-over situations. These results indicate that self-regulatory behavior, as reported for manual driving, also takes place in the context of highly automated driving. Study 2 specifically addressed the impact of NDR-tasks’ stimulus and response modalities on take-over performance. Results showed that visual-manual tasks with high motoric load (including the need to put away a handheld object) had particularly detrimental effects. However, drivers seemed to be aware of task-specific distraction in take-over situations and strictly canceled visual-manual tasks compared to a low-impairing auditory-vocal task. Study 3 revealed that the mental demand of NDR-tasks should also be considered for drivers’ take-over performance. Finally, different human-machine interfaces were developed and evaluated in Simulator Study 4. Concepts including an explicit pre-alert (“notification”) clearly supported drivers’ self-regulation and achieved high usability and acceptance ratings.
Overall, this thesis indicates that the architecture of multiple resource theory provides a useful framework for research in this field. Practical implications arise regarding the potential legal regulation of NDR-tasks as well as the design of elaborated human-machine interfaces.
The novel BackHome system offers individuals with disabilities a range of useful services available via brain-computer interfaces (BCIs) to help restore their independence. This is the first time such technology is ready to be deployed in the real world, that is, at the target end users’ home. This has been achieved through the development of practical electrodes, easy-to-use software, and telemonitoring and home support capabilities, which have been conceived, implemented, and tested within a user-centred design approach. The final BackHome system is the result of a 3-year process involving extensive user engagement to maximize effectiveness, reliability, robustness, and ease of use of a home-based BCI system. The system comprises ergonomic and hassle-free BCI equipment; one-click software services for Smart Home control, cognitive stimulation, and web browsing; and remote telemonitoring and home support tools that enable independent home use by nonexpert caregivers and users. BackHome aims to successfully bring BCIs to the homes of people with limited mobility to restore their independence and ultimately improve their quality of life.
In classical conditioning, an initially neutral stimulus (conditioned stimulus, CS) becomes associated with a biologically salient event (unconditioned stimulus, US), which might be pain (aversive conditioning) or food (appetitive conditioning). After a few associations, the CS is able to initiate either defensive or consummatory responses, respectively. Contrary to aversive conditioning, appetitive conditioning is rarely investigated in humans, although its importance for normal and pathological behaviors (e.g., obesity, addiction) is undeniable. The present study intends to translate animal findings on appetitive conditioning to humans using food as a US. Thirty-three participants were tested between 8 and 10 am without breakfast in order to ensure that they felt hungry. During two acquisition phases, one geometrical shape (avCS+) predicted an aversive US (painful electric shock), another shape (appCS+) predicted an appetitive US (chocolate or salty pretzel according to the participants' preference), and a third shape (CS) predicted neither US. In an extinction phase, these three shapes plus a novel shape (NEW) were presented again without US delivery. Valence and arousal ratings as well as startle and skin conductance (SCR) responses were collected as learning indices. We found successful aversive and appetitive conditioning. On the one hand, the avCS+ was rated as more negative and more arousing than the CS and induced startle potentiation and enhanced SCRs. On the other hand, the appCS+ was rated as more positive than the CS and induced startle attenuation and larger SCRs. In summary, we successfully confirmed animal findings in (hungry) humans by demonstrating appetitive learning and normal aversive learning.