TY - JOUR A1 - Kaussner, Y. A1 - Kuraszkiewicz, A. M. A1 - Schoch, S. A1 - Markel, Petra A1 - Hoffmann, S. A1 - Baur-Streubel, R. A1 - Kenntner-Mabiala, R. A1 - Pauli, P. T1 - Treating patients with driving phobia by virtual reality exposure therapy – a pilot study JF - PLoS ONE N2 - Objectives Virtual reality exposure therapy (VRET) is a promising treatment for patients with fear of driving. The present pilot study is the first one focusing on behavioral effects of VRET on patients with fear of driving as measured by a post-treatment driving test in real traffic. Methods The therapy followed a standardized manual including psychotherapeutic and medical examination, two preparative psychotherapy sessions, five virtual reality exposure sessions, a final behavioral avoidance test (BAT) in real traffic, a closing session, and two follow-up phone assessments after six and twelve weeks. Virtual reality exposure (VRE) was conducted in a driving simulator with a fully equipped mockup. The exposure scenarios were individually tailored to the patients’ anxiety hierarchy. A total of 14 patients were treated. Parameters at the verbal, behavioral, and physiological levels were assessed. Results The treatment helped patients overcome driving fear and avoidance. In the final BAT, all patients mastered driving tasks they had avoided before, 71% showed adequate driving behavior as assessed by the driving instructor, and 93% maintained their treatment success until the second follow-up phone call. Further analyses suggest that the treatment reduces avoidance behavior as well as symptoms of posttraumatic stress disorder as measured by standardized questionnaires (Avoidance and Fusion Questionnaire: p < .10, PTSD Symptom Scale–Self Report: p < .05). Conclusions VRET in driving simulation is a very promising treatment for driving fear. Further research with randomized controlled trials is needed to verify its efficacy. Moreover, simulators with lower-level configurations should be tested to make the approach broadly available in psychotherapy. KW - Mental health therapies KW - Heart rate KW - Animal behavior KW - Instructors KW - Psychometrics KW - Post-traumatic stress disorder KW - Fear KW - Pilot studies Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-201051 VL - 15 IS - 1 ER - TY - JOUR A1 - Paelecke-Habermann, Yvonne A1 - Paelecke, Marko A1 - Mauth, Juliane A1 - Tschisgale, Juliane A1 - Lindenmeyer, Johannes A1 - Kübler, Andrea T1 - A comparison of implicit and explicit reward learning in low risk alcohol users versus people who binge drink and people with alcohol dependence JF - Addictive Behaviors Reports N2 - Chronic alcohol use leads to specific neurobiological alterations in the dopaminergic brain reward system, which probably lead to a reward deficiency syndrome in alcohol dependence. The purpose of our study was to examine the effects of such hypothesized neurobiological alterations at the behavioral level, and more precisely on implicit and explicit reward learning. Alcohol users were classified as dependent drinkers (using the DSM-IV criteria), binge drinkers (using criteria of the U.S. National Institute on Alcohol Abuse and Alcoholism) or low-risk drinkers (following recommendations of the Scientific Board of Trustees of the German Health Ministry). The final sample (n = 94) consisted of 36 low-risk alcohol users, 37 binge drinkers and 21 abstinent alcohol-dependent patients.
Participants were administered a probabilistic implicit reward learning task and an explicit reward- and punishment-based trial-and-error-learning task. Alcohol-dependent patients showed lower performance in implicit and explicit reward learning than low-risk drinkers. Binge drinkers learned less than low-risk drinkers in the implicit learning task. The results support the assumption that binge drinking and alcohol dependence are related to a chronic reward deficit. Binge drinking accompanied by implicit reward learning deficits could increase the risk of developing alcohol dependence. KW - Alcohol dependence KW - Binge drinking KW - Low risk alcohol use KW - Implicit and explicit reward learning Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-201406 VL - 9 ER - TY - JOUR A1 - Citron, Francesca M. M. A1 - Abugaber, David A1 - Herbert, Cornelia T1 - Approach and Withdrawal Tendencies during Written Word Processing: Effects of Task, Emotional Valence, and Emotional Arousal JF - Frontiers in Psychology N2 - The affective dimensions of emotional valence and emotional arousal affect the processing of verbal and pictorial stimuli. Traditional emotion theories assume a linear relationship between these dimensions, with valence determining the direction of a behavior (approach vs. withdrawal) and arousal its intensity or strength. In contrast, according to the valence-arousal conflict theory, both dimensions are interactively related: positive valence and low arousal (PL) are associated with an implicit tendency to approach a stimulus, whereas negative valence and high arousal (NH) are associated with withdrawal. Hence, positive, high-arousal (PH) and negative, low-arousal (NL) stimuli elicit conflicting action tendencies. By extending previous research that used several tasks and methods, the present study investigated whether and how emotional valence and arousal affect subjective approach vs. withdrawal tendencies toward emotional words during two novel tasks. In Study 1, participants had to decide whether they would approach or withdraw from concepts expressed by written words. In Studies 2 and 3, participants had to respond to each word by pressing one of two keys labeled with an arrow pointing upward or downward. Across experiments, positive and negative words, high or low in arousal, were presented. In Study 1 (explicit task), in line with the valence-arousal conflict theory, PH and NL words were responded to more slowly than PL and NH words. In addition, participants decided to approach positive words more often than negative words. In Studies 2 and 3, participants responded faster to positive than negative words, irrespective of their level of arousal. Furthermore, positive words were significantly more often associated with “up” responses than negative words, thus supporting the existence of implicit associations between stimulus valence and response coding (positive is up and negative is down). Hence, in contexts in which participants' spontaneous responses are based on implicit associations between stimulus valence and response, there is no influence of arousal. In line with the valence-arousal conflict theory, arousal seems to affect participants' approach-withdrawal tendencies only when such tendencies are made explicit by the task, and a minimal degree of processing depth is required.
KW - approach KW - withdrawal KW - valence KW - arousal KW - emotion KW - words KW - polarity effects Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-165318 VL - 6 IS - 1935 ER - TY - JOUR A1 - Peperkorn, Henrik M. A1 - Diemer, Julia E. A1 - Alpers, Georg W. A1 - Mühlberger, Andreas T1 - Representation of Patients' Hand Modulates Fear Reactions of Patients with Spider Phobia in Virtual Reality JF - Frontiers in Psychology N2 - Embodiment (i.e., the involvement of a bodily representation) is thought to be relevant in emotional experiences. Virtual reality (VR) is a capable means of activating phobic fear in patients. The representation of the patient’s body (e.g., the right hand) in VR enhances immersion and increases presence, but its effect on phobic fear is still unknown. We analyzed the influence of the presentation of the participant’s hand in VR on presence and fear responses in 32 women with spider phobia and 32 matched controls. Participants sat in front of a table with an acrylic glass container within reaching distance. During the experiment, this setup was concealed by a head-mounted display (HMD). The VR scenario presented via HMD showed the same setup, i.e., a table with an acrylic glass container. Participants were randomly assigned to one of two experimental groups. In one group, fear responses were triggered by fear-relevant visual input in VR (virtual spider in the virtual acrylic glass container), while information about a real but unseen neutral control animal (living snake in the acrylic glass container) was given. The second group received fear-relevant information about the real but unseen situation (living spider in the acrylic glass container), but visual input in VR was kept neutral (virtual snake in the virtual acrylic glass container). Participants were instructed to touch the acrylic glass container with their right hand in 20 consecutive trials. Visibility of the hand was varied randomly in a within-subjects design. We found for all participants that visibility of the hand increased presence independently of the fear trigger. However, in patients, the influence of the virtual hand on fear depended on the fear trigger. When fear was triggered perceptually, i.e., by a virtual spider, the virtual hand increased fear. When fear was triggered by information about a real spider, the virtual hand had no effect on fear. Our results shed light on the significance of different fear triggers (visual, conceptual) in interaction with body representations. KW - virtual reality KW - presence KW - immersion KW - perception KW - fear KW - specific phobia Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-165307 VL - 7 IS - 268 ER - TY - THES A1 - Menne, Isabelle M. T1 - Facing Social Robots – Emotional Reactions towards Social Robots N2 - Ein Army Colonel empfindet Mitleid mit einem Roboter, der versuchsweise Landminen entschärft, und deklariert den Test als inhuman (Garreau, 2007). Roboter bekommen militärische Beförderungen, Beerdigungen und Ehrenmedaillen (Garreau, 2007; Carpenter, 2013). Ein Schildkrötenroboter wird entwickelt, um Kindern beizubringen, Roboter gut zu behandeln (Ackermann, 2018). Der humanoide Roboter Sophia wurde erst kürzlich Saudi-Arabischer Staatsbürger und es gibt bereits Debatten, ob Roboter Rechte bekommen sollen (Delcker, 2018). Diese und ähnliche Entwicklungen zeigen schon jetzt die Bedeutsamkeit von Robotern und die emotionale Wirkung, die diese auslösen.
Dennoch scheinen sich diese emotionalen Reaktionen auf einer anderen Ebene abzuspielen, gemessen an Kommentaren in Internetforen. Dort ist oftmals die Rede davon, wieso jemand überhaupt emotional auf einen Roboter reagieren kann. Tatsächlich ist es, rein rational gesehen, schwierig zu erklären, warum Menschen mit einer leblosen (‚mindless‘) Maschine mitfühlen sollten. Und dennoch zeugen nicht nur oben genannte Berichte, sondern auch erste wissenschaftliche Studien (z.B. Rosenthal- von der Pütten et al., 2013) von dem emotionalen Einfluss den Roboter auf Menschen haben können. Trotz der Bedeutsamkeit der Erforschung emotionaler Reaktionen auf Roboter existieren bislang wenige wissenschaftliche Studien hierzu. Tatsächlich identifizierten Kappas, Krumhuber und Küster (2013) die systematische Analyse und Evaluation sozialer Reaktionen auf Roboter als eine der größten Herausforderungen der affektiven Mensch-Roboter Interaktion. Nach Scherer (2001; 2005) bestehen Emotionen aus der Koordination und Synchronisation verschiedener Komponenten, die miteinander verknüpft sind. Motorischer Ausdruck (Mimik), subjektives Erleben, Handlungstendenzen, physiologische und kognitive Komponenten gehören hierzu. Um eine Emotion vollständig zu erfassen, müssten all diese Komponenten gemessen werden, jedoch wurde eine solch umfassende Analyse bisher noch nie durchgeführt (Scherer, 2005). Hauptsächlich werden Fragebögen eingesetzt (vgl. Bethel & Murphy, 2010), die allerdings meist nur das subjektive Erleben abfragen. Bakeman und Gottman (1997) geben sogar an, dass nur etwa 8% der psychologischen Forschung auf Verhaltensdaten basiert, obwohl die Psychologie traditionell als das ‚Studium von Psyche und Verhalten‘ (American Psychological Association, 2018) definiert wird. Die Messung anderer Emotionskomponenten ist selten. Zudem sind Fragebögen mit einer Reihe von Nachteilen behaftet (Austin, Deary, Gibson, McGregor, Dent, 1998; Fan et al., 2006; Wilcox, 2011). Bethel und Murphy (2010) als auch Arkin und Moshkina (2015) plädieren für einen Multi-Methodenansatz um ein umfassenderes Verständnis von affektiven Prozessen in der Mensch-Roboter Interaktion zu erlangen. Das Hauptziel der vorliegenden Dissertation ist es daher, mithilfe eines Multi-Methodenansatzes verschiedene Komponenten von Emotionen (motorischer Ausdruck, subjektive Gefühlskomponente, Handlungstendenzen) zu erfassen und so zu einem vollständigeren und tiefgreifenderem Bild emotionaler Prozesse auf Roboter beizutragen. Um dieses Ziel zu erreichen, wurden drei experimentelle Studien mit insgesamt 491 Teilnehmern durchgeführt. Mit unterschiedlichen Ebenen der „apparent reality“ (Frijda, 2007) sowie Macht / Kontrolle über die Situation (vgl. Scherer & Ellgring, 2007) wurde untersucht, inwiefern sich Intensität und Qualität emotionaler Reaktionen auf Roboter ändern und welche weiteren Faktoren (Aussehen des Roboters, emotionale Expressivität des Roboters, Behandlung des Roboters, Autoritätsstatus des Roboters) Einfluss ausüben. Experiment 1 basierte auf Videos, die verschiedene Arten von Robotern (tierähnlich, anthropomorph, maschinenartig), die entweder emotional expressiv waren oder nicht (an / aus) in verschiedenen Situationen (freundliche Behandlung des Roboters vs. Misshandlung) zeigten. Fragebögen über selbstberichtete Gefühle und die motorisch-expressive Komponente von Emotionen: Mimik (vgl. Scherer, 2005) wurden analysiert. 
Das Facial Action Coding System (Ekman, Friesen, & Hager, 2002), die umfassendste und am weitesten verbreitete Methode zur objektiven Untersuchung von Mimik, wurde hierfür verwendet. Die Ergebnisse zeigten, dass die Probanden Gesichtsausdrücke (Action Unit [AU] 12 und AUs, die mit positiven Emotionen assoziiert sind, sowie AU 4 und AUs, die mit negativen Emotionen assoziiert sind) sowie selbstberichtete Gefühle in Übereinstimmung mit der Valenz der in den Videos gezeigten Behandlung zeigten. Bei emotional expressiven Robotern konnten stärkere emotionale Reaktionen beobachtet werden als bei nicht-expressiven Robotern. Der tierähnliche Roboter Pleo erfuhr in der Misshandlungs-Bedingung am meisten Mitleid, Empathie, negative Gefühle und Traurigkeit, gefolgt vom anthropomorphen Roboter Reeti und am wenigsten für den maschinenartigen Roboter Roomba. Roomba wurde am meisten Antipathie zugeschrieben. Die Ergebnisse knüpfen an frühere Forschungen an (z.B. Krach et al., 2008; Menne & Schwab, 2018; Riek et al., 2009; Rosenthal-von der Pütten et al., 2013) und zeigen das Potenzial der Mimik für eine natürliche Mensch-Roboter Interaktion. Experiment 2 und Experiment 3 übertrugen die klassischen Experimente von Milgram (1963; 1974) zum Thema Gehorsam in den Kontext der Mensch-Roboter Interaktion. Die Gehorsamkeitsstudien von Milgram wurden als sehr geeignet erachtet, um das Ausmaß der Empathie gegenüber einem Roboter im Verhältnis zum Gehorsam gegenüber einem Roboter zu untersuchen. Experiment 2 unterschied sich von Experiment 3 in der Ebene der „apparent reality“ (Frijda, 2007): in Anlehnung an Milgram (1963) wurde eine rein text-basierte Studie (Experiment 2) einer live Mensch-Roboter Interaktion (Experiment 3) gegenübergestellt. Während die abhängigen Variablen von Experiment 2 aus den Selbstberichten emotionaler Gefühle sowie Einschätzungen des hypothetischen Verhaltens bestand, erfasste Experiment 3 subjektive Gefühle sowie reales Verhalten (Reaktionszeit: Dauer des Zögerns; Gehorsamkeitsrate; Anzahl der Proteste; Mimik) der Teilnehmer. Beide Experimente untersuchten den Einfluss der Faktoren „Autoritätsstatus“ (hoch / niedrig) des Roboters, der die Befehle erteilt (Nao) und die emotionale Expressivität (an / aus) des Roboters, der die Strafen erhält (Pleo). Die subjektiven Gefühle der Teilnehmer aus Experiment 2 unterschieden sich zwischen den Gruppen nicht. Darüber hinaus gaben nur wenige Teilnehmer (20.2%) an, dass sie den „Opfer“-Roboter definitiv bestrafen würden. Ein ähnliches Ergebnis fand auch Milgram (1963). Das reale Verhalten von Versuchsteilnehmern in Milgrams‘ Labor-Experiment unterschied sich jedoch von Einschätzungen hypothetischen Verhaltens von Teilnehmern, denen Milgram das Experiment nur beschrieben hatte. Ebenso lassen Kommentare von Teilnehmern aus Experiment 2 darauf schließen, dass das beschriebene Szenario möglicherweise als fiktiv eingestuft wurde und Einschätzungen von hypothetischem Verhalten daher kein realistisches Bild realen Verhaltens gegenüber Roboter in einer live Interaktion zeichnen können. Daher wurde ein weiteres Experiment (Experiment 3) mit einer Live Interaktion mit einem Roboter als Autoritätsfigur (hoher Autoritätsstatus vs. niedriger) und einem weiteren Roboter als „Opfer“ (emotional expressiv vs. nicht expressiv) durchgeführt. Es wurden Gruppenunterschiede in Fragebögen über emotionale Reaktionen gefunden. 
Dem emotional expressiven Roboter wurde mehr Empathie entgegengebracht und es wurde mehr Freude und weniger Antipathie berichtet als gegenüber einem nicht-expressiven Roboter. Außerdem konnten Gesichtsausdrücke beobachtet werden, die mit negativen Emotionen assoziiert sind während Probanden Nao’s Befehl ausführten und Pleo bestraften. Obwohl Probanden tendenziell länger zögerten, wenn sie einen emotional expressiven Roboter bestrafen sollten und der Befehl von einem Roboter mit niedrigem Autoritätsstatus kam, wurde dieser Unterschied nicht signifikant. Zudem waren alle bis auf einen Probanden gehorsam und bestraften Pleo, wie vom Nao Roboter befohlen. Dieses Ergebnis steht in starkem Gegensatz zu dem selbstberichteten hypothetischen Verhalten der Teilnehmer aus Experiment 2 und unterstützt die Annahme, dass die Einschätzungen von hypothetischem Verhalten in einem Mensch-Roboter-Gehorsamkeitsszenario nicht zuverlässig sind für echtes Verhalten in einer live Mensch-Roboter Interaktion. Situative Variablen, wie z.B. der Gehorsam gegenüber Autoritäten, sogar gegenüber einem Roboter, scheinen stärker zu sein als Empathie für einen Roboter. Dieser Befund knüpft an andere Studien an (z.B. Bartneck & Hu, 2008; Geiskkovitch et al., 2016; Menne, 2017; Slater et al., 2006), eröffnet neue Erkenntnisse zum Einfluss von Robotern, zeigt aber auch auf, dass die Wahl einer Methode um Empathie für einen Roboter zu evozieren eine nicht triviale Angelegenheit ist (vgl. Geiskkovitch et al., 2016; vgl. Milgram, 1965). Insgesamt stützen die Ergebnisse die Annahme, dass die emotionalen Reaktionen auf Roboter tiefgreifend sind und sich sowohl auf der subjektiven Ebene als auch in der motorischen Komponente zeigen. Menschen reagieren emotional auf einen Roboter, der emotional expressiv ist und eher weniger wie eine Maschine aussieht. Sie empfinden Empathie und negative Gefühle, wenn ein Roboter misshandelt wird und diese emotionalen Reaktionen spiegeln sich in der Mimik. Darüber hinaus unterscheiden sich die Einschätzungen von Menschen über ihr eigenes hypothetisches Verhalten von ihrem tatsächlichen Verhalten, weshalb videobasierte oder live Interaktionen zur Analyse realer Verhaltensreaktionen empfohlen wird. Die Ankunft sozialer Roboter in der Gesellschaft führt zu nie dagewesenen Fragen und diese Dissertation liefert einen ersten Schritt zum Verständnis dieser neuen Herausforderungen. N2 - An Army Colonel feels sorry for a robot that defuses landmines on a trial basis and declares the test inhumane (Garreau, 2007). Robots receive military promotions, funerals and medals of honor (Garreau, 2007; Carpenter, 2013). A turtle robot is being developed to teach children to treat robots well (Ackermann, 2018). The humanoid robot Sophia recently became a Saudi Arabian citizen and there are now debates whether robots should have rights (Delcker, 2018). These and similar developments already show the importance of robots and the emotional impact they have. Nevertheless, these emotional reactions seem to take place on a different level, judging by comments in internet forums alone: Most often, emotional reactions towards robots are questioned if not denied at all. In fact, from a purely rational point of view, it is difficult to explain why people should empathize with a mindless machine. However, not only the reports mentioned above but also first scientific studies (e.g. Rosenthal- von der Pütten et al., 2013) bear witness to the emotional influence of robots on humans. 
Despite the importance of researching emotional reactions towards robots, there are few scientific studies on this subject. In fact, Kappas, Krumhuber and Küster (2013) identified effective testing and evaluation of social reactions towards robots as one of the major challenges of affective Human-Robot Interaction (HRI). According to Scherer (2001; 2005), emotions consist of the coordination and synchronization of different components that are linked to each other. These include motor expression (facial expressions), subjective experience, action tendencies, physiological and cognitive components. To fully capture an emotion, all these components would have to be measured, but such a comprehensive analysis has never been performed (Scherer, 2005). Primarily, questionnaires are used (cf. Bethel & Murphy, 2010) but most of them only capture subjective experiences. Bakeman and Gottman (1997) even state that only about 8% of psychological research is based on behavioral data, although psychology is traditionally defined as the 'study of the mind and behavior' (American Psychological Association, 2018). The measurement of other emotional components is rare. In addition, questionnaires have a number of disadvantages (Austin, Deary, Gibson, McGregor, Dent, 1998; Fan et al., 2006; Wilcox, 2011). Bethel and Murphy (2010) as well as Arkin and Moshkina (2015) argue for a multi-method approach to achieve a more comprehensive understanding of affective processes in HRI. The main goal of this dissertation is therefore to use a multi-method approach to capture different components of emotions (motor expression, subjective feeling component, action tendencies) and thus contribute to a more complete and profound picture of emotional processes towards robots. To achieve this goal, three experimental studies were conducted with a total of 491 participants. With different levels of ‘apparent reality’ (Frijda, 2007) and power/control over the situation (cf. Scherer & Ellgring, 2007), the extent to which the intensity and quality of emotional responses to robots change were investigated as well as the influence of other factors (appearance of the robot, emotional expressivity of the robot, treatment of the robot, authority status of the robot). Experiment 1 was based on videos showing different types of robots (animal-like, anthropomorphic, machine-like) in different situations (friendly treatment of the robot vs. torture treatment) while being either emotionally expressive or not. Self-reports of feelings as well as the motoric-expressive component of emotion: facial expressions (cf. Scherer, 2005) were analyzed. The Facial Action Coding System (Ekman, Friesen, & Hager, 2002), the most comprehensive and most widely used method for objectively assessing facial expressions, was utilized for this purpose. Results showed that participants displayed facial expressions (Action Unit [AU] 12 and AUs associated with positive emotions as well as AU 4 and AUs associated with negative emotions) as well as self-reported feelings in line with the valence of the treatment shown in the videos. Stronger emotional reactions could be observed for emotionally expressive robots than non-expressive robots. Most pity, empathy, negative feelings and sadness were reported for the animal-like robot Pleo while watching it being tortured, followed by the anthropomorphic robot Reeti and least for the machine-like robot Roomba. Most antipathy was attributed to Roomba. 
The findings are in line with previous research (e.g., Krach et al., 2008; Menne & Schwab, 2018; Riek et al., 2009; Rosenthal-von der Pütten et al., 2013) and show facial expressions’ potential for a natural HRI. Experiment 2 and Experiment 3 transferred Milgram’s classic experiments (1963; 1974) on obedience into the context of HRI. Milgram’s obedience studies were deemed highly suitable to study the extent of empathy towards a robot in relation to obedience to a robot. Experiment 2 differed from Experiment 3 in the level of ‘apparent reality’ (Frijda, 2007): based on Milgram (1963), a purely text-based study (Experiment 2) was compared with a live HRI (Experiment 3). While the dependent variables of Experiment 2 consisted of self-reports of emotional feelings and assessments of hypothetical behavior, Experiment 3 measured subjective feelings and real behavior (reaction time: duration of hesitation; obedience rate; number of protests; facial expressions) of the participants. Both experiments examined the influence of the factors "authority status" (high / low) of the robot giving the orders (Nao) and the emotional expressivity (on / off) of the robot receiving the punishments (Pleo). The subjective feelings of the participants from Experiment 2 did not differ between the groups. In addition, only a few participants (20.2%) stated that they would definitely punish the "victim" robot. Milgram (1963) found a similar result. However, the real behavior of participants in Milgram's laboratory experiment differed from the estimates of hypothetical behavior of participants to whom Milgram had only described the experiment. Similarly, comments from participants in Experiment 2 suggest that the scenario described may have been considered fictitious and that assessments of hypothetical behavior may not provide a realistic picture of real behavior towards robots in a live interaction. Therefore, another experiment (Experiment 3) was performed with a live interaction with a robot as authority figure (high authority status vs. low) and another robot as "victim" (emotionally expressive vs. non-expressive). Group differences were found in questionnaires on emotional responses. More empathy was shown for the emotionally expressive robot and more joy and less antipathy were reported than for a non-expressive robot. In addition, facial expressions associated with negative emotions could be observed while subjects executed Nao's command and punished Pleo. Although subjects tended to hesitate longer when punishing an emotionally expressive robot and the order came from a robot with low authority status, this difference did not reach significance. Furthermore, all but one subject were obedient and punished Pleo as commanded by the Nao robot. This result stands in stark contrast to the self-reported hypothetical behavior of the participants from Experiment 2 and supports the assumption that assessments of hypothetical behavior in a Human-Robot obedience scenario are not a reliable indicator of real behavior in a live HRI. Situational variables, such as obedience to authorities, even to a robot, seem to be stronger than empathy for a robot. This finding is in line with previous studies (e.g. Bartneck & Hu, 2008; Geiskkovitch et al., 2016; Menne, 2017; Slater et al., 2006), opens up new insights into the influence of robots, but also shows that the choice of a method to evoke empathy for a robot is not a trivial matter (cf. Geiskkovitch et al., 2016; cf. Milgram, 1965).
Overall, the results support the assumption that emotional reactions to robots are profound and manifest both at the subjective level and in the motor component. Humans react emotionally to a robot that is emotionally expressive and looks less like a machine. They feel empathy and negative feelings when a robot is abused, and these emotional reactions are reflected in facial expressions. In addition, people's assessments of their own hypothetical behavior differ from their actual behavior, which is why video-based or live interactions are recommended for analyzing real behavioral responses. The arrival of social robots in society leads to unprecedented questions, and this dissertation provides a first step towards understanding these new challenges. N2 - Are there emotional reactions towards social robots? Could you love a robot? Or, put the other way round: Could you mistreat a robot, tear it apart and sell it? Media reports describe people honoring military robots with funerals, mourning the “death” of a robotic dog, and granting the humanoid robot Sophia citizenship. But how profound are these reactions? Three experiments take a closer look at emotional reactions towards social robots by investigating the subjective experience of people as well as the motor-expressive level. Contexts of varying degrees of Human-Robot Interaction (HRI) sketch a nuanced picture of emotions towards social robots that encompass conscious as well as unconscious reactions. The findings advance the understanding of affective experiences in HRI. This also turns the initial question into: Can emotional reactions towards social robots even be avoided? T2 - Im Angesicht sozialer Roboter - Emotionale Reaktionen angesichts sozialer Roboter KW - Roboter KW - social robot KW - emotion KW - FACS KW - Facial Action Coding System KW - facial expressions KW - emotional reaction KW - Human-Robot Interaction KW - HRI KW - obedience KW - empathy KW - Gefühl KW - Mimik KW - Mensch-Maschine-Kommunikation Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-187131 SN - 978-3-95826-120-4 SN - 978-3-95826-121-1 N1 - Parallel erschienen als Druckausgabe in Würzburg University Press, 978-3-95826-120-4, 27,80 EUR. PB - Würzburg University Press CY - Würzburg ET - 1. Auflage ER - TY - THES A1 - Hörmann, Markus T1 - Analyzing and fostering students' self-regulated learning through the use of peripheral data in online learning environments T1 - Analyse und Förderung des selbstgesteuerten Lernens durch die Verwendung von peripheren Daten in Online-Lernumgebungen N2 - Learning with digital media has become a substantial part of formal and informal educational processes and is gaining more and more importance. Technological progress has brought overwhelming opportunities for learners, but challenges them at the same time. Learners have to regulate their learning process to a much greater extent than in traditional learning situations in which teachers support them through external regulation. This means that learners must plan their learning process themselves, apply appropriate learning strategies, monitor, control and evaluate it. These requirements are taken into account in various models of self-regulated learning (SRL). Although the roots of research on SRL go back to the 1980s, the measurement and adequate support of SRL in technology-enhanced learning environments are still not solved in a satisfactory way. An important obstacle is the data sources used to operationalize SRL processes.
In order to support SRL in adaptive learning systems and to validate theoretical models, instruments are needed which meet the classical quality criteria and also fulfil additional requirements. Suitable data channels must be measurable "online", i.e., they must be available in real time during learning for analyses or the individual adaptation of interventions. Researchers no longer only have an interest in the final results of questionnaires or tasks, but also need to examine process data from interactions between learners and learning environments in order to advance the development of theories and interventions. In addition, data sources should not be obtrusive so that the learning process is not interrupted or disturbed. Measurements of physiological data, for example, require learners to wear measuring devices. Moreover, measurements should not be reactive. This means that other variables such as learning outcomes should not be influenced by the measurement. Different data sources that are already used to study and support SRL processes, such as think-aloud protocols, screen recording, eye tracking, log files, video observations or physiological sensors, meet these criteria to varying degrees. One data channel that has received little attention in research on educational psychology, but is non-obtrusive, non-reactive, objective and available online, is the detailed, temporally high-resolution data on observable interactions of learners in online learning environments. This data channel is introduced in this thesis as "peripheral data". It records the content of the learning environment as context, the related learner actions triggered by mouse and keyboard, and the reactions of the learning environment, such as structural or content changes. Although the above criteria for the use of the data are met, it is unclear whether this data can be interpreted reliably and validly with regard to relevant variables and behavior. Therefore, the aim of this dissertation is to examine this data channel from the perspective of SRL and thus further close the existing research gap. One development project and four research projects were carried out and documented in this thesis. N2 - Lernen mit digitalen Medien ist ein substantieller Bestandteil formeller und informeller Bildungsprozesse geworden und gewinnt noch immer an Bedeutung. Technologischer Fortschritt hat überwältigende Möglichkeiten für Lernende geschaffen, stellt aber gleichzeitig auch große Anforderungen an sie. Lernende müssen ihren Lernprozess sehr viel stärker selbst regulieren als in traditionellen Lernsituationen, in denen Lehrende durch externe Regulation unterstützen. Das heißt, Lernende müssen ihren Lernprozess selbst planen, geeignete Lernstrategien anwenden, ihn überwachen, steuern und evaluieren. Diesen Anforderungen wird in verschiedenen Modellen des selbst-regulierten Lernens (SRL) Rechnung getragen. Obwohl die Wurzeln der Forschung zu SRL bis in die 1980er Jahre zurückreichen, sind die Messung und adäquate Unterstützung von SRL in technologie-gestützten Lernumgebungen noch immer nicht zufriedenstellend gelöst. Eine wichtige Hürde sind dabei die Datenquellen, die zur Operationalisierung von SRL-Prozessen herangezogen werden. Um SRL in adaptiven Lernsystemen zu unterstützen und theoretische Modelle zu validieren, werden Instrumente benötigt, die den klassischen Gütekriterien genügen und darüber hinaus weitere Anforderungen erfüllen.
Geeignete Datenkanäle müssen „online“ messbar sein, das heißt bereits während des Lernens in Echtzeit für Analysen oder die individuelle Anpassung von Interventionen zur Verfügung stehen. Forschende interessieren sich nicht mehr nur für die Endergebnisse von Fragebögen oder Aufgaben, sondern müssen auch Prozessdaten von Interaktionen zwischen Lernenden und Lernumgebungen untersuchen, um die Entwicklung von Theorien und Interventionen voranzutreiben. Zudem sollten Datenquellen nicht intrusiv sein, sodass der Lernprozess nicht unterbrochen oder gestört wird. Dies ist zum Beispiel bei Messungen physiologischer Daten der Fall, zu deren Erfassung die Lernenden Messgeräte tragen müssen. Außerdem sollten Messungen nicht reaktiv sein – andere Variablen (z.B. der Lernerfolg) sollten also nicht von der Messung beeinflusst werden. Unterschiedliche Datenquellen, die zur Untersuchung und Unterstützung von SRL-Prozessen bereits verwendet werden, wie z.B. Protokolle über lautes Denken, Screen-Recording, Eye Tracking, Log-Files, Videobeobachtungen oder physiologische Sensoren, erfüllen diese Kriterien in jeweils unterschiedlichem Ausmaß. Ein Datenkanal, dem in der pädagogisch-psychologischen Forschung bislang kaum Beachtung geschenkt wurde, der aber nicht-intrusiv, nicht-reaktiv, objektiv und online verfügbar ist, sind detaillierte, zeitlich hochauflösende Daten über die beobachtbare Interaktion von Lernenden in Online-Lernumgebungen. Dieser Datenkanal wird in dieser Arbeit als „peripheral data“ eingeführt. Er zeichnet sowohl den Inhalt von Lernumgebungen als Kontext auf, als auch darauf bezogene Aktionen von Lernenden, ausgelöst durch Maus und Tastatur, sowie die Reaktionen der Lernumgebungen, wie etwa strukturelle oder inhaltliche Veränderungen. Zwar sind die oben genannten Kriterien zur Nutzung der Daten erfüllt, allerdings ist unklar, ob diese Daten auch reliabel und valide hinsichtlich relevanter Variablen und Verhaltens interpretiert werden können. Ziel dieser Dissertation ist es daher, diesen Datenkanal aus Perspektive des SRL zu untersuchen und damit die bestehende Forschungslücke weiter zu schließen. Dafür wurden eine Entwicklungs- sowie vier Forschungsarbeiten durchgeführt und in dieser Arbeit dokumentiert. KW - Selbstgesteuertes Lernen KW - Computerunterstütztes Lernen KW - self-regulated learning KW - process analysis KW - online learning KW - mouse tracking KW - keyboard tracking KW - learning KW - selfregulated Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-180097 ER - TY - JOUR A1 - Gressmann, Marcel A1 - Janczyk, Markus T1 - The (Un)Clear Effects of Invalid Retro-Cues JF - Frontiers in Psychology N2 - Studies with the retro-cue paradigm have shown that validly cueing objects in visual working memory long after encoding can still benefit performance on subsequent change detection tasks. With regard to the effects of invalid cues, the literature is less clear. Some studies reported costs, others did not. We here revisit two recent studies that made interesting suggestions concerning invalid retro-cues: One study suggested that costs only occur for larger set sizes, and another study suggested that inclusion of invalid retro-cues diminishes the retro-cue benefit. New data from one experiment and a reanalysis of published data are provided to address these conclusions. The new data clearly show costs (and benefits) that were independent of set size, and the reanalysis suggests no influence of the inclusion of invalid retro-cues on the retro-cue benefit.
Thus, previous interpretations should be taken with some caution at present. KW - visual working memory KW - retro-cue KW - attention KW - replication Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-165296 VL - 7 IS - 244 ER - TY - JOUR A1 - Wunsch, Kathrin A1 - Pfister, Roland A1 - Henning, Anne A1 - Aschersleben, Gisa A1 - Weigelt, Matthias T1 - No Interrelation of Motor Planning and Executive Functions across Young Ages JF - Frontiers in Psychology N2 - The present study examined the developmental trajectories of motor planning and executive functioning in children. To this end, we tested 217 participants with three motor tasks, measuring anticipatory planning abilities (i.e., the bar-transport-task, the sword-rotation-task and the grasp-height-task), and three cognitive tasks, measuring executive functions (i.e., the Tower-of-Hanoi-task, the Mosaic-task, and the D2-attention-endurance-task). Children were aged between 3 and 10 years and were separated into age groups by 1-year bins, resulting in a total of eight groups of children and an additional group of adults. Results suggested (1) a positive developmental trajectory for each of the sub-tests, with better task performance as children get older; (2) that the performance in the separate tasks was not correlated across participants in the different age groups; and (3) that there was no relationship between performance in the motor tasks and in the cognitive tasks used in the present study when controlling for age. These results suggest that both motor planning and executive functions are rather heterogeneous domains of cognitive functioning with fewer interdependencies than often suggested. KW - anticipatory planning KW - end-state comfort effect KW - developmental disorders KW - child development KW - motor development Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-165281 VL - 7 IS - 1031 ER - TY - JOUR A1 - Schneider, Wolfgang A1 - Niklas, Frank T1 - Intelligence and verbal short-term memory/working memory: their interrelationships from childhood to young adulthood and their impact on academic achievement JF - Journal of Intelligence N2 - Although recent developmental studies exploring the predictive power of intelligence and working memory (WM) for educational achievement in children have provided evidence for the importance of both variables, findings concerning the relative impact of IQ and WM on achievement have been inconsistent. Whereas IQ has been identified as the major predictor variable in a few studies, results from several other developmental investigations suggest that WM may be the stronger predictor of academic achievement. In the present study, data from the Munich Longitudinal Study on the Genesis of Individual Competencies (LOGIC) were used to explore this issue further. The secondary data analysis included data from about 200 participants whose IQ and WM were first assessed at the age of six and repeatedly measured until the ages of 18 and 23. Measures of reading, spelling, and math were also repeatedly assessed for this age range. Both regression analyses based on observed variables and latent variable structural equation modeling (SEM) were carried out to explore whether the predictive power of IQ and WM would differ as a function of time point of measurement (i.e., early vs. late assessment).
As a main result of various regression analyses, IQ and WM turned out to be reliable predictors of academic achievement, both in early and later developmental stages, when previous domain knowledge was not included as an additional predictor. The latter variable accounted for most of the variance in more comprehensive regression models, reducing the impact of both IQ and WM considerably. Findings from SEM analyses basically confirmed this outcome, indicating an impact of IQ on educational achievement in the early phase, and illustrating the strong additional impact of previous domain knowledge on achievement at later stages of development. KW - intelligence KW - short-term memory KW - working memory KW - academic achievement KW - domain knowledge KW - LOGIC study Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-198004 SN - 2079-3200 VL - 5 IS - 2 ER - TY - JOUR A1 - Kempert, Sebastian A1 - Götz, Regina A1 - Blatter, Kristine A1 - Tibken, Catharina A1 - Artelt, Cordula A1 - Schneider, Wolfgang A1 - Stanat, Petra T1 - Training Early Literacy Related Skills: To Which Degree Does a Musical Training Contribute to Phonological Awareness Development? JF - Frontiers in Psychology N2 - Well-developed phonological awareness skills are a core prerequisite for early literacy development. Although effective phonological awareness training programs exist, children at risk often do not reach similar levels of phonological awareness after the intervention as children with normally developed skills. Based on theoretical considerations and first promising results, the present study explores effects of an early musical training in combination with a conventional phonological training in children with weak phonological awareness skills. Using a quasi-experimental pretest-posttest control group design and measurements across a period of 2 years, we tested the effects of two interventions: a consecutive combination of a musical and a phonological training and a phonological training alone. The design made it possible to disentangle effects of the musical training alone as well as the effects of its combination with the phonological training. The outcome measures of these groups were compared with those of the control group using multivariate analyses, controlling for a number of background variables. The sample included N = 424 German-speaking children aged 4–5 years at the beginning of the study. We found a positive relationship between musical abilities and phonological awareness. Yet, whereas the well-established phonological training produced the expected effects, adding a musical training did not contribute significantly to phonological awareness development. Training effects were partly dependent on the initial level of phonological awareness. Possible reasons for the lack of training effects in the musical part of the combination condition as well as practical implications for early literacy education are discussed. KW - phonological awareness KW - musical training KW - phonological training KW - preschool children KW - early literacy Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-165272 VL - 7 IS - 1803 ER - TY - JOUR A1 - Lange, Bastian A1 - Pauli, Paul T1 - Social anxiety changes the way we move—A social approach-avoidance task in a virtual reality CAVE system JF - PLoS ONE N2 - Investigating approach-avoidance behavior regarding affective stimuli is important in broadening the understanding of one of the most common psychiatric disorders, social anxiety disorder.
Many studies in this field rely on approach-avoidance tasks, which mainly assess hand movements, or interpersonal distance measures, which return inconsistent results and lack ecological validity. Therefore, the present study introduces a virtual reality task, looking at avoidance parameters (movement time and speed, distance to social stimulus, gaze behavior) during whole-body movements. These complex movements represent the most ecologically valid form of approach and avoidance behavior and are at the core of natural social behavior. With this newly developed task, the present study examined whether high socially anxious individuals differ in avoidance behavior when bypassing another person, here virtual humans with neutral and angry facial expressions. Results showed that virtual bystanders displaying angry facial expressions were generally avoided by all participants. In addition, high socially anxious participants generally displayed enhanced avoidance behavior towards virtual people, but no specifically exaggerated avoidance behavior towards virtual people with a negative facial expression. The newly developed virtual reality task proved to be an ecologically valid tool for research on complex approach-avoidance behavior in social situations. The first results revealed that whole-body approach-avoidance behavior relative to passive bystanders is modulated by their emotional facial expressions and that social anxiety generally amplifies such avoidance. KW - emotions KW - face KW - behavior KW - social anxiety disorder KW - anxiolytics KW - analysis of variance KW - virtual reality KW - questionnaires Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-200528 VL - 14 IS - 12 ER - TY - JOUR A1 - Lenhard, Alexandra A1 - Lenhard, Wolfgang A1 - Gary, Sebastian T1 - Continuous norming of psychometric tests: A simulation study of parametric and semi-parametric approaches JF - PLoS ONE N2 - Continuous norming methods have seldom been subjected to scientific review. In this simulation study, we compared parametric with semi-parametric continuous norming methods in psychometric tests by constructing a fictitious population model within which a latent ability increases with age across seven age groups. We drew samples of different sizes (n = 50, 75, 100, 150, 250, 500 and 1,000 per age group) and simulated the results of an easy, medium, and difficult test scale based on Item Response Theory (IRT). We subjected the resulting data to different continuous norming methods and compared the data fit under the different test conditions with a representative cross-validation dataset of n = 10,000 per age group. The most significant differences were found in suboptimal (i.e., too easy or too difficult) test scales and in ability levels that were far from the population mean. We discuss the results with regard to the selection of the appropriate modeling techniques in psychometric test construction, the required sample sizes, and the requirement to report appropriate quantitative and qualitative test quality criteria for continuous norming methods in test manuals.
KW - statistical models KW - simulation and modeling KW - psychometrics KW - age groups KW - skewness KW - normal distribution KW - polynomials KW - statistical distributions Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-200480 VL - 14 IS - 9 ER - TY - JOUR A1 - Gromer, Daniel A1 - Reinke, Max A1 - Christner, Isabel A1 - Pauli, Paul T1 - Causal interactive links between presence and fear in virtual reality height exposure JF - Frontiers in Psychology N2 - Virtual reality plays an increasingly important role in research and therapy of pathological fear. However, the mechanisms by which virtual environments elicit and modify fear responses are not yet fully understood. Presence, a psychological construct referring to the ‘sense of being there’ in a virtual environment, is widely assumed to crucially influence the strength of the elicited fear responses; however, causality is still under debate. The present study is the first to experimentally manipulate both variables to unravel the causal link between presence and fear responses. Height-fearful participants (N = 49) were immersed in a virtual height situation and a neutral control situation (fear manipulation) with either high versus low sensory realism (presence manipulation). Ratings of presence and verbal and physiological (skin conductance, heart rate) fear responses were recorded. Results revealed an effect of the fear manipulation on presence, i.e., higher presence ratings in the height situation compared to the neutral control situation, but no effect of the presence manipulation on fear responses. However, the presence ratings during the first exposure to the high-quality neutral environment were predictive of later fear responses in the height situation. Our findings support the hypothesis that experiencing emotional responses in a virtual environment leads to a stronger feeling of being there, i.e., increased presence. In contrast, the effects of presence on fear seem to be more complex: on the one hand, increased presence due to the quality of the virtual environment did not influence fear; on the other hand, presence variability that likely stemmed from differences in user characteristics did predict later fear responses. These findings underscore the importance of user characteristics in the emergence of presence. KW - presence KW - fear KW - virtual reality KW - visual realism KW - acrophobia Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-201855 VL - 10 IS - 141 ER - TY - JOUR A1 - Lugo, Zulay R. A1 - Quitadamo, Lucia R. A1 - Bianchi, Luigi A1 - Pellas, Fréderic A1 - Veser, Sandra A1 - Lesenfants, Damien A1 - Real, Ruben G. L. A1 - Herbert, Cornelia A1 - Guger, Christoph A1 - Kotchoubey, Boris A1 - Mattia, Donatella A1 - Kübler, Andrea A1 - Laureys, Steven A1 - Noirhomme, Quentin T1 - Cognitive Processing in Non-Communicative Patients: What Can Event-Related Potentials Tell Us? JF - Frontiers in Human Neuroscience N2 - Event-related potentials (ERPs) have been proposed to improve the differential diagnosis of non-responsive patients. We investigated the potential of the P300 as a reliable marker of conscious processing in patients with locked-in syndrome (LIS). Eleven chronic LIS patients and 10 healthy subjects (HS) listened to a complex-tone auditory oddball paradigm, first in a passive condition (listening to the sounds) and then in an active condition (counting the deviant tones).
Seven out of nine HS displayed a P300 waveform in the passive condition and all in the active condition. HS showed statistically significant changes in peak and area amplitude between conditions. Three out of seven LIS patients showed the P3 waveform in the passive condition and five of seven in the active condition. No changes in peak amplitude and only a significant difference at one electrode in area amplitude were observed in this group between conditions. We conclude that, despite retaining full consciousness and intact or nearly intact cortical functions, LIS patients present less reliable results than HS when tested with ERPs, specifically in the passive condition. We thus strongly recommend applying ERP paradigms in an active condition when evaluating consciousness in non-responsive patients. KW - P300 KW - event-related potentials KW - locked-in syndrome KW - vegetative state KW - unresponsive wakefulness syndrome KW - minimally conscious state Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-165165 VL - 10 IS - 569 ER - TY - JOUR A1 - Zhou, Sijie A1 - Allison, Brendan Z. A1 - Kübler, Andrea A1 - Cichocki, Andrzej A1 - Wang, Xingyu A1 - Jin, Jing T1 - Effects of Background Music on Objective and Subjective Performance Measures in an Auditory BCI JF - Frontiers in Computational Neuroscience N2 - Several studies have explored brain computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory and other BCIs are typically studied without background music. Some work has explored the possibility of using polyphonic music in auditory BCI systems. However, this approach requires users with good musical skills, and has not been explored in online experiments. Our hypothesis was that an auditory BCI with background music would be preferred by subjects over a similar BCI without background music, without any difference in BCI performance. We introduced a simple paradigm (which does not require musical skill) using percussion instrument sound stimuli and background music, and evaluated it in both offline and online experiments. The results showed that subjects preferred the auditory BCI with background music. Different performance measures did not reveal any significant performance effect when comparing background music vs. no background music. Since the addition of background music does not impair BCI performance but is preferred by users, auditory (and perhaps other) BCIs should consider including it. Our study also indicates that auditory BCIs can be effective even if the auditory channel is simultaneously otherwise engaged. KW - brain computer interface KW - event-related potentials KW - auditory KW - music background KW - audio stimulus Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-165101 VL - 10 IS - 105 ER - TY - JOUR A1 - Neueder, Dorothea A1 - Andreatta, Marta A1 - Pauli, Paul T1 - Contextual fear conditioning and fear generalization in individuals with panic attacks JF - Frontiers in Behavioral Neuroscience N2 - Context conditioning is characterized by unpredictable threat, and both context conditioning and its generalization may constitute risk factors for panic disorder (PD).
Therefore, we examined differences between individuals with panic attacks (PA; N = 21) and healthy controls (HC; N = 22) in contextual learning and context generalization using a virtual reality (VR) paradigm. Successful context conditioning was indicated in both groups by higher arousal, anxiety and contingency ratings, and increased startle responses and skin conductance levels (SCLs) in an anxiety context (CTX+) where an aversive unconditioned stimulus (US) occurred unpredictably vs. a safety context (CTX−). PA compared to HC exhibited increased differential responding to CTX+ vs. CTX− and overgeneralization of contextual anxiety on an evaluative verbal level, but not on a physiological level. We conclude that increased contextual conditioning and contextual generalization may constitute risk factors for PD or agoraphobia, contributing to the characteristic avoidance of anxiety contexts and withdrawal to safety contexts, and that evaluative cognitive processes may play a major role. KW - contextual fear conditioning KW - anxiety generalization KW - startle response KW - panic disorder KW - virtual reality Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-201318 VL - 13 ER - TY - JOUR A1 - Reicherts, Philipp A1 - Pauli, Paul A1 - Mösler, Camilla A1 - Wieser, Matthias J. T1 - Placebo manipulations reverse pain potentiation by unpleasant affective stimuli JF - Frontiers in Psychiatry N2 - According to the motivational priming hypothesis, unpleasant stimuli activate the motivational defense system, which in turn promotes congruent affective states such as negative emotions and pain. The question arises to what degree this bottom-up impact of emotions on pain is susceptible to a manipulation of top-down-driven expectations. To this end, we investigated whether verbal instructions implying pain potentiation vs. reduction (nocebo or placebo expectations)—later on confirmed by corresponding experiences (nocebo or placebo conditioning)—might alter behavioral and neurophysiological correlates of pain modulation by unpleasant pictures. We compared two groups, which underwent three experimental phases: first, participants were either instructed that watching unpleasant affective pictures would increase pain (nocebo group) or that watching unpleasant pictures would decrease pain (placebo group) relative to neutral pictures. During the following placebo/nocebo-conditioning phase, pictures were presented together with electrical pain stimuli of different intensities, reinforcing the instructions. In the subsequent test phase, all pictures were presented again combined with identical pain stimuli. The electroencephalogram (EEG) was recorded in order to analyze neurophysiological responses to pain (somatosensory evoked potential) and picture processing [visually evoked late positive potential (LPP)], in addition to pain ratings. In the test phase, ratings of pain stimuli administered while watching unpleasant relative to neutral pictures were significantly higher in the nocebo group, thus confirming the motivational priming effect for pain perception. In the placebo group, this effect was reversed such that unpleasant compared with neutral pictures led to significantly lower pain ratings. Similarly, somatosensory evoked potentials were decreased during unpleasant compared with neutral pictures, in the placebo group only. LPPs of the placebo group failed to discriminate between unpleasant and neutral pictures, while the LPPs of the nocebo group showed a clear differentiation.
We conclude that the placebo manipulation already affected the processing of the emotional stimuli and, in consequence, the processing of the pain stimuli. In summary, the study revealed that the modulation of pain by emotions, albeit a reliable and well-established finding, is further tuned by reinforced expectations—known to induce placebo/nocebo effects—which should be addressed in future research and considered in clinical applications. KW - placebo and nocebo effects KW - emotion processing KW - psychological pain modulation KW - late positive potential KW - somatosensory evoked potential Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-201200 VL - 10 IS - 663 ER - TY - THES A1 - Hoffmann, Mareike T1 - Effector System Prioritization in Multitasking T1 - Effektorsystempriorisierung im Multitasking N2 - Multitasking, defined as performing more than one task at a time, typically yields performance decrements, for instance, in processing speed and accuracy. These performance costs are often distributed asymmetrically among the involved tasks. Under suitable conditions, this can be interpreted as a marker for prioritization of one task – the one that suffers less – over the other. One source of such task prioritization is the use of different effector systems (e.g., oculomotor system, vocal tract, limbs) and their characteristics. The present work explores such effector system-based task prioritization by examining to what extent associated effector systems determine which task is processed with higher priority in multitasking situations. To this end, three different paradigms are used, namely the simultaneous (stimulus) onset paradigm, the psychological refractory period (PRP) paradigm, and the task switching paradigm. These paradigms create situations in which two tasks (in the present studies, basic spatial decision tasks) are a) initiated at exactly the same time, b) initiated with a short, varying temporal offset (but still temporally overlapping), or c) alternated randomly (without temporal overlap). The results allow for three major conclusions: 1. The assumption of effector system-based task prioritization according to an ordinal pattern (oculomotor > pedal > vocal > manual, indicating decreasing prioritization) is supported by the observed data in the simultaneous onset paradigm. This data pattern cannot be explained by a rigid "first come, first served" task scheduling principle. 2. The data from the PRP paradigm confirmed the assumption of vocal-over-manual prioritization and showed that classic PRP effects (as a marker for task order-based prioritization) can be modulated by effector system characteristics. 3. However, the mere cognitive representation of task sets (which must be held active to switch between them) differing in effector systems, without an actual temporal overlap in task processing, is not sufficient to elicit the same effector system prioritization phenomena observed for overlapping tasks. In summary, the insights obtained by the present work support the assumptions of parallel central task processing and resource sharing among tasks, as opposed to exclusively serial processing of central processing stages. Moreover, they indicate that effector systems are a crucial factor in multitasking and suggest an integration of corresponding weighting parameters into existing dual-task control frameworks.
N2 - Performing multiple tasks at the same time (multitasking) typically leads to poorer performance, for example with respect to the speed and accuracy of task execution. These so-called dual-task (or multitasking) costs are often distributed asymmetrically across the tasks involved. Under certain conditions, this can be interpreted as a prioritization of those tasks associated with lower costs over those that suffer more from the dual-task situation. One source of such task prioritization lies in the different effector systems (e.g., oculomotor system, limbs, vocal tract) with which the respective tasks are to be performed. The present work investigates such effector system-based prioritization, that is, the extent to which associated effector systems determine whether tasks are processed with priority in multitasking situations. To this end, three different experimental paradigms were used: a) the simultaneous stimulus onset paradigm, b) the psychological refractory period paradigm, and c) the task switching paradigm. Within these paradigms, responses (reaction times and error rates) are measured and compared between different effector systems whose tasks are a) initiated at exactly the same time, b) initiated with a short, varied temporal offset but overlapping in their execution, or c) switched between in an unpredictable order. In line with these three approaches, the results allow three major conclusions: 1. When task processing starts simultaneously (and thus without an externally suggested order), dual-task control processes follow an ordinal prioritization pattern based on the effector systems associated with the tasks, in the order oculomotor > pedal > vocal > manual (indicating decreasing prioritization). This pattern cannot be explained by processing speed in the sense of a "first come, first served" principle. 2. Task prioritization based on an external task order (as measured by the PRP effect) can be modulated by the effector systems associated with the tasks. 3. Systematic effector system-based task prioritization is only consistently observed when parts of task processing overlap in time. A purely mental representation of a task to be performed in a different effector system is not sufficient to fully instantiate the prioritization pattern described above. Overall, the results of the present work argue for parallel (and against exclusively serial) response selection processes and for limited cognitive resources being shared between tasks. Moreover, the present results demonstrate the substantial influence of effector systems on resource allocation processes in multitasking situations and suggest integrating corresponding weighting parameters into existing models of dual-task control.
KW - Mehrfachtätigkeit KW - task prioritization KW - response modalities KW - cognitive control KW - Multitasking KW - Effektorsysteme Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-201084 ER - TY - JOUR A1 - Schneider, Norbert A1 - Huestegge, Lynn T1 - Interaction of oculomotor and manual behavior: evidence from simulated driving in an approach–avoidance steering task JF - Cognitive Research: Principles and Implications N2 - Background While the coordination of oculomotor and manual behavior is essential for driving a car, surprisingly little is known about this interaction, especially in situations requiring a quick steering reaction. In the present study, we analyzed oculomotor gaze and manual steering behavior in approach and avoidance tasks. Three task blocks were implemented within a dynamic simulated driving environment requiring the driver either to steer away from/toward a visual stimulus or to switch between both tasks. Results Task blocks requiring task switches were associated with higher manual response times and increased error rates. Manual response times did not significantly differ depending on whether drivers had to steer away from vs. toward a stimulus, whereas oculomotor response times and gaze pattern variability were increased when drivers had to steer away from a stimulus compared to steering toward a stimulus. Conclusion The increased manual response times and error rates in mixed tasks indicate performance costs associated with cognitive flexibility, while the increased oculomotor response times and gaze pattern variability indicate a parsimonious cross-modal action control strategy (avoiding stimulus fixation prior to steering away from it) for the avoidance scenario. Several discrepancies between these results and typical eye–hand interaction patterns in basic laboratory research suggest that the specific goals and complex perceptual affordances associated with driving a vehicle strongly shape cross-modal control of behavior. KW - steering KW - driving simulation KW - gaze control KW - visual orientation Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-200419 VL - 4 ER - TY - JOUR A1 - Pfister, Roland A1 - Frings, Christian A1 - Moeller, Birte T1 - The Role of Congruency for Distractor-Response Binding: A Caveat JF - Advances in Cognitive Psychology N2 - Responding in the presence of stimuli leads to an integration of stimulus features and response features into event files, which can later be retrieved to assist action control. This integration mechanism is not limited to target stimuli, but can also include distractors (distractor-response binding). A recurring research question is which factors determine whether or not distractors are integrated. One suggested candidate factor is target-distractor congruency: Distractor-response binding effects were reported to be stronger for congruent than for incongruent target-distractor pairs. Here, we discuss a general problem with including the factor of congruency in typical analyses used to study distractor-based binding effects. Integrating this factor introduces a confound, so that any differences between distractor-response binding effects of congruent and incongruent distractors may be explained by a simple congruency effect. Simulation data confirmed this argument. We propose to interpret previous data cautiously and discuss potential avenues to circumvent this problem in the future.
KW - action control KW - distractor-response binding KW - congruency sequences KW - sequence analysis Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-200265 VL - 15 IS - 2 ER -