TY - GEN A1 - Hauf, Juliane E. K. A1 - Nieding, Gerhild A1 - Seger, Benedikt T. T1 - Correction to: The development of dynamic perceptual simulations during sentence comprehension T2 - Cognitive Processing N2 - No abstract available. KW - Erratum Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-352611 N1 - The original article can be found online at https://doi.org/10.1007/s10339-020-00959-7 VL - 22 IS - 4 ER - TY - JOUR A1 - Landmann, Eva A1 - Breil, Christina A1 - Huestegge, Lynn A1 - Böckler, Anne T1 - The semantics of gaze in person perception: a novel qualitative-quantitative approach JF - Scientific Reports N2 - Interpreting gaze behavior is essential in evaluating interaction partners, yet the ‘semantics of gaze’ in dynamic interactions are still poorly understood. We aimed to comprehensively investigate effects of gaze behavior patterns in different conversation contexts, using a two-step, qualitative-quantitative procedure. Participants watched video clips of single persons listening to autobiographic narrations by another (invisible) person. The listener’s gaze behavior was manipulated in terms of gaze direction, frequency and direction of gaze shifts, and blink frequency; emotional context was manipulated through the valence of the narration (neutral/negative). In Experiment 1 (qualitative-exploratory), participants freely described which states and traits they attributed to the listener in each condition, allowing us to identify relevant aspects of person perception and to construct distinct rating scales that were implemented in Experiment 2 (quantitative-confirmatory). Results revealed systematic and differential meanings ascribed to the listener’s gaze behavior. 
For example, rapid blinking and fast gaze shifts were rated more negatively (e.g., as restless and unnatural) than slower gaze behavior; downward gaze was evaluated more favorably (e.g., as empathetic) than other gaze aversion types, especially in the emotionally negative context. Overall, our study contributes to a more systematic understanding of flexible gaze semantics in social interaction. KW - human behaviour KW - psychology KW - semantics of gaze KW - person perception KW - face perception Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-361413 SN - 2045-2322 VL - 14 IS - 1 ER - TY - JOUR A1 - Ju, Qianqian A1 - Gan, Yiqun A1 - Rinn, Robin A1 - Duan, Yanping A1 - Lippke, Sonia T1 - Health Status Stability of Patients in a Medical Rehabilitation Program: What Are the Roles of Time, Physical Fitness Level, and Self-efficacy? JF - International Journal of Behavioral Medicine N2 - Background Individuals’ physical and mental health, as well as their chances of returning to work after their ability to work has been impaired, can be addressed by medical rehabilitation. Aim This study investigated the developmental trends of mental and physical health among patients in medical rehabilitation and the roles of self-efficacy and physical fitness in the development of mental and physical health. Design A longitudinal design that included four time-point measurements across 15 months. Setting A medical rehabilitation center in Germany. Population Participants included 201 patients who were recruited from a medical rehabilitation center. Methods To objectively measure physical fitness, oxygen uptake at the anaerobic threshold (VO2AT) was used, along with several self-report scales. Results We found a nonlinear change in mental health among medical rehabilitation patients. The results underscored the importance of medical rehabilitation for patients’ mental health over time. In addition, patients’ physical health was stable over time. 
The initial level of physical fitness (VO2AT) positively predicted mental health and stabilized its trajectory over time. Self-efficacy appeared to have a positive relationship with mental health after rehabilitation treatment. Conclusions This study revealed a nonlinear change in mental health among medical rehabilitation patients. Self-efficacy was positively related to mental health, and the initial level of physical fitness positively predicted the level of mental health after rehabilitation treatment. Clinical Rehabilitation More attention could be given to physical capacity and self-efficacy for improving and maintaining rehabilitants’ mental health. KW - latent growth curve model KW - mental health KW - physical fitness KW - self-efficacy KW - physical health Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-308445 SN - 1070-5503 SN - 1532-7558 VL - 29 IS - 5 ER - TY - THES A1 - Menne, Isabelle M. T1 - Facing Social Robots – Emotional Reactions towards Social Robots N2 - Ein Army Colonel empfindet Mitleid mit einem Roboter, der versuchsweise Landminen entschärft, und erklärt den Test für inhuman (Garreau, 2007). Roboter bekommen militärische Beförderungen, Beerdigungen und Ehrenmedaillen (Garreau, 2007; Carpenter, 2013). Ein Schildkrötenroboter wird entwickelt, um Kindern beizubringen, Roboter gut zu behandeln (Ackermann, 2018). Der humanoide Roboter Sophia wurde erst kürzlich saudi-arabischer Staatsbürger, und es gibt bereits Debatten, ob Roboter Rechte bekommen sollen (Delcker, 2018). Diese und ähnliche Entwicklungen zeigen schon jetzt die Bedeutsamkeit von Robotern und die emotionale Wirkung, die diese auslösen. Dennoch scheinen sich diese emotionalen Reaktionen auf einer anderen Ebene abzuspielen, gemessen an Kommentaren in Internetforen: Dort wird oftmals infrage gestellt, wieso jemand überhaupt emotional auf einen Roboter reagieren kann. 
Tatsächlich ist es, rein rational gesehen, schwierig zu erklären, warum Menschen mit einer leblosen (‚mindless‘) Maschine mitfühlen sollten. Und dennoch zeugen nicht nur die oben genannten Berichte, sondern auch erste wissenschaftliche Studien (z.B. Rosenthal-von der Pütten et al., 2013) von dem emotionalen Einfluss, den Roboter auf Menschen haben können. Trotz der Bedeutsamkeit der Erforschung emotionaler Reaktionen auf Roboter existieren bislang nur wenige wissenschaftliche Studien hierzu. Tatsächlich identifizierten Kappas, Krumhuber und Küster (2013) die systematische Analyse und Evaluation sozialer Reaktionen auf Roboter als eine der größten Herausforderungen der affektiven Mensch-Roboter-Interaktion. Nach Scherer (2001; 2005) bestehen Emotionen aus der Koordination und Synchronisation verschiedener Komponenten, die miteinander verknüpft sind. Hierzu gehören der motorische Ausdruck (Mimik), das subjektive Erleben, Handlungstendenzen sowie physiologische und kognitive Komponenten. Um eine Emotion vollständig zu erfassen, müssten all diese Komponenten gemessen werden, jedoch wurde eine solch umfassende Analyse bisher noch nie durchgeführt (Scherer, 2005). Hauptsächlich werden Fragebögen eingesetzt (vgl. Bethel & Murphy, 2010), die allerdings meist nur das subjektive Erleben abfragen. Bakeman und Gottman (1997) geben sogar an, dass nur etwa 8% der psychologischen Forschung auf Verhaltensdaten basiert, obwohl die Psychologie traditionell als das ‚Studium von Psyche und Verhalten‘ (American Psychological Association, 2018) definiert wird. Die Messung anderer Emotionskomponenten ist selten. Zudem sind Fragebögen mit einer Reihe von Nachteilen behaftet (Austin, Deary, Gibson, McGregor, & Dent, 1998; Fan et al., 2006; Wilcox, 2011). Sowohl Bethel und Murphy (2010) als auch Arkin und Moshkina (2015) plädieren für einen Multi-Methodenansatz, um ein umfassenderes Verständnis von affektiven Prozessen in der Mensch-Roboter-Interaktion zu erlangen. 
Das Hauptziel der vorliegenden Dissertation ist es daher, mithilfe eines Multi-Methodenansatzes verschiedene Komponenten von Emotionen (motorischer Ausdruck, subjektive Gefühlskomponente, Handlungstendenzen) zu erfassen und so zu einem vollständigeren und tiefgreifenderen Bild emotionaler Prozesse gegenüber Robotern beizutragen. Um dieses Ziel zu erreichen, wurden drei experimentelle Studien mit insgesamt 491 Teilnehmern durchgeführt. Mit unterschiedlichen Ebenen der „apparent reality“ (Frijda, 2007) sowie Macht / Kontrolle über die Situation (vgl. Scherer & Ellgring, 2007) wurde untersucht, inwiefern sich Intensität und Qualität emotionaler Reaktionen auf Roboter ändern und welche weiteren Faktoren (Aussehen des Roboters, emotionale Expressivität des Roboters, Behandlung des Roboters, Autoritätsstatus des Roboters) Einfluss ausüben. Experiment 1 basierte auf Videos, die verschiedene Arten von Robotern (tierähnlich, anthropomorph, maschinenartig), entweder emotional expressiv oder nicht (an / aus), in verschiedenen Situationen (freundliche Behandlung des Roboters vs. Misshandlung) zeigten. Analysiert wurden Fragebögen über selbstberichtete Gefühle sowie die motorisch-expressive Komponente von Emotionen, die Mimik (vgl. Scherer, 2005). Das Facial Action Coding System (Ekman, Friesen, & Hager, 2002), die umfassendste und am weitesten verbreitete Methode zur objektiven Untersuchung von Mimik, wurde hierfür verwendet. Die Ergebnisse zeigten, dass die Probanden Gesichtsausdrücke (Action Unit [AU] 12 und AUs, die mit positiven Emotionen assoziiert sind, sowie AU 4 und AUs, die mit negativen Emotionen assoziiert sind) sowie selbstberichtete Gefühle in Übereinstimmung mit der Valenz der in den Videos gezeigten Behandlung zeigten. Bei emotional expressiven Robotern konnten stärkere emotionale Reaktionen beobachtet werden als bei nicht-expressiven Robotern. 
Der tierähnliche Roboter Pleo erfuhr in der Misshandlungs-Bedingung am meisten Mitleid, Empathie, negative Gefühle und Traurigkeit, gefolgt vom anthropomorphen Roboter Reeti; am wenigsten erfuhr der maschinenartige Roboter Roomba. Roomba wurde die meiste Antipathie zugeschrieben. Die Ergebnisse knüpfen an frühere Forschungen an (z.B. Krach et al., 2008; Menne & Schwab, 2018; Riek et al., 2009; Rosenthal-von der Pütten et al., 2013) und zeigen das Potenzial der Mimik für eine natürliche Mensch-Roboter-Interaktion. Experiment 2 und Experiment 3 übertrugen die klassischen Experimente von Milgram (1963; 1974) zum Thema Gehorsam in den Kontext der Mensch-Roboter-Interaktion. Die Gehorsamkeitsstudien von Milgram wurden als sehr geeignet erachtet, um das Ausmaß der Empathie gegenüber einem Roboter im Verhältnis zum Gehorsam gegenüber einem Roboter zu untersuchen. Experiment 2 unterschied sich von Experiment 3 in der Ebene der „apparent reality“ (Frijda, 2007): In Anlehnung an Milgram (1963) wurde eine rein textbasierte Studie (Experiment 2) einer Live-Mensch-Roboter-Interaktion (Experiment 3) gegenübergestellt. Während die abhängigen Variablen von Experiment 2 aus den Selbstberichten emotionaler Gefühle sowie Einschätzungen des hypothetischen Verhaltens bestanden, erfasste Experiment 3 subjektive Gefühle sowie reales Verhalten (Reaktionszeit: Dauer des Zögerns; Gehorsamkeitsrate; Anzahl der Proteste; Mimik) der Teilnehmer. Beide Experimente untersuchten den Einfluss der Faktoren „Autoritätsstatus“ (hoch / niedrig) des Roboters, der die Befehle erteilt (Nao), und der emotionalen Expressivität (an / aus) des Roboters, der die Strafen erhält (Pleo). Die subjektiven Gefühle der Teilnehmer aus Experiment 2 unterschieden sich zwischen den Gruppen nicht. Darüber hinaus gaben nur wenige Teilnehmer (20.2%) an, dass sie den „Opfer“-Roboter definitiv bestrafen würden. Ein ähnliches Ergebnis fand auch Milgram (1963). 
Das reale Verhalten von Versuchsteilnehmern in Milgrams Labor-Experiment unterschied sich jedoch von Einschätzungen hypothetischen Verhaltens von Teilnehmern, denen Milgram das Experiment nur beschrieben hatte. Ebenso lassen Kommentare von Teilnehmern aus Experiment 2 darauf schließen, dass das beschriebene Szenario möglicherweise als fiktiv eingestuft wurde und Einschätzungen von hypothetischem Verhalten daher kein realistisches Bild realen Verhaltens gegenüber Robotern in einer Live-Interaktion zeichnen können. Daher wurde ein weiteres Experiment (Experiment 3) mit einer Live-Interaktion mit einem Roboter als Autoritätsfigur (hoher vs. niedriger Autoritätsstatus) und einem weiteren Roboter als „Opfer“ (emotional expressiv vs. nicht expressiv) durchgeführt. Es wurden Gruppenunterschiede in Fragebögen über emotionale Reaktionen gefunden. Dem emotional expressiven Roboter wurde mehr Empathie entgegengebracht, und es wurde mehr Freude und weniger Antipathie berichtet als gegenüber einem nicht-expressiven Roboter. Außerdem konnten Gesichtsausdrücke beobachtet werden, die mit negativen Emotionen assoziiert sind, während Probanden Naos Befehl ausführten und Pleo bestraften. Obwohl Probanden tendenziell länger zögerten, wenn sie einen emotional expressiven Roboter bestrafen sollten und der Befehl von einem Roboter mit niedrigem Autoritätsstatus kam, erreichte dieser Unterschied keine Signifikanz. Zudem waren alle bis auf einen Probanden gehorsam und bestraften Pleo, wie vom Nao-Roboter befohlen. Dieses Ergebnis steht in starkem Gegensatz zu dem selbstberichteten hypothetischen Verhalten der Teilnehmer aus Experiment 2 und unterstützt die Annahme, dass Einschätzungen von hypothetischem Verhalten in einem Mensch-Roboter-Gehorsamkeitsszenario kein zuverlässiger Indikator für echtes Verhalten in einer Live-Mensch-Roboter-Interaktion sind. Situative Variablen, wie z.B. 
der Gehorsam gegenüber Autoritäten, sogar gegenüber einem Roboter, scheinen stärker zu sein als Empathie für einen Roboter. Dieser Befund knüpft an andere Studien an (z.B. Bartneck & Hu, 2008; Geiskkovitch et al., 2016; Menne, 2017; Slater et al., 2006), eröffnet neue Erkenntnisse zum Einfluss von Robotern, zeigt aber auch auf, dass die Wahl einer Methode, um Empathie für einen Roboter zu evozieren, eine nicht triviale Angelegenheit ist (vgl. Geiskkovitch et al., 2016; vgl. Milgram, 1965). Insgesamt stützen die Ergebnisse die Annahme, dass die emotionalen Reaktionen auf Roboter tiefgreifend sind und sich sowohl auf der subjektiven Ebene als auch in der motorischen Komponente zeigen. Menschen reagieren emotional auf einen Roboter, der emotional expressiv ist und eher weniger wie eine Maschine aussieht. Sie empfinden Empathie und negative Gefühle, wenn ein Roboter misshandelt wird, und diese emotionalen Reaktionen spiegeln sich in der Mimik wider. Darüber hinaus unterscheiden sich die Einschätzungen von Menschen über ihr eigenes hypothetisches Verhalten von ihrem tatsächlichen Verhalten, weshalb videobasierte oder Live-Interaktionen zur Analyse realer Verhaltensreaktionen empfohlen werden. Die Ankunft sozialer Roboter in der Gesellschaft führt zu nie dagewesenen Fragen, und diese Dissertation liefert einen ersten Schritt zum Verständnis dieser neuen Herausforderungen. N2 - An Army Colonel feels sorry for a robot that defuses landmines on a trial basis and declares the test inhumane (Garreau, 2007). Robots receive military promotions, funerals and medals of honor (Garreau, 2007; Carpenter, 2013). A turtle robot is being developed to teach children to treat robots well (Ackermann, 2018). The humanoid robot Sophia recently became a Saudi Arabian citizen, and there are now debates about whether robots should have rights (Delcker, 2018). These and similar developments already show the importance of robots and the emotional impact they have. 
Nevertheless, these emotional reactions seem to take place on a different level, judging by comments in internet forums alone: most often, emotional reactions towards robots are questioned, if not denied altogether. In fact, from a purely rational point of view, it is difficult to explain why people should empathize with a mindless machine. However, not only the reports mentioned above but also initial scientific studies (e.g., Rosenthal-von der Pütten et al., 2013) bear witness to the emotional influence of robots on humans. Despite the importance of researching emotional reactions towards robots, there are few scientific studies on this subject. In fact, Kappas, Krumhuber and Küster (2013) identified effective testing and evaluation of social reactions towards robots as one of the major challenges of affective Human-Robot Interaction (HRI). According to Scherer (2001; 2005), emotions consist of the coordination and synchronization of different components that are linked to each other. These include motor expression (facial expressions), subjective experience, action tendencies, and physiological and cognitive components. To fully capture an emotion, all these components would have to be measured, but such a comprehensive analysis has never been performed (Scherer, 2005). Primarily, questionnaires are used (cf. Bethel & Murphy, 2010), but most of them only capture subjective experiences. Bakeman and Gottman (1997) even state that only about 8% of psychological research is based on behavioral data, although psychology is traditionally defined as the 'study of the mind and behavior' (American Psychological Association, 2018). The measurement of other emotional components is rare. In addition, questionnaires have a number of disadvantages (Austin, Deary, Gibson, McGregor, & Dent, 1998; Fan et al., 2006; Wilcox, 2011). 
Bethel and Murphy (2010) as well as Arkin and Moshkina (2015) argue for a multi-method approach to achieve a more comprehensive understanding of affective processes in HRI. The main goal of this dissertation is therefore to use a multi-method approach to capture different components of emotions (motor expression, subjective feeling component, action tendencies) and thus contribute to a more complete and profound picture of emotional processes towards robots. To achieve this goal, three experimental studies were conducted with a total of 491 participants. With different levels of ‘apparent reality’ (Frijda, 2007) and power/control over the situation (cf. Scherer & Ellgring, 2007), the extent to which the intensity and quality of emotional responses to robots change was investigated, as well as the influence of other factors (appearance of the robot, emotional expressivity of the robot, treatment of the robot, authority status of the robot). Experiment 1 was based on videos showing different types of robots (animal-like, anthropomorphic, machine-like) in different situations (friendly treatment of the robot vs. torture) while being either emotionally expressive or not. Self-reports of feelings as well as the motor-expressive component of emotion, facial expressions (cf. Scherer, 2005), were analyzed. The Facial Action Coding System (Ekman, Friesen, & Hager, 2002), the most comprehensive and most widely used method for objectively assessing facial expressions, was utilized for this purpose. Results showed that participants displayed facial expressions (Action Unit [AU] 12 and AUs associated with positive emotions as well as AU 4 and AUs associated with negative emotions) as well as self-reported feelings in line with the valence of the treatment shown in the videos. Stronger emotional reactions could be observed for emotionally expressive robots than for non-expressive robots. 
Most pity, empathy, negative feelings and sadness were reported for the animal-like robot Pleo when participants watched it being tortured, followed by the anthropomorphic robot Reeti, and least for the machine-like robot Roomba. Most antipathy was attributed to Roomba. The findings are in line with previous research (e.g., Krach et al., 2008; Menne & Schwab, 2018; Riek et al., 2009; Rosenthal-von der Pütten et al., 2013) and show facial expressions’ potential for a natural HRI. Experiment 2 and Experiment 3 transferred Milgram’s classic experiments (1963; 1974) on obedience into the context of HRI. Milgram’s obedience studies were deemed highly suitable to study the extent of empathy towards a robot in relation to obedience to a robot. Experiment 2 differed from Experiment 3 in the level of ‘apparent reality’ (Frijda, 2007): based on Milgram (1963), a purely text-based study (Experiment 2) was compared with a live HRI (Experiment 3). While the dependent variables of Experiment 2 consisted of self-reports of emotional feelings and assessments of hypothetical behavior, Experiment 3 measured subjective feelings and real behavior (reaction time: duration of hesitation; obedience rate; number of protests; facial expressions) of the participants. Both experiments examined the influence of the factors "authority status" (high / low) of the robot giving the orders (Nao) and the emotional expressivity (on / off) of the robot receiving the punishments (Pleo). The subjective feelings of the participants from Experiment 2 did not differ between the groups. In addition, only a few participants (20.2%) stated that they would definitely punish the "victim" robot. Milgram (1963) found a similar result. However, the real behavior of participants in Milgram's laboratory experiment differed from the estimates of hypothetical behavior of participants to whom Milgram had only described the experiment. 
Similarly, comments from participants in Experiment 2 suggest that the scenario described may have been considered fictitious and that assessments of hypothetical behavior may not provide a realistic picture of real behavior towards robots in a live interaction. Therefore, another experiment (Experiment 3) was performed with a live interaction with a robot as authority figure (high vs. low authority status) and another robot as "victim" (emotionally expressive vs. non-expressive). Group differences were found in questionnaires on emotional responses. More empathy was shown for the emotionally expressive robot, and more joy and less antipathy were reported than for a non-expressive robot. In addition, facial expressions associated with negative emotions could be observed while subjects executed Nao's command and punished Pleo. Although subjects tended to hesitate longer when punishing an emotionally expressive robot and when the order came from a robot with low authority status, this difference did not reach significance. Furthermore, all but one subject were obedient and punished Pleo as commanded by the Nao robot. This result stands in stark contrast to the self-reported hypothetical behavior of the participants from Experiment 2 and supports the assumption that assessments of hypothetical behavior in a Human-Robot obedience scenario are not a reliable indicator of real behavior in a live HRI. Situational variables, such as obedience to authorities, even to a robot, seem to be stronger than empathy for a robot. This finding is in line with previous studies (e.g., Bartneck & Hu, 2008; Geiskkovitch et al., 2016; Menne, 2017; Slater et al., 2006), opens up new insights into the influence of robots, but also shows that the choice of a method to evoke empathy for a robot is not a trivial matter (cf. Geiskkovitch et al., 2016; cf. Milgram, 1965). 
Overall, the results support the assumption that emotional reactions to robots are profound and manifest both at the subjective level and in the motor component. Humans react emotionally to a robot that is emotionally expressive and looks less like a machine. They feel empathy and negative feelings when a robot is abused, and these emotional reactions are reflected in facial expressions. In addition, people's assessments of their own hypothetical behavior differ from their actual behavior, which is why video-based or live interactions are recommended for analyzing real behavioral responses. The arrival of social robots in society leads to unprecedented questions, and this dissertation provides a first step towards understanding these new challenges. N2 - Are there emotional reactions towards social robots? Could you love a robot? Or, put the other way round: Could you mistreat a robot, tear it apart and sell it? Media reports describe people honoring military robots with funerals, mourning the “death” of a robotic dog, and granting the humanoid robot Sophia citizenship. But how profound are these reactions? Three experiments take a closer look at emotional reactions towards social robots by investigating people's subjective experience as well as the motor-expressive level. Contexts of varying degrees of Human-Robot Interaction (HRI) sketch a nuanced picture of emotions towards social robots that encompasses conscious as well as unconscious reactions. The findings advance the understanding of affective experiences in HRI. They also turn the initial question into: Can emotional reactions towards social robots even be avoided? 
T2 - Im Angesicht sozialer Roboter - Emotionale Reaktionen angesichts sozialer Roboter KW - Roboter KW - social robot KW - emotion KW - FACS KW - Facial Action Coding System KW - facial expressions KW - emotional reaction KW - Human-Robot Interaction KW - HRI KW - obedience KW - empathy KW - Gefühl KW - Mimik KW - Mensch-Maschine-Kommunikation Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-187131 SN - 978-3-95826-120-4 SN - 978-3-95826-121-1 N1 - Parallel erschienen als Druckausgabe in Würzburg University Press, 978-3-95826-120-4, 27,80 EUR. PB - Würzburg University Press CY - Würzburg ET - 1. Auflage ER - TY - JOUR A1 - Mocke, Viola A1 - Weller, Lisa A1 - Frings, Christian A1 - Rothermund, Klaus A1 - Kunde, Wilfried T1 - Task relevance determines binding of effect features in action planning JF - Attention, Perception, & Psychophysics N2 - Action planning can be construed as the temporary binding of features of perceptual action effects. While previous research demonstrated binding for task-relevant, body-related effect features, the role of task-irrelevant or environment-related effect features in action planning is less clear. Here, we studied whether task-relevance or body-relatedness determines feature binding in action planning. Participants planned an action A, but before executing it initiated an intermediate action B. Each action relied on a body-related effect feature (index vs. middle finger movement) and an environment-related effect feature (cursor movement towards vs. away from a reference object). In Experiments 1 and 2, both effects were task-relevant. Performance in action B suffered from partial feature overlap with action A compared to full feature repetition or alternation, which is in line with binding of both features while planning action A. Importantly, this cost disappeared when all features were available but only body-related features were task-relevant (Experiment 3). 
When only the environment-related effect of action A was known in advance, action B benefitted when it aimed at the same (vs. a different) environment-related effect (Experiment 4). Consequently, the present results support the idea that task relevance determines whether binding of body-related and environment-related effect features takes place, while the pre-activation of environment-related features without binding them primes feature-overlapping actions. KW - action planning KW - motor control KW - binding KW - effect anticipations Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-231906 SN - 1943-3921 VL - 82 ER - TY - JOUR A1 - Murali, Supriya A1 - Händel, Barbara T1 - Motor restrictions impair divergent thinking during walking and during sitting JF - Psychological Research N2 - Creativity, specifically divergent thinking, has been shown to benefit from unrestrained walking. Despite these findings, it is not clear if it is the lack of restriction that leads to the improvement. Our goal was to explore the effects of motor restrictions on divergent thinking for different movement states. In addition, we assessed whether spontaneous eye blinks, which are linked to motor execution, also predict performance. In Experiment 1, we compared performance in Guilford's alternate uses task (AUT) during walking vs. sitting, and analysed eye blink rates during both conditions. We found that AUT scores were higher during walking than sitting. Although eye blink rates differed significantly between movement conditions (walking vs. sitting) and task phases (baseline vs. thinking vs. responding), they did not correlate with task performance. In Experiments 2 and 3, participants either walked freely or along a restricted path, or sat freely or fixated on a screen. When the factor restriction was explicitly manipulated, the effect of walking was reduced, while restriction showed a significant influence on the fluency scores. 
Importantly, we found a significant between-subject correlation between the rate of eye blinks and creativity scores, depending on the restriction condition. Our study shows a movement state-independent effect of restriction on divergent thinking. In other words, similar to unrestrained walking, unrestrained sitting also improves divergent thinking. Furthermore, we discuss a mechanistic explanation of the effect of restriction on divergent thinking based on the increased size of the focus of attention and the consequent bias towards flexibility. KW - creativity KW - humans KW - sitting KW - walking KW - thinking Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-267722 SN - 1430-2772 VL - 86 IS - 7 ER - TY - JOUR A1 - Kozlik, Julia A1 - Neumann, Roland A1 - Lozo, Ljubica T1 - Contrasting motivational orientation and evaluative coding accounts: on the need to differentiate the effectors of approach/avoidance responses JF - Frontiers in Psychology N2 - Several emotion theorists suggest that valenced stimuli automatically trigger motivational orientations and thereby facilitate corresponding behavior. Positive stimuli were thought to activate approach motivational circuits, which in turn primed approach-related behavioral tendencies, whereas negative stimuli were supposed to activate avoidance motivational circuits so that avoidance-related behavioral tendencies were primed (motivational orientation account). However, recent research suggests that typically observed affective stimulus response compatibility phenomena might be entirely explained in terms of theories accounting for mechanisms of general action control instead of assuming motivational orientations to mediate the effects (evaluative coding account). In what follows, we explore to what extent this notion is applicable. We present literature suggesting that evaluative coding mechanisms indeed influence a wide variety of affective stimulus response compatibility phenomena. 
However, the evaluative coding account does not seem to be sufficient to explain affective S-R compatibility effects. Instead, several studies provide clear evidence in favor of the motivational orientation account that seems to operate independently of evaluative coding mechanisms. Implications for theoretical developments and future research designs are discussed. KW - emotional facial expressions KW - cerebral asymmetry KW - compatibility KW - perception KW - affective S-R compatibility KW - approach-avoidance behavior KW - automatic evaluation KW - arm flexion KW - stimuli KW - determinants KW - information KW - emotional responses KW - approach and avoidance KW - facial muscle contractions KW - theory of event coding Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-143192 VL - 6 IS - 563 ER - TY - JOUR A1 - Klaffehn, Annika L. A1 - Sellmann, Florian B. A1 - Kirsch, Wladimir A1 - Kunde, Wilfried A1 - Pfister, Roland T1 - Temporal binding as multisensory integration: Manipulating perceptual certainty of actions and their effects JF - Attention, Perception & Psychophysics N2 - It has been proposed that statistical integration of multisensory cues may be a suitable framework to explain temporal binding, that is, the finding that causally related events such as an action and its effect are perceived to be shifted towards each other in time. A multisensory approach to temporal binding construes actions and effects as individual sensory signals, which are each perceived with a specific temporal precision. When they are integrated into one multimodal event, like an action-effect chain, the extent to which they affect this event's perception depends on their relative reliability. We test whether this assumption holds true in a temporal binding task by manipulating certainty of actions and effects. 
Two experiments suggest that a relatively uncertain sensory signal in such action-effect sequences is shifted more towards its counterpart than a relatively certain one. This was especially pronounced for temporal binding of the action towards its effect but could also be shown for effect binding. Other conceptual approaches to temporal binding cannot easily explain these results, and the study therefore adds to the growing body of evidence endorsing a multisensory approach to temporal binding. KW - temporal processing KW - perception and action KW - multisensory processing Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-273195 SN - 1943-393X VL - 83 IS - 8 ER - TY - JOUR A1 - Berti, Stefan A1 - Vossel, Gerhard A1 - Gamer, Matthias T1 - The orienting response in healthy aging: Novelty P3 indicates no general decline but reduced efficacy for fast stimulation rates JF - Frontiers in Psychology N2 - Automatic orienting to unexpected changes in the environment is a pre-requisite for adaptive behavior. One prominent mechanism of automatic attentional control is the Orienting Response (OR). Despite the fundamental significance of the OR in everyday life, only little is known about how the OR is affected by healthy aging. We tested this question in two age groups (19–38 and 55–72 years) and measured skin-conductance responses (SCRs) and event-related brain potentials (ERPs) to novels (i.e., short environmental sounds presented only once in the experiment; 10% of the trials) compared to standard sounds (600 Hz sinusoidal tones with 200 ms duration; 90% of the trials). Novel and standard stimuli were presented in four conditions differing in the inter-stimulus interval (ISI) with a mean ISI of either 10, 3, 1, or 0.5 s (blocked presentation). In both age groups, pronounced SCRs were elicited by novels in the 10 s ISI condition, suggesting the elicitation of stable ORs. 
These effects were accompanied by pronounced N1 and frontal P3 amplitudes in the ERP, suggesting that automatic novelty processing and orientation of attention are effective in both age groups. Furthermore, the SCR and ERP effects declined with decreasing ISI length. In addition, differences between the two groups were observable with the fastest presentation rates (i.e., 1 and 0.5 s ISI length). The most prominent difference was a shift of the peak of the frontal positivity from around 300 to 200 ms in the 19–38 years group while in the 55–72 years group the amplitude of the frontal P3 decreased linearly with decreasing ISI length. Taken together, this pattern of results does not suggest a general decline in processing efficacy with healthy aging. At least with very rare changes (here, the novels in the 10 s ISI condition) the OR is as effective in healthy older adults as in younger adults. With faster presentation rates, however, the efficacy of the OR decreases. This seems to result in a switch from novelty to deviant processing in younger adults, but less so in the group of older adults. KW - psychology KW - attention KW - change detection KW - auditory system KW - novelty processing KW - event-related potential (ERP) KW - P300 KW - skin conductance response (SCR) Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-173651 VL - 8 ER - TY - JOUR A1 - Scherer, Reinhold A1 - Faller, Josef A1 - Friedrich, Elisabeth V. C. A1 - Opisso, Eloy A1 - Costa, Ursula A1 - Kübler, Andrea A1 - Müller-Putz, Gernot R. T1 - Individually Adapted Imagery Improves Brain-Computer Interface Performance in End-Users with Disability JF - PLoS ONE N2 - Brain-computer interfaces (BCIs) translate oscillatory electroencephalogram (EEG) patterns into action. Different mental activities modulate spontaneous EEG rhythms in various ways. Non-stationarity and inherent variability of EEG signals, however, make reliable recognition of modulated EEG patterns challenging. 
Able-bodied individuals who use a BCI for the first time achieve - on average - binary classification performance of about 75%. Performance in users with central nervous system (CNS) tissue damage is typically lower. User training generally enhances reliability of EEG pattern generation and thus also robustness of pattern recognition. In this study, we investigated the impact of mental tasks on binary classification performance in BCI users with CNS tissue damage such as persons with stroke or spinal cord injury (SCI). Motor imagery (MI), that is, the kinesthetic imagination of movement (e.g. squeezing a rubber ball with the right hand), is the "gold standard" and mainly used to modulate EEG patterns. Based on our recent results in able-bodied users, we hypothesized that pairwise combination of "brain-teaser" (e.g. mental subtraction and mental word association) and "dynamic imagery" (e.g. hand and feet MI) tasks significantly increases classification performance of induced EEG patterns in the selected end-user group. Within-day (How stable is the classification within a day?) and between-day (How well does a model trained on day one perform on unseen data of day two?) analysis of variability of mental task pair classification in nine individuals confirmed the hypothesis. We found that the use of the classical MI task pair hand vs. feet leads to significantly lower classification accuracy - on average up to 15% less - in most users with stroke or SCI. User-specific selection of task pairs was again essential to enhance performance. We expect that the gained evidence will significantly contribute to making imagery-based BCI technology accessible to a larger population of users, including individuals with special needs due to CNS damage.
KW - single-trial EEG classification KW - motor imagery technology KW - spatial filters movement KW - communication systems KW - BCI Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-143021 VL - 10 IS - 5 ER -