Our current environment is characterized by the omnipresence of food cues. The sight and smell of real foods, but also graphical depictions of appetizing foods, can guide our eating behavior, for example, by eliciting food craving and influencing food choice. The relevance of visual food cues to human information processing has been demonstrated by a growing body of studies employing food images across the disciplines of psychology, medicine, and neuroscience. However, currently used food image sets vary considerably across laboratories, and image characteristics (contrast, brightness, etc.) and food composition (calories, macronutrients, etc.) are often unspecified. These factors might have contributed to some of the inconsistencies in this research. To remedy this, we developed food-pics, a picture database comprising 568 food images and 315 non-food images along with detailed meta-data. A total of N = 1988 individuals with large variance in age and weight from German-speaking countries and North America provided normative ratings of valence, arousal, palatability, desire to eat, recognizability, and visual complexity. Furthermore, data on macronutrients (g), energy density (kcal), and physical image characteristics (color composition, contrast, brightness, size, complexity) are provided. The food-pics image database is freely available under the creative commons license with the hope that the set will facilitate standardization and comparability across studies and advance experimental research on the determinants of eating behavior.
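Physical image characteristics such as brightness, contrast, and color composition can be derived directly from pixel data. The following Python sketch (using Pillow and NumPy; an illustration of plausible measures, not the pipeline used for food-pics) shows how such descriptors might be computed:

```python
# Illustrative sketch: simple physical image descriptors
# (mean brightness, RMS contrast, mean RGB color composition).
# Not the food-pics authors' pipeline; measures are assumptions.
import numpy as np
from PIL import Image

def image_descriptors(path):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=float) / 255.0
    gray = img.mean(axis=2)                              # simple luminance proxy
    return {
        "brightness": gray.mean(),                       # mean luminance in [0, 1]
        "contrast": gray.std(),                          # RMS contrast
        "mean_rgb": img.reshape(-1, 3).mean(axis=0),     # mean R, G, B channels
    }

print(image_descriptors("apple.jpg"))                    # hypothetical file name
```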
Reading is not the passive reception of written material but an active, reciprocal interaction between text and reader. The acquisition of reading competence is therefore a complex and lengthy process that does not end with learning to read and write in primary school but continues into adulthood.
In national and international studies, German adolescents showed in part serious deficits in reading competence. Numerous influencing factors and starting points for support measures have since been identified, and interventions have been designed. To deploy and evaluate these measures in a targeted and productive manner, however, students' performance levels must be assessed comprehensively. Suitable diagnostic instruments for the middle and upper grade levels have so far been lacking. Therefore, two reading tests for secondary school were developed in the project "LESEN - Lesen ermöglicht Sinnentnahme" (reading enables comprehension): LESEN 6-7 for grades six and seven, and LESEN 8-9 for grades eight and nine.
LESEN 6-7 and LESEN 8-9 are two analogously structured reading tests that focus primarily on the cognitive aspects of reading competence, that is, reading comprehension. Each test contains two subtests: Basic Reading Competence (BLK) and Text Comprehension (TV). The BLK subtest consists of a sentence-reading task and measures reading speed and the comprehension of short, simple sentences. The TV subtest contains one expository and one narrative text with closed-format comprehension questions that probe the processing of the content. The structure of the tests thus follows the current state of research, according to which reading comprehension comprises both basic processes and higher-order comprehension. With respect to comprehension, the literature describes several processing levels, which were explicitly taken into account in the construction of the TV subtest.
Methodologically, the construction of LESEN 6-7 and LESEN 8-9 was initially based on Classical Test Theory (CTT). Whereas no additional test model was required for the BLK subtest, since the number of sentences read within the given time already constitutes a metric variable, the TV subtest was based on the dichotomous Rasch model. For the latter, corresponding Rasch statistics were therefore additionally used for item selection. Both tests were normed on large samples that included students from several German federal states and different school types. In addition, both subtests were thoroughly examined for reliability and validity as well as other common criteria of test quality. The TV subtest was further tested for conformity with the Rasch model.
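For reference, the dichotomous Rasch model specifies the probability that person $v$ solves item $i$ as a function of the person's ability $\theta_v$ and the item's difficulty $\beta_i$ (standard notation; the exact parameterization used in the thesis is not spelled out here):

$$ P(X_{vi} = 1 \mid \theta_v, \beta_i) = \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)} $$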
The results of the empirical evaluation of the two tests are very satisfactory. The norm sample comprises 1,644 students for LESEN 6-7 and 945 students for LESEN 8-9. Both the CTT and the Rasch reliability estimates lie in the medium-high to high range. Content validity follows from the item contents, which were stringently derived from theory. Construct validity is supported by mostly high to very high correlations with scales measuring related constructs. In the sense of convergent validity, the scores of LESEN 6-7 and LESEN 8-9 also correlate more strongly with construct-related external criteria (teacher ratings of reading competence, German grade) than with construct-distant external criteria (overall grade average, mathematics grade). The low to absent correlations with construct-distant external criteria indicate the discriminant validity of the tests. Furthermore, the largely expectation-consistent results regarding various hypotheses derived from theory and previous empirical findings, among others concerning grade-level and school-type differences, speak for the validity of LESEN 6-7 and LESEN 8-9. The results of the Rasch model conformity tests for the TV subtest support item homogeneity in both tests, but rather argue against person homogeneity.
Overall, LESEN 6-7 and LESEN 8-9 meet common criteria of test quality to a satisfactory degree. At both the group and the individual level, they permit a comprehensive assessment of the reading comprehension of secondary school students, as well as differentiation across the entire performance spectrum in all four grade levels.
Conflict Management
(2014)
Humans have a remarkable ability to plan ahead, set goals for the future, and then act accordingly. Unfortunately, we do not always act this way. Everybody has experienced situations in which motivational urges, like the temptation to drink another beer, or over-learned behavioral routines, like driving on the right side of the road, collide with one's goals. This tug of war between impulsive or habitual action tendencies and goal-directed actions is called a conflict.
Conflict is ubiquitous and comes in many different forms. Not surprisingly, the means to control conflict are diverse, too. Clearly, people can manage conflict in multiple ways: When expecting a conflict situation to occur in the future, one can recruit more effort to resolve the conflict, for instance by inhibiting unwanted urges or habits. Alternatively, one can avoid the conflict situation and thereby circumvent possible failures to control habits and impulses. Furthermore, when currently facing a conflict, people can mobilize more effort to overcome the conflict. Alternatively, they can withdraw from the conflict situation to minimize the risk of indulging in their impulses and habits.
To account for these different ways of mastering a conflict, the present thesis takes an initial step towards characterizing the variability of control. To this aim, two dimensions of control are identified that result from partially incompatible constraints on action control. These dimensions depict a trade-off between flexibility and stability and between anticipatory early selection and reactive late correction of control parameters. To describe how these control trade-offs interact and to explain how conflict is handled to ensure adaptive behavior, the conflict management framework is proposed. A corollary of this framework suggests that one strategy to control conflict comprises a tendency to withdraw from a conflict situation.
The empirical part probed this behavioral response to conflict and tested whether participants withdraw from conflict situations. To approach this hypothesis, three series of experiments are presented that employ free-choice paradigms, speeded response classification tasks, and continuous movement tracking tasks to reveal withdrawal from conflict. Results show that conflict caused motivational avoidance tendencies (Experiments 1 & 2), biased decision making away from conflict tasks (Experiments 3 & 5), and affected the execution of more complex courses of action (Experiments 6 & 7).
The results lend support to the proposed conflict management framework and provide the ground for a more thorough treatment of how the different conflict strategies can be integrated. As a first step, a connectionist model is presented that accounts for the simultaneous implementation of two conflict strategies observed in Experiments 3-5. The remainder of the present thesis analyzes failures to integrate different conflict strategies. It is discussed how the conflict management framework can shed light on selected psychopathologies, inter-individual differences in control, and breakdowns of self-control.
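The connectionist model itself is not reproduced here. As a generic illustration of how conflict can be quantified in such models, conflict-monitoring accounts (e.g., Botvinick and colleagues) define conflict as the Hopfield energy of simultaneously active, mutually inhibiting response units; the sketch below follows that assumption and is not the thesis's model:

```python
# Generic conflict-monitoring sketch (Botvinick-style Hopfield energy),
# not the connectionist model proposed in the thesis.
import numpy as np

def response_conflict(activations, inhibition=-1.0):
    """Conflict = -sum_{i<j} w_ij * a_i * a_j with inhibitory w_ij < 0:
    high when incompatible response units are active at the same time."""
    a = np.asarray(activations, dtype=float)
    conflict = 0.0
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            conflict += -inhibition * a[i] * a[j]
    return conflict

print(response_conflict([0.9, 0.1]))  # low conflict: one response dominates
print(response_conflict([0.6, 0.6]))  # high conflict: both responses active
```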
Task instructions modulate the attentional mode affecting the auditory MMN and the semantic N400
(2014)
Event-related potentials (ERPs) have proven to be a useful tool to complement clinical assessment and to detect residual cognitive functions in patients with disorders of consciousness. These ERPs are often recorded using passive or unspecific instructions. Patient data obtained this way are then compared to data from healthy participants, which are usually recorded using active instructions. The present study investigates the effect of attentional modulation, and particularly the effect of active vs. passive instruction, on the ERP components mismatch negativity (MMN) and N400. A sample of 18 healthy participants listened to three auditory paradigms: an oddball, a word priming, and a sentence paradigm. Each paradigm was presented three times with different instructions: ignoring the auditory stimuli, passive listening, and focused attention on the auditory stimuli. After each task, the participants indicated their subjective effort. The N400 decreased from the focused task to the passive task and was absent in the ignore task. The MMN exhibited higher amplitudes in the focused and passive tasks compared to the ignore task. The data indicate an effect of attention on the supratemporal component of the MMN. Subjective effort was equally high in the passive and focused tasks but reduced in the ignore task. We conclude that passive listening during EEG recording is stressful and attenuates ERPs, which renders the interpretation of results obtained under such conditions difficult.
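As background, the MMN is conventionally quantified as a difference wave: the averaged ERP to standard stimuli is subtracted from the averaged ERP to deviants. A minimal NumPy sketch with synthetic epochs (illustrative only; not the study's analysis pipeline):

```python
# Illustrative MMN computation: average epochs per condition, subtract,
# and measure the most negative value in a typical MMN time window.
# Synthetic data; sampling rate and window are assumptions.
import numpy as np

fs = 250                                    # sampling rate (Hz), assumed
t = np.arange(-0.1, 0.5, 1 / fs)            # epoch from -100 to 500 ms
rng = np.random.default_rng(0)
standard = rng.normal(0, 1, (200, t.size))  # 200 standard-tone epochs
deviant = rng.normal(0, 1, (50, t.size))    # 50 deviant-tone epochs
deviant += -2.0 * np.exp(-((t - 0.15) ** 2) / 0.001)  # injected MMN-like dip

erp_std = standard.mean(axis=0)             # averaging cancels random noise
erp_dev = deviant.mean(axis=0)
mmn_wave = erp_dev - erp_std                # deviant-minus-standard difference
window = (t >= 0.1) & (t <= 0.25)           # typical MMN latency range
print("MMN amplitude (a.u.):", mmn_wave[window].min())
```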
Fear conditioning is an efficient model of associative learning, which has greatly improved our knowledge of processes underlying the development and maintenance of pathological fear and anxiety. In a differential fear conditioning paradigm, one initially neutral stimulus (NS) is paired with an aversive event (unconditioned stimulus, US), whereas another stimulus does not have any consequences. After a few pairings the NS is associated with the US and consequently becomes a conditioned stimulus (CS+), which elicits a conditioned response (CR).
The formation of explicit knowledge of the CS/US association during conditioning is referred to as contingency awareness. Findings about its role in fear conditioning are ambiguous. The development of a CR without contingency awareness has been shown in delay fear conditioning studies. One speaks of delay conditioning when the US coterminates with or follows directly upon the CS+. In trace conditioning, a temporal gap or “trace interval” lies between CS+ and US. According to existing evidence, trace conditioning is not possible on an implicit level and requires more cognitive resources than delay conditioning.
The associations formed during fear conditioning are not exclusively associations between specific cues and aversive events. Contextual cues form the background milieu of the learning process and play an important role in both the acquisition and the extinction of conditioned fear and anxiety. A common limitation of human fear conditioning studies is the lack of ecological validity, especially regarding contextual information. The use of Virtual Reality (VR) is a promising approach for creating a more complex environment that is close to a real-life situation.
I conducted three studies to examine cue and contextual fear conditioning with regard to the role of contingency awareness. For this purpose a VR paradigm was created, which allowed for exact manipulation of cues and contexts as well as timing of events. In all three experiments, participants were guided through one or more virtual rooms serving as contexts, in which two different lights served as CS and an electric stimulus as US. Fear potentiated startle (FPS) responses were measured as an indicator of implicit fear conditioning. To test whether participants had developed explicit awareness of the CS-US contingencies, subjective ratings were collected.
The first study was designed as a pilot study to test the VR paradigm as well as the conditioning protocol. Additionally, I was interested in the effect of contingency awareness. Results provided evidence that eye blink conditioning is possible in the virtual environment and that it does not depend on contingency awareness. Evaluative conditioning, as measured by subjective ratings, was only present in the group of participants who explicitly learned the association between CS and US.
To examine acquisition and extinction of both fear-associated cues and contexts, a novel cue-context generalization paradigm was applied in the second study. Besides the interplay of cues and contexts, I was again interested in the effect of contingency awareness. Two different virtual offices served as fear and safety context, respectively. During acquisition, the CS+ was always followed by the US in the fear context. In the safety context, none of the lights had any consequences. During extinction, an additional (novel) context was introduced, and no US was delivered in any of the contexts. Participants showed enhanced startle responses to the CS+ compared to the CS- in the fear context. Thus, discriminative learning took place regarding both cues and contexts during acquisition. This was confirmed by subjective ratings, although only for participants with explicit contingency awareness. Generalization of fear to the novel context after conditioning did not depend on awareness and was observable only at trend level.
In a third experiment I looked at the neuronal correlates involved in the extinction of fear memory by means of functional magnetic resonance imaging (fMRI). Of particular interest were differences between the extinction of delay and trace fear conditioning. I applied the paradigm tested in the pilot study and additionally manipulated the timing of the stimuli: In the delay conditioning group (DCG) the US was administered at the offset of one light (CS+); in the trace conditioning group (TCG) the US was presented 4 s after CS+ offset. Most importantly, prefrontal activation differed between the two groups. In line with existing evidence, the ventromedial prefrontal cortex (vmPFC) was activated in the DCG. In the TCG I found activation of the dorsolateral prefrontal cortex (dlPFC), which might be associated with the modulation of working memory processes necessary for bridging the trace interval and holding information in short-term memory.
Taken together, virtual reality proved to be an elegant tool for examining human fear conditioning in complex environments, and especially for manipulating contextual information. Results indicate that explicit knowledge of contingencies is necessary for attitude formation in fear conditioning, but not for a CR on an implicit level as measured by FPS responses. They provide evidence for a two-level account of fear conditioning. Discriminative learning was successful regarding both cues and contexts. The imaging results point to different extinction processes in delay and trace conditioning, hinting that a higher working memory contribution is required for trace than for delay conditioning.
Extinction is an important mechanism for inhibiting initially acquired fear responses. There is growing evidence that the ventromedial prefrontal cortex (vmPFC) inhibits the amygdala and therefore plays an important role in the extinction of delay fear conditioning. To our knowledge, there is so far no evidence on the role of the prefrontal cortex in the extinction of trace conditioning. Thus, we compared brain structures involved in the extinction of human delay and trace fear conditioning in a between-subjects design in an fMRI study. Participants were passively guided through a virtual environment during learning and extinction of conditioned fear. Two different lights served as conditioned stimuli (CS); as unconditioned stimulus (US), a mildly painful electric stimulus was delivered. In the delay conditioning group (DCG) the US was administered at the offset of one light (CS+), whereas in the trace conditioning group (TCG) the US was presented 4 s after CS+ offset. Both groups showed insular and striatal activation during early extinction but differed in their prefrontal activation. The vmPFC was mainly activated in the DCG, whereas the TCG showed activation of the dorsolateral prefrontal cortex (dlPFC) during extinction. These results point to different extinction processes in delay and trace conditioning. VmPFC activation during the extinction of delay conditioning might reflect the inhibition of the fear response. In contrast, dlPFC activation during the extinction of trace conditioning may reflect the modulation of working memory processes involved in bridging the trace interval and holding information in short-term memory.
In everyday life, multiple sensory channels jointly trigger emotional experiences, and one channel may alter processing in another. For example, seeing an emotional facial expression and hearing the voice's emotional tone jointly create the emotional experience. This example, in which auditory and visual input relate to social communication, has gained considerable attention from researchers. However, interactions of visual and auditory emotional information are not limited to social communication but extend to much broader contexts including human, animal, and environmental cues. In this article, we review current research on audiovisual emotion processing beyond face-voice stimuli to develop a broader perspective on multimodal interactions in emotion processing. We argue that current concepts of multimodality should be extended to consider an ecologically valid variety of stimuli in audiovisual emotion processing. Therefore, we provide an overview of studies in which emotional sounds and their interactions with complex pictures of scenes were investigated. In addition to behavioral studies, we focus on neuroimaging as well as electro- and peripheral-physiological findings. Furthermore, we integrate these findings and identify similarities and differences. We conclude with suggestions for future research.
The extinction of conditioned fear depends on an efficient interplay between the amygdala and the medial prefrontal cortex (mPFC). In rats, high-frequency electrical mPFC stimulation has been shown to improve extinction by means of a reduction of amygdala activity. However, so far it is unclear whether stimulation of homologous regions in humans might have similar beneficial effects. Healthy volunteers received one session of either active or sham repetitive transcranial magnetic stimulation (rTMS) covering the mPFC while undergoing a 2-day fear conditioning and extinction paradigm. Repetitive TMS was applied offline after fear acquisition, in which one of two faces (CS+ but not CS−) was associated with an aversive scream (UCS). Immediate extinction learning (day 1) and extinction recall (day 2) were conducted without UCS delivery. Conditioned responses (CR) were assessed in a multimodal approach using fear-potentiated startle (FPS), skin conductance responses (SCR), functional near-infrared spectroscopy (fNIRS), and self-report scales. Consistent with the hypothesis of modulated processing of conditioned fear after high-frequency rTMS, the active group showed reduced CS+/CS− discrimination during extinction learning, as evident in FPS as well as in SCR and arousal ratings. FPS responses to the CS+ further showed a linear decrement throughout both extinction sessions. This study describes the first experimental approach to influencing conditioned fear by using rTMS and can thus be a basis for future studies investigating mPFC stimulation as a complement to cognitive behavioral therapy (CBT).
Emotional eaters tend to eat predominantly sweet and high-fat foods in emotionally stressful situations, often in the absence of hunger, in order to cope with negative feelings. In unpleasant emotional states, they hardly engage with their emotions and eat instead. They often find it difficult to pay attention to and recognize their emotions as well as their feelings of hunger and satiety. Emotional eating can keep those affected from learning a constructive way of dealing with emotional stress and can lead to overweight, to the diet-related diseases that accompany overweight, or even to eating disorders. To prevent these long-term consequences from arising in the first place, it is of central importance to prevent or change problematic forms of emotional eating.
The present dissertation comprises three empirical studies in which the triggers and the modification of emotional eating were investigated using standardized and self-developed questionnaires as well as the experience sampling method. The experience sampling method is based on multiple, temporally precise measurements in the field, close to everyday life.
In the first study, the effect of negative feelings as triggers of emotional eating in everyday life was observed in healthy individuals. The findings suggest that there appear to be no specific negative emotions with a stronger influence on emotional eating than others, and that all negative feelings carry an almost equal potential to elicit emotional eating.
This insight informed the design of a training program for emotional eaters aimed at changing emotional eating, which combines mindfulness-based concepts with behavioral-therapy treatment methods.
The second and third studies used a randomized controlled design to test the feasibility and efficacy of the group training in outpatient and clinical settings. The training could be implemented very well both on an outpatient basis and in a clinic and was rated largely positively by the participants.
After the training, emotional eaters ate less emotionally: through mindful self-observation they learned to better recognize their eating triggers and to distinguish them from bodily sensations of hunger. Instead, they began, to a certain degree, to eat more mindfully and to enjoy their food. Emotion regulation also improved in many respects. Training participants developed the skills, or tended to develop them, to perceive their feelings more attentively, to recognize and name them more clearly, to influence them more positively, and to accept them more easily when they could not be changed at the moment.
Thus, the feasibility and efficacy of the developed training program could be demonstrated both in the outpatient setting and, in its most important aspects, in the clinical context as part of a broader therapy concept.
The three studies presented contribute to the understanding and modification of emotional eating. Further research should analyze characteristics of the external situation as risk factors for emotional eating and test the temporal stability of the training effects beyond a three-month follow-up. The efficacy of the training program could furthermore be tested against other therapy approaches and in multicenter trials in different clinics under a wide range of conditions.
Cognitive theories on the causes of developmental dyslexia can be divided into language-specific and general accounts. While the former assume that words are special in that the associated processing problems are rooted in deficits of language-related cognition (e.g., phonology), the latter propose that dyslexia is rather rooted in a general impairment of cognitive (e.g., visual and/or auditory) processing streams. In the present study, we examined to what extent dyslexia (typically characterized by poor orthographic representations) may be associated with a general deficit in visual long-term memory (LTM) for details. We compared object- and detail-related visual LTM performance (and phonological skills) between dyslexic primary school children and IQ-, age-, and gender-matched controls. The results revealed that while the overall number of LTM errors was comparable between groups, dyslexic children exhibited a greater proportion of detail-related errors. The results suggest that not only phonological but also general visual resolution deficits in LTM may play an important role in developmental dyslexia.
Brain-computer interfaces (BCIs) strive to decode brain signals into control commands for severely handicapped people with no means of muscular control. These potential users of noninvasive BCIs display a large range of physical and mental conditions. Prior studies have shown the general applicability of BCIs with patients, at the cost of either using many training sessions or studying only moderately restricted patients. We present a BCI system designed to establish external control for severely motor-impaired patients within a very short time. Within only six experimental sessions, three out of four patients were able to gain significant control over the BCI, which was based on motor imagery or attempted execution. For the most affected patient, we found evidence that the BCI could outperform the patient's best assistive technology (AT) in terms of control accuracy, reaction time, and information transfer rate. We credit this success to the applied user-centered design approach and to a highly flexible technical setup. State-of-the-art machine learning methods allowed the exploitation and combination of multiple relevant features contained in the EEG, which rapidly enabled the patients to gain substantial BCI control. Thus, we could show the feasibility of a flexible and tailorable BCI application in severely disabled users. This can be considered a significant success for two reasons: Firstly, the results were obtained within a short period of time, matching the tight clinical requirements. Secondly, the participating patients showed, compared to most other studies, very severe communication deficits. They were dependent on everyday use of AT, and two patients were in a locked-in state. For the most affected patient, reliable communication was rarely possible with existing AT.
Background
People with severe disabilities, e.g. due to neurodegenerative disease, depend on technology that allows for accurate wheelchair control. For those who cannot operate a wheelchair with a joystick, brain-computer interfaces (BCI) may offer a valuable option. Technology depending on visual or auditory input may not be feasible as these modalities are dedicated to processing of environmental stimuli (e.g. recognition of obstacles, ambient noise). Herein we thus validated the feasibility of a BCI based on tactually-evoked event-related potentials (ERP) for wheelchair control. Furthermore, we investigated use of a dynamic stopping method to improve speed of the tactile BCI system.
Methods
Positions of four tactile stimulators represented navigation directions (left thigh: move left; right thigh: move right; abdomen: move forward; lower neck: move backward), and N = 15 participants delivered navigation commands by focusing their attention on the desired tactile stimulus in an oddball paradigm.
Results
Participants navigated a virtual wheelchair through a building and eleven participants successfully completed the task of reaching 4 checkpoints in the building. The virtual wheelchair was equipped with simulated shared-control sensors (collision avoidance), yet these sensors were rarely needed.
Conclusion
We conclude that most participants achieved tactile ERP-BCI control sufficient to reliably operate a wheelchair, and that dynamic stopping was of high value for tactile ERP classification. Finally, this paper discusses the feasibility of tactile ERPs for BCI-based wheelchair control.
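As an illustration of the dynamic stopping idea, a classifier can accumulate evidence for each of the four navigation commands over stimulation rounds and stop early once one command clearly dominates, instead of always presenting a fixed number of sequences. A minimal sketch with synthetic classifier scores (the study's actual classifier and stopping criterion are not specified here):

```python
# Illustrative dynamic-stopping loop for a 4-command tactile ERP-BCI.
# Synthetic scores; threshold and round limit are assumptions.
import numpy as np

COMMANDS = ["left", "right", "forward", "backward"]

def select_command(score_stream, threshold=0.90, max_rounds=10):
    evidence = np.zeros(len(COMMANDS))
    for round_ in range(1, max_rounds + 1):
        evidence += next(score_stream)               # one score per command
        posterior = np.exp(evidence) / np.exp(evidence).sum()  # softmax
        if posterior.max() >= threshold:             # confident enough: stop early
            return COMMANDS[posterior.argmax()], round_
    return COMMANDS[posterior.argmax()], max_rounds

def fake_scores(target=2, effect=0.8):
    """Simulated classifier output: attended stimulus scores higher."""
    rng = np.random.default_rng(1)
    while True:
        s = rng.normal(0, 1, len(COMMANDS))
        s[target] += effect
        yield s

print(select_command(fake_scores()))                 # e.g. ('forward', 4)
```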
Action feedback affects the perception of action-related objects beyond actual action success
(2014)
Successful object-oriented action typically increases the perceived size of the target objects aimed at. This phenomenon has been assumed to reflect an impact of an actor's current action ability on visual perception. However, actual action ability and explicit knowledge of the action outcome were confounded in previous studies. The present experiments aimed at disentangling these two factors. Participants repeatedly tried to hit a circular target varying in size with a stylus movement under restricted feedback conditions. After each movement they were explicitly informed about their success in hitting the target and were then asked to judge the target's size. The explicit feedback regarding movement success was manipulated orthogonally to actual movement success. The results of three experiments indicated a bias to judge relatively small targets as larger, and relatively large targets as smaller, after explicit feedback of failure than after explicit feedback of success. This pattern was independent of actual motor performance, suggesting that actors' evaluations of their motor actions may by themselves bias the perception of target objects.
Modulation of sensorimotor rhythms (SMR) has been suggested as a control signal for brain-computer interfaces (BCI). Yet there is a population of users, estimated at between 10 and 50%, who are not able to achieve reliable control, and only about 20% of users achieve high (80-100%) performance. Predicting performance prior to BCI use would facilitate the selection of the most feasible system for an individual, thus constituting a practical benefit for the user, and would increase our knowledge about the correlates of BCI control. In a recent study, we predicted SMR-BCI performance from psychological variables that were assessed prior to the BCI sessions, and BCI control was supported with machine-learning techniques. We described two significant psychological predictors, namely visuo-motor coordination ability and the ability to concentrate on the task. The purpose of the current study was to replicate these results, thereby validating these predictors within a neurofeedback-based SMR-BCI that involved no machine learning. Thirty-three healthy BCI novices participated in a calibration session and three further neurofeedback training sessions. Two variables were related to mean SMR-BCI performance: (1) a measure of the accuracy of fine motor skills, i.e., a proxy for a person's visuo-motor control ability; and (2) the subject's "attentional impulsivity". In a linear regression they accounted for almost 20% of the variance in SMR-BCI performance, but predictor (1) failed significance. Nevertheless, on the basis of our prior regression model for sensorimotor control ability we could predict current SMR-BCI performance with an average prediction error of M = 12.07%. In more than 50% of the participants, the prediction error was smaller than 10%. Hence, psychological variables played a moderate role in predicting SMR-BCI performance in a neurofeedback approach that involved no machine learning. Future studies are needed to further consolidate (or reject) the present predictors.
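The described prediction logic, a linear regression from two psychological predictors to SMR-BCI performance evaluated via the absolute prediction error, can be sketched as follows (synthetic data; variable names, scales, and coefficients are assumptions, not the study's values):

```python
# Illustrative sketch: predict SMR-BCI performance (%) from two
# psychological predictors and report the mean absolute prediction error.
# Synthetic data and hypothetical variable names.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 33                                         # sample size as in the study
visuomotor = rng.normal(50, 10, n)             # fine motor accuracy (assumed scale)
impulsivity = rng.normal(20, 5, n)             # "attentional impulsivity" (assumed)
performance = 40 + 0.4 * visuomotor - 0.5 * impulsivity + rng.normal(0, 8, n)

X = np.column_stack([visuomotor, impulsivity])
model = LinearRegression().fit(X, performance)
pred_error = np.abs(model.predict(X) - performance)
print(f"R^2 = {model.score(X, performance):.2f}, "
      f"mean |error| = {pred_error.mean():.2f}%")
```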
Objective: The aim of this longitudinal study was to identify predictors of instantaneous well-being in patients with amyotrophic lateral sclerosis (ALS). Based on flow theory, well-being was expected to be highest when perceived demands and perceived control were in balance; thinking about the past was expected to be a risk factor for rumination, which would in turn reduce well-being.
Methods: Using the experience sampling method, data on current activities, associated aspects of perceived demands, control, and well-being were collected from 10 patients with ALS three times a day for two weeks.
Results: Results show that perceived control was uniformly and positively associated with well-being, but that demands were only positively associated with well-being when they were perceived as controllable. Mediation analysis confirmed thinking about the past, but not thinking about the future, to be a risk factor for rumination and reduced well-being.
Discussion: Findings extend our knowledge of factors contributing to well-being in ALS as not only perceived control but also perceived demands can contribute to well-being. They further show that a focus on present experiences might contribute to increased well-being.
Although research on brain-computer interfaces (BCI) for controlling applications has expanded tremendously, we still face a translational gap when bringing BCIs to end-users. To bridge this gap, we adapted the user-centered design (UCD) to BCI research and development, which implies a shift from focusing on single aspects, such as accuracy and information transfer rate (ITR), to a more holistic user experience. The UCD implements an iterative process between end-users and developers based on a valid evaluation procedure. Within the UCD framework, the usability of a device can be defined with regard to its effectiveness, efficiency, and satisfaction. We operationalized these aspects to evaluate BCI-controlled applications. Effectiveness was regarded as equivalent to the accuracy of selections, and efficiency as the amount of information transferred per time unit and the effort invested (workload). Satisfaction was assessed with questionnaires and visual-analogue scales. These metrics have been successfully applied to several BCI-controlled applications for communication and entertainment, which were evaluated by end-users with severe motor impairment. Results of four studies, involving a total of N = 19 end-users, revealed: effectiveness was moderate to high; efficiency in terms of ITR was low to high, and workload was low to medium; satisfaction was moderate to high, depending on the match between user and technology and the type of application. The evaluation metrics suggested here within the framework of the UCD proved to be an applicable and informative approach to evaluating BCI-controlled applications, and end-users with severe impairment and in the locked-in state were able to participate in this process.
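Efficiency in terms of ITR is commonly computed with Wolpaw's formula, which converts the number of selectable classes N and the selection accuracy P into bits per selection; a sketch of that standard computation follows (whether the four studies used exactly this variant is not stated here):

```python
# Wolpaw information transfer rate: bits per selection scaled to bits/min.
# Standard formula; clipping at-chance accuracy to zero is a convention.
import math

def itr_bits_per_min(n_classes, accuracy, selections_per_min):
    p, n = accuracy, n_classes
    if p <= 1.0 / n:                      # at/below chance: no information
        return 0.0
    bits = math.log2(n)                   # log2(N) for a perfect selection
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min

print(itr_bits_per_min(n_classes=2, accuracy=0.9, selections_per_min=10))
```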
Electroencephalography (EEG) often fails to assess both the level (i.e., arousal) and the content (i.e., awareness) of pathologically altered consciousness in patients without motor responsiveness. This might be related to a decline of awareness, to episodes of low arousal and disturbed sleep patterns, and/or to the distorting and attenuating effects of the skull and intermediate tissue on the recorded brain signals. Novel approaches are required to overcome these limitations. We introduced epidural electrocorticography (ECoG) for monitoring cortical physiology in a late-stage amyotrophic lateral sclerosis patient in completely locked-in state (CLIS). Despite long-term application over a period of six months, no implant-related complications occurred. Recordings from the left frontal cortex were sufficient to identify three arousal states. Spectral analysis of the intrinsic oscillatory activity enabled us to extract state-dependent dominant frequencies at <4 Hz, ~7 Hz, and ~20 Hz, representing sleep-like periods, phases of low arousal, and phases of elevated arousal, respectively. In the absence of other biomarkers, ECoG proved to be a reliable tool for monitoring circadian rhythmicity, i.e., avoiding interference with the patient when he was sleeping and exploiting time windows of responsiveness. Moreover, the effects of interventions addressing the patient's arousal, e.g., amantadine medication, could be evaluated objectively on the basis of physiological markers, even in the absence of behavioral parameters. Epidural ECoG constitutes a feasible trade-off between surgical risk and the quality of recorded brain signals for gaining information on the patient's present level of arousal. This approach enables us to optimize the timing of interactions and medical interventions, all of which should take place when the patient is in a phase of high arousal. Furthermore, avoiding periods of low responsiveness will facilitate measures to implement alternative communication pathways involving brain-computer interfaces (BCI).
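State-dependent dominant frequencies of the kind described can be identified with standard spectral estimation, e.g., Welch's method. A minimal sketch with a synthetic ~20 Hz signal (illustrative; not the study's exact analysis or parameters):

```python
# Illustrative sketch: estimate the power spectrum of an ECoG-like signal
# with Welch's method and report the dominant frequency. Synthetic data.
import numpy as np
from scipy.signal import welch

fs = 500                                         # sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)                     # one minute of signal
rng = np.random.default_rng(7)
ecog = np.sin(2 * np.pi * 20 * t) + rng.normal(0, 0.5, t.size)  # ~20 Hz state

freqs, psd = welch(ecog, fs=fs, nperseg=4 * fs)  # 4-s windows -> 0.25 Hz bins
dominant = freqs[psd.argmax()]
print(f"dominant frequency: {dominant:.2f} Hz")  # ~20 Hz -> elevated arousal
```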
Are certain foods addictive?
(2014)
Background: Food craving refers to an intense desire to consume a specific kind of food, of which chocolate is the most often craved one. It is this intensity and specificity that differentiate food craving from feelings of hunger. Although food craving and hunger often co-occur, an energy deficit is not a prerequisite for experiencing food craving; that is, it can also occur without being hungry. Food craving often precedes and predicts over- or binge eating, which makes it a reasonable target in the treatment of eating disorders or obesity. Among the arguably most extensively validated measures for the assessment of food craving are the Food Cravings Questionnaires (FCQs), which measure food craving on a state (FCQ-S) and a trait (FCQ-T) level. Specifically, the FCQ-S measures the intensity of current food craving, whereas the FCQ-T measures the frequency of food craving experiences in general. The aims of the present thesis were to provide a German measure for the assessment of food craving and to investigate cognitive, behavioral, and physiological correlates of food craving. For this purpose, a German version of the FCQs was presented and its reliability and validity were evaluated. Using self-reports, relationships between trait food craving and dieting were examined. Cognitive-behavioral correlates of food craving were investigated using food-related tasks assessing executive functions. Psychophysiological correlates of food craving were investigated using event-related potentials (ERPs) in the electroencephalogram and heart rate variability (HRV). Possible intervention approaches to reduce food craving were derived from the results of those studies.
Methods: The FCQs were translated into German and their psychometric properties and correlates were investigated in a questionnaire-based study (articles #1 & #2). The relationship between state and trait food craving with executive functioning was examined with behavioral tasks measuring working memory performance and behavioral inhibition which involved highly palatable food-cues (articles #3 & #4). Electrophysiological correlates of food craving were tested with ERPs during a craving regulation task (article #5). Finally, a pilot study on the effects of HRV-biofeedback for reducing food craving was conducted (article #6).
Results: The FCQs demonstrated high internal consistency, although their factorial structure could only be partially replicated. The FCQ-T also had high retest-reliability, which, as expected, was lower for the FCQ-S. Validity of the FCQ-S was shown by positive relationships with current food deprivation and negative affect. Validity of the FCQ-T was shown by positive correlations with related constructs. Importantly, scores on the subscales of the FCQ-T were able to discriminate between non-dieters and successful and unsuccessful dieters (article #1). Furthermore, scores on the FCQ-T mediated the relationship between rigid dietary control strategies and low dieting success (article #2). With regard to executive functioning, high-calorie food-cues impaired working memory performance, yet this was independent of trait food craving and rarely related to state food craving (article #3). Behavioral disinhibition in response to high-calorie food-cues was predicted by trait food craving, particularly when participants were also impulsive (article #4). Downregulation of food craving by cognitive strategies in response to high-calorie food-cues increased early, but not later, segments of the Late Positive Potential (LPP) (article #5). A few sessions of HRV-biofeedback reduced self-reported food cravings as well as eating and weight concerns in high trait food cravers (article #6).
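For context, HRV is typically quantified from inter-beat (RR) intervals; one standard time-domain index is RMSSD, sketched below (illustrative only; the specific HRV parameters used in article #6 are not given here):

```python
# Illustrative RMSSD computation from RR intervals (ms): the root mean
# square of successive differences, a standard time-domain HRV index.
# Not necessarily the parameter used in article #6.
import numpy as np

def rmssd(rr_intervals_ms):
    rr = np.asarray(rr_intervals_ms, dtype=float)
    return np.sqrt(np.mean(np.diff(rr) ** 2))

print(rmssd([812, 790, 845, 830, 800]))  # higher values = higher vagal HRV
```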
Conclusions: The German FCQs represent sound measures with good psychometric properties for the assessment of state and trait food craving. Although state food craving increases during cognitive tasks involving highly palatable food-cues, impairment of task performance does not appear to be mediated by current food craving experiences. Instead, trait food craving is associated with low behavioral inhibition in response to high-calorie food-cues, but not with impaired working memory performance. Future studies need to examine whether trait food craving and, subsequently, food-cue-affected behavioral inhibition can be reduced by using food-related inhibition tasks as a training. Current food craving and ERPs in response to food-cues can easily be modulated by cognitive strategies, yet the LPP probably does not represent a direct index of food craving. Finally, HRV-biofeedback may be a useful add-on element in the treatment of disorders in which food cravings are elevated. To conclude, the current thesis provided measures for the assessment of food craving in German and showed differential relationships of state and trait food craving with self-reported dieting behavior, food-cue-affected executive functioning, ERPs, and HRV-biofeedback. These results provide promising starting points for interventions to reduce food craving based on (1) food-cue-related behavioral trainings of executive functions, (2) cognitive craving regulation strategies, and (3) physiological parameters such as HRV-biofeedback.