The Nucleotide Excision Repair (NER) pathway is able to remove a vast diversity of structurally unrelated DNA lesions and is the only repair mechanism in humans responsible for the excision of UV-induced DNA damage. The NER mechanism raises two fundamental questions: 1) How is damage recognition achieved, discriminating damaged from non-damaged DNA? 2) How is DNA incision regulated, preventing the endonucleases from cleaving DNA non-specifically while inducing and ensuring dual incision of damaged DNA? The aim of this work was therefore to investigate the mechanisms leading from recognition to incision of damaged DNA. To decipher the underlying process of damage recognition in a prokaryotic model system, the first part of this work aimed to co-crystallize the helicase UvrB from Bacillus caldotenax together with a DNA substrate comprising a fluorescein-adducted thymine as an NER substrate. Incision assays employing DNA substrates with unpaired regions at different positions relative to the DNA lesion were performed to address the question of whether UvrB in complex with the endonuclease UvrC is able to specifically incise damaged DNA. The results presented here indicate that the formation of a specific pre-incision complex is independent of the damage sensor UvrA. The preference for the 5’ bubble substrate suggests that UvrB is able to slide along the DNA, favorably in the 5’ → 3’ direction, until it directly encounters a DNA damage on the translocating strand and then recruits the endonuclease UvrC. In the second part of this work, the novel endonuclease Bax1 from Thermoplasma acidophilum was characterized. Due to its close association with archaeal XPB, a potential involvement of Bax1 in archaeal NER has been postulated. Bax1 was shown to be a Mg2+-dependent, structure-specific endonuclease incising 3’ overhang substrates in the single-stranded region close to the ssDNA/dsDNA junction.
Site-directed mutagenesis of conserved amino acids was employed to identify putative active-site residues of Bax1. In complex with the helicase XPB, however, the incision activity of Bax1 is altered with regard to substrate specificity. The presence of two distinct XPB/Bax1 complexes with different endonuclease activities indicates that XPB regulates Bax1 incision activity, providing insights into the physical and functional interactions of XPB and Bax1.
It has been proposed that different features of a face provide a source of information for separate perceptual and cognitive processes. Properties of a face that remain rather stable over time, so-called invariant facial features, yield information about a face’s identity, while changeable aspects of faces transmit information underlying social communication, such as emotional expressions and speech movements. While the processing of these different face properties was initially claimed to be independent, a growing body of evidence suggests that these sources of information can interact when people recognize faces with which they are familiar. This is the case because the way a face moves can contain patterns that are characteristic of that specific person, so-called idiosyncratic movements. As a face becomes familiar, these idiosyncratic movements are learned and hence also provide information serving face identification. While an abundance of experiments has addressed the independence of invariant and variable facial features in face recognition, little is known about the exact nature of the impact that idiosyncratic facial movements have on face recognition. Gaining knowledge about the way facial motion contributes to face recognition is, however, important for a deeper understanding of the way the brain processes and recognizes faces. In the following dissertation, three experiments are reported that investigate the impact that familiarity of changeable facial features has on processes of face recognition. Temporal aspects of the processing of familiar idiosyncratic facial motion were addressed in the first experiment via EEG, by investigating the influence that familiar facial movement exerts on event-related potentials associated with face processing and face recognition. After being familiarized with a face and its idiosyncratic movement, participants viewed familiar or unfamiliar faces with familiar or unfamiliar facial movement while their brain potentials were recorded.
Results showed that familiarity of facial motion influenced later event-related potentials linked to memory processes involved in face recognition. The second experiment used fMRI to investigate the brain areas involved in processing familiar facial movement. Participants’ BOLD signal was recorded while they viewed familiar and unfamiliar faces with familiar or unfamiliar idiosyncratic movement. It was found that the activity of brain regions underlying the processing of face identity, such as the fusiform gyrus, was modulated by familiar facial movement. Together, these two experiments provide valuable information about the nature of the involvement of idiosyncratic facial movement in face recognition and have important implications for cognitive and neural models of face perception and recognition. The third experiment addressed the question of whether idiosyncratic facial movement could increase individuation in perceiving faces from a different ethnic group and hence reduce the impaired recognition of these other-race faces compared to own-race faces, a phenomenon known as the own-race bias. European participants viewed European and African faces that were each animated with an idiosyncratic smile while their attention was directed either to the form or to the motion of the face. Subsequently, recognition memory for these faces was tested. Results showed that the own-race bias was equally present in both attention conditions, indicating that idiosyncratic facial movement was not able to reduce the own-race bias. In combination, the experiments presented here provide further insight into the involvement of idiosyncratic facial motion in face recognition. It is necessary to consider the dynamic component of faces when investigating face recognition, because static facial images cannot provide the full range of information that leads to recognition of a face.
To reflect the full process of face recognition, cognitive and neural models of face perception and recognition need to integrate dynamic facial features as a source of information that contributes to the recognition of a face.
Polarity and migration are essential for T cell activation, homeostasis, recirculation and effector function. To address how T cells coordinate polarization and migration when interacting with dendritic cells (DCs) under homeostatic and activating conditions, a low-density collagen model was used for confocal live-cell imaging and high-resolution 3D reconstruction of fixed samples. During short-lived (5 to 15 min) and migratory homeostatic interactions, recently activated T cells simultaneously maintained their amoeboid polarization and polarized towards the DC. The resulting fully dynamic and asymmetrical interaction plane comprised all compartments of the migrating T cell: the actin-rich leading edge drove migration but displayed only moderate signaling activity; the mid-zone mediated TCR/MHC-induced signals associated with homeostatic proliferation; and the rear uropod mediated predominantly MHC-independent signals possibly connected to contact-dependent T cell survival. This “dynamic immunological synapse” with distinct signaling sectors enables moving T cells to serially sample antigen-presenting cells and resident tissue cells and thus to collect information along the way. In contrast to homeostatic contacts, recognition of the cognate antigen led to long-lasting T cell/DC interaction with T cell rounding, disintegration of the uropod, T cell polarization towards the DC, and the formation of a symmetrical contact plane. However, the polarity of the continuously migrating DC remained intact, and T cells aggregated within the DC uropod, an interesting cellular compartment potentially involved in T cell activation and regulation of the immune response. Taken together, 3D collagen facilitates high-resolution morphological studies of T cell function under realistic, in vivo-like conditions.
In this study I investigate the role of Schwann cell and axon-derived trophic signals as modifiers of axonal integrity and sprouting in motoneuron disease and diabetic neuropathy (DNP). The first part of this thesis focuses on the role of the Schwann-cell-derived ciliary neurotrophic factor (CNTF) in compensatory sprouting in a mouse model of mild spinal muscular atrophy (SMA). In the second part, the role of insulin-like growth factor 1 (IGF-1) and its binding protein 5 (IGFBP-5) is examined in the peripheral nerves of patients with DNP and in two corresponding mouse models. Proximal SMA is caused by homozygous loss or mutation of the SMN1 gene on human chromosome 5. The different forms of SMA can be divided into four groups, depending on the levels of SMN protein produced from a second SMN gene (SMN2) and on the severity of the disease. Patients with the milder forms of the disease, type III and type IV SMA, normally reach adulthood and regularly show enlargement of motor units, signifying the reinnervation of denervated muscle fibers. However, the underlying mechanisms are not understood. Smn+/- mice, a model of type III/IV SMA, are phenotypically normal, but they reveal progressive loss of motor neurons and denervation of motor endplates starting at 4 weeks of age. The progressive loss of spinal motor neurons reaches 50% at 12 months, but muscle strength is not reduced. The first evidence for axonal sprouting as a compensatory mechanism in these animals was the more than 2-fold increase in the amplitude of single motor unit action potentials (SMUAP) in the gastrocnemius muscle. Confocal analysis confirmed pronounced sprouting of innervating motor axons. As CNTF is highly expressed in Schwann cells and known to be involved in sprouting, its role in this compensatory sprouting response and in the maintenance of muscle strength in Smn+/- mice was investigated.
Deletion of CNTF in this mouse model results in reduced sprouting and a decline of muscle strength in Smn+/- Cntf-/- mice. These findings indicate that CNTF is necessary for a sprouting response and thus enhances the size of motor units in the skeletal muscles of Smn+/- mice. DNP, which afflicts motor and sensory nerve fibers, is a major complication of diabetes mellitus. The underlying cellular mechanisms of motor axon degeneration are poorly understood. IGFBP-5, an inhibitory binding protein for IGF-1, is highly upregulated in the peripheral nerves of patients with DNP. This study investigates the pathogenic relevance of this finding in transgenic mice overexpressing IGFBP-5 in motor axons. These mice develop a motor axonopathy similar to that seen in DNP. Motor axon degeneration is also observed in mice in which the IGF-1 receptor (IGF-1R) was conditionally depleted in motoneurons, indicating that reduced activity of IGF-1 on IGF-1R in motoneurons is responsible for the observed effect. These data provide evidence that elevated expression of IGFBP-5 in diabetic nerves reduces the availability of IGF-1 for IGF-1R on motor axons, leading to progressive neurodegeneration, and thus suggest novel treatment strategies.
Practical optimization problems often comprise several incomparable and conflicting objectives. When booking a trip using several means of transport, for instance, the trip should be fast and at the same time not too expensive. The first part of this thesis is concerned with the algorithmic solvability of such multiobjective optimization problems. Several solution notions are discussed and compared with respect to their difficulty. Interestingly, these solution notions are always equally difficult for a single-objective problem, yet they already differ considerably for two objectives (unless P = NP). In this context, the difference between search and decision problems is also investigated in general. Furthermore, new and improved approximation algorithms for several variants of the traveling salesperson problem are presented. Using tools from discrepancy theory, a general technique is developed that helps to overcome an obstacle that often hinders multiobjective approximation: the problem of combining two solutions such that the new solution is balanced in all objectives and also largely retains the structure of the original solutions. The second part of this thesis is dedicated to several aspects of systems of equations for (formal) languages. Firstly, conjunctive and Boolean grammars are studied, which extend context-free grammars by explicit intersection and complementation operations, respectively. Among other results, it is shown that the union operation on conjunctive grammars can be considerably restricted without changing the generated language. Secondly, certain circuits are investigated whose gates do not compute Boolean values but sets of natural numbers. For these circuits, the equivalence problem is studied, i.e. the problem of deciding whether or not two given circuits compute the same set.
It is shown that, depending on the allowed types of gates, this problem is complete for several different complexity classes and can thus be seen as a (parametrized) representative of all those classes.
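The incomparability of objectives described above can be made concrete with a minimal Pareto-dominance filter. This is a generic illustration, not code from the thesis, and the trip data are invented:

```python
def dominates(a, b):
    """True if solution a is at least as good as b in every objective
    (here: smaller is better) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]

# Trips as (travel time in hours, price in euros): neither objective
# alone decides which trip is "best".
trips = [(2.0, 150.0), (5.0, 60.0), (3.0, 140.0), (6.0, 70.0)]
print(pareto_front(trips))  # (6.0, 70.0) drops out: dominated by (5.0, 60.0)
```

The three remaining trips are pairwise incomparable, which is exactly why the solution notions discussed in the thesis diverge once a second objective is added.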
In recent years, high-throughput experiments have provided a vast amount of data from all areas of molecular biology, including genomics, transcriptomics, proteomics and metabolomics. The analysis of these data using bioinformatics methods has developed accordingly, towards a systematic approach to understanding how genes and their resulting proteins give rise to biological form and function. Genes and proteins interact with each other and with other molecules in highly complex structures, which are explored in network biology. The in-depth knowledge of genes and proteins obtained from high-throughput experiments can be complemented by the architecture of molecular networks to gain a deeper understanding of biological processes. This thesis provides methods and statistical analyses for the integration of molecular data into biological networks and the identification of functional modules, as well as their application to distinct biological data. The integrated network approach is implemented as a software package, termed BioNet, for the statistical language R. The package includes the statistics for the integration of transcriptomic and functional data with biological networks, the scoring of nodes and edges of these networks, as well as methods for subnetwork search and visualisation. The exact algorithm is extensively tested in a simulation study and outperforms existing heuristic methods for this NP-hard problem in accuracy and robustness. The variability of the resulting solutions is assessed on perturbed data, generated for the integrated data and the network, mimicking random or biased factors that obscure the biological signal. An optimal, robust module can be calculated using a consensus approach based on a resampling method. It optimally summarizes an ensemble of solutions in a robust consensus module, with the estimated variability indicated by confidence values for the nodes and edges. The approach is subsequently applied to two gene expression data sets.
The first application analyses gene expression data for acute lymphoblastic leukaemia (ALL) and differences between the subgroups with and without an oncogenic BCR/ABL gene fusion. In a second application, gene expression and survival data from diffuse large B-cell lymphomas are examined. The identified modules include and extend already existing gene lists and signatures by further significant genes and their interactions. The most important novelty is that these genes are determined and visualised in the context of their interactions, as a functional module rather than as a list of independent and unrelated transcripts. In a third application, the integrative network approach is used to trace changes in tardigrade metabolism, in order to identify pathways responsible for their extreme resistance to environmental changes and endurance in the inactive tun state. For the first time, a metabolic network approach integrating transcriptome and metabolite data is proposed to detect shifts in metabolic pathways. In conclusion, the presented integrated network approach is an adequate technique to unite high-throughput experimental data for single molecules and their intermolecular dependencies. It can be applied flexibly to diverse data, ranging from gene expression changes and metabolite abundances to protein modifications, in combination with a suitable molecular network. The exact algorithm is accurate and robust in comparison to heuristic approaches and delivers an optimal, robust solution in the form of a consensus module with confidence values. By integrating diverse sources of information and simultaneously inspecting a molecular event from different points of view, new and exhaustive insights into biological processes can be acquired.
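The scoring idea at the core of such an integrated network approach can be sketched briefly: p-values are transformed into additive node scores that are positive for signal-like nodes and negative for background nodes, so that a maximum-scoring connected subnetwork must balance gains against penalties. The sketch below assumes a beta-uniform-mixture-style score of the form (a − 1)(log p − log τ); it is an illustration, not the BioNet package code, and the gene names and parameter values are invented:

```python
import math

def node_score(p, a=0.5, tau=0.05):
    """Additive node score derived from a p-value.

    Assuming a beta-uniform mixture with shape parameter a < 1,
    the score (a - 1) * (log p - log tau) is positive for p < tau
    (likely signal) and negative for p > tau (likely background).
    """
    return (a - 1.0) * (math.log(p) - math.log(tau))

# Invented p-values for illustration.
pvals = {"geneA": 1e-6, "geneB": 0.04, "geneC": 0.5}
scores = {g: node_score(p) for g, p in pvals.items()}
# geneA scores strongly positive, geneC negative; a maximum-scoring
# subnetwork includes negative nodes only when they bridge signal genes.
```

Because the scores are additive, searching for the best-scoring connected subnetwork becomes a well-defined (NP-hard) combinatorial problem, which is what the exact algorithm mentioned above solves.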
Bacterial protein toxins are among the most potent toxins known. They exist in many different forms and are part of our everyday lives. Some of them are spread by bacteria during infections and therefore play a crucial role in the pathogenicity of these strains. Others are secreted as a defense mechanism and can be taken up with spoiled food. Concerning toxicity, some of the binary toxins of the AB7 type belong to the most potent and dangerous toxins in the world. Even very small amounts of these proteins are able to cause severe symptoms during an infection with pathogenic species of the genera Clostridium or Bacillus. Apart from the threat these toxins pose, they exhibit a unique mode of intoxication. Members of the AB7 toxin family consist of a pore-forming subunit B that acts as a molecular syringe to translocate the enzymatic moieties A into the cytosol of target cells. This complex mechanism not only kills cells with high efficiency, and should therefore be studied with a view to treatment, but also offers the possibility to address certain cell types with a specific protein cargo when used as a molecular delivery tool. For both issues, binding and translocation through the channel are the crucial steps to either block or modify the system in the desired way. To gain deeper insight into the transport of binary toxins, the structure of the B subunit is of great importance, but since it is a membrane protein, no crystal structure has been obtained so far for either the protective antigen (PA) of anthrax toxin or any other AB7-type binding component. Therefore, the method of choice in this work is an electrophysiological approach using the so-called black lipid bilayer system for the determination of biophysical constants. Additionally, diverse cell-based assays serve to validate the data obtained in the in vitro measurements. Further information was gathered with specially designed mutants of the protein channel.
The first part of this thesis focuses on the translocation process and its possible use as a molecular tool to deliver protein cargo into specific cell types. This task was addressed by measuring the binding of different effector proteins, related and unrelated to the AB7 toxin family. These proteins were tested in titration experiments for blockage of the ion current through a membrane saturated with toxin channels. In particular, the influence of positively charged His-tags was determined in detail for PA and C2II. As described in chapter 2, a His-tag conferred the ability to be transported by PA, but not by C2II, to different proteins such as EDIN (from S. aureus), both in vitro and in cell-based experiments. This process was found to change the well-known voltage dependency of PA to a large extent and is therefore related to membrane potentials, which play a crucial role in many processes in living cells. Chapter 3 sums up findings showing that binding partners of PA share certain common motifs. These could be detected in a broad range of substrates, ranging from simple ions in an electrolyte through small molecules to complex protein effectors. The information gathered could further be used to design blocker substrates for the treatment of anthrax infections, or tags that make PA usable as a molecular syringe for cargo proteins. A deeper insight into the homologies and differences of binary toxin components is the core of chapter 4, in which the cross-reactivity of anthrax and C2 toxin was analyzed. The presented results lead to a better understanding of the different motifs involved in binding and translocation to and via the B components PA and C2II, as well as the enzymatically active A moieties edema factor (EF), lethal factor (LF) and C2I. In the second part of the thesis, the blockage of intoxication is the center of interest. Chapter 5 therefore focuses on the analysis of specially designed blocker-substrate molecules for PA.
These molecules form a plug in the pore, abolishing translocation of the enzymatic units. Especially if multi-resistant strains of anthrax (said to be already produced in Russia as a biological weapon) are taken into consideration, such substrates could stop intoxication and buy time to deal with the infection. Chapter 6 describes the blockage of PA channels by an anti-His antibody from the trans side of the porin, an effect not previously described for any other antibody. Interestingly, even mutation of the presumed target amino acid, histidine 310, to glycine could not interfere with this ionic-strength-dependent binding.
Currently, we observe strong growth in services and applications that use the Internet for data transport. However, the network requirements of these applications differ significantly. This makes network management difficult, since it is complicated to separate network flows into application classes without inspecting application-layer data. Network virtualization is a promising solution to this problem. It enables running different virtual networks on the same physical substrate. Separating networks based on the service they support allows each network to be controlled according to the specific needs of the application. The aim of such network control is to optimize the user-perceived quality as well as the cost efficiency of the data transport. Furthermore, network virtualization abstracts the network functionality from the underlying implementation and facilitates the split of the currently tightly integrated roles of Internet Service Provider and network owner. Additionally, network virtualization guarantees that different virtual networks running on the same physical substrate do not interfere with each other. This thesis discusses different aspects of network virtualization, focusing on how to manage and control a virtual network to guarantee the best Quality of Experience for the user. To this end, a top-down approach is chosen. Starting from use cases of virtual networks, a possible architecture is derived and current implementation options based on hardware virtualization are explored. The thesis then focuses on assessing the Quality of Experience perceived by the user and how it can be optimized at the application layer. Furthermore, options for measuring and monitoring significant network parameters of virtual networks are considered.
The present work reviews the experimental literature on the acute effects of alcohol on human behaviour related to driving performance. A meta-analysis was conducted that includes studies published between 1954 and 2007, in order to provide comprehensive knowledge about the substance alcohol. 450 studies reporting 5,300 findings were selected from over 12,000 references after applying defined inclusion and exclusion criteria. The present meta-analysis thus comprises far more studies than any previous review on alcohol. In the selected studies, different performance tests relevant for driving were conducted. The classification system used in this work assigns these tests to eight categories. The main categories consist of several sub-categories classifying the tasks more precisely. The main categories were: (1) visual functions, (2) attention (including vigilance), (3) divided attention, (4) en-/decoding (including information processing and memory), (5) reaction time (including simple reaction time and choice reaction time), (6) psychomotor skills, (7) tracking and (8) driving. In addition to the performance aspect, the classification system takes into account mood and social-behaviour variables related to driving safety, such as tiredness or aggression. Following the evaluation method of vote-counting, the number of significant findings and the number of non-significant findings were summarised per blood alcohol concentration (BAC) group. Thereby, a quantitative estimation of the effects of alcohol depending on the BAC was established, the so-called impairment function, which shows the percentage of significantly impaired findings. In order to provide a general overview of alcohol effects on driving-related performance, a global impairment function was established by aggregating all performance findings. The function is nearly linear, with about 30% significant findings at a BAC of 0.05% and 50% significant findings at a BAC of 0.08%.
In addition, more specific impairment functions considering only the findings of the single behavioural categories were calculated. The results revealed that impairment depends not only on the BAC but also clearly differs between most of the performance categories. Tracking and driving performance were most affected by alcohol, with impairment beginning at very low BACs of 0.02%. Psychomotor skills were also considerably affected at rather low BACs. Impairment of visual functions and information processing occurred at BACs of 0.04% and increased substantially with higher BACs. Impairment in memory tests could be found at very low BACs of 0.02%, but varied depending on the kind of memory. Performance decrements in divided-attention tests could also be found at very low BACs in some studies. Attention started to be impaired at 0.04% BAC, but – as in vigilance tasks – considerable impairment only occurred at higher BACs. Choice reaction time was affected at lower BACs than simple reaction time, which was – together with the critical flicker fusion frequency – the parameter least sensitive to the effects of alcohol. To conclude, most skills relevant for the safe operation of a vehicle are clearly impaired at BACs of 0.05%, with motor functions being more affected than cognitive functions, and complex tasks more than simple tasks. Generally, the results provided no evidence of a threshold effect for alcohol: there was no driving-related performance category for which a sudden transition from unimpaired to impaired occurred at a particular BAC level. In addition, a comparison was made between the present meta-analysis and two reviews by Moskowitz (Moskowitz & Fiorentino, 2000; Moskowitz & Robinson, 1988). Moskowitz reported much lower BACs at which performance was impaired. The reason for this discrepancy lies in a different way of reviewing scientific findings.
First, Moskowitz focused on significant findings when selecting studies and findings for his reviews. Second, the evaluation method used by Moskowitz ignored non-significant findings and counted each study once, at the lowest BAC for which impairment was found. Non-significant findings, however, are as important as significant ones for determining thresholds of impairment. Therefore, in contrast to Moskowitz, the present work describes the effects of alcohol with functions that also consider the non-significant findings. The significance of the non-significant is emphasized with respect to the selection procedure as well as the evaluation method.
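The vote-counting procedure described above reduces, for each BAC group, to the share of significant findings among all findings. A minimal sketch of that computation follows; the counts are invented for illustration and are not the meta-analysis data:

```python
def impairment_function(findings):
    """findings maps a BAC level to (significant, non_significant) counts;
    returns the percentage of significant findings per BAC level."""
    return {bac: 100.0 * sig / (sig + nonsig)
            for bac, (sig, nonsig) in findings.items()}

# Invented counts, chosen only to mirror the ~30%/50% shape of the
# global impairment function described in the text.
counts = {0.05: (30, 70), 0.08: (50, 50)}
print(impairment_function(counts))  # {0.05: 30.0, 0.08: 50.0}
```

Counting the non-significant findings in the denominator is precisely what distinguishes this impairment function from an evaluation that tallies only significant results.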
There is such a vast amount of visual information in our surroundings at any time that filtering out the important information for further processing is a basic requirement for any visual system. This is accomplished by deploying attention to focus on one source of sensory input to the exclusion of others (Luck and Mangun 2009). Attention has been studied extensively in humans and non-human primates (NHPs). In Drosophila, visual attention was first demonstrated in 1980 (Wolf and Heisenberg 1980), but this field remained largely unexplored until recently. Lately, however, studies have emerged that hypothesize a role for attention in several behaviors but do not specify the characteristic properties of attention. The aim of this research was therefore to characterize the phenomenon of visual attention in wild-type Drosophila, including both externally cued and covert attention, using tethered flight at a torque meter. Developing systematic, quantifiable behavioral tests was a key aspect of this work; such tests are important not only for analyzing the behavior of a population of wild-type flies but also for comparing wild-type flies with mutant flies. The latter helps in understanding the molecular, genetic and neuronal bases of attention. Since Drosophila provides handy genetic tools, a model of attention in Drosophila will serve the greater questions about the neuronal circuitry and mechanisms involved, which might be analogous to those in primates. Such a model might later be used in research involving disorders of attention. Attention can be guided to a certain location in the visual field by the use of external cues. Here, using visual cues, the attention of the fly was directed to one or the other of the two visual half-fields. A simple yet robust paradigm was designed with which the results were easily quantifiable.
This paradigm helped discover several interesting properties of cued attention, the most substantial one being that this kind of external guidance of attention is restricted to the lower part of the fly’s visual field. The guiding cue had an after-effect, i.e. it could occur up to at least 2 seconds before the test and still bias it. The cue could also be spatially separated from the test by at least 20° and yet attract attention, although the extent of the focus of attention (FoA) was smaller than one lower visual half-field. These observations excluded the possibility of any kind of interference between the test and the cue stimuli. Another interesting observation was that continuous visibility of the test stimulus, but not of the cue, was essential for effective cuing. When the contrast of the visual scene was inverted, differences in response frequencies and cuing effects were observed. Syndirectional yaw torque responses became more frequent than antidirectional responses, and cuing was no longer effective in the lower visual field with inverted contrast. Interestingly, the test stimulus with simultaneous displacement of two stripes effectuated not only a phasic yaw torque response but also a landing response. A landing response was produced in more than half of the cases in which a yaw torque response was produced. Elucidation of the neuronal correlates of cued attention was begun. Pilot experiments with hydroxyurea (HU) treated flies showed that mushroom bodies were not required for the kind of guidance of attention tested in this study. Dopamine mutants were also tested for the guidance of attention in the lower visual field. Surprisingly, TH-Gal4/UAS-shits1 flies flew like wild-type flies and also showed a normal optomotor response during the initial calibration phase of the experiment, but did not show any phasic yaw torque or landing response at 18 °C, 25 °C or 30 °C.
dumb2 flies, which have almost no D1 dopamine receptor dDA1 expression in the mushroom bodies and the central complex (Kim et al. 2007), were also tested and, like TH-Gal4/UAS-shits1 flies, did not show any phasic yaw torque or landing response. Since the dopamine mutants did not show the basic yaw torque response to the test, the role of dopamine in attention could not be deduced; a different paradigm would be needed to test these mutants. Not only can attention be guided through external cues, it can also be shifted endogenously (covert attention). Experiments with windows containing oscillating stripes nicely demonstrated the phenomenon of covert attention through the production of a characteristic yaw torque pattern by the flies. However, the results were not easily quantifiable and reproducible, calling for a more systematic approach. Experiments with simultaneous opposing displacements of two stripes provide a promising avenue: the flies had a higher tendency to deliver one type of response than would be expected if the responses were produced stochastically, suggesting that attention increased this tendency. Further experiments and analyses of this kind could shed more light on the mechanisms of covert attention in flies.
In this thesis, different algorithms for the solution of generalized Nash equilibrium problems are developed, with a focus on global convergence properties. A globalized Newton method for the computation of normalized solutions, a nonsmooth algorithm based on an optimization reformulation of the game-theoretic problem, and a merit function approach and an interior point method for the solution of the concatenated Karush-Kuhn-Tucker system are analyzed theoretically and numerically. The interior point method turns out to be one of the best existing methods for the solution of generalized Nash equilibrium problems.
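To make the Newton idea concrete, the sketch below applies Newton's method to the concatenated first-order optimality system of a toy two-player game. The game, its payoff functions, and all numerical settings are illustrative assumptions, not taken from the thesis (which treats the harder generalized case with shared constraints, globalization, and normalized solutions):

```python
import numpy as np

# Toy unconstrained two-player game (illustrative, not from the thesis):
#   player 1: min_x  f1(x, y) = (x - y)^2 + x^2
#   player 2: min_y  f2(x, y) = (y - 1)^2 + x*y
# A Nash equilibrium solves the concatenated first-order system
#   F(x, y) = (df1/dx, df2/dy) = 0,
# which Newton's method attacks via the Jacobian of F.

def F(z):
    x, y = z
    return np.array([4 * x - 2 * y,        # df1/dx
                     2 * (y - 1) + x])     # df2/dy

def JF(z):
    # Jacobian of F; constant here because the game is linear-quadratic.
    return np.array([[4.0, -2.0],
                     [1.0,  2.0]])

def newton(z0, tol=1e-10, maxit=50):
    z = np.asarray(z0, dtype=float)
    for _ in range(maxit):
        fz = F(z)
        if np.linalg.norm(fz) < tol:
            break
        z = z - np.linalg.solve(JF(z), fz)  # Newton step
    return z

x, y = newton([0.0, 0.0])
# For this linear-quadratic game the equilibrium is x = 0.4, y = 0.8,
# reached in a single Newton step since F is affine.
```

In the generalized setting the concatenated system additionally carries the multipliers of the (shared) constraints, and globalization safeguards the step; the local Newton mechanics, however, are exactly as above.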
The acquired immunodeficiency syndrome (AIDS) is currently one of the most devastating infectious diseases worldwide. It is caused by the human immunodeficiency virus (HIV). At the moment there are ~33.3 million people infected with HIV. Sub-Saharan Africa, with ~22.5 million people infected, accounts for 68% of the global burden. In most African countries antiretroviral therapy (ART) is administered in limited-resource settings with standardised first- and second-line ART regimens. During this study I analysed the therapy-naïve population of Cape Town, South Africa and Mwanza, Tanzania for resistance-associated mutations (RAMs) against protease inhibitors, nucleoside reverse transcriptase inhibitors and non-nucleoside reverse transcriptase inhibitors. My results indicate that HIV-1 subtype C accounts for ~95% of all circulating strains in Cape Town, South Africa. I could show that ~3.6% of the patient-derived viruses had RAMs, despite the patients being therapy-naïve. In Mwanza, Tanzania, the HIV drug resistance (HIVDR) prevalence in the therapy-naïve population was 14.8% and significantly higher in the older population (>25 years). Therefore, the current WHO transmitted HIVDR (tHIVDR) survey, which is solely focused on the transmission of HIVDR and excludes patients over 25 years of age, may result in a substantial underestimation of the prevalence of HIVDR in the therapy-naïve population. Based on the prevalence rates of tHIVDR in the study populations, it is recommended that all HIV-1 positive individuals undergo a genotypic resistance test before starting ART. I also characterized vif sequences from HIV-1 infected patients from Cape Town, South Africa, as the Vif protein has been shown to counteract the antiretroviral activity of the cellular APOBEC3G/F cytidine deaminases. There is no selective pressure on the HIV-1 Vif protein from current ART regimens, so the vif sequences were used as an evolutionary control.
As the majority of phenotypic resistance assays are still based on HIV-1 subtype B, I wanted to design an infectious HIV-1 subtype C proviral molecular clone, based on circulating strains in South Africa, that can be used for in vitro assays. I therefore characterized an early primary HIV-1 subtype C isolate from Cape Town, South Africa and created a new infectious subtype C proviral molecular clone (pZAC). The new pZAC virus has a significantly higher transient viral titer after transfection, and a higher replication rate, than the previously published HIV-1 subtype C virus from Botswana. The optimized proviral molecular clone pZAC could be used in future cell culture and phenotypic HIV resistance assays for HIV-1 subtype C.
Yersinia enterocolitica subsp. palearctica serobiotype O:3/4 comprises about 80-90 % of all human patient isolates in Germany and Europe and is responsible for sporadic cases worldwide. Even though this serobiotype is of low pathogenicity, Y. enterocolitica subsp. palearctica serobiotype O:3/4 is involved in gastroenteritis, lymphadenitis and various extraintestinal sequelae such as reactive arthritis. The main animal reservoir of this serobiotype is the pig, causing a high rate of O:3/4 contamination of raw pork in butcher shops in Germany (e.g. Bavaria 25 %) and in countries of north-east Europe. As Y. enterocolitica O:3/4 is geographically and phylogenetically distinct from the previously sequenced mouse-virulent O:8/1B strain, complete genome sequencing was performed for the European serobiotype O:3/4 DSMZ reference strain Y11, which was isolated from patient stool. To gain greater insight into the Y. enterocolitica subspecies palearctica group, draft genome sequencing was also performed for two other human O:3/4 isolates (strain Y8265, a patient isolate, and strain Y5307, a patient isolate associated with reactive arthritis), a closely related Y. enterocolitica palearctica serobiotype O:5,27/3 strain (Y527P), and two biotype 1A strains (a nosocomial strain of serogroup O:5 and an environmental serogroup O:36 isolate). These strains were compared to the high-pathogenic Y. enterocolitica subsp. enterocolitica serobiotype O:8/1B strain 8081 to address the peculiarities of strain Y11 and the Y. enterocolitica subspecies palearctica group. The main focus was to unravel the pathogenic potential of strain Y11 and thus to identify novel putative virulence genes and fitness factors, especially those that may underlie the host specificity of serobiotype O:3/4. Y. enterocolitica subspecies palearctica serobiotype O:3/4 strains lack most of the mouse-virulence-associated determinants of Y. enterocolitica subsp.
enterocolitica serotype O:8, for example the HPI, and the Yts1 type 2 and Ysa type three secretion systems. In comparison, serobiotype O:3/4 strains have evidently acquired a different set of genes and genomic islands for virulence and fitness, such as the Ysp type three secretion system, an RtxA-like putative toxin, insecticidal toxins and a functional PTS system for N-acetyl-galactosamine uptake, named the aga-operon. After transformation with the aga-operon, Y. enterocolitica subsp. enterocolitica O:8/1B is able to grow on N-acetyl-galactosamine. Besides these genes, two prophages, PhiYep-2 and PhiYep-3, and an asn tRNA-associated genomic island, GIYep-01, might influence the Y. enterocolitica subsp. palearctica serobiotype O:3/4 pathoadaptation. The PhiYep-3 prophage and the GIYep-01 island show recombination activity, and PhiYep-3 was not found in all O:3/4 strains of a small strain collection tested. Y. enterocolitica subsp. palearctica serobiotype O:5,27/3 strain Y527P was found to be closely related to all serobiotype O:3/4 strains, whereas the biotype 1A isolates have more mosaic-segmented genomes and share putative virulence genes with both serobiotype O:8/1B and serobiotype O:3/4, which implies their common descent. In addition to the pYV virulence plasmid, biotype 1A strains lack classical virulence markers such as the Ail adhesin, the YstA enterotoxin, and the virulence-associated protein C. Interestingly, there are no notable differences in the known virulence factors present in nosocomial and environmental strains, except for the presence of a truncated Rtx toxin-like gene cluster and remnants of a P2-like prophage in the hospital serogroup O:5 isolate.
Understanding the complex interactions and events in a nervous system, from the molecular level up to behavioural patterns, calls for interdisciplinary interactions among various research areas. The goal of the presented work is to achieve such an interdisciplinary approach to studying and manipulating animal behaviour and its underlying mechanisms. Optical in vivo imaging is a new, constantly evolving method that allows one to study not only local but also wide-reaching activity in the nervous system. Due to the ease of its genetic accessibility, Drosophila melanogaster is an extraordinary experimental organism in which to apply not only imaging but also various optogenetic techniques to study the neuronal underpinnings of behaviour. In this study, four genetically encoded sensors were used to investigate the temporal dynamics of cAMP concentration changes in the horizontal lobes of the mushroom body, a brain area important for learning and memory, in response to various physiological and pharmacological stimuli. Several transgenic lines with various genomic insertion sites for the sensor constructs Epac1, Epac2, Epac2K390E and HCN2 were screened for the best signal quality, and one line was selected for further experiments. The in vivo functionality of the sensor was assessed via pharmacological application of 8-bromo-cAMP as well as forskolin, a substance stimulating cAMP-producing adenylyl cyclases. This was followed by recording of the cAMP dynamics in response to the application of dopamine and octopamine, as well as to the presentation of electric shock, odorants or a simulated olfactory signal induced by acetylcholine application to the observed brain area. In addition, the interaction between the shock and the simulated olfactory signal was studied by simultaneous presentation of both stimuli.
Preliminary results support a coincidence detection mechanism at the level of the adenylyl cyclase, as postulated by the present model of classical olfactory conditioning. In a second series of experiments, an effort was made to selectively activate a subset of neurons via the optogenetic tool Channelrhodopsin-2 (ChR2). This was achieved by recording the behaviour of the fly in a walking-ball paradigm. A new method was developed to analyse the walking behaviour of an animal whose brain had been made optically accessible via the same dissection technique as used for imaging, thus allowing one to target selected brain areas. Using the Gal4-UAS system, the protocerebral bridge, a substructure of the central complex, was highlighted by expressing ChR2 tagged with the fluorescent protein EYFP. First behavioural recordings of such specially prepared animals were made. Lastly, a new experimental paradigm for single-animal conditioning was developed (Shock Box). Its design is based on the established Heat Box paradigm; however, in addition to the spatial and operant conditioning available in the Heat Box, the design of the new paradigm allows one to set up experiments to study classical and semioperant olfactory conditioning, as well as semioperant place learning and operant no-idleness experiments. First experiments involving place learning were successfully performed in the new apparatus.
Indirect Search for Dark Matter in the Universe - the Multiwavelength and Multiobject Approach (2011)
Cold dark matter constitutes a basic tenet of modern cosmology, essential for our understanding of structure formation in the Universe. Since its first discovery by means of spectroscopic observations of the dynamics of the Coma cluster some 80 years ago, mounting evidence of its gravitational pull and its impact on the geometry of space-time has built up across a wide range of scales, from galaxies to the entire Hubble flow. The apparent lack of electromagnetic coupling and independent measurements of the energy density of baryonic matter from the primordial abundances of light elements show the non-baryonic nature of dark matter, and its clustering properties prove that it is cold, i.e. that its temperature was lower than its mass at the time of radiation-matter equality. Generic particle candidates for cold dark matter are weakly interacting massive particles at the electroweak symmetry-breaking scale, such as the neutralinos of R-parity-conserving supersymmetry. Such particles would naturally freeze out with a cosmologically relevant relic density at early times in the expanding Universe. Subsequent clustering of matter would revive annihilation interactions between the dark matter particles to some extent and thus lead to potentially observable high-energy emission from the decaying unstable secondaries produced in annihilation events. The spectra of the secondaries would permit a determination of the mass and annihilation cross section, which are crucial for the microphysical identification of the dark matter. This is the central motivation for indirect dark matter searches. However, at present neither the indirect searches, nor the complementary direct searches based on the detection of elastic scattering events, nor the production of candidate particles in collider experiments has provided unequivocal evidence for dark matter.
This does not come as a surprise, since the dark matter particles interact only through weak interactions, and the corresponding secondary emission must therefore be extremely faint. It turns out that even for the strongest mass concentrations in the Universe, the dark matter annihilation signal is not expected to exceed the level of competing astrophysical sources. Thus, the discrimination of the putative dark matter annihilation signal from the signals of the astrophysical inventory has become crucial for indirect search strategies. In this thesis, a novel search strategy is developed and exemplified in which target selection across a wide range of masses, astrophysical background estimation, and multiwavelength signatures play the key role. It turns out that the uncertainties regarding the halo profile and the boost due to surviving substructure are larger for halos at the lower end of the observed mass scales, i.e. in the regime of dwarf galaxies and below, while astrophysical backgrounds tend to become more severe for massive dark matter halos such as clusters of galaxies. By contrast, the uncertainties due to unknown details of particle physics are invariant under changes of the halo mass. Therefore, the different scaling behaviors can be exploited to significantly cut down on the uncertainties by observing different targets covering a major part of the involved mass scales. This strategic approach was implemented in the scientific program carried out with the MAGIC telescope system. Observations of dwarf galaxies and of the Virgo and Perseus clusters of galaxies have been carried out and, at the time of writing, yield some of the most stringent constraints on weakly interacting massive particles from indirect searches.
Here, the low-threshold design of the MAGIC telescope system plays a crucial role, since the bulk of the high-energy photons, produced with high multiplicity during the fragmentation of unstable dark matter annihilation products, are emitted at energies well below the dark matter mass scale. The upper limits severely constrain less generic, but more prolific, scenarios characterized by extraordinarily high annihilation efficiencies.
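The division of labor between particle physics and astrophysics that underlies this search strategy is conventionally summarized by the textbook expression for the differential gamma-ray flux from annihilation of self-conjugate dark matter particles (a standard formula, not a result of this thesis):

\[
\frac{d\Phi_\gamma}{dE} \;=\; \underbrace{\frac{\langle\sigma v\rangle}{8\pi\, m_\chi^2}\,\frac{dN_\gamma}{dE}}_{\text{particle physics}} \;\times\; \underbrace{\int_{\Delta\Omega} d\Omega \int_{\rm l.o.s.} \rho_\chi^2(l)\, dl}_{\text{astrophysical } J\text{-factor}}
\]

The first factor (annihilation cross section, mass, photon yield per annihilation) is invariant under changes of the target, while the J-factor carries the halo-profile and substructure-boost uncertainties discussed above, which scale differently with halo mass.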
During the last decades, the standard model of particle physics has evolved into one of the most precise theories in physics, describing the properties and interactions of fundamental particles in various experiments with high accuracy. However, it suffers from several shortcomings, from an experimental as well as a theoretical point of view: There is no established mechanism for the generation of the masses of the fundamental particles, in particular not for the light but massive neutrinos. In addition, the standard model does not provide an explanation for the observation of dark matter in the universe. Moreover, the gauge couplings of the three forces in the standard model do not unify, implying that a fundamental theory combining all forces cannot be formulated. Within this thesis we address supersymmetric models as answers to these questions, but instead of focusing on the simplest supersymmetrization of the standard model, we consider basic extensions, namely the next-to-minimal supersymmetric standard model (NMSSM), which contains an additional singlet field, and R-parity violating models. R-parity is a discrete symmetry introduced to guarantee the stability of the proton. Using lepton number violating terms in the context of bilinear R-parity violation and the munuSSM, we are able to explain neutrino physics in an intrinsically supersymmetric way, since those terms induce a mixing between the neutralinos and the neutrinos. Since 2009 the Large Hadron Collider (LHC) at CERN has been exploring the new tera-electronvolt energy regime, allowing the production of potentially existing heavy particles in the collision of protons. Thus the near future might provide answers to the open questions of mass generation in the standard model and show hints of physics beyond the standard model.
This thesis therefore works out the phenomenology of the supersymmetric models under consideration and points out differences to the well-known features of the simplest supersymmetric realization of the standard model. In the case of the R-parity violating models, the decays of the light neutralinos can result in displaced vertices. In combination with a light singlet state, these displaced vertices might offer a rich phenomenology, such as non-standard Higgs decays into a pair of singlinos decaying with displaced vertices. Within this thesis we present calculations at the next order of perturbation theory, since one-loop corrections can provide large contributions to the tree-level masses and decay widths. We use an on-shell renormalization scheme to calculate the masses of neutralinos and charginos, including the neutrinos and leptons in the case of the R-parity violating models, at the one-loop level. The discussion shows the similarities and differences to existing calculations in another renormalization scheme, the DRbar scheme. Moreover, we consider two-body decays of the form chi_j^0 -> chi_l^\pm W^\mp, involving a heavy gauge boson in the final state, at the one-loop level. Corrections are found to be large in the case of small or vanishing tree-level decay widths, and also for the R-parity violating decay of the lightest neutralino, chi_1^0 -> l^\pm W^\mp. An interesting feature of the models based on bilinear R-parity violation is the correlation between the branching ratios of the lightest neutralino decays and the neutrino mixing angles. We discuss these relations at tree level and, for the two-body decays chi_1^0 -> l^\pm W^\mp, also at the one-loop level, since only the full one-loop corrections reproduce the behavior expected at tree level. The appendix describes the two programs MaCoR and CNNDecays developed for the analyses carried out in this thesis.
MaCoR allows for the calculation of mass matrices and couplings in the models under consideration, and CNNDecays is used for the one-loop calculations of the neutralino and chargino mass matrices and the two-body decay widths.
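For orientation, the tree-level width of a generic two-body decay such as chi_j^0 -> chi_l^\pm W^\mp has the standard kinematic form (a generic textbook expression, not the model-specific result of the thesis):

\[
\Gamma \;=\; \frac{\lambda^{1/2}\!\big(m_{\chi_j^0}^2,\, m_{\chi_l^\pm}^2,\, m_W^2\big)}{16\pi\, m_{\chi_j^0}^3}\; \overline{|\mathcal{M}|^2},
\qquad
\lambda(a,b,c) \;=\; a^2+b^2+c^2-2ab-2bc-2ca,
\]

where \(\overline{|\mathcal{M}|^2}\) is the spin-averaged squared matrix element carrying all model dependence; the one-loop corrections discussed above enter through \(\mathcal{M}\), which is why they can dominate when the tree-level contribution is small or vanishing.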
Attention-deficit/hyperactivity disorder (ADHD) is a genetically complex, childhood-onset neurodevelopmental disorder which is highly persistent into adulthood. Several chromosomal regions associated with this disorder were identified previously in genome-wide linkage scans, association (GWA) and copy number variation (CNV) studies. In this work, the results of case-control and family-based association studies using a candidate gene approach are presented. For this purpose, possible candidate genes for ADHD were fine-mapped using mass-array-based SNP genotyping. The genes KCNIP4, CDH13 and DIRAS2 were found to be associated with ADHD and, in addition, with cluster B and cluster C personality disorders (PD), which are known to be related to ADHD. Most of the associations found in this work would not withstand correction for multiple testing. However, replication in several independent populations has been achieved, and in conjunction with previous evidence from linkage, GWA and CNV studies, it is assumed that there are true associations between these genes and ADHD. Further investigation of DIRAS2 by quantitative real-time PCR (qPCR) revealed expression in the hippocampus, cerebral cortex and cerebellum of the human brain and a significant increase in Diras2 expression in the mouse brain during early development. In situ hybridizations on murine brain slices confirmed the results gained by qPCR in the human brain. Moreover, Diras2 is expressed in the basolateral amygdala, structures of the olfactory system and several other brain regions which have been implicated in the psychopathology of ADHD. In conclusion, the results of this work provide further support for the existence of a strong genetic component in the pathophysiology of ADHD and related disorders. KCNIP4, CDH13 and DIRAS2 are promising candidates and need to be examined further to learn more about the neurobiological basis of this common disease.
This knowledge is essential for understanding the molecular mechanisms underlying the emergence of this disorder and for the development of new treatment strategies.
The scope of the present work encompasses the influence of experience (i.e. expertise) on feature processing in unconscious information processing. In the introduction, I describe the subliminal priming paradigm, a method to examine how stimuli we are not aware of nonetheless influence our actions. The activation of semantic response categories, the impact of learned stimulus-response links, and action triggering through programmed stimulus-response links are the three main hypotheses to explain unconscious response activation. In addition, the congruence of perceptual features can also influence subliminal priming. On the basis of the features location and form, I review the evidence that exists so far for perceptual priming. The second part of the introduction reviews the literature on the perceptual superiority of experts. This is illustrated with three domains of expertise: playing action video games, which constitutes a general form of perceptual expertise; radiology, a more natural form of expertise; and expertise in the game of chess, which is seen as the Drosophila of psychology. In the empirical section, I report nine experiments that applied a subliminal check detection task. Experiment 1 shows subliminal response priming for chess experts but not for chess novices. Thus, chess experts are able to judge unconsciously presented chess configurations as checking or non-checking. The results of Experiment 2 suggest that acquired perceptual chunks, and not the ability to integrate perceptual features unconsciously, were responsible for unconscious check detection, because experts' priming did not occur for simpler chess configurations that afforded an unfamiliar classification.
With a more complex check detection task, Experiment 3 indicates that chess experts are not able to process perceptual features in parallel or, alternatively, that chess experts are not able to form the specific expectations that are evidently necessary to elicit priming when many chess displays are applied. The aim of Experiments 4-9 was to further elaborate on the unconscious processing of the single features location and form in novices. In Experiments 4 and 5, perceptual priming according to the congruence of the single features location and form outperformed semantically based response priming. Experiments 6 and 7 show that (in contrast to form priming) the observed location priming effect is rather robust and is also evident for an unexpected form or colour. In Experiment 8, location and form priming, which were additionally related to response priming, were directly compared to each other. Location priming was again stronger than form priming. Finally, Experiment 9 demonstrates that with the subliminal check detection task it is possible to induce response priming in novices when the confounding influences of location and form are absent. In the General Discussion, I first summarize the findings. Second, I discuss possible mechanisms underlying the different subliminal perception in experts and novices. Third, I focus on subliminal perceptual priming in novices, especially on the impact of the features location and form. Finally, I discuss a framework, the action trigger account, that integrates the different results of the present work.
At the present day, the idea of cosmological inflation constitutes an important extension of Big Bang theory. Since its appearance in the early 1980s, many physical mechanisms have been worked out that put the inflationary expansion of space that precedes the Hot Big Bang on a sound theoretical basis. Among the achievements of the theory of inflation are the explanation of the almost Euclidean geometry of 'visible' space and the homogeneity of the cosmic background radiation, but also, in particular, of its tiny inhomogeneities with a relative amplitude of 10^-5. In many models of inflation the inflationary phase ends only locally. Hence, there exists the possibility that the inflationary process still goes on in regions beyond our visual horizon. This property is commonly termed 'eternal inflation'. In the framework of cosmological scalar fields, eternal inflation can manifest itself in a variety of ways. On the one hand, fluctuations of the field, if sufficiently large, can work against the classical trajectory and thereby counteract the end of inflation. In regions where this is the case, the accelerated expansion of space continues at a higher rate. In parts of such a region the process may replicate itself again and in this way may continue throughout all of time; space and field are said to reproduce themselves. On the other hand, a mechanism that can occur in addition to or independently of the latter is so-called vacuum tunneling. If the potential of the scalar field has several local minima, a semi-classical calculation suggests that within a spherical region, a bubble, the field can tunnel to another state. The respective tunneling rates depend on the potential difference and the shape of the potential between the states. Generally, the tunneling rate is exponentially suppressed, which means that inflation lasts for a long time before tunneling takes place.
The ongoing inflationary process effectively reduces local curvature, anisotropy and inhomogeneity, a property known as the 'cosmic no-hair conjecture'. For this reason, cosmological considerations of the evolution of bubbles have thus far almost entirely involved vacuum (de Sitter) backgrounds. However, new insights in the framework of string theory suggest high tunneling rates, which allow for the possibility of bubble nucleation in non-vacuum-dominated backgrounds. In this case the evolution of the bubble depends on the properties of the background spacetime. A deeper introduction in chapter 4 is followed by the presentation of the Lemaître-Tolman spacetime in chapter 5, which constitutes the background spacetime in the study of the effect of matter and inhomogeneity on the evolution of vacuum bubbles. In chapter 6 we explicitly describe the application of the 'thin-shell' formalism and the resulting system of equations. This is followed in chapter 7 by a detailed analysis of bubble evolution in various limits of the Lemaître-Tolman spacetime and in a Robertson-Walker spacetime with a rapid phase transition. The central observations are that the presence of dust, at a fixed surface energy density, goes along with a smaller nucleation volume and possibly leads to a collapse of the bubble. In an expanding background, the radially inhomogeneous dust profile is efficiently diluted, so that there is essentially no effect on the evolution of the domain wall. This changes for a radially inhomogeneous curvature profile: positive curvature decelerates the expansion of the bubble. Moreover, we point out that the adopted approach does not allow for a treatment of the physically expected matter transfer, so that the results are to be understood as preliminary under this caveat. In the second part of this thesis we consider potential observable consequences of bubble collisions in the cosmic microwave background radiation.
The topological nature of the signal suggests the use of statistics that are well suited to quantifying the morphological properties of the temperature fluctuations. In chapter 10 we present Minkowski Functionals (MFs), which provide exactly such statistics. The presented error analysis allows for a higher precision of numerical MFs in comparison to earlier methods. In chapter 12 we present the application of our algorithm to a Gaussian map and a collision map. We motivate the expected MFs and extract their numerical counterparts. We find that our least-squares fitting procedure accurately reproduces an underlying signal only when a large number of map realizations are averaged over; for a single map at WMAP or PLANCK resolution, we are able to recover the result only for a highly prominent disk with |δT| = 2√σG and ϑd = 40°. This is unfortunate, as it means that MFs are intrinsically too noisy to distinguish cold and hot spots of small size in the CMB.
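To illustrate what the MFs measure, the sketch below estimates the three two-dimensional functionals (area fraction, boundary length, Euler characteristic) of a pixelized excursion set by simple counting on a binary grid. The counting scheme and the example map are illustrative assumptions; this is not the error-controlled estimator developed in the thesis:

```python
import numpy as np

def minkowski_2d(mask):
    """Estimate the three 2-D Minkowski functionals of a binary
    excursion set on a square grid by naive counting (illustrative
    sketch, not the thesis' error-controlled estimator).

    Returns (area fraction, boundary length per pixel, Euler characteristic)."""
    m = np.asarray(mask, dtype=bool)
    n_pix = m.size
    v = int(m.sum())                              # active pixels (vertices)
    e = int((m[:, :-1] & m[:, 1:]).sum()
            + (m[:-1, :] & m[1:, :]).sum())       # adjacent active pairs (edges)
    f = int((m[:-1, :-1] & m[:-1, 1:]
             & m[1:, :-1] & m[1:, 1:]).sum())     # fully active 2x2 blocks (faces)
    area = v / n_pix
    # Boundary length: count exposed pixel edges, padding with "off" pixels
    # so that the map boundary is treated as outside the excursion set.
    padded = np.pad(m, 1, constant_values=False)
    exposed = sum((padded & ~np.roll(padded, sh, axis=ax)).sum()
                  for ax, sh in ((0, 1), (0, -1), (1, 1), (1, -1)))
    perimeter = exposed / n_pix
    euler = v - e + f                             # V - E + F of the cell complex
    return area, perimeter, euler

# A single 3x3 "hot spot" on a 9x9 map: one simply connected component.
mask = np.zeros((9, 9), dtype=bool)
mask[3:6, 3:6] = True
a, p, chi = minkowski_2d(mask)
# chi == 1 (one component, no holes); a == 9/81; p == 12/81 exposed edges.
```

Thresholding the map at a sequence of temperature levels and plotting each functional against the threshold yields the MF curves compared against their Gaussian expectations in the text.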
Chlorophylls are the most important pigments in nature, because they are responsible for photosynthesis and in this role fulfil diverse functions that arise from their self-assembly as well as from their advantageous optical and redox properties. The semisynthetic zinc chlorins investigated in this work are model compounds of the natural bacteriochlorophyll c (BChl c) of the light-harvesting (LH) systems in the chlorosomes of bacteria, but without the protein scaffold. The decisive advantages of these zinc chlorins (ZnChl) over the natural BChls are their simple semisynthetic accessibility starting from chlorophyll a (Chl a), their increased chemical stability, and the possibility of controlling their self-assembly by targeted chemical modification of the side chains at the periphery. While the promising redox and excitonic properties of aggregates of ZnChl and of natural BChl c, and the associated prerequisites for exciton transport over large distances, have been reported repeatedly, the charge-transport properties of aggregates of the biomimetic ZnChls have remained unexplored to date. The present work is concerned with elucidating the structure of aggregates of a large number of semisynthetic zinc chlorophyll derivatives in the solid state, in solution and on surfaces by a combination of various spectroscopic, crystallographic and microscopic techniques, followed by investigations of charge transport in the aggregates. Scheme 1 shows the various ZnChls synthesized in this work, which are functionalized with either a hydroxy or a methoxy group at the 3¹-position and carry substituents of different type, length and branching on the benzyl ester group at the 17²-position. The packing of these dyes depends decisively on their chemical structure.
While the ZnChls 1a, 2a and 3, bearing a 3¹-hydroxy group and alkyl side chains (dodecyl or oligoethylene glycol), form readily soluble rod-shaped aggregates, the analogous compounds with a 3¹-methoxy group (1b, 2b) assemble into stacks in solution and on surfaces. These supramolecular polymers were investigated in detail in chapter 3 by UV/Vis and CD spectroscopy (circular dichroism: CD) as well as dynamic light scattering (DLS). Furthermore, temperature-dependent UV/Vis measurements in combination with DLS provided valuable information on the aggregation processes of these two types of aggregates. While the ZnChl 1a with a 3¹-hydroxy group assembles into tube-shaped aggregates according to the isodesmic model, the stack-shaped aggregates of 1b form by a cooperative nucleation-elongation mechanism. Detailed electron-microscopic studies provided, for the first time, convincing evidence for tubular nanostructures of the aggregates of the water-soluble 3¹-hydroxy zinc chlorin 3. The measured tube diameters of ~5-6 nm for these aggregates are in excellent agreement with the electron-microscopy data for BChl c rod aggregates in chlorosomes (Chloroflexus aurantiacus, diameter ~5-6 nm) and thus correspond to the tubular model postulated by Holzwarth and Schaffner... In line with their highly ordered, robust structures, which extend one-dimensionally on the micrometre scale, as well as their capability for efficient charge-carrier transport, these self-assembled nanotubes of ZnChls represent promising starting materials for the fabrication of supramolecular electronic devices.
Scientific efforts to use some of these molecules and their corresponding supramolecular polymers for the fabrication of (opto)electronic devices such as organic field-effect transistors represent rewarding tasks for the future...