The availability of coherent soft x-rays through the nonlinear optical process of high-harmonic generation allows for the monitoring of the fastest events ever observed in the laboratory. The attosecond pulses produced are the fundamental tool for the future time-resolved study of electron motion in atoms, molecules, clusters, liquids and solids. However, in order to exploit the full potential of this new tool it is necessary to control the coherent soft x-ray spectra and to enhance the efficiency of conversion from laser light to the soft x-ray region in the harmonic-generation process. This work developed a comprehensive approach towards the optimization of the harmonic-generation process. As this process represents a fundamental example of \emph{light}--\emph{matter} interaction there are two ways of controlling it: shaping the generating laser \emph{light} and designing ideal states of \emph{matter} for the conversion medium. Both approaches were closely examined. In addition, going far beyond simply enhancing the conversion process, it could be shown that the qualitative spectral response of the process can be modified by shaping the driving laser pulse. This opens the door to a completely new field of research: optimal quantum control in the attosecond soft x-ray region---the realm of electron dynamics. In the same way as it is possible to control molecular or lattice vibrational dynamics with adaptively shaped femtosecond laser pulses these days, it will now be feasible to perform real-time manipulation of tightly bound electron motion with adaptively shaped attosecond light fields. The last part of this work demonstrated the capability of the herein developed technique of coherent soft-x-ray spectral shaping, where a measured experimental feedback was used to perform a closed-loop optimization of the interaction of shaped soft x-ray light with the sulfur hexafluoride molecule to arrive at different control objectives.
For the optimization of the high-harmonic-generation process by engineering the conversion medium, both the gas phase and the liquid phase were explored in experiment and theory. Molecular media were demonstrated to convert more efficiently than commonly used atomic targets when elliptically polarized driving laser pulses are applied. Theory predicted an enhancement of harmonic generation for linearly polarized driving fields when the internuclear distance is increased. The reasons for this are identified as the increased overlap of the returning electron wavefunction due to molecular geometry and the control over the delocalization of the initial electronic state, leading to less quantum-mechanical spreading of the electron wavepacket during continuum propagation. A new experimental scheme was worked out, using the method of molecular wavepacket generation as a tool to enhance the harmonic conversion efficiency in `pump--drive' schemes. The latter was then experimentally implemented in the study of high-harmonic generation from water microdroplets. A transition between the dominant laser--soft-x-ray conversion mechanisms could be observed, identifying plasma breakdown as the fundamental limit of high-density high-harmonic generation. Harmonics up to the 27th order were observed for optimally laser-prepared water droplets. To control the high-harmonic-generation process by the application of shaped laser light fields, a laser-pulse shaper based on a deformable membrane mirror was built. Pulse-shape optimization resulted in increased high-harmonic-generation efficiency---but more importantly, the qualitative shape of the spectral response could be significantly modified for high-harmonic generation in waveguides. By adaptive optimization employing closed-loop strategies it was possible to selectively generate narrow (single harmonics) and broad bands of harmonic emission.
Tunability could be demonstrated both for single harmonic orders and for larger regions of several harmonics. Whereas all previous experiments reported to date produced a plateau of equally intense harmonics, it has been possible to demonstrate ``untypical'' harmonic soft x-ray spectra exhibiting ``switched-off'' harmonic orders. The high degree of controllability paves the way for quantum control experiments in the soft x-ray spectral region. It was also demonstrated that the degree of control over the soft x-ray spectral shape depends on the high-harmonic-generation geometry. Experiments performed in the gas jet could not change the relative emission strengths of neighboring harmonic orders. In the waveguide geometry, the relative harmonic yield of neighboring orders could be modified with high contrast ratios. A simulation based solely on the single-atom response could not reproduce the experimentally observed contrast ratios, pointing to the importance of propagation (phase-matching) effects as a reason for the high degree of controllability observed in capillaries and settling long-standing debates in the field. A prototype experiment was presented demonstrating the versatility of the developed soft x-ray shaping technique for quantum control in this hitherto unexplored wavelength region. Shaped high-harmonic spectra were again used in an adaptive feedback-loop experiment to control the gas-phase photodissociation reaction of SF$_6$ molecules. A time-of-flight mass spectrometer was used for the detection of the ionic fragments. The branching ratios of particular fragmentation channels could be varied by optimally shaped soft x-ray light fields. Although in one case only slight changes of the branching ratio were possible, an optimal solution was found, proving the sufficient technical stability of this unique coherent soft-x-ray shaping method for future applications in optimal control.
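The closed-loop strategy described above, in which a measured spectral feedback steers an evolutionary search over deformable-mirror settings, can be sketched as follows. This is a minimal, hypothetical illustration: the fitness function below is a toy stand-in for the experimental soft x-ray feedback, and all names and parameters are invented for the example.

```python
import random

def fitness(shape):
    # Hypothetical stand-in for the experimental feedback: in the real
    # setup this would be the measured soft x-ray signal in the target
    # spectral band for a given set of mirror actuator voltages.
    target = [0.1 * i for i in range(len(shape))]
    return -sum((s - t) ** 2 for s, t in zip(shape, target))

def optimize(n_actuators=8, pop_size=20, generations=60, seed=0):
    """Toy evolutionary closed loop: mutate candidate pulse shapes,
    evaluate each via the feedback signal, keep the fittest."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(n_actuators)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 4]          # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            p = rng.choice(parents)
            children.append([g + rng.gauss(0, 0.05) for g in p])  # mutation
        pop = parents + children
    return max(pop, key=fitness)

best = optimize()
```

In the experiment, the fitness would instead be evaluated by acquiring a harmonic spectrum for each candidate mirror setting and scoring it against the control objective (e.g. the yield in a single harmonic order, or suppression of selected orders).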
Active shaping of the spectral amplitude in coherent spectral regions of $\sim$10~eV bandwidth was shown to correspond directly to shaping the temporal features of the emerging soft x-ray pulses on sub-femtosecond time scales. This can be understood through the dualism of frequency and time, with the Fourier transformation acting as translator. A quantum-mechanical simulation was used to clarify the magnitude of temporal control over the shape of the attosecond pulses produced in the high-harmonic-generation process. In conjunction with the experimental results, the first attosecond time-scale pulse shaper could thus be demonstrated in this work. The availability of femtosecond pulse shapers opened the field of adaptive femtosecond quantum control. The milestone idea of experimentally implemented closed-loop feedback control was expressed by Judson and Rabitz in their seminal work titled ``Teaching lasers to control molecules''. The present work extends and inverts this statement. Two fundamentally new achievements can now be added: ``Teaching molecules to control laser light conversion'' and ``Teaching lasers to control coherent soft x-ray light''. The original idea thus enabled the leap from femtosecond control of molecular dynamics into the new field of attosecond control of electron motion to be explored in the future. The \emph{closed}-loop approach could really \emph{open} the door towards fascinating new perspectives in science. Coming back to the introduction in order to close the loop, let us reconsider the analogy to the general chemical reaction. Photonic reaction control was presented by designing and engineering effective media (catalysts) and by controlling the preparation of educt photons within the shaped laser pulses, to selectively produce desired photonic target states in the soft x-ray spectral region. These newly synthesized target states in turn could be shown to be effective in the control of chemical reactions.
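The stated correspondence between a $\sim$10~eV coherent bandwidth and sub-femtosecond temporal structure follows from a simple transform-limit estimate (an order-of-magnitude check added here for orientation, not a result of the thesis itself):

```latex
\Delta t \gtrsim \frac{\hbar}{\Delta E}
        = \frac{6.58\times 10^{-16}\,\mathrm{eV\,s}}{10\,\mathrm{eV}}
        \approx 66\,\mathrm{as},
```

so any amplitude shaping across a bandwidth of this size necessarily acts on attosecond time scales.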
The next step to be accomplished will be the control of sub-femtosecond time-scale electronic reactions with adaptively controlled coherent soft x-ray photon bunches. To that end a time-of-flight high-energy photoelectron spectrometer has recently been built, which will now allow electronic dynamics in atomic, molecular or solid-state systems to be monitored directly. Fundamentally new insights into, and applications of, the nonlinear interaction of shaped attosecond soft x-ray pulses with matter can be expected from these experiments.
This thesis is concerned with the development of an on-line in-situ device for the chemical characterisation of flowing aerosols. The thesis describes the principles and most important features of such a system, which also allows on-line measurements using Raman spectroscopy as a diagnostic technique. An analysis of the effect of forced oscillations on the motion of a particle dispersed in a gas flow is given in Chapter 2, where the most important particle parameters are also introduced. A review of the particle/fluid interaction in laminar air flows and the response of the particle is presented. In Chapter 3 the behaviour of the particle under different external conditions (ion bombardment and electric fields) is examined. A brief review of the most important particle-charging theories (diffusion, field, and alternating-potential charging) shows that the electrical properties of the particles (represented by the dielectric constant) affect the charging process. A non-contact method for particle-charge measurement is also presented. In the second part of the chapter, the interaction between the electric field and the charged particle for the purpose of particle trapping is illustrated. The most common systems, such as the two- or four-ring electrodynamic balance and the quadrupole trap, are described. In Chapter 4 a short review of the possibility of using scattered light to study aerosol particles is presented. First, the conditions for using Mie theory for particle size and refractive index determination are mentioned, then some features concerning the classical treatment of the Raman effect are presented. Supported by the theoretical considerations of Chapters 2, 3, and 4, the construction and testing of different devices are presented in Chapter 5. Following the goal of the thesis, first an overview of the materials and methods used for particle generation is presented.
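The particle response to an oscillating flow analyzed in Chapter 2 is conventionally summarized by the Stokes relaxation time and the resulting amplitude ratio between particle and fluid motion (standard aerosol-mechanics relations, quoted here for orientation):

```latex
\tau = \frac{\rho_p\, d_p^2\, C_c}{18\,\mu}, \qquad
\left|\frac{x_p}{x_f}\right| = \frac{1}{\sqrt{1+(\omega\tau)^2}},
```

where $\rho_p$ is the particle density, $d_p$ its diameter, $C_c$ the Cunningham slip correction, $\mu$ the gas viscosity and $\omega$ the oscillation frequency; small particles ($\omega\tau \ll 1$) follow the flow closely, while large ones lag behind.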
Then, the constructed charging devices are described (from the mechanical and electrical points of view) and compared by measuring the charge acquired by the particle. Charged particles can be trapped in different containers. Two types of axially symmetric electrodynamic balances (a two-ring and an extended four-ring configuration) are presented. For a deeper understanding, these systems were studied using analytic and numerical methods. Considering the purpose of the work, another type of trapping system was developed, namely the quadrupole trap. A theoretical characterisation similar to that of the electrodynamic balance (in terms of the Mathieu equation) was presented, pointing out some specific features of this system. The incoming particle stream is focused to the centre of the system, while the DC and AC potentials applied to the tube electrodes simultaneously yield stable trapping of one or more particles. Chapter 6 consists of two parts: a system for single-particle and one for many-particle investigation. The individual devices presented in Chapter 5 are now put together. The first part presents the method and the experimental realisation of a set-up for solid-particle injection. In order to overcome the phase-injection disadvantage found for the electrodynamic balance, a purpose-developed program processes the information obtained from a particle cloud through an adequate electronic detection system and reduces the number of particles until just one single particle is trapped. The method for single-particle investigation can be extended to many particles. Using the presented set-up, the particles are moved from one quadrupole to another and transformed from a particle cloud into a particle stream. A linear relation between an external, vertically mounted detector and the image of the particle stream formed on the CCD camera was observed and used for the simultaneous detection of many particles by Raman spectroscopy. For both methods Raman results are presented.
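The characterisation "in terms of the Mathieu equation" mentioned above refers to the standard equation of motion of a charged particle in an ideal quadrupole field (textbook Paul-trap form, stated here for reference):

```latex
\frac{d^2 u}{d\xi^2} + \left(a_u - 2 q_u \cos 2\xi\right) u = 0,
\qquad \xi = \frac{\Omega t}{2},
```

with stability parameters $a_u \propto U$ (DC potential) and $q_u \propto V$ (AC amplitude). For $a_u = 0$ the lowest stability region extends up to $q_u \approx 0.908$, which fixes the usable combinations of trap voltage, drive frequency and particle charge-to-mass ratio.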
One limitation of Raman spectroscopy is the relatively long integration time needed for an adequate signal-to-noise ratio. Two factors influence the integration time: first, the incident radiation and the detector sensitivity, and second, the intensity of the Raman bands. Using a CCD detector, the desired detector sensitivity can be achieved. Hence, improving the signal-to-noise ratio should be the next goal in the system development. In order to reduce the integration time, an optical system including optical fibres and the integration of an FT-Raman module operating in the visible region is planned. The goal of this work was to develop and construct an instrument for on-line in-situ single-particle investigation by Raman spectroscopy. With the presented experimental set-up and the developed program, the purpose of the work, on-line in-situ aerosol investigation at near-atmospheric pressure, was achieved. Raman spectroscopy was used successfully for the chemical characterisation of the aerosol particles.
The success of diagnostic knowledge systems has been proven over the last decades. Nowadays, intelligent systems are embedded in machines within various domains or are used in interaction with a user for solving problems. However, although such systems have been applied very successfully, the development of a knowledge system is still a critical issue. As in projects dealing with highly innovative customized software, a precise specification often cannot be given in advance. Moreover, necessary requirements of the knowledge system often cannot be defined until the project has started, or change during the development phase. Many success factors depend on the feedback given by users, which can be provided if preliminary demonstrations of the system are delivered as soon as possible, e.g., for interactive systems, to validate the duration of the system dialog. This thesis argues that classical, document-centered approaches cannot be applied in such a setting. We cope with this problem by introducing an agile process model for developing diagnostic knowledge systems, mainly inspired by the ideas of the eXtreme Programming methodology known from software engineering. The main aim of the presented work is to simplify the engineering process for domain specialists formalizing the knowledge themselves. The engineering process is supported at a primary level by the introduction of knowledge containers, which define an organized view of the knowledge contained in the system. Consequently, we provide structured procedures as a recommendation for filling these containers. The actual knowledge is acquired and formalized right from the start, and integration into runnable knowledge systems is done continuously in order to allow for early and concrete feedback. In contrast to related prototyping approaches, the validity and maintainability of the collected knowledge are ensured by appropriate test methods and restructuring techniques, respectively.
Additionally, we propose learning methods to further support the knowledge acquisition process. The practical significance of the process model strongly depends on the tools available to support its application. We present the system family d3web, and especially the system d3web.KnowME, as a highly integrated development environment for diagnostic knowledge systems. The process model and its activities are evaluated in two real-life applications: in a medical and an environmental project, the benefits of agile development are clearly demonstrated.
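The idea of securing validity through test methods while the knowledge base is continuously restructured can be illustrated with a toy regression-test harness. This sketch is hypothetical and far simpler than d3web itself: stored test cases (findings paired with expected diagnoses) are replayed against a simple scoring rule base after every change, so a refactoring that alters the derived diagnoses is caught immediately.

```python
def derive(findings, rules):
    """Apply simple scoring rules: each rule adds points to a diagnosis
    when its required findings are present; a diagnosis is established
    once its score reaches the threshold of 10 points."""
    scores = {}
    for required, diagnosis, points in rules:
        if required.issubset(findings):
            scores[diagnosis] = scores.get(diagnosis, 0) + points
    return {d for d, s in scores.items() if s >= 10}

def run_test_cases(cases, rules):
    """Return the failing cases; an empty list means the knowledge
    base still solves all stored cases after restructuring."""
    return [(f, exp) for f, exp in cases if derive(f, rules) != exp]

# Invented example rule base and stored test cases:
rules = [({"fever", "cough"}, "flu", 10), ({"fever"}, "infection", 5)]
cases = [({"fever", "cough"}, {"flu"}), ({"fever"}, set())]
failures = run_test_cases(cases, rules)
```

An empty `failures` list plays the role of a green test run: the collected knowledge still behaves as documented by the cases, even after restructuring.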
One primary source of self-knowledge is social comparison. Objective criteria for self-evaluations are often not available or not useful, and therefore comparisons with other people play a crucial role in self-evaluations. But the question is whether social comparisons can indeed provide information about the self without consuming too many cognitive resources or too much time. In this research I therefore looked at practice effects in social comparison and the particular significance of routine standards. Whereas traditional research on standard selection mostly focused on goal-oriented and strategic standard-selection processes, this research sets out to integrate social-cognitive knowledge, ideas, and methods. Researchers from many different fields agree that people’s behavior and thinking is not fully determined by rational choices or normative considerations. Quite the contrary: factors like knowledge accessibility, habits, procedural practice, stereotyping, categorization, and many more cognitive processes play an important role. The same may be true in social comparison and standard selection. In my research I demonstrate that efficiency concerns play an important role in social comparison. Since people may not be able to engage in strategic standard selection whenever they engage in social comparison processes, there has to be a more efficient alternative. Using routine standards would be such an alternative. The efficiency advantage of routine standards may be founded not only in dispensing with a strategic but arduous standard-selection process, but also in a higher efficiency of the comparison process itself. I therefore set out to show how the use of routine standards facilitates social comparison processes. This was done in three steps. First, I replicated and improved upon our former research (Mussweiler & Rüter, 2003, JPSP) indicating that people really do use their best friends as routine standards to evaluate themselves.
Second, I demonstrated that it is more efficient to compare with a routine standard than with another standard. In Studies 2 and 3 I show that comparisons between the self and a routine standard (either a natural routine standard like the best friend or an experimentally induced routine standard based on practice) are faster and more efficient than comparisons with other standards. Finally, I looked at the mechanism underlying the efficiency advantage of routine standards. The results of Studies 4 and 5 indicate that both general and specific practice effects occur with repeated comparisons. Whereas a specific practice effect implies the repeated processing of the same content (i.e., knowledge about the routine standard), general practice effects indicate that the pure process (i.e., comparing the self with a routine standard) becomes more efficient regardless of whether new content (i.e., comparison-relevant knowledge) has to be processed. Taken together, the efficiency advantage of routine standards during self-evaluation is based not only on the lack of necessity for an arduous standard selection, but is additionally supported by the facilitation of the comparison process itself. The efficiency of routine standards may provide an explanation as to why people base self-evaluations on comparisons with these standards and dispense with strategic considerations to select the most suitable standard.
Cellular proliferation, differentiation and survival in response to extracellular signals are controlled by the signal transduction pathway of Ras, Raf and MAP kinase. The Raf proteins are serine/threonine kinases with essential functions in growth-, differentiation- and survival-related signal transduction events. In mammals, three functional Raf genes (A-, B-, and C-Raf) have been described. Biochemical studies suggest overlapping and differential utilization of Raf isozymes. However, the frequent co-expression of Raf isozymes and their multiple activators and effectors impedes the full understanding of their specific roles. The elucidation of these roles is important due to the involvement of the Ras/Raf/MEK/MAP kinase cascade in human disorders, especially in tumor development and progression. B-Raf was shown to possess the strongest kinase activity among the Raf kinases and to display antiapoptotic properties. Mice deficient in B-Raf show overall growth retardation and die between E10.5 and E12.5 of vascular defects caused by excessive death of differentiated endothelial cells. To elucidate the redundancy of Raf isozymes during embryonic development and to rescue the B-Raf-/- (KO) phenotype, the B-Raf alleles were disrupted by introducing A-Raf cDNA under the control of the endogenous B-Raf promoter. The resulting B-Raf A-Raf/A-Raf knock-in (KIN) phenotype depends on the genetic background. Living embryos displaying normal development but reduced size were found with low incidence at E12.5-E16.5. All of them displayed rescue of the vascular system. One adult p20 mouse without any visible defects in development and behavior was obtained. On the other hand, the processes of neurogenesis and neural precursor migration in surviving embryos were disturbed, which led in some cases to underdevelopment of different brain compartments.
TUNEL and cell-proliferation (PCNA staining) assays revealed more apoptotic (E13.5) and fewer proliferating (E12.5) cells within the ventricular and sub-ventricular zones of the brain ventricles and in the striatum of KIN embryos. In addition, more apoptotic cells were detected in many other tissues of E13.5 and in the lung of E16.5 KIN embryos, but not in the adult KIN mouse. The p20 KIN mouse showed a reduced fraction of neural precursor cells in the subgranular zone of the hippocampus and of mature neurons in the olfactory bulb. The other processes of neurogenesis were not disturbed in the adult KIN animal. Fibroblasts obtained from KIN embryos demonstrated lower proliferative ability and were more susceptible to apoptotic stimuli compared to WT. This was accompanied by a reduction of active ERK and Akt, which are required for survival, and by a decrease of inactive phosphorylated BAD. The kinetics of both ERK and Akt phosphorylation upon serum stimulation were delayed. All these data indicate that moderate A-Raf kinase activity can prevent endothelial apoptosis but is not sufficient to completely rescue the other developmental consequences.
The development and in-depth characterization of new fluoroaryl functionalized ORMOCER® materials (inorganic-organic hybrid polymers) for optical waveguide applications in telecommunication is presented. The preparation of the materials included precursor silane synthesis, hydrolysis/polycondensation of organoalkoxysilane mixtures, and photolithographic processing of the resulting oligosiloxane resins in order to establish the inorganic-organic hybrid network. During all stages of ORMOCER® preparation, structure-property relations were deduced from characterization data, particularly with respect to low optical loss in the important near-infrared spectral region as well as refractive index. With the aid of molecular modeling, structural characteristics of oligomeric intermediates were visualized, which proved valuable for the fundamental understanding of the material class. The material development started with the syntheses of a variety of commercially unavailable fluorinated and unfluorinated arylalkoxysilanes by means of Grignard and hydrosilylation pathways, respectively. A survey of silane optical properties, particularly their absorptions at the telecom wavelengths 1310 nm and 1550 nm, guided the choice of suitable precursors for the preparation of low-loss ORMOCER® resins. Accordingly, precursor silane mixtures and hydrolysis/polycondensation reaction conditions were chosen and optimized with regard to low contents of C-H and Si-OH functions. Thus, absorptions as low as 0.04 dB/cm at 1310 nm and 0.18 dB/cm at 1550 nm, respectively, could be obtained from an oligosiloxane resin based on pentafluorophenyltrimethoxysilane (1) mixed with pentafluorophenyl(vinyl)-dimethoxysilane (5).
In order to improve the organic crosslinkability under photolithographic processing conditions, further resins on the basis of the aforementioned were prepared, which additionally incorporated the styrene-analogous precursor 4-vinyltetrafluorophenyl-trimethoxysilane (4). Thus, ORMOCER® resins with low optical losses of 0.28 dB/cm at 1310 nm and 0.42 dB/cm at 1550 nm, respectively, were prepared, which exhibited excellent photopatternability. The manufacture of micropatterns such as optical waveguide structures by UV photolithography under clean-room conditions was the final stage of material synthesis. The optimization of processing parameters allowed the preparation of test patterns for the determination of optical, dielectric and mechanical properties. A low optical loss of 0.51 dB/cm at 1550 nm could be measured on a waveguide manufactured from a photopatternable fluoroaryl functionalized ORMOCER®. The structural characterization of liquid resins as well as cured ORMOCER® samples was accomplished chiefly with solution and solid-state 29Si-NMR spectroscopy, respectively. Particularly for polycondensates incorporating species based on more than one precursor silane, the spectra showed a high degree of complexity. An additional challenge arose from the partial loss of fluoroaryl groups during ORMOCER® condensation and curing, which resulted in even more condensation products. Thus, in order to provide a basis for resin analysis, first the hydrolysis/condensation reactions of the isolated precursors were investigated in a time-resolved manner with NMR spectroscopy at low temperature. Backed by the signal assignments in these single-precursor systems, the respective species could also be identified in the complex resin spectra, allowing for their quantitative interpretation. The structural characterization was rounded out by IR spectroscopy and SAXS analyses.
With the help of molecular modeling, the experimental data were finally transferred into a three-dimensional image of an organosiloxane oligomer, which is representative of a photopatternable fluoroaryl functionalized ORMOCER® resin. The combination of low-temperature NMR, which made the characterization of polycondensates possible, with oligomer modeling paved the way to a further understanding of ORMOCER® resin systems. On the basis of this visualization of structural characteristics, properties such as the organic crosslinkability of oligomers were discussed in the light of steric features within the molecular structure. Thus, new possibilities were established for the systematic optimization of ORMOCER® formulations. Structure-property relations with respect to optical loss and refraction, as determined within this work, follow trends that are in accordance with the literature. In particular, the direct comparison of data derived from analogous fluorinated and unfluorinated ORMOCER® resins showed that fluorination results in a significant decrease in NIR optical loss. Additionally, different unfluorinated aryl functionalized systems with varying aliphatic C-H content were compared. In the case of a lower aliphatic content, a widening effect on the 1310 nm window was found. This is due to a shift of arylic C-H vibrations (1145 nm) towards lower wavelengths compared to aliphatic C-H (1188 nm). Finally, on the basis of NIR spectra of analogous fluorinated resins with low and high silanol content, respectively, a significant impact of (Si)O-H groups on the 1550 nm window was demonstrated, while the 1310 nm window was unaffected. This is due to O-H vibrations with a maximum at 1387 nm and further bands at higher wavelengths. The index of refraction was drastically lowered by fluorination. Thus, the analogous fluorinated and unfluorinated ORMOCER® resins had indices of 1.497 and 1.570, respectively, in the VIS region.
For the fluorinated systems, the refractive index did not change significantly during organic crosslinking and hardbake. In conclusion, the new fluoroaryl functionalized ORMOCER® systems represent low-loss materials for telecom applications. In addition, the in-depth characterization during material development allowed the proposal of structure-property relations, particularly with respect to optical properties, which are of considerable importance for future developments.
The present work consists of two major parts. The first part, extending over chapters 1, 2, 3 and 4, addresses the design and construction of a device capable of determining the shell thickness and the core size of monolayer spherical particles in a flow. The second part, containing chapters 5, 6, 7, 8, 9 and 10, concentrates on the use of Raman spectroscopy in a space application, namely as a tool for in situ planetary investigations. This part directly addresses the MIRAS project, a study run under the auspices of the Federal Ministry of Education and Research (BMBF) and the German Aerospace Center (DLR) under national registration number 50OW0103. MIRAS stands for "Mineral Investigation by in situ Raman Spectroscopy". Microcapsule Sizing by Elastic Light Scattering: The industrial development of processes based on microcapsules depends on the possibility to provide clear and complete information about the properties of these microcapsules. However, tools for an easy and efficient determination of microcapsule properties are lacking; several methods are often required to describe the microcapsule behavior adequately. Methods for evaluating the individual size and size distribution of both the core and the shell are required, together with methods for measuring the mechanical strength, the stability in application media, the permeability of the shell, etc. Elastic light scattering measurements provide a possible way of determining properties such as core size, shell size and refractive index. The design and construction of a device capable of measuring the above-mentioned parameters for a core-shell particle is the subject of the first part of this thesis. The basic principle of measurement for the device proposed here consists of analyzing one particle at a time by recording the elastic light scattering pattern at angles between approximately 60° and 120°.
By comparing the experimentally recorded phase functions with previously calculated phase functions stored in a database, the geometry of the scattering object can be identified. In our case the geometry is characterized by two parameters: the shell thickness and the core radius. In chapter 2 a short overview of the methods used for sizing microparticles is given. Different sizing methods are compared, and their advantages and disadvantages for the general problem of sizing are briefly discussed. It is observed that all sizing methods based on elastic light scattering theories are ensemble methods. Chapter 3 focuses on the theories used for calculating the theoretical scattering patterns, with emphasis on Mie theory. The generalization of Mie theory to layered particles is briefly presented and the far-field intensity approximations are discussed. The last chapter (4) of this first part describes the experimental approach to building an automatic microcapsule sizer. The approach started by O. Sbanski [76] with the development of a software package for calculating and storing theoretical phase functions for core-shell particles was continued with the design and construction of a measuring device. The hardware construction and the software, with all corrections imposed by the individual setup components, are described in detail. For the laser, the monochromaticity and the intensity profile of the beam as well as the planarity of the equi-phase fronts are taken into consideration. The flow cell, in three different designs, is described, and the influence of the employed design on the light scattering patterns is discussed together with the optical system used for recording the experimental phase functions. The detection system, formed by two identical linear CCD arrays, is presented together with the software approach used for data acquisition. Ways of improving the quality and the speed of the analyzing process are discussed.
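The database comparison described above, matching a measured phase function against precomputed core-shell patterns, amounts to a nearest-neighbour search. A minimal sketch (hypothetical names; in the real device the database holds Mie-theory phase functions sampled over the 60-120° range):

```python
def best_match(measured, database):
    """Return the (core_radius, shell_thickness) key whose stored
    phase function is closest, in the least-squares sense, to the
    measured one. Both must be sampled at the same angles."""
    def sq_err(key):
        return sum((m - c) ** 2 for m, c in zip(measured, database[key]))
    return min(database, key=sq_err)
```

In practice the stored patterns would come from the layered-sphere generalization of Mie theory mentioned in chapter 3, with the angular sampling matched to the pixels of the linear CCD arrays of the detection system.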
The final section presents measurements run on samples made of homogeneous spheres and also on samples containing industrial microcapsules.

Mineral Investigation by in situ Raman Spectroscopy

The envisaged future planetary missions require space-borne instruments which are highly miniaturized with respect to volume and mass and which have low power requirements. A micro Raman spectrometer as a stand-alone device on a planetary surface (e.g. Mars) offers a wide spectrum of possibilities. It can perform chemical analysis via determination of the mineral composition, detect organic molecules in the soil, identify the principal mineral phases, etc. The technical developments of the last years have introduced a new generation of small Raman systems suitable for robotic mineral characterization on planetary surfaces [20, 95]. Two different types of spectrometer were considered for the MIRAS study. As supporting laboratory experiments for the MIRAS study, the measurements on standard minerals and on SNC Mars meteorites are discussed in chapter 6. The following SNC meteorites have been investigated: Sayh al Uhaymir 060, Dar al Gani 735, Dar al Gani 476, Northwest Africa 856, Los Angeles, Northwest Africa 1068 and Zagami. Pyrite as a hitherto undescribed phase in the picritic (olivine-phyric) shergottite NWA 1068, as well as reduced carbon (e.g. graphite) and anatase in the shergottite Sayh al Uhaymir 060, are new findings for this class of meteorites. A detailed description of the proposed designs for MIRAS, with the components used for building the test version on a breadboard, is given in chapter 7. The scientific as well as the mission requirements imposed on the instrument are discussed. The basic design is presented, and the main components of the device, namely the laser unit, the Raman head, the Rayleigh filtering box, and the spectral sensor (a spectrometer with a matching detector), are described.
The two proposed designs, one based on an acousto-optic tunable filter (AOTF) and the other on a dispersive Hadamard transform spectrometer, are compared with each other. The actual breadboard setup with a detailed description of the components follows in Section 7.3. Further development of a Raman spectrometer for planetary investigations, in combination with a microscope, is proposed as part of the Extended-MIRAS project. The software developed for controlling the breadboard version of MIRAS is described in chapter 8, together with a short description of the structure of a relational database used for in-house spectra management. The measuring procedures and the data processing steps are presented. Spectra acquired with the MIRAS breadboard version based on the AOTF are shown in chapter 9. The final chapter addresses a rather different possibility of using Raman spectroscopy for planetary investigations. It summarizes the content of four technical notes that were established within a study contracted by the European Space Agency to the firm Kayser-Threde in Munich, concerning the possibility of applying Raman spectroscopy in the field of remote imaging.
The present thesis encompasses two parts. The first, supramolecular part focuses on the development of new flexible self-assembling zwitterions as building blocks for supramolecular polymers. In the second part, the aim was to develop bioorganic receptors for amino acids and dipeptides in aqueous media. Both research projects are based on the guanidiniocarbonyl pyrrole 1 as a new efficient binding motif for the complexation of carboxylates in polar solution. A necessary prerequisite for the realization of these research projects was to develop an efficient and mild synthetic approach to cationic guanidiniocarbonyl pyrroles in general. The harsh reaction conditions of the previously used method and the problematic purification of the cationic guanidiniocarbonyl pyrroles had so far prevented a more extensive exploration in bioorganic and supramolecular research. In the course of this work I successfully developed a new synthesis starting with mono-tBoc-protected guanidine, which was coupled with a benzyl-protected pyrrole carboxylic acid. After deprotection of the benzyl group, a key intermediate of the newly developed synthesis, the tBoc-protected guanidiniocarbonyl pyrrole acid, was obtained. This new, mild and extremely efficient synthetic approach for the introduction of acyl guanidines is now the standard procedure in our group for the preparation of guanidiniocarbonyl pyrroles both in solution and on solid phase. With this facile method at hand, a new class of flexible zwitterions, in which a carboxylate is linked via an alkyl chain to a guanidiniocarbonyl pyrrole cation, was synthesized. The self-aggregation and the influence of the length, and therefore flexibility, of the alkyl spacer on the structure and stability of the formed aggregates were studied in solution and in the gas phase. In solution the aggregation was studied by NMR dilution experiments in DMSO, which suggest that flexible zwitterions with n = 1, 3 and 5 form oligomers.
For n = 1, highly stable helical aggregates of nanometer size are formed. In the gas-phase studies the stability and the fragmentation kinetics of a series of sodiated dimeric zwitterions with n = 2, 3 and 5 were investigated by infrared multiphoton dissociation Fourier transform ion cyclotron resonance mass spectrometry (IRMPD-FT-ICR-MS). Such studies can be used in the future for a more directed design of supramolecular building blocks. The bioorganic research part comprises three different projects. In a first project I synthesized four new arginine analogues which can be incorporated into peptides as a substitute for arginine. For this purpose, I developed the new multi-step synthesis shown below for these arginine analogues. As a test for their application in normal solid-phase synthesis, I successfully prepared a tripeptide sequence Ala-AA1-Val (AA1: arginine analogue). In a second project I studied the influence of additional ionic interactions within our binding motif. I synthesized a di-cationic and a tris-cationic receptor and evaluated their binding properties via NMR titration experiments against a variety of amino acids. In particular, the tris-cationic receptor was capable of strongly complexing amino acids. The association constants were about a factor of 100 higher than those for the guanidiniocarbonyl pyrroles known so far. Even in 90 % water/10 % DMSO the association constants determined by NMR titration were extremely high, with values around Kass = 2000 M-1. In the third project I developed a de-novo-designed receptor for C-terminal dipeptides in a beta-sheet conformation based on molecular calculations. This receptor was studied in NMR and also UV titration experiments. In 40 % water/60 % DMSO the association constants were too strong to be measured by NMR titration experiments.
Therefore, the complexation properties of 12 were studied by UV titration in water (with 10 % DMSO added for solubility reasons) with various dipeptides and amino acids as substrates. The data show that 12 binds dipeptides very efficiently even in water, with association constants Kass > 10000 M-1, making 12 one of the most effective dipeptide receptors known so far. In contrast, simple amino acids are bound up to ten times less efficiently (Kass > 1000 M-1) than dipeptides. In the series of dipeptides studied, the complex stability increases with the side chains present in the order Gly < Ala < Val, a result of the decreasing flexibility of the peptide and the increasing hydrophobicity of the side chains. The binding properties of this receptor are superior to those of any other dipeptide receptor reported so far. Within my thesis I have not only developed an essential, mild and efficient synthetic approach to guanidiniocarbonyl pyrroles in general, but also a new binding motif for the complexation of amino acids (15, 11) and, in addition, a dipeptide receptor (12) that is superior to all dipeptide receptors known so far.
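The association constants quoted above come from fitting titration data to the standard 1:1 host-guest binding isotherm. As a hedged illustration (the function below is a generic textbook model, not the software used in the thesis), the fraction of bound host at a given Kass and given total concentrations follows from the quadratic mass-balance equation:

```python
import math

def complexed_fraction(k_ass, h0, g0):
    """Fraction of host bound in a 1:1 host-guest complex.

    Solves Kass = [HG] / ([H][G]) together with the mass balances
    [H] = h0 - [HG] and [G] = g0 - [HG]; this is the standard
    isotherm behind NMR and UV titration analysis.
    Concentrations in M, k_ass in M^-1.
    """
    # [HG] is the physically meaningful (smaller) root of
    #   x**2 - (h0 + g0 + 1/Kass)*x + h0*g0 = 0
    b = h0 + g0 + 1.0 / k_ass
    hg = 0.5 * (b - math.sqrt(b * b - 4.0 * h0 * g0))
    return hg / h0

# With Kass = 2000 M^-1 (the order of magnitude reported for the
# tris-cationic receptor) and 1 mM host plus 1 mM guest, exactly
# half of the host is complexed:
f = complexed_fraction(2000.0, 1e-3, 1e-3)  # -> 0.5
```

In an actual titration analysis, this expression would be fitted against the observed chemical-shift or absorbance changes as a function of added guest to extract Kass.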
Nitric oxide production by tobacco plants and cell cultures under normal conditions and under stress
(2004)
Nitric oxide (NO) is a gaseous free radical involved in the regulation of diverse biochemical and physiological processes in animals. During the last decade, evidence has accumulated that NO might also play an important role as a second messenger in plants. Of special interest were observations that NO was involved in a signal chain leading to the hypersensitive response (HR) in incompatible plant-pathogen interactions. In contrast to animals, plants probably have several enzymes that may produce NO. Potential candidates are cytosolic nitrate reductase (NR; EC 1.6.6.1), plasma-membrane (PM) nitrite:NO reductase (Ni:NOR), nitric oxide synthase (NOS; EC 1.14.13.39) and xanthine dehydrogenase (XDH; EC 1.1.1.204). The major goal of this work was to quantify NO production by plants and to identify the enzymes responsible for it. As the principal method, NO production by tobacco leaves or cell suspensions was followed under normal, non-stress conditions and under biotic stress, through on-line measurement of NO emission into the gas phase (chemiluminescence). Plants used were tobacco wild-type (N. tabacum cv Xanthi or cv Gatersleben), NR-free mutants grown on ammonium in order to prevent NR induction, plants grown on tungstate to inhibit the synthesis of functional MoCo enzymes, and a NO-overproducing nitrite reductase (NiR)-deficient transformant. Induction of the HR in tobacco leaves and in cell suspensions was achieved using the fungal peptide elicitor cryptogein. Non-elicited leaves from nitrate-grown plants showed a typical NO-emission pattern: NO emission was low in the dark, higher in the light and very high under dark anaerobic conditions. Even at maximum rates, NO production in vivo was only a few percent of total NR activity (NRA). Consistent with this, with a solution of purified NR as a simple, “low-quenching” system, NO emission was also about 1 % of NRA.
Thus, NO scavenging by leaves and stirred cell suspensions appeared to be small, and NO emission into purified air should give a reliable estimate of NO production. NO emission was always high in a NiR-deficient transformant which accumulated nitrite, and it was completely absent in plants or cell suspensions which did not contain NR. Thus, in healthy plants or cell suspensions, NO emission was exclusively due to the reduction of nitrite to NO, mainly by cytosolic NR. In addition to nitrite, cytosolic NADH appears to be an important factor limiting NO production. Unexpectedly, plants (in the absence of NR) were able to reduce nitrite to NO under anaerobic conditions through an unknown enzyme system that was not a MoCo enzyme and was cyanide-sensitive. When infiltrated into leaves at nanomolar concentrations, the fungal elicitor cryptogein provoked cell death in tobacco leaves and cell suspensions. The HR could be prevented by the NO scavengers PTIO or c-PTIO, suggesting that NO production was indeed required for the HR. However, the product of the reaction of c-PTIO with NO, c-PTI, also prevented cell death without quenching NO emission. Thus, prevention of cell death by c-PTIO is no proof of an involvement of NO. No differences in HR induction were found between NR-free plants and/or cell suspensions and WT plants. Thus, NR appears not to be necessary for the HR. Further, and in contrast to suggestions in the literature, continuously high NO overproduction by a NiR-free mutant did not interfere with the development of the HR. Most surprisingly, no additional NO emission from tobacco leaves was induced by cryptogein at any phase of the HR. In contrast, some NO emission, paralleled by nitrite accumulation, was detected 3-6 h after cryptogein addition with nitrate-grown cell suspensions, but not with NR-free, ammonium-grown cells. Thus, induction of NO emission by cryptogein appeared somehow correlated with NR and nitrite, at least in cell suspensions.
However, since cryptogein induced the HR even in NR-free cell suspensions, this nitrite-related NO emission was not required for cell death. NOS inhibitors neither prevented cell death nor affected nitrite-dependent NO emission. Taken together, these data question the often-proposed role of NO as a signal in the HR, and of NOS as a source of NO.
The point of departure for the present work has been the following free boundary value problem for analytic functions $f$ which are defined on a domain $G \subset \mathbb{C}$ and map into the unit disk $\mathbb{D}= \{z \in \mathbb{C} : |z|<1 \}$. Problem 1: Let $z_1, \ldots, z_n$ be finitely many points in a bounded simply connected domain $G \subset \mathbb{C}$. Show that there exists a holomorphic function $f:G \to \mathbb{D}$ with critical points $z_j$ (counted with multiplicities) and no others such that $\lim_{z \to \xi} \frac{|f'(z)|}{1-|f(z)|^2}=1$ for all $\xi \in \partial G$. If $G=\mathbb{D}$, Problem 1 was solved by Kühnau [5] in the case of one critical point, and for more than one critical point by Fournier and Ruscheweyh [3]. The method employed by Kühnau, Fournier and Ruscheweyh easily extends to more general domains $G$, say bounded by a Dini-smooth Jordan curve, but does not work for arbitrary bounded simply connected domains. In this paper we present a new approach to Problem 1, which shows that this boundary value problem is not an isolated question in complex analysis, but is intimately connected to a number of basic open problems in conformal geometry and non-linear PDE. One of our results is a solution to Problem 1 for arbitrary simply connected domains. However, we shall see that our approach also has other ramifications, for instance for a well-known problem due to Rellich and Wittich in PDE. Roughly speaking, this paper is broken down into two parts. In a first step we construct a conformal metric in a bounded regular domain $G\subset \mathbb{C}$ with prescribed non-positive Gaussian curvature $k(z)$ and prescribed singularities by solving the first boundary value problem for the Gaussian curvature equation $\Delta u =-k(z) e^{2u}$ in $G$ with prescribed singularities and continuous boundary data.
This is related to the Berger-Nirenberg problem in Riemannian geometry, the question of which functions on a surface R can arise as the Gaussian curvature of a Riemannian metric on R. The special case where $k(z)=-4$ and the domain $G$ is bounded by finitely many analytic Jordan curves was treated by Heins [4]. In a second step we show that every conformal pseudo-metric on a simply connected domain $G\subseteq \mathbb{C}$ with constant negative Gaussian curvature and isolated zeros of integer order is the pullback of the hyperbolic metric on $\mathbb{D}$ under an analytic map $f:G \to \mathbb{D}$. This extends a theorem of Liouville, which deals with the case where the pseudo-metric has no zeros at all. These two steps together allow a complete solution of Problem 1. Contents: Chapter I contains the statement of the main results and connects them with some old and new problems in complex analysis, conformal geometry and PDE: the Uniformization Theorem for Riemann surfaces, the problem of Schwarz-Picard, the Berger-Nirenberg problem, Wittich's problem, etc. Chapters II and III are of preparatory character. In Chapter II we recall some basic results about ordinary differential equations in the complex plane. In our presentation we follow Laine [6], but we have reorganized the material and present a self-contained account of the basic features of Riccati, Schwarzian and second-order differential equations. In Chapter III we discuss the first boundary value problem for the Poisson equation. We need to consider this problem in the most general situation, which does not seem to be covered in a satisfactory way in the existing literature; see [1,2]. In Chapter IV we turn to a discussion of conformal pseudo-metrics in planar domains. We focus on conformal metrics with prescribed singularities and prescribed non-positive Gaussian curvature.
We shall establish the existence of such metrics, that is, we solve the corresponding Gaussian curvature equation by making use of the results of Chapter III. In Chapter V we show that every constantly curved pseudo-metric can be represented as the pullback of either the hyperbolic, the euclidean or the spherical metric under an analytic map. This is proved using the results of Chapter II. Finally, in Chapter VI we give some applications of our results. [1,2] Courant, R., Hilbert, D., Methoden der Mathematischen Physik, Erster/Zweiter Band, Springer-Verlag, Berlin, 1931/1937. [3] Fournier, R., Ruscheweyh, St., Free boundary value problems for analytic functions in the closed unit disk, Proc. Amer. Math. Soc. (1999), 127, no. 11, 3287-3294. [4] Heins, M., On a class of conformal metrics, Nagoya Math. J. (1962), 21, 1-60. [5] Kühnau, R., Längentreue Randverzerrung bei analytischer Abbildung in hyperbolischer und sphärischer Geometrie, Mitt. Math. Sem. Giessen (1997), 229, 45-53. [6] Laine, I., Nevanlinna Theory and Complex Differential Equations, de Gruyter, Berlin - New York, 1993.
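The pullback representation underlying the second step can be stated explicitly. The following is the classical form of Liouville's observation for the hyperbolic metric of curvature $-4$ on the unit disk, a standard fact recorded here for orientation:

```latex
% Hyperbolic metric of Gaussian curvature -4 on the unit disk:
\lambda_{\mathbb{D}}(w)\,|dw| \;=\; \frac{|dw|}{1-|w|^{2}} .
% Its pullback under an analytic map f : G \to \mathbb{D},
(f^{*}\lambda_{\mathbb{D}})(z) \;=\; \frac{|f'(z)|}{1-|f(z)|^{2}} ,
% is a conformal pseudo-metric of curvature -4 whose zeros are exactly
% the critical points of f. The boundary condition of Problem 1,
\lim_{z \to \xi} \frac{|f'(z)|}{1-|f(z)|^{2}} \;=\; 1
\qquad (\xi \in \partial G),
% thus prescribes that the pulled-back metric tends to 1 on the boundary.
```

This makes explicit why solving Problem 1 is equivalent to constructing a pseudo-metric of constant curvature $-4$ with prescribed zeros and boundary values.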
The experimental work of this thesis addresses the question of whether established cell lines injected into murine blastocysts find their way back home and seed preferentially at the site of their origin, and, furthermore, whether they can change their fate and differentiate into unrelated cell types when exposed to the embryonic environment. This survey was based on the fact that different cell lines have different potentials in developing embryos, depending on their cellular identity. The cell lines used in this survey were the AGM region-derived DAS 104-4 and DAS 104-8 cells, yolk sac-derived YSE cells and bone marrow-derived FDCP mix cells. These cells were injected into mouse blastocysts. Donor cells were traced in developing embryos via specific markers. Analysis of the embryos revealed that DAS cells are promiscuous in their seeding pattern, since they were found in all analysed tissues with similar frequencies. YSE cells showed preferences for seeding yolk sac and liver. YSE donor cells in chimaeric tissues were not able to change their immuno-phenotype, indicating that they did not change their destiny. Analysis of adult mice did not reveal any YSE-derived donor contribution. In contrast, FDCP mix cells mostly engrafted haematopoietic tissues, although the embryos analysed by in situ hybridization frequently had donor signals in cartilage primordia, heads, and livers. Analysis of whether FDCP mix-derived cells found in foetal livers were of haematopoietic or hepatocyte nature showed that the progeny of injected FDCP mix cells do not differentiate into cells that express a hepatocyte-specific marker. Further analysis showed that FDCP mix-derived donor cells found in brain express neural or haematopoietic markers. In order to reveal whether they transdifferentiate into neurons or fuse with neurons/glial cells, the nuclear diameters of donor and recipient cells were determined. Comparison of the nuclear diameters of recipient and donor cells revealed no differences.
This suggests that the progeny of FDCP mix cells in the brain are not fusion products. Analysis of adult mouse tissues revealed that the presence of FDCP mix-derived cells was highest in the brain. These results confirmed the assumption that the developmental potential of the analysed cells cannot be easily modified, even when exposed to the early embryonic environment. One can therefore conclude that the analysed cell types had different homing patterns depending on their origins.
Although the role of B-cells in autoimmunity is not completely understood, their importance in the pathogenesis of autoimmune diseases has become more appreciated in the past few years. It is now well known that they have roles in addition to (auto)antibody production and are involved by different mechanisms in the regulation of T-cell-mediated autoimmune disorders. The evolution of an autoimmune disease is a dynamic process, which takes a course of years during which complex immunoregulatory mechanisms shape the immune repertoire until the development of clinical disease. During this course, the B-cell repertoire itself is influenced, and a change in the distribution of immunoglobulin heavy and light chain genes can be observed. B-cell-depleting therapies have beneficial effects in patients suffering from rheumatoid arthritis (RA), highlighting the central role of B-cells in the pathogenesis of this disease. Nevertheless, the mechanism of action is unclear. It has been hypothesised that B-cell depletion is able to reset deviated humoral immunity. We therefore wanted to investigate whether transient B-cell depletion results in changes of the peripheral B-cell receptor repertoire. To address this issue, expressed immunoglobulin genes of two patients suffering from RA were analysed: one patient for the heavy chain repertoire (patient H), one patient for the light chain repertoire (patient L). Both patients were treated with rituximab, an anti-CD20 monoclonal antibody that selectively depletes peripheral CD20+ B-cells for several months. The B-cell repertoire was studied before therapy and at the earliest time point after B-cell regeneration in both patients. A longer follow-up (up to 27 months) was performed in patient H, who was treated a second time with rituximab after 17 months. Heavy chain gene analysis was carried out by nested PCR on bulk DNA from peripheral B-cells using family-specific primers, followed by subcloning and sequencing.
During the study, patient H received two courses of antibody treatment. B-cell depletion lasted 7 and 10 months, respectively, and was each time accompanied by a clinical improvement. Anti-CD20 therapy induced two types of changes in this patient. During the early phase of B-cell regeneration, we noticed the presence of an expanded and recirculating population of highly mutated B-cells. These cells expressed very different immunoglobulin VH genes compared with those before therapy. They were class-switched and could be detected for a short period only. The long-term changes were more subtle. Nevertheless, characteristic changes in the VH2 family, as well as in specific mini-genes like VH3-23, 4-34 or 1-69, were noticed. Some of these genes have already been reported to be biased in autoimmune diseases. In autoimmune diseases, in particular in RA, clonal B-cells have also frequently been found in the repertoire. B-cell depletion with anti-CD20 antibody resulted in a long-term loss of clonal B-cells in patient H. Thus, temporary B-cell depletion induced significant changes in the heavy chain repertoire. For the light chain gene analysis, the repertoire changes were analysed separately for naive (CD27-) and memory (CD27+) B-cells. Individual CD19+ B-cells were sorted into CD27- and CD27+ cells and single-cell RT-PCR was performed, followed by direct sequencing. During the study, patient L received one course of antibody treatment. B-cell depletion lasted 10 months, and the light chain repertoire was studied before and after therapy. Before therapy, some differences in the distribution of VL and JL genes were observed between naive and memory B-cells. In particular, the predominant usage of Jk-proximal Vk genes by the CD27- naive B-cells indicated that receptor editing was less frequent in this population compared with memory cells. In VlJl rearrangements too, some evidence for decreased receptor editing was noticed, with the overrepresentation of the Jl2/3 gene segments.
The CDR3 regions of naive and memory cells showed different characteristics: the activity of terminal deoxynucleotidyl transferase and of the exonuclease at the Vl (5') side was greater in memory cells. In the light chain repertoire too, we observed some changes induced by the B-cell-depleting therapy. There was a tendency toward less frequent usage of Jk-proximal Vk genes in the naive population. Some Vl genes previously described in autoimmune diseases and connected to rheumatoid factor activity, such as 3p, 3r and 1g, were not found after therapy. The different characteristics of the CDR3 regions of VlJl rearrangements were no longer observed. Very significantly, the ratio of Vk to Vl was shifted toward a greater usage of Vk genes in the naive population after therapy. Taken together, these results indicate that therapeutic transient B-cell depletion by anti-CD20 antibody therapy modulates the immunoglobulin gene repertoire in the two RA patients studied. Measurable changes were observed in the heavy chain as well as in the light chain repertoire, which may be relevant to the course of the disease. This also supports the notion that the composition of the B-cell repertoire is influenced by the disease and that B-cell depletion can reset biases that are typically found in autoimmune diseases.
In recent years more than one hundred microbial genomes have been sequenced, many of them from pathogenic bacteria. The availability of this huge amount of sequence data enormously increases our knowledge of genome structure and plasticity, as well as of microbial diversity and evolution. In parallel, these data are the basis for the scientific “revolution” in the fields of industrial and environmental biotechnology and of medical microbiology: diagnostics and therapy, and the development of new drugs and vaccines against infectious agents. Together with the genomic approach, other molecular biological methods such as PCR, DNA-chip technology, subtractive hybridization, transcriptomics and proteomics are of increasing importance for research on infectious diseases and public health. The aim of this work was to characterize the genome structure and content of the probiotic Escherichia coli strain Nissle 1917 (O6:K5:H31) and to compare these data with publicly available data on the genomes of different pathogenic and non-pathogenic E. coli strains and other closely related species. A cosmid genomic library of strain Nissle 1917 was screened for clones containing the genetic determinants contributing to the successful survival in and colonization of the human body, as well as those mediating this strain’s probiotic effect as part of the intestinal microflora. Four genomic islands (GEI I-IVNissle 1917) were identified and characterized. They contain many known fitness determinants (mch/mcm, foc, iuc, kps, ybt), as well as novel genes of unknown function, mobile genetic elements and newly identified putative fitness-contributing factors (Sat, Iha, a ShiA homologue, Ag43 homologues). All islands were found to be integrated next to tRNA genes (serX, pheV, argW and asnT, respectively). Their structure and chromosomal localization closely resemble those of analogous islands in the genome of uropathogenic E.
coli strain CFT073 (O6:K2(?):H1), but they lack important virulence genes of uropathogenic E. coli (hly, cnf, prf/pap). Evidence for the instability of GEI IINissle 1917 was obtained, since a deletion event in which IS2 elements play a role was detected. This event results in the loss of a 30 kb DNA region containing important fitness determinants (iuc, sat, iha), and therefore might influence the colonization capacity of strain Nissle 1917. In addition, a screening of the sequence context of tRNA-encoding genes in the genome of Nissle 1917 was performed to identify genome-wide potential integration sites of “foreign” DNA. As a result, similar “tRNA screening patterns” were observed for strain Nissle 1917 and for the uropathogenic E. coli O6 strains (UPEC) 536 and CFT073. The molecular reason for the semi-rough phenotype and serum sensitivity of strain Nissle 1917 was analyzed. The O6-antigen polymerase-encoding gene wzy was identified, and it was shown that the reason for the semi-rough phenotype is a frameshift mutation in wzy, due to the presence of a premature stop codon. It was shown that restoration of the O side-chain LPS polymerization by complementation with a functional wzy gene increased the serum resistance of strain Nissle 1917. The results of this study show that, despite the genome similarity of E. coli strain Nissle 1917 with the UPEC strain CFT073, strain Nissle 1917 exhibits a specific set of geno- and phenotypic features which contribute to its probiotic action. By comparison with the available data on the genomics of different species of Enterobacteriaceae, this study contributes to our understanding of important processes such as horizontal gene transfer, deletions and rearrangements, which contribute to genome diversity and plasticity and are driving forces for the evolution of bacterial variants.
Finally, the fim, bcs and rfaH determinants, whose expression contributes to the multicellular behaviour and biofilm formation of E. coli strain Nissle 1917, have been characterized.
The nature of the chemical bond is a topic under constant debate. What is known about individual molecular properties and functional groups is often taught and rationalized by means of Lewis structures, which, in turn, make extensive use of the valence concept. The valence concept distinguishes between electrons which do not participate in chemical interactions (core electrons) and those which do (single, double and triple bonds, lone-pair electrons, etc.). Additionally, individual electrons are assigned to atomic centers. The valence concept has been enormously successful: it allows the planning of chemical syntheses and analyses, it explains the behavior of individual functional groups, and, moreover, it provides the “language” in which to think and talk about molecular structure and chemical interactions. This resounding success can make it easy to forget the approximate character of the valence concept. Quantum mechanics, on the other hand, provides in principle a quantitative description of all chemical phenomena, but it makes no distinction between electrons. From the quantum mechanical point of view there are only indistinguishable electrons in the field of the nuclei, i.e., it is impossible to assign a given electron to a particular center or to ascribe a particular purpose to individual electrons. The concept of the indistinguishability of microparticles is founded on the Heisenberg uncertainty relation, which implies that wavepackets diverge in the 6N-dimensional phase space, so that individual trajectories cannot be identified. It is hence a deep-rooted and well-established physical concept. As an introduction to the present work, density partitioning schemes were discussed which divide the total molecular density into chemically meaningful areas.
These partitioning schemes are intimately related either to the concept of bound atoms in a molecule (as in the Atoms In Molecules (AIM) theory according to Bader or in the Hirshfeld partitioning scheme) or to the concept of chemical structure in the sense of Lewis structures, which divides the total molecular density into core and valence density, the valence density being split up again into bonding and non-bonding electron densities. Examples are early and recent loge theories, the topological analysis by means of the Electron Localization Function (ELF), and the Natural Bond Orbital (NBO) approach. Of these partitioning schemes, the theories according to Bader (AIM), to Becke and Edgecombe (ELF) and to Weinhold (NBO and Natural Resonance Theory, NRT) were critically reviewed in detail. Points of criticism were explicated for each of the mentioned theories. Since theoretically derived electron densities are to be compared with experimentally derived densities, a brief introduction to the theory of X-ray diffraction experiments was given and the multipole formalism was introduced. The procedure of density refinement was briefly discussed. Various suggestions for improvements were developed. One strategy is the employment of model parameters which are to a maximum degree mutually orthogonal, with the aim of minimizing correlations among the model parameters, e.g., by introducing nodal planes into the radial functions of the multipole model. A further suggestion involves guiding the iterative refinement procedure by an extremum principle, which states that when different solutions to the least-squares minimization problem are available with about the same statistical measures of quality and about the same residual density, then the solution which yields a minimum density at the bond critical point (BCP) and a maximum polarity, in terms of the ratio of distances between the BCP and the nuclei, is to be preferred.
This suggestion is based on the well-known fact that the bond polarity (in terms of the ratio of distances between the BCP and the respective nuclei) is underestimated in experiment. Another suggestion for including physical constraints is the explicit consideration of the virial theorem, e.g., by integrating the Laplacian over entire atomic basins and comparing this value both to zero and to the value obtained from integrating the electron density gradient field over the atomic surface. The next suggestion was to explicitly use Feynman's electrostatic theorem (often also called the Hellmann-Feynman theorem), which states that the forces on the nuclei can be calculated from the purely classical electrostatic forces between the electron distribution and the nuclei. For a stationary system, these forces must sum to zero, which provides an internal quality criterion for the density model. This check can be performed iteratively during the refinement procedure or as a test of the final result. The use of the electrostatic theorem is expected to significantly reduce correlations between static density parameters and parameters describing vibrations, since it is a valuable tool for discriminating between physically reasonable and artificial static electron densities. All of the suggestions mentioned so far can be applied as internal quality criteria. The last suggestion is based on the idea of initiating the experimental refinement with a set of model parameters that is as close as possible to the final solution. This can be achieved by performing periodic-boundary-conditions calculations, from which a theoretically created file is obtained containing the Miller indices (h, k, l) and the respective intensities I. This file is used for a model parameter estimation (refinement) that excludes vibrations.
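To fix notation for the electrostatic theorem invoked above (this is the standard textbook form, not an equation quoted from the thesis): the force on nucleus A with charge Z_A at position R_A, exerted by the electron density ρ and by the other nuclei, is

```latex
% Hellmann-Feynman (electrostatic) theorem: purely classical
% electrostatic force on nucleus A from the electron density rho
% and from the other nuclei; for a stationary system F_A = 0.
\mathbf{F}_{A} = Z_{A} \int \rho(\mathbf{r})\,
    \frac{\mathbf{r}-\mathbf{R}_{A}}{|\mathbf{r}-\mathbf{R}_{A}|^{3}}\,\mathrm{d}^{3}r
  \;+\; Z_{A} \sum_{B \neq A} Z_{B}\,
    \frac{\mathbf{R}_{A}-\mathbf{R}_{B}}{|\mathbf{R}_{A}-\mathbf{R}_{B}|^{3}}
```

The requirement that F_A vanish for every nucleus of a stationary system is the internal quality criterion referred to in the text.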
The resulting parameters can be used for the experimental refinement, where, in a first step, the density parameters are fixed in order to determine the parameters describing vibrations. For fine tuning, the electrostatic theorem and the other suggestions mentioned above could again be applied. Theoretical predictions should not be biased by the method of computation. Therefore, in the second part of the present work, the dependence of the density analyzing tools on the level of calculation (method of calculation/basis set) and on the substituents in complex chemical bonding situations was evaluated. A number of compounds containing formal single and double sulfur-nitrogen bonds was investigated; for these compounds, experimental data were also available. The calculated data were compared internally and with the experimental results. The internal comparison was drawn with regard to questions of convergence as well as consistency: the molecular properties resulting from NBO/NRT analyses were found to be very stable when the geometries were optimized at the respective level of theory. This stability holds for variations in the method of calculation as well as in the basis set. Only the individual resonance weights of the contributing Natural Lewis Structures differed considerably, depending on the level of calculation and on the substituents. However, the deviations remained in both cases largely within a limit that preserves the descending order of the leading resonance structure weights. The resulting bond orders, i.e., the total, covalent, and ionic bond orders from NRT calculations, were not affected by the shift in the resonance weights. The analysis of the bond topological parameters resulted in a discrimination between insensitive and sensitive parameters. The stable parameters depend strongly neither on the method of calculation nor on the basis set.
Only minor variation occurs in the numerical values of these parameters when the level of calculation is changed, or even when other functional groups (H, Me, or tBu) are employed, as long as the methods of calculation do not drop considerably below a standard level. The bond descriptors of the sulfur-nitrogen bonds were found to be stable also with respect to the functional groups R = H, R = Me, and R = tBu. Stable parameters are the bond distance, the density at the bond critical point (BCP), and the ratio of distances between the BCP and the nuclei A and B, which varies clearly with the formal bond type. For very small basis sets like 3-21G, this characteristic stability collapses. The sensitive parameters are those based on the second derivatives of the density with respect to the coordinates. This is in accordance with the well-known fact that the total second derivative of the density with respect to the coordinates is a strongly oscillating function with positive as well as negative values; a profound deviation has to be anticipated as a consequence of strong oscillations. The eigenvalue λ3, which describes the local charge depletion in the direction of the interaction line, is the most strongly varying parameter. A detailed analysis revealed that the position of the BCP on the steep edge of the Laplacian distribution is responsible for the sensitivity of the numerical value of λ3 in formal double bonds. Since the slope of the Laplacian assumes very high values on this steep edge, even a tiny displacement of the BCP leads to a considerable change in λ3. This instability is not a failure of the underlying theory, but it does lead de facto to a considerable dependence of the sensitive bond topological properties on the method of calculation and on the applied basis sets.
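For reference, in standard AIM notation (a convention the text relies on, not a result of the thesis), λ1, λ2, and λ3 are the eigenvalues of the Hessian of the density at the bond critical point:

```latex
% Laplacian at the BCP as the trace of the density Hessian: the balance
% of the negative curvatures (lambda_1, lambda_2, perpendicular to the
% bond path) against the positive curvature lambda_3 (along the bond
% path) determines its sign.
\nabla^{2}\rho(\mathbf{r}_{\mathrm{BCP}})
  = \lambda_{1} + \lambda_{2} + \lambda_{3},
\qquad \lambda_{1} \le \lambda_{2} < 0 < \lambda_{3}
```

This is why a small displacement of the BCP translates directly into a shift of λ3 and hence of the Laplacian value at the BCP.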
Since the total second derivative is important for judging the nature of a bond in AIM theory (closed-shell versus shared interactions), the changes in λ3 can lead to differing chemical interpretations. The comparison of theoretically derived bond topological properties of various sulfur-nitrogen bonds provides the possibility to measure the self-consistency of this data set. All data sets clearly exhibit a linear correlation between the bond distances and the density at the BCP on the one hand, and between the bond distances and the Laplacian values at the BCP on the other. These correlations were almost independent of the basis set size. In this context, the linear regression has to be regarded exclusively as a descriptive statistical tool; no correlation is anticipated a priori. The formal bond type was found to be readily deducible from the theoretically obtained bond topological descriptors of the model systems. In this sense, the bond topological properties are self-consistent despite the numerical sensitivity of the derivatives, as exemplified above. Often, calculations are performed with the experimentally derived equilibrium geometries rather than with optimized ones, which saves the computationally costly geometry optimizations. Following this approach, the bond topological properties were calculated using very flexible basis sets and the fixed experimental geometry (which, of course, includes the tBu groups). Regression coefficients similar to those from optimized geometries were obtained for the correlations between bond distances and densities at the BCP as well as between bond distances and Laplacian values at the BCP, i.e., the approach is valid. However, the data points scattered less and the correlation coefficient was clearly increased when geometry optimizations were performed beforehand.
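As a purely illustrative sketch of the descriptive regression used above (the data points below are hypothetical placeholders, not values from the thesis), the slope and correlation coefficient of ρ(BCP) against bond distance can be computed as:

```python
# Illustrative only: descriptive least-squares line through rho(BCP)
# vs. bond distance, in the spirit of the text's purely descriptive
# use of linear regression. Data are HYPOTHETICAL placeholders.

def linear_fit(xs, ys):
    """Ordinary least squares y = a*x + b; returns (a, b, r)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx, sxy / (sxx * syy) ** 0.5

# hypothetical S-N distances (Angstrom) and rho at the BCP (e/Angstrom^3)
d   = [1.55, 1.60, 1.65, 1.70, 1.75]
rho = [2.10, 1.95, 1.80, 1.66, 1.50]

slope, intercept, r = linear_fit(d, rho)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}, r = {r:.3f}")
```

A strongly negative r close to -1, as in this toy fit, is what "the data points scattered less" corresponds to numerically.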
The comparison between data obtained from theory and experiment revealed fundamental discrepancies: in the experimental data set of bond topological parameters, only 2 out of 3 insensitive parameters behaved comparably to the theoretically obtained values, i.e., theoretical and experimental bond distances as well as theoretical and experimental densities at the BCP correlate. From the theoretically obtained data it was easy to deduce the formal bond type from the position of the BCP, since it changed in a systematic manner; the respective experimentally obtained values were almost constant and did not change systematically. For the compounds containing SN bonds, the total second derivative assumes exclusively negative values in the experiment. Owing to this different internal behavior, the sensitive experimental and theoretical bond topological values could not be compared directly. The qualitative agreement in the Laplacian distribution, however, was excellent. In the third and last part of this work, the application to chemical systems follows. Formally hypervalent molecules, i.e., molecules in which some atoms are considered to hold more than 8 electrons in their valence shell, were investigated. These were compounds containing sulfur-nitrogen bonds (H(NtBu)2SMe, H2C{S(NtBu)2(NHtBu)}2, S(NtBu)2, and S(NtBu)3) and a highly coordinated silicon compound. The set of sulfur-nitrogen compounds also contained a textbook example of valence expansion, the sulfur triimide. For these molecules, experimental reference values were available from high-resolution X-ray experiments. In the case of the sulfur triimide, the experimental results were not unique; furthermore, no definite conclusion about the formal bonding type could be drawn from the experimental bond topological data. The bonding situation of the sulfur-nitrogen bonds in the above-mentioned set of molecules was analyzed in terms of a geometry discussion and by means of a topological analysis.
The methyl-substituted isolated molecules served as model compounds. For the interpretation of the bonding situation, additional NBO/NRT calculations were performed for the sulfur-nitrogen compounds, and an ELF calculation and analysis was performed for the silicon compound. The ELF analysis included not only the presentation and discussion of ELF isosurfaces (η = 0.85), but also the investigation of the populations of disynaptic valence basins and the percentage contributions of the individual atoms to these populations when the disynaptic valence basins are split into atomic contributions according to Bader's partitioning scheme. The question of chemical interest was whether hypervalency is present in this set of molecules or not: in the first case the octet rule would be violated, in the second case Pauling's verdict would be. While the concept of hypervalency is well established in chemistry, the violation of Pauling's verdict is not. The quantitative values of the sensitive bond topological parameters from theory and experiment were not comparable, since no systematic relationship between the experimentally and theoretically determined sensitive bond descriptors was found. However, the insensitive parameters are in good agreement, and the qualitative Laplacian distribution is, with few exceptions, in excellent agreement. The formal bonding type was deduced from experimental and theoretical topological data by considering the number and shape of the valence shell charge concentrations in the proximity of the sulfur and nitrogen centers. The results from the NBO/NRT calculations confirmed these findings. All employed density analyzing tools (AIM, ELF, and NBO/NRT) coincided in describing the bonding situation in the formally hypervalent molecules as highly polar. A comparison and analysis of experimentally and theoretically derived electron densities led consistently to the result that, for this set of molecules, hypervalency has to be excluded unequivocally.
A theory of managed floating
(2003)
After the experience with the currency crises of the 1990s, a broad consensus has emerged among economists that such shocks can only be avoided if countries that decide to maintain unrestricted capital mobility adopt either independently floating exchange rates or very hard pegs (currency boards, dollarisation). As a consequence of this view, which has been enshrined in the so-called impossible trinity, all intermediate currency regimes are regarded as inherently unstable. As far as economic theory is concerned, this view has the attractive feature that it not only fits the logic of traditional open economy macro models, but also that solid theoretical frameworks have been developed for both corner solutions (independently floating exchange rates with a domestically oriented interest rate policy; hard pegs with a completely exchange rate oriented monetary policy). Above all, the IMF statistics seem to confirm that intermediate regimes are indeed less and less favoured by both industrial countries and emerging market economies. However, in the last few years an anomaly has been detected which seriously challenges this paradigm on exchange rate regimes. In their influential cross-country study, Calvo and Reinhart (2000) have shown that many of those countries which had declared themselves ‘independent floaters’ in the IMF statistics were characterised by a pronounced ‘fear of floating’ and were actually reacting heavily to exchange rate movements, either in the form of an interest rate response or by intervening in foreign exchange markets. The present analysis can be understood as an approach to developing a theoretical framework for this managed floating behaviour, which – even though it is widely used in practice – has not attracted much attention in monetary economics.
In particular, we would like to fill the gap that has recently been criticised by one of the few ‘middle-ground’ economists, John Williamson, who argued that “managed floating is not a regime with well-defined rules” (Williamson, 2000, p. 47). Our approach is based on a standard open economy macro model of the kind typically employed for the analysis of monetary policy strategies. The consequences of independently floating, market-determined exchange rates are evaluated in terms of a social welfare function or, more precisely, in terms of an intertemporal loss function containing a central bank’s final targets, output and inflation. We explicitly model the source of the observable fear of floating by questioning the basic assumption underlying most open economy macro models, namely that the foreign exchange market is an efficient asset market with rational agents. We will show that both policy reactions to the fear of floating (an interest rate response to exchange rate movements, which we call indirect managed floating, and sterilised interventions in the foreign exchange markets, which we call direct managed floating) can be rationalised if we allow for deviations from the assumption of perfectly functioning foreign exchange markets and if we assume a central bank that takes these deviations into account and behaves so as to reach its final targets. In such a scenario, with a high degree of uncertainty about the true model determining the exchange rate, the rationale for indirect managed floating is the monetary policy maker’s quest for a robust interest rate policy rule that performs comparatively well across a range of alternative exchange rate models. We will show, however, that the strategy of indirect managed floating still bears the risk that the central bank’s final targets might be negatively affected by the unpredictability of the true exchange rate behaviour. This is where the second policy measure comes into play.
The use of sterilised foreign exchange market interventions to counter movements of market-determined exchange rates can be rationalised by a central bank’s effort to lower the risk of missing its final targets when it has only a single instrument at its disposal. We provide a theoretical, model-based foundation of a strategy of direct managed floating in which the central bank targets, in addition to a short-term interest rate, the nominal exchange rate. In particular, we develop a rule for the instrument of intervening in the foreign exchange market that is based on the failure of the foreign exchange market to guarantee a reliable relationship between the exchange rate and other fundamental variables.
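The intertemporal loss function is referred to only verbally above; its conventional textbook form (the thesis's exact specification may differ) is

```latex
% Quadratic intertemporal loss over the central bank's final targets:
% pi = inflation, pi* = inflation target, y = output gap,
% beta = discount factor, lambda = relative weight on output.
L_{t} = E_{t} \sum_{i=0}^{\infty} \beta^{i}
        \left[ \left( \pi_{t+i} - \pi^{*} \right)^{2}
             + \lambda \, y_{t+i}^{2} \right]
```

Both indirect and direct managed floating can then be read as attempts to reduce this loss when the model determining the exchange rate is uncertain.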
Summary: In the present work, two important negative regulators of T cell responses in rats were examined. At the molecular level, the expression pattern and in vitro functions of rat CTLA-4, a receptor important for deactivating T cell responses, were examined. For this purpose, anti-rat CTLA-4 mAbs were generated. Consistent with studies in mice and humans, rat CTLA-4 was detectable only in CD25+CD4+ regulatory T cells in unstimulated rats and was upregulated in all activated T cells. Cross-linking rat CTLA-4 led to the deactivation of anti-TCR- and anti-CD28-stimulated (costimulated) T cell responses, such as reduced activation marker expression, proliferation, and IL-2 production. Although T cells stimulated with the superagonistic anti-CD28 antibody alone, without TCR engagement, also increased their CTLA-4 expression, a delayed kinetics of CTLA-4 upregulation was found in cells stimulated in this way. The physiological relevance of this finding needs further investigation. At the cellular level, rat CD25+CD4+ regulatory T cells were examined in detail. Using the anti-rat CTLA-4 mAbs, the phenotype of CD25+CD4+ regulatory T cells was investigated. Identical to the mouse and human Treg phenotype, rat CD25+CD4+ T cells constitutively expressed CTLA-4, were predominantly CD45RC-low, and expressed high levels of CD62L (L-selectin). CD25+CD4+ cells proliferated poorly and were unable to produce IL-2 upon engagement of the TCR and CD28. Furthermore, rat CD25+CD4+ cells produced high amounts of the anti-inflammatory cytokine IL-10 upon stimulation. Importantly, freshly isolated CD25+CD4+ T cells from naïve rats exhibited suppressor activity in in vitro suppressor assays. In vitro, CD25+CD4+ regulatory T cells proliferated vigorously upon superagonistic anti-CD28 stimulation and became very potent suppressor cells.
In vivo, a single injection of the CD28 superagonist into rats induced transient accumulation and activation of CD25+CD4+ regulatory T cells. These findings suggest, firstly, that efficient expansion of CD25+CD4+ cells without loss of their suppressive effects (indeed, with enhancement of their suppressive activity) can be achieved with the superagonistic anti-CD28 antibody in vitro. Secondly, the disproportionate expansion of CD25+CD4+ cells induced by a single injection of superagonistic anti-CD28 antibody in vivo implies that this antibody may be a promising candidate for treating autoimmune diseases, by causing a transient increase of activated CD25+CD4+ T cells and thus tipping ongoing autoimmune responses toward self-tolerance.
This study investigates the credit channel in the transmission of monetary policy in Germany by means of a structural analysis of aggregate bank loan data. We base our analysis on a stylized model of the banking firm, which specifies the loan supply decisions of banks in the light of expectations about the future course of monetary policy. Using the model as a guide, we apply a vector error correction model (VECM), in which we identify long-run cointegration relationships that can be interpreted as loan supply and loan demand equations. In this way, the identification problem inherent in reduced form approaches based on aggregate data is explicitly addressed. The short-run dynamics is explored by means of innovation analysis, which displays the reaction of the variables in the system to a monetary policy shock. The main implication of our results is that the credit channel in Germany appears to be effective, as we find that loan supply effects in addition to loan demand effects contribute to the propagation of monetary policy measures.
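A VECM of the kind applied here has the generic form (standard notation; the variable set and lag order used in the study are not reproduced here):

```latex
% Vector error correction model: beta' y_{t-1} collects the long-run
% cointegration relations (interpreted in the text as loan supply and
% loan demand equations), alpha the adjustment coefficients, and the
% Gamma_i capture the short-run dynamics.
\Delta y_{t} = \alpha \beta' y_{t-1}
             + \sum_{i=1}^{k-1} \Gamma_{i}\, \Delta y_{t-i}
             + \varepsilon_{t}
```

The identification step described above amounts to imposing restrictions on β so that its columns can be read as supply and demand equations.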
A CD8+ cell-mediated host defense relies on cognate killing of infected target cells and on local inflammation induced by the secretion of IFN-γ. Using assays of single-cell resolution, it was studied to what extent these two effector functions of CD8+ cells are linked. Granzyme B (GzB) is stored in the cytolytic granules of CD8+ cells and its secretion is induced by antigen recognition by these cells. Following entry into the cytosol, GzB induces apoptosis in the target cells. It was measured whether GzB release by individual CD8+ cells is accompanied by the secretion of IFN-γ and of other cytokines. HIV peptide libraries were tested on bulk peripheral blood mononuclear cells and on purified CD4+ and CD8+ cells obtained from HIV-infected individuals. The library included a panel of previously defined HLA class I-restricted HIV peptides and an overlapping 20-mer peptide series that covered the entire gp120 molecule. To characterize the in vivo differentiation state of the T cells, freshly isolated lymphocytes were tested in assays of 24 h duration. The data showed that only ~20% of the peptides triggered the release of both GzB and IFN-γ from CD8+ cells. The majority of the HIV peptides induced either GzB or IFN-γ, ~40% in each category. The GzB-positive, IFN-γ-negative CD8+ cells did not produce IL-4 or IL-5, which suggests that they do not correspond to Tc2 cells but represent a novel Tc1 subclass, which was termed Tc1c. Also the IFN-γ-positive, GzB-negative CD8+ cell subpopulation represents a yet undefined CD8+ effector cell lineage, which was termed Tc1b. Tc1b and Tc1c cells are likely to make different, possibly antagonistic, contributions to the control of HIV infection. Since IFN-γ activates HIV replication in latently infected macrophages, the secretion of this cytokine by Tc1b cells in the absence of killing may have adverse effects on the host defense.
In contrast, cytolysis by Tc1c cells in the absence of IFN-γ production might represent the protective class of response. Further studies in the field of Tc1 effector cell diversity should lead to valuable insights for the management of infections and for developing rationales for vaccine design.
The present investigation reports a protocol to obtain dendritic cells (DC) that protect mice against fatal leishmaniasis. DC were generated from bone marrow precursors, pulsed with leishmanial antigen, and activated with CpG oligodeoxynucleotides. Mice that were vaccinated with these cells were strongly protected against the clinical and parasitological manifestations of leishmaniasis and developed a Th1 immune response. Protection was solid and long-lasting, and was also dependent on the route of administration. When the mechanism of protection was studied, it was observed that the availability of the cytokine interleukin-12 at the time of vaccination was a key requirement, but that the source of this cytokine was not the donor cells but unidentified cells of the recipients.