Summary: The nature of the chemical bond is a topic under constant debate. What is known about individual molecular properties and functional groups is often taught and rationalized by means of Lewis structures, which, in turn, make extensive use of the valence concept. The valence concept distinguishes between electrons that do not participate in chemical interactions (core electrons) and those that do (single, double, and triple bonds, lone-pair electrons, etc.). Additionally, individual electrons are assigned to atomic centers. The valence concept has been tremendously successful: it allows the planning of chemical syntheses and analyses, it explains the behavior of individual functional groups, and, moreover, it provides the “language” in which to think and talk about molecular structure and chemical interactions. This resounding success can make it easy to forget the approximate character of the concept. Quantum mechanics, on the other hand, provides in principle a quantitative description of all chemical phenomena, but it makes no discrimination between electrons. From the quantum mechanical point of view there are only indistinguishable electrons in the field of the nuclei, i.e., it is impossible to assign a given electron to a particular center or to ascribe a particular purpose to individual electrons. The indistinguishability of microparticles is founded on the Heisenberg uncertainty relation, which states that wave packets diverge in the 6N-dimensional phase space, so that individual trajectories cannot be identified. It is thus a deep-rooted and well-established physical concept. As an introduction to the present work, density partitioning schemes were discussed which divide the total molecular density into chemically meaningful regions.
These partitioning schemes are intimately related either to the concept of bound atoms in a molecule (as in the Atoms in Molecules (AIM) theory according to Bader or in the Hirshfeld partitioning scheme) or to the concept of chemical structure in the sense of Lewis structures, which divides the total molecular density into core and valence density, the valence density being split up again into bonding and non-bonding electron densities. Examples are early and recent loge theories, the topological analysis by means of the Electron Localization Function (ELF), and the Natural Bond Orbital (NBO) approach. Of these partitioning schemes, the theories according to Bader (AIM), to Becke and Edgecombe (ELF), and to Weinhold (NBO and Natural Resonance Theory, NRT) were critically reviewed in detail, and points of criticism were explicated for each of them. Since theoretically derived electron densities are to be compared to experimentally derived ones, a brief introduction to the theory of X-ray diffraction experiments was given and the multipole formalism was introduced. The procedure of density refinement was briefly discussed. Various suggestions for improvements were developed: one strategy is to employ model parameters that are to a maximum degree mutually orthogonal, with the aim of minimizing correlations among them, e.g., by introducing nodal planes into the radial functions of the multipole model. A further suggestion involves guiding the iterative refinement procedure by an extremum principle, which states that when different solutions to the least-squares minimization problem are available with about the same statistical measures of quality and about the same residual density, the solution to prefer is the one that yields a minimum density at the bond critical point (BCP) and a maximum polarity in terms of the ratio of distances between the BCP and the nuclei.
This suggestion is based on the well-known fact that the bond polarity (in terms of the ratio of distances between the BCP and the respective nuclei) is underestimated in the experiment. Another suggestion for including physical constraints is the explicit consideration of the virial theorem, e.g., by integrating the Laplacian over the entire atomic basins and comparing this value both to zero and to the value obtained from integrating the electron gradient field over the atomic surface. The next suggestion was to explicitly use the electrostatic theorem of Feynman (often also denoted the Hellmann-Feynman theorem), which states that the forces on the nuclei can be calculated from the purely classical electrostatic forces of the electron and nuclear distributions. For a stationary system, these forces must add to zero, which provides an internal quality criterion of the density model. This check can be performed iteratively during the refinement procedure or as a test of the final result. The use of the electrostatic theorem is expected to significantly reduce correlations between static density parameters and parameters describing vibrations, since it is a valuable tool to discriminate between physically reasonable and artificial static electron densities. All of these suggestions can be applied as internal quality criteria. The last suggestion is based on the idea of initiating the experimental refinement with a set of model parameters that is as close as possible to the final solution. This can be achieved by performing periodic-boundary-conditions calculations, from which theoretical files containing the Miller indices (h, k, l) and the respective intensities I are obtained. Such a file is used for a model parameter estimation (refinement) that excludes vibrations.
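The force-balance criterion of the electrostatic theorem can be illustrated with a minimal point-charge sketch (all charges and positions below are hypothetical toy values; a real check would integrate the refined continuous electron density):

```python
import numpy as np

def electrostatic_forces(charges, positions):
    """Classical Coulomb force on each point charge (atomic units).

    Per the Hellmann-Feynman electrostatic theorem, the force on a
    nucleus in a stationary molecule is the purely classical field of
    the electron distribution and the other nuclei; at equilibrium the
    forces on the nuclear frame must add to zero."""
    positions = np.asarray(positions, dtype=float)
    forces = np.zeros_like(positions)
    for i in range(len(charges)):
        for j in range(len(charges)):
            if i == j:
                continue
            r = positions[i] - positions[j]
            forces[i] += charges[i] * charges[j] * r / np.linalg.norm(r) ** 3
    return forces

# Toy H2-like model: two protons, with the electron pair collapsed to a
# single -2 point charge at the bond midpoint (hypothetical numbers).
charges = [+1.0, +1.0, -2.0]
positions = [[-0.7, 0.0], [+0.7, 0.0], [0.0, 0.0]]
F = electrostatic_forces(charges, positions)

# By symmetry the forces on the two nuclei are equal and opposite, so
# their sum vanishes -- the internal quality criterion suggested above.
print(F[0] + F[1])
```

A refined density model would pass the analogous test with the continuous density in place of the -2 point charge; a significant nonzero residual force would flag an artificial static density.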
The resulting parameters can be used for the experimental refinement, where, in a first step, the density parameters are fixed in order to determine the parameters describing vibrations. For fine tuning, the electrostatic theorem and the other suggestions mentioned above can again be applied. Theoretical predictions should not be biased by the method of computation. Therefore, the dependence of the density-analyzing tools on the level of calculation (method of calculation/basis set) and on the substituents in complex chemical bonding situations was evaluated in the second part of the present work. A number of compounds containing formal single and double sulfur-nitrogen bonds was investigated, for which experimental data were also available. The calculated data were compared internally and with the experimental results. The internal comparison was drawn with regard to questions of convergence as well as consistency: the molecular properties resulting from NBO/NRT analyses were found to be very stable when the geometries were optimized at the respective level of theory. This stability holds for variations in the method of calculation as well as in the basis set. Only the individual resonance weights of the contributing Natural Lewis Structures differed considerably, depending on the level of calculation and on the substituents. However, in both cases the deviations were to a large extent within a limit that preserves the descending order of the leading resonance structure weights. The resulting bond orders, i.e., the total, covalent, and ionic bond orders from NRT calculations, were not affected by the shift in the resonance weights. The analysis of the bond topological parameters resulted in a discrimination between insensitive and sensitive parameters. The stable parameters depend strongly neither on the method of calculation nor on the basis set.
Only minor variation occurs in the numerical values of these parameters when the level of calculation is changed, or even when other functional groups (H, Me, or tBu) are employed, as long as the method of calculation does not drop considerably below a standard level. The bond descriptors of the sulfur-nitrogen bonds were also found to be stable with respect to the functional groups R = H, R = Me, and R = tBu. Stable parameters are the bond distance, the density at the bond critical point (BCP), and the ratio of distances between the BCP and the nuclei A and B, which varies clearly with the formal bond type. For very small basis sets like 3-21G, this characteristic stability collapses. The sensitive parameters are based on the second derivatives of the density with respect to the coordinates. This is in accordance with the well-known fact that the total second derivative of the density with respect to the coordinates is a strongly oscillating function with positive as well as negative values; a profound deviation has to be anticipated as a consequence of such strong oscillations. λ3, which describes the local charge depletion in the direction of the interaction line, is the most varying parameter. A detailed analysis revealed that the position of the BCP on the steep flank of the Laplacian distribution is responsible for the sensitivity of the numerical value of λ3 in formal double bonds. Since the slope of the Laplacian assumes very high values on this flank, even a tiny displacement of the BCP leads to a considerable change in λ3. This instability is not a failure of the underlying theory, but it leads de facto to a considerable dependence of the sensitive bond topological properties on the method of calculation and on the applied basis sets.
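This sensitivity can be illustrated numerically with a one-dimensional Gaussian model density (the exponent and evaluation points are hypothetical and unrelated to the SN systems discussed): near the steep flank of the curvature profile, a tiny displacement of the evaluation point changes the second derivative drastically.

```python
import numpy as np

# 1D Gaussian model density along the bond axis (hypothetical exponent,
# purely illustrative -- not one of the SN systems of this work).
alpha = 2.0
def rho(x):
    return np.exp(-alpha * x**2)

def second_derivative(f, x, h=1e-4):
    """Central-difference second derivative, a 1D stand-in for lambda3."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

# Curvature at a point on the steep flank of the profile, and at a point
# displaced by only 0.02 units -- mimicking a tiny shift of the BCP.
x0 = 0.52
l3_a = second_derivative(rho, x0)
l3_b = second_derivative(rho, x0 + 0.02)
rel = abs(l3_b - l3_a) / abs(l3_a)
print(rel)  # the tiny displacement nearly doubles the curvature value
```

The same small displacement leaves the density value itself almost unchanged, which is why ρ(BCP) is an insensitive parameter while λ3 is not.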
Since the total second derivative is important for judging the nature of the bond in the AIM theory (closed-shell versus shared interactions), the changes in λ3 can lead to differing chemical interpretations. The comparison of theoretically derived bond topological properties of various sulfur-nitrogen bonds provides the possibility to measure the self-consistency of this data set. All data sets clearly exhibit a linear correlation between the bond distances and the density at the BCP on one hand, and between the bond distances and the Laplacian values at the BCP on the other. These correlations were almost independent of the basis set size. In this context, the linear regression has to be regarded exclusively as a descriptive statistical tool; no correlation is anticipated a priori. The formal bond type was found to be readily deducible from the theoretically obtained bond topological descriptors of the model systems. In this sense, the bond topological properties are self-consistent despite the numerical sensitivity of the derivatives, as exemplified above. Often, calculations are performed with the experimentally derived equilibrium geometries rather than with optimized ones, which saves the computationally costly geometry optimizations. Following this approach, the bond topological properties were calculated using very flexible basis sets and the fixed experimental geometry (which, of course, includes the tBu groups). Regression coefficients similar to those from optimized geometries were obtained for the correlations between bond distances and the densities at the BCP as well as between bond distances and the Laplacian at the BCP, i.e., the approach is valid. However, the data points scattered less and the correlation coefficient was clearly increased when geometry optimizations were performed beforehand.
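The descriptive linear regression referred to above can be sketched as follows (the numbers are invented for illustration and are not the SN data of this work):

```python
import numpy as np

# Hypothetical (distance, rho(BCP)) pairs -- illustrative values only.
d   = np.array([1.55, 1.60, 1.65, 1.70, 1.75])   # bond distance / Angstrom
rho = np.array([2.10, 1.95, 1.83, 1.70, 1.58])   # e / Angstrom^3

# Least-squares line rho = a*d + b, plus the correlation coefficient,
# used purely as a descriptive measure of self-consistency.
a, b = np.polyfit(d, rho, 1)
r = np.corrcoef(d, rho)[0, 1]
print(a, b, r)  # negative slope: density at the BCP falls with distance
```

A correlation coefficient near -1, as here, indicates a self-consistent data set; no such correlation is assumed a priori, which is why the fit carries only descriptive, not physical, weight.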
The comparison between data obtained from theory and experiment revealed fundamental discrepancies: in the experimental set of bond topological parameters, only 2 out of 3 insensitive parameters behaved comparably to the theoretically obtained values, i.e., theoretical and experimental bond distances as well as theoretical and experimental densities at the BCP correlate. From the theoretically obtained data it was easy to deduce the formal bond type from the position of the BCP, since it changed in a systematic manner; the respective experimentally obtained values were almost constant and did not change systematically. For the compounds containing SN bonds, the total second derivative assumes exclusively negative values in the experiment. Due to this different internal behavior, experimentally and theoretically sensitive bond topological values could not be compared directly. The qualitative agreement in the Laplacian distribution, however, was excellent. In the third and last part of this work, the application to chemical systems follows. Formally hypervalent molecules, i.e., molecules in which some atoms are considered to hold more than 8 electrons in their valence shell, were investigated. These were compounds containing sulfur-nitrogen bonds (H(NtBu)2SMe, H2C{S(NtBu)2(NHtBu)}2, S(NtBu)2, and S(NtBu)3) and a highly coordinated silicon compound. The set of sulfur-nitrogen compounds also contained a textbook example of valence expansion, the sulfur triimide. For these molecules, experimental reference values were available from high-resolution X-ray experiments. In the case of the sulfur triimide, the experimental results were not unique; furthermore, no definite conclusion about the formal bonding type could be drawn from the experimental bond topological data. The situation of the sulfur-nitrogen bonds in the above-mentioned set of molecules was analyzed in terms of a geometry discussion and by means of a topological analysis.
The methyl-substituted isolated molecules served as model compounds. For the interpretation of the bonding situation, additional NBO/NRT calculations were performed for the sulfur-nitrogen compounds, and an ELF calculation and analysis was performed for the silicon compound. The ELF analysis included not only the presentation and discussion of the ELF isosurfaces (η = 0.85), but also the investigation of the populations of the disynaptic valence basins and the percentage contributions of the individual atoms to these populations when the disynaptic valence basins are split into atomic contributions according to Bader’s partitioning scheme. The question of chemical interest was whether hypervalency is present in this set of molecules or not: in the first case the octet rule would be violated, in the second case Pauling’s verdict would be violated. While the concept of hypervalency is well established in chemistry, the violation of Pauling’s verdict is not. The quantitative values of the sensitive bond topological parameters from theory and experiment were not comparable, since no systematic relationship between the experimentally and theoretically determined sensitive bond descriptors was found. However, the insensitive parameters are in good agreement, and the qualitative Laplacian distribution is, with few exceptions, in excellent agreement. The formal bonding type was deduced from the experimental and theoretical topological data by considering the number and shape of the valence shell charge concentrations in proximity to the sulfur and nitrogen centers. The results from the NBO/NRT calculations confirmed these findings. All employed density-analyzing tools (AIM, ELF, and NBO/NRT) coincided in describing the bonding situation in the formally hypervalent molecules as highly polar. A comparison and analysis of experimentally and theoretically derived electron densities consistently led to the result that, for this set of molecules, hypervalency has to be excluded unequivocally.
A theory of managed floating
(2003)
After the experience with the currency crises of the 1990s, a broad consensus has emerged among economists that such shocks can only be avoided if countries that decide to maintain unrestricted capital mobility adopt either independently floating exchange rates or very hard pegs (currency boards, dollarisation). As a consequence of this view, which has been enshrined in the so-called impossible trinity, all intermediate currency regimes are regarded as inherently unstable. As far as economic theory is concerned, this view has the attractive feature that it not only fits the logic of traditional open economy macro models, but also that solid theoretical frameworks have been developed for both corner solutions (independently floating exchange rates with a domestically oriented interest rate policy; hard pegs with a completely exchange rate oriented monetary policy). Above all, the IMF statistics seem to confirm that intermediate regimes are indeed used less and less by both industrial countries and emerging market economies. However, in the last few years an anomaly has been detected which seriously challenges this paradigm on exchange rate regimes. In their influential cross-country study, Calvo and Reinhart (2000) have shown that many of the countries which had declared themselves ‘independent floaters’ in the IMF statistics were characterised by a pronounced ‘fear of floating’ and were actually reacting heavily to exchange rate movements, either in the form of an interest rate response or by intervening in foreign exchange markets. The present analysis can be understood as an approach to develop a theoretical framework for this managed floating behaviour, which – even though it is widely used in practice – has not attracted very much attention in monetary economics.
In particular, we would like to fill the gap that has recently been criticised by one of the few ‘middle-ground’ economists, John Williamson, who argued that “managed floating is not a regime with well-defined rules” (Williamson, 2000, p. 47). Our approach is based on a standard open economy macro model typically employed for the analysis of monetary policy strategies. The consequences of independently floating and market-determined exchange rates are evaluated in terms of a social welfare function or, to be more precise, in terms of an intertemporal loss function containing a central bank’s final targets, output and inflation. We explicitly model the source of the observable fear of floating by questioning the basic assumption underlying most open economy macro models: that the foreign exchange market is an efficient asset market with rational agents. We will show that both policy reactions to the fear of floating (an interest rate response to exchange rate movements, which we call indirect managed floating, and sterilised interventions in the foreign exchange markets, which we call direct managed floating) can be rationalised if we allow for deviations from the assumption of perfectly functioning foreign exchange markets and if we assume a central bank that takes these deviations into account and behaves so as to reach its final targets. In such a scenario, with a high degree of uncertainty about the true model determining the exchange rate, the rationale for indirect managed floating is the monetary policy maker’s quest for a robust interest rate policy rule that performs comparatively well across a range of alternative exchange rate models. We will show, however, that the strategy of indirect managed floating still bears the risk that the central bank’s final targets might be negatively affected by the unpredictability of the true exchange rate behaviour. This is where the second policy measure comes into play.
The use of sterilised foreign exchange market interventions to counter movements of market-determined exchange rates can be rationalised by a central bank’s effort to lower the risk of missing its final targets if it has only a single instrument at its disposal. We provide a theoretical, model-based foundation of a strategy of direct managed floating in which the central bank targets, in addition to a short-term interest rate, the nominal exchange rate. In particular, we develop a rule for the instrument of intervening in the foreign exchange market that is based on the failure of the foreign exchange market to guarantee a reliable relationship between the exchange rate and other fundamental variables.
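The evaluation criterion described above, a quadratic intertemporal loss over the central bank's final targets, can be sketched as follows (discount factor, weights, and paths are hypothetical illustrations; the paper's actual model is richer):

```python
# Quadratic intertemporal loss of the kind used to compare regimes:
# L = sum_t beta^t * (pi_t^2 + lam * y_t^2), with the inflation and
# output-gap targets normalised to zero.  All numbers are hypothetical.

def loss(pi, y, beta=0.99, lam=0.5):
    """Discounted sum of squared deviations of inflation and output."""
    return sum(beta**t * (p**2 + lam * g**2)
               for t, (p, g) in enumerate(zip(pi, y)))

# Two stylised paths: under pure floating, exchange rate swings feed
# fully into inflation; managed floating damps the pass-through.
pi_float,   y_float   = [2.0, 1.5, 1.0, 0.5], [1.0, 0.8, 0.5, 0.2]
pi_managed, y_managed = [1.2, 0.9, 0.6, 0.3], [0.9, 0.7, 0.5, 0.2]

print(loss(pi_float, y_float) > loss(pi_managed, y_managed))  # True
```

Under these invented paths the managed regime yields the lower loss; in the paper the comparison is of course driven by the model, not by assumed paths.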
Summary: In the present work, two important negative regulators of T cell responses in rats were examined. At the molecular level, rat CTLA-4, a receptor important for deactivating T cell responses, was examined for its expression pattern and in vitro functions. For this purpose, anti-rat CTLA-4 mAbs were generated. Consistent with studies in mice and humans, rat CTLA-4 was detectable only in CD25+CD4+ regulatory T cells in unstimulated rats and was upregulated in all activated T cells. Cross-linking rat CTLA-4 led to the deactivation of T cell responses stimulated with anti-TCR and anti-CD28 (costimulation), such as reductions in activation marker expression, proliferation, and IL-2 production. Although T cells stimulated with the superagonistic anti-CD28 antibody alone, without TCR engagement, also increased their CTLA-4 expression, a delayed kinetics of CTLA-4 upregulation was found in cells stimulated in this way. The physiological relevance of this finding needs further investigation. At the cellular level, rat CD25+CD4+ regulatory T cells were examined in detail. Using the anti-rat CTLA-4 mAbs, the phenotype of CD25+CD4+ regulatory T cells was investigated. Identical to the mouse and human Treg phenotype, rat CD25+CD4+ T cells constitutively expressed CTLA-4, were predominantly CD45RC-low, and expressed high levels of CD62L (L-selectin). CD25+CD4+ cells proliferated poorly and were unable to produce IL-2 upon engagement of the TCR and CD28. Furthermore, rat CD25+CD4+ cells produced high amounts of the anti-inflammatory cytokine IL-10 upon stimulation. Importantly, freshly isolated CD25+CD4+ T cells from naïve rats exhibited suppressor activity in in vitro suppressor assays. In vitro, CD25+CD4+ regulatory T cells proliferated vigorously upon superagonistic anti-CD28 stimulation and became very potent suppressor cells.
In vivo, a single injection of the CD28 superagonist into rats induced transient accumulation and activation of CD25+CD4+ regulatory T cells. These findings suggest, firstly, that efficient expansion of CD25+CD4+ cells without loss of their suppressive effects (indeed, with enhancement of their suppressive activities) can be achieved with the superagonistic anti-CD28 antibody in vitro. Secondly, the induction of disproportionate expansion of CD25+CD4+ cells by a single injection of the superagonistic anti-CD28 antibody in vivo implies that this antibody may be a promising candidate for treating autoimmune diseases, by causing a transient increase of activated CD25+CD4+ T cells and thus tipping ongoing autoimmune responses toward self-tolerance.
This study investigates the credit channel in the transmission of monetary policy in Germany by means of a structural analysis of aggregate bank loan data. We base our analysis on a stylized model of the banking firm, which specifies the loan supply decisions of banks in the light of expectations about the future course of monetary policy. Using the model as a guide, we apply a vector error correction model (VECM), in which we identify long-run cointegration relationships that can be interpreted as loan supply and loan demand equations. In this way, the identification problem inherent in reduced-form approaches based on aggregate data is explicitly addressed. The short-run dynamics are explored by means of innovation analysis, which displays the reaction of the variables in the system to a monetary policy shock. The main implication of our results is that the credit channel in Germany appears to be effective, as we find that loan supply effects, in addition to loan demand effects, contribute to the propagation of monetary policy measures.
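The idea of identifying a long-run cointegration relationship can be illustrated with a deliberately simplified Engle-Granger-style sketch on synthetic data; note this is a stand-in for exposition only, the study itself estimates a full VECM, and all series and parameters here are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500

# A common stochastic trend and two I(1) series tied to it -- a toy
# stand-in for aggregate loans and a demand-side variable.
trend  = np.cumsum(rng.normal(size=T))
loans  = 0.8 * trend + rng.normal(scale=0.5, size=T)
demand = 1.0 * trend + rng.normal(scale=0.5, size=T)

# Engle-Granger first step: OLS of one series on the other recovers the
# long-run (cointegrating) relation; its residual should be stationary.
X = np.column_stack([np.ones(T), demand])
beta, *_ = np.linalg.lstsq(X, loans, rcond=None)
resid = loans - X @ beta

# Crude stationarity check: the residual's variance stays bounded while
# the variance of the I(1) series grows with the sample.
print(beta[1], resid.var() < loans.var())
```

A VECM generalizes this: the cointegrating vectors enter as error-correction terms, and the short-run dynamics around them are what the innovation analysis in the study traces out.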
A CD8+ cell-mediated host defense relies on cognate killing of infected target cells and on local inflammation induced by the secretion of IFN-γ. Using assays of single-cell resolution, it was studied to what extent these two effector functions of CD8+ cells are linked. Granzyme B (GzB) is stored in the cytolytic granules of CD8+ cells, and its secretion is induced by antigen recognition by these cells. Following entry into the cytosol, GzB induces apoptosis in the target cells. It was measured whether GzB release by individual CD8+ cells is accompanied by the secretion of IFN-γ and of other cytokines. HIV peptide libraries were tested on bulk peripheral blood mononuclear cells and on purified CD4+ and CD8+ cells obtained from HIV-infected individuals. The library included a panel of previously defined HLA class I-restricted HIV peptides and an overlapping 20-mer peptide series that covered the entire gp120 molecule. To characterize the in vivo differentiation state of the T cells, freshly isolated lymphocytes were tested in assays of 24 h duration. The data showed that only ~20% of the peptides triggered the release of both GzB and IFN-γ from CD8+ cells. The majority of the HIV peptides induced either GzB or IFN-γ, ~40% in each category. The GzB-positive, IFN-γ-negative CD8+ cells did not produce IL-4 or IL-5, which suggests that they do not correspond to Tc2 cells but represent a novel Tc1 subclass, which was termed Tc1c. Also the IFN-γ-positive, GzB-negative CD8+ cell subpopulation represents a yet undefined CD8+ effector cell lineage, which was termed Tc1b. Tc1b and Tc1c cells are likely to make different, possibly antagonistic contributions to the control of HIV infection. Since IFN-γ activates HIV replication in latently infected macrophages, the secretion of this cytokine by Tc1b cells in the absence of killing may have adverse effects on the host defense.
In contrast, cytolysis by Tc1c cells in the absence of IFN-γ production might represent the protective class of response. Further studies in the field of Tc1 effector cell diversity should lead to valuable insights for the management of infections and for developing rationales for vaccine design.
The present investigation reports a protocol to obtain dendritic cells (DC) that protect mice against fatal leishmaniasis. DC were generated from bone marrow precursors, pulsed with leishmanial antigen, and activated with CpG oligodeoxynucleotides. Mice that were vaccinated with these cells were strongly protected against the clinical and parasitological manifestations of leishmaniasis and developed a Th1 immune response. Protection was solid and long-lasting and also depended on the route of administration. When the mechanism of protection was studied, it was observed that the availability of the cytokine interleukin-12 at the time of vaccination was a key requirement, but that the source of this cytokine was not the donor cells but unidentified cells of the recipients.
Nowadays, robotics plays an important role in a growing number of fields of application. There are many environments or situations in which mobile robots are used instead of human beings, because the tasks are too hazardous, uncomfortable, repetitive, or costly for humans to perform. The autonomy and mobility of the robot are often essential for a good solution to these problems. Thus, such a robot should at least be able to answer the question "Where am I?". This thesis investigates the problem of self-localizing a robot in an indoor environment using range measurements. That is, a robot equipped with a range sensor wakes up inside a building and has to determine its position using only its sensor data and a map of its environment. We examine this problem from an idealizing point of view (reducing it to a purely geometric one) and further investigate a method of Guibas, Motwani, and Raghavan from the field of computational geometry for solving it. Here, so-called visibility skeletons, which can be seen as coarsened representations of visibility polygons, play a decisive role. In the major part of this thesis we analyze the structures and the complexities occurring in the framework of this scheme. It turns out that the main source of complication is so-called overlapping embeddings of skeletons into the map polygon, for which we derive some restrictive visibility constraints. Based on these results we are able to improve one of the occurring complexity bounds in the sense that we can formulate it with respect to the number of reflex vertices instead of the total number of map vertices. This also affects the worst-case bound on the preprocessing complexity of the method. The second part of this thesis compares the previous idealizing assumptions with the properties of real-world environments and discusses the problems that occur.
In order to circumvent these problems, we use the concept of distance functions, which model the resemblance between the sensor data and the map, and appropriately adapt the above method to the needs of realistic scenarios. In particular, we introduce a distance function, namely the polar coordinate metric, which seems to be well suited to the localization problem. Finally, we present the RoLoPro software where most of the discussed algorithms are implemented (including the polar coordinate metric).
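The role such a distance function plays can be sketched with a deliberately simple stand-in (a root-mean-square difference between range scans on a common angular grid; the thesis' polar coordinate metric is more refined, and all scan values below are hypothetical):

```python
import numpy as np

def scan_distance(scan_a, scan_b):
    """Toy distance between two range scans taken on the same angular
    grid: root-mean-square difference of the ranges.  Small values mean
    the candidate pose explains the sensor data well -- the same role
    the polar coordinate metric plays in the localization method."""
    a = np.asarray(scan_a, dtype=float)
    b = np.asarray(scan_b, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Hypothetical 8-beam scans: the measured scan and the scans predicted
# from the map for two candidate poses; the closer prediction wins.
measured   = [2.0, 2.1, 2.3, 3.0, 3.2, 2.9, 2.2, 2.0]
candidate1 = [2.0, 2.0, 2.4, 3.1, 3.1, 3.0, 2.1, 2.0]
candidate2 = [1.0, 1.2, 1.1, 1.5, 1.6, 1.4, 1.2, 1.1]

print(scan_distance(measured, candidate1)
      < scan_distance(measured, candidate2))  # True: pose 1 fits better
```

A practical metric must additionally cope with angular misalignment, occlusion, and sensor noise, which is precisely what motivates the polar coordinate metric introduced in the thesis.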
The present work consists of two parts. The first deals with theoretical questions and tests the performance of orbitals obtained from a self-interaction-free KS method, the LHF approach, in multireference ab initio methods. The purpose of this part is to enable a more efficient computation of excitation energies, which is important for the spectroscopic characterization of many organic and bioorganic molecules. The second part focuses on bioorganic questions and studies the base pairing properties of the purine base xanthine in order to explain, e.g., the unusually high stability of self-pairing xanthine alanyl-PNA double strands and the mutagenicity of xanthine formed in DNA. Part 1: In contrast to HF and standard DFT methods, the LHF approach leads to a fully bound virtual orbital spectrum, because Coulomb self-interactions are exactly canceled in the LHF ansatz. Furthermore, the energies of the occupied orbitals are not shifted upward, as is the case for standard DFT methods, so that Koopmans' theorem remains valid. In line with this, the occupied LHF orbitals are also somewhat more compact than standard DFT orbitals. The present work shows that both properties are of great benefit for MR methods. The virtual LHF orbitals are well optimized and allow an efficient description of excited states and static correlation in both MRCI and MRPT2 approaches. Furthermore, the higher compactness of the occupied LHF orbitals compared to standard DFT orbitals leads to a better description of the center ion of Rydberg states. However, for each of the two advantages mentioned, at least one example molecule has been found for which LHF orbitals actually perform worse than HF and/or standard DFT orbitals. This shows that, even though LHF virtual orbitals allow an excellent MRCI and MRPT2 description of the electronically excited states of a large number of molecules, this cannot be generalized, and their performance needs to be tested for each individual case.
In the second part of the present work, the base pairing properties of xanthine and xanthine derivatives were studied. The purpose of this part was to find an explanation for the unexpectedly high stability of the xanthine alanyl-PNA double strand. Furthermore, it was analyzed why xanthine, which is formed from guanine in DNA under chemical stress, is able to form mismatched base pairs with the pyrimidine base thymine. Stability of xanthine alanyl-PNA: In the first step, the regioisomer present in the considered alanyl-PNA was identified as the N7-regioisomer of xanthine by a theoretical analysis of the 13C-NMR spectrum. To analyze the stability of the xanthine self-pairing, a simplified model was set up in which the stability of the PNA double strand was explained solely by the energy contributions from H-bonding and base stacking. For that purpose, the dimerization and stacking energies of the xanthine-xanthine, guanine-cytosine, adenine-thymine and xanthine-2,6-diaminopurine base pairs were computed using DFT and MP2 methods. Solvent effects were taken into account by the conductor-like screening model. The influence of the peptide backbone on the stacking geometry was considered by force-field optimizations. While the individual contributions from hydrogen bonding and stacking do not correlate with the melting temperature Tm, the sum of both correlates linearly with Tm. This correlation is somewhat surprising, because it means that the effects of entropy and of the molecular water environment either cancel or are similar for all systems compared. In this model, the stability of the xanthine self-pairing mainly stems from an enlarged stacking interaction, while the H-bonds contribute only little to the stability of the self-paired xanthine alanyl-PNA double strand.
Base pairing properties of N9-xanthine: The computation of the base pairing properties of N9-xanthine revealed a strong variation in the individual H-bond strengths for the self-pairing of xanthine, which range from -4 to -11 kcal/mol in the gas phase and from -2.5 to -5 kcal/mol in polar solvent. By comparison with model systems it was shown that the strong variance of the H-bond strengths is mainly due to attractive or repulsive secondary electrostatic interactions. For the homodimer of hypoxanthine it was shown that the increase of aromaticity in the pyrimidine ring upon dimer formation leads to a strengthening of the hydrogen bonds. Mutagenicity of hypoxanthine and xanthine: Several neutral and anionic Watson-Crick base pairs of xanthine were computed with MP2 and DFT methods in order to explain the mutagenicity of hypoxanthine and xanthine. Base pairs involving tautomeric forms of xanthine and hypoxanthine were also considered. To evaluate the dimerization energies found, the dimers were classified into pairings that have the exact geometry of the canonical base pairs and those that realize a distorted Watson-Crick pairing mode. The computations show that a stable pairing realizing the exact geometry of a canonical Watson-Crick base pair is only possible for the pairing of xanthine with cytosine; however, these base pairs are only weakly bound. The dimerization energies of both the neutral and the anionic pairing are around 0 kcal/mol, so that the xanthine-cytosine base pairs are incorporated into DNA solely because they fulfill the geometric demands of DNA polymerase, without profiting from any additional stabilization due to hydrogen bonding. The finding that xanthine has almost no affinity for cytosine in the Watson-Crick pairing mode corresponds to the experimental result that the cytosine-xanthine base pair is incorporated into DNA at a much lower rate than the cytosine-guanine base pair, which has very strong hydrogen bonding.
While the affinity of xanthine for cytosine is very low, the computations predict that xanthine is able to form a stable Watson-Crick pairing with thymine. However, the pairing has a somewhat distorted Watson-Crick geometry, so that its high stability is offset by the worse fit to the binding pocket of DNA polymerase. As a consequence, the xanthine-thymine pairing is incorporated into DNA not at a faster rate, but only at a rate comparable to that of the xanthine-cytosine pairing.
The aim of the present work was to contribute to a long-term, ongoing project on developing human IL-5 agonists/antagonists that interfere with or inhibit the numerous functions of IL-5 in cell culture and/or in animal disease models. To facilitate the design of an IL-5 antagonist variant or of low-molecular-weight mimetics that are capable of binding to the specific receptor α-chain but lack the ability to recruit the common receptor β-chain and thus initiate receptor complex activation, information on the minimal structural and functional epitopes is necessary. Such a strategy was successfully applied in our group to interleukin-4. To precisely localize the minimal structural epitope, the structure of the ligand in its bound form is essential, and the structure of the complex of the ligand and its specific receptor α-chain would be especially informative. For this purpose, large quantities (tens of milligrams) of fully biologically active IL-5 and of the extracellular domain of the IL-5-specific receptor α-chain were expressed in a bacterial expression system (E. coli). After successful refolding, the proteins were purified to 95-99% purity, and a stable, soluble receptor:ligand complex was prepared. Each of the established purification and refolding procedures was optimized for maximal yield and purity. The receptor:ligand complex thus produced was subjected to crystallization experiments. Microcrystals were initially obtained with a flexible sparse-matrix screening methodology, and crystal quality was subsequently improved by fine-tuning the crystallization conditions. At this stage, crystals of about 800 × 150 × 30 µm in size can be obtained. They possess the desirable visible characteristics of crystals, including optical clarity, smooth faces and sharp edges, and they rotate plane-polarized light, reflecting their good internal organization.
Unfortunately, the relative thinness and the sometimes clustered nature of the produced crystals complicate the acquisition of a high-resolution dataset and the solution of the structure. With some of the obtained crystals, diffraction to a resolution of up to 4 Å was observed.