Virtualization allows the creation of virtual instances of physical devices, such as network and processing units. In a virtualized system, governed by a hypervisor, resources are shared among virtual machines (VMs). Virtualization has been receiving increasing interest as a way to reduce costs through server consolidation and to enhance the flexibility of physical infrastructures. Although virtualization provides many benefits, it introduces new security challenges: the hypervisor itself becomes a source of threats, since it exposes new attack surfaces.
Intrusion detection is a common cyber security mechanism whose task is to detect malicious activities in host and/or network environments. This enables timely reaction in order to stop an on-going attack, or to mitigate the impact of a security breach. The wide adoption of virtualization has resulted in the increasingly common practice of deploying conventional intrusion detection systems (IDSs), for example, hardware IDS appliances or common software-based IDSs, in designated VMs as virtual network functions (VNFs). In addition, the research and industrial communities have developed IDSs specifically designed to operate in virtualized environments (i.e., hypervisor-based IDSs), with components both inside the hypervisor and in a designated VM. The latter are becoming increasingly common with the growing proliferation of virtualized data centers and the adoption of the cloud computing paradigm, for which virtualization is a key enabling technology.
To minimize the risk of security breaches, methods and techniques for evaluating IDSs in an accurate manner are essential. For instance, one may compare different IDSs in terms of their attack detection accuracy in order to identify and deploy the IDS that operates optimally in a given environment, thereby reducing the risks of a security breach. However, methods and techniques for realistic and accurate evaluation of the attack detection accuracy of IDSs in virtualized environments (i.e., IDSs deployed as VNFs or hypervisor-based IDSs) are lacking. In particular, workloads that exercise the sensors of an evaluated IDS and contain attacks targeting hypervisors are needed. Attacks targeting hypervisors are of high severity since they may result in, for example, altering the hypervisor’s memory and thus enabling the execution of malicious code with hypervisor privileges. In addition, there are no metrics and measurement methodologies
for accurately quantifying the attack detection accuracy of IDSs in virtualized environments with elastic resource provisioning (i.e., on-demand allocation or deallocation of virtualized hardware resources to VMs). Modern hypervisors allow for hotplugging virtual CPUs and memory on the designated VM where the intrusion detection engine of hypervisor-based IDSs, as well as of IDSs deployed as VNFs, typically operates. Resource hotplugging may have a significant impact on the attack detection accuracy of an evaluated IDS, which is not taken into account by existing metrics for quantifying IDS attack detection accuracy. This may lead to inaccurate measurements, which, in turn, may result in the deployment of misconfigured or ill-performing IDSs, increasing
the risk of security breaches.
This thesis presents contributions that span the standard components of any system
evaluation scenario: workloads, metrics, and measurement methodologies. The scientific contributions of this thesis are:
A comprehensive systematization of the common practices and the state-of-the-art on IDS evaluation. This includes: (i) a definition of an IDS evaluation design space that allows existing practical and theoretical work to be put into a common context in a systematic manner; (ii) an overview of common practices in IDS evaluation, reviewing evaluation approaches and methods related to each part of the design space; and (iii) a set of case studies demonstrating how different IDS evaluation approaches are applied in practice. Given the significant amount of existing practical and theoretical work related to IDS evaluation, the presented systematization is beneficial for improving the general understanding of the topic by providing an overview of the current state of the field. In addition, it is beneficial for identifying and contrasting advantages and disadvantages of different IDS evaluation methods and practices, while also helping to identify specific requirements and best practices for evaluating current and future IDSs.
An in-depth analysis of common vulnerabilities of modern hypervisors as well as a set of attack models capturing the activities of attackers triggering these vulnerabilities. The analysis includes 35 representative vulnerabilities of hypercall handlers (i.e., hypercall vulnerabilities). Hypercalls are software traps from the kernel of a VM to the hypervisor. The hypercall interface, alongside device drivers and VM exit events, is one of the attack surfaces that hypervisors expose. Triggering a hypercall vulnerability may lead to a crash of the hypervisor or to altering the hypervisor’s memory. We analyze the origins of the considered hypercall vulnerabilities, demonstrate and analyze possible attacks that trigger them (i.e., hypercall attacks), develop hypercall attack models (i.e., systematized activities of attackers targeting the hypercall interface), and discuss future research directions focusing on approaches for securing hypercall interfaces.
A novel approach for evaluating IDSs enabling the generation of workloads that contain attacks targeting hypervisors, that is, hypercall attacks. We propose an approach for evaluating IDSs using attack injection (i.e., controlled execution of attacks during regular operation of the environment where an IDS under test is deployed). The injection of attacks is performed based on attack models that capture realistic attack scenarios. We use the hypercall attack models developed as part of this thesis for injecting hypercall attacks.
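The injection approach can be illustrated with a minimal sketch: attack steps drawn from an attack model are interleaved with regular (benign) operation, and every injection is logged as ground truth against which the alerts of the IDS under test can later be matched. All names below, including the example hypercall steps, are illustrative assumptions and not hInjector's actual interface.

```python
import random

# Illustrative sketch of attack injection (not hInjector's real API):
# steps from an attack model are injected at randomized instants during
# regular operation, and each injection is recorded as ground truth.

class AttackModel:
    """An attack model: an ordered sequence of attack steps (hypercall names here)."""
    def __init__(self, steps):
        self.steps = steps

def run_injection_campaign(models, max_gap, seed=0):
    """Interleave injections with benign activity; return the ground-truth log."""
    rng = random.Random(seed)
    ground_truth = []  # (logical_time, step): what a perfect IDS should flag
    t = 0
    for model in models:
        for step in model.steps:
            t += rng.randint(1, max_gap)    # gap of regular (benign) operation
            ground_truth.append((t, step))  # the injection instant
    return ground_truth

model = AttackModel(["grant_table_op", "memory_op", "set_debugreg"])
truth = run_injection_campaign([model], max_gap=10)
```

Comparing such a ground-truth log with the alerts an IDS emits during the campaign yields the true positives and false negatives needed to quantify detection accuracy.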
A novel metric and measurement methodology for quantifying the attack detection accuracy of IDSs in virtualized environments that feature elastic resource provisioning. We demonstrate how the elasticity of resource allocations in such environments may impact the IDS attack detection accuracy and show that using existing metrics in such environments may lead to practically challenging and inaccurate measurements. We also demonstrate the practical use of the metric we propose through a set of case studies, where we evaluate common conventional IDSs deployed as VNFs.
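The measurement problem can be made concrete with a small sketch. The numbers and the pooling scheme below are hypothetical (the thesis's actual metric is more elaborate); the point is that pooling detection results across the resource configurations of an elastic run can mask a configuration-dependent collapse in accuracy.

```python
# Hypothetical measurements: true-positive rate per resource configuration
# (phase) of an elastic run, contrasted with a naively pooled aggregate.

def true_positive_rate(detected, injected):
    return detected / injected if injected else 0.0

phases = {
    "2 vCPUs / 2 GB": {"injected": 50, "detected": 48},  # IDS well provisioned
    "1 vCPU / 1 GB":  {"injected": 50, "detected": 20},  # IDS starved after hot-unplug
}

per_phase = {name: true_positive_rate(p["detected"], p["injected"])
             for name, p in phases.items()}
pooled = true_positive_rate(sum(p["detected"] for p in phases.values()),
                            sum(p["injected"] for p in phases.values()))
```

Here the pooled rate of 0.68 hides that detection drops to 0.40 in the low-resource phase; a metric for elastic environments must preserve the per-configuration information.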
In summary, this thesis presents the first systematization of the state-of-the-art on IDS evaluation, considering workloads, metrics and measurement methodologies as integral parts of every IDS evaluation approach. In addition, we are the first to examine the hypercall attack surface of hypervisors in detail and to propose an approach using attack injection for evaluating IDSs in virtualized environments. Finally, this thesis presents the first metric and measurement methodology for quantifying the attack detection accuracy of IDSs in virtualized environments that feature elastic resource provisioning.
From a technical perspective, as part of the proposed approach for evaluating IDSs, this thesis presents hInjector, a tool for injecting hypercall attacks. We designed hInjector to enable the rigorous, representative, and practically feasible evaluation of IDSs using attack injection. We demonstrate the application and practical usefulness of hInjector, as well as of the proposed approach, by evaluating a representative hypervisor-based IDS designed to detect hypercall attacks. While we focus on evaluating the capabilities of IDSs to detect hypercall attacks, the proposed IDS evaluation approach can be generalized and applied in a broader context. For example, it may be directly used to also evaluate security mechanisms of hypervisors, such as hypercall access control (AC) mechanisms. It may also be applied to evaluate the capabilities
of IDSs to detect attacks involving operations that are functionally similar to hypercalls,
for example, the input/output control (ioctl) calls that the Kernel-based Virtual Machine (KVM) hypervisor supports. For IDSs in virtualized environments featuring elastic resource provisioning, our approach for injecting hypercall attacks can be applied in combination with the attack detection accuracy metric and measurement methodology we propose. Our approach for injecting hypercall attacks, and our metric and measurement methodology, can also be applied independently beyond the scenarios considered in this thesis. The wide spectrum of security mechanisms in virtualized environments whose evaluation can directly benefit from the contributions of this thesis (e.g., hypervisor-based IDSs, IDSs deployed as VNFs, and AC mechanisms) reflects the practical relevance of the thesis.
Costly signaling with mobile devices: An evolutionary psychological perspective on smartphones
(2016)
In the last decade, mobile device ownership has largely increased. In particular, smartphone ownership is constantly rising (A. Smith, 2015; Statista, 2016a), and there is a real hype for luxury brand smartphones (Griffin, 2015). These observations raise the question of which functions smartphones serve in addition to their original purposes of making and receiving calls, searching for information, and organizing. Beyond these obvious functions, studies suggest that smartphones express fashion, lifestyle, and one’s economic status (e.g., Bødker et al., 2009; Statista, 2016b; Vanden Abeele, Antheunis, & Schouten, 2014). Specifically, individuals seem to purchase and use conspicuous luxury brand smartphones to display and enhance status (D. Kim et al., 2014; Müller-Lietzkow et al., 2014; Suki, 2013). But how does owning a conspicuous, high-status smartphone contribute to status, and which benefits may these status boosts provide to their owners? From an evolutionary perspective, status carries a lot of advantages, particularly for males; high status grants them priority access to resources and correlates with their mating success (van Vugt & Tybur, 2016). In this sense, research suggests that men conspicuously display their cell phones to attract mates and to distinguish themselves from rivals (Lycett & Dunbar, 2000). In a similar vein, evolutionarily informed studies on conspicuous consumption indicate that the purchase and display of conspicuous luxuries (including mobile phones and smartphones) relate to a man’s interest in uncommitted sexual relationships and enhance his desirability as a short-term mate (Hennighausen & Schwab, 2014; Saad, 2013; Sundie et al., 2011). Drawing on these findings, this doctoral dissertation investigated, in three experiments, how a man who owns a high-status (vs. a nonconspicuous, low-status) smartphone is perceived as a romantic partner and as a male rival.
In addition, it was examined how male conspicuous consumption of smartphones interacted with further traits that signal a man’s mate quality, namely facial attractiveness (Studies 1 and 2) and social dominance (Study 3). Study 1 revealed that men and women perceived a male owner of a conspicuous smartphone as a less desirable long-term mate and as more inclined toward short-term mating. Study 2 replicated these results and showed that men and women assigned traits that are associated with short-term mating (e.g., low loyalty, interest in flirts, availability of tangible resources) to a male owner of a conspicuous smartphone and perceived him as a stronger male rival and mate poacher, and less as a friend. The results of Study 2 further suggested that specifically more attractive men might benefit from owning a conspicuous smartphone in a short-term mating context and might hence be considered as stronger male rivals. Study 3 partially replicated the findings of Studies 1 and 2 pertaining to the effects of owning a conspicuous smartphone. Study 3 did not show different effects of conspicuous consumption of smartphones on perceptions of a man dependent on the level of his social dominance.
To conclude, the findings of this doctoral dissertation suggest that owning a conspicuous, high-status smartphone might not only serve proximate functions (e.g., making and receiving calls, organization) but also ultimate functions, which relate to mating and reproduction. The results indicate that owning a conspicuous smartphone might yield benefits for men in a short-term rather than in a long-term mating context. Furthermore, more attractive men appear to benefit more from owning a conspicuous smartphone than less attractive men. These findings provide further insights into the motivations that underlie men’s purchases and displays of conspicuous, high-status smartphones from luxury brands that reach beyond the proximate causes frequently described in media and consumer psychological research. By applying an evolutionary perspective, this doctoral dissertation demonstrates the power and utility of this research paradigm for media psychological research and shows how combining a proximate and ultimate perspective adds to a more profound understanding of smartphone phenomena.
Project Borylene
A new borylene ligand ({BN(SiMe\(_3\))(t-Bu)}) has been successfully synthesized bound in a terminal manner to base metal scaffolds of the type [M(CO)\(_5\)] (M = Cr, Mo, and W), yielding complexes [(OC)\(_5\)Cr{BN(SiMe\(_3\))(t-Bu)}] (19), [(OC)\(_5\)Mo{BN(SiMe\(_3\))(t-Bu)}] (20), and [(OC)\(_5\)W{BN(SiMe\(_3\))(t-Bu)}] (21) (Figure 5-1). Synthesis of complexes 19, 20, and 21 was accomplished by double salt elimination reactions of Na\(_2\)[M(CO)\(_5\)] (M = Cr (11), Mo (1), and W (12)) with the dihaloborane Br\(_2\)BN(SiMe\(_3\))(t-Bu) (18). This new “first generation” unsymmetrical borylene ligand is closely akin to the bis(trimethylsilyl)aminoborylene ligand and has been shown to display similar structural characteristics and reactivity. The unsymmetrical borylene ligand {BN(SiMe\(_3\))(t-Bu)} does display some individual characteristics of note: based on NMR and IR spectroscopic evidence, it has experimentally been shown to undergo photolytic transfer to transition metal scaffolds more rapidly, and appears to be a more reactive borylene ligand, than the previously published symmetrical {BN(SiMe\(_3\))\(_2\)} ligand.
Photolytic transfer reactions with this new borylene ligand ({BN(SiMe\(_3\))(t-Bu)}) were conducted with other metal scaffolds, resulting in either complete borylene transfer or partial transfer to form bridging borylene ligand interactions between the two transition metals. The unsymmetrical ligand’s coordination to early transition metals (up to Group 6) indicates a preference for a terminal coordination motif while bound to these highly Lewis acidic species. The ligand appears to form more energetically stable bridging coordination modes when bound to transition metals with high Lewis basicity (beyond Group 9) and has been observed to transfer to transition metal scaffolds in a terminal manner and subsequently rearrange in order to achieve a more energetically stable bridging final state.
Figure 5-2 lists the four different transfer reactions conducted between the chromium borylene species [(OC)\(_5\)Cr{BN(SiMe\(_3\))(t-Bu)}] (19) and the transition metal complexes [(η\(^5\)-C\(_5\)H\(_5\))V(CO)\(_4\)] (51), [(η\(^5\)-C\(_5\)Me\(_5\))Ir(CO)\(_2\)] (56), [(η\(^5\)-C\(_5\)H\(_4\)Me)Co(CO)\(_2\)] (59), and [{(η\(^5\)-C\(_5\)H\(_5\))Ni}\(_2\){μ-(CO)\(_2\)}] (53). These reactions successfully yielded the new “second generation” borylene complexes [(η\(^5\)-C\(_5\)H\(_5\))(OC)\(_3\)V{BN(SiMe\(_3\))(t-Bu)}] (55), [(η\(^5\)-C\(_5\)Me\(_5\))Ir{BN(SiMe\(_3\))(t-Bu)}\(_2\)] (58), [{(η\(^5\)-C\(_5\)H\(_4\)Me)Co}\(_2\)(μ-CO)\(_2\){μ-BN(SiMe\(_3\))(t-Bu)}] (61), and [{(η\(^5\)-C\(_5\)H\(_5\))Ni}\(_2\)(μ-CO){μ-BN(SiMe\(_3\))(t-Bu)}] (62), respectively.
Analysis of the accumulated data for all of the terminal borylene species discussed in this section, particularly bond distances, infrared spectroscopy, and \(^{11}\)B{\(^1\)H} NMR spectroscopic data, has been performed, and a trend in the data has led to the following conclusions:
[1] NMR spectroscopic data for the \(^{11}\)B{\(^1\)H} boron and \(^{13}\)C{\(^1\)H} carbonyl environments of the first generation borylene species ([(OC)\(_5\)M{BN(SiMe\(_3\))(t-Bu)}] (M = Cr (19), Mo (20), and W (21))) all show progressive up-field shifting as the Group 6 metal becomes heavier (Cr (19) to Mo (20) to W (21)), indicating maximum deshielding for these nuclei in the [(OC)\(_5\)Cr{BN(SiMe\(_3\))(t-Bu)}] (19) complex.
[2] The boron-metal-trans-carbon (B-M-C\(_{trans}\)) axes of the first generation borylene complexes [(OC)\(_5\)M{BN(SiMe\(_3\))(t-Bu)}] (M = Mo (20), and W (21)) are not completely linear, preventing direct IR spectroscopic comparison. The chromium analog [(OC)\(_5\)Cr{BN(SiMe\(_3\))(t-Bu)}] (19), however, is essentially linear and displays the expected three carbonyl IR stretching frequencies, all at higher energy than those of the chromium bis(trimethylsilyl)aminoborylene complex [(OC)\(_5\)Cr{BN(SiMe\(_3\))\(_2\)}] (13), indicating that the {BN(SiMe\(_3\))(t-Bu)} ligand is either a weaker σ-donor or a stronger π-acceptor than the {BN(SiMe\(_3\))\(_2\)} ligand toward the chromium metal center.
[3] In transfer reactions, the {BN(SiMe\(_3\))(t-Bu)} fragment appears to be more stable as a terminal ligand when bound to more Lewis acidic first row transition metals and appears to prefer coordination in a bridging motif when coordinated to more Lewis basic first row transition metals.
Project Borirene
The synthesis of the first platinum bis(borirene) complexes is presented along with findings from structural and electronic examination of the role of platinum in allowing increased coplanarity and conjugation of twin borirene systems. This series of trans-platinum-linked bis(borirene) complexes (119/120, 122/123, and 125/126) all show coplanarity in the twin ring systems and stand as the first verified structural representations of two coplanar borirene systems across a linking unit. The role of a platinum atom in mediating communication between chromophoric ligands can be generalized by an expected bathochromic (red) shift in the absorption spectrum due to an increase in the electronic delocalization between the formerly independent aromatic systems when compared to the platinum mono-σ-borirenyl systems. The trans-platinum bis(borirene) scaffold serves as a simplified monomeric system that allows not only study of the effects of transition metals in mediating electronic conjugation, but also the tunability of the overall photophysical profile of the system by exocyclic augmentation of the three-membered aromatic ring.
A series of trans-platinum bis(alkynyl) complexes were prepared (Figure 5-3) to serve as stable platforms (95, 102, 106, and 63) for the transfer of terminal borylene ligands {BN(SiMe\(_3\))\(_2\)}. Mixing of cis-[PtCl\(_2\)(PEt\(_3\))\(_2\)] (93) with two equivalents of the corresponding alkynes in diethylamine solutions successfully yielded trans-[Pt(C≡C-Ph)\(_2\)(PEt\(_3\))\(_2\)] (95), trans-[Pt(C≡C-p-C\(_6\)H\(_4\)OMe)\(_2\)(PEt\(_3\))\(_2\)] (102), trans-[Pt(C≡C-p-C\(_6\)H\(_4\)CF\(_3\))\(_2\)(PEt\(_3\))\(_2\)] (106), and trans-[Pt(C≡C-9-C\(_{14}\)H\(_9\))\(_2\)(PEt\(_3\))\(_2\)] (63) through salt elimination reactions.
Three of the trans-platinum bis(alkynyl) complexes (95, 102, and 106) successfully yielded trans-platinum bis(borirenyl) complexes 119/120, 122/123, and 125/126 through photolytic transfer of two equivalents of the terminal borylene ligand {BN(SiMe\(_3\))\(_2\)} from [(OC)\(_5\)Cr{BN(SiMe\(_3\))\(_2\)}] (13) (Figure 5-4). Attempted borylene transfer reactions to the trans-platinum bis(alkynyl) complex trans-[Pt(C≡C-9-C\(_{14}\)H\(_9\))\(_2\)(PEt\(_3\))\(_2\)] (63) failed due to the complex’s photoinstability. Although a host of other variants of platinum alkynyl species were prepared and tested, these three were the only ones that successfully yielded trans-platinum bis(borirenyl) units. Attempts were also made to create a cis variant for direct UV-vis comparison to the trans-platinum bis(borirenyl) variants; however, these attempts were also unsuccessful. Gladysz-type platinum end-capped alkynyl species were also synthesized to serve as transfer platforms for sequential borirene synthesis; however, these species were likewise shown to not be photolytically stable.
A host of new monoborirenes: Ph-(μ-{BN(SiMe\(_3\))(t-Bu)}C=C)-Ph (148), trans-[PtCl{(μ-{BN(SiMe\(_3\))(t-Bu)}C=C)-Ph}(PEt\(_3\))\(_2\)] (149), and [(η\(^5\)-C\(_5\)Me\(_5\))(OC)\(_2\)Fe(μ-{BN(SiMe\(_3\))(t-Bu)}C=C)Ph] (150) were synthesized by photo- and thermolytic transfer of the unsymmetrical {BN(SiMe\(_3\))(t-Bu)} ligand from the complexes [(OC)\(_5\)M{BN(SiMe\(_3\))(t-Bu)}] (M = Cr (19), Mo (20), and W (21)) to organic and organometallic alkynyl species, verifying that these borylene complexes all display reactivity similar to the symmetrical terminal borylenes of the type [(OC)\(_5\)M{BN(SiMe\(_3\))\(_2\)}] (M = Cr (13), Mo (14), and W (15)). These monoborirenes are all oils in their pure state, and X-ray structural determination was therefore not possible for these species.
Project Boratabenzene
The bis(boratabenzene) complex [{(η\(^5\)-C\(_5\)H\(_5\))Co}\(_2\){μ:η\(^6\),η\(^6\)-(BC\(_5\)H\(_5\))\(_2\)}] (189) was successfully prepared by treatment of tetrabromodiborane (65) with six equivalents of cobaltocene (176) in a unique reaction that utilized cobaltocene as both a reagent and reductant (Figure 5-5). The bimetallic transition metal complex features a new bridging bis(boratabenzene) ligand linked through a boron-boron single bond that can manifest delocalization of electron density by providing an accessible LUMO orbital for π-communication between the cobalt centers and heteroaromatic rings.
This dianionic diboron ligand was shown to facilitate electronic coupling between the cobalt metal sites, as evidenced by the potential separations between successive single-electron redox events in the cyclic voltammogram. Four formal redox potentials for complex 189 were found: E\(_{1/2}\)(1) = −0.84 V, E\(_{1/2}\)(2) = −0.94 V, E\(_{1/2}\)(3) = −2.09 V, and E\(_{1/2}\)(4) = −2.36 V (relative to the Fc/Fc\(^+\) couple) (Figure 5-6). These potentials correlate to two closely-spaced oxidation waves and two well-resolved reduction waves ([(189)]\(^{0/+1}\), [(189)]\(^{+1/+2}\), [(189)]\(^{0/–1}\), and [(189)]\(^{–1/–2}\) redox couples, respectively). The extent of metal-metal communication was found to depend on the charge of the metal atoms, with the negative charge being more efficiently delocalized across the bis(boratabenzene) unit (class II Robin-Day system). Magnetic studies indicate that the Co(II) ions are weakly antiferromagnetically coupled across the B-B bridge.
While reduction of the bis(boratabenzene) system resulted in decomposition of the complex, oxidation of the system by one- and two-electron steps resulted in stable, isolable monocationic (194) and dicationic (195) forms of the bis(boratabenzene) complex (Figure 5-7). Study of these systems verified the results of the cyclic voltammetry studies performed on the neutral species. These species are unfortunately not stable in acetonitrile or nitromethane solutions, which so far are the only solvents observed to dissolve the cationic species. Unfortunately, this instability in solution complicates reactivity studies of these cationic complexes.
Finally, reactivity studies were performed on the neutral bis(boratabenzene) complex 189 in which the compound was tested for: (A) cleavage of the boratabenzene (cyclo-BC\(_5\)H\(_5\)) ring from the cobalt center, and (B) oxidative addition of the B-B bond to a transition metal scaffold in an attempt to synthesize the first ever L\(_x\)M-η\(^1\)-(BC\(_5\)H\(_5\)) complex. Both of these reactivity studies, however, proved unsuccessful, typically resulting in decomposition of the bis(boratabenzene) complex or in no reactivity. After repeated attempts of these reactions, no oxidative addition of the bis(boratabenzene) system could be confirmed.
Starting with a terminological and phenomenological perspective on the question “What is an emotion?”, particularly as developed by Aaron Ben Zeʾev, the killing scene in the book of Judith (Jdt 12:10–13:9) is analysed. This crucial scene in the book’s plot reports the intense emotions of Holofernes, but nothing is said about any emotions on the part of Judith. The only emotional glimpse occurs in Judith’s short prayers in the killing scene. The highly emotional Holofernes and the unemotional Judith together reveal that Holofernes is already made “headless” by his own emotions, whereas Judith, unencumbered by emotions, is able to behead the “headless” Holofernes.
This dissertation focuses on the drivers of international capital flows to emerging markets, as well as the determinants of crises in emerging markets. Particular emphasis is devoted to the role of U.S. monetary policy. The dissertation consists of three independent chapters.
Chapter 1 is a survey of the voluminous empirical literature on the drivers of capital flows to emerging markets. The contribution of the survey is to provide a comprehensive assessment of what we can say with relative confidence about the empirical drivers of EM capital flows. The evidence is structured based on the recognition that the drivers of capital flows vary over time and across different types of capital flows. The drivers are classified using the traditional framework for external and domestic factors (often referred to as “push versus pull” drivers), which is augmented by a distinction between cyclical and structural factors. Push factors are found to matter most for portfolio flows, somewhat less for banking flows, and least for foreign direct investment (FDI). Pull factors matter for all three components, but most for banking flows. A historical perspective suggests that the recent literature may have overemphasized the importance of cyclical factors at the expense of longer-term structural trends.
Chapter 2 undertakes an empirical analysis of the drivers of portfolio flows to emerging markets, focusing on the role of Fed policy. A time series model is estimated to analyze two different measures of high-frequency portfolio flows: monthly data on flows into investment funds and a novel dataset of monthly portfolio flows compiled from individual national sources. The evidence presented in this chapter suggests a more nuanced interpretation of the role of U.S. monetary policy. In the existing literature, it is traditionally argued that Fed policy tightening is unambiguously negative for capital flows to emerging markets. By contrast, the findings presented in this dissertation suggest that it is the surprise element of monetary policy that affects EM portfolio inflows. A shift in market expectations towards easier future U.S. monetary policy leads to greater foreign portfolio inflows and vice versa. Given current market expectations of sustained increases in the federal funds rate in coming years, EM portfolio flows could be boosted by a slower pace of Fed tightening than currently expected or could be reduced by a faster pace of Fed tightening.
Chapter 3 examines the role of U.S. monetary policy in determining the incidence of emerging market crises. A negative binomial count model and a panel logit model are estimated to analyze the determinants of currency crises, banking crises, and sovereign defaults in a group of 27 emerging economies. The estimation results suggest that the probability of crises is substantially higher (1) when the federal funds rate is above its natural level, (2) during Fed policy tightening cycles, and (3) when market participants are surprised by signals that the Fed will tighten policy faster than previously expected. These findings contrast with the existing literature, which generally views domestic factors as the dominant determinants of emerging market crises. The findings also point to a heightened risk of emerging market crises in the coming years if the Fed continues to tighten monetary policy.
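The direction of these results can be sketched with a stylized logit specification. The coefficients below are invented purely for illustration; the chapter's actual estimates come from a panel logit (and a negative binomial count model) fitted to data for 27 emerging economies.

```python
import math

def crisis_probability(ffr_gap, tightening_cycle, hawkish_surprise,
                       b0=-3.0, b1=0.8, b2=0.7, b3=0.9):
    """Stylized logit: P(crisis) = 1 / (1 + exp(-(b0 + b1*gap + b2*cycle + b3*surprise))).

    ffr_gap: federal funds rate minus its natural level (percentage points)
    tightening_cycle: 1 during a Fed tightening cycle, else 0
    hawkish_surprise: 1 if markets were surprised by faster tightening, else 0
    Coefficients are illustrative assumptions, not the chapter's estimates.
    """
    z = b0 + b1 * ffr_gap + b2 * tightening_cycle + b3 * hawkish_surprise
    return 1.0 / (1.0 + math.exp(-z))

calm = crisis_probability(0.0, 0, 0)   # benign U.S. monetary conditions
tense = crisis_probability(1.0, 1, 1)  # all three risk factors present
```

With these illustrative coefficients the crisis probability rises from roughly 5% in the calm scenario to about 35% when all three factors are present, mirroring the qualitative direction of the chapter's findings.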
Computer systems have replaced the human workforce in many parts of everyday life, but there still exists a large number of tasks that cannot yet be automated. This also includes tasks that we consider rather simple, like the categorization of image content or subjective ratings. Traditionally, these tasks have been completed by designated employees or outsourced to specialized companies. However, recently the crowdsourcing paradigm is more and more applied to complete such human-labor intensive tasks. Crowdsourcing aims at leveraging the huge number of Internet users all around the globe, which form a potentially highly available, low-cost, and easily accessible workforce.
To enable the distribution of work on a global scale, new web-based services, so-called crowdsourcing platforms, emerged that act as mediators between employers posting tasks and workers completing tasks. However, the crowdsourcing approach, especially the large anonymous worker crowd, results in two types of challenges. On the one hand, there are technical challenges like the dimensioning of crowdsourcing platform infrastructure or the interconnection of crowdsourcing platforms and machine clouds to build hybrid services. On the other hand, there are conceptual challenges like identifying reliable workers or migrating traditional off-line work to the crowdsourcing environment. To tackle these challenges, this monograph analyzes and models current crowdsourcing systems to optimize crowdsourcing workflows and the underlying infrastructure. First, a categorization of crowdsourcing tasks and platforms is developed to derive generalizable properties. Based on this categorization and an exemplary analysis of a commercial crowdsourcing platform, models for different aspects of crowdsourcing platforms and crowdsourcing mechanisms are developed. A special focus is put on quality assurance mechanisms for crowdsourcing tasks, where the models are used to assess the suitability and costs of existing approaches for different types of tasks. Further, a novel quality assurance mechanism solely based on user interactions is proposed and its feasibility is shown. The findings from the analysis of existing platforms, the derived models, and the developed quality assurance mechanisms are finally used to derive best practices for two crowdsourcing use-cases: crowdsourcing-based network measurements and crowdsourcing-based subjective user studies. These two exemplary use-cases cover aspects typical for a large range of crowdsourcing tasks and illustrate the potential benefits, but also the resulting challenges, of using crowdsourcing.
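A common baseline among quality assurance mechanisms for crowdsourcing tasks is redundancy with majority voting. The following minimal sketch (task labels and prices are invented for illustration) shows both the mechanism and why its cost scales with the redundancy factor:

```python
from collections import Counter

def majority_vote(answers):
    """Decide a task's result by the most frequent worker answer."""
    return Counter(answers).most_common(1)[0][0]

def redundancy_cost(n_tasks, redundancy, price_per_answer):
    """Redundancy-based QA pays for every task `redundancy` times."""
    return n_tasks * redundancy * price_per_answer

# One unreliable answer out of three is outvoted:
label = majority_vote(["cat", "cat", "dog"])
# 1000 hypothetical image-labeling tasks at 3-fold redundancy, $0.02 per answer:
cost = redundancy_cost(1000, 3, 0.02)
```

Such cost models make explicit why a quality assurance mechanism based solely on user interactions, which needs no redundant answers, can be attractive for large task batches.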
With the ongoing digitalization and globalization of labor markets, the crowdsourcing paradigm is expected to gain even more importance in the coming years. This is already evident in newly emerging fields of crowdsourcing, such as enterprise crowdsourcing and mobile crowdsourcing. The models developed in this monograph enable platform providers to optimize their current systems and employers to optimize their workflows to increase their commercial success. Moreover, the results help to improve the general understanding of crowdsourcing systems, a key to identifying necessary adaptations and future improvements.
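A common baseline for the quality assurance of crowdsourcing tasks discussed above is redundant task assignment with majority voting (this is a generic illustration of that baseline, not the monograph's user-interaction-based mechanism; the function name and threshold parameter are hypothetical):

```python
from collections import Counter

def majority_vote(answers, min_agreement=0.5):
    """Aggregate redundant worker answers for a single task.

    Returns the winning label, or None if no label exceeds the
    required agreement ratio (i.e. the task needs more workers).
    """
    if not answers:
        return None
    counts = Counter(answers)
    label, votes = counts.most_common(1)[0]
    if votes / len(answers) > min_agreement:
        return label
    return None

# Three workers label the same image; two agree, so "cat" wins.
print(majority_vote(["cat", "cat", "dog"]))
```

The trade-off such models quantify is cost versus reliability: each additional redundant assignment raises the expense per task but lowers the probability that unreliable workers determine the outcome.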
Neuropathic pain, caused by neuronal damage, is a severely impairing, mostly chronic condition. Its underlying molecular mechanisms have not yet been thoroughly understood in their variety. In this doctoral thesis, I investigated the role of microRNAs (miRNAs) in a murine model of peripheral neuropathic pain. MiRNAs are small, non-coding RNAs known to play a crucial role in post-transcriptional gene regulation, mainly in cell proliferation and differentiation. Initially, expression patterns in affected dorsal root ganglia (DRG) at different time points after setting a peripheral nerve lesion were studied. DRG showed an increasingly differential expression pattern over the course of one week. Interestingly, a similar effect, albeit to a smaller extent, was observed in the corresponding contralateral ganglia. Five miRNAs (miR-124, miR-137, miR-183, miR-27b, and miR-505) were further analysed. qPCR, in situ hybridization, and bioinformatic analysis point towards a role for miR-137 and miR-183 in neuropathic pain, as both were downregulated. Furthermore, miR-137 is shown to be specific for non-peptidergic, non-myelinated nociceptors (C fibres) in DRG. As the ganglia consist of highly heterocellular tissue, I also developed a neuron-specific approach. Primarily damaged neurons were separated from intact adjacent neurons using fluorescence-activated cell sorting, and their gene expression pattern was analysed using a microarray. Thereby, information was obtained not only about mRNA expression in both groups but also, via bioinformatic tools, about miRNA involvement. The general expression pattern was consistent with previous findings. Still, several genes were found differentially expressed that had not been described in this context before. Among these are corticoliberin and cation-regulating proteins like Otopetrin 1. Bioinformatic data conformed, in part, to results from whole DRG, e.g. they implied a down-regulation of miR-124, -137, and -183.
However, these results were not significant.
In summary, I found that a) miRNA expression in DRG is influenced by nerve lesions typical of neuropathic pain and that b) these changes develop simultaneously to over-expression of galanin, a marker for neuronal damage. Furthermore, several miRNAs (miR-183, -137) exhibit distinct expression patterns in whole-DRG as well as in neuron-specific approaches. Therefore, further investigation of their possible role in initiation and maintenance of neuropathic pain seems promising.
Finally, the differential expression of genes such as corticoliberin or Otopetrin 1, previously not described in neuropathic pain, has already resulted in follow-up projects.
Classical novae are thermonuclear explosions occurring on the surface of white dwarfs.
When co-existing in a binary system with a main sequence or more evolved star, mass
accretion from the companion star to the white dwarf can take place if the companion
overflows its Roche lobe. The envelope of hydrogen-rich matter that builds up on
top of the white dwarf eventually ignites under degenerate conditions, leading to
a thermonuclear runaway and an explosion on the order of 10^46 erg, while leaving
the white dwarf intact. Spectral analyses from the debris indicate an abundance of
isotopes that are tracers of nuclear burning via the hot CNO cycle, which in turn
reveal some sort of mixing between the envelope and the white dwarf underneath.
The exact mechanism is still a matter of debate.
The convection and deflagration in novae develop in the low Mach number regime.
We used the Seven League Hydro code (SLH), which employs numerical schemes
designed to correctly simulate low Mach number flows, to perform two- and three-
dimensional simulations of classical novae. Based on a spherically symmetric model
created with the aid of a stellar evolution code, we developed our own nova model and
tested it on a variety of numerical grids and boundary conditions for validation. We
focused on the evolution of temperature, density and nuclear energy generation rate at
the layers between white dwarf and envelope, where most of the energy is generated,
to understand the structure of the transition region, and its effect on the nuclear
burning. We analyzed the resulting dredge-up efficiency stemming from the convective
motions in the envelope. Our models yield similar results to the literature, but seem
to depend very strongly on the numerical resolution. We followed the evolution of
the nuclear species involved in the CNO cycle and concluded that the thermonuclear
reactions primarily taking place are those of the cold and not the hot CNO cycle.
The reason behind this could be that under the conditions generally assumed for
multi-dimensional simulations, the envelope is in fact not degenerate. We performed
initial tests for 3D simulations and realized that alternative boundary conditions are
needed.
The enteric nervous system (ENS) innervates the gastrointestinal (GI) tract and controls central aspects of GI physiology including contractility of the intestinal musculature, glandular secretion and intestinal blood flow. The ENS is composed of neurons that conduct electrical signals and of enteric glial cells (EGCs). EGCs resemble central nervous system (CNS) astrocytes in their morphology and in the expression of shared markers such as the intermediate filament protein glial fibrillary acidic protein (GFAP). They are strategically located at the interface of ENS neurons and their effector cells to modulate intestinal motility, epithelial barrier stability and inflammatory processes. The specific contributions of EGCs to the maintenance of intestinal homeostasis are subject of current research.
From a clinical point of view, EGC involvement in pathophysiological processes such as intestinal inflammation is highly relevant. Like CNS astrocytes, EGCs can acquire a reactive, tissue-protective phenotype in response to intestinal injury. In patients with chronic inflammatory bowel diseases (IBD) such as Crohn's disease and ulcerative colitis, alterations in the EGC network are well known, particularly a differential expression of GFAP, which is a hallmark of reactive gliosis in the CNS.
With increasing recognition of the role of EGCs in intestinal health and disease comes the need to study the glial population in its complexity. The overall aim of this thesis was to comprehensively study EGCs with a focus on the reactive GFAP-expressing subpopulation under inflammatory conditions in vivo and in vitro. In a first step, a novel in vivo rat model of acute systemic inflammation mimicking sepsis was employed to investigate rapidly occurring responses of EGCs to inflammation. This study revealed that within a short time frame of a few hours, EGCs responded to the inflammation with an upregulation of Gfap gene expression. This inflammation-induced upregulation was confined to the myenteric plexus and varied in intensity along the intestinal rostro-caudal axis. This highly responsive myenteric GFAP-expressing EGC population was further characterized in vivo and in vitro using a transgenic mouse model (hGFAP-eGFP mice). Primary purified murine GFAP-EGC cultures were established in vitro, and it was assessed how the transcriptomic and proteomic profiles of these cells change upon inflammatory stimulation. Here, myenteric GFAP-EGCs were found to undergo a shift in gene expression profile that predominantly affects expression of genes associated with inflammatory responses. Further, secretion of inflammatory mediators was validated on the protein level. The GFAP+ subpopulation is hence an active participant in inflammatory pathophysiology. In an acute murine IBD model in vivo, GFAP-EGCs were found to express components of the major histocompatibility complex (MHC) class II in inflamed tissue, which also indicates a crosstalk of EGCs with the innate and the adaptive lamina propria immune system in acute inflammation.
Taken together, this work advances our knowledge on EGC (patho-)physiology by identifying and characterizing an EGC subpopulation rapidly responsive to inflammation. This study further provides the transcriptomic profile of this population in vivo and in vitro, which can be used to identify targets for therapeutic intervention. Due to the modulating influence of EGCs on the intestinal microenvironment, the study further underlines the importance of integrating EGCs into in vitro test systems that aim to model intestinal tissues in vitro and presents an outlook on a potential strategy.
The microbial communities that live inside the human gastrointestinal tract (the human gut
microbiome) are important for host health and wellbeing. Characterizing this new “organ”,
made up of as many cells as the human body itself, has recently become possible through
technological advances. Metagenomics, the high-throughput sequencing of DNA directly from
microbial communities, enables us to take genomic snapshots of thousands of microbes living
together in this complex ecosystem, without the need for isolating and growing them.
Quantifying the composition of the human gut microbiome allows us to investigate its
properties and connect it to host physiology and disease. The wealth of such connections was
unexpected and is probably still underestimated. Because most of our dietary as well as
medicinal intake affects the microbiome, and the microbiome itself interacts with our
immune system through a multitude of pathways, many mechanisms have been proposed to
explain the observed correlations, though most have yet to be understood in depth.
An obvious prerequisite to characterizing the microbiome and its interactions with the host is
the accurate quantification of its composition, i.e. determining which microbes are present and
in what numbers they occur. Standard practices for sample handling, DNA extraction
and data analysis have existed for many years. However, these were generally developed for
single-microbe cultures, and it is not always feasible to implement them in large-scale
metagenomic studies. Partly because of this and partly because of the excitement that new
technology brings about, the first metagenomic studies each took the liberty to define their own
approach and protocols. From early meta-analysis of these studies it became clear that the
differences in sample handling, as well as differences in computational approaches, made
comparisons across studies very difficult. This restricts our ability to cross-validate findings of
individual studies and to pool samples from larger cohorts. To address the pressing need for
standardization, we undertook an extensive comparison of 21 different DNA extraction methods
as well as a series of other sample manipulations that affect quantification. We developed a
number of criteria for determining the measurement quality in the absence of a mock
community and used these to propose best practices for sampling, DNA extraction and library
preparation. If these were to be accepted as standards in the field, it would greatly improve
comparability across studies, which would dramatically increase the power of our inferences
and our ability to draw general conclusions about the microbiome.
Most metagenomics studies involve comparisons between microbial communities, for example
between fecal samples from cases and controls. A multitude of approaches have been proposed
to calculate community dissimilarities (beta diversity) and they are often combined with
various preprocessing techniques. Direct metagenomics quantification usually counts
sequencing reads mapped to specific taxonomic units, which can be species, genera, etc. Due to
technology-inherent differences in sampling depth, normalizing counts is necessary, for
instance by dividing each count by the sum of all counts in a sample (i.e. total sum scaling), or by
subsampling. To derive a single value for community (dis-)similarity, multiple distance
measures have been proposed. Although it is theoretically difficult to benchmark these
approaches, we developed a biologically motivated framework in which distance measures can
be evaluated. This highlights the importance of data transformations and their impact on the
measured distances.
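The total sum scaling normalization and a representative distance measure described above can be sketched in plain Python (Bray-Curtis is chosen here as one commonly used dissimilarity, not as the thesis's full benchmarked set):

```python
def total_sum_scaling(counts):
    """Normalize raw read counts to relative abundances (TSS):
    divide each taxon's count by the sample's total count."""
    total = sum(counts)
    return [c / total for c in counts]

def bray_curtis(p, q):
    """Bray-Curtis dissimilarity between two abundance profiles:
    0 = identical composition, 1 = no shared taxa."""
    num = sum(abs(a - b) for a, b in zip(p, q))
    den = sum(a + b for a, b in zip(p, q))
    return num / den

# Two samples with different sequencing depths become comparable
# after normalization; their dissimilarity is then ~0.4.
sample_a = total_sum_scaling([10, 30, 60])
sample_b = total_sum_scaling([20, 20, 10])
print(bray_curtis(sample_a, sample_b))
```

The choice of transformation matters: applying the distance to raw counts instead of TSS-normalized abundances would conflate sequencing depth with biological differences, which is exactly the kind of effect the evaluation framework above is designed to expose.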
Building on our experience with accurate abundance estimation and data preprocessing
techniques, we can now try and understand some of the basic properties of microbial
communities. In 2011, it was proposed that the space of genus level variation of the human gut
microbial community is structured into three basic types, termed enterotypes. These were
described in a multi-country cohort, so as to be independent of geography, age and other host
properties. Operationally defined through a clustering approach, they are “densely populated
areas in a multidimensional space of community composition”(source) and were proposed as a
general stratifier for the human population. Later studies that applied this concept to other
datasets raised concerns about the optimum number of clusters and robustness of the
clustering approach. This heralded a long standing debate about the existence of structure and
the best ways to determine and capture it. Here, we reconsider the concept of enterotypes, in
the context of the vastly increased amounts of available data. We propose a refined framework
in which the different types should be thought of as weak attractors in compositional space and
we try to implement an approach to determining which attractor a sample is closest to. To this
end, we train a classifier on a reference dataset to assign membership to new samples. This way,
enterotype assignment is no longer dataset dependent and effects due to biased sampling are
minimized. Using a model in which we assume the existence of three enterotypes characterized
by the same driver genera, as originally postulated, we show the relevance of this stratification
and propose it to be used in a clinical setting as a potential marker for disease development.
Moreover, we believe that these attractors underlie different rules of community assembly and
we recommend they be accounted for when analyzing gut microbiome samples.
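Reference-based enterotype assignment, as described above, can be illustrated with a nearest-centroid sketch (the centroid values below are illustrative placeholders, not trained reference data; the three driver genera follow the original three-enterotype postulate):

```python
import math

# Hypothetical reference centroids over the three driver genera
# (Bacteroides, Prevotella, Ruminococcus), as relative abundances.
CENTROIDS = {
    "ET_Bacteroides": [0.6, 0.1, 0.1],
    "ET_Prevotella": [0.1, 0.6, 0.1],
    "ET_Ruminococcus": [0.1, 0.1, 0.5],
}

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def assign_enterotype(sample):
    """Assign a new sample to the closest reference centroid,
    i.e. the attractor it is nearest to in compositional space."""
    return min(CENTROIDS, key=lambda et: euclidean(sample, CENTROIDS[et]))

# A Bacteroides-dominated sample is assigned to that attractor.
print(assign_enterotype([0.5, 0.2, 0.05]))
```

Because the centroids are fixed by the reference dataset, two labs classifying the same sample obtain the same assignment, which is the key advantage over re-clustering each cohort from scratch.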
While enterotypes describe structure in the community at genus level, metagenomic sequencing
can in principle achieve single-nucleotide resolution, allowing us to identify single nucleotide
polymorphisms (SNPs) and other genomic variants in the gut microbiome. Analysis
methodology for this level of resolution has only recently been developed and little exploration
has been done to date. Assessing SNPs in a large, multinational cohort, we discovered that the
landscape of genomic variation seems highly structured even beyond species resolution,
indicating that clearly distinguishable subspecies are prevalent among gut microbes. In several
cases, these subspecies exhibit geo-stratification, with some subspecies only found in the
Chinese population. Generally however, they present only minor dispersion limitations and are
seen across most of our study populations. Within one individual, one subspecies is commonly
found to dominate and only rarely are several subspecies observed to co-occur in the same
ecosystem. Analysis of longitudinal data indicates that the dominant subspecies remains stable
over periods of more than three years. When interrogating their functional properties we find
many differences, with specific ones appearing relevant to the host. For example, we identify a
subspecies of E. rectale that is lacking the flagellum operon and find its presence to be
significantly associated with lower body mass index and lower insulin resistance in its hosts;
it also correlates with higher microbial community diversity. These associations could not be
seen at the species level (where multiple subspecies are convoluted), which illustrates the
importance of this increased resolution for a more comprehensive understanding of microbial
interactions within the microbiome and with the host.
Taken together, our results provide a rigorous basis for performing comparative metagenomics
of the human gut, encompassing recommendations for both experimental sample processing
and computational analysis. We furthermore refine the concept of community stratification into
enterotypes, develop a reference-based approach for enterotype assignment and provide
compelling evidence for their relevance. Lastly, by harnessing the full resolution of
metagenomics, we discover a highly structured genomic variation landscape below the
microbial species level and identify common subspecies of the human gut microbiome. By
developing these high-precision metagenomics analysis tools, we thus hope to contribute to a
greatly improved understanding of the properties and dynamics of the human gut microbiome.
The present thesis considers the development and analysis of arbitrary Lagrangian-Eulerian
discontinuous Galerkin (ALE-DG) methods with time-dependent approximation spaces for
conservation laws and the Hamilton-Jacobi equations.
Fundamentals about conservation laws, Hamilton-Jacobi equations and discontinuous Galerkin
methods are presented. In particular, issues in the development of discontinuous Galerkin (DG)
methods for the Hamilton-Jacobi equations are discussed.
The development of the ALE-DG methods is based on the assumption that the distribution of
the grid points is explicitly given for an upcoming time level. This assumption allows the construction of a time-dependent local affine linear mapping to a reference cell and a time-dependent
finite element test function space. In addition, a version of Reynolds’ transport theorem can be
proven.
For the fully-discrete ALE-DG method for nonlinear scalar conservation laws the geometric
conservation law and a local maximum principle are proven. Furthermore, conditions for slope
limiters are stated. These conditions ensure the total variation stability of the method. In addition, entropy stability is discussed. For the corresponding semi-discrete ALE-DG method,
error estimates are proven. If a piecewise $\mathcal{P}^{k}$ polynomial approximation space is used on the reference cell, the sub-optimal $\left(k+\frac{1}{2}\right)$ convergence for monotone fluxes and the optimal $(k+1)$ convergence for an upwind flux are proven in the $\mathrm{L}^{2}$-norm. The capability of the method is shown by numerical examples for nonlinear conservation laws.
Likewise, for the semi-discrete ALE-DG method for nonlinear Hamilton-Jacobi equations, error
estimates are proven. In the one-dimensional case the optimal $\left(k+1\right)$ convergence and in the two-dimensional case the sub-optimal $\left(k+\frac{1}{2}\right)$ convergence are proven in the $\mathrm{L}^{2}$-norm, if a piecewise $\mathcal{P}^{k}$ polynomial approximation space is used on the reference cell. For the fully-discrete method, the geometric conservation law is proven, and for the piecewise constant forward Euler step the convergence of the method to the unique physically relevant solution is discussed.
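The convergence rates stated above can be summarized schematically as follows (a restatement under the stated assumptions, with $h$ the mesh width and $C$ a constant independent of $h$; this is an illustrative summary, not a quoted theorem):

```latex
\| u(T) - u_h(T) \|_{\mathrm{L}^{2}} \le
\begin{cases}
C\, h^{k+\frac{1}{2}}, & \text{monotone fluxes (conservation laws); 2D Hamilton-Jacobi},\\
C\, h^{k+1}, & \text{upwind flux (conservation laws); 1D Hamilton-Jacobi},
\end{cases}
```

where $u_h$ denotes the semi-discrete ALE-DG approximation in the piecewise $\mathcal{P}^{k}$ space on the reference cell.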
Gambling is a popular activity in Germany, with 40% of a representative sample reporting having gambled at least once in the past year (Bundeszentrale für gesundheitliche Aufklärung, 2014). While the majority of gamblers show harmless gambling behavior, a subset develops serious problems due to their gambling, affecting their psychological well-being, social life and work. According to recent estimates, up to 0.8% of the German population are affected by such pathological gambling. People in general and pathological gamblers in particular show several cognitive distortions, that is, misconceptions about the chances of winning and skill involvement, in gambling. The current work aimed at elucidating the biopsychological basis of two such kinds of cognitive distortions, the illusion of control and the gambler’s and hot hand fallacies, and their modulation by gambling problems. Therefore, four studies were conducted assessing the processing of near outcomes (used as a proxy for the illusion of control) and outcome sequences (used as a proxy for the gambler’s and hot hand fallacies) in samples of varying degrees of gambling problems, using a multimethod approach.
The first study analyzed the processing and evaluation of near outcomes as well as choice behavior in a wheel of fortune paradigm using electroencephalography (EEG). To assess the influence of gambling problems, a group of problem gamblers was compared to a group of controls. The results showed that there were no differences in the processing of near outcomes between the two groups. Near compared to full outcomes elicited smaller P300 amplitudes. Furthermore, at a trend level, the choice behavior of participants showed signs of a pattern opposite to the gambler’s fallacy, with longer runs of an outcome color leading to increased probabilities of choosing this color again on the subsequent trial. Finally, problem gamblers showed smaller feedback-related negativity (FRN) amplitudes relative to controls.
The second study also targeted the processing of near outcomes in a wheel of fortune paradigm, this time using functional magnetic resonance imaging and a group of participants with varying degrees of gambling problems. The results showed increased activity in the bilateral superior parietal cortex following near compared to full outcomes.
The third study examined the peripheral physiology reactions to near outcomes in the wheel of fortune. Heart period and skin conductance were measured while participants with varying degrees of gambling problems played on the wheel of fortune. Near compared to full outcomes led to increased heart period duration shortly after the outcome. Furthermore, heart period reactions and skin conductance responses (SCRs) were modulated by gambling problems. Participants with high relative to low levels of gambling problems showed increased SCRs to near outcomes and similar heart period reactions to near outcomes and full wins.
The fourth study analyzed choice behavior and sequence effects in the processing of outcomes in a coin toss paradigm using EEG in a group of problem gamblers and controls. Again, problem gamblers showed generally smaller FRN amplitudes compared to controls. There were no differences between groups in the processing of outcome sequences. The break of an outcome streak led to increased power in the theta frequency band. Furthermore, the P300 amplitude was increased after a sequence of previous wins. Finally, problem gamblers compared to controls showed a trend of switching the outcome symbol relative to the previous outcome symbol more often.
In sum, the results point towards differences in the processing of near compared to full outcomes in brain areas and measures implicated in attentional and salience processes. The processing of outcome sequences involves processes of salience attribution and violation of expectations. Furthermore, problem gamblers seem to process near outcomes as more win-like compared to controls. The results and their implications for problem gambling as well as further possible lines of research are discussed.
Spermiogenesis describes the differentiation of haploid germ cells into motile, fertilization-competent spermatozoa. During this fundamental transition the species-specific sperm head is formed, which necessitates profound nuclear restructuring coincident with the assembly of sperm-specific structures and chromatin compaction. In the case of the mouse, it is characterized by reshaping of the early round spermatid nucleus into an elongated sickle-shaped sperm head. This tremendous shape change requires the transduction of cytoskeletal forces onto the nuclear envelope (NE) or even further into the nuclear interior. LINC (linkers of nucleoskeleton and cytoskeleton) complexes might be involved in this process, due to their general function in bridging the NE and thereby physically connecting the nucleus to the peripheral cytoskeleton.
LINC complexes consist of inner nuclear membrane integral SUN-domain proteins and outer nuclear membrane KASH-domain counterparts. SUN- and KASH-domain proteins are directly connected to each other within the perinuclear space, and are thus capable of transferring forces across the NE. To date, these protein complexes are known for their essential functions in nuclear migration, anchoring and positioning of the nucleus, and even for chromosome movements and the maintenance of cell polarity and nuclear shape.
In this study LINC complexes were investigated with regard to their potential role in sperm head formation, in order to gain further insight into the processes occurring during spermiogenesis. To this end, the behavior and function of the testis-specific SUN4 protein was studied. The SUN-domain protein SUN4, which had received limited characterization prior to this work, was found to be exclusively expressed in haploid stages during germ cell development. In these cell stages, it specifically localized to the posterior NE at regions decorated by the manchette, a spermatid-specific structure which was previously shown to be involved in nuclear shaping. Mice deficient for SUN4 exhibited severely disorganized manchette residues and gravely misshapen sperm heads. These defects resulted in a globozoospermia-like phenotype and male mice infertility. Therefore, SUN4 was not only found to be mandatory for the correct assembly and anchorage of the manchette, but also for the correct localization of SUN3 and Nesprin1, as well as of other NE components. Interaction studies revealed that SUN4 had the potential to interact with SUN3, Nesprin1, and itself, and as such is likely to build functional LINC complexes that anchor the manchette and transfer cytoskeletal forces onto the nucleus.
Taken together, the severe impact of SUN4 deficiency on the nucleocytoplasmic junction during sperm development provided direct evidence for a crucial role of SUN4 and other LINC complex components in mammalian sperm head formation and fertility.
Software frameworks for Realtime Interactive Systems (RIS), e.g., in the areas of Virtual, Augmented, and Mixed Reality (VR, AR, and MR) or computer games, facilitate a multitude of functionalities by coupling diverse software modules. In this context, no uniform methodology for coupling these modules exists; instead, various purpose-built solutions have been proposed. As a consequence, important software qualities, such as maintainability, reusability, and adaptability, are impeded.
Many modern systems provide additional support for the integration of Artificial Intelligence (AI) methods to create so-called intelligent virtual environments. These methods exacerbate the above-mentioned problem of coupling software modules in the thus created Intelligent Realtime Interactive Systems (IRIS) even more. This is due, on the one hand, to the commonly applied specialized data structures and asynchronous execution schemes, and, on the other, to the requirement for high consistency between content-wise coupled but functionally decoupled forms of data representation.
This work proposes an approach to decoupling software modules in IRIS, which is based on the abstraction of architecture elements using a semantic Knowledge Representation Layer (KRL). The layer facilitates decoupling the required modules, provides a means for ensuring interface compatibility and consistency, and in the end constitutes an interface for symbolic AI methods.
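The decoupling idea can be illustrated with a minimal semantic-blackboard sketch: modules never call each other directly but assert and query (subject, predicate, object) facts through a shared layer (all class and fact names here are hypothetical illustrations, not the thesis's actual KRL design):

```python
class KnowledgeRepresentationLayer:
    """Minimal semantic blackboard: modules exchange
    (subject, predicate, object) triples instead of holding
    direct references to one another."""

    def __init__(self):
        self.triples = set()

    def assert_fact(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return all triples matching the pattern (None = wildcard)."""
        return [t for t in self.triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

# A simulation module asserts world state; an AI planner queries it.
# Neither module imports or references the other.
krl = KnowledgeRepresentationLayer()
krl.assert_fact("avatar", "isAt", "room1")
print(krl.query(predicate="isAt"))
```

Because both producers and consumers depend only on the shared vocabulary of predicates, modules can be replaced independently, and symbolic AI methods can operate directly on the triple store.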
Adjuvants are compounds added to an agrochemical spray formulation to improve or modify the action of an active ingredient (AI) or the physico-chemical characteristics of the spray liquid. Adjuvants can have more than one distinct mode of action (MoA) during the foliar spray application process, and they are generally known to be the best tools to improve agrochemical formulations. The main objective of this work was to elucidate the basic MoA of adjuvants by uncoupling different aspects of the spray application. Laboratory experiments, beginning with retention and spreading characteristics, followed by humectant effects concerning the spray deposit on the leaf surface and ultimately the cuticular penetration of an AI, were carried out to evaluate overall in vivo effects of adjuvants, which were also obtained in a greenhouse spray test. For this comprehensive study, the surfactant classes of non-ionic sorbitan esters (Span), polysorbates (Tween) and oleyl alcohol polyglycol ether (Genapol O) were generally considered because of their common promoting potential in agrochemical formulations and their structural diversity.
The reduction of interfacial tension is one of the most crucial physico-chemical properties of surfactants. The dynamic surface tension (DST) was monitored to characterise the surface tension lowering behaviour which is known to influence the droplet formation and retention characteristics. The DST is a function of time and the critical time frame of droplet impact might be at about 100 ms. None of the selected surfactants were found to lower the surface tension sufficiently during this short timeframe (chapter I). At ca. 100 ms, Tween 20 resulted in the lowest DST value. When surfactant monomers are fully saturated at the droplet-air interface, an equilibrium surface tension (STeq) value can be determined which may be used to predict spreading or run-off effects. The majority of selected surfactants resulted in a narrow distribution of STeq values, ranging between 30 and 45 mN m^-1. Nevertheless, all surfactants were able to decrease the surface tension considerably compared to pure water (72 mN m^-1). The influence of different surfactants on the wetting process was evaluated by studying time-dependent static contact angles on different surfaces and the droplet spread area on Triticum aestivum leaves after water evaporation. The spreading potential was observed to be better for Spans than for Tweens. Especially Span 20 showed maximum spreading results. To transfer laboratory findings to spray application, related to field conditions, retention and leaf coverage was measured quantitatively on wheat leaves by using a variable track sprayer. Since the retention process involves short time dynamics, it is well-known that the spray retention on a plant surface is not correlated to STeq but to DST values. The relationship between DST at ca. 100 ms and results from the track sprayer showed increasing retention results with decreasing DST, whereas at DST values below ca. 60 mN m^-1 no further retention improvement could be observed.
Under field conditions, water evaporates from the droplet within a few seconds to minutes after droplet deposition on the leaf surface. Since precipitation of the AI must essentially be avoided by holding the AI in solution, so-called humectants are used as tank-mix adjuvants. The ability of pure surfactants to absorb water from the surrounding atmosphere was investigated comprehensively by analysing water sorption isotherms (chapter II). These isotherms showed an exponential shape with a steep water sorption increase starting at 60% to 70% RH. Water sorption was low for Spans and much more distinct for the polyethoxylated surfactants (Tweens and the Genapol O series). The relationship between the water sorption behaviour and the molecular structure of surfactants was considered as the so-called humectant activity. With increasing ethylene oxide (EO) content, the humectant activity increased within the Genapol O class. However, it could be shown that the moisture absorption across all classes of selected surfactants correlates better with their hydrophilic-lipophilic balance values than with the EO content.
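The hydrophilic-lipophilic balance (HLB) referred to here can, for non-ionic surfactants, be estimated with Griffin's method, HLB = 20 · Mh/M, where Mh is the molar mass of the hydrophilic portion and M that of the whole molecule. A small sketch with purely illustrative molar masses (not measured values from this work):

```python
def griffin_hlb(hydrophilic_mass, total_mass):
    """Griffin's HLB estimate for non-ionic surfactants:
    20 times the hydrophilic mass fraction of the molecule.
    Scale runs from 0 (fully lipophilic) to 20 (fully hydrophilic)."""
    return 20.0 * hydrophilic_mass / total_mass

# Illustrative values: a surfactant whose hydrophilic (ethoxylated)
# part accounts for 60% of its molar mass has HLB = 12.
print(griffin_hlb(600.0, 1000.0))
```

This makes the correlation noted above plausible: the HLB captures the hydrophilic mass fraction of the whole molecule, whereas the EO count alone ignores how large the lipophilic tail is.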
All aboveground organs of plants are covered by the cuticular membrane, which is therefore the first rate-limiting barrier for AI uptake. In vitro penetration experiments through an astomatous model cuticle were performed to study the effects of adjuvants on the penetration of the lipophilic herbicide Pinoxaden (PXD) (chapter III). In order to disentangle different adjuvant modes of action (MoA) such as humectancy, the experiments were performed at three different humidity levels. No clear relationship could be found between humidity level and PXD penetration, which might be explained by the fact that humidity effects would affect hydrophilic AIs more than lipophilic ones. Especially for Tween 20, it became obvious that a complex balance between multiple MoA, such as spreading, humectancy and plasticising effects, has to be considered.
Greenhouse trials focussing on the adjuvant impact on the in vivo action of PXD were evaluated on five different grass-weed species (chapter III). Since agrochemical spray application and its subsequent action on living plants also include translocation processes in planta and species-dependent physiological effects, this investigation may help to simulate the situation in the field. Even though the absolute weed damage differed, depending both on plant species and on PXD rates, the adjuvant effects in the greenhouse experiments displayed the same ranking as in the cuticular penetration studies: Tween 20 > Tween 80 > Span 20 ≥ Span 80.
Thus, the present work shows for the first time that findings on adjuvant MoA obtained in laboratory experiments can be successfully transferred to spray application studies on living plants. A comparative analysis using radar charts demonstrated systematic relationships between the structural similarities of adjuvants and their MoA (summarising discussion and outlook). For example, Tween 20 and Tween 80 cover a wide range of the selected variables without any single outstanding MoA improving one distinct process during foliar application, whereas the non-ethoxylated Span 20 and Span 80 primarily revealed a surface-active action. Most adjuvants used in this study are polydisperse mixtures bearing a complex distribution of EO and aliphatic chains. From this study it appears that adjuvants with a wide EO distribution offer broader potential than adjuvants with a narrow EO distribution. It may be speculated that this broad distribution of single molecules, each with its individual physico-chemical nature, covers a wide range of properties concerning their MoA.
Mathematical modelling, simulation, and optimisation are core methodologies for future
developments in engineering, the natural sciences, and the life sciences. This work applies these
mathematical techniques to biological processes, with a focus on the wine
fermentation process, which is chosen as a representative model.
In the literature, basic models for the wine fermentation process consist of a system of
ordinary differential equations. They model the evolution of the yeast population number
as well as the concentrations of assimilable nitrogen, sugar, and ethanol. In this thesis,
the concentration of molecular oxygen is also included in order to model the change of
the metabolism of the yeast from an aerobic to an anaerobic one. Further, a more sophisticated
toxicity function is used. It provides simulation results that match experimental
measurements better than a linear toxicity model. Moreover, a further equation for the
temperature plays a crucial role in this work as it opens a way to influence the fermentation
process in a desired way by changing the temperature of the system via a cooling
mechanism. From the view of the wine industry, it is necessary to cope with large scale
fermentation vessels, where spatial inhomogeneities of concentrations and temperature
are likely to arise. Therefore, a system of reaction-diffusion equations is formulated in
this work, which acts as an approximation for a model including computationally very
expensive fluid dynamics.
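As a concrete illustration of the kind of basic model the text refers to, the following sketch integrates a four-state ODE system (yeast, nitrogen, sugar, ethanol) with a simple linear toxicity term. All rate constants and initial values are illustrative placeholders; the oxygen and temperature equations developed in the thesis, and its more sophisticated toxicity function, are deliberately omitted.

```python
# Minimal four-state wine fermentation model (yeast X, nitrogen N, sugar S,
# ethanol E) in the spirit of the basic literature models, integrated with
# an explicit Euler scheme. All rate constants and initial values are
# illustrative placeholders, and the toxicity is kept linear in E.

MU_MAX, K_N = 0.2, 0.05               # max growth rate (1/h), N half-saturation
BETA_MAX, K_S = 1.0, 10.0             # max sugar uptake, sugar half-saturation
K_D, K_NUP, Y_ES = 0.001, 0.01, 0.47  # toxicity, N uptake, ethanol yield

def rhs(X, N, S, E):
    mu = MU_MAX * N / (K_N + N)       # nitrogen-limited specific growth
    beta = BETA_MAX * S / (K_S + S)   # per-cell sugar consumption
    return (mu * X - K_D * E * X,     # growth minus linear ethanol toxicity
            -K_NUP * mu * X,          # nitrogen uptake tied to growth
            -beta * X,                # sugar consumption
            Y_ES * beta * X)          # ethanol production

X, N, S, E = 0.1, 0.2, 200.0, 0.0     # initial state in g/L (illustrative)
dt, steps = 0.01, 30000               # 300 h of fermentation
for _ in range(steps):                # explicit Euler time stepping
    dX, dN, dS, dE = rhs(X, N, S, E)
    X, N, S, E = X + dt * dX, N + dt * dN, S + dt * dS, E + dt * dE

print(f"final sugar {S:.2f} g/L, ethanol {E:.2f} g/L")
```

With these placeholder values the sugar is essentially exhausted and the ethanol approaches the mass-balance limit Y_ES·S0 ≈ 94 g/L; replacing the linear term K_D·E·X by a more sophisticated toxicity function is exactly the refinement discussed above.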
In addition to the modelling issues, an optimal control problem for the proposed
reaction-diffusion fermentation model with temperature boundary control is presented
and analysed. Variational methods are used to prove the existence of unique weak solutions
to this non-linear problem. In this framework, it is possible to exploit the Hilbert
space structure of state and control spaces to prove the existence of optimal controls.
Additionally, first-order necessary optimality conditions are presented. They characterise
controls that minimise an objective functional designed to reduce the final
sugar concentration. A numerical experiment shows that the final sugar concentration
can indeed be reduced by a suitably chosen temperature control.
The second part of this thesis deals with the identification of an unknown function
that participates in a dynamical model. For models with ordinary differential equations,
where parts of the dynamic cannot be deduced due to the complexity of the underlying
phenomena, a minimisation problem is formulated. By minimising the deviations between simulation
results and measurements, the best possible function from a trial function space
is found. The analysis of this function identification problem covers the proof of the
differentiability of the function–to–state operator, the existence of minimisers, and the
sensitivity analysis by means of the data–to–function mapping. Moreover, the presented
function identification method is extended to stochastic differential equations. Here, the
objective functional consists of the difference of measured values and the statistical expected
value of the stochastic process solving the stochastic differential equation. Using a
Fokker-Planck equation that governs the probability density function of the process, the
probabilistic problem of simulating a stochastic process is cast to a deterministic partial
differential equation. Proofs of unique solvability of the forward equation, the existence of
minimisers, and first-order necessary optimality conditions are presented. The application
of the function identification framework to the wine fermentation model aims at finding
the shape of the toxicity function and is carried out for the deterministic as well as the
stochastic case.
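The function identification idea can be sketched in a toy setting: an unknown function f in dy/dt = -f(y) is sought in a small trial space f(y) = a·y + b·y², choosing (a, b) to minimise the squared deviation between simulated and measured values. The model, trial space, and synthetic data below are illustrative assumptions only; the thesis applies the same principle to the toxicity function of the wine fermentation model.

```python
# Toy function identification: recover the unknown f in dy/dt = -f(y)
# from a trial space f(y) = a*y + b*y**2 by least-squares fitting of
# simulated trajectories to (here: synthetic) measurements.

def simulate(a, b, y0=1.0, dt=0.01, steps=500):
    """Explicit Euler trajectory of dy/dt = -(a*y + b*y**2)."""
    ys = [y0]
    for _ in range(steps):
        y = ys[-1]
        ys.append(y - dt * (a * y + b * y * y))
    return ys

measurements = simulate(0.5, 0.3)[::50]         # synthetic data, true (a, b)

def objective(a, b):
    """Sum of squared deviations between simulation and measurements."""
    return sum((s - m) ** 2 for s, m in zip(simulate(a, b)[::50], measurements))

grid = [round(0.05 * i, 2) for i in range(21)]  # trial coefficients 0.00..1.00
a_hat, b_hat = min(((a, b) for a in grid for b in grid),
                   key=lambda p: objective(*p))
print(a_hat, b_hat)                             # → 0.5 0.3
```

The brute-force grid search stands in for the gradient-based methods the analysis above justifies (via differentiability of the function-to-state operator); the structure of the objective is the same.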
Small satellites contribute significantly to the rapid innovation in space engineering, in particular to distributed space systems for global Earth observation and communication services. Significant mass reduction through miniaturization, increased use of commercial high-tech components, and in particular standardization are the key drivers of modern miniature space technology.
This thesis addresses key fields in research and development on miniature satellite technology regarding efficiency, flexibility, and robustness. Here, these challenges are addressed by the University of Wuerzburg’s advanced pico-satellite bus, realizing a generic modular satellite architecture and standardized interfaces for all subsystems. The modular platform ensures reusability, scalability, and increased testability due to its flexible subsystem interface which allows efficient and compact integration of the entire satellite in a plug-and-play manner.
Besides systematic design for testability, a high degree of operational robustness is achieved by the consistent implementation of redundancy for crucial subsystems, combined with efficient fault detection, isolation and recovery mechanisms. Thus, the UWE-3 platform, and in particular its on-board data handling system and electrical power system, offers one of the most efficient pico-satellite architectures launched in recent years and provides a solid basis for future extensions.
The in-orbit performance of the pico-satellite UWE-3 is presented, summarizing successful operations since its launch in 2013. Several software extensions and adaptations have been uploaded to UWE-3, increasing its capabilities. Thus, a very flexible platform for in-orbit software experiments and for the evaluation of innovative concepts was provided and tested.
Amyotrophic lateral sclerosis and spinal muscular atrophy are the two most common motoneuron diseases. Both are characterized by destabilization of axon terminals, axon degeneration and alterations in the neuronal cytoskeleton. Accumulation of neurofilaments has been observed in several neurodegenerative diseases, but the mechanisms by which elevated neurofilament levels destabilize axons have so far been unknown. Here, I show that increased neurofilament expression in motor nerves of pmn mutant mice disturbs microtubule dynamics. Depletion of neurofilament by Nefl knockout increases the number and regrowth of microtubules in pmn mutant motoneurons and restores axon elongation. This effect is mediated by the interaction of neurofilament with the stathmin complex: depletion of neurofilament increases the stathmin-Stat3 interaction and stabilizes the microtubules. Consequently, axonal maintenance is improved and the pmn mutant mice survive longer. We propose that this mechanism could also be relevant for other neurodegenerative diseases in which neurofilament accumulation is a prominent feature.
Next, using the Smn-/-;SMN2 mouse as a model, the molecular mechanism behind synapse loss in SMA was studied. SMA is characterized by degeneration of lower α-motoneurons in the spinal cord; however, how the reduction of ubiquitously expressed SMN leads to motoneuron-specific degeneration remains unclear. SMN is involved in pre-mRNA splicing (Pellizzoni, Kataoka et al. 1998), and its deficiency in SMA affects the splicing machinery. Neuromuscular junction denervation precedes neurodegeneration in SMA, but there is no evidence of a link between aberrant splicing of transcripts downstream of Smn and the reduced presynaptic axon excitability observed in SMA. In this study, we observed that the expression and splicing of Nrxn2, which encodes a presynaptic protein, are affected in the SMA mouse, suggesting that Nrxn2 could be a candidate linking aberrant splicing to synaptic motoneuron defects in SMA.
At a hadron collider such as the LHC or the Tevatron, the production of a photon in association with a leptonically decaying vector boson represents an important class of processes. These processes stand out due to the very clean signal of a photon and two leptons. Furthermore, they
provide direct access to the photon–vector-boson couplings and thus an easy opportunity to test the
gauge sector of the Standard Model. Within the scope of this work we present a full calculation of the next-to-leading-order corrections, which include the O(αs) corrections of the strong interaction as well as the electroweak corrections of O(α), including all photon-induced contributions. For the construction of the matrix elements we use methods based on Feynman diagrams. The IR singularities are treated with the dipole subtraction technique. In order to separate photons from jets, a quark-to-photon fragmentation function à la Glover/Morgan or Frixione's cone isolation is employed. Moreover, two different scenarios for the charged leptons in the final state were considered. The first scenario, for dressed leptons, assumes that a charged lepton and a photon are recombined if they are collinear. The second scenario, for bare muons, assumes that leptons and photons can be separated in the detector even if they are collinear.
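The photon-lepton recombination used in a dressed-lepton definition can be sketched as follows. The recombination radius of 0.1 and the (E, px, py, pz) conventions are illustrative assumptions, not the parameters actually used in this work.

```python
import math

# Photon-lepton recombination sketch for a dressed-lepton scenario:
# merge the photon into the lepton if their angular separation
# Delta R = sqrt((Delta eta)^2 + (Delta phi)^2) is below a cut.
# Four-momenta are (E, px, py, pz); R_CUT = 0.1 is an illustrative value.

R_CUT = 0.1

def eta_phi(p):
    """Pseudorapidity and azimuthal angle of a four-momentum (E, px, py, pz)."""
    _, px, py, pz = p
    pabs = math.sqrt(px * px + py * py + pz * pz)
    eta = 0.5 * math.log((pabs + pz) / (pabs - pz))
    return eta, math.atan2(py, px)

def delta_r(p1, p2):
    eta1, phi1 = eta_phi(p1)
    eta2, phi2 = eta_phi(p2)
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
    return math.hypot(eta1 - eta2, dphi)

def dress(lepton, photon):
    """Return the dressed lepton: four-momenta are added if collinear."""
    if delta_r(lepton, photon) < R_CUT:
        return tuple(a + b for a, b in zip(lepton, photon))
    return lepton

# A photon exactly collinear with the lepton is recombined:
print(dress((50.0, 30.0, 0.0, 40.0), (5.0, 3.0, 0.0, 4.0)))
# → (55.0, 33.0, 0.0, 44.0)
```

In the bare-muon scenario described above, this recombination step is simply omitted, which is what makes the two lepton definitions differ in the collinear region.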
For our calculation we implemented all corrections into a flexible Monte Carlo program. Besides the computation of the total cross section, this program is also able to generate differential distributions of several experimentally motivated observables. Apart from the expected large electroweak corrections in the high-transverse-momentum regions and sizeable corrections in the resonance regions of the transverse or invariant masses, we found photon-induced corrections of up to several tens of percent at high transverse momenta. Within run I at the LHC at 7/8 TeV, the experimental accuracy for Vγ production was roughly 10%. Due to the higher luminosity of run II, this accuracy will improve to the level of a few percent, so that corrections of the same order in the theoretical predictions may become relevant. In this work we present results for the total cross section at the LHC at 7, 8 and 14 TeV and the corresponding distributions
for 14 TeV.