
Chlamydia trachomatis (Ct) is an obligate intracellular human pathogen. It causes blinding trachoma and sexually transmitted diseases (STDs) such as chlamydia, pelvic inflammatory disease and lymphogranuloma venereum. Ct has a unique biphasic developmental cycle and replicates in an intracellular vacuole called the inclusion. It exists in two forms: the infectious elementary body (EB) and the non-infectious reticulate body (RB). Ct is not easily amenable to genetic manipulation. Hence, to understand the infection process, it is crucial to study how the metabolic activity of Ct evolves in the host cell and what differential roles EB and RB play in Ct metabolism during infection. In addition, Ct is regularly found coinfecting with other pathogens in patients with STDs. The lack of powerful methods to culture Ct outside the host cell makes the detailed molecular mechanisms of coinfection difficult to study.
In this work, a genome-scale metabolic model with 321 metabolites and 277 reactions was first reconstructed to study the metabolic adaptation of Ct in the host cell during infection. The model yields 84 extreme pathways, and metabolic flux strength was then modelled for 20 hpi, 40 hpi and later time points based on a published proteomics dataset. The activities of key enzymes in the target pathways were further validated by RT-qPCR in both HeLa229 and HUVEC cell lines. This study suggests that Ct's major active pathways involve glycolysis, gluconeogenesis, glycerophospholipid biosynthesis and the pentose phosphate pathway, while Ct's incomplete tricarboxylic acid cycle and fatty acid biosynthesis are less active. The EB is more active in almost all of these carbohydrate pathways than the RB. The results suggest that the survival of Ct generally requires large amounts of acetyl-CoA from the host. In addition, both EB and RB can utilize folate biosynthesis to generate NAD(P)H but may use different pathways depending on the demand for ATP. When more ATP is available from both the host cell and Ct itself, the RB is more active, utilizing energy-providing chemicals generated by enzymes associated with nucleic acid metabolism. Folate formation also implies large glutamate consumption; glutamate is presumably converted from glutamine by glutamine-fructose-6-phosphate transaminase (glmS) and CTP synthase (pyrG).
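The constraint-based approach behind the extreme-pathway computation can be illustrated with a toy network. The sketch below sets up a hypothetical three-metabolite chain, imposes the steady-state constraint S v = 0, and maximizes the output flux with scipy's linear-programming routine; it stands in for, and is far smaller than, the 321-metabolite, 277-reaction Ct model of this work.

```python
# Minimal flux-balance sketch on a toy 3-metabolite network -- NOT the
# Ct model from this work, only an illustration of the steady-state
# constraint S v = 0 that underlies extreme-pathway and flux analysis.
import numpy as np
from scipy.optimize import linprog

# Reactions: R1: -> A, R2: A -> B, R3: B -> C, R4: C ->
# Rows = metabolites (A, B, C), columns = reactions.
S = np.array([
    [1, -1,  0,  0],   # A
    [0,  1, -1,  0],   # B
    [0,  0,  1, -1],   # C
])

# Maximise flux through R4 subject to S v = 0 and 0 <= v_i <= 10.
c = np.array([0.0, 0.0, 0.0, -1.0])          # linprog minimises, so negate
res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=[(0, 10)] * 4)

v = res.x
print(v)  # at the optimum all fluxes coincide: the single pathway A->B->C
```

Because the toy network is a single linear chain, its only extreme pathway carries equal flux through every reaction; real genome-scale models combine many such pathways.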
Then, RNA sequencing (RNA-seq) data analysis was performed in a coinfection study. The metatranscriptome from patient RNA-seq data provides a realistic overview. Thirteen patient samples were collected and sequenced by our collaborators: six male samples were obtained by urethral swab, and seven female samples were collected by cervicovaginal lavage. All samples were Neisseria gonorrhoeae (GC) positive, and half of them were coinfected with Ct. HISAT2 and StringTie were used for transcriptomic mapping and assembly, respectively, and differential expression analyses with DESeq2, Ballgown and Cuffdiff2 were run in parallel for comparison. Although the measured transcripts were not sufficient to assemble Ct's transcriptome, differential gene expression in both the host and GC was analyzed by comparing the Ct-positive group (Ct+) against the Ct-uninfected group. The results show that in the Ct+ group the host MHC class II immune response was strongly induced. Ct infection is associated with the regulation of DNA methylation, DNA double-strand damage and ubiquitination. The analysis also shows that Ct infection enhances host fatty acid beta-oxidation, thereby inducing mROS, while the host responds by reducing ceramide production and glycolysis. The coinfection upregulates GC's own ion transporters and amino acid uptake, while downregulating GC's restriction and modification systems. Meanwhile, GC mounts a nitrosative and oxidative stress response and increases its capacity for ferric uptake, especially in the Ct+ group compared to the Ct-uninfected group.
In conclusion, bioinformatic methods were used here to analyze the metabolism of Ct itself, and the responses of the host and of GC in a coinfection study with and without Ct. These methods provide metabolic and metatranscriptomic details for studying Ct metabolism during infection and Ct-associated coinfection in the human microbiota.

In this work, models of molecular networks consisting of ordinary differential equations are extended by terms that describe the interaction of the molecular network with the environment in which it is embedded. These terms model the effects of external stimuli on the molecular network. The usability of this extension is demonstrated with a model of a circadian clock that, extended with such terms, reproduces data from several experiments simultaneously.
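The kind of extension described above can be sketched in a few lines: an ODE system gains an additive stimulus term u(t) and is integrated with and without it. The two-variable damped oscillator and the pulse-shaped stimulus are illustrative choices, not the circadian-clock model of this work.

```python
# Sketch: extending a molecular-network ODE model with an external
# stimulus term.  Network and stimulus are hypothetical toy choices.
import numpy as np
from scipy.integrate import solve_ivp

def u(t):
    """External stimulus: a pulse acting between t = 2 and t = 4."""
    return 1.0 if 2.0 <= t <= 4.0 else 0.0

def rhs(t, x):
    """Toy network dynamics (damped oscillator) plus the stimulus term."""
    a, b = x
    return [-0.5 * a + b, -a - 0.5 * b + u(t)]

def rhs0(t, x):
    """The same network without the environmental interaction term."""
    a, b = x
    return [-0.5 * a + b, -a - 0.5 * b]

x0 = [1.0, 0.0]
sol  = solve_ivp(rhs,  (0.0, 6.0), x0, max_step=0.05)
sol0 = solve_ivp(rhs0, (0.0, 6.0), x0, max_step=0.05)

# The stimulus leaves a measurable imprint on the final state.
effect = abs(sol.y[1, -1] - sol0.y[1, -1])
print(f"stimulus effect on species b at t = 6: {effect:.3f}")
```

Comparing the stimulated and unstimulated trajectories is the basic experiment that the optimal-control framework below then inverts: given a desired effect, find the stimulus.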
Once the model including external stimuli is set up, a framework is developed to calculate external stimuli that have a predefined desired effect on the molecular network. For this purpose, the task of finding appropriate external stimuli is formulated as a mathematical optimal control problem, for which many solution methods are available. Several methods are discussed and worked out to calculate a solution of the corresponding optimal control problem. The application of the framework to find pharmacological intervention points or effective drug combinations is pointed out and discussed. Furthermore, the framework is related to existing network analysis tools, and their combination with the framework to find dedicated external stimuli is discussed.
The complete framework is verified with biological examples by comparing the calculated results with data from the literature. For this purpose, platelet aggregation is investigated based on a corresponding gene regulatory network, and associated receptors are detected. Furthermore, a transition from one type of T-helper cell to another is analyzed in a tumor setting, where missing agents are calculated to induce the corresponding switch in vitro. Next, a gene regulatory network of a cardiomyocyte is investigated, where it is shown how the presented framework can be used to compare different treatment strategies quantitatively with respect to their beneficial effects and side effects. Moreover, a constitutively activated signaling pathway, which causes deleterious effects, is modeled, and intervention points with corresponding treatment strategies are determined that steer the gene regulatory network from a pathological expression pattern back to a physiological one.

The culture of human induced pluripotent stem cells (hiPSCs) at large scale becomes feasible with the aid of scalable suspension setups in continuously stirred tank reactors (CSTRs). Suspension cultures of hiPSCs are characterized by the self-aggregation of single cells into macroscopic cell aggregates that increase in size over time. The development of these free-floating aggregates depends on the culture vessel and thus represents a novel process parameter that is of particular interest for scaling hiPSC suspension culture. Further, aggregates surpassing a critical size are prone to spontaneous differentiation or loss of cell viability. In this regard, and for the first time, a hiPSC-specific suspension culture unit was developed that utilizes in situ microscope imaging to monitor and characterize hiPSC aggregation in one specific CSTR setup to a statistically significant degree, while omitting the need for error-prone and time-intensive sampling. For this purpose, a small-scale CSTR system was designed and fabricated by fused deposition modeling (FDM) using an in-house 3D printer. To provide a suitable cell culture environment for the CSTR system and the in situ microscope, a custom-built incubator was constructed to accommodate all culture vessels and process control devices. Prior to manufacture, the CSTR design was characterized in silico for standard engineering parameters such as the specific power input, mixing time, and shear stress using computational fluid dynamics (CFD) simulations. The established computational model was successfully validated by comparing CFD-derived mixing time data to manual measurements. Proof of system functionality was provided in the context of long-term expansion (4 passages) of hiPSCs. Thereby, hiPSC aggregate size development was successfully tracked by in situ imaging of CSTR suspensions and subsequent automated image processing.
Further, the suitability of the developed hiPSC culture unit was proven by demonstrating the preservation of CSTR-cultured hiPSC pluripotency at the RNA level by qRT-PCR and PluriTest, and at the protein level by flow cytometry.
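The engineering parameters named above follow from textbook stirred-tank relations; a quick calculation of the specific power input P/V = Np * rho * N^3 * d^5 / V (turbulent regime) and the impeller tip speed pi * N * d is sketched below. All numerical values are generic placeholders, not the dimensions of the 3D-printed CSTR developed in this work.

```python
# Back-of-the-envelope stirred-tank parameters.  All values are
# illustrative placeholders, not the geometry of the CSTR in this work.
import math

N   = 60 / 60          # stirring rate [1/s] (60 rpm)
d   = 0.03             # impeller diameter [m]
V   = 125e-6           # working volume [m^3] (125 mL)
rho = 1000.0           # medium density [kg/m^3]
Np  = 2.0              # impeller power number (geometry dependent)

P         = Np * rho * N**3 * d**5        # power draw [W], turbulent regime
P_per_V   = P / V                         # specific power input [W/m^3]
tip_speed = math.pi * N * d               # impeller tip speed [m/s]

print(f"P/V = {P_per_V:.3f} W/m^3, tip speed = {tip_speed:.4f} m/s")
```

In practice such hand estimates only bracket the CFD results, since the power number itself depends on the impeller geometry and flow regime.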

The measurement of the mass of the $W$ boson is currently one of the most promising precision analyses of the Standard Model and could ultimately reveal a hint of new physics.
The mass of the $W$ boson is determined by comparing the $W$ boson, which cannot be reconstructed directly, to the $Z$ boson, where the full decay signature is available. With the help of Monte Carlo simulations one can extrapolate from the $Z$ boson to the $W$ boson.
Technically speaking, the measurement of the $W$ boson mass is performed by comparing data taken by the ATLAS experiment to a set of calibrated Monte Carlo simulations, which reflect different mass hypotheses.
A dedicated calibration of the reconstructed objects in the simulations is crucial for a high precision of the measured value.
The comparison of simulated $Z$ boson events to reconstructed $Z$ boson candidates in data allows one to derive event weights and scale factors for the calibration.
This thesis presents a new approach to reweighting the hadronic recoil in the simulations. The focus of the calibration is on the average hadronic activity, visible in the mean of the scalar sum of the hadronic recoil, $\Sigma E_T$, as a function of pileup. In contrast to the standard method, which directly reweights the scalar sum, the dependence on the transverse boson momentum is less strongly affected here.
The $\Sigma E_T$ distribution is modeled first in terms of its pileup dependence. Then, the remaining differences in the resolution of the vector sum of the hadronic recoil are scaled. This is done separately for the parallel and the perpendicular components of the hadronic recoil with respect to the reconstructed boson.
This calibration was developed for the dataset taken by the ATLAS experiment at a center-of-mass energy of $8\,\textrm{TeV}$ in 2012. In addition, the same reweighting procedure is applied to the recent dataset with a low pileup contribution, the \textit{lowMu} runs at $5\,\textrm{TeV}$ and at $13\,\textrm{TeV}$, taken by ATLAS in November 2017. The dedicated aspects of the reweighting procedure are presented in this thesis. This reweighting approach is shown to improve the agreement between data and the simulations effectively for all datasets.
The uncertainties of this reweighting approach, as well as the statistical errors, are evaluated for a $W$ mass measurement by a template fit to pseudodata for the \textit{lowMu} dataset. A first estimate of these uncertainties is given here. For the pfoEM algorithm, a statistical uncertainty of $17\,\text{MeV}$ for the $5\,\textrm{TeV}$ dataset and of $18\,\text{MeV}$ for the $13\,\textrm{TeV}$ dataset is found for the $W \rightarrow \mu \nu$ analysis. The systematic uncertainty introduced by the resolution scaling has the largest effect: a value of $15\,\text{MeV}$ is estimated for the $13\,\textrm{TeV}$ dataset in the muon channel.
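The template-fit principle used for the pseudodata studies can be illustrated with a toy example: pseudodata generated at one mass are compared to templates for a range of mass hypotheses via a chi-square, and the fitted mass is read off at the minimum. The Gaussian stand-in for the $m_T$ shape, the binning, and the event counts below are purely illustrative and bear no relation to the ATLAS distributions.

```python
# Toy template fit: scan a chi-square over mass hypotheses.  Shapes and
# numbers are illustrative only, not ATLAS distributions.
import numpy as np

rng = np.random.default_rng(1)
edges = np.linspace(60.0, 100.0, 41)          # toy "m_T" bins [GeV]

def template(m_w, n_events=1_000_000):
    """Simplistic smeared peak standing in for a simulated shape."""
    sample = rng.normal(m_w, 8.0, n_events)
    hist, _ = np.histogram(sample, bins=edges)
    return hist / hist.sum()

true_mass = 80.4
data = rng.multinomial(500_000, template(true_mass))  # pseudodata

masses = np.arange(80.0, 80.8001, 0.05)               # mass hypotheses
chi2 = []
for m in masses:
    t = template(m) * data.sum()                      # expected counts
    chi2.append(np.sum((data - t) ** 2 / np.maximum(t, 1.0)))

best = masses[int(np.argmin(chi2))]
print(f"fitted mass: {best:.2f} GeV")
```

In the real analysis the template spacing, the treatment of template statistics, and the chi-square definition all feed into the quoted statistical uncertainty; the toy only shows the scan-and-minimize structure.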

From the simplest single-celled organism to the most complex multicellular life forms, genetic information in the form of DNA represents the universal basis for all biological processes and thus for life itself. Maintaining the structural and functional integrity of the genome is therefore of paramount importance for every single cell. DNA itself, as an active and complex macromolecular structure, is both substrate and product of many of these biochemical processes. A cornerstone of DNA maintenance is thus the tight regulation of the multitude of reactions in DNA metabolism, repressing adverse side reactions and ensuring the integrity of DNA in sequence and function. The family of RecQ helicases has emerged as a vital class of enzymes that safeguard genomic integrity by operating in a versatile spectrum of nucleic acid metabolism processes, such as DNA replication, repair, recombination, transcription and telomere stability. RecQ helicases are ubiquitously expressed and conserved in all kingdoms of life. Human cells express five different RecQ enzymes, RecQ1, BLM, WRN, RecQ4 and RecQ5, which exhibit individual as well as overlapping functions in the maintenance of genomic integrity. Dysfunction of three human RecQ helicases, BLM, WRN and RecQ4, causes distinct heritable cancer susceptibility syndromes, supporting the theory that genomic instability is a molecular driving force of cancer development. However, owing to their inherent DNA-protective nature, RecQ helicases represent a double-edged sword in the maintenance of genomic integrity. While their activity in normal cells is essential to prevent carcinogenesis and cellular aging, cancer cells may exploit this DNA-protective function by overexpressing RecQ helicases, helping them to overcome the disadvantageous consequences of unchecked DNA replication while simultaneously gaining resistance against chemotherapeutic drugs.
Therefore, detailed knowledge of how RecQ helicases safeguard genomic integrity is required to understand their involvement in carcinogenesis and aging, setting the stage for the development of new strategies for the treatment of cancer.
The current study presents and discusses the first high-resolution X-ray structure of the human RecQ4 helicase. The structure encompasses the conserved RecQ4 helicase core, including a large fraction of its unique C-terminus. Our structural analysis of the RecQ4 model highlights distinctive differences and unexpected similarities to other, structurally conserved RecQ helicases and permits conclusions about the functional implications of the unique domains within the RecQ4 C-terminus. The biochemical characterization of various RecQ4 variants provides functional insights into the RecQ4 helicase mechanism, suggesting that RecQ4 might utilize an alternative DNA strand separation mechanism compared to other human RecQ family members. Finally, the RecQ4 model permits, for the first time, the analysis of multiple documented RecQ4 patient mutations at the atomic level and thus provides the basis for an advanced interpretation of particular structure-function relationships in RecQ4 pathogenesis.

This thesis deals with a new, so-called sequential quadratic Hamiltonian (SQH) iterative scheme to solve optimal control problems with differential models and cost functionals ranging from smooth to discontinuous and non-convex. This scheme is based on the Pontryagin maximum principle (PMP), which provides necessary optimality conditions for an optimal solution. In this framework, a Hamiltonian function is defined that attains its minimum pointwise at the optimal solution of the corresponding optimal control problem. In the SQH scheme, this Hamiltonian function is augmented by a quadratic penalty term in the difference between the current control function and the control function from the previous iteration. The heart of the SQH scheme is to minimize this augmented Hamiltonian function pointwise in order to determine a control update. Since the PMP does not require any differentiability with respect to the control argument, the SQH scheme can be used to solve optimal control problems with both smooth and non-convex or even discontinuous cost functionals. The main achievement of this thesis is the formulation of a robust and efficient SQH scheme and a framework in which the convergence analysis of the SQH scheme can be carried out. In this framework, convergence of the scheme means that the calculated solution fulfills the PMP condition. The governing differential models of the considered optimal control problems are ordinary differential equations (ODEs) and partial differential equations (PDEs). In the PDE case, elliptic and parabolic equations as well as the Fokker-Planck (FP) equation are considered. For both the ODE and the PDE cases, assumptions are formulated under which it can be proved that a solution to an optimal control problem has to fulfill the PMP. The obtained results are essential for the discussion of the convergence analysis of the SQH scheme. This analysis has two parts.
The first part is the well-posedness of the scheme, meaning that all steps of the scheme can be carried out and provide a result in finite time. The second part is the PMP consistency of the solution, meaning that the solution of the SQH scheme fulfills the PMP conditions. In the ODE case, the following results establish the well-posedness of the SQH scheme and the PMP consistency of the corresponding solution. Lemma 7 states the existence of a pointwise minimum of the augmented Hamiltonian. Lemma 11 proves the existence of a weight of the quadratic penalty term such that the minimization of the corresponding augmented Hamiltonian results in a control update that reduces the value of the cost functional. Lemma 12 states that the SQH scheme stops if an iterate is PMP optimal. Theorem 13 proves the cost-functional-reducing property of the SQH control updates. The main result is given in Theorem 14, which states the pointwise convergence of the SQH scheme towards a PMP-consistent solution. In this ODE framework, the SQH method is applied to two optimal control problems. The first is an optimal quantum control problem, where it is shown that the SQH method converges much faster to an optimal solution than a globalized Newton method. The second is an optimal tumor treatment problem with a system of coupled, highly non-linear state equations that describe the tumor growth. It is shown that the framework in which the convergence of the SQH scheme is proved is applicable to this highly non-linear case. Next, the case of PDE control problems is considered. First, a general framework is discussed in which a solution to the corresponding optimal control problem fulfills the PMP conditions. In this case, many theoretical estimates are presented in Theorem 59 and Theorem 64 to prove, in particular, the essential boundedness of the state and adjoint variables.
The steps of the convergence analysis of the SQH scheme are analogous to those of the ODE case and result in Theorem 27, which states the PMP consistency of the solution obtained with the SQH scheme. This framework is applied to different elliptic and parabolic optimal control problems, including linear and bilinear control mechanisms as well as non-linear state equations. Moreover, the SQH method is discussed for solving a state-constrained optimal control problem in an augmented formulation. In this case, it is shown in Theorem 30 that, as the weight of the augmentation term penalizing the violation of the state constraint increases, the measure of the state-constraint violation of the corresponding solution converges to zero. Furthermore, an optimal control problem with a non-smooth L\(^1\)-tracking term and a non-smooth state equation is investigated. For this purpose, an adjoint equation is defined and the SQH method is used to solve the corresponding optimal control problem. The final part of this thesis is devoted to a class of FP models related to specific stochastic processes. The discussion starts with a focus on random walks in which jumps are also included. This framework allows the derivation of a discrete FP model corresponding to a continuous FP model with jumps and with boundary conditions ranging from absorbing to totally reflecting. It also allows the consideration of drift control resulting from an anisotropic probability of the steps of the random walk. Thereafter, in the PMP framework, two drift-diffusion processes and the corresponding FP models with two different control strategies are considered for an optimal control problem with an expectation functional. In the first strategy the controls depend on time, and in the second the controls depend on space and time. In both cases, a solution to the corresponding optimal control problem is characterized by the PMP conditions, stated in Theorem 48 and Theorem 49.
The well-posedness of the SQH scheme is shown in both cases, and further conditions are discussed that ensure the convergence of the SQH scheme to a PMP-consistent solution. The case of a space- and time-dependent control strategy results in a special structure of the corresponding PMP conditions that is exploited in another solution method, the so-called direct Hamiltonian (DH) method.
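The core of the SQH iteration, pointwise minimization of the augmented Hamiltonian with an adaptively chosen penalty weight, can be sketched for a toy linear-quadratic ODE problem in which the pointwise minimizer is available in closed form. This sketch only mirrors the structure of the scheme; the problem data are hypothetical and it is not the implementation analyzed in this thesis.

```python
# SQH sketch for the toy problem
#     min J(u) = int_0^1 x^2 + alpha*u^2 dt,   x' = x + u,  x(0) = 1,
# with Hamiltonian H = x^2 + alpha*u^2 + p*(x + u) and augmented
# Hamiltonian H + eps*(u - u_old)^2, minimised pointwise in closed form.
import numpy as np

alpha, T, n = 0.1, 1.0, 200
dt = T / n

def forward(u):
    x = np.empty(n + 1); x[0] = 1.0
    for i in range(n):
        x[i + 1] = x[i] + dt * (x[i] + u[i])     # explicit Euler
    return x

def backward(x):
    p = np.zeros(n + 1)                          # terminal condition p(T) = 0
    for i in range(n, 0, -1):
        p[i - 1] = p[i] + dt * (2.0 * x[i] + p[i])   # p' = -(2x + p)
    return p

def cost(u):
    x = forward(u)
    return dt * np.sum(x[:-1] ** 2 + alpha * u ** 2)

u = np.zeros(n)                                  # control on intervals
eps, J = 1.0, cost(u)
for _ in range(100):
    p = backward(forward(u))
    # pointwise minimiser of alpha*u^2 + p*u + eps*(u - u_old)^2
    u_new = (2.0 * eps * u - p[:-1]) / (2.0 * (alpha + eps))
    J_new = cost(u_new)
    if J_new < J:                                # accept, relax the penalty
        u, J, eps = u_new, J_new, 0.8 * eps
    else:                                        # reject, tighten the penalty
        eps *= 2.0
print(f"final cost J = {J:.4f}")
```

The accept/reject step with the adaptive weight eps is what makes the iteration monotonically cost-reducing, mirroring the role the penalty weight plays in the convergence analysis above.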

This work deals with the development and application of novel quantum Monte Carlo methods to simulate fermion-boson models. Our developments are based on the path-integral formalism, in which the bosonic degrees of freedom are integrated out exactly to obtain a retarded fermionic interaction. We give an overview of three methods that can be used to simulate retarded interactions. In particular, we develop a novel quantum Monte Carlo method with global directed-loop updates that solves the autocorrelation problem of previous approaches and scales linearly with system size. We demonstrate its efficiency for the Peierls transition in the Holstein model and discuss extensions to other fermion-boson models as well as spin-boson models. Furthermore, we show how, with the help of generating functionals, bosonic observables can be recovered directly from the Monte Carlo configurations. This includes estimators for the boson propagator, the fidelity susceptibility, and the specific heat of the Holstein model. The algorithmic developments of this work allow us to study the specific heat of the spinless Holstein model over its entire parameter range. Its key features are explained from the single-particle spectral functions of electrons and phonons. In the adiabatic limit, the spectral properties are calculated exactly as a function of temperature using a classical Monte Carlo method and compared to results for the Su-Schrieffer-Heeger model.
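As a minimal illustration of the classical Monte Carlo treatment available in the adiabatic limit, the sketch below samples a single classical harmonic "phonon" coordinate with the Metropolis algorithm and checks equipartition, <V> = T/2, which corresponds to a specific-heat contribution of 1/2 per mode. The single-site model is a toy stand-in, not the lattice simulations of this work.

```python
# Metropolis Monte Carlo for one classical harmonic phonon coordinate.
# Toy stand-in for the classical MC used in the adiabatic limit.
import numpy as np

rng = np.random.default_rng(7)
w, T_temp = 1.0, 0.5                  # phonon frequency and temperature
beta = 1.0 / T_temp

def potential(x):
    return 0.5 * w**2 * x**2

x, samples = 0.0, []
for step in range(200_000):
    x_new = x + rng.uniform(-1.0, 1.0)            # symmetric proposal
    # Metropolis acceptance with the Boltzmann weight exp(-beta * V)
    if rng.random() < np.exp(-beta * (potential(x_new) - potential(x))):
        x = x_new
    if step >= 20_000:                            # discard burn-in
        samples.append(potential(x))

mean_V = np.mean(samples)
print(f"<V> = {mean_V:.3f}  (equipartition predicts T/2 = {T_temp / 2})")
```

For the full Holstein problem the same sampling idea is applied to entire phonon configurations, with the fermionic determinant evaluated for each configuration; only the classical phonon part is shown here.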