Extreme value theory aims at modeling extreme but rare events from a probabilistic point of view. It is well known that so-called generalized Pareto distributions, which are briefly reviewed in Chapter 1, are the only reasonable probability distributions suited for modeling observations above a high threshold, such as waves exceeding the height of a certain dike, earthquakes having at least a certain intensity, and, after applying a simple transformation, share prices falling below some low threshold. However, there are cases in which a generalized Pareto model might fail. Therefore, Chapter 2 derives certain neighborhoods of a generalized Pareto distribution and provides several statistical tests for these neighborhoods, where both the case of finite-dimensional data and that of continuous functions on [0,1] are considered. Using a notation based on so-called D-norms, it is shown that these tests consistently link both frameworks, the finite-dimensional and the functional one. Since the derivation of the asymptotic distributions of the test statistics requires certain technical restrictions, Chapter 3 analyzes these assumptions in more detail. In particular, it provides examples of distributions that satisfy the null hypothesis and of those that do not. Since continuous copula processes are crucial tools for the functional versions of the proposed tests, it is also discussed whether such copula processes actually exist for a given set of data. Moreover, some practical advice is given on how to choose the free parameters incorporated in the test statistics. Finally, a simulation study in Chapter 4 compares the three proposed test statistics with another test from the literature that has a similar null hypothesis. The thesis ends with a short summary of the results and an outlook on further open questions.
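As a point of reference (standard textbook material, not the thesis's specific notation), the family of generalized Pareto distributions mentioned above can be written as

```latex
W_\gamma(x) = 1 - (1 + \gamma x)^{-1/\gamma}, \qquad 1 + \gamma x > 0,\ x \ge 0,
```

with shape parameter \(\gamma \in \mathbb{R}\), the case \(\gamma = 0\) being read as the limit \(W_0(x) = 1 - e^{-x}\); exceedances over a high threshold are, after rescaling, approximately distributed according to some \(W_\gamma\).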
The present thesis considers the development and analysis of arbitrary Lagrangian-Eulerian
discontinuous Galerkin (ALE-DG) methods with time-dependent approximation spaces for
conservation laws and the Hamilton-Jacobi equations.
Fundamentals about conservation laws, Hamilton-Jacobi equations and discontinuous Galerkin
methods are presented. In particular, issues in the development of discontinuous Galerkin (DG)
methods for the Hamilton-Jacobi equations are discussed.
The development of the ALE-DG methods is based on the assumption that the distribution of
the grid points is explicitly given for an upcoming time level. This assumption allows the construction of a time-dependent local affine linear mapping to a reference cell and a time-dependent
finite element test function space. In addition, a version of Reynolds’ transport theorem can be
proven.
For the fully-discrete ALE-DG method for nonlinear scalar conservation laws the geometric
conservation law and a local maximum principle are proven. Furthermore, conditions for slope
limiters are stated. These conditions ensure the total variation stability of the method. In addition, entropy stability is discussed. For the corresponding semi-discrete ALE-DG method,
error estimates are proven. If a piecewise $\mathcal{P}^{k}$ polynomial approximation space is used on the reference cell, the sub-optimal $\left(k+\frac{1}{2}\right)$ convergence for monotone fluxes and the optimal $(k+1)$ convergence for an upwind flux are proven in the $\mathrm{L}^{2}$-norm. The capability of the method is shown by numerical examples for nonlinear conservation laws.
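For orientation (standard background, not the thesis's notation), the scalar conservation laws referred to here are equations of the form

```latex
u_t + f(u)_x = 0, \qquad u(x, 0) = u_0(x),
```

and the quoted error estimates are of the type \(\lVert u - u_h \rVert_{\mathrm{L}^2} \le C h^{k+\frac{1}{2}}\) for monotone fluxes and \(\lVert u - u_h \rVert_{\mathrm{L}^2} \le C h^{k+1}\) for an upwind flux, with \(h\) the mesh width and \(k\) the polynomial degree.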
Likewise, for the semi-discrete ALE-DG method for nonlinear Hamilton-Jacobi equations, error
estimates are proven. In the one-dimensional case the optimal $\left(k+1\right)$ convergence and in the two-dimensional case the sub-optimal $\left(k+\frac{1}{2}\right)$ convergence are proven in the $\mathrm{L}^{2}$-norm, if a piecewise $\mathcal{P}^{k}$ polynomial approximation space is used on the reference cell. For the fully-discrete method, the geometric conservation law is proven, and for the piecewise constant forward Euler step the convergence of the method to the unique physically relevant solution is discussed.
Mathematical modelling, simulation, and optimisation are core methodologies for future
developments in engineering, natural, and life sciences. This work aims at applying these
mathematical techniques in the field of biological processes with a focus on the wine
fermentation process that is chosen as a representative model.
In the literature, basic models for the wine fermentation process consist of a system of
ordinary differential equations. They model the evolution of the yeast population number
as well as the concentrations of assimilable nitrogen, sugar, and ethanol. In this thesis,
the concentration of molecular oxygen is also included in order to model the change of
the metabolism of the yeast from an aerobic to an anaerobic one. Further, a more sophisticated
toxicity function is used. It provides simulation results that match experimental
measurements better than a linear toxicity model. Moreover, a further equation for the
temperature plays a crucial role in this work as it opens a way to influence the fermentation
process in a desired way by changing the temperature of the system via a cooling
mechanism. From the view of the wine industry, it is necessary to cope with large scale
fermentation vessels, where spatial inhomogeneities of concentrations and temperature
are likely to arise. Therefore, a system of reaction-diffusion equations is formulated in
this work, which acts as an approximation for a model including computationally very
expensive fluid dynamics.
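The basic structure of such an ODE fermentation model can be sketched as follows. This is a heavily simplified, hypothetical toy model with made-up parameter values; the model in this thesis additionally tracks oxygen and temperature and uses a nonlinear toxicity function.

```python
# Toy fermentation model: yeast X, nitrogen N, sugar S, ethanol E.
# All rate constants are illustrative, not taken from the thesis.
def step(state, dt, mu_max=0.2, k_n=0.05, y_xs=0.1, y_es=0.45):
    X, N, S, E = state
    mu = mu_max * N / (k_n + N)                      # nitrogen-limited growth rate
    dX = mu * X                                      # yeast growth
    dN = -0.01 * mu * X                              # nitrogen consumption
    dS = -(1.0 / y_xs) * mu * X if S > 0 else 0.0    # sugar uptake
    dE = -y_es * dS                                  # ethanol produced from sugar
    return [X + dt * dX, N + dt * dN, max(S + dt * dS, 0.0), E + dt * dE]

state = [0.1, 0.2, 200.0, 0.0]      # illustrative initial conditions
for _ in range(1000):               # explicit Euler over t in [0, 100]
    state = step(state, 0.1)
```

The explicit Euler stepping is used only for brevity; any standard ODE integrator would serve in its place.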
In addition to the modelling issues, an optimal control problem for the proposed
reaction-diffusion fermentation model with temperature boundary control is presented
and analysed. Variational methods are used to prove the existence of unique weak solutions
to this non-linear problem. In this framework, it is possible to exploit the Hilbert
space structure of state and control spaces to prove the existence of optimal controls.
Additionally, first-order necessary optimality conditions are presented. They characterise
controls that minimise an objective functional whose purpose is to minimise the final
sugar concentration. A numerical experiment shows that the final concentration of sugar
can be reduced by a suitably chosen temperature control.
The second part of this thesis deals with the identification of an unknown function
that participates in a dynamical model. For models with ordinary differential equations,
where parts of the dynamics cannot be deduced due to the complexity of the underlying
phenomena, a minimisation problem is formulated. By minimising the deviations of simulation
results and measurements the best possible function from a trial function space
is found. The analysis of this function identification problem covers the proof of the
differentiability of the function–to–state operator, the existence of minimisers, and the
sensitivity analysis by means of the data–to–function mapping. Moreover, the presented
function identification method is extended to stochastic differential equations. Here, the
objective functional consists of the difference of measured values and the statistical expected
value of the stochastic process solving the stochastic differential equation. Using a
Fokker-Planck equation that governs the probability density function of the process, the
probabilistic problem of simulating a stochastic process is cast to a deterministic partial
differential equation. Proofs of unique solvability of the forward equation, the existence of
minimisers, and first-order necessary optimality conditions are presented. The application
of the function identification framework to the wine fermentation model aims at finding
the shape of the toxicity function and is carried out for the deterministic as well as the
stochastic case.
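The minimisation idea behind the function identification can be illustrated in a heavily simplified form: select, from a small polynomial trial space, the function that minimises the squared deviation from measurements. The toy data below are made up, and the composition with the function-to-state operator of the differential equation, central to the thesis, is omitted here.

```python
# Least-squares selection of the best function from span{1, x, ..., x^degree}.
def best_trial_function(xs, ys, degree=1):
    m = degree + 1
    # normal equations A c = b for the monomial basis
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    # solve the small linear system by Gauss-Jordan elimination
    for col in range(m):
        piv = A[col][col]
        for j in range(col, m):
            A[col][j] /= piv
        b[col] /= piv
        for row in range(m):
            if row != col:
                f = A[row][col]
                for j in range(col, m):
                    A[row][j] -= f * A[col][j]
                b[row] -= f * b[col]
    return b  # coefficients c_0, ..., c_degree

coeffs = best_trial_function([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])  # data from y = 1 + 2x
```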
Extreme value theory is concerned with the stochastic modeling of rare and extreme events. While fundamental theories of classical stochastics - such as the laws of small numbers or the central limit theorem - are used to investigate the asymptotic behavior of the sum of random variables, extreme value theory focuses on the maximum or minimum of a set of observations. The limit distribution of the normalized sample maximum among a sequence of independent and identically distributed random variables can be characterized by means of so-called max-stable distributions.
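In formulas (standard background): if \(X_1, X_2, \ldots\) are independent and identically distributed and \(M_n = \max(X_1, \ldots, X_n)\), the characterization mentioned above concerns the limit

```latex
\lim_{n \to \infty} P\!\left( \frac{M_n - b_n}{a_n} \le x \right) = G(x)
```

for suitable norming constants \(a_n > 0\) and \(b_n\); every non-degenerate limit \(G\) is max-stable, i.e. for each \(n\) there exist constants such that \(G^n(a_n x + b_n) = G(x)\).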
This dissertation is concerned with different aspects of the theory of max-stable random vectors and stochastic processes. In particular, the concept of 'differentiability in distribution' of a max-stable process is introduced and investigated. Moreover, 'generalized max-linear models' are introduced in order to interpolate a known max-stable random vector by a max-stable process. Further, the connection between extreme value theory and multivariate records is established. In particular, so-called 'complete' and 'simple' records are introduced and their asymptotic behavior is examined.
Proximal methods are iterative optimization techniques for functionals, J = J1 + J2, consisting of a differentiable part J2 and a possibly nondifferentiable part J1. In this thesis proximal methods for finite- and infinite-dimensional optimization problems are discussed. In finite dimensions, they solve l1- and TV-minimization problems that are effectively applied to image reconstruction in magnetic resonance imaging (MRI). Convergence of these methods in this setting is proved. The proposed proximal scheme is compared to a split proximal scheme and it achieves a better signal-to-noise ratio. In addition, an application that uses parallel imaging is presented.
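As an illustration of the proximal idea, here is a generic textbook ISTA sketch on a toy problem, not the thesis's MRI setting: for J2(x) = 0.5·||Ax − b||² and J1(x) = λ·||x||₁, each iteration takes a gradient step on J2 followed by the proximal operator of J1, the componentwise soft threshold.

```python
def soft_threshold(v, t):
    # proximal operator of t * ||.||_1, applied componentwise
    return [max(abs(x) - t, 0.0) * (1 if x > 0 else -1) for x in v]

def ista(A, b, lam, tau, iters=500):
    # proximal gradient iteration for 0.5*||Ax - b||^2 + lam*||x||_1
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(len(b))]
        g = [sum(A[i][j] * r[i] for i in range(len(b))) for j in range(n)]  # A^T r
        x = soft_threshold([x[j] - tau * g[j] for j in range(n)], tau * lam)
    return x

# tiny toy problem: the small component of b is shrunk to an exact zero
A = [[1.0, 0.0], [0.0, 1.0]]
b = [2.0, 0.05]
x = ista(A, b, lam=0.5, tau=0.5)
```

The hallmark of the l1 proximal step is visible in the result: the component of b below the threshold is set exactly to zero, which is what makes such schemes attractive for sparse reconstruction.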
In infinite dimensions, these methods are applied to nonsmooth linear and bilinear elliptic and parabolic optimal control problems. In particular, fast convergence of these methods is proved. Furthermore, for benchmarking purposes, truncated proximal schemes are compared to an inexact semismooth Newton method. Results of numerical experiments demonstrate the computational effectiveness of the proposed proximal schemes, which need less computation time than the semismooth Newton method in most cases, and successfully validate the theoretical estimates.
Based on the work of Eisenberg and Noe [2001], Suzuki [2002], Elsinger [2009] and Fischer [2014], we consider a generalization of Merton's asset valuation approach where n firms are linked by cross-ownership of equities and liabilities. Each firm is assumed to have a single outstanding liability, whereas its assets consist of one system-exogenous asset, as well as system-endogenous assets comprising some fraction of other firms' equity and liability, respectively. Following Fischer [2014], one can obtain no-arbitrage prices of equity and the recovery claims of liabilities as solutions of a fixed point problem, and hence obtain no-arbitrage prices of the `firm value' of each firm, which is the value of the firm's liability plus the firm's equity.
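The fixed-point structure of this valuation can be sketched as follows. The numbers are illustrative and the plain Gauss-Seidel iteration is a naive scheme; the thesis follows the existence and uniqueness framework of Fischer [2014] rather than this sketch.

```python
# Fixed-point valuation at maturity for n firms under cross-ownership:
# firm i owns fraction Ms[i][j] of firm j's equity and Md[i][j] of its debt recovery.
def solve_values(a, d, Ms, Md, iters=200):
    n = len(a)
    s = [0.0] * n      # equity values
    r = [0.0] * n      # recovery values of liabilities
    for _ in range(iters):
        for i in range(n):
            # total asset value: exogenous asset plus endogenous holdings
            v = a[i] + sum(Ms[i][j] * s[j] + Md[i][j] * r[j] for j in range(n))
            s[i] = max(v - d[i], 0.0)   # equity = asset value above liabilities
            r[i] = min(d[i], v)         # creditors recover at most the liability
    return s, r

a = [1.0, 0.8]; d = [1.2, 0.9]          # exogenous assets and liabilities (made up)
Ms = [[0.0, 0.2], [0.2, 0.0]]           # cross-ownership of equity
Md = [[0.0, 0.3], [0.3, 0.0]]           # cross-ownership of debt
s, r = solve_values(a, d, Ms, Md)
v = [s[i] + r[i] for i in range(2)]     # firm value = equity + recovery of liability
```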
In a first step, we consider the two-firm case where explicit formulae for the no-arbitrage prices of the firm values are available (cf. Suzuki [2002]). Since firm values are derivatives of exogenous asset values, the distribution of firm values at maturity can be determined from the distribution of exogenous asset values. The Merton model and most of its known extensions do not account for the cross-ownership structure of the assets owned by the firm. Therefore the assumption of lognormally distributed exogenous assets leads to lognormally distributed firm values in such models, as the values of the liability and the equity add up to the exogenous asset's value (which has lognormal distribution by assumption). Our work therefore starts from lognormally distributed exogenous assets and reveals how cross-ownership, when correctly accounted for in the valuation process, affects the distribution of the firm value, which is not lognormal anymore. In a simulation study we examine the impact of several parameters (amount of cross-ownership of debt and equity, ratio of liabilities to expected exogenous assets value) on the differences between the distribution of firm values obtained from our model and correspondingly matched lognormal distributions. It becomes clear that the assumption of lognormally distributed firm values may lead to both over- and underestimation of the “true” firm values (within the cross-ownership model) and consequently of bankruptcy risk, too.
In a second step, the bankruptcy risk of one firm within the system is analyzed in more detail in a further simulation study, revealing that the correct incorporation of cross-ownership in the valuation procedure becomes the more important the tighter the cross-ownership structure between the two firms. Furthermore, depending on the considered type of cross-ownership (debt or equity), the assumption of lognormally distributed firm values is likely to result in an over- or underestimation, respectively, of the actual probability of default. In a similar vein, we consider the Value-at-Risk (VaR) of a firm in the system, which we calculate as the negative α-quantile of the firm value at maturity minus the firm's risk neutral price in t=0, i.e. we consider the (1-α)100%-VaR of the change in firm value. If we let the cross-ownership fractions (i.e. the fraction that one firm holds of another firm's debt or equity) converge to 1 (which is the supremum of the possible values that cross-ownership fractions can take), we can prove that in a system of two firms, the lognormal model will overestimate both univariate and bivariate probabilities of default under cross-ownership of debt only, and underestimate them under cross-ownership of equity only. Furthermore, we provide a formula that allows us to check for an arbitrary scenario of cross-ownership and any non-negative distribution of exogenous assets whether the approximating lognormal model will over- or underestimate the related probability of default of a firm. In particular, any given non-negative distribution of exogenous asset values (non-degenerate in a certain sense) can be transformed into a new, “extreme” distribution of exogenous assets yielding such a low or high actual probability of default that the approximating lognormal model will over- and underestimate this risk, respectively.
After this analysis of the univariate distribution of firm values under cross-ownership in a system of two firms with bivariately lognormally distributed exogenous asset values, we consider the copula of these firm values as a distribution-free measure of the dependency between them. Without cross-ownership, this copula would be the Gaussian copula. Under cross-ownership, we especially consider the behaviour of the copula of firm values in the lower left and upper right corner of the unit square, and depending on the type of cross-ownership and the considered corner, we either obtain error bounds on how well the copula of firm values under cross-ownership can be approximated by the Gaussian copula, or we see that the copula of firm values can be written as the copula of two linear combinations of exogenous asset values (note that these linear combinations are not lognormally distributed). These insights serve as a basis for our analysis of the tail dependence coefficient of firm values under cross-ownership. Under cross-ownership of debt only, firm values remain upper tail independent, whereas they become perfectly lower tail dependent if the correlation between exogenous asset values exceeds a certain positive threshold, which does not depend on the exact level of cross-ownership. Under cross-ownership of equity only, the situation is reversed in that firm values always remain lower tail independent, but upper tail independence is preserved if and only if the right tail behaviour of both firms' values is determined by the right tail behaviour of the firms' own exogenous asset values rather than those of the respective other firm.
Next, we return to systems of n≥2 firms and analyze sensitivities of no-arbitrage prices of equity and the recovery claims of liabilities with respect to the model parameters. In the literature, such sensitivities are provided with respect to exogenous asset values by Gouriéroux et al. [2012], and we extend the existing results by considering how these no-arbitrage prices depend on the cross-ownership fractions and the level of liabilities. For the former, we can show that all prices are non-decreasing in any cross-ownership fraction in the model, and by use of a version of the Implicit Function Theorem we can also determine exact derivatives. For the latter, we show that the recovery value of debt and the equity value of a firm are non-decreasing and non-increasing in the firm's nominal level of liabilities, respectively, but the firm value is in general not monotone in the firm's level of liabilities. Furthermore, no-arbitrage prices of equity and the recovery claims of liabilities of a firm are in general non-monotone in the nominal level of liabilities of other firms in the system. If we confine ourselves to one type of cross-ownership (i.e. debt or equity), we can derive more precise relationships. All the results can be transferred to risk-neutral prices before maturity.
Finally, following Gouriéroux et al. [2012] and as a kind of extension of the above sensitivity results, we consider how immediate changes in exogenous asset values of one or more firms at maturity affect the financial health of a system of n initially solvent firms. We start with some theoretical considerations on what we call the contagion effect, namely the change in the endogenous asset value of a firm caused by shocks on the exogenous assets of firms within the system. For the two-firm case, an explicit formula is available, making clear that in general (and in particular under cross-ownership of equity only) the effect of contagion can be positive as well as negative, i.e. it can both mitigate and exacerbate the change in the exogenous asset value of a firm. On the other hand, we cannot generally say that a tighter cross-ownership structure leads to bigger absolute contagion effects. Under cross-ownership of debt only, firms cannot profit from positive shocks beyond the direct effect on exogenous assets, as the contagion effect is always non-positive. Next, we are concerned with spillover effects of negative shocks on a subset of firms to other firms in the system (experiencing non-negative shocks themselves), driving them into default due to large losses in their endogenous asset values. Extending the results of Glasserman and Young [2015], we provide a necessary condition for the shock to cause such an event. This also yields an upper bound for the probability of such an event. We further investigate in a simulation study how the stability of a system of firms exposed to multiple shocks depends on the model parameters. In doing so, we consider three network types (incomplete, core-periphery and ring network), with simultaneous shocks on some of the firms wiping out a certain percentage of their exogenous assets.
Then we analyze for all three types of cross-ownership (debt only, equity only, both debt and equity) how the shock intensity, the shock size, and network parameters such as the number of links in the network and the proportion of a firm's debt or equity held within the system of firms influence several output parameters, comprising the total number of defaults and the relative loss in the sum of firm values, among others. Comparing our results to the studies of Nier et al. [2007], Gai and Kapadia [2010] and Elliott et al. [2014], we can only partly confirm their results with respect to the number of defaults. We conclude our work with a theoretical comparison of the complete network (where each firm holds a part of any other firm) and the ring network with respect to the number of defaults caused by a shock on a single firm, as done by Allen and Gale [2000]. In line with the literature, we find that under cross-ownership of debt only, complete networks are “robust yet fragile” [Gai and Kapadia, 2010] in that moderate shocks can be completely withstood or drive the firm directly hit by the shock into default, but as soon as the shock exceeds a certain size, all firms are simultaneously in default. In contrast to that, firms default one by one in the ring network, with the first “contagious default” (i.e. a default of a firm not directly hit by the shock) already occurring for smaller shock sizes than under the complete network.
Dysfunction of dopaminergic neurotransmission has been implicated in HIV infection. We previously showed increased dopamine (DA) levels in the CSF of therapy-naïve HIV patients and an inverse correlation between CSF DA and CD4 counts in the periphery, suggesting adverse effects of high levels of DA on HIV infection. In the current study, including a total of 167 HIV-positive and negative donors from Germany and South Africa (SA), we investigated the mechanistic background for the increase of CSF DA in HIV individuals. Interestingly, we found that the DAT 10/10-repeat allele is present more frequently in HIV individuals than in uninfected subjects. Logistic regression analysis adjusted for gender and ethnicity showed an odds ratio for HIV infection in DAT 10/10 allele carriers of 3.93 (95 % CI 1.72–8.96; p = 0.001, Fisher's exact test). In SA, 42.6 % of HIV-infected patients harbored the DAT 10/10 allele compared to only 10.5 % of uninfected subjects (odds ratio 6.31); in Germany, the corresponding figures were 68.1 versus 40.9 % (odds ratio 3.08). Subjects homozygous for the 10-repeat allele had higher amounts of CSF DA and reduced DAT mRNA expression but similar disease severity compared with those carrying other DAT genotypes. These intriguing and novel findings show the mutual interaction between DA and HIV, suggest caution in interpreting CNS DA alterations in HIV infection solely as a phenomenon secondary to the virus, and open the door for larger studies investigating the consequences of this functional DAT polymorphism for HIV epidemiology and disease progression.
This thesis deals with the use of origami in school teaching. More precisely, it describes a teaching sequence on flat-foldability, a subfield of mathematical paper folding, for upper-secondary mathematics classes at Gymnasien and similar schools. Concrete teaching instructions as well as alternatives are set out, justified, and illustrated with many figures. Furthermore, the goals of this teaching sequence are laid out in accordance with the KMK educational standards. Finally, a mathematical perspective on the study of flat-foldability is given, together with a placement within the current state of research.
The first goal of this thesis is to generalize Loewner's famous differential equation to multiply connected domains. The resulting differential equations are known as Komatu--Loewner differential equations. We discuss Komatu--Loewner equations for canonical domains (circular slit disks, circular slit annuli and parallel slit half-planes). Additionally, we give a generalisation to several slits and discuss parametrisations that lead to constant coefficients. Moreover, we compare Komatu--Loewner equations with several slits to single slit Loewner equations.
Finally, we generalise Komatu--Loewner equations to hulls satisfying a local growth property.
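For orientation, the classical chordal (single-slit, simply connected) Loewner equation in the upper half-plane reads

```latex
\frac{\partial}{\partial t} g_t(z) = \frac{2}{g_t(z) - \xi(t)}, \qquad g_0(z) = z,
```

with a continuous driving function \(\xi\). This is standard textbook material, not the thesis's specific formulation; the Komatu--Loewner equations discussed here replace the right-hand side by kernels adapted to the respective multiply connected canonical domains.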
This thesis deals with the hp-finite element method (FEM) for linear quadratic optimal control problems. Here, a tracking type functional with control costs as regularization shall be minimized subject to an elliptic partial differential equation. In the presence of control constraints, the first order necessary conditions, which are typically used to find optimal solutions numerically, can be formulated as a semi-smooth projection formula. Consequently, optimal solutions may be non-smooth as well. The hp-discretization technique considers this fact and approximates rough functions on fine meshes while using higher order finite elements on domains where the solution is smooth.
The first main achievement of this thesis is the successful application of hp-FEM to two related problem classes: Neumann boundary and interface control problems. They are solved with an a-priori refinement strategy called boundary concentrated (bc) FEM and interface concentrated (ic) FEM, respectively. These strategies generate grids that are heavily refined towards the boundary or interface. We construct an elementwise interpolant that allows us to prove algebraic decay of the approximation error for both techniques. Additionally, a detailed analysis of global and local regularity of solutions, which is critical for the speed of convergence, is included. Since the bc- and ic-FEM retain small polynomial degrees for elements touching the boundary and interface, respectively, we are able to deduce novel error estimates in the L2- and L∞-norm. The latter allows an a-priori strategy for updating the regularization parameter in the objective functional to solve bang-bang problems.
Furthermore, we apply the traditional idea of the hp-FEM, i.e., grading the mesh geometrically towards vertices of the domain, for solving optimal control problems (vc-FEM). In doing so, we obtain exponential convergence with respect to the number of unknowns. This is proved with a regularity result in countably normed spaces for the variables of the coupled optimality system.
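The geometric grading toward a vertex can be sketched in one dimension as follows (hypothetical grading factor and level count; the thesis works on higher-dimensional domains and couples the grading with the optimality system):

```python
# Geometrically graded 1-D mesh on [0, 1], refined toward the vertex x = 0.
def geometric_mesh(sigma=0.5, levels=6):
    # element boundaries: 0 < sigma^levels < ... < sigma < 1
    return [0.0] + [sigma ** k for k in range(levels, -1, -1)]

mesh = geometric_mesh()
# low polynomial degree on the small element at the vertex, increasing outward
degrees = list(range(1, len(mesh)))
```

Combining the geometric shrinking of elements toward the singularity with linearly increasing polynomial degrees is what yields exponential convergence in the number of unknowns.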
The second main achievement of this thesis is the development of a fully adaptive hp-interior point method that can solve problems with distributed or Neumann control. The underlying barrier problem yields a non-linear optimality system, which poses a numerical challenge: the numerically stable evaluation of integrals over possibly singular functions in higher order elements. We successfully overcome this difficulty by monitoring the control variable at the integration points and enforcing feasibility in an additional smoothing step. In this work, we prove convergence of an interior point method with smoothing step and derive a-posteriori error estimators. The adaptive mesh refinement is based on the expansion of the solution in a Legendre series. The decay of the coefficients serves as an indicator of smoothness that guides the choice between h- and p-refinement.
The goal of this thesis is to investigate conformal mappings onto circular arc polygon domains, i.e. domains that are bounded by polygons consisting of circular arcs instead of line segments.
Conformal mappings onto circular arc polygon domains contain parameters in addition to the classical parameters of the Schwarz-Christoffel transformation. To contribute to the parameter problem of conformal mappings from the unit disk onto circular arc polygon domains, we investigate two special cases of these mappings. In the first case we can describe the additional parameters if the bounding circular arc polygon is a polygon with straight sides. In the second case we provide an approximation for the additional parameters if the circular arc polygon domain satisfies some symmetry conditions. These results allow us to draw conclusions on the connection between these additional parameters and the classical parameters of the mapping.
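For comparison, the classical Schwarz-Christoffel transformation from the unit disk onto a polygon with interior angles \(\alpha_k \pi\) at the images of the prevertices \(z_k\) can be written as

```latex
f(z) = A + C \int_0^z \prod_{k=1}^{n} \left(1 - \frac{\zeta}{z_k}\right)^{\alpha_k - 1} d\zeta .
```

This is standard material, stated here only to fix what 'classical parameters' means: the prevertices \(z_k\) together with \(A\) and \(C\). Mappings onto circular arc polygon domains carry additional parameters beyond these.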
For conformal mappings onto multiply connected circular arc polygon domains, we provide an alternative construction of the mapping formula without using the Schottky-Klein prime function. In the process of constructing our main result, mappings for domains of connectivity three or greater, we also provide a formula for conformal mappings onto doubly connected circular arc polygon domains. The comparison of these mapping formulas with already known mappings allows us to provide values for some of the parameters of the mappings onto doubly connected circular arc polygon domains if the image domain is a polygonal domain.
The different components of the mapping formula are constructed by using a slightly modified variant of the Poincaré theta series. This construction includes the design of a function to remove unwanted poles and of different versions of functions that are analytic on the domain of definition of the mapping functions and satisfy some special functional equations.
We also provide the necessary concepts to numerically evaluate the conformal mappings onto multiply connected circular arc polygon domains. As the evaluation of such a map requires the solution of a differential equation, we provide a possible configuration of curves inside the preimage domain to solve the equation along them in addition to a description of the procedure for computing either the formula for the doubly connected case or the case of connectivity three or greater. We also describe the procedures for solving the parameter problem for multiply connected circular arc polygon domains.
The purpose of confidence and prediction intervals is to provide an interval estimate for an unknown distribution parameter or the future value of a phenomenon. In many applications, prior knowledge about the distribution parameter is available, but rarely made use of, except in a Bayesian framework. This thesis provides exact frequentist confidence intervals of minimal volume that exploit prior information. The scheme is applied to distribution parameters of the binomial and the Poisson distribution. The Bayesian approach to obtaining intervals on a distribution parameter in the form of credibility intervals is considered, with particular emphasis on the binomial distribution. An application of interval estimation is found in auditing, where two-sided intervals of Stringer type are meant to contain the mean of a zero-inflated population. In the context of time series analysis, covariates are supposed to improve the prediction of future values. Exponential smoothing with covariates, an extension of the popular forecasting method of exponential smoothing, is considered in this thesis. A double-seasonality version of it is applied to forecast hourly electricity load with the use of meteorological covariates. Different kinds of prediction intervals for exponential smoothing with covariates are formulated.
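A minimal sketch of the smoothing-with-covariates idea, with a single covariate and made-up smoothing parameters (the model in the thesis additionally handles double seasonality and delivers prediction intervals):

```python
# Simple exponential smoothing extended by one covariate x with coefficient beta:
#   level update:      s_t = alpha*(y_t - beta*x_t) + (1 - alpha)*s_{t-1}
#   one-step forecast: yhat_t = s_{t-1} + beta*x_t
def smooth_with_covariate(y, x, alpha=0.3, beta=0.5):
    s = y[0] - beta * x[0]          # initialise the level from the first observation
    forecasts = []
    for t in range(1, len(y)):
        forecasts.append(s + beta * x[t])
        s = alpha * (y[t] - beta * x[t]) + (1 - alpha) * s
    return forecasts

# sanity check on constant data: the forecast reproduces the level
forecasts = smooth_with_covariate([10.0, 10.0, 10.0], [0.0, 0.0, 0.0])
```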
The subject of this thesis is the rigorous passage from discrete systems to continuum models via variational methods.
The first part of this work studies a discrete model describing a one-dimensional chain of atoms with finite range interactions of Lennard-Jones type. We derive an expansion of the ground state energy using \(\Gamma\)-convergence. In particular, we show that a variant of the Cauchy-Born rule holds true for the model under consideration. We exploit this observation to derive boundary layer energies due to asymmetries of the lattice at the boundary or at cracks of the specimen. Hereby we extend several results obtained previously for models involving only nearest and next-to-nearest neighbour interactions by Braides and Cicalese and Scardia, Schlömerkemper and Zanini.
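A classical example of an interaction potential of Lennard-Jones type, given here only as standard background (the thesis allows a more general class of finite range potentials), is

```latex
J(r) = \varepsilon \left( \left( \frac{\sigma}{r} \right)^{12} - 2 \left( \frac{\sigma}{r} \right)^{6} \right),
```

which attains its minimum \(-\varepsilon\) at the equilibrium distance \(r = \sigma\) and decays rapidly, so that a truncation to finite range interactions is natural.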
The second part of this thesis is devoted to the analysis of a quasi-continuum (QC) method. To this end, we consider the discrete model studied in the first part of this thesis as the fully atomistic model problem and construct an approximation based on a QC method. We show that in an elastic setting the expansion by \(\Gamma\)-convergence of the fully atomistic energy and its QC approximation coincide. In the case of fracture, we show that this is not true in general. In the case of only nearest and next-to-nearest neighbour interactions, we give sufficient conditions on the QC approximation such that, also in case of fracture, the minimal energies of the fully atomistic energy and its approximation coincide in the limit.
The thesis ’Hurwitz’s Complex Continued Fractions - A Historical Approach and Modern Perspectives’ deals with two branches of mathematics: number theory and the history of mathematics. At first glance this combination might be unexpected; on closer inspection, however, it is very fruitful. When doing research in mathematics, it turns out to be very helpful to be aware of the origins and development of the subject in question.
In the case of complex continued fractions, the origins can easily be traced back to the end of the 19th century (see [Perron, 1954, vol. 1, Ch. 46]). One of their godfathers was the famous mathematician Adolf Hurwitz. During the study of his transformation from real to complex continued fraction theory [Hurwitz, 1888], our attention was caught by the article ’Ueber eine besondere Art der Kettenbruch-Entwicklung complexer Grössen’ [Hurwitz, 1895] from 1895 by an author called J. Hurwitz. We were surprised to find out that he was Adolf’s elder, largely unknown brother Julius; moreover, Julius Hurwitz introduced a complex continued fraction that also appeared (unmentioned) in an ergodic-theoretic work from 1985 [Tanaka, 1985]. These observations formed the basis of our main research questions:
What is the historical background of Adolf and Julius Hurwitz and their mathematical studies? and What modern perspectives are provided by their complex continued fraction expansions?
In this work we examine complex continued fractions from various viewpoints. After a brief introduction to real continued fractions, we first devote ourselves to the lives of the brothers Adolf and Julius Hurwitz. Two excursions on selected historical aspects of their work complete this historical chapter. Subsequently, we shed light on the approaches of both Adolf and Julius Hurwitz to complex continued fraction expansions.
Correspondingly, in the following chapter we take a more modern perspective. Highlights are an ergodic-theoretic result, namely a variation on the Döblin-Lenstra Conjecture [Bosma et al., 1983], as well as a result on transcendental numbers in the tradition of Roth’s theorem [Roth, 1955]. In two subsequent chapters we are concerned with arithmetical properties of complex continued fractions. Firstly, an analogue of Marshall Hall’s theorem from 1947 [Hall, 1947] on sums of continued fractions is derived. Secondly, a general approach to new types of continued fractions is presented, building on the structural properties of lattices. Finally, in the last chapter we take up this approach and obtain an upper bound on the quality of Diophantine approximations by quotients of lattice points in the complex plane, generalizing a method of Hermann Minkowski, improved by Hilde Gintner [Gintner, 1936], that is based on ideas from the geometry of numbers.
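Adolf Hurwitz's complex continued fraction is commonly described via the nearest-Gaussian-integer algorithm. The following sketch of that algorithm is for illustration only (the function names and the termination tolerance are our own choices, and Julius' variant uses a different digit set).

```python
def nearest_gaussian_integer(z):
    # round real and imaginary parts separately to the nearest integer
    return complex(round(z.real), round(z.imag))

def hurwitz_cf(z, n_terms=8):
    """Partial quotients of the nearest-Gaussian-integer continued
    fraction of z, together with the remaining complete quotient
    (None if the expansion terminated within n_terms steps)."""
    digits = []
    for _ in range(n_terms):
        a = nearest_gaussian_integer(z)
        digits.append(a)
        r = z - a
        if abs(r) < 1e-12:  # expansion terminates (numerically)
            return digits, None
        z = 1.0 / r  # next complete quotient
    return digits, z
```

Folding the digits back together with the remaining complete quotient recovers the starting value exactly, up to floating-point error.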
The arrival of computers in mathematics classrooms has brought a variety of new possibilities for representation, among them multiple, dynamically linked representations of mathematical problems. This thesis answers the question of whether and how these kinds of representation are used by students in their argumentation. In the empirical study, it was investigated quantitatively how strongly the form of representation given in a task influences students' written argumentations; in addition, a qualitative analysis identified specific patterns of use and described them by means of Toulmin's model of argumentation. These findings were used to formulate consequences for the use of multiple and/or dynamic representations in secondary-level mathematics teaching.
The investigation of interacting multi-agent models is a young field of mathematical research with applications to the study of behavior in groups of animals or communities of people. One interesting feature of multi-agent systems is collective behavior. From the mathematical point of view, one of the challenging issues concerning these dynamical models is the development of control mechanisms that are able to influence their time evolution.
In this thesis, we focus on the study of controllability, stabilization and optimal control problems for multi-agent systems, considering the following three models: The first one is the Hegselmann-Krause opinion formation (HK) model. The HK dynamics describes how individuals' opinions are changed by interaction with others taking place within a bounded domain of confidence. The study of this model focuses on determining feedback controls that drive the agents' opinions to a desired agreement. The second model is the Heider social balance (HB) model. The HB dynamics explains the evolution of relationships in a social network. One purpose of studying this system is the construction of a control function in order to steer the relationships to a state of friendship. The third model that we discuss is a flocking model describing collective motion observed in biological systems. The flocking model under consideration includes self-propulsion, friction, attraction, repulsion, and alignment features. We investigate a control for steering the flocking system to track a desired trajectory. Common to all these systems is our strategy of adding a leader agent that interacts with all other members of the system and carries the control mechanism.
Our control-through-leadership approach is developed using classical theoretical control methods and a model predictive control (MPC) scheme. To apply the former, for each model the stability of the corresponding linearized system near consensus is investigated, and local controllability is examined. However, only for the Hegselmann-Krause opinion formation model is a feedback control determined that steers the agents' opinions to converge globally to a desired agreement. The MPC approach is an optimal control strategy based on numerical optimization. To apply the MPC scheme, optimal control problems are formulated for each model, with objective functions depending on the desired goal of the problem. The first-order necessary optimality conditions for each problem are presented. Moreover, for the numerical treatment, a sequence of open-loop discrete optimality systems is solved by accurate Runge-Kutta schemes, and in the optimization procedure a nonlinear conjugate gradient solver is implemented. Finally, numerical experiments are performed to investigate the properties of the multi-agent models and to demonstrate the ability of the proposed control strategies to drive multi-agent systems to a desired consensus and to track a given trajectory.
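As a toy illustration of the control-through-leadership idea for the Hegselmann-Krause model, the sketch below performs one explicit-Euler step of the one-dimensional HK dynamics with an additional leader opinion pulling every agent. The leader coupling and all parameter values are simple illustrative stand-ins, not the feedback law derived in the thesis.

```python
import numpy as np

def hk_step(x, eps=0.5, dt=0.1, leader=None, strength=0.5):
    """One explicit-Euler step of 1D Hegselmann-Krause opinion dynamics.

    Each agent moves toward the average opinion of the agents within its
    confidence bound eps; if a leader opinion is given, every agent is
    additionally pulled toward it (an illustrative control term)."""
    x = np.asarray(x, dtype=float)
    new = x.copy()
    for i in range(len(x)):
        # agents within the confidence bound, including agent i itself
        neighbors = x[np.abs(x - x[i]) < eps]
        drift = neighbors.mean() - x[i]
        if leader is not None:
            drift += strength * (leader - x[i])
        new[i] = x[i] + dt * drift
    return new
```

Iterating this map with a fixed leader opinion drives all agents toward it, which mimics the consensus-steering behaviour the abstract describes.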
Background
The prevalence of obesity is rising. Obesity can lead to cardiovascular and ventilatory complications through multiple mechanisms. Cardiac and pulmonary function in asymptomatic obese subjects, and the effect of structured dietary programs on both, are unclear.
Objective
To determine lung and cardiac function in asymptomatic obese adults and to evaluate whether weight loss positively affects functional parameters.
Methods
We prospectively evaluated bodyplethysmographic and echocardiographic data in asymptomatic subjects undergoing a structured one-year weight reduction program.
Results
74 subjects (32 male, 42 female; mean age 42±12 years) with an average BMI of 42.5±7.9 and a body weight of 123.7±24.9 kg were enrolled. Body weight correlated negatively with vital capacity (R = −0.42, p<0.001) and FEV1 (R = −0.497, p<0.001) and positively with P0.1 (R = 0.32, p = 0.02) and myocardial mass (R = 0.419, p = 0.002). After 4 months the study subjects had significantly reduced their body weight (−26.0±11.8 kg) and BMI (−8.9±3.8), associated with a significant improvement of lung function (absolute changes: vital capacity +5.5±7.5% pred., p<0.001; FEV1 +9.8±8.3% pred., p<0.001; ITGV +16.4±16.0% pred., p<0.001; SRtot −17.4±41.5% pred., p<0.01). Moreover, P0.1/Pimax decreased to 47.7% (p<0.01), indicating a decreased respiratory load. The change of FEV1 correlated significantly with the change of body weight (R = −0.31, p = 0.03). Echocardiography demonstrated reduced myocardial wall thickness (−0.08±0.2 cm, p = 0.02) and an improved left ventricular myocardial performance index (−0.16±0.35, p = 0.02). Mitral annular plane systolic excursion (+0.14, p = 0.03) and pulmonary outflow acceleration time (AT +26.65±41.3 ms, p = 0.001) increased.
Conclusion
Even in asymptomatic individuals obesity is associated with abnormalities in pulmonary and cardiac function and increased myocardial mass. All the abnormalities can be reversed by a weight reduction program.
An efficient and accurate computational framework for solving control problems governed by quantum spin systems is presented. Spin systems are extremely important in modern quantum technologies such as nuclear magnetic resonance spectroscopy, quantum imaging and quantum computing. In these applications, two classes of quantum control problems arise: optimal control problems and exact-controllability problems, both with a bilinear control structure. These models correspond to the Schrödinger-Pauli equation, describing the time evolution of a spinor, and the Liouville-von Neumann master equation, describing the time evolution of a density operator. This thesis focuses on quantum control problems governed by these models. An appropriate definition of the optimization objectives and of the admissible set of control functions makes it possible to construct controls with specific properties, which are in general required by the physics and the technologies involved in quantum control applications. A main purpose of this work is to address non-differentiable quantum control problems. For this reason, a computational framework is developed to address optimal control problems, possibly with an L1-penalization term in the cost functional, and exact-controllability problems. In both cases the set of admissible control functions is a subset of a Hilbert space. The bilinear control structure of the quantum model, the L1-penalization term and the control constraints generate strong non-linearities that make it difficult to solve and analyse the corresponding control problems. The first part of this thesis focuses on the physical description of the spin of particles and of the magnetic resonance phenomenon. Afterwards, the controlled Schrödinger-Pauli equation and the Liouville-von Neumann master equation are discussed. These equations, like many other controlled quantum models, can be represented as dynamical systems with a bilinear control structure.
In the second part of this thesis, theoretical investigations of optimal control problems, with a possible L1-penalization term in the objective and control constraints, are considered. In particular, existence of solutions, optimality conditions, and regularity properties of the optimal controls are discussed. In order to solve these optimal control problems, semi-smooth Newton methods are developed and proved to be superlinearly convergent. The main difficulty in the implementation of a Newton method for optimal control problems comes from the dimension of the Jacobian operator: in discrete form, the Jacobian is a very large matrix, which makes its construction infeasible from a practical point of view. For this reason, the focus of this work is on inexact Krylov-Newton methods, which combine the Newton method with Krylov iterative solvers for linear systems and thus avoid the construction of the discrete Jacobian. In the third part of this thesis, two methodologies for the exact controllability of quantum spin systems are presented. The first method consists of a continuation technique, while the second method is based on a particular reformulation of the exact-control problem. Both methodologies address minimum-L2-norm exact-controllability problems. In the fourth part, the thesis focuses on the numerical analysis of quantum control problems. In particular, the modified Crank-Nicolson scheme is discussed as an adequate time discretization of the Schrödinger equation, the first-discretize-then-optimize strategy is used to obtain a discrete reduced-gradient formula for the differentiable part of the optimization objective, and implementation details and globalization strategies that guarantee an adequate numerical behaviour of semi-smooth Newton methods are treated.
In the last part of this work, several numerical experiments are performed to validate the theoretical results and to demonstrate the ability of the proposed computational framework to solve quantum spin control problems.
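As an illustration of the kind of time discretization discussed above, a plain (unmodified) Crank-Nicolson step for a finite-dimensional Schrödinger equation \(i\,\dot\psi = H\psi\) can be sketched as follows; this is a generic textbook scheme, not the modified scheme analysed in the thesis.

```python
import numpy as np

def crank_nicolson_step(psi, H, dt):
    """One Crank-Nicolson step for i * d(psi)/dt = H psi.

    Solves (I + i*dt/2*H) psi_{n+1} = (I - i*dt/2*H) psi_n.
    For Hermitian H this Cayley-transform step is unitary, so the
    norm of psi is preserved exactly (up to round-off)."""
    n = len(psi)
    I = np.eye(n)
    A = I + 0.5j * dt * H
    B = I - 0.5j * dt * H
    return np.linalg.solve(A, B @ psi)
```

The norm preservation makes the scheme attractive for quantum dynamics, where the state must remain normalized over long control horizons.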
In this thesis, discrete moments of the Riemann zeta-function and allied Dirichlet series are studied.
In the first part, the asymptotic value-distribution of zeta-functions is studied where the samples are taken from a Cauchy random walk on a vertical line inside the critical strip. Building on techniques by Lifshits and Weber, analogous results for the Hurwitz zeta-function are derived. Using Atkinson’s dissection, this is even generalized to Dirichlet L-functions associated with a primitive character. Both results indicate that the expectation value equals one, which shows that the values of these zeta-functions are small on average.
The second part deals with the logarithmic derivative of the Riemann zeta-function on vertical lines, where the samples are taken with respect to an explicit ergodic transformation. Extending work of Steuding, discrete moments are evaluated and an equivalent formulation of the Riemann Hypothesis in terms of ergodic theory is obtained.
In the third and last part of the thesis, the phenomenon of universality with respect to stochastic processes is studied. It is shown that certain random shifts of the zeta-function can approximate non-vanishing analytic target functions arbitrarily well. This result relies on Voronin's universality theorem.
The Cauchy problem for a simplified shallow elastic fluids model, a 3 x 3 system of Temple type, is studied, and a global weak solution is obtained by using the compensated compactness theorem coupled with total variation estimates on the first and third Riemann invariants, where the second Riemann invariant is singular near the zero layer depth (rho = 0). This work extends in some sense the previous works (Serre, 1987) and (LeVeque and Temple, 1985), which provided the global existence of weak solutions for 2 x 2 strictly hyperbolic systems, and (Heibig, 1994) for n x n strictly hyperbolic systems with smooth Riemann invariants.