Extreme value theory is concerned with the stochastic modeling of rare and extreme events. While fundamental theories of classical stochastics - such as the laws of small numbers or the central limit theorem - are used to investigate the asymptotic behavior of the sum of random variables, extreme value theory focuses on the maximum or minimum of a set of observations. The limit distribution of the normalized sample maximum among a sequence of independent and identically distributed random variables can be characterized by means of so-called max-stable distributions.
This dissertation is concerned with different aspects of the theory of max-stable random vectors and stochastic processes. In particular, the concept of 'differentiability in distribution' of a max-stable process is introduced and investigated. Moreover, 'generalized max-linear models' are introduced in order to interpolate a known max-stable random vector by a max-stable process. Further, the connection between extreme value theory and multivariate records is established. In particular, so-called 'complete' and 'simple' records are introduced and their asymptotic behavior is examined.
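As a small numerical illustration of the convergence described above (not part of the dissertation itself), the following Python sketch compares the empirical distribution of normalized sample maxima of exponential variables with the standard Gumbel law, a max-stable distribution; the sample size, replication count and evaluation point are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 10000

# Maxima of n iid Exp(1) variables: the normalized maximum M_n - log(n)
# converges in distribution to the standard Gumbel law, a max-stable
# distribution with CDF G(x) = exp(-exp(-x)).
samples = rng.exponential(size=(reps, n))
normalized = samples.max(axis=1) - np.log(n)

x0 = 0.0
empirical = float(np.mean(normalized <= x0))
gumbel = float(np.exp(-np.exp(-x0)))   # limit value at x0
```

For exponential samples the convergence is fast, so the empirical fraction already matches the Gumbel CDF value closely at moderate n.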
In this paper, convex approximation methods, such as CONLIN, the method of moving asymptotes (MMA) and a stabilized version of MMA (Sequential Convex Programming), are discussed with respect to their convergence behaviour. In an extensive numerical study they are finally compared with other well-known optimization methods on 72 sizing problems.
This thesis is devoted to Bernoulli Stochastics, which was initiated by Jakob Bernoulli more than 300 years ago with his masterpiece 'Ars conjectandi', which can be translated as 'Science of Prediction'. Thus, Jakob Bernoulli's Stochastics focuses on prediction, in contrast to the later emerging disciplines of probability theory, statistics and mathematical statistics. Only recently was Jakob Bernoulli's focus taken up by von Collani, who developed a unified theory of uncertainty aiming at making reliable and accurate predictions. In this thesis, teaching material as well as a virtual classroom are developed for fostering the ideas and techniques initiated by Jakob Bernoulli and elaborated by Elart von Collani. The thesis is part of a comprehensive project called 'Stochastikon', which aims at introducing Bernoulli Stochastics as a unified science of prediction and measurement under uncertainty. This ambitious aim shall be reached through the development of an internet-based comprehensive system offering the science of Bernoulli Stochastics on any level of application. So far it is planned that the 'Stochastikon' system (http://www.stochastikon.com/) will consist of five subsystems. Two of them are developed and introduced in this thesis. The first one is the e-learning programme 'Stochastikon Magister' and the second one is 'Stochastikon Graphics', which provides the entire Stochastikon system with graphical illustrations. E-learning is the outcome of merging education and internet techniques. It is characterized by the fact that teaching and learning are independent of place and time, as well as of the availability of specially trained teachers. Knowledge offering as well as knowledge transfer are realized by means of modern information technologies. Nowadays more and more e-learning environments are based on the internet as the primary tool for communication and presentation.
E-learning presentation tools are, for instance, text files, pictures, graphics, audio and videos, which can be networked with each other. Access to the teaching content is essentially unrestricted, and students can adapt the speed of learning to their individual abilities. E-learning is particularly appropriate for newly arising scientific and technical disciplines, which generally cannot be presented sufficiently well by traditional teaching methods, because neither trained teachers nor textbooks are available. The first part of this dissertation introduces the state of the art of e-learning in statistics, since statistics and Bernoulli Stochastics are both based on probability theory and exhibit many similar features. Since Stochastikon Magister is the first e-learning programme for Bernoulli Stochastics, educational statistics systems are selected for the purpose of comparison and evaluation. This makes sense as both disciplines are an attempt to handle uncertainty and use methods that often can be directly compared. The second part of this dissertation is devoted to Bernoulli Stochastics. This part outlines the content of two courses, which have been developed for the anticipated e-learning programme Stochastikon Magister, in order to show the difficulties in teaching, understanding and applying Bernoulli Stochastics. The third part discusses the realization of the e-learning programme Stochastikon Magister, its design and implementation, which aims at offering a systematic learning of the principles and techniques developed in Bernoulli Stochastics. The resulting e-learning programme differs from commonly developed e-learning programmes in that it attempts to provide a virtual classroom simulating all the functions of real classroom teaching. This is in general not necessary, since most e-learning programmes aim at supporting existing classroom teaching.
The fourth part presents two empirical evaluations of Stochastikon Magister. The evaluations are performed by means of comparisons between traditional classroom learning in statistics and e-learning of Bernoulli Stochastics. The aim is to assess the usability and learnability of Stochastikon Magister. Finally, the fifth part of this dissertation is added as an appendix. It refers to Stochastikon Graphics, the fifth component of the entire Stochastikon system. Stochastikon Graphics provides the other components with graphical representations of concepts, procedures and results obtained or used in the framework of Bernoulli Stochastics. The primary aim of this thesis is the development of appropriate software for the anticipated e-learning environment for Bernoulli Stochastics, while the preparation of the necessary teaching material constitutes only a secondary aim, used for demonstrating the functionality of the e-learning platform and the scientific novelty of Bernoulli Stochastics. To this end, a first version of two teaching courses is developed, implemented and offered online in order to collect practical experience. The two courses, which were developed as part of this project, are submitted as a supplement to this dissertation. First experience with the e-learning programme Stochastikon Magister has already been gathered. Students of different faculties of the University of Würzburg, as well as researchers and engineers who are involved in the Stochastikon project, have obtained access to Stochastikon Magister via the internet. They have registered for Stochastikon Magister and participated in the course programme. This thesis reports on two assessments of these first experiences, and the results will lead to further improvements with respect to the content and organization of Stochastikon Magister.
This work is concerned with the numerical approximation of solutions to models that are used to describe atmospheric or oceanographic flows. In particular, this work concentrates on the approximation of the Shallow Water equations with bottom topography and the compressible Euler equations with a gravitational potential. Numerous methods have been developed to approximate solutions of these models. Of specific interest here are the approximations of near-equilibrium solutions and, in the case of the Euler equations, the low Mach number flow regime. It is inherent in most of the numerical methods that the quality of the approximation increases with the number of degrees of freedom that are used. Therefore, these schemes are often run in parallel on big computers to achieve the best possible approximation. However, even on those big machines, the desired accuracy cannot be achieved within the maximal number of degrees of freedom that these machines allow. The main focus of this work therefore lies in the development of numerical schemes that give a better resolution of the resulting dynamics for the same number of degrees of freedom, compared to classical schemes.
This work is the result of a cooperation between Prof. Klingenberg of the Institute of Mathematics in Würzburg and Prof. Röpke of the Astrophysical Institute in Würzburg. The aim of this collaboration is the development of methods to compute stellar atmospheres. Two main challenges are tackled in this work. The first is the accurate treatment of source terms in the numerical scheme. This leads to so-called well-balanced schemes, which allow for an accurate approximation of near-equilibrium dynamics. The second challenge is the approximation of flows in the low Mach number regime. It is known that the compressible Euler equations tend towards the incompressible Euler equations as the Mach number tends to zero. Classical schemes often show excessive diffusion in that flow regime. The scheme developed here falls into the category of asymptotic preserving schemes, i.e. the numerical scheme reflects the asymptotic behavior of the continuous equations. Moreover, it is shown that the diffusion of the numerical scheme is independent of the Mach number.
In chapter 3, an HLL-type approximate Riemann solver is adapted for simulations of the Shallow Water equations with bottom topography in order to develop a well-balanced scheme. In the literature, most schemes only tackle the equilibria where the fluid is at rest, the so-called lake-at-rest solutions. Here a scheme is developed that accurately captures all equilibria of the Shallow Water equations. Moreover, in contrast to other works, a second order extension is proposed that does not rely on an iterative scheme inside the reconstruction procedure, leading to a more efficient scheme.
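The scheme of chapter 3 captures all equilibria, including moving ones. As a minimal hedged illustration of the simpler lake-at-rest property only, the following Python sketch combines a first-order Rusanov flux with hydrostatic reconstruction in the sense of Audusse et al.; the grid, topography and time step are illustrative choices, and this is not the solver developed in the thesis.

```python
import numpy as np

g = 9.81
N = 50
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
b = 0.1 * np.sin(2 * np.pi * x) ** 2   # bottom topography (illustrative)
h = 1.0 - b                             # lake at rest: h + b = const, u = 0
hu = np.zeros(N)

def rusanov(hl, hul, hr, hur):
    # local Lax-Friedrichs flux for the 1D shallow water system
    ul = hul / hl if hl > 1e-12 else 0.0
    ur = hur / hr if hr > 1e-12 else 0.0
    fl = np.array([hul, hul * ul + 0.5 * g * hl ** 2])
    fr = np.array([hur, hur * ur + 0.5 * g * hr ** 2])
    a = max(abs(ul) + np.sqrt(g * hl), abs(ur) + np.sqrt(g * hr))
    return 0.5 * (fl + fr) - 0.5 * a * np.array([hr - hl, hur - hul])

def step(h, hu, dt):
    # hydrostatic reconstruction (Audusse et al.): interface states are
    # measured relative to max(b_i, b_{i+1}); the pressure corrections
    # cl, cr cancel the flux gradient exactly when the fluid is at rest
    he = np.concatenate(([h[0]], h, [h[-1]]))   # zero-gradient ghost cells
    hue = np.concatenate(([hu[0]], hu, [hu[-1]]))
    be = np.concatenate(([b[0]], b, [b[-1]]))
    dh, dhu = np.zeros(N), np.zeros(N)
    for i in range(N + 1):                      # interfaces (extended index)
        bstar = max(be[i], be[i + 1])
        hl = max(he[i] + be[i] - bstar, 0.0)
        hr = max(he[i + 1] + be[i + 1] - bstar, 0.0)
        ul = hue[i] / he[i]
        ur = hue[i + 1] / he[i + 1]
        F = rusanov(hl, hl * ul, hr, hr * ur)
        cl = 0.5 * g * (he[i] ** 2 - hl ** 2)
        cr = 0.5 * g * (he[i + 1] ** 2 - hr ** 2)
        if i >= 1:                              # update left interior cell
            dh[i - 1] -= F[0]
            dhu[i - 1] -= F[1] + cl
        if i <= N - 1:                          # update right interior cell
            dh[i] += F[0]
            dhu[i] += F[1] + cr
    return h + dt / dx * dh, hu + dt / dx * dhu

for _ in range(50):
    h, hu = step(h, hu, 0.2 * dx / np.sqrt(g))

surface_err = float(np.max(np.abs(h + b - 1.0)))  # free surface stays flat
mom_err = float(np.max(np.abs(hu)))
```

A naive pointwise discretization of the topography source term would generate spurious waves of the order of the truncation error; the reconstruction keeps the rest state at machine precision.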
In chapter 4, a Suliciu relaxation scheme is adapted for the resolution of hydrostatic equilibria of the Euler equations with a gravitational potential. The hydrostatic relations are underdetermined, and therefore their solutions are not unique. Nevertheless, the scheme is shown to be well-balanced for a wide class of hydrostatic equilibria. For specific classes, quadrature rules are derived to ensure the exact well-balanced property. Moreover, the scheme is shown to be robust, i.e. it preserves the positivity of mass and energy, and stable with respect to the entropy. Numerical results are presented in order to investigate the impact of the different quadrature rules on the well-balanced property.
In chapter 5, a Suliciu relaxation scheme is adapted for the simulation of low Mach number flows. The scheme is shown to be asymptotic preserving and not to suffer from excessive diffusion in the low Mach number regime. Moreover, it is shown to be robust under certain parameter combinations and to be stable according to a Chapman-Enskog analysis.
Numerical results are presented in order to show the advantages of the new approach.
In chapter 6, the schemes developed in chapters 4 and 5 are combined in order to investigate the performance of the numerical scheme in the low Mach number regime in a gravitationally stratified atmosphere. The scheme is shown to be well-balanced, robust and stable with respect to a Chapman-Enskog analysis. Numerical tests are presented to show the advantage of the newly proposed method over the classical scheme.
In chapter 7, some remarks on an alternative way to tackle multidimensional simulations are presented. However, no numerical simulations are performed, and it is explained why further research on the suggested approach is necessary.
This thesis deals with the hp-finite element method (FEM) for linear quadratic optimal control problems. Here, a tracking type functional with control costs as regularization shall be minimized subject to an elliptic partial differential equation. In the presence of control constraints, the first order necessary conditions, which are typically used to find optimal solutions numerically, can be formulated as a semi-smooth projection formula. Consequently, optimal solutions may be non-smooth as well. The hp-discretization technique considers this fact and approximates rough functions on fine meshes while using higher order finite elements on domains where the solution is smooth.
The first main achievement of this thesis is the successful application of hp-FEM to two related problem classes: Neumann boundary and interface control problems. They are solved with a-priori refinement strategies called boundary concentrated (bc) FEM and interface concentrated (ic) FEM, respectively. These strategies generate grids that are heavily refined towards the boundary or interface. We construct an elementwise interpolant that allows us to prove algebraic decay of the approximation error for both techniques. Additionally, a detailed analysis of the global and local regularity of solutions, which is critical for the speed of convergence, is included. Since the bc- and ic-FEM retain small polynomial degrees for elements touching the boundary and interface, respectively, we are able to deduce novel error estimates in the L2- and L∞-norm. The latter allows an a-priori strategy for updating the regularization parameter in the objective functional to solve bang-bang problems.
Furthermore, we apply the traditional idea of the hp-FEM, i.e., grading the mesh geometrically towards vertices of the domain, to the solution of optimal control problems (vc-FEM). In doing so, we obtain exponential convergence with respect to the number of unknowns. This is proved with a regularity result in countably normed spaces for the variables of the coupled optimality system.
The second main achievement of this thesis is the development of a fully adaptive hp-interior point method that can solve problems with distributed or Neumann control. The underlying barrier problem yields a non-linear optimality system, which poses a numerical challenge: the numerically stable evaluation of integrals over possibly singular functions in higher order elements. We overcome this difficulty by monitoring the control variable at the integration points and enforcing feasibility in an additional smoothing step. In this work, we prove convergence of an interior point method with smoothing step and derive a-posteriori error estimators. The adaptive mesh refinement is based on the expansion of the solution in a Legendre series. The decay of the coefficients serves as an indicator for smoothness that guides the choice between h- and p-refinement.
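The Legendre-coefficient smoothness indicator can be illustrated with a short sketch (illustrative only; the quadrature size, test functions and thresholds are our own choices, not those of the thesis): the coefficients of an analytic function decay exponentially, while a kink produces slow algebraic decay, suggesting p- and h-refinement, respectively.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def legendre_coeffs(f, deg):
    # c_k = (2k+1)/2 * \int_{-1}^{1} f(x) P_k(x) dx via Gauss-Legendre quadrature
    xq, wq = leggauss(200)
    fx = f(xq)
    c = np.empty(deg + 1)
    for k in range(deg + 1):
        e = np.zeros(k + 1)
        e[k] = 1.0                  # coefficient vector selecting P_k
        c[k] = 0.5 * (2 * k + 1) * np.sum(wq * fx * legval(xq, e))
    return np.abs(c)

c_smooth = legendre_coeffs(np.exp, 12)   # analytic: exponential coefficient decay
c_rough = legendre_coeffs(np.abs, 12)    # kink at 0: slow algebraic decay

# compare the tail against a lower-order coefficient: a tiny ratio
# suggests p-refinement, a large ratio suggests h-refinement
ratio_smooth = float(c_smooth[10] / c_smooth[2])
ratio_rough = float(c_rough[10] / c_rough[2])
```

For exp the ratio is many orders of magnitude below that of |x|, which is exactly the signal such an indicator exploits.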
The investigation of interacting multi-agent models is a new field of mathematical research with applications to the study of behavior in groups of animals or communities of people. One interesting feature of multi-agent systems is collective behavior. From the mathematical point of view, one of the challenging issues concerning these dynamical models is the development of control mechanisms that are able to influence the time evolution of these systems.
In this thesis, we focus on the study of controllability, stabilization and optimal control problems for multi-agent systems, considering the following three models: The first one is the Hegselmann-Krause opinion formation (HK) model. The HK dynamics describes how individuals' opinions are changed by interaction with others taking place within a bounded domain of confidence. The study of this model focuses on determining feedback controls that drive the agents' opinions to a desired agreement. The second model is the Heider social balance (HB) model. The HB dynamics explains the evolution of relationships in a social network. One purpose of studying this system is the construction of a control function in order to steer the relationships to a friendship state. The third model that we discuss is a flocking model describing collective motion observed in biological systems. The flocking model under consideration includes self-propelling, friction, attraction, repulsion, and alignment features. We investigate a control for steering the flocking system to track a desired trajectory. Common to all these systems is our strategy of adding a leader agent that interacts with all other members of the system and carries the control mechanism.
Our control-through-leadership approach is developed using classical theoretical control methods and a model predictive control (MPC) scheme. To apply the former, the stability of the corresponding linearized system near consensus is investigated for each model, and local controllability is examined. However, only for the Hegselmann-Krause opinion formation model is a feedback control determined that steers the agents' opinions to globally converge to a desired agreement. The MPC approach is an optimal control strategy based on numerical optimization. To apply the MPC scheme, optimal control problems are formulated for each model, with objective functions that differ depending on the desired goal. The first-order necessary optimality conditions for each problem are presented. Moreover, for the numerical treatment, a sequence of open-loop discrete optimality systems is solved by accurate Runge-Kutta schemes, and a nonlinear conjugate gradient solver is implemented in the optimization procedure. Finally, numerical experiments are performed to investigate the properties of the multi-agent models and to demonstrate the ability of the proposed control strategies to drive multi-agent systems to a desired consensus and to track a given trajectory.
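As a toy illustration of the control-through-leadership idea for the HK model (an explicit Euler sketch with parameters chosen by us; the thesis' feedback law and MPC scheme are more sophisticated), a leader counted inside every agent's confidence neighbourhood can drag the group toward a prescribed opinion:

```python
import numpy as np

rng = np.random.default_rng(1)
N, eps, dt, steps = 20, 0.5, 0.05, 4000
x = rng.uniform(0.0, 1.0, N)      # followers' initial opinions
target = 0.8                       # desired agreement (illustrative choice)
leader = float(x.mean())           # the leader opinion is our control channel

for _ in range(steps):
    xa = np.append(x, leader)      # followers plus leader
    dx = np.zeros(N)
    for i in range(N):
        nb = np.abs(xa - x[i]) <= eps     # bounded confidence neighbourhood;
        dx[i] = np.mean(xa[nb] - x[i])    # the leader is included via xa
    x += dt * dx
    # simple proportional control: slowly move the leader toward the target
    leader += dt * 0.5 * (target - leader)

spread = float(x.max() - x.min())          # consensus quality
err = abs(float(x.mean()) - target)        # distance to desired agreement
```

Without the leader the HK dynamics would settle at a data-dependent consensus (or several clusters); the slowly moving leader pulls the cluster to the target.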
The classification of isoparametric hypersurfaces in spheres with a homogeneous focal manifold is a project that was started by Linus Kramer. It extends results by E. Cartan and by Hsiang and Lawson. Kramer carried out most of this classification in his Habilitationsschrift; in particular, he obtained a classification for the cases where the homogeneous focal manifold is at least 2-connected. Results of E. Cartan, Dorfmeister and Neher, and Takagi also solve parts of the classification problem. This thesis completes the classification. We classify all closed isoparametric hypersurfaces in spheres with g>2 distinct principal curvatures, one of whose multiplicities is 2, such that the lower dimensional focal manifold is homogeneous. The methods are essentially the same as in Kramer's Habilitationsschrift. The cohomology of the focal manifolds in question is known. This leads to two topological classification problems, which are also solved in this thesis. We classify simply connected homogeneous spaces of compact Lie groups with the same integral cohomology ring as a product of spheres S^2 x S^m with m odd on the one hand, and with rational cohomology ring a truncated polynomial ring Q[a]/(a^m) with one generator of even degree and m > 1 on the other hand.
It is well known that the least squares estimator performs poorly in the presence of multicollinearity. One way to overcome this problem is to use biased estimators, e.g. ridge regression estimators. In this study an estimation procedure is proposed that is based on adding a small quantity omega to some or all of the regressors. The resulting biased estimator is described as a function of omega, and it is shown that its mean squared error is smaller than that of the least squares estimator in the case of highly correlated regressors.
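Since the abstract itself points to ridge regression as the classical biased remedy, here is a minimal Monte Carlo sketch of the phenomenon (using a ridge-type estimator for illustration, not the thesis' omega-perturbation estimator; all parameters are arbitrary choices): a small deliberate bias beats least squares under strong collinearity.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, omega, reps = 50, 2, 1.0, 3000
beta = np.array([1.0, 1.0])

# two nearly collinear regressors: both columns are a common factor z
# plus a small independent perturbation
z = rng.normal(size=(n, 1))
X = z @ np.ones((1, p)) + 0.05 * rng.normal(size=(n, p))

I = np.eye(p)
mse_ls = mse_ridge = 0.0
for _ in range(reps):
    y = X @ beta + rng.normal(size=n)
    b_ls = np.linalg.solve(X.T @ X, X.T @ y)              # least squares
    b_r = np.linalg.solve(X.T @ X + omega * I, X.T @ y)   # ridge, parameter omega
    mse_ls += float(np.sum((b_ls - beta) ** 2)) / reps
    mse_ridge += float(np.sum((b_r - beta) ** 2)) / reps
```

The near-singular direction of X'X inflates the variance of least squares; the ridge term regularizes exactly that direction, so the biased estimator's mean squared error is much smaller here.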
This cumulative dissertation is organized as follows:
After the introduction, the second chapter, based on “Asymptotic independence of bivariate order statistics” (2017) by Falk and Wisheckel, is an investigation of the asymptotic dependence behavior of the components of bivariate order statistics. We find that the two components of the order statistics become asymptotically independent for certain combinations of (sequences of) indices, and it turns out that no further assumptions on the dependence of the two components in the underlying sample are necessary. To establish this, an explicit representation of the conditional distribution of bivariate order statistics is derived.
Chapter 3 is from “Conditional tail independence in archimedean copula models” (2019) by Falk, Padoan and Wisheckel and deals with the conditional distribution of an Archimedean copula, conditioned on one of its components. We show that its tails are independent under minor conditions on the generator function, even if the unconditional tails were dependent. The theoretical findings are underlined by a simulation study and can be generalized to Archimax copulas.
“Generalized pareto copulas: A key to multivariate extremes” (2019) by Falk, Padoan and Wisheckel leads to Chapter 4, where we introduce a nonparametric approach to estimate the probability that a random vector exceeds a fixed threshold if it follows a Generalized Pareto copula. To this end, some theory underlying the concept of Generalized Pareto distributions is presented first; the estimation procedure is then tested in a simulation and finally applied to a dataset of air pollution parameters recorded in Milan, Italy, from 2002 until 2017.
The fifth chapter collects some additional results on derivatives of D-norms, in particular a condition for the existence of directional derivatives, and multivariate spacings, specifically an explicit formula for the second-to-last bivariate spacing.
In the present work, linear systems of elliptic partial differential equations in weak formulation on conical domains are studied. On an initially unbounded cone domain we consider the case of bounded coefficient functions that depend only on the angular variables. The bilinear form defined by these coefficients is assumed to satisfy a Gårding inequality. Existence and uniqueness questions are settled in weighted Sobolev spaces, where the problem is reduced, by means of the Fourier transform, to a family T(·) of Fredholm operators depending on a complex parameter. Applying the residue calculus, we obtain a representation of the solution as a decomposition into a smooth part on the one hand and a finite sum of singular functions on the other. Using cut-off techniques, these results are then applied to weakly formulated elliptic systems on bounded cone domains, posed in ordinary, non-weighted Sobolev spaces. In the last chapter of the work, the eigenvalues of the operator function T with minimal positive imaginary part, which are decisive for regularity questions, are computed numerically for the example of the planar elasticity equations.
We compute genus-0 Belyi maps with prescribed monodromy and strictly verify the computed results. Among the computed examples are almost simple primitive groups that satisfy the rational rigidity criterion yielding polynomials with prescribed Galois groups over Q(t). We also give an explicit version of a theorem of Magaard, which lists all sporadic groups occurring as composition factors of monodromy groups of rational functions.
We generalize a theorem by Titchmarsh about the mean value of Hardy’s \(Z\)-function at the Gram points to the Hecke \(L\)-functions, which in turn implies the weak Gram law for them. Instead of proceeding analogously to Titchmarsh with an approximate functional equation we employ a different method using contour integration.
In the generalized Nash equilibrium problem, not only the cost function of a player depends on the rival players' decisions, but also his constraints. This thesis presents different iterative methods for the numerical computation of a generalized Nash equilibrium, some of them globally, others locally superlinearly convergent. These methods are based either on reformulations of the generalized Nash equilibrium problem as an optimization problem, or on a fixed point formulation. The key tool for these reformulations is the Nikaido-Isoda function. Numerical results for various problems from the literature are given.
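To illustrate the role of the Nikaido-Isoda function, here is a hedged sketch of a regularized fixed-point iteration on a standard two-player textbook example (not a problem from the thesis); for this quadratic game the inner maximization happens to have a closed form, which we derive in the comments.

```python
import numpy as np

# Two-player GNEP with the shared constraint x1 + x2 <= 1, x >= 0:
# player 1 minimizes (x1 - 1)^2 over x1, player 2 minimizes (x2 - 1/2)^2 over x2.
# The Nikaido-Isoda function is
#   Psi(x, y) = sum_nu [ theta_nu(x) - theta_nu(y_nu, x_-nu) ],
# and x is a normalized equilibrium iff y = x maximizes the regularized
# function Psi(x, y) - (alpha/2)|y - x|^2 over the common feasible set.

def best_response(x):
    # With alpha = 1, maximizing the regularized NI function amounts to
    # minimizing (y1-1)^2 + (y2-1/2)^2 + (1/2)|y - x|^2 s.t. y1 + y2 <= 1.
    # Unconstrained stationarity: 2(y1-1) + (y1-x1) = 0, so y1 = (2+x1)/3,
    # and analogously y2 = (1+x2)/3.
    y = np.array([(2.0 + x[0]) / 3.0, (1.0 + x[1]) / 3.0])
    if y[0] + y[1] > 1.0:
        # shared constraint active: minimize along y1 + y2 = 1, which gives
        # y1 = t with 6t - 4 - x1 + x2 = 0
        t = (4.0 + x[0] - x[1]) / 6.0
        y = np.array([t, 1.0 - t])
    return y

x = np.array([0.0, 0.0])
for _ in range(60):        # fixed-point iteration x <- y(x)
    x = best_response(x)
# x converges to the normalized equilibrium (3/4, 1/4)
```

The iteration contracts with rate 1/3 on this example, so the fixed point, a normalized equilibrium, is reached essentially exactly.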
In this work, the structure of (countable) abelian p-groups is investigated by considering the associated quasi-bases, which are defined as certain generating systems of the given p-group. The investigation is focused in particular on non-separable p-groups and their inductive quasi-bases.
In my Ph.D. thesis "On the geometry and parametrization of almost invariant subspaces and observer theory" I consider the set of almost conditioned invariant subspaces of fixed dimension for a given fixed linear finite-dimensional time-invariant observable control system in state space form. Almost conditioned invariant subspaces were introduced by Willems. They generalize the concept of a conditioned invariant subspace requiring the invariance condition to hold only up to an arbitrarily small deviation in the metric of the state space. One of the goals of the theory of almost conditioned invariant subspaces was to identify the subspaces appearing as limits of sequences of conditioned invariant subspaces. An example due to Özveren, Verghese and Willsky, however, shows that the set of almost conditioned invariant subspaces is not big enough. I address this question in a joint paper with Helmke and Fuhrmann (Towards a compactification of the set of conditioned invariant subspaces, Systems and Control Letters, 48(2):101-111, 2003). Antoulas derived a description of conditioned invariant subspaces as kernels of permuted and truncated reachability matrices of controllable pairs of the appropriate size. This description was used by Helmke and Fuhrmann to construct a diffeomorphism from the set of similarity classes of certain controllable pairs onto the set of tight conditioned invariant subspaces. In my thesis I generalize this result to almost conditioned invariant subspaces describing them in terms of restricted system equivalence classes of controllable triples. Furthermore, I identify the controllable pairs appearing in the kernel representations of conditioned invariant subspaces as being induced by corestrictions of the original system to the subspace. Conditioned invariant subspaces are known to be closely related to partial observers.
In fact, a tracking observer for a linear function of the state of the observed system exists if and only if the kernel of that function is conditioned invariant. In my thesis I show that the system matrices of the observers are in fact the corestrictions of the observed system to the kernels of the observed functions. They in turn are closely related to partial realizations. Exploring this connection further, I prove that the set of tracking observer parameters of fixed size, i.e. tracking observers of fixed order together with the functions they are tracking, is a smooth manifold. Furthermore, I construct a vector bundle structure for the set of conditioned invariant subspaces of fixed dimension together with their friends, i.e. the output injections making the subspaces invariant, over that manifold. Willems and Trentelman generalized the concept of a tracking observer by including derivatives of the output of the observed system in the observer equations (PID-observers). They showed that a PID-observer for a linear function of the state of the observed system exists if and only if the kernel of that function is almost conditioned invariant. In my thesis I replace PID-observers by singular systems, which has the advantage that the system matrices of the observers coincide with the matrices appearing in the kernel representations of the subspaces. In a second approach to the parametrization of conditioned invariant subspaces Hinrichsen, Münzner and Prätzel-Wolters, Fuhrmann and Helmke and Ferrer, F. Puerta, X. Puerta and Zaballa derived a description of conditioned invariant subspaces in terms of images of block Toeplitz type matrices. They used this description to construct a stratification of the set of conditioned invariant subspaces of fixed dimension into smooth manifolds. These so-called Brunovsky strata consist of all the subspaces with fixed restriction indices.
They constructed a cell decomposition of the Brunovsky strata into so-called Kronecker cells. In my thesis I show that in the tight case this cell decomposition is induced by a Bruhat decomposition of a generalized flag manifold. I identify the adherence order of the cell decomposition as being induced by the reverse Bruhat order.
Background
It is hypothesized that because of higher mast cell numbers and mediator release, mastocytosis predisposes patients for systemic immediate-type hypersensitivity reactions to certain drugs including non-steroidal anti-inflammatory drugs (NSAID).
Objective
To clarify whether patients with NSAID hypersensitivity show increased basal serum tryptase levels as sign for underlying mast cell disease.
Methods
As part of our allergy work-up, basal serum tryptase levels were determined in all patients with a diagnosis of NSAID hypersensitivity and the severity of the reaction was graded. Patients with confirmed IgE-mediated hymenoptera venom allergy served as a comparison group.
Results
Out of 284 patients with NSAID hypersensitivity, 26 were identified with basal serum tryptase > 10.0 ng/mL (9.2%). In contrast, significantly (P = .004) more hymenoptera venom allergic patients had elevated tryptase > 10.0 ng/mL (83 out of 484; 17.1%). Basal tryptase > 20.0 ng/mL was indicative for severe anaphylaxis only in venom allergic subjects (29 patients; 4x grade 2 and 25x grade 3 anaphylaxis), but not in NSAID hypersensitive patients (6 patients; 4x grade 1, 2x grade 2).
Conclusions
In contrast to hymenoptera venom allergy, NSAID hypersensitivity does not seem to be associated with elevated basal serum tryptase levels, and levels > 20 ng/mL were not related to increased severity of the clinical reaction. This suggests that mastocytosis patients may be treated with NSAID without special precautions.
In this paper, we prove an asymptotic formula for the sum of the values of the periodic zeta-function at the nontrivial zeros of the Riemann zeta-function (up to some height) which are symmetrical on the real line and the critical line. This is an extension of the previous results due to Garunkštis, Kalpokas, and, more recently, Sowa. Whereas Sowa's approach was assuming the yet unproved Riemann hypothesis, our result holds unconditionally.
In the following dissertation we consider three preconditioners of algebraic multigrid type; although they are defined for arbitrary prolongation and restriction operators, we examine them in more detail for the aggregation method. The strengthened Cauchy-Schwarz inequality and the resulting angle between the spaces are our main interest. In this context we introduce some modifications. For the problem of one-dimensional convection we obtain perfect theoretical results. Although this is not the case for more complex problems, the numerical results we present show that the modifications are also useful in these situations. Additionally, we consider a symmetric problem in the energy norm and present a simple rule for algebraic aggregation.
On the Fragility Index
(2011)
The Fragility Index captures the amount of risk in a stochastic system of arbitrary dimension. Its main mathematical tool is the asymptotic distribution of exceedance counts within the system, which can be derived by means of multivariate extreme value theory. The basic assumption is that the data come from a distribution which lies in the domain of attraction of a multivariate extreme value distribution. The Fragility Index and its extension can serve as a quantitative measure of tail dependence in arbitrary dimensions. It is linked to the well-known extremal index for stochastic processes as well as to the extremal coefficient of an extreme value distribution.
Mathematical programs with equilibrium constraints (or complementarity conditions), MPECs for short, are known to be extremely hard optimization problems. Finding local minima or suitable stationary points is a nontrivial problem. This thesis describes how the special structure of MPECs can nevertheless be exploited and how a branch-and-bound method yields a global minimum of linear programs with equilibrium constraints, LPECs for short. Furthermore, this branch-and-bound algorithm is used within a filter-SQPEC method in order to solve general MPECs. A global convergence theorem is proved for the filter-SQPEC method. In addition, numerical results are reported for both methods.
Beatty sets (also called Beatty sequences) appeared as early as 1772 in the astronomical studies of Johann III Bernoulli as a tool for easing manual calculations and, as Elwin Bruno Christoffel pointed out in 1888, lend themselves to exposing intricate properties of the real irrationals. Since then, numerous researchers have explored a multitude of arithmetic properties of Beatty sets; the interrelation between Beatty sets and modular inversion, and between Beatty sets and the set of rational primes, is the central topic of this book. The inquiry into the relation to rational primes is complemented by considering a natural generalisation to imaginary quadratic number fields.
In this dissertation, we develop and analyze novel optimizing feedback laws for control-affine systems with real-valued state-dependent output (or objective) functions. Given a control-affine system, our goal is to derive an output-feedback law that asymptotically stabilizes the closed-loop system around states at which the output function attains a minimum value. The control strategy has to be designed in such a way that an implementation only requires real-time measurements of the output value. Additional information, like the current system state or the gradient vector of the output function, is not assumed to be known. A method that meets all these criteria is called an extremum seeking control law. We follow a recently established approach to extremum seeking control, which is based on approximations of Lie brackets. For this purpose, the measured output is modulated by suitable highly oscillatory signals and is then fed back into the system. Averaging techniques for control-affine systems with highly oscillatory inputs reveal that the closed-loop system is driven, at least approximately, into the directions of certain Lie brackets. A suitable design of the control law ensures that these Lie brackets point into descent directions of the output function. Under suitable assumptions, this method leads to the effect that minima of the output function are practically uniformly asymptotically stable for the closed-loop system. The present document extends and improves this approach in various ways.
One of the novelties is a control strategy that leads not only to practical asymptotic stability, but in fact to asymptotic and even exponential stability. In this context, we focus on the application of distance-based formation control in autonomous multi-agent systems in which only distance measurements are available. This means that the target formations as well as the sensed variables are determined by distances. We propose a fully distributed control law, which only involves distance measurements for each individual agent to stabilize a desired formation shape, while a storage of measured data is not required. The approach is applicable to point agents in Euclidean space of arbitrary (but finite) dimension. Under the assumption of infinitesimal rigidity of the target formations, we show that the proposed control law induces local uniform asymptotic (and even exponential) stability. A similar statement is also derived for nonholonomic unicycle agents with all-to-all communication. We also show how the findings can be used to solve extremum seeking control problems.
Another contribution is an extremum seeking control law with an adaptive dither signal. We present an output-feedback law that steers a fully actuated control-affine system with general drift vector field to a minimum of the output function. A key novelty of the approach is an adaptive choice of the frequency parameter. In this way, the task of determining a sufficiently large frequency parameter becomes obsolete. The adaptive choice of the frequency parameter also prevents finite escape times in the presence of a drift. The proposed control law leads not only to convergence into a neighborhood of a minimum, but to exact convergence. For the case of an output function with a global minimum and no other critical point, we prove global convergence.
Finally, we present an extremum seeking control law for a class of nonholonomic systems. A detailed averaging analysis reveals that the closed-loop system is driven approximately into descent directions of the output function along Lie brackets of the control vector fields. Those descent directions also originate from an approximation of suitably chosen Lie brackets. This requires a two-fold approximation of Lie brackets on different time scales. The proposed method can lead to practical asymptotic stability even if the control vector fields do not span the entire tangent space. It suffices instead that the tangent space is spanned by the elements in the Lie algebra generated by the control vector fields. This novel feature extends extremum seeking by Lie bracket approximations from the class of fully actuated systems to a larger class of nonholonomic systems.
Our starting point is the Jacobsthal function \(j(m)\), defined for each positive integer \(m\) as the smallest number such that every \(j(m)\) consecutive integers contain at least one integer relatively prime to \(m\). It has turned out that improving on upper bounds for \(j(m)\) would also lead to advances in understanding the distribution of prime numbers among arithmetic progressions. If \(P_r\) denotes the product of the first \(r\) prime numbers, then a conjecture of Montgomery states that \(j(P_r)\) can be bounded from above by \(r (\log r)^2\) up to some constant factor. However, the hitherto very promising sieve methods seem to have reached a limit here, and the main goal of this work is to develop other combinatorial methods in the hope of coming a bit closer to proving Montgomery's conjecture. Alongside, we solve a problem of Recamán about the maximum possible length of arithmetic progressions in the least (positive) reduced residue system modulo \(m\). Lastly, we turn towards three additive representation functions as introduced by Erdős, Sárközy and Sós, who studied their surprisingly different monotonicity behavior. By an alternative approach, we answer a question of Sárközy and demonstrate that another conjecture does not hold.
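The definition of \(j(m)\) translates directly into a brute-force computation: since the integers coprime to \(m\) are periodic modulo \(m\), \(j(m)\) equals the largest gap between consecutive totatives over one period. A minimal sketch (the function name and the scan over two periods are our own choices, not part of the thesis):

```python
from math import gcd

def jacobsthal(m):
    """Naive Jacobsthal function j(m): the smallest j such that every j
    consecutive integers contain one coprime to m.  Equivalently, the
    largest gap between consecutive integers coprime to m; scanning
    1..2m covers all gaps of one full period, including the wrap-around."""
    coprimes = [k for k in range(1, 2 * m + 1) if gcd(k, m) == 1]
    return max(b - a for a, b in zip(coprimes, coprimes[1:]))

# P_3 = 2*3*5 = 30: the totatives 1, 7, 11, 13, 17, 19, 23, 29, 31, ...
# have maximal gap 6, attained e.g. between 1 and 7.
assert jacobsthal(30) == 6
```

For the first primorials this reproduces the known values \(j(2)=2\), \(j(6)=4\), \(j(30)=6\), \(j(210)=10\); the quadratic cost in \(m\) makes the sketch useful only for very small moduli.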
Background
Referring to individuals with reactivity to honey bee and Vespula venom in diagnostic tests, the umbrella terms “double sensitization” or “double positivity” cover patients with true clinical double allergy and those allergic to a single venom with asymptomatic sensitization to the other. There is no international consensus on whether immunotherapy regimens should generally include both venoms in double sensitized patients.
Objective
We investigated the long-term outcome of single venom-based immunotherapy with regard to potential risk factors for treatment failure and specifically compared the risk of relapse in mono sensitized and double sensitized patients.
Methods
Re-sting data were obtained from 635 patients who had completed at least 3 years of immunotherapy between 1988 and 2008. The adequate venom for immunotherapy was selected using an algorithm based on clinical details and the results of diagnostic tests.
Results
Of 635 patients, 351 (55.3%) were double sensitized to both venoms. The overall re-exposure rate to Hymenoptera stings during and after immunotherapy was 62.4%; the relapse rate was 7.1% (6.0% in mono sensitized, 7.8% in double sensitized patients). Recurring anaphylaxis was significantly less severe than the index sting reaction (P = 0.004). Double sensitization was not significantly related to relapsing anaphylaxis (P = 0.56), but there was a tendency towards an increased risk of relapse in a subgroup of patients with equal reactivity to both venoms in diagnostic tests (P = 0.15).
Conclusions
Single venom-based immunotherapy over 3 to 5 years effectively and lastingly protects the vast majority of both mono sensitized and double sensitized Hymenoptera venom allergic patients. Double venom immunotherapy is indicated in clinically double allergic patients reporting systemic reactions to stings of both Hymenoptera species and in those with equal reactivity to both venoms in diagnostic tests who have not reliably identified the culprit stinging insect.
We extend Bourgain's bound for the order of growth of the Riemann zeta function on the critical line to Lerch zeta functions. More precisely, we prove L(λ, α, 1/2 + it) ≪ t\(^{13/84+ϵ}\) as t → ∞. For both the Riemann zeta function and the more general Lerch zeta function, it is conjectured that the right-hand side can be replaced by t\(^ϵ\) (the so-called Lindelöf hypothesis). The growth of an analytic function is closely related to the distribution of its zeros.
For an arbitrary complex number a≠0 we consider the distribution of values of the Riemann zeta-function ζ at the a-points of the function Δ which appears in the functional equation ζ(s)=Δ(s)ζ(1−s). These a-points δa are clustered around the critical line 1/2+i\(\mathbb {R}\) which happens to be a Julia line for the essential singularity of ζ at infinity. We observe a remarkable average behaviour for the sequence of values ζ(δ\(_a\)).
Lagrange Multiplier Methods for Constrained Optimization and Variational Problems in Banach Spaces
(2018)
This thesis is concerned with a class of general-purpose algorithms for constrained minimization problems, variational inequalities, and quasi-variational inequalities in Banach spaces.
A substantial amount of background material from Banach space theory, convex analysis, variational analysis, and optimization theory is presented, including some results which are refinements of those existing in the literature. This basis is used to formulate an augmented Lagrangian algorithm with multiplier safeguarding for the solution of constrained optimization problems in Banach spaces. The method is analyzed in terms of local and global convergence, and many popular problem classes such as nonlinear programming, semidefinite programming, and function space optimization are shown to be included as special cases of the general setting.
The algorithmic framework is then extended to variational and quasi-variational inequalities, which include, by extension, Nash and generalized Nash equilibrium problems. For these problem classes, the convergence is analyzed in detail. The thesis then presents a rich collection of application examples for all problem classes, including implementation details and numerical results.
Spiral-type surfaces (Spiraltypflächen) are minimal surfaces of three-dimensional Euclidean space distinguished by a high degree of symmetry under complex similarity transformations of the minimal curve. They owe their name to the following property: they and their complex homothetic images are the only minimal surfaces developable onto spiral surfaces. Well-known spiral-type surfaces are the spiral minimal surfaces (simultaneously minimal and spiral surfaces) and the Bour surfaces (minimal surfaces developable onto surfaces of revolution). The catenoid and the Enneper surface are special Bour surfaces. In this thesis the geometric properties of spiral-type surfaces are investigated. We determine their periodicities and symmetries and seek distinguished curves on them. We use a global Weierstrass representation of the spiral-type surfaces; in this representation the surfaces form a family with one complex family parameter. From this representation we derive all symmetries of the spiral-type surfaces from linear similarity transformations of the minimal curve. As special cases we obtain the symmetries under association and derivation (rotation of the minimal curve by an imaginary angle) as well as the real symmetries (rotational, reflective and scaling symmetries). Among the spiral-type surfaces there are only two translation-symmetric surfaces. Reversing the orientation of a spiral-type surface corresponds (up to complex homothety) to a sign change of the surface parameter. Moreover, simple reflections in the coordinate planes or rotations about the coordinate axes reverse the sign of the real or imaginary part of the surface parameter, respectively. Finally, we present distinguished curves on the spiral-type surfaces: curvature lines, asymptotic lines and geodesics, as well as their generalizations, the pseudo-curvature lines and pseudo-geodesics.
In the thesis discrete moments of the Riemann zeta-function and allied Dirichlet series are studied.
In the first part the asymptotic value-distribution of zeta-functions is studied, where the samples are taken from a Cauchy random walk on a vertical line inside the critical strip. Building on techniques by Lifshits and Weber, analogous results for the Hurwitz zeta-function are derived. Using Atkinson's dissection, this is even generalized to Dirichlet L-functions associated with a primitive character. Both results indicate that the expectation value equals one, which shows that the values of these zeta-functions are small on average.
The second part deals with the logarithmic derivative of the Riemann zeta-function on vertical lines and here the samples are with respect to an explicit ergodic transformation. Extending work of Steuding, discrete moments are evaluated and an equivalent formulation for the Riemann Hypothesis in terms of ergodic theory is obtained.
In the third and last part of the thesis, the phenomenon of universality with respect to stochastic processes is studied. It is shown that certain random shifts of the zeta-function can approximate non-vanishing analytic target functions as well as we please. This result relies on Voronin's universality theorem.
In this work, multi-particle quantum optimal control problems are studied in the framework of time-dependent density functional theory (TDDFT).
Quantum control problems are of great importance in both fundamental research and application of atomic and molecular systems. Typical applications are laser induced chemical reactions, nuclear magnetic resonance experiments, and quantum computing.
Theoretically, the problem of how to describe a non-relativistic system of multiple particles is solved by the Schrödinger equation (SE). However, due to the exponential increase in numerical complexity with the number of particles, it is impossible to directly solve the Schrödinger equation for large systems of interest. An efficient and successful approach to overcome this difficulty is the framework of TDDFT and the use of the time-dependent Kohn-Sham (TDKS) equations therein.
This is done by replacing the multi-particle SE with a set of nonlinear single-particle Schrödinger equations that are coupled through an additional potential.
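The abstract does not display these equations themselves; in the notation commonly used in the TDDFT literature (a sketch, with Hartree potential \(v_H\) and exchange-correlation potential \(v_{xc}\), both depending on the density \(\rho\); the symbols are not taken from the thesis), the TDKS system has the form

```latex
i\,\partial_t \varphi_j(x,t)
  = \Bigl(-\tfrac{1}{2}\Delta + v_{\mathrm{ext}}(x,t)
      + v_{H}[\rho](x,t) + v_{xc}[\rho](x,t)\Bigr)\varphi_j(x,t),
  \qquad j = 1,\dots,N,
```

with the coupling provided through the density \(\rho(x,t) = \sum_{j=1}^{N} |\varphi_j(x,t)|^2\), so that \(N\) nonlinear single-particle equations replace one \(N\)-particle Schrödinger equation.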
Despite the fact that TDDFT is widely used for physical and quantum chemical calculations, and software packages for its use are readily available, its mathematical foundation is still under active development and even fundamental issues remain unproven today.
The main purpose of this thesis is to provide a consistent and rigorous setting for the TDKS equations and for the related optimal control problems.
In the first part of the thesis, the frameworks of density functional theory (DFT) and TDDFT are introduced. This includes a detailed presentation of the different functional sets forming DFT. Furthermore, the known equivalence of the TDKS system to the original SE problem is discussed further.
To implement the TDDFT framework for multi-particle computations, the TDKS equations provide one of the most successful approaches available today. However, only a few mathematical results concerning these equations are available, and these results do not cover all issues that arise in the formulation of optimal control problems governed by the TDKS model.
It is the purpose of the second part of this thesis to address these issues, such as higher regularity of TDKS solutions and the case of weaker requirements on external (control) potentials, which are instrumental for the formulation of well-posed TDKS control problems. For this purpose, existence and uniqueness of TDKS solutions are investigated in the Galerkin framework, using energy estimates for the nonlinear TDKS equations.
In the third part of this thesis, optimal control problems governed by the TDKS model are formulated and investigated. For this purpose, relevant cost functionals that model the purpose of the control are discussed.
Hence, TDKS control problems result from the requirement of optimizing the given cost functionals subject to the differential constraint given by the TDKS equations. The analysis of these problems is novel and represents one of the main contributions of the present thesis.
In particular, existence of minimizers is proved and their characterization by TDKS optimality systems is discussed in detail.
To this end, Fréchet differentiability of the TDKS model and of the cost functionals is addressed, considering an \(H^1\) cost of the control.
This part is concluded by deriving the reduced gradient in the \(L^2\) and \(H^1\) inner product.
While \(L^2\) optimization is widespread in the literature, the choice of the \(H^1\) gradient is motivated in this work by theoretical considerations and by the resulting numerical advantages.
The last part of the thesis is devoted to the numerical approximation of the TDKS optimality systems and to their solution by gradient-based optimization techniques.
For the former purpose, Strang time-splitting pseudo-spectral schemes are discussed including a review of some recent theoretical estimates for these schemes and a numerical validation of these estimates.
For the latter purpose, nonlinear (projected) conjugate gradient methods are implemented and are used to validate the theoretical analysis of this thesis with results of numerical experiments with different cost functional settings.
The starting point of the thesis is the {\it universality} property of the Riemann Zeta-function $\zeta(s)$
which was proved by Voronin in 1975:
{\it Given a positive number $\varepsilon>0$ and an analytic non-vanishing function $f$ defined on a compact subset $\mathcal{K}$ of the strip $\left\{s\in\mathbb{C}:1/2 < \Re s< 1\right\}$ with connected complement, there exists a real number $\tau$ such that
\begin{align}\label{continuous}
\max\limits_{s\in \mathcal{K}}|\zeta(s+i\tau)-f(s)|<\varepsilon.
\end{align}
}
In 1980, Reich proved a discrete analogue of Voronin’s theorem, also known as {\it discrete universality theorem} for $\zeta(s)$:
{\it If $\mathcal{K}$, $f$ and $\varepsilon$ are as before, then
\begin{align}\label{discretee}
\liminf\limits_{N\to\infty}\dfrac{1}{N}\sharp\left\{1\leq n\leq N:\max\limits_{s\in \mathcal{K}}|\zeta(s+i\Delta n)-f(s)|<\varepsilon\right\}>0,
\end{align}
where $\Delta$ is an arbitrary but fixed positive number.
}
We aim at developing a theory which can be applied to prove the majority of the discrete universality theorems existing so far in the case of Dirichlet $L$-functions $L(s,\chi)$ and Hurwitz zeta-functions $\zeta(s;\alpha)$,
where $\chi$ is a Dirichlet character and $\alpha\in(0,1]$, respectively.
Both of the aforementioned classes of functions are generalizations of $\zeta(s)$, since $\zeta(s)=L(s,\chi_0)=\zeta(s;1)$, where $\chi_0$ is the principal Dirichlet character mod 1.
Amongst others, we prove statement (2) where instead of $\zeta(s)$ we have $L(s,\chi)$ for some Dirichlet character $\chi$ or $\zeta(s;\alpha)$ for some transcendental or rational number $\alpha\in(0,1]$, and instead of $(\Delta n)_{n\in\mathbb{N}}$ we can have:
\begin{enumerate}
\item \textit{Beatty sequences,}
\item \textit{sequences of ordinates of $c$-points of zeta-functions from the Selberg class,}
\item \textit{sequences which are generated by polynomials.}
\end{enumerate}
In all the preceding cases, the notion of {\it uniformly distributed sequences} plays an important role and we draw attention to it wherever we can.
Moreover, for the case of polynomials, we employ more advanced techniques from Analytic Number Theory such as bounds of exponential sums and zero-density estimates for Dirichlet $L$-functions.
This will allow us to prove the existence of discrete second moments of $L(s,\chi)$ and $\zeta(s;\alpha)$ to the left of the vertical line $1+i\mathbb{R}$, with respect to polynomials.
In the case of the Hurwitz Zeta-function $\zeta(s;\alpha)$, where $\alpha$ is transcendental or rational but not equal to $1/2$ or 1, the target function $f$ in (1) or (2), where $\zeta(\cdot)$ is replaced by $\zeta(\cdot;\alpha)$, is also allowed to have zeros.
Until recently there was no result regarding the universality of $\zeta(s;\alpha)$ in the literature whenever $\alpha$ is an algebraic irrational.
In the second half of the thesis, we prove that a weak version of statement \eqref{continuous} for $\zeta(s;\alpha)$ holds for all but finitely many algebraic irrational $\alpha$ in $[A,1]$, where $A\in(0,1]$ is an arbitrary but fixed real number.
Lastly, we prove that the ordinary Dirichlet series
$\zeta(s;f)=\sum_{n\geq1}f(n)n^{-s}$ and $\zeta_\alpha(s)=\sum_{n\geq1}\lfloor P(\alpha n+\beta)\rfloor^{-s}$
are hypertranscendental, where $f:\mathbb{N}\to\mathbb{C}$ is a {\it Besicovitch almost periodic arithmetical function}, $\alpha,\beta>0$ are such that $\lfloor\alpha+\beta\rfloor>1$ and $P\in\mathbb{Z}[X]$ is such that $P(\mathbb{N})\subseteq\mathbb{N}$.
A torsion free abelian group of finite rank is called almost completely decomposable if it has a completely decomposable subgroup of finite index. A p-local, p-reduced almost completely decomposable group of type (1,2) is briefly called a (1,2)-group. Almost completely decomposable groups can be represented by matrices over the ring Z/hZ, where h is the exponent of the regulator quotient. This particular choice of representation allows for a better investigation of the decomposability of the group. Arnold and Dugas showed in several of their works that (1,2)-groups with regulator quotient of exponent at least p^7 allow infinitely many isomorphism types of indecomposable groups. It is not known if the exponent 7 is minimal. In this dissertation, this problem is addressed.
Mathematical concepts are regularly used in media reports concerning the Covid-19 pandemic. These include growth models, which attempt to explain or predict the effectiveness of interventions and developments, as well as the reproduction factor. Our contribution aims to show that basic mental models of exponential growth are important for understanding media reports on Covid-19. Furthermore, we highlight how the coronavirus pandemic can be used as a context in mathematics classrooms to help students understand that they can and should question media reports on their own, using their mathematical knowledge. Therefore, we first present the role of mathematical modelling in achieving these goals in general. The same relevance applies to the necessary basic mental models of exponential growth. Following this description, based on three topics, namely investigating the type of growth, questioning given course models, and determining exponential factors at different times, we show how the presented theoretical aspects manifest themselves in teaching examples when students are given the task of reflecting critically on existing media reports. Finally, the value of the three topics with regard to the intended goals is discussed and conclusions concerning the possibilities and limits of their use in schools are drawn.
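The first of the three topics, investigating the type of growth, can be sketched in a few lines of code on made-up case numbers (the data and variable names are purely illustrative, not taken from the article): exponential growth shows a roughly constant quotient between successive values, whereas linear growth shows a roughly constant difference.

```python
# Hypothetical daily case counts used only to illustrate the classroom task.
cases = [100, 200, 400, 800, 1600]

# Successive quotients: constant for exponential growth.
quotients = [b / a for a, b in zip(cases, cases[1:])]

# Successive differences: constant for linear growth.
differences = [b - a for a, b in zip(cases, cases[1:])]

print(quotients)    # a constant factor of 2.0 -> the exponential model fits
print(differences)  # the differences keep growing -> the linear model does not
```

A constant quotient of 2.0 also corresponds to a doubling time of one day, which connects this check to the discussion of exponential factors at different times.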
This dissertation investigates the application of multivariate Chebyshev polynomials in the algebraic signal processing theory for the development of FFT-like algorithms for discrete cosine transforms on weight lattices of compact Lie groups. After an introduction of the algebraic signal processing theory, a multivariate Gauss-Jacobi procedure for the development of orthogonal transforms is proven. Two theorems on fast algorithms in algebraic signal processing, one based on a decomposition property of certain polynomials and the other based on induced modules, are proven as multivariate generalizations of prior theorems. The definition of multivariate Chebyshev polynomials based on the theory of root systems is recalled. It is shown how to use these polynomials to define discrete cosine transforms on weight lattices of compact Lie groups. Furthermore it is shown how to develop FFT-like algorithms for these transforms. Then the theory of matrix-valued, multivariate Chebyshev polynomials is developed based on prior ideas. Under an existence assumption a formula for generating functions of these matrix-valued Chebyshev polynomials is deduced.
In this thesis a new and powerful approach for modeling laser cavity eigenmodes is presented. This approach is based on an eigenvalue problem for singularly perturbed partial differential operators with complex coefficients; such operators have not been investigated in detail until now. The eigenvalue problem is discretized by finite elements, and convergence of the approximate solution is proved by using an abstract convergence theory also developed in this dissertation. This theory for the convergence of an approximate solution of a (quadratic) eigenvalue problem, which particularly can be applied to a finite element discretization, is interesting on its own, since the ideas can conceivably be used to handle equations with a more complex nonlinearity. The discretized eigenvalue problem essentially is solved by preconditioned GMRES, where the preconditioner is constructed according to the underlying physics of the problem. The power and correctness of the new approach for computing laser cavity eigenmodes is clearly demonstrated by successfully simulating a variety of different cavity configurations. The thesis is organized as follows: Chapter 1 contains a short overview on solving the so-called Helmholtz equation with the help of finite elements. The main part of Chapter 2 is dedicated to the analysis of a one-dimensional model problem containing the main idea of a new model for laser cavity eigenmodes which is derived in detail in Chapter 3. Chapter 4 comprises a convergence theory for the approximate solution of quadratic eigenvalue problems. In Chapter 5, a stabilized finite element discretization of the new model is described and its convergence is proved by applying the theory of Chapter 4. Chapter 6 contains computational aspects of solving the resulting system of equations and, finally, Chapter 7 presents numerical results for various configurations, demonstrating the practical relevance of our new approach.
This thesis discusses and proposes a solution for one problem arising from deformation quantization:
Having constructed the quantization of a classical system, one would like to understand the mathematical properties of both the classical and the quantum system. Especially if both systems are described by ∗-algebras over the field of complex numbers, this means understanding the properties of certain ∗-algebras: What are their representations? What are the properties of these representations? How can the states be described in these representations? How can the spectrum of the observables be described?
In order to allow for a sufficiently general treatment of these questions, the concept of abstract O∗-algebras is introduced. Roughly speaking, these are ∗-algebras together with a cone of positive linear functionals on them (e.g. the continuous ones if one starts with a ∗-algebra that is endowed with a well-behaved topology). This language is then applied to two examples from deformation quantization, which are studied in great detail.
In the verification of positive Harris recurrence of multiclass queueing networks the stability analysis for the class of fluid networks is of vital interest. This thesis addresses stability of fluid networks from a Lyapunov point of view. In particular, the focus is on converse Lyapunov theorems. To gain a unified approach the considerations are based on generic properties that fluid networks under widely used disciplines have in common. It is shown that the class of closed generic fluid network models (closed GFNs) is too wide to provide a reasonable Lyapunov theory. To overcome this fact the class of strict generic fluid network models (strict GFNs) is introduced. In this class it is required that closed GFNs additionally satisfy a concatenation and a lower semicontinuity condition. We show that for strict GFNs a converse Lyapunov theorem is true which provides a continuous Lyapunov function. Moreover, it is shown that for strict GFNs satisfying a trajectory estimate a smooth converse Lyapunov theorem holds. To see that widely used queueing disciplines fulfill the additional conditions, fluid networks are considered from a differential inclusions perspective. Within this approach it turns out that fluid networks under general work-conserving, priority and proportional processor-sharing disciplines define strict GFNs. Furthermore, we provide an alternative proof for the fact that the Markov process underlying a multiclass queueing network is positive Harris recurrent if the associated fluid network defining a strict GFN is stable. The proof explicitly uses the Lyapunov function admitted by the stable strict GFN. Also, the differential inclusions approach shows that first-in-first-out disciplines play a special role.
In this paper we consider the class (θA, B) of parameter-dependent linear systems given by matrices A ∈ ℂ\(^{n\times n}\) and B ∈ ℂ\(^{n\times m}\). This class is of interest for several applications, and a frequently met task for such systems is to steer the origin toward a given target family f(θ) by using an input that is independent of the parameter. This paper provides a collection of necessary and sufficient conditions for ensemble reachability of these systems.
The subject of this thesis is the rigorous passage from discrete systems to continuum models via variational methods.
The first part of this work studies a discrete model describing a one-dimensional chain of atoms with finite range interactions of Lennard-Jones type. We derive an expansion of the ground state energy using \(\Gamma\)-convergence. In particular, we show that a variant of the Cauchy-Born rule holds true for the model under consideration. We exploit this observation to derive boundary layer energies due to asymmetries of the lattice at the boundary or at cracks of the specimen. Hereby we extend several results obtained previously for models involving only nearest and next-to-nearest neighbour interactions by Braides and Cicalese and Scardia, Schlömerkemper and Zanini.
The second part of this thesis is devoted to the analysis of a quasi-continuum (QC) method. To this end, we consider the discrete model studied in the first part of this thesis as the fully atomistic model problem and construct an approximation based on a QC method. We show that in an elastic setting the expansion by \(\Gamma\)-convergence of the fully atomistic energy and its QC approximation coincide. In the case of fracture, we show that this is not true in general. In the case of only nearest and next-to-nearest neighbour interactions, we give sufficient conditions on the QC approximation such that, also in case of fracture, the minimal energies of the fully atomistic energy and its approximation coincide in the limit.
The aim of this work is to provide further insight into the qualitative behavior of mechanical systems that are well described by Lennard-Jones type interactions on an atomistic scale. By means of Gamma-convergence techniques, we study the continuum limit of one-dimensional chains of atoms with finite range interactions of Lennard-Jones type, including the classical Lennard-Jones potentials. So far, explicit formulas for the continuum limit were available only for the case of nearest and next-to-nearest neighbour interactions. In this work, we provide an explicit expression for the continuum limit in the case of finite range interactions. The obtained homogenization formula is given by the convexification of a Cauchy-Born energy density. Furthermore, we study rescaled energies in which bulk and surface contributions scale in the same way. The related discrete-to-continuum limit yields a rigorous derivation of a one-dimensional version of Griffith's fracture energy and thus generalizes earlier derivations for nearest and next-to-nearest neighbour interactions to the case of finite range interactions. A crucial ingredient of our proofs is a novel decomposition of the energy that allows for refined estimates.
The subject of this thesis is mathematical programs with complementarity constraints (MPCCs). First, an economic example of this problem class is analyzed: the problem of effort maximization in asymmetric n-person contest games. While an analytical solution could be derived for this special problem, this is not possible for MPCCs in general. Therefore, optimality conditions which might be used for numerical approaches were considered next. More precisely, a Fritz John result for MPCCs with stronger properties than those known so far was derived, together with some new constraint qualifications, and subsequently used to prove an exact penalty result. Finally, to solve MPCCs numerically, the so-called relaxation approach was used. Besides improving the results for existing relaxation methods, a new relaxation with strong convergence properties was suggested, and a numerical comparison of all methods based on the MacMPEC collection was conducted.
Consider the situation where two or more images are taken of the same object. After taking the first image, the object is moved or rotated so that the second recording depicts it in a different manner. Additionally, the imaging technique itself may have been changed. One of the main problems in image processing is to determine the spatial relation between such images. The corresponding process of finding the spatial alignment is called “registration”. In this work, we study the optimization problem which corresponds to the registration task. In particular, we exploit the Lie group structure of the set of transformations to construct efficient, intrinsic algorithms. We also apply the algorithms to medical registration tasks. However, the methods developed are not restricted to the field of medical image processing. We also take a closer look at more general forms of optimization problems and show connections to related tasks.
The present thesis considers the development and analysis of arbitrary Lagrangian-Eulerian
discontinuous Galerkin (ALE-DG) methods with time-dependent approximation spaces for
conservation laws and the Hamilton-Jacobi equations.
Fundamentals about conservation laws, Hamilton-Jacobi equations and discontinuous Galerkin
methods are presented. In particular, issues in the development of discontinuous Galerkin (DG)
methods for the Hamilton-Jacobi equations are discussed.
The development of the ALE-DG methods is based on the assumption that the distribution of
the grid points is explicitly given for an upcoming time level. This assumption allows the construction of a time-dependent local affine linear mapping to a reference cell and a time-dependent
finite element test function space. In addition, a version of Reynolds’ transport theorem can be
proven.
For the fully-discrete ALE-DG method for nonlinear scalar conservation laws the geometric
conservation law and a local maximum principle are proven. Furthermore, conditions for slope
limiters are stated. These conditions ensure the total variation stability of the method. In addition, entropy stability is discussed. For the corresponding semi-discrete ALE-DG method,
error estimates are proven. If a piecewise $\mathcal{P}^{k}$ polynomial approximation space is used on the reference cell, the sub-optimal $\left(k+\frac{1}{2}\right)$ convergence for monotone fluxes and the optimal $(k+1)$ convergence for an upwind flux are proven in the $\mathrm{L}^{2}$-norm. The capability of the method is shown by numerical examples for nonlinear conservation laws.
Likewise, for the semi-discrete ALE-DG method for nonlinear Hamilton-Jacobi equations, error
estimates are proven. In the one-dimensional case the optimal $\left(k+1\right)$ convergence and in the two-dimensional case the sub-optimal $\left(k+\frac{1}{2}\right)$ convergence are proven in the $\mathrm{L}^{2}$-norm, if a piecewise $\mathcal{P}^{k}$ polynomial approximation space is used on the reference cell. For the fully-discrete method, the geometric conservation law is proven, and for the piecewise constant forward Euler step the convergence of the method to the unique physically relevant solution is discussed.
The dissertation investigates the wide class of Epstein zeta-functions in terms of uniform distribution modulo one of the ordinates of their nontrivial zeros. The main results are a proof of a Landau-type theorem for all Epstein zeta-functions as well as uniform distribution modulo one of the zero ordinates of all Epstein zeta-functions associated with binary quadratic forms.
The work at hand studies problems from Loewner theory and is divided into two parts:
In part 1 (chapter 2) we present the basic notions of Loewner theory. Here we use a modern form which was developed by F. Bracci, M. Contreras, S. Díaz-Madrigal et al. and which can be applied to certain higher dimensional complex manifolds.
We look at two domains in more detail: the Euclidean unit ball and the polydisc. Here we consider two classes of biholomorphic mappings which were introduced by T. Poreda and G. Kohr as generalizations of the class S.
We prove a conjecture of G. Kohr about support points of these classes. The proof relies on the observation that the classes describe so called Runge domains, which follows from a result by L. Arosio, F. Bracci and E. F. Wold.
Furthermore, we prove a conjecture of G. Kohr about support points of a class of biholomorphic mappings that comes from applying the Roper-Suffridge extension operator to the class S.
In part 2 (chapter 3) we consider one special Loewner equation: the chordal multiple-slit equation in the upper half-plane.
After describing basic properties of this equation we look at the problem, whether one can choose the coefficient functions in this equation to be constant. D. Prokhorov proved this statement under the assumption that the slits are piecewise analytic. We use a completely different idea to solve the problem in its general form.
As the Loewner equation with constant coefficients holds everywhere (and not just almost everywhere), this result generalizes Loewner’s original idea to the multiple-slit case.
Moreover, we consider the following problems:
• The “simple-curve problem” asks which driving functions describe the growth of simple curves (in contrast to curves that touch themselves). We discuss necessary and sufficient conditions, generalize a theorem of J. Lind, D. Marshall and S. Rohde to the multiple-slit equation, and give an example of a set of driving functions which generate simple curves because of a certain self-similarity property.
• We discuss properties of driving functions that generate slits which enclose a given angle with the real axis.
• A theorem by O. Roth gives an explicit description of the reachable set of one point in the radial Loewner equation. We prove the analog for the chordal equation.
First-order proximal methods that solve linear and bilinear elliptic optimal control problems with a sparsity cost functional are discussed. In particular, fast convergence of these methods is proved. For benchmarking purposes, inexact proximal schemes are compared to an inexact semismooth Newton method. Results of numerical experiments are presented to demonstrate the computational effectiveness of proximal schemes applied to infinite-dimensional elliptic optimal control problems and to validate the theoretical estimates.
Proximal methods are iterative optimization techniques for functionals, J = J1 + J2, consisting of a differentiable part J2 and a possibly nondifferentiable part J1. In this thesis proximal methods for finite- and infinite-dimensional optimization problems are discussed. In finite dimensions, they solve l1- and TV-minimization problems that are effectively applied to image reconstruction in magnetic resonance imaging (MRI). Convergence of these methods in this setting is proved. The proposed proximal scheme is compared to a split proximal scheme and it achieves a better signal-to-noise ratio. In addition, an application that uses parallel imaging is presented.
In infinite dimensions, these methods are discussed to solve nonsmooth linear and bilinear elliptic and parabolic optimal control problems. In particular, fast convergence of these methods is proved. Furthermore, for benchmarking purposes, truncated proximal schemes are compared to an inexact semismooth Newton method. Results of numerical experiments are presented to demonstrate the computational effectiveness of our proximal schemes that need less computation time than the semismooth Newton method in most cases. Results of numerical experiments are presented that successfully validate the theoretical estimates.
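The splitting J = J1 + J2 above is the setting of the proximal gradient (forward-backward) iteration: a gradient step on the differentiable part followed by a proximal step on the nondifferentiable part. The following minimal Python sketch is not taken from the thesis; it applies the iteration to the simple l1-regularized denoising model J(x) = ½‖x − b‖² + λ‖x‖₁, and all data in it are purely illustrative.

```python
def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1, applied componentwise.
    return [max(abs(s) - tau, 0.0) * (1.0 if s > 0 else -1.0) for s in v]

def proximal_gradient(grad, prox, x0, step, iters):
    # Forward-backward splitting: forward (gradient) step on the smooth
    # part J2, backward (proximal) step on the nonsmooth part J1.
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = prox([xi - step * gi for xi, gi in zip(x, g)], step)
    return x

# Denoising model: J(x) = 0.5*||x - b||^2 + lam*||x||_1 (illustrative data).
b = [3.0, -0.5, 1.2]
lam = 1.0
grad_J2 = lambda x: [xi - bi for xi, bi in zip(x, b)]   # gradient of J2
prox_J1 = lambda v, s: soft_threshold(v, s * lam)       # prox of s*lam*||.||_1
x = proximal_gradient(grad_J2, prox_J1, [0.0, 0.0, 0.0], step=1.0, iters=50)
```

For this particular quadratic J2 with unit step size, the iteration reaches the closed-form minimizer soft_threshold(b, λ) after a single step, which makes the sketch easy to verify by hand.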
We analyze the mathematical models of two classes of physical phenomena. The first class of phenomena we consider is the interaction between one or more insulating rigid bodies and an electrically conducting fluid, inside of which the bodies are contained, as well as the electromagnetic fields trespassing both of the materials. We take into account both the cases of incompressible and compressible fluids. In both cases our main result yields the existence of weak solutions to the associated system of partial differential equations, respectively. The proofs of these results are built upon hybrid discrete-continuous approximation schemes: Parts of the systems are discretized with respect to time in order to deal with the solution-dependent test functions in the induction equation. The remaining parts are treated as continuous equations on the small intervals between consecutive discrete time points, allowing us to employ techniques which do not transfer to the discretized setting. Moreover, the solution-dependent test functions in the momentum equation are handled via the use of classical penalization methods.
The second class of phenomena we consider is the evolution of a magnetoelastic material. Here too, our main result proves the existence of weak solutions to the corresponding system of partial differential equations. Its proof is based on De Giorgi's minimizing movements method, in which the system is discretized in time and, at each discrete time point, a minimization problem is solved, the associated Euler-Lagrange equations of which constitute a suitable approximation of the original equation of motion and magnetic force balance. The construction of such a minimization problem is made possible by the realization that, already on the continuous level, both of these equations can be written in terms of the same energy and dissipation potentials. The functional for the discrete minimization problem can then be constructed on the basis of these potentials.
In this thesis, stability and robustness properties of systems of functional differential equations whose dynamics depend on the maximum of the solution over a prehistory time interval are studied. The max-operator is analyzed, and it is proved that, due to its presence, such systems are a particular case of state-dependent delay differential equations with a piecewise continuous delay function. They are nonlinear, infinite-dimensional, and may reduce to one-dimensional equations along their solutions. Stability analysis with respect to inputs is accomplished by trajectory estimates and via an averaging method. A numerical method is proposed.
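As a generic illustration of this system class (not taken from the thesis), the scalar equation x'(t) = −a·x(t) + b·max over s ∈ [t−h, t] of x(s) can be integrated with an explicit Euler scheme; for a > b > 0 the solution decays toward zero. All parameter values below are illustrative.

```python
def simulate_max_equation(a, b, h, x0, dt=0.01, t_end=10.0):
    # Explicit Euler scheme for x'(t) = -a*x(t) + b*max_{s in [t-h, t]} x(s),
    # with the history on [-h, 0] held constant at x0.
    n_delay = int(round(h / dt))        # steps covering the prehistory window
    xs = [x0] * (n_delay + 1)           # trajectory, most recent value last
    for _ in range(int(round(t_end / dt))):
        m = max(xs[-(n_delay + 1):])    # maximum over the prehistory window
        xs.append(xs[-1] + dt * (-a * xs[-1] + b * m))
    return xs

# With a > b > 0 the solution decays toward the equilibrium x = 0.
traj = simulate_max_equation(a=2.0, b=1.0, h=0.5, x0=1.0)
```

Note how the max-term turns the right-hand side into a state-dependent delay: which past instant realizes the maximum depends on the solution itself.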
Statistical procedures for modelling a random phenomenon heavily depend on the choice of a certain family of probability distributions. Frequently, this choice is governed by good mathematical feasibility but disregards that some distribution properties may contradict reality. At best, the chosen distribution may be considered as an approximation. The present thesis starts with a construction of distributions which uses solely the available information and yields distributions having greatest uncertainty in the sense of the maximum entropy principle. One such distribution is the monotonic distribution, which is determined solely by its support and its mean. Although classical frequentist statistics provides estimation procedures which may incorporate prior information, such procedures are rarely considered. A general frequentist scheme for the construction of shortest confidence intervals for distribution parameters under prior information is presented. In particular, the scheme is used to establish confidence intervals for the mean of the monotonic distribution and is compared to classical procedures. Additionally, an approximative procedure for the upper bound of the support of the monotonic distribution is proposed. A core purpose of audit sampling is the determination of confidence intervals for the mean of zero-inflated populations. The monotonic distribution is used for modelling such a population and is utilised to derive a confidence interval under prior information for the mean. The results are compared to two-sided intervals of Stringer type.
There is broad scientific consensus on the particular importance of analogy-formation processes for learning in general and for learning mathematics in particular. It is therefore natural to demand that mathematics teaching which is conducive to learning be designed with an awareness of this importance: on the one hand, it should point out analogies and make use of them in teaching mathematics; on the other hand, it should also offer learners opportunities to recognize and develop analogies themselves. In short, the ability to form analogies should be deliberately fostered through teaching.
In order to meet this demand, sufficient knowledge must be available about how analogy-formation processes unfold in learning mathematics and in solving mathematical problems, what characterizes successful analogy-formation processes, and at which points difficulties may arise.
The author shows how processes of analogy formation in solving mathematical problems can be initiated, observed, described and interpreted, in order to identify on this basis starting points for suitable support measures, to assess existing ideas for fostering the ability to form analogies, and to develop new ones. In doing so, paths of analogy formation are traced and examined which rest on the interlocking of two dimensions of analogy formation within the underlying theoretical model. In this way, different approaches can be contrasted, as can critical points in the course of an analogy-formation process. This yields teaching proposals that build on ideas of example-based learning.
Human herpesvirus-6 (HHV-6) exists in latent form either as a nuclear episome or integrated into human chromosomes in more than 90% of healthy individuals without causing clinical symptoms. Immunosuppression and stress conditions can reactivate HHV-6 replication, associated with clinical complications and even death. We have previously shown that co-infection of Chlamydia trachomatis and HHV-6 promotes chlamydial persistence and increases viral uptake in an in vitro cell culture model. Here we investigated C. trachomatis-induced HHV-6 activation in cell lines and fresh blood samples from patients having chromosomally integrated HHV-6 (ciHHV-6). We observed activation of latent HHV-6 DNA replication in ciHHV-6 cell lines and fresh blood cells without formation of viral particles. Interestingly, we detected HHV-6 DNA in blood as well as cervical swabs from C. trachomatis-infected women. Low virus titers correlated with high C. trachomatis load and vice versa, demonstrating a potentially significant interaction of these pathogens in blood cells and in the cervix of infected patients. Our data suggest a thus far underestimated interference of HHV-6 and C. trachomatis with a likely impact on the disease outcome as consequence of co-infection.
A new approach to modelling pedestrians' avoidance dynamics based on a Fokker–Planck (FP) Nash game framework is presented. In this framework, two interacting pedestrians are considered, whose motion variability is modelled through the corresponding probability density functions (PDFs) governed by FP equations. Based on these equations, a Nash differential game is formulated where the game strategies represent controls aiming at avoidance by minimizing appropriate collision cost functionals. The existence of Nash equilibria solutions is proved and characterized as a solution to an optimal control problem that is solved numerically. Results of numerical experiments are presented that successfully compare the computed Nash equilibria to the output of real experiments (conducted with humans) for four test cases.
Functions of bounded variation are of great importance in many fields of mathematics. This thesis investigates spaces of functions of bounded variation of one variable of various types, compares them to other classical function spaces, and reveals natural “habitats” of BV-functions. New and almost comprehensive results are given concerning mapping properties such as surjectivity and injectivity, several kinds of continuity, and compactness of both linear and nonlinear operators between such spaces. A new theory about different types of convergence of sequences of such operators is presented in full detail and applied to a new proof of the continuity of the composition operator in the classical BV-space. The abstract results serve as ingredients for solving Hammerstein and Volterra integral equations using fixed point theory. Many criteria guaranteeing the existence and uniqueness of solutions in BV-type spaces are given and later applied to solve boundary and initial value problems in a nonclassical setting.
A big emphasis is put on a clear and detailed discussion. Many pictures and synoptic tables help to visualize and summarize the most important ideas. Over 160 examples and counterexamples illustrate the many abstract results and how delicate some of them are.
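For orientation, the central notion can be made concrete: the variation of f on [a, b] is the supremum of the sums Σ|f(tᵢ) − f(tᵢ₋₁)| over all partitions a = t₀ < … < tₙ = b, and for a monotone f it collapses to |f(b) − f(a)|. A small sketch, independent of the book:

```python
def variation_on_partition(f, partition):
    # Sum of |f(t_i) - f(t_{i-1})| over one fixed partition; the total
    # variation Var(f; [a, b]) is the supremum of this sum over all partitions.
    values = [f(t) for t in partition]
    return sum(abs(v1 - v0) for v0, v1 in zip(values, values[1:]))

# For a monotone function every partition gives the same telescoping sum:
# Var(x -> x**2; [0, 1]) = |f(1) - f(0)| = 1, regardless of refinement.
square = lambda x: x * x
v_coarse = variation_on_partition(square, [0.0, 1.0])
v_fine = variation_on_partition(square, [i / 100 for i in range(101)])
```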
This doctoral thesis provides a classification of equivariant star products (star products together with quantum momentum maps) in terms of equivariant de Rham cohomology. This classification result is then used to construct an analogue of the Kirwan map from which one can directly obtain the characteristic class of certain reduced star products on Marsden-Weinstein reduced symplectic manifolds from the equivariant characteristic class of their corresponding unreduced equivariant star products. From the surjectivity of this map one can conclude that every star product on a Marsden-Weinstein reduced symplectic manifold can (up to equivalence) be obtained as a reduced equivariant star product.
In the thesis at hand, several sequences of number theoretic interest will be studied in the context of uniform distribution modulo one.

In the first part we deduce for positive and real \(z\not=1\) a discrepancy estimate for the sequence \( \left((2\pi )^{-1}(\log z)\gamma_a\right) \),
where \(\gamma_a\) runs through the positive imaginary parts of the nontrivial \(a\)-points of the Riemann zeta-function. If the considered imaginary
parts are bounded by \(T\), the discrepancy of the sequence \( \left((2\pi )^{-1}(\log z)\gamma_a\right) \) tends to zero like
\( (\log\log\log T)^{-1} \) as \(T\rightarrow \infty\). The proof is related to the proof of Hlawka, who determined a discrepancy estimate for the
sequence containing the positive imaginary parts of the nontrivial zeros of the Riemann zeta-function.

The second part of this thesis is about a sequence whose asymptotic behaviour is motivated by the sequence of primes. If \( \alpha\not=0\) is real
and \(f\) is a function of logarithmic growth, we specify several conditions such that the sequence \( (\alpha f(q_n)) \) is uniformly distributed
modulo one. The corresponding discrepancy estimates will be stated. The sequence \( (q_n)\) of real numbers is strictly increasing and the conditions
on its counting function \( Q(x)=\#\lbrace q_n \leq x \rbrace \) are satisfied by primes and primes in arithmetic progressions. As an application we
obtain that the sequence \( \left( (\log q_n)^K\right)\) is uniformly distributed modulo one for arbitrary \(K>1\), if the \(q_n\) are primes or primes
in arithmetic progressions. The special case that \(q_n\) equals the \(\textit{n}\)th prime number \(p_n\) was studied by Too, Goto and Kano.

In the last part of this thesis we study for irrational \(\alpha\) the sequence \( (\alpha p_n)\) of irrational multiples of primes in the context of
weighted uniform distribution modulo one. A result of Vinogradov concerning exponential sums states that this sequence is uniformly distributed modulo one.
An alternative proof due to Vaaler uses L-functions. We extend this approach in the context of the Selberg class with polynomial Euler product. By doing so, we obtain
two weighted versions of Vinogradov's result: The sequence \( (\alpha p_n)\) is \( (1+\chi_{D}(p_n))\log p_n\)-uniformly distributed modulo one, where
\( \chi_D\) denotes the Legendre-Kronecker character. In the proof we use the Dedekind zeta-function of the quadratic number field \( \Bbb Q (\sqrt{D})\).
As an application we obtain in case of \(D=-1\), that \( (\alpha p_n)\) is uniformly distributed modulo one, if the considered primes are congruent to
one modulo four. Assuming additional conditions on the functions from the Selberg class we prove that the sequence \( (\alpha p_n) \) is also
\( (\sum_{j=1}^{\nu_F}{\alpha_j(p_n)})\log p_n\)-uniformly distributed modulo one, where the weights are related to the Euler product of the function.
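The notion “uniformly distributed modulo one” used throughout is quantified by the discrepancy of the fractional parts. As a generic illustration, not part of the thesis, the star discrepancy of a finite point set in [0, 1) can be computed with the classical formula for sorted points:

```python
def star_discrepancy(points):
    # Star discrepancy D*_N of x_1, ..., x_N in [0, 1), computed via the
    # classical formula for the sorted points:
    #   D*_N = max_i max( i/N - x_(i), x_(i) - (i-1)/N ).
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

# The centred equidistant points (i + 0.5)/N attain the minimal value 1/(2N);
# a sequence is uniformly distributed mod one iff D*_N -> 0 as N -> infinity.
n = 100
d = star_discrepancy([(i + 0.5) / n for i in range(n)])
```

The discrepancy estimates stated in the thesis bound exactly this quantity for the initial segments of the sequences under consideration.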
The present thesis deals with optimisation problems with sparsity terms, either in the constraints, which leads to cardinality-constrained problems, or in the objective function, which in turn leads to sparse optimisation problems. One of the primary aims of this work is to extend the so-called sequential optimality conditions to these two classes of problems. In recent years sequential optimality conditions have become increasingly popular in the realm of standard nonlinear programming. In contrast to the more well-known Karush-Kuhn-Tucker conditions, they are genuine optimality conditions in the sense that every local minimiser satisfies them without any further assumption. Lately they have also been extended to mathematical programmes with complementarity constraints. At around the same time it was also shown that optimisation problems with sparsity terms can be reformulated into problems which possess similar structures to mathematical programmes with complementarity constraints. These recent developments have become the impetus of the present work. But rather than working with the aforementioned reformulations, which involve an artificial variable, we shall first look directly at the problems themselves and derive sequential optimality conditions which are independent of any artificial variable. Afterwards we shall derive the weakest constraint qualifications associated with these conditions which relate them to the Karush-Kuhn-Tucker-type conditions. Another equally important aim of this work is to then consider the practicability of the derived sequential optimality conditions. The previously mentioned reformulations open up the possibility to adapt methods which have proven successful in handling mathematical programmes with complementarity constraints. We will show that the safeguarded augmented Lagrangian method and some regularisation methods may generate a point satisfying the derived conditions.
Ill-posed optimization problems appear in a wide range of mathematical applications, and their numerical solution requires the use of appropriate regularization techniques. In order to understand these techniques, a thorough analysis is inevitable.
The main subject of this book are quadratic optimal control problems subject to elliptic linear or semi-linear partial differential equations. Depending on the structure of the differential equation, different regularization techniques are employed, and their analysis leads to novel results such as rate of convergence estimates.
In this thesis it is shown how the spread of infectious diseases can be described via mathematical models that capture the dynamic behavior of epidemics. Ordinary differential equations are used for the modeling process. SIR and SIRS models are distinguished, depending on whether a disease confers immunity to individuals after recovery or not. There are characteristic parameters for each disease, such as the infection rate or the recovery rate. These parameters indicate how aggressively a disease acts and how long it takes for an individual to recover, respectively. In general, the parameters are time-varying and depend on population groups. For this reason, models with multiple subgroups are introduced, and switched systems are used to incorporate time-varying parameters.
When investigating such models, the so called disease-free equilibrium is of interest, where no infectives appear within the population. The question is whether there are conditions, under which this equilibrium is stable. Necessary mathematical tools for the stability analysis are presented. The theory of ordinary differential equations, including Lyapunov stability theory, is fundamental. Moreover, convex and nonsmooth analysis, positive systems and differential inclusions are introduced. With these tools, sufficient conditions are given for the disease-free equilibrium of SIS, SIR and SIRS systems to be asymptotically stable.
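A minimal numerical sketch of the SIR dynamics discussed here may be helpful; it is illustrative only and not taken from the thesis (constant parameters, a single group, explicit Euler integration). The disease-free equilibrium corresponds to I → 0, and the total population is conserved because dS + dI + dR = 0.

```python
def simulate_sir(beta, gamma, s0, i0, dt=0.05, t_end=200.0):
    # Explicit Euler scheme for the classical SIR model
    #   S' = -beta*S*I,  I' = beta*S*I - gamma*I,  R' = gamma*I,
    # with the population normalized so that S + I + R = 1.
    s, i, r = s0, i0, 1.0 - s0 - i0
    history = [(s, i, r)]
    for _ in range(int(t_end / dt)):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s = s - new_infections
        i = i + new_infections - new_recoveries
        r = r + new_recoveries
        history.append((s, i, r))
    return history

# beta/gamma = 3 > 1: an outbreak occurs, after which the infectives die out.
hist = simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01)
```

The ratio beta/gamma plays the role of the basic reproduction number: for values above one, the disease-free equilibrium loses stability and an epidemic wave appears before the system returns to a disease-free state.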
This work deals with a class of nonlinear dynamical systems exhibiting both continuous and discrete dynamics, called hybrid dynamical systems.
We provide a broader framework of generalized hybrid dynamical systems allowing us to handle issues on modeling, stability and interconnections.
Various sufficient stability conditions are proposed by extensions of direct Lyapunov method.
We also explicitly show Lyapunov formulations of the nonlinear small-gain theorems for interconnected input-to-state stable hybrid dynamical systems.
Applications to the modeling and stability of hybrid dynamical systems are given by effective vaccination strategies for controlling the spread of disease in epidemic systems.
The present work investigates the analyticity properties of infeasible interior-point paths for monotone complementarity problems and discusses possible algorithmic applications. Chapter 2 collects some matrix-analytic concepts and results needed for the proofs in the subsequent chapters. Chapter 3 gives a precise definition of the notions "monotone linear complementarity problem" (LCP) and "semidefinite monotone linear complementarity problem" (SDLCP) and presents the basic idea behind interior-point methods for solving such problems. Chapter 4 contains the main analytic results for monotone complementarity problems. Section 4.1 recalls some well-known results on the analyticity properties of infeasible interior-point paths for LCPs. These are carried over to the semidefinite case in Section 4.2. Under the assumption that the underlying SDLCP possesses a strictly complementary solution, it is shown that the interior-point paths are analytic even at the boundary point. Chapter 5 uses the results of Chapter 4 to establish the locally high order of convergence of a long-step method for solving SDLCPs. Chapter 6 introduces a new method for solving LCPs and SDLCPs by means of interior-point techniques, in which the path functions are chosen so that all iterates lie on infeasible central paths. Global and local convergence of the method is proved.
The work at hand discusses various universality results for locally univalent and conformal metrics.
In Chapter 2 several interesting approximation results are discussed. Runge-type Theorems for holomorphic and meromorphic locally univalent functions are shown. A well-known local approximation theorem for harmonic functions due to Keldysh is generalized to solutions of the curvature equation.
In Chapters 3 and 4 these approximation theorems are used to establish universality results for locally univalent functions and conformal metrics. In particular, locally univalent analogues of well-known universality results due to Birkhoff, Seidel & Walsh and Heins are shown.
The present thesis considers the modelling of gas mixtures via a kinetic description. Fundamentals about the Boltzmann equation for gas mixtures and the BGK approximation are presented. In particular, issues in extending these models to gas mixtures are discussed. A non-reactive two-component gas mixture is considered. The two-species mixture is modelled by a system of kinetic BGK equations featuring two interaction terms to account for momentum and energy transfer between the two species. The model presented here contains several models from physicists and engineers as special cases. Consistency of this model is proven: conservation properties, positivity of all temperatures and the H-theorem. The shape of the global equilibrium, given by Maxwell distributions, is specified. Moreover, the usual macroscopic conservation laws can be derived.
In the literature, there is another type of BGK model for gas mixtures developed by Andries, Aoki and Perthame, which contains only one interaction term. In this thesis, the advantages of these two types of models are discussed and the usefulness of the model presented here is shown by using this model to determine an unknown function in the energy exchange of the macroscopic equations for gas mixtures described in the literature by Dellacherie. In addition, for each of the two models existence and uniqueness of mild solutions is shown. Moreover, positivity of classical solutions is proven.
Then, the model presented here is applied to three physical applications: a plasma consisting of ions and electrons, a gas mixture which deviates from equilibrium and a gas mixture consisting of polyatomic molecules.
First, the model is extended to a model for charged particles, and the equations of magnetohydrodynamics are derived from it. Next, this extended model is applied to a mixture of ions and electrons in a special physical constellation which can be found, for example, in a tokamak. The mixture is partly in equilibrium in some regions and deviates from equilibrium in others. The model presented in this thesis is chosen for this purpose, since it has the advantage of separating intra- and interspecies interactions. A new model based on a micro-macro decomposition is then proposed in order to capture the physical regime of being partly in equilibrium, partly not. Theoretical results are presented, namely convergence rates to equilibrium in the space-homogeneous case and Landau damping for mixtures, in order to compare them with numerical results.
Second, the model presented here is applied to a gas mixture which deviates from equilibrium such that it is described by Navier-Stokes equations on the macroscopic level. In this macroscopic description, four physical coefficients are expected to appear, characterizing the physical behaviour of the gases: the diffusion coefficient, the viscosity, the heat conductivity and the thermal diffusion parameter. A Chapman-Enskog expansion of the model presented here is performed in order to capture three of these four coefficients. In addition, several possible extensions to an ellipsoidal statistical model for gas mixtures are proposed in order to capture the fourth one. Three extensions are proposed: an extension which is as simple as possible, an intuitive extension copying the one-species case, and an extension which takes into account the physical motivation of Holway, who invented the ellipsoidal statistical model for a single species. Consistency of the extended models (conservation properties, positivity of all temperatures and the H-theorem) is proven, and the shape of the global Maxwell distributions in equilibrium is specified.
Third, the model presented here is applied to polyatomic molecules. A multi-component gas mixture with translational and internal energy degrees of freedom is considered. The species are allowed to have different numbers of internal degrees of freedom and are modelled by a system of kinetic ellipsoidal statistical equations. Consistency of this model is shown: conservation properties, positivity of the temperature, the H-theorem and the form of the Maxwell distributions in equilibrium. For numerical purposes, the Chu reduction is applied to the developed model for polyatomic gases in order to reduce its complexity, and an application to a mixture of a monoatomic and a diatomic gas is given.
Last, the limit from the model presented here to the dissipative Euler equations for gas mixtures is proven.
We consider the Bhatnagar–Gross–Krook (BGK) model, an approximation of the Boltzmann equation describing the time evolution of a single monoatomic rarefied gas and satisfying the same two main properties (conservation properties and entropy inequality). However, in practical applications one often has to deal with two additional physical issues. First, a gas often does not consist of a single species but is a mixture of different species. Second, particles can store energy not only in translational degrees of freedom but also in internal degrees of freedom such as rotations or vibrations (polyatomic molecules). Therefore, we present here recent BGK models for gas mixtures of mono- and polyatomic particles and the existing mathematical theory for these models.
This thesis is concerned with numerical methods for solving nonlinear and mixed complementarity problems. Such problems arise from a variety of applications, such as equilibrium models in economics, contact and structural mechanics problems, obstacle problems, and discrete-time optimal control problems. In this thesis we present a new formulation of nonlinear and mixed complementarity problems based on the Fischer-Burmeister function approach. Unlike traditional reformulations, our approach leads to an over-determined system of nonlinear equations. This has the advantage that certain drawbacks of the Fischer-Burmeister approach are avoided. Among other favorable properties of the new formulation, the natural merit function turns out to be differentiable. To solve the arising over-determined system we use a nonsmooth damped Levenberg-Marquardt-type method and investigate its convergence properties. Under mild assumptions, it can be shown that the global and local fast convergence results are similar to those of the better equation-based methods. Moreover, the new method turns out to be significantly more robust than the corresponding equation-based method. For large complementarity problems, however, the performance of this method suffers from the need to solve the arising linear least squares problem exactly at each iteration. Therefore, we suggest a modified version which allows inexact solutions of the least squares problems by using an appropriate iterative solver. Under certain assumptions, the favorable convergence properties of the original method are preserved. As an alternative method for mixed complementarity problems, we consider a box-constrained least squares formulation along with a projected Levenberg-Marquardt-type method. To globalize this method, trust-region strategies are proposed. Several ingredients are used to improve this approach: affine scaling matrices and multi-dimensional filter techniques.
Global convergence results as well as local superlinear/quadratic convergence are shown under appropriate assumptions. Combining the advantages of the new methods, a new software for solving mixed complementarity problems is presented.
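The Fischer-Burmeister function at the heart of such reformulations can be sketched in a few lines (a minimal illustration of the NCP function itself, not of the over-determined system or the Levenberg-Marquardt solver described above):

```python
import numpy as np

def fischer_burmeister(a, b):
    # phi(a, b) = sqrt(a^2 + b^2) - a - b.
    # phi(a, b) = 0 if and only if a >= 0, b >= 0 and a*b = 0,
    # so the complementarity conditions become a system of equations.
    return np.sqrt(a**2 + b**2) - a - b
```

Squaring the components of the resulting system yields the differentiable natural merit function mentioned above, even though the function itself is nonsmooth at the origin.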
The main topic of this thesis is the approximation of solutions of partial differential equations with Dirichlet boundary conditions by spline functions. Partial differential equations find application, for example, in electrostatics, elasticity theory and fluid mechanics, as well as in the study of the propagation of heat and sound. Some approximation problems do not possess a unique solution. By applying the penalized least squares method, it was shown that uniqueness of the sought solution of certain minimization problems can be ensured. Under certain circumstances, even greater stability of the numerical method can be gained. For the numerical investigations, an extensive, efficient C program was written, which formed the basis for confirming the theoretical predictions in practical applications.
The thesis ’Hurwitz’s Complex Continued Fractions - A Historical Approach and Modern Perspectives’ deals with two branches of mathematics: number theory and the history of mathematics. At first glance this might be unexpected; on closer inspection, however, it is a very fruitful combination. When doing research in mathematics, it turns out to be very helpful to be aware of the origins and development of the corresponding subject.
In the case of complex continued fractions, the origins can easily be traced back to the end of the 19th century (see [Perron, 1954, vol. 1, Ch. 46]). One of their godfathers was the famous mathematician Adolf Hurwitz. During the study of his transformation from real to complex continued fraction theory [Hurwitz, 1888], our attention was arrested by the article ’Ueber eine besondere Art der Kettenbruch-Entwicklung complexer Grössen’ [Hurwitz, 1895] from 1895 by an author called J. Hurwitz. We were surprised to find out that he was Adolf’s elder, little-known brother Julius; furthermore, Julius Hurwitz introduced a complex continued fraction that also appeared (unmentioned) in an ergodic-theoretical work from 1985 [Tanaka, 1985]. These observations formed the basis of our main research questions:
What is the historical background of Adolf and Julius Hurwitz and their mathematical studies? and What modern perspectives are provided by their complex continued fraction expansions?
In this work we examine complex continued fractions from various viewpoints. After a brief introduction to real continued fractions, we first devote ourselves to the lives of the brothers Adolf and Julius Hurwitz. Two excursions on selected historical aspects of their work complete this historical chapter. Subsequently, we shed light on the approaches of both brothers, Adolf as well as Julius, to complex continued fraction expansions.
Correspondingly, in the following chapter we take a more modern perspective. Highlights are an ergodic-theoretical result, namely a variation on the Döblin-Lenstra conjecture [Bosma et al., 1983], as well as a result on transcendental numbers in the tradition of Roth’s theorem [Roth, 1955]. In two subsequent chapters we are concerned with arithmetical properties of complex continued fractions. Firstly, an analogue of Marshall Hall’s theorem from 1947 [Hall, 1947] on sums of continued fractions is derived. Secondly, a general approach to new types of continued fractions is presented, building on the structural properties of lattices. Finally, in the last chapter we take up this approach and obtain an upper bound for the quality of diophantine approximations by quotients of lattice points in the complex plane, generalizing a method of Hermann Minkowski, improved by Hilde Gintner [Gintner, 1936], based on ideas from the geometry of numbers.
Argumentation and proof have played a fundamental role in mathematics education in recent years. This dissertation investigates the development of the proving process within a dynamic geometry system in order to support tertiary students in understanding the proving process. The strengths of such a dynamic system stimulate students to formulate conjectures and produce arguments during the proving process. Through empirical research, we classified different levels of proving and proposed a methodological model for proving. This methodological model contributes to improving students’ levels of proving and to developing their dynamic visual thinking. We used Toulmin’s model of argumentation as a theoretical framework to analyze the relationship between argumentation and proof. This research also offers possible explanations as to why students have cognitive difficulties in constructing proofs, and provides mathematics educators with a deeper understanding of the proving process within a dynamic geometry system.
This thesis deals with the use of origami in school teaching. More precisely, it describes a teaching sequence on flat-foldability, a subfield of mathematical paper folding, for mathematics lessons at the upper secondary level of Gymnasien and other higher schools. Concrete teaching instructions as well as alternatives are worked out, justified, and illustrated with numerous graphics. Furthermore, the goals of this teaching sequence are set out in accordance with the KMK educational standards. Finally, a mathematical view of working with flat-foldability is given, together with a placement within the current state of research.
This thesis investigates mathematical paper folding, and specifically one-fold origami (1-fach-Origami), in a university context. It consists of three parts.
The first part is essentially devoted to the subject-matter analysis of one-fold origami. In the first chapter we address the historical context of one-fold origami, consider its axiomatic foundations, and discuss how axiomatizing one-fold origami could contribute to an understanding of the concept of an axiom. In the second chapter we describe the design of the associated exploratory study as well as our research goals and questions. In the third chapter, one-fold origami is mathematized, defined and examined in depth.
The second part deals with the courses »Axiomatisieren lernen mit Papierfalten« (learning to axiomatize with paper folding) that we designed and taught. In the fourth chapter we describe the teaching methodology and the design of the courses; the fifth chapter contains an excerpt from the courses.
The third part describes the associated tests. In the sixth chapter we explain the design of the tests as well as the testing methodology. In the seventh chapter these tests are evaluated.
We investigate the convergence of the proximal gradient method applied to control problems with non-smooth and non-convex control cost. Here, we focus on control cost functionals that promote sparsity, which includes functionals of L\(^{p}\)-type for \(p \in [0,1)\). We prove stationarity properties of weak limit points of the method. These properties are weaker than those provided by Pontryagin’s maximum principle and weaker than L-stationarity.
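As a finite-dimensional illustration of the iteration, the following sketch runs the proximal gradient method for the case p = 0, where the proximal operator of the counting functional is hard thresholding (a toy example with an arbitrary least-squares data term, not the function-space setting analysed in the paper):

```python
import numpy as np

def prox_l0(v, lam):
    # Proximal operator of lam * ||x||_0 (hard thresholding):
    # keep v_i whenever v_i^2 / 2 > lam, i.e. |v_i| > sqrt(2 * lam).
    return np.where(np.abs(v) > np.sqrt(2.0 * lam), v, 0.0)

def proximal_gradient(A, b, lam, steps=500):
    # minimize 0.5 * ||A x - b||^2 + lam * ||x||_0
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)
        x = prox_l0(x - grad / L, lam / L)  # forward (gradient) + backward (prox) step
    return x
```

For a well-scaled problem the iteration drives small coefficients exactly to zero, which is the sparsity-promoting effect the control cost is chosen for.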
In this thesis we investigate near-isomorphism classes and isomorphism classes of almost completely decomposable groups. In Chapter 2 we introduce the concept of almost completely decomposable groups and sum up the most important facts about them. A local group is an almost completely decomposable group with a primary regulator quotient. A uniform group is a rigid local group with a homocyclic regulator quotient. In Chapter 3 a weakening of isomorphism, called type-isomorphism, appears. It is shown that type-isomorphism agrees with Lady's near-isomorphism. By the Main Decomposition Theorem and the Primary Reduction Theorem we may restrict ourselves to clipped local groups, namely groups without a direct rank-one summand. In Chapter 4 we collect facts about matrices over commutative rings with an identity element. Matrices over the local ring Z / p^e Z of residue classes of the rational integers modulo a prime power play an important role. In Chapter 5 we introduce representing matrices of finite essential extensions. Here a normal form for local groups is found by the Gauß algorithm. Uniform groups have representing matrices in Hermite normal form. The classification problems for almost completely decomposable groups up to isomorphism and up to near-isomorphism can be rephrased as equivalence problems for the representing matrices. In Chapter 6 we derive a criterion for the representing matrices of local groups in Gauß normal form. In Chapter 7 we formulate the matrix criterion for uniform groups. Two representing matrices in Hermite normal form describe isomorphic groups if and only if the rest blocks of the representing matrices are T-diagonally equivalent. Starting from a fixed near-isomorphism class, in Chapter 8 we investigate isomorphism classes of uniform groups. We count groups and isomorphism classes.
In Chapter 9 we specialize on uniform groups of rank 2r with a regulator quotient of rank r such that the rest block of the representing matrix is invertible and normed.
In this thesis, algorithms for solving linear semidefinite programs are described. Under a suitable regularity assumption, a semidefinite program is equivalent to its optimality conditions. We first transform the optimality conditions, or the central path conditions, into a system of nonlinear equations by means of matrix-valued NCP functions. This nonlinear and partly nondifferentiable system of equations is then solved with a Newton-like method. Due to the reformulation as a nonlinear system, the positive (semi-)definiteness of the matrices involved no longer has to be enforced explicitly during the iteration. It is further shown that, in contrast to interior-point methods, this approach immediately generates symmetric search directions. To obtain global convergence, various globalization strategies (line search, trust-region approach) are investigated. For the predictor-corrector method and the trust-region method considered, local superlinear convergence is shown under strict complementarity and nondegeneracy. The theoretical investigation of a nonsmooth Newton method yields local quadratic convergence without strict complementarity, provided the nondegeneracy assumption is suitably modified.
We investigate eigenvalues of the zero-divisor graph Γ(R) of finite commutative rings R and study the interplay between these eigenvalues, the ring-theoretic properties of R and the graph-theoretic properties of Γ(R). The graph Γ(R) is defined as the graph whose vertex set consists of all nonzero zero-divisors of R, with two vertices x, y adjacent whenever xy=0. We provide formulas for the nullity of Γ(R), i.e., the multiplicity of the eigenvalue 0 of Γ(R). Moreover, we precisely determine the spectra of \(\Gamma ({\mathbb {Z}}_p \times {\mathbb {Z}}_p \times {\mathbb {Z}}_p)\) and \(\Gamma ({\mathbb {Z}}_p \times {\mathbb {Z}}_p \times {\mathbb {Z}}_p \times {\mathbb {Z}}_p)\) for a prime number p. We introduce a graph product ×Γ with the property that Γ(R)≅Γ(R\(_1\))×Γ⋯×ΓΓ(R\(_r\)) whenever R≅R\(_1\)×⋯×R\(_r\). With this product, we find relations between the number of vertices of the zero-divisor graph Γ(R), the compressed zero-divisor graph, the structure of the ring R and the eigenvalues of Γ(R).
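For small rings the spectrum of the zero-divisor graph can be computed directly from the definition; a sketch for R = Z_n (for illustration only; the article treats products of rings and the graph product ×Γ):

```python
import numpy as np

def zero_divisor_graph_spectrum(n):
    # Vertices: nonzero zero-divisors of Z_n; edge x ~ y iff x*y = 0 (mod n).
    verts = [x for x in range(1, n)
             if any((x * y) % n == 0 for y in range(1, n))]
    A = np.array([[1 if x != y and (x * y) % n == 0 else 0
                   for y in verts] for x in verts])
    # eigvalsh returns the eigenvalues of the symmetric adjacency matrix, ascending
    return verts, np.linalg.eigvalsh(A)
```

For n = 6 the zero-divisors are 2, 3, 4, the graph is the path 2 - 3 - 4, and the spectrum is -sqrt(2), 0, sqrt(2), so the nullity is 1.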
In the present thesis we investigate algebraic and arithmetic properties of graph spectra. In particular, we study the algebraic degree of a graph, that is, the dimension of the splitting field of the characteristic polynomial of the associated adjacency matrix over the rationals, and examine the question of whether there is a relation between the algebraic degree of a graph and its structural properties. This generalizes the still open question ``Which graphs have integral spectra?'' posed by Harary and Schwenk in 1974.
We provide an overview of graph products since they are useful to study graph spectra and, in particular, to construct families of integral graphs. Moreover, we present a relation between the diameter, the maximum vertex degree and the algebraic degree of a graph, and construct a potential family of graphs of maximum algebraic degree.
Furthermore, we determine precisely the algebraic degree of circulant graphs and find new criteria for isospectrality of circulant graphs. Moreover, we solve the inverse Galois problem for circulant graphs, showing that every finite abelian extension of the rationals is the splitting field of some circulant graph. These results generalize a theorem of So, who characterized all integral circulant graphs. For our proofs we exploit the theory of Schur rings, which was already used to solve the isomorphism problem for circulant graphs.
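The spectrum of a circulant graph is given explicitly by character sums, which is what makes its algebraic degree tractable; a sketch of the standard eigenvalue formula (assuming a symmetric connection set S, so that the eigenvalues are real):

```python
import numpy as np

def circulant_spectrum(n, S):
    # Eigenvalues of the circulant graph C_n(S):
    # lambda_j = sum_{s in S} exp(2*pi*i*j*s / n), for j = 0, ..., n-1.
    js = np.arange(n)
    lam = np.zeros(n, dtype=complex)
    for s in S:
        lam += np.exp(2j * np.pi * js * s / n)
    return lam.real   # real for a symmetric connection set
```

For the 4-cycle, i.e. the circulant graph C_4({1, 3}), this gives the familiar spectrum 2, 0, -2, 0; integrality of the spectrum amounts to these sums of roots of unity being integers.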
Besides that, we study spectra of zero-divisor graphs over finite commutative rings.
Given a ring \(R\), the zero-divisor graph over \(R\) is defined as the graph with vertex set being the set of non-zero zero-divisors of \(R\) where two vertices \(x,y\) are adjacent if and only if \(xy=0\). We investigate relations between the eigenvalues of a zero-divisor graph, its structural properties and the algebraic properties of the respective ring.
We discuss exceptional polynomials, i.e. polynomials over a finite field $k$ that induce bijections over infinitely many finite extensions of $k$. In the first chapters we give the theoretical background needed to characterize this class of polynomials by Galois-theoretic means. This leads to the notions of arithmetic and geometric monodromy groups. In the remaining chapters we restrict our attention to polynomials with primitive affine arithmetic monodromy group. We first classify all exceptional polynomials for which the fixed field of the affine kernel of the arithmetic monodromy group has genus at most 2. Next we show that every full affine group can be realized as the monodromy group of a polynomial. Finally, we classify affine polynomials of a given degree.
This thesis explores metacognition in dealing with mathematics. Building on the research literature presented, the use of metacognition is documented in a qualitative study of first-year students from various mathematics (teacher-training) degree programmes. Using qualitative content analysis according to Mayring, a category system for the concept of metacognition with regard to its use in mathematics is established, extending previous systematizations. Finally, the use of the corresponding metacognitive aspects is demonstrated using various concepts and procedures from calculus lessons as examples.
Analysis of discretization schemes for Fokker-Planck equations and related optimality systems
(2015)
The Fokker-Planck (FP) equation is a fundamental model in thermodynamic kinetic theories and statistical mechanics. In general, the FP equation appears in a number of different fields in the natural sciences, for instance in solid-state physics, quantum optics, chemical physics, theoretical biology, and circuit theory. These equations also provide a powerful means to define robust control strategies for random models. FP equations are partial differential equations (PDEs) describing the time evolution of the probability density function (PDF) of stochastic processes. These equations are of different types depending on the underlying stochastic process. In particular, they are parabolic PDEs for the PDF of Itô processes, and hyperbolic PDEs for piecewise deterministic processes (PDPs).
A fundamental axiom of probability calculus requires that the integral of the PDF over the entire admissible state space be equal to one for all time. Therefore, for the purpose of accurate numerical simulation, a discretized FP equation must guarantee conservation of the total probability. Furthermore, since the solution of the FP equation represents a probability density, any numerical scheme that approximates the FP equation is required to guarantee the positivity of the solution. In addition, an approximation scheme must be accurate and stable.
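These two requirements, conservation of total probability and positivity, can be checked directly on a discretized equation. The following sketch uses a simple conservative finite-volume scheme for a 1-D Ornstein-Uhlenbeck-type FP equation with zero-flux boundaries (an illustration of the requirements with hypothetical parameters, not the Chang-Cooper scheme):

```python
import numpy as np

# p_t = (x p)_x + D p_xx on [-L, L], zero-flux boundaries.
L, N, D = 5.0, 200, 1.0
h = 2 * L / N
x = -L + h * (np.arange(N) + 0.5)       # cell centers
xf = -L + h * np.arange(N + 1)          # cell faces
dt = 0.2 * h**2 / (D + L * h)           # small step for stability/positivity

p = np.exp(-(x - 1.0)**2)               # initial density
p /= p.sum() * h                        # normalize total probability to 1

for _ in range(2000):
    # Flux at faces: drift (central average) + diffusion; zero at the boundary.
    F = np.zeros(N + 1)
    F[1:-1] = xf[1:-1] * 0.5 * (p[1:] + p[:-1]) + D * (p[1:] - p[:-1]) / h
    # Flux-form update: the sum over cells telescopes, so mass is conserved.
    p = p + dt / h * (F[1:] - F[:-1])
```

Because the update is in flux form with vanishing boundary fluxes, the discrete total probability is conserved exactly, and for this mesh (cell Péclet number below 2) and time step all update coefficients are nonnegative, so positivity is preserved as well.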
For these purposes, for parabolic FP equations on bounded domains, we investigate the Chang-Cooper (CC) scheme for space discretization combined with first- and second-order backward time differencing. We prove that the resulting space-time discretization schemes are accurate, conditionally stable, conservative, and positivity preserving. Further, we discuss a finite difference discretization for the FP system corresponding to a PDP process in a bounded domain.
Next, we discuss FP equations in unbounded domains. In this case, finite-difference or finite-element methods cannot be applied directly. By employing a suitable set of basis functions, spectral methods make it possible to treat unbounded domains. Since FP solutions decay exponentially at infinity, we consider Hermite functions as basis functions, i.e., Hermite polynomials multiplied by a Gaussian. To this end, the Hermite spectral discretization is applied to two different FP equations: the parabolic PDE corresponding to Itô processes, and the system of hyperbolic PDEs corresponding to a PDP process. The resulting discretized schemes are analyzed. Stability and spectral accuracy of the Hermite spectral discretization of the FP problems are proved. Furthermore, we investigate the conservativeness of the solutions of FP equations discretized with the Hermite spectral scheme.
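The Hermite-function basis and its orthonormality, on which the spectral accuracy rests, can be checked numerically with Gauss-Hermite quadrature (a sketch assuming the standard physicists' Hermite polynomials and normalization, not the thesis's specific scaling):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval
from math import factorial, pi, sqrt

# Hermite functions: psi_n(x) = H_n(x) * exp(-x^2/2) / sqrt(2^n n! sqrt(pi)).
# Gauss-Hermite quadrature integrates f(x) * exp(-x^2) exactly for
# polynomial f up to degree 2*k - 1 with k nodes.
nodes, weights = hermgauss(20)

def hermite_coeffs(n):
    c = np.zeros(n + 1)
    c[n] = 1.0          # coefficient vector selecting H_n in the Hermite basis
    return c

def inner(m, n):
    # <psi_m, psi_n> = int H_m H_n exp(-x^2) dx / (normalization constants)
    Hm = hermval(nodes, hermite_coeffs(m))
    Hn = hermval(nodes, hermite_coeffs(n))
    norm = sqrt(2.0**m * factorial(m) * sqrt(pi)) \
         * sqrt(2.0**n * factorial(n) * sqrt(pi))
    return np.sum(weights * Hm * Hn) / norm
```

The Gaussian factor of the basis absorbs the exponential decay of FP solutions at infinity, which is exactly why this basis suits unbounded domains.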
In the last part of this thesis, we discuss optimal control problems governed by FP equations and the characterization of their solutions by optimality systems. We then investigate the Hermite spectral discretization of FP optimality systems in unbounded domains. Within the framework of the Hermite discretization, we obtain sparse-band systems of ordinary differential equations. We analyze the accuracy of the discretization schemes by showing spectral convergence in approximating the state, the adjoint, and the control variables that appear in the FP optimality systems. To validate our theoretical estimates, we present results of numerical experiments.
We are interested in studying a system coupling the compressible Navier–Stokes equations with an elastic structure located at the boundary of the fluid domain. Initially the fluid domain is rectangular and the beam is located on the upper side of the rectangle. The elastic structure is modeled by an Euler–Bernoulli damped beam equation. We prove the local-in-time existence of strong solutions for this coupled system.
The investigation of multivariate generalized Pareto distributions (GPDs) in the framework of extreme value theory has begun only lately. Recent results show that they can, as in the univariate case, be used in peaks-over-threshold approaches. In this manuscript we investigate the definition of GPDs from Section 5.1 of Falk et al. (2004), which in the area of interest does not differ from those of other authors. We first show some theoretical properties and introduce important examples of GPDs. Simulation methods are an important tool for the further investigation of these distributions. We describe several methods of simulating GPDs, beginning with an efficient method for the logistic GPD. This algorithm is based on the Shi transformation, which was introduced by Shi (1995) and was used by Stephenson (2003) for the simulation of multivariate extreme value distributions of logistic type. We also present nonparametric and parametric estimation methods in GPD models. We estimate the angular density nonparametrically in arbitrary dimension, the bivariate case turning out to be a special case. The asymptotic normality of the corresponding estimators is shown. For the parametric estimators, which are mainly based on maximum likelihood methods, asymptotic normality is likewise shown under certain regularity conditions. Finally, the methods are applied to a real hydrological data set containing water discharges of the rivers Altmühl and Danube in southern Bavaria.
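As a univariate building block for such simulations, inverse-transform sampling from the (univariate) GPD is straightforward (a sketch; the multivariate logistic algorithm based on the Shi transformation is considerably more involved):

```python
import numpy as np

def gpd_inverse_cdf(u, sigma, xi):
    # Quantile function of the generalized Pareto distribution with
    # scale sigma and shape xi; cdf F(x) = 1 - (1 + xi*x/sigma)^(-1/xi).
    if xi == 0.0:
        return -sigma * np.log(1.0 - u)      # exponential limit case
    return sigma / xi * ((1.0 - u) ** (-xi) - 1.0)

def gpd_cdf(x, sigma, xi):
    if xi == 0.0:
        return 1.0 - np.exp(-x / sigma)
    return 1.0 - (1.0 + xi * x / sigma) ** (-1.0 / xi)

# Feeding uniforms through the quantile function yields GPD samples:
rng = np.random.default_rng(0)
samples = gpd_inverse_cdf(rng.uniform(size=10000), sigma=1.0, xi=0.5)
```

For example, with sigma = 1 and xi = 0.5 the 75% quantile is exactly 2, since F(2) = 1 - (1 + 1)^(-2) = 0.75.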
This thesis aims at providing efficient and side-channel protected implementations of isogeny-based primitives, and at their application in threshold protocols. It is based on a sequence of academic papers.
Chapter 3 reviews the original variable-time implementation of CSIDH and introduces several optimizations, e.g. a significant improvement of isogeny computations by using both Montgomery and Edwards curves. In total, our improvements yield a speedup of 25% compared to the original implementation.
Chapter 4 presents the first practical constant-time implementation of CSIDH. We describe how variable-time implementations of CSIDH leak information on private keys, and describe ways to mitigate this. Further, we present several techniques to speed up the implementation. In total, our constant-time implementation achieves a rather small slowdown by a factor of 3.03.
Chapter 5 reviews practical fault injection attacks on CSIDH and presents countermeasures. We evaluate different attack models theoretically and practically, using low-budget equipment. Moreover, we present countermeasures that mitigate the proposed fault injection attacks, only leading to a small performance overhead of 7%.
Chapter 6 initiates the study of threshold schemes based on the Hard Homogeneous Spaces (HHS) framework of Couveignes. Using the HHS equivalent of Shamir's secret sharing in the exponents, we adapt isogeny-based schemes to the threshold setting. In particular, we present threshold versions of the CSIDH public-key encryption and the CSI-FiSh signature scheme.
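The plain Shamir scheme underlying this construction can be sketched over a prime field (illustrative only; in the HHS setting the shares act "in the exponents" on group elements, and the prime below is an arbitrary choice):

```python
import random

P = 2**127 - 1  # a Mersenne prime, so arithmetic is done in the field F_P

def share(secret, t, n):
    # Random polynomial f of degree t-1 with f(0) = secret; shares are (i, f(i)).
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers f(0) from any t shares.
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return secret
```

Any t of the n shares reconstruct the secret, while fewer reveal nothing; the threshold schemes of Chapter 6 distribute the class group action accordingly.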
Chapter 7 gives a sieving algorithm for finding pairs of consecutive smooth numbers that utilizes solutions to the Prouhet-Tarry-Escott (PTE) problem. Recent compact isogeny-based protocols, namely B-SIDH and SQISign, both require large primes that lie between two smooth integers. Finding such a prime can be seen as a special case of finding twin smooth integers under the additional stipulation that their sum is a prime.
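The search objective can be illustrated naively: find consecutive B-smooth integers n and n+1, so that p = n + (n+1) = 2n+1 is a candidate prime with p-1 = 2n and p+1 = 2(n+1) both smooth (a toy sketch; practical parameters require far larger numbers and the PTE-based sieve of Chapter 7):

```python
def is_smooth(n, B):
    # n is B-smooth if all of its prime factors are at most B
    # (trial division by composites is harmless: their prime factors divide first).
    for p in range(2, B + 1):
        while n % p == 0:
            n //= p
    return n == 1

def twin_smooth(B, limit):
    # consecutive pairs (n, n+1) with both members B-smooth
    return [(n, n + 1) for n in range(1, limit)
            if is_smooth(n, B) and is_smooth(n + 1, B)]
```

For B = 5 and limit 100 this finds, for example, (15, 16), whose sum 31 is prime with 30 and 32 both 5-smooth, as well as the classical pair (80, 81).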
Mathematical modelling, simulation, and optimisation are core methodologies for future developments in engineering, the natural sciences, and the life sciences. This work aims at applying these mathematical techniques to biological processes, with a focus on the wine fermentation process, which is chosen as a representative model. In the literature, basic models for the wine fermentation process consist of a system of ordinary differential equations. They model the evolution of the yeast population number as well as the concentrations of assimilable nitrogen, sugar, and ethanol. In this thesis, the concentration of molecular oxygen is also included in order to model the change of the metabolism of the yeast from aerobic to anaerobic. Further, a more sophisticated toxicity function is used, which provides simulation results that match experimental measurements better than a linear toxicity model. Moreover, an additional equation for the temperature plays a crucial role in this work, as it opens a way to influence the fermentation process in a desired direction by changing the temperature of the system via a cooling mechanism. From the point of view of the wine industry, it is necessary to cope with large-scale fermentation vessels, where spatial inhomogeneities of concentrations and temperature are likely to arise. Therefore, a system of reaction-diffusion equations is formulated in this work, which acts as an approximation for a model including computationally very expensive fluid dynamics.
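A toy version of such an ODE fermentation model can be sketched as follows (hypothetical rate constants and Monod-type kinetics chosen purely for illustration; the thesis's calibrated model, oxygen equation and toxicity function are not reproduced here):

```python
import numpy as np

def rhs(y, mu=0.5, kN=0.1, kS=2.0, q=1.0, yield_E=0.45, tox=0.05):
    # Toy model: yeast X grows on nitrogen N (Monod kinetics), consumes
    # sugar S, produces ethanol E; ethanol is linearly toxic to the yeast.
    X, N, S, E = y
    growth = mu * N / (kN + N) * X
    uptake = q * S / (kS + S) * X
    death = tox * E * X
    return np.array([growth - death,     # dX/dt
                     -0.1 * growth,      # dN/dt: nitrogen consumed by growth
                     -uptake,            # dS/dt: sugar consumed
                     yield_E * uptake])  # dE/dt: ethanol produced

# Explicit Euler integration over t in [0, 200]
y = np.array([0.1, 0.5, 200.0, 0.0])    # initial X, N, S, E
dt = 0.01
for _ in range(20000):
    y = y + dt * rhs(y)
    y = np.maximum(y, 0.0)              # keep concentrations nonnegative
```

Even this crude sketch reproduces the qualitative behaviour the thesis builds on: sugar decreases monotonically while ethanol accumulates, and ethanol toxicity eventually limits the yeast population.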
In addition to the modelling issues, an optimal control problem for the proposed reaction-diffusion fermentation model with temperature boundary control is presented and analysed. Variational methods are used to prove the existence of unique weak solutions to this non-linear problem. In this framework, it is possible to exploit the Hilbert space structure of the state and control spaces to prove the existence of optimal controls. Additionally, first-order necessary optimality conditions are presented. They characterise controls that minimise an objective functional whose purpose is to minimise the final sugar concentration. A numerical experiment shows that the final concentration of sugar can be reduced by a suitably chosen temperature control.
The second part of this thesis deals with the identification of an unknown function that appears in a dynamical model. For models with ordinary differential equations, where parts of the dynamics cannot be deduced due to the complexity of the underlying phenomena, a minimisation problem is formulated. By minimising the deviation between simulation results and measurements, the best possible function from a trial function space is found. The analysis of this function identification problem covers the proof of the differentiability of the function-to-state operator, the existence of minimisers, and the sensitivity analysis by means of the data-to-function mapping. Moreover, the presented function identification method is extended to stochastic differential equations. Here, the objective functional consists of the difference between measured values and the expected value of the stochastic process solving the stochastic differential equation. Using a Fokker-Planck equation that governs the probability density function of the process, the probabilistic problem of simulating a stochastic process is cast into a deterministic partial differential equation. Proofs of unique solvability of the forward equation, existence of minimisers, and first-order necessary optimality conditions are presented. The application of the function identification framework to the wine fermentation model aims at finding the shape of the toxicity function and is carried out for the deterministic as well as the stochastic case.
In this thesis, time-optimal control of the bi-steerable robot is addressed. The bi-steerable robot, a vehicle with two independently steerable axles, is a complex nonholonomic system with applications in many areas of land-based robotics. Motion planning and optimal control are challenging tasks for this system, since standard control schemes do not apply. The model of the bi-steerable robot considered here is a reduced kinematic model with the driving velocity and the steering angles of the front and rear axle as inputs. The steering angles of the two axles can be set independently of each other. The reduced kinematic model is a control system with affine and non-affine inputs, as the driving velocity enters the system linearly, whereas the steering angles enter nonlinearly. In this work, a new approach to solve the time-optimal control problem for the bi-steerable robot is presented. In contrast to most standard methods for time-optimal control, our approach does not rely exclusively on discretization and purely numerical methods. Instead, the Pontryagin Maximum Principle is used to characterize candidates for time-optimal solutions. The resulting boundary value problem is solved by optimization to obtain solutions to the path planning problem over a given time horizon. The time horizon is decreased and the path planning is iterated to approximate a time-optimal solution. An optimality condition is introduced which depends on the number of cusps, i.e., reversals of the driving direction of the robot. This optimality condition allows us to single out non-optimal solutions with too many cusps. In general, our approach only gives approximations of time-optimal solutions, since only normal regular extremals are considered as solutions to the path planning problem, and the path planning is terminated when an extremal with a minimal number of cusps is found.
However, for most desired configurations, normal regular extremals with the minimal number of cusps provide time-optimal solutions for the bi-steerable robot. The convergence of the approach is analyzed and its probabilistic completeness is shown. Moreover, simulation results on time-optimal solutions for the bi-steerable robot are presented.
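The reduced kinematic model can be sketched in simulation. The concrete equations below are an assumption of this summary (rear-axle reference point, wheelbase L, front/rear steering angles alpha and beta), not necessarily the thesis' exact model: x' = v cos(theta + beta), y' = v sin(theta + beta), theta' = v sin(alpha - beta) / (L cos(alpha)).

```python
import math

def step(state, v, alpha, beta, L=1.0, dt=0.01):
    """One explicit Euler step of a reduced bi-steerable kinematic model.

    state = (x, y, theta): rear-axle position and heading.
    v: driving velocity (affine input); alpha, beta: front/rear steering
    angles (non-affine inputs). Model form is an illustrative assumption.
    """
    x, y, th = state
    x += dt * v * math.cos(th + beta)
    y += dt * v * math.sin(th + beta)
    th += dt * v * math.sin(alpha - beta) / (L * math.cos(alpha))
    return (x, y, th)

def simulate(v, alpha, beta, T=1.0, dt=0.01):
    """Integrate the model from the origin with constant inputs."""
    state = (0.0, 0.0, 0.0)
    for _ in range(int(T / dt)):
        state = step(state, v, alpha, beta, dt=dt)
    return state
```

Note that equal steering angles (alpha = beta) yield "crab" motion, a pure translation at constant heading; this extra freedom of the second steered axle is what standard car-like planners cannot exploit.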
Purpose: To compare the outcomes of canaloplasty and trabeculectomy in open-angle glaucoma.
Methods: This prospective, randomized clinical trial included 62 patients who randomly received trabeculectomy (n = 32) or canaloplasty (n = 30) and were followed up prospectively for 2 years. Primary endpoint was complete (without medication) and qualified success (with or without medication) defined as an intraocular pressure (IOP) of ≤18 mmHg (definition 1) or IOP ≤21 mmHg and ≥20% IOP reduction (definition 2), IOP ≥5 mmHg, no vision loss and no further glaucoma surgery. Secondary endpoints were the absolute IOP reduction, visual acuity, medication, complications and second surgeries.
Results: Surgical treatment significantly reduced IOP in both groups (p < 0.001). Complete success was achieved in 74.2% and 39.1% (definition 1, p = 0.01), and 67.7% and 39.1% (definition 2, p = 0.04) after 2 years in the trabeculectomy and canaloplasty group, respectively. Mean absolute IOP reduction was 10.8 ± 6.9 mmHg in the trabeculectomy and 9.3 ± 5.7 mmHg in the canaloplasty group after 2 years (p = 0.47). Mean IOP was 11.5 ± 3.4 mmHg in the trabeculectomy and 14.4 ± 4.2 mmHg in the canaloplasty group after 2 years. Following trabeculectomy, complications were more frequent, including hypotony (37.5%), choroidal detachment (12.5%) and elevated IOP (25.0%).
Conclusions: Trabeculectomy is associated with a stronger IOP reduction and less need for medication at the cost of a higher rate of complications. If target pressure is attainable by moderate IOP reduction, canaloplasty may be considered for its relative ease of postoperative care and lack of complications.
In Janssen and Reiss (1988) it was shown that in a location model of a Weibull type sample with shape parameter -1 < a < 1 the k(n) lower extremes are asymptotically locally sufficient. In the present paper we show that even global sufficiency holds. Moreover, it turns out that convergence of the given statistical experiments in the deficiency metric holds not only for compact parameter sets but for the whole real line.
The purpose of confidence and prediction intervals is to provide an interval estimate for an unknown distribution parameter or the future value of a phenomenon. In many applications, prior knowledge about the distribution parameter is available, but rarely made use of, except in a Bayesian framework. This thesis provides exact frequentist confidence intervals of minimal volume that exploit prior information. The scheme is applied to distribution parameters of the binomial and the Poisson distribution. The Bayesian approach to obtaining intervals on a distribution parameter, in the form of credibility intervals, is considered, with particular emphasis on the binomial distribution. An application of interval estimation is found in auditing, where two-sided intervals of Stringer type are meant to contain the mean of a zero-inflated population. In the context of time series analysis, covariates are supposed to improve the prediction of future values. Exponential smoothing with covariates, an extension of the popular forecasting method exponential smoothing, is considered in this thesis. A double-seasonality version of it is applied to forecast hourly electricity load using meteorological covariates. Different kinds of prediction intervals for exponential smoothing with covariates are formulated.
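For the binomial parameter, exact frequentist coverage without prior information is classically achieved by the Clopper-Pearson construction. A minimal sketch, inverting the binomial tail probabilities by bisection with only the standard library; it illustrates exactness, not the thesis' volume-minimizing scheme:

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided confidence interval for a
    binomial proportion, obtained by bisection on the tail probabilities."""
    def bisect(f, lo=0.0, hi=1.0, tol=1e-10):
        # f is decreasing with a sign change on (lo, hi)
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    # lower endpoint: p with P(X >= k | p) = alpha/2 (0 if k == 0)
    lower = 0.0 if k == 0 else bisect(lambda p: alpha / 2 - (1 - binom_cdf(k - 1, n, p)))
    # upper endpoint: p with P(X <= k | p) = alpha/2 (1 if k == n)
    upper = 1.0 if k == n else bisect(lambda p: binom_cdf(k, n, p) - alpha / 2)
    return lower, upper
```

By construction each tail has probability at most alpha/2 at the endpoints, which is what makes the interval exact rather than merely asymptotic.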
The Cauchy problem for a simplified shallow elastic fluids model, a 3 x 3 system of Temple type, is studied, and a global weak solution is obtained by using the compensated compactness theorem coupled with total variation estimates on the first and third Riemann invariants, where the second Riemann invariant is singular near the zero layer depth (ρ = 0). This work extends, in some sense, the previous works (Serre, 1987) and (LeVeque and Temple, 1985), which provided the global existence of weak solutions for 2 x 2 strictly hyperbolic systems, and (Heibig, 1994) for n x n strictly hyperbolic systems with smooth Riemann invariants.
For a graph \(\Gamma\), let K be the smallest field containing all eigenvalues of the adjacency matrix of \(\Gamma\). The algebraic degree \(\deg (\Gamma )\) is the extension degree \([K:\mathbb {Q}]\). In this paper, we completely determine the algebraic degrees of Cayley graphs over abelian groups and dihedral groups.
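As a small illustration (an example of this summary, not taken from the paper): the cycle \(C_5\) is a Cayley graph over the abelian group \(\mathbb{Z}_5\), its adjacency eigenvalues are \(2\cos(2\pi k/5)\), and the nontrivial ones are roots of \(x^2+x-1\); hence \(K=\mathbb{Q}(\sqrt{5})\) and \(\deg(C_5)=2\). A numerical check:

```python
import math

# Eigenvalues of the cycle C_n = Cay(Z_n, {+1, -1}): lambda_k = 2 cos(2 pi k / n).
n = 5
eigs = [2 * math.cos(2 * math.pi * k / n) for k in range(n)]

# lambda_0 = 2 is rational; the remaining eigenvalues are roots of
# x^2 + x - 1 (minimal polynomial of the golden-ratio conjugates),
# so K = Q(sqrt(5)) and deg(C_5) = [K : Q] = 2.
residuals = [abs(lam**2 + lam - 1) for lam in eigs[1:]]
```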
This dissertation deals with three mathematical areas, namely polynomial matrices over finite fields, linear systems and coding theory.
Coprimeness properties of polynomial matrices provide criteria for the reachability and observability of interconnected linear systems. Since discrete-time linear systems over finite fields and convolutional codes are essentially the same objects, these results can be transferred to criteria for the non-catastrophicity of convolutional codes.
We calculate the probability that specially structured polynomial matrices are right prime. In particular, formulas for the number of pairwise coprime polynomials and for the number of mutually left coprime polynomial matrices are derived. This leads to the probability that a parallel connected linear system is reachable and that a parallel connected convolutional code is non-catastrophic.
Moreover, the corresponding probabilities are calculated for other networks of linear systems and convolutional codes, such as series connection.
Furthermore, the probabilities that a convolutional code is MDP and that a block code is MDS are approximated.
Finally, we consider the probability of finding a solution for a linear network coding problem.
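A classical fact behind such counts is that two random monic polynomials of degree at least 1 over \(\mathbb{F}_q\) are coprime with probability exactly 1 - 1/q. An exhaustive check for q = 2 and degree 3, with polynomials over GF(2) encoded as bitmasks; this is a sketch of the counting flavour, not of the matrix-valued results of the thesis:

```python
def gf2_mod(a, b):
    """Remainder of polynomial division over GF(2); polynomials as bitmasks."""
    while b and a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def gf2_gcd(a, b):
    """Euclidean algorithm for polynomials over GF(2)."""
    while b:
        a, b = b, gf2_mod(a, b)
    return a

# All monic cubics over GF(2): bitmasks 0b1000 .. 0b1111 (eight of them).
monic_cubics = range(8, 16)
coprime = sum(1 for a in monic_cubics for b in monic_cubics
              if gf2_gcd(a, b) == 1)   # gcd == 1 means the constant polynomial 1
total = 8 * 8
# Classical count: q^(m+n) - q^(m+n-1) coprime pairs, a fraction of 1 - 1/q.
```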
Optimization problems with composite functions deal with the minimization of the sum
of a smooth function and a convex nonsmooth function. In this thesis several numerical
methods for solving such problems in finite-dimensional spaces are discussed, which are
based on proximity operators.
After some basic results from convex and nonsmooth analysis are summarized, a first-order
method, the proximal gradient method, is presented and its convergence properties are
discussed in detail. Known results from the literature are summarized and supplemented by
additional ones. Subsequently, the main part of the thesis is the derivation of two methods
which, in addition, make use of second-order information and are based on proximal Newton
and proximal quasi-Newton methods, respectively. The difference between the two methods
is that the first one uses a classical line search, while the second one uses a regularization
parameter instead. Both techniques have the advantage that, in contrast to many similar
methods, global convergence to stationary points can be proved in the respective detailed
convergence analysis without any restrictive preconditions. Furthermore, comprehensive
results show the local convergence properties as well as convergence rates of these algorithms,
which are based on rather weak assumptions. Also a method for the solution of the arising
proximal subproblems is investigated.
In addition, the thesis contains an extensive collection of application examples and a detailed
discussion of the related numerical results.
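The basic building block of these methods, the proximal gradient iteration, can be sketched for the model problem min 0.5||Ax - b||^2 + lam ||x||_1, whose nonsmooth part has soft-thresholding as its proximity operator. The data below are synthetic, and only the plain first-order variant is shown; the thesis' proximal (quasi-)Newton methods replace the gradient step by second-order updates:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    """Proximal gradient (ISTA) for min 0.5*||Ax - b||^2 + lam*||x||_1.

    step should satisfy step <= 1/L with L = ||A^T A||_2, the Lipschitz
    constant of the smooth part's gradient, to guarantee descent."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                          # gradient of smooth part
        x = soft_threshold(x - step * grad, step * lam)   # forward-backward step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.0, 0.5]          # sparse ground truth (synthetic)
b = A @ x_true
L = np.linalg.norm(A.T @ A, 2)         # spectral norm
x_hat = proximal_gradient(A, b, lam=0.1, step=1.0 / L)
```

A stationary point is exactly a fixed point of the iteration map, which gives a simple numerical optimality check.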
The limiting behaviour of a one‐dimensional discrete system is studied by means of Γ‐convergence. We consider a toy model of a chain of atoms. The interaction potentials are of Lennard‐Jones type and periodically or stochastically distributed. The energy of the system is considered in the discrete to continuum limit, i.e. as the number of atoms tends to infinity. During that limit, a homogenization process takes place. The limiting functional is discussed, especially with regard to fracture. Secondly, we consider a rescaled version of the problem, which yields a limiting energy of Griffith's type consisting of a quadratic integral term and a jump contribution. The periodic case can be found in [8], the stochastic case in [6,7].
The work in this thesis contains three main topics. These are the passage from discrete to continuous models by means of $\Gamma$-convergence, random as well as periodic homogenization and fracture enabled by non-convex Lennard-Jones type interaction potentials. Each of them is discussed in the following.
We consider a discrete model given by a one-dimensional chain of particles with randomly distributed interaction potentials. Our interest lies in the continuum limit, which yields the effective behaviour of the system. This limit is achieved as the number of atoms tends to infinity, which corresponds to a vanishing distance between the particles. The starting point of our analysis is an energy functional in a discrete system; its continuum limit is obtained by variational $\Gamma$-convergence.
The $\Gamma$-convergence methods are combined with a homogenization process in the framework of ergodic theory, which allows us to treat heterogeneous systems. On the one hand, composite materials or materials with impurities are modelled by a stochastic or periodic distribution of particles or interaction potentials. On the other hand, systems of one species of particles can be considered as random in cases where the orientation of particles matters. Nanomaterials, like chains of atoms, molecules or polymers, are an application of the heterogeneous chains in experimental sciences.
A special interest is in fracture in such heterogeneous systems. We consider interaction potentials of Lennard-Jones type. The non-standard growth conditions and the convex-concave structure of the Lennard-Jones type interactions yield mathematical difficulties, but allow for fracture. The interaction potentials are long-range in the sense that their modulus decays slower than exponential. Further, we allow for interactions beyond nearest neighbours, which is also referred to as long-range.
The main mathematical issue is to bring together the Lennard-Jones type interactions with ergodic theorems in the limiting process as the number of particles tends to infinity. The blow-up at zero of the potentials prevents us from using standard extensions of the Akcoglu-Krengel subadditive ergodic theorem. We overcome this difficulty by an approximation of the interaction potentials which shows suitable Lipschitz and Hölder regularity. Beyond that, allowing for continuous probability distributions instead of only finitely many different potentials leads to a further challenge.
The limiting integral functional of the energy by means of $\Gamma$-convergence involves a homogenized energy density and allows for fracture, but without a fracture contribution in the energy. In order to refine this result, we rescale our model and consider its $\Gamma$-limit, which is of Griffith's type consisting of an elastic part and a jump contribution.
In a further approach we study fracture at the level of the discrete energies. With an appropriate definition of fracture in the discrete setting, we define a fracture threshold separating the region of elasticity from that of fracture and consider the pointwise convergence of this threshold. This limit turns out to coincide with the one obtained in the variational $\Gamma$-convergence approach.
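The fracture mechanism can be illustrated with a toy computation (a sketch under simplified assumptions: a deterministic chain, nearest-neighbour interactions only, the classical 12-6 potential): beyond a critical mean spacing, concentrating the surplus length in one broken bond costs less energy than stretching all bonds uniformly.

```python
def J(r):
    """Lennard-Jones type pair potential J(r) = r^-12 - 2 r^-6,
    with minimum J(1) = -1 at the equilibrium spacing r = 1."""
    return r ** -12 - 2 * r ** -6

def energy_uniform(ell, n_b=10):
    """Elastic configuration: all n_b bonds stretched to mean spacing ell."""
    return n_b * J(ell)

def energy_fractured(ell, n_b=10):
    """Fractured configuration: n_b - 1 bonds at equilibrium spacing 1,
    one bond ('crack') absorbing the entire surplus length."""
    gap = n_b * ell - (n_b - 1)
    return (n_b - 1) * J(1.0) + J(gap)
```

At small strain (ell = 1.05) the uniform configuration has lower energy; at larger strain (ell = 1.5) fracture is energetically favourable, reflecting the convex-concave structure of the potential.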
This work studies the convergence of trajectories of gradient-like systems. In the first part of this work continuous-time gradient-like systems are examined. Results on the convergence of integral curves of gradient systems to single points of Lojasiewicz and Kurdyka are extended to a class of gradient-like vector fields and gradient-like differential inclusions. In the second part of this work discrete-time gradient-like optimization methods on manifolds are studied. Methods for smooth and for nonsmooth optimization problems are considered. For these methods some convergence results are proven. Additionally the optimization methods for nonsmooth cost functions are applied to sphere packing problems on adjoint orbits.
In this thesis, a variety of Fokker-Planck (FP) optimal control problems are investigated. Main emphasis is put on a first- and second-order analysis of different optimal control problems, characterizing optimal controls, establishing regularity results for optimal controls, and providing a numerical analysis for a Galerkin-based numerical scheme.
The Fokker-Planck equation is a partial differential equation (PDE) of linear parabolic type deeply connected to the theory of stochastic processes and stochastic differential equations. In essence, it describes the evolution over time of the probability distribution of the state of an object or system of objects under the influence of both deterministic and stochastic forces.
The FP equation is a cornerstone in understanding and modeling phenomena ranging from the diffusion and motion of molecules in a fluid to the fluctuations in financial markets.
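As a minimal illustration of the uncontrolled equation (parameters and discretization chosen for this sketch, not taken from the thesis), the 1D FP equation p_t = (x p)_x + D p_xx for an Ornstein-Uhlenbeck process dX = -X dt + sqrt(2D) dW can be solved with explicit finite differences; the density relaxes to the stationary Gaussian with variance D:

```python
import math

# Explicit finite-difference sketch of p_t = (x p)_x + D p_xx on [-L, L].
D, L, nx = 0.5, 5.0, 201
dx = 2 * L / (nx - 1)
dt = 0.4 * dx * dx / D            # explicit-scheme stability restriction
xs = [-L + i * dx for i in range(nx)]

# Initial condition: narrow Gaussian, normalized on the grid.
p = [math.exp(-x * x / 0.02) for x in xs]
Z = sum(p) * dx
p = [v / Z for v in p]

for _ in range(2000):             # integrate up to T = 2000 * dt = 4
    flux = [xs[i] * p[i] for i in range(nx)]          # drift term x * p
    new = p[:]
    for i in range(1, nx - 1):
        drift = (flux[i + 1] - flux[i - 1]) / (2 * dx)
        diff = D * (p[i + 1] - 2 * p[i] + p[i - 1]) / (dx * dx)
        new[i] = p[i] + dt * (drift + diff)
    new[0] = new[-1] = 0.0        # far-field boundary (negligible mass there)
    p = new

mass = sum(p) * dx                               # should stay close to 1
var = sum(x * x * v for x, v in zip(xs, p)) * dx  # should approach D
```

Probability mass is (approximately) conserved, and the computed variance approaches the stationary value D, the two qualitative features that FP-based control formulations exploit.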
Two different types of optimal control problems are analyzed in this thesis. On the one hand, Fokker-Planck ensemble optimal control problems are considered that have a wide range of applications in controlling a system of multiple non-interacting objects. In this framework, the goal is to collectively drive each object into a desired state.
On the other hand, tracking-type control problems are investigated, commonly used in parameter identification problems or stemming from the field of inverse problems.
In this framework, the aim is to determine certain parameters or functions of the FP equation, such that the resulting probability distribution function takes a desired form, possibly observed by measurements.
In both cases, we consider FP models where the control functions are part of the drift, arising only from the deterministic forces of the system. Therefore, the FP optimal control problem has a bilinear control structure.
Box constraints on the controls may be present, and the focus is on time-space dependent controls for ensemble-type problems and on only time-dependent controls for tracking-type optimal control problems.
In the first chapter of the thesis, a proof of the connection between the FP equation and stochastic differential equations is provided. Additionally, stochastic optimal control problems, aiming to minimize an expected cost value, are introduced, and the corresponding formulation within a deterministic FP control framework is established.
For the analysis of this PDE-constrained optimal control problem, the existence and regularity of solutions to the FP problem are investigated. New $L^\infty$-estimates for solutions are established for low space dimensions under mild assumptions on the drift. Furthermore, based on the theory of Bessel potential spaces, new smoothness properties are derived for solutions to the FP problem in the case of only time-dependent controls. Due to these properties, the control-to-state map, which associates the control functions with the corresponding solution of the FP problem, is well-defined, Fréchet differentiable and compact for suitable Lebesgue spaces or Sobolev spaces.
The existence of optimal controls is proven under various assumptions on the space of admissible controls and objective functionals. First-order optimality conditions are derived using the adjoint system. The resulting characterization of optimal controls is exploited to achieve higher regularity of optimal controls, as well as their state and co-state functions.
Since the FP optimal control problem is non-convex due to its bilinear structure, a first-order analysis should be complemented by a second-order analysis.
Therefore, a second-order analysis for the ensemble-type control problem in the case of $H^1$-controls in time and space is performed, and sufficient second-order conditions are provided. Analogous results are obtained for the tracking-type problem for only time-dependent controls.
The developed theory on the control problem and the first- and second-order optimality conditions is applied to perform a numerical analysis for a Galerkin discretization of the FP optimal control problem. The main focus is on tracking-type problems with only time-dependent controls. The idea of the presented Galerkin scheme is to first approximate the PDE-constrained optimization problem by a system of ODE-constrained optimization problems. Then, conditions on the problem are presented such that the convergence of optimal controls from one problem to the other can be guaranteed.
For this purpose, a class of bilinear ODE-constrained optimal control problems arising from the Galerkin discretization of the FP problem is analyzed. First- and second-order optimality conditions are established, and a numerical analysis is performed. A discretization with linear finite elements for the state and co-state problem is investigated, while the control functions are approximated by piecewise constant or piecewise quadratic continuous polynomials. The latter choice is motivated by the bilinear structure of the optimal control problem, allowing one to overcome the discrepancies between a discretize-then-optimize and an optimize-then-discretize approach. Moreover, second-order accuracy results are shown using the space of continuous, piecewise quadratic polynomials as the discrete space of controls. Lastly, the theoretical results and the second-order convergence rates are numerically verified.
In attempting to solve the regular inverse Galois problem for arbitrary subfields K of C (particularly for K=Q), a very important result by Fried and Völklein reduces the existence of regular Galois extensions F|K(t) with Galois group G to the existence of K-rational points on components of certain moduli spaces for families of covers of the projective line, known as Hurwitz spaces.
In some cases, the existence of rational points on Hurwitz spaces has been proven by theoretical criteria. In general, however, the question whether a given Hurwitz space has any rational point remains a very difficult problem. In concrete cases, it may be tackled by an explicit computation of a Hurwitz space and the corresponding family of covers.
The aim of this work is to collect and expand on the various techniques that may be used to solve such computational problems and apply them to tackle several families of Galois theoretic interest. In particular, in Chapter 5, we compute explicit curve equations for Hurwitz spaces for certain families of \(M_{24}\) and \(M_{23}\).
These are (to my knowledge) the first examples of explicitly computed Hurwitz spaces of such high genus. They might be used to realize \(M_{23}\) as a regular Galois group over Q if one manages to find suitable points on them.
Apart from the calculation of explicit algebraic equations, we produce complex approximations for polynomials with genus zero ramification of several different ramification types in \(M_{24}\) and \(M_{23}\). These may be used as starting points for similar computations.
The main motivation for these computations is the fact that \(M_{23}\) is currently the only remaining sporadic group that is not known to occur as a Galois group over Q.
We also compute the first explicit polynomials with Galois groups \(G=P\Gamma L_3(4), PGL_3(4), PSL_3(4)\) and \(PSL_5(2)\) over Q(t).
Special attention will be given to reality questions. As an application we compute the first examples of totally real polynomials with Galois groups \(PGL_2(11)\) and \(PSL_3(3)\) over Q.
As a suggestion for further research, we describe an explicit algorithmic version of "Algebraic Patching", following the theory described e.g. by M. Jarden. This could be used to attack some problems regarding families of covers of genus g>0.
Finally, we present explicit Magma implementations for several of the most important algorithms involved in our computations.