Institut für Mathematik
A Lagrange multiplier method for semilinear elliptic state constrained optimal control problems
(2020)
In this paper we apply an augmented Lagrange method to a class of semilinear elliptic optimal control problems with pointwise state constraints. We show strong convergence of subsequences of the primal variables to a local solution of the original problem, as well as weak convergence of the adjoint states and weak-* convergence of the multipliers associated with the state constraint. Moreover, we show existence of stationary points in arbitrarily small neighborhoods of local solutions of the original problem. Additionally, various numerical results are presented.
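The augmented Lagrangian idea behind the paper can be illustrated on a finite-dimensional toy problem (a hypothetical example only, not the paper's PDE setting): minimize \(f(x)=(x-2)^2\) subject to \(x\le 1\), alternating between an unconstrained subproblem solve and a multiplier update.

```python
# Minimal augmented Lagrangian sketch for the toy problem
#   minimize (x - 2)^2  subject to  g(x) = x - 1 <= 0.
# The constrained minimizer is x* = 1 with multiplier lambda* = 2.
# All names and parameter values below are illustrative choices.

def solve_subproblem(lam, rho):
    # Minimize (x-2)^2 + (1/(2*rho)) * (max(0, lam + rho*(x-1))^2 - lam^2)
    # by plain gradient descent (an analytic solution also exists).
    x = 0.0
    for _ in range(500):
        mult = max(0.0, lam + rho * (x - 1.0))
        grad = 2.0 * (x - 2.0) + mult
        x -= 0.05 * grad
    return x

lam, rho = 0.0, 10.0
for _ in range(20):
    x = solve_subproblem(lam, rho)
    lam = max(0.0, lam + rho * (x - 1.0))  # multiplier update
```

The multiplier iterates converge geometrically here; in the paper's infinite-dimensional setting the analogous update acts on a measure-valued multiplier, which is what makes the analysis delicate.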
We consider a class of “wild” initial data to the compressible Euler system that give rise to infinitely many admissible weak solutions via the method of convex integration. We identify the closure of this class in the natural L1-topology and show that its complement is rather large: specifically, it is an open dense set.
In this paper we introduce a theoretical framework concerned with fostering functional thinking in Grade 8 students by utilizing digital technologies. This framework is meant to be used to guide the systematic variation of tasks for implementation in the classroom while using digital technologies. Examples of problems and tasks illustrate this process. Additionally, results of an empirical investigation with Grade 8 students, which focusses on the students’ skills with digital technologies, how they utilize these tools when engaging with the developed tasks, and how they influence their functional thinking, are presented. The research aim is to investigate in which way tasks designed according to the theoretical framework could promote functional thinking while using digital technologies in the sense of the operative principle. The results show that the developed framework — Function-Operation-Matrix — is a sound basis for initiating students’ actions in the sense of the operative principle, to foster the development of functional thinking in its three aspects, namely, assignment, co-variation and object, and that digital technologies can support this process in a meaningful way.
Functions of bounded variation are most important in many fields of mathematics. This thesis investigates spaces of functions of bounded variation with one variable of various types, compares them to other classical function spaces and reveals natural “habitats” of BV-functions. New and almost comprehensive results concerning mapping properties like surjectivity and injectivity, several kinds of continuity and compactness of both linear and nonlinear operators between such spaces are given. A new theory about different types of convergence of sequences of such operators is presented in full detail and applied to a new proof for the continuity of the composition operator in the classical BV-space. The abstract results serve as ingredients to solve Hammerstein and Volterra integral equations using fixed point theory. Many criteria guaranteeing the existence and uniqueness of solutions in BV-type spaces are given and later applied to solve boundary and initial value problems in a nonclassical setting.
A big emphasis is put on a clear and detailed discussion. Many pictures and synoptic tables help to visualize and summarize the most important ideas. Over 160 examples and counterexamples illustrate the many abstract results and how delicate some of them are.
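The defining supremum behind the variation can be approximated numerically. The following sketch (an illustration only, with sin as a stand-in example) evaluates partition sums, which for piecewise monotone functions converge to the total variation from below:

```python
import math

def variation(f, a, b, n):
    # Approximate the total variation of f on [a, b] by the partition sum
    # sum_i |f(t_i) - f(t_{i-1})| over a uniform partition with n cells.
    ts = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(abs(f(ts[i]) - f(ts[i - 1])) for i in range(1, n + 1))

# sin is piecewise monotone on [0, 2*pi]; its total variation equals the
# integral of |cos|, which is 4.  Partition sums never exceed this value.
tv = variation(math.sin, 0.0, 2.0 * math.pi, 100_000)
```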
In the present thesis we investigate algebraic and arithmetic properties of graph spectra. In particular, we study the algebraic degree of a graph, that is, the dimension of the splitting field of the characteristic polynomial of the associated adjacency matrix over the rationals, and examine the question whether there is a relation between the algebraic degree of a graph and its structural properties. This generalizes the still open question “Which graphs have integral spectra?” stated by Harary and Schwenk in 1974.
We provide an overview of graph products since they are useful to study graph spectra and, in particular, to construct families of integral graphs. Moreover, we present a relation between the diameter, the maximum vertex degree and the algebraic degree of a graph, and construct a potential family of graphs of maximum algebraic degree.
Furthermore, we determine precisely the algebraic degree of circulant graphs and find new criteria for isospectrality of circulant graphs. Moreover, we solve the inverse Galois problem for circulant graphs showing that every finite abelian extension of the rationals is the splitting field of some circulant graph. Those results generalize a theorem of So who characterized all integral circulant graphs. For our proofs we exploit the theory of Schur rings which was already used in order to solve the isomorphism problem for circulant graphs.
Besides that, we study spectra of zero-divisor graphs over finite commutative rings.
Given a ring \(R\), the zero-divisor graph over \(R\) is defined as the graph with vertex set being the set of non-zero zero-divisors of \(R\) where two vertices \(x,y\) are adjacent if and only if \(xy=0\). We investigate relations between the eigenvalues of a zero-divisor graph, its structural properties and the algebraic properties of the respective ring.
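The construction just described is easy to experiment with for \(R=\Bbb Z_n\). The sketch below (numpy assumed available; an illustration, not the thesis's computations) builds the zero-divisor graph of \(\Bbb Z_8\) and computes its adjacency spectrum:

```python
import itertools, math

def zero_divisor_graph(n):
    # Vertices: non-zero zero-divisors of Z_n.
    # Edges: pairs x != y with x*y = 0 (mod n).
    verts = [x for x in range(1, n) if math.gcd(x, n) > 1]
    edges = [(x, y) for x, y in itertools.combinations(verts, 2)
             if (x * y) % n == 0]
    return verts, edges

def spectrum(verts, edges):
    # Eigenvalues of the (symmetric) adjacency matrix.
    import numpy as np
    idx = {v: i for i, v in enumerate(verts)}
    A = np.zeros((len(verts), len(verts)))
    for x, y in edges:
        A[idx[x], idx[y]] = A[idx[y], idx[x]] = 1.0
    return sorted(np.linalg.eigvalsh(A))

# Z_8: zero-divisors {2, 4, 6}; only 2*4 and 4*6 vanish mod 8, giving a
# path on three vertices with spectrum {-sqrt(2), 0, sqrt(2)}.
verts, edges = zero_divisor_graph(8)
eigs = spectrum(verts, edges)
```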
Many modern statistically efficient methods come with tremendous computational challenges, often leading to large-scale optimisation problems. In this work, we examine such computational issues for recently developed estimation methods in nonparametric regression with a specific view on image denoising. We consider in particular certain variational multiscale estimators which are statistically optimal in the minimax sense, yet computationally intensive. Such an estimator is computed as the minimiser of a smoothness functional (e.g., TV norm) over the class of all estimators such that none of its coefficients with respect to a given multiscale dictionary is statistically significant. The resulting multiscale Nemirovskii-Dantzig estimator (MIND) can incorporate any convex smoothness functional and combine it with a proper dictionary including wavelets, curvelets and shearlets. The computation of MIND in general requires solving a high-dimensional constrained convex optimisation problem with a specific structure of the constraints induced by the statistical multiscale testing criterion. To solve this explicitly, we discuss three different algorithmic approaches: the Chambolle-Pock, ADMM and semismooth Newton algorithms. Algorithmic details and an explicit implementation are presented, and the solutions are then compared numerically in a simulation study and on various test images. We thereby recommend the Chambolle-Pock algorithm in most cases for its fast convergence. We stress that our analysis can also be transferred to signal recovery and other denoising problems to recover more general objects whenever it is possible to borrow statistical strength from data patches of similar object structure.
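As a toy illustration of the recommended primal-dual approach, the following sketch runs the Chambolle-Pock iteration for plain 1-D TV denoising. This is a simplified stand-in: the quadratic data term below replaces MIND's actual multiscale testing constraints, and all parameter values are illustrative.

```python
import numpy as np

def tv_denoise_cp(y, lam, n_iter=500):
    # Chambolle-Pock for  min_x 0.5*||x - y||^2 + lam*||D x||_1,
    # with D the forward-difference operator.
    n = len(y)
    D = lambda x: x[1:] - x[:-1]                      # forward differences
    Dt = lambda p: np.concatenate(([-p[0]], p[:-1] - p[1:], [p[-1]]))
    tau = sigma = 0.4                                 # tau*sigma*||D||^2 <= 1 since ||D||^2 <= 4
    x, xbar, p = y.copy(), y.copy(), np.zeros(n - 1)
    for _ in range(n_iter):
        p = np.clip(p + sigma * D(xbar), -lam, lam)       # dual: project onto [-lam, lam]
        x_new = (x - tau * Dt(p) + tau * y) / (1 + tau)   # primal: prox of data term
        xbar = 2 * x_new - x                              # over-relaxation
        x = x_new
    return x

# Piecewise-constant signal plus deterministic (seeded) noise.
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.1 * rng.standard_normal(100)
denoised = tv_denoise_cp(noisy, lam=0.5)
```

The iteration reduces both the reconstruction error and the total variation of the signal; the same primal-dual template carries over to the constrained MIND problem by exchanging the proximal operators.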
Fluids in Gravitational Fields – Well-Balanced Modifications for Astrophysical Finite-Volume Codes
(2021)
Stellar structure can -- in good approximation -- be described as a hydrostatic state, which arises due to a balance between gravitational force and pressure gradient. Hydrostatic states are static solutions of the full compressible Euler system with gravitational source term, which can be used to model the stellar interior. In order to carry out simulations of dynamical processes occurring in stars, it is vital for the numerical method to accurately maintain the hydrostatic state over a long time period. In this thesis we present different methods to modify astrophysical finite volume codes in order to make them \emph{well-balanced}, preventing them from introducing significant discretization errors close to hydrostatic states. Our well-balanced modifications are constructed so that they can meet the requirements for methods applied in the astrophysical context: They can well-balance arbitrary hydrostatic states with any equation of state that is applied to model thermodynamical relations and they are simple to implement in existing astrophysical finite volume codes. One of our well-balanced modifications follows given solutions exactly and can be applied on any grid geometry. The other methods we introduce, which do not require any a priori knowledge, balance local high order approximations of arbitrary hydrostatic states on a Cartesian grid. All of our modifications allow for high order accuracy of the method. The improved accuracy close to hydrostatic states is verified in various numerical experiments.
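The need for well-balancing can be seen in a small experiment (an illustration of the discretization-error problem, not one of the thesis's methods): for an isothermal hydrostatic profile, a standard central discretization of the pressure gradient leaves an \(O(\Delta x^2)\) residual instead of vanishing exactly.

```python
import math

# Isothermal atmosphere: p(x) = p0 * exp(-x / H), rho = p / (g * H),
# for which dp/dx = -rho * g holds exactly.  A central difference does
# NOT balance this state; its residual is O(dx^2) and never zero, which
# is the error a well-balanced scheme is designed to remove.
# H, g, p0 and the evaluation point are arbitrary illustrative values.

def residual(dx, H=1.0, g=1.0, p0=1.0, x=0.5):
    p = lambda s: p0 * math.exp(-s / H)
    rho = p(x) / (g * H)
    dpdx = (p(x + dx) - p(x - dx)) / (2.0 * dx)  # central difference
    return abs(dpdx + rho * g)                   # zero for exact balance

r1 = residual(0.1)
r2 = residual(0.05)
ratio = r1 / r2   # about 4: second-order convergence, but never exact
```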
ADMM-Type Methods for Optimization and Generalized Nash Equilibrium Problems in Hilbert Spaces
(2020)
This thesis is concerned with a certain class of algorithms for the solution of constrained optimization problems and generalized Nash equilibrium problems in Hilbert spaces. This class of algorithms is inspired by the alternating direction method of multipliers (ADMM) and eliminates the constraints using an augmented Lagrangian approach. The alternating direction method consists of splitting the augmented Lagrangian subproblem into smaller and more easily manageable parts.
Before the algorithms are discussed, a substantial amount of background material, including the theory of Banach and Hilbert spaces, fixed-point iterations as well as convex and monotone set-valued analysis, is presented. Thereafter, certain optimization problems and generalized Nash equilibrium problems are reformulated and analyzed using variational inequalities and set-valued mappings. The analysis of the algorithms developed in the course of this thesis is rooted in these reformulations as variational inequalities and set-valued mappings.
The first algorithms discussed and analyzed are one weakly and one strongly convergent ADMM-type algorithm for convex, linearly constrained optimization. By equipping the associated Hilbert space with the correct weighted scalar product, the analysis of these two methods is accomplished using the proximal point method and the Halpern method.
The rest of the thesis is concerned with the development and analysis of ADMM-type algorithms for generalized Nash equilibrium problems that jointly share a linear equality constraint. The first class of these algorithms is completely parallelizable and uses a forward-backward idea for the analysis, whereas the second class of algorithms can be interpreted as a direct extension of the classical ADMM-method to generalized Nash equilibrium problems.
At the end of this thesis, the numerical behavior of the discussed algorithms is demonstrated on a collection of examples.
The work in this thesis contains three main topics. These are the passage from discrete to continuous models by means of $\Gamma$-convergence, random as well as periodic homogenization and fracture enabled by non-convex Lennard-Jones type interaction potentials. Each of them is discussed in the following.
We consider a discrete model given by a one-dimensional chain of particles with randomly distributed interaction potentials. Our interest lies in the continuum limit, which yields the effective behaviour of the system. This limit is achieved as the number of atoms tends to infinity, which corresponds to a vanishing distance between the particles. The starting point of our analysis is an energy functional in a discrete system; its continuum limit is obtained by variational $\Gamma$-convergence.
The $\Gamma$-convergence methods are combined with a homogenization process in the framework of ergodic theory, which allows us to focus on heterogeneous systems. On the one hand, composite materials or materials with impurities are modelled by a stochastic or periodic distribution of particles or interaction potentials. On the other hand, systems of one species of particles can be considered as random in cases when the orientation of particles matters. Nanomaterials, like chains of atoms, molecules or polymers, are an application of the heterogeneous chains in experimental sciences.
A special interest is in fracture in such heterogeneous systems. We consider interaction potentials of Lennard-Jones type. The non-standard growth conditions and the convex-concave structure of the Lennard-Jones type interactions yield mathematical difficulties, but allow for fracture. The interaction potentials are long-range in the sense that their modulus decays slower than exponential. Further, we allow for interactions beyond nearest neighbours, which is also referred to as long-range.
The main mathematical issue is to bring together the Lennard-Jones type interactions with ergodic theorems in the limiting process as the number of particles tends to infinity. The blow-up at zero of the potentials prevents us from using standard extensions of the Akcoglu-Krengel subadditive ergodic theorem. We overcome this difficulty by an approximation of the interaction potentials which shows suitable Lipschitz and Hölder regularity. Beyond that, allowing for continuous probability distributions instead of only finitely many different potentials leads to a further challenge.
The limiting integral functional of the energy by means of $\Gamma$-convergence involves a homogenized energy density and allows for fracture, but without a fracture contribution in the energy. In order to refine this result, we rescale our model and consider its $\Gamma$-limit, which is of Griffith's type consisting of an elastic part and a jump contribution.
In a further approach we study fracture at the level of the discrete energies. With an appropriate definition of fracture in the discrete setting, we define a fracture threshold separating the region of elasticity from that of fracture and consider the pointwise convergence of this threshold. This limit turns out to coincide with the one obtained in the variational $\Gamma$-convergence approach.
This thesis is concerned with the solution of control and state constrained optimal control problems, which are governed by elliptic partial differential equations. Problems of this type are challenging since they suffer from the low regularity of the multiplier corresponding to the state constraint. Applying an augmented Lagrangian method we overcome these difficulties by working with multiplier approximations in $L^2(\Omega)$. For each problem class, we introduce the solution algorithm, carry out a thorough convergence analysis and illustrate our theoretical findings with numerical examples.
The thesis is divided into two parts. The first part focuses on classical PDE constrained optimal control problems. We start by studying linear-quadratic objective functionals, which include the standard tracking type term and an additional regularization term as well as the case, where the regularization term is replaced by an $L^1(\Omega)$-norm term, which makes the problem ill-posed. We deepen our study of the augmented Lagrangian algorithm by examining the more complicated class of optimal control problems that are governed by a semilinear partial differential equation.
The second part investigates the broader class of multi-player control problems. While the examination of jointly convex generalized Nash equilibrium problems (GNEP) is a simple extension of the linear elliptic optimal control case, the complexity is increased significantly for pure GNEPs. The existence of solutions of jointly convex GNEPs is well-studied. However, solution algorithms may suffer from non-uniqueness of solutions. Therefore, the last part of this thesis is devoted to the analysis of the uniqueness of normalized equilibria.
This cumulative dissertation is organized as follows:
After the introduction, the second chapter, based on “Asymptotic independence of bivariate order statistics” (2017) by Falk and Wisheckel, is an investigation of the asymptotic dependence behavior of the components of bivariate order statistics. We find that the two components of the order statistics become asymptotically independent for certain combinations of (sequences of) indices that are selected, and it turns out that no further assumptions on the dependence of the two components in the underlying sample are necessary. To establish this, an explicit representation of the conditional distribution of bivariate order statistics is derived.
Chapter 3 is from “Conditional tail independence in archimedean copula models” (2019) by Falk, Padoan and Wisheckel and deals with the conditional distribution of an Archimedean copula, conditioned on one of its components. We show that its tails are independent under minor conditions on the generator function, even if the unconditional tails were dependent. The theoretical findings are underlined by a simulation study and can be generalized to Archimax copulas.
“Generalized pareto copulas: A key to multivariate extremes” (2019) by Falk, Padoan and Wisheckel leads to Chapter 4, where we introduce a nonparametric approach to estimate the probability that a random vector exceeds a fixed threshold if it follows a Generalized Pareto copula. To this end, some theory underlying the concept of Generalized Pareto distributions is presented first, the estimation procedure is tested using a simulation and finally applied to a dataset of air pollution parameters in Milan, Italy, from 2002 until 2017.
The fifth chapter collects some additional results on derivatives of D-norms, in particular a condition for the existence of directional derivatives, and multivariate spacings, specifically an explicit formula for the second-to-last bivariate spacing.
In this dissertation, we develop and analyze novel optimizing feedback laws for control-affine systems with real-valued state-dependent output (or objective) functions. Given a control-affine system, our goal is to derive an output-feedback law that asymptotically stabilizes the closed-loop system around states at which the output function attains a minimum value. The control strategy has to be designed in such a way that an implementation only requires real-time measurements of the output value. Additional information, like the current system state or the gradient vector of the output function, is not assumed to be known. A method that meets all these criteria is called an extremum seeking control law. We follow a recently established approach to extremum seeking control, which is based on approximations of Lie brackets. For this purpose, the measured output is modulated by suitable highly oscillatory signals and is then fed back into the system. Averaging techniques for control-affine systems with highly oscillatory inputs reveal that the closed-loop system is driven, at least approximately, into the directions of certain Lie brackets. A suitable design of the control law ensures that these Lie brackets point into descent directions of the output function. Under suitable assumptions, this method leads to the effect that minima of the output function are practically uniformly asymptotically stable for the closed-loop system. The present document extends and improves this approach in various ways.
One of the novelties is a control strategy that does not only lead to practical asymptotic stability, but in fact to asymptotic and even exponential stability. In this context, we focus on the application of distance-based formation control in autonomous multi-agent systems in which only distance measurements are available. This means that the target formations as well as the sensed variables are determined by distances. We propose a fully distributed control law, which only involves distance measurements for each individual agent to stabilize a desired formation shape, while a storage of measured data is not required. The approach is applicable to point agents in the Euclidean space of arbitrary (but finite) dimension. Under the assumption of infinitesimal rigidity of the target formations, we show that the proposed control law induces local uniform asymptotic (and even exponential) stability. A similar statement is also derived for nonholonomic unicycle agents with all-to-all communication. We also show how the findings can be used to solve extremum seeking control problems.
Another contribution is an extremum seeking control law with an adaptive dither signal. We present an output-feedback law that steers a fully actuated control-affine system with general drift vector field to a minimum of the output function. A key novelty of the approach is an adaptive choice of the frequency parameter. In this way, the task of determining a sufficiently large frequency parameter becomes obsolete. The adaptive choice of the frequency parameter also prevents finite escape times in the presence of a drift. The proposed control law does not only lead to convergence into a neighborhood of a minimum, but leads to exact convergence. For the case of an output function with a global minimum and no other critical point, we prove global convergence.
Finally, we present an extremum seeking control law for a class of nonholonomic systems. A detailed averaging analysis reveals that the closed-loop system is driven approximately into descent directions of the output function along Lie brackets of the control vector fields. Those descent directions also originate from an approximation of suitably chosen Lie brackets. This requires a two-fold approximation of Lie brackets on different time scales. The proposed method can lead to practical asymptotic stability even if the control vector fields do not span the entire tangent space. It suffices instead that the tangent space is spanned by the elements in the Lie algebra generated by the control vector fields. This novel feature extends extremum seeking by Lie bracket approximations from the class of fully actuated systems to a larger class of nonholonomic systems.
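The Lie bracket averaging mechanism underlying these control laws can be sketched for a single integrator (a textbook-style example under simplifying assumptions, not one of the thesis's control laws): with a highly oscillatory dither modulated by the measured output \(J(x)\), the averaged dynamics descend the gradient of \(J\).

```python
import math

# Single-integrator extremum seeking sketch via Lie bracket averaging:
#   dx/dt = sqrt(a*w)*cos(w*t) - k*sqrt(a*w)*J(x)*sin(w*t).
# Averaging over the fast oscillation yields dx/dt ~ -(k*a/2) * J'(x),
# i.e. approximate gradient descent, although only the output value J(x)
# is ever measured.  Gains a, k, frequency w and the quadratic output
# function are illustrative choices.

J = lambda x: (x - 2.0) ** 2   # output function, minimum at x = 2

a, k, w = 1.0, 2.0, 50.0
dt, T = 1.0e-3, 20.0
x, t = 0.0, 0.0
while t < T:
    u = math.sqrt(a * w) * math.cos(w * t) \
        - k * math.sqrt(a * w) * J(x) * math.sin(w * t)
    x += dt * u                # explicit Euler step
    t += dt
```

After the transient, the state oscillates in an \(O(1/\sqrt{\omega})\) neighborhood of the minimizer, which is exactly the "practical" stability notion discussed above.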
The limiting behaviour of a one‐dimensional discrete system is studied by means of Γ‐convergence. We consider a toy model of a chain of atoms. The interaction potentials are of Lennard‐Jones type and periodically or stochastically distributed. The energy of the system is considered in the discrete to continuum limit, i.e. as the number of atoms tends to infinity. During that limit, a homogenization process takes place. The limiting functional is discussed, especially with regard to fracture. Secondly, we consider a rescaled version of the problem, which yields a limiting energy of Griffith's type consisting of a quadratic integral term and a jump contribution. The periodic case can be found in [8], the stochastic case in [6,7].
In the thesis at hand, several sequences of number theoretic interest will be studied in the context of uniform distribution modulo one.
In the first part we deduce for positive and real \(z\not=1\) a discrepancy estimate for the sequence \( \left((2\pi )^{-1}(\log z)\gamma_a\right) \),
where \(\gamma_a\) runs through the positive imaginary parts of the nontrivial \(a\)-points of the Riemann zeta-function. If the considered imaginary
parts are bounded by \(T\), the discrepancy of the sequence \( \left((2\pi )^{-1}(\log z)\gamma_a\right) \) tends to zero like
\( (\log\log\log T)^{-1} \) as \(T\rightarrow \infty\). The proof is related to the proof of Hlawka, who determined a discrepancy estimate for the
sequence containing the positive imaginary parts of the nontrivial zeros of the Riemann zeta-function.
The second part of this thesis is about a sequence whose asymptotic behaviour is motivated by the sequence of primes. If \( \alpha\not=0\) is real
and \(f\) is a function of logarithmic growth, we specify several conditions such that the sequence \( (\alpha f(q_n)) \) is uniformly distributed
modulo one. The corresponding discrepancy estimates will be stated. The sequence \( (q_n)\) of real numbers is strictly increasing and the conditions
on its counting function \( Q(x)=\#\lbrace q_n \leq x \rbrace \) are satisfied by primes and primes in arithmetic progressions. As an application we
obtain that the sequence \( \left( (\log q_n)^K\right)\) is uniformly distributed modulo one for arbitrary \(K>1\), if the \(q_n\) are primes or primes
in arithmetic progressions. The special case that \(q_n\) equals the \(\textit{n}\)th prime number \(p_n\) was studied by Too, Goto and Kano.
In the last part of this thesis we study for irrational \(\alpha\) the sequence \( (\alpha p_n)\) of irrational multiples of primes in the context of
weighted uniform distribution modulo one. A result of Vinogradov concerning exponential sums states that this sequence is uniformly distributed modulo one.
An alternative proof due to Vaaler uses L-functions. We extend this approach in the context of the Selberg class with polynomial Euler product. By doing so, we obtain
two weighted versions of Vinogradov's result: The sequence \( (\alpha p_n)\) is \( (1+\chi_{D}(p_n))\log p_n\)-uniformly distributed modulo one, where
\( \chi_D\) denotes the Legendre-Kronecker character. In the proof we use the Dedekind zeta-function of the quadratic number field \( \Bbb Q (\sqrt{D})\).
As an application we obtain in case of \(D=-1\), that \( (\alpha p_n)\) is uniformly distributed modulo one, if the considered primes are congruent to
one modulo four. Assuming additional conditions on the functions from the Selberg class we prove that the sequence \( (\alpha p_n) \) is also
\( (\sum_{j=1}^{\nu_F}{\alpha_j(p_n)})\log p_n\)-uniformly distributed modulo one, where the weights are related to the Euler product of the function.
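Uniform distribution modulo one is commonly quantified by the star discrepancy, which for a finite point set can be computed exactly. The sketch below does so for the classical example \((n\alpha)\) with \(\alpha\) the golden ratio (an illustration of the notion only, not one of the sequences studied above):

```python
import math

def star_discrepancy(points):
    # Exact star discrepancy D_N* of a point set in [0, 1):
    #   D_N* = max_i max(i/N - x_(i), x_(i) - (i-1)/N)
    # taken over the sorted points x_(1) <= ... <= x_(N).
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

# (n * alpha mod 1) for the golden ratio is uniformly distributed mod one;
# its discrepancy decays like (log N)/N, the best possible rate.
alpha = (math.sqrt(5.0) - 1.0) / 2.0
d_100 = star_discrepancy([(n * alpha) % 1.0 for n in range(1, 101)])
d_10000 = star_discrepancy([(n * alpha) % 1.0 for n in range(1, 10001)])
```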
This thesis covers a wide range of results on when a random vector lies in the max-domain of attraction of a max-stable random vector. It states some new theoretical results in D-norm terminology, but also gives an explanation why most approaches to multivariate extremes are equivalent to this specific approach. It then covers new methods to deal with high-dimensional extremes, ranging from dimension reduction to exploratory methods, and explains why the Hüsler-Reiss model is a powerful parametric model in multivariate extremes on a par with the multivariate Gaussian distribution in classical multivariate statistics. It also gives new results for estimating and inferring the multivariate extremal dependence structure, strategies for choosing thresholds, and compares the behavior of local and global threshold approaches. The methods are demonstrated in an artificial simulation study as well as on German weather data.
This dissertation investigates the application of multivariate Chebyshev polynomials in the algebraic signal processing theory for the development of FFT-like algorithms for discrete cosine transforms on weight lattices of compact Lie groups. After an introduction of the algebraic signal processing theory, a multivariate Gauss-Jacobi procedure for the development of orthogonal transforms is proven. Two theorems on fast algorithms in algebraic signal processing, one based on a decomposition property of certain polynomials and the other based on induced modules, are proven as multivariate generalizations of prior theorems. The definition of multivariate Chebyshev polynomials based on the theory of root systems is recalled. It is shown how to use these polynomials to define discrete cosine transforms on weight lattices of compact Lie groups. Furthermore it is shown how to develop FFT-like algorithms for these transforms. Then the theory of matrix-valued, multivariate Chebyshev polynomials is developed based on prior ideas. Under an existence assumption a formula for generating functions of these matrix-valued Chebyshev polynomials is deduced.
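In the univariate (rank-one) case the link between Chebyshev polynomials and cosine transforms reduces to the identity \(T_n(\cos\theta)=\cos(n\theta)\): sampling Chebyshev polynomials at Chebyshev nodes reproduces exactly the DCT cosine kernel. The following sketch verifies this numerically via the three-term recurrence (an illustration of the univariate case only; the multivariate theory rests on root systems):

```python
import math

def chebyshev_T(n, x):
    # Three-term recurrence: T_0 = 1, T_1 = x, T_{n+1} = 2x*T_n - T_{n-1}.
    t0, t1 = 1.0, x
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, 2.0 * x * t1 - t0
    return t1

# Verify T_n(cos theta) = cos(n * theta) for the first few degrees.
theta = 0.7
vals_ok = all(
    abs(chebyshev_T(n, math.cos(theta)) - math.cos(n * theta)) < 1e-9
    for n in range(10)
)
```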
Prediction intervals are needed in many industrial applications. Frequently in mass production, small subgroups of unknown size with a lifetime behavior differing from the remainder of the population exist. A risk assessment for such a subgroup consists of two steps: i) the estimation of the subgroup size, and ii) the estimation of the lifetime behavior of this subgroup. This thesis covers both steps. An efficient practical method to estimate the size of a subgroup is presented and benchmarked against other methods. A prediction interval procedure which includes prior information in form of a Beta distribution is provided. This scheme is applied to the prediction of binomial and negative binomial counts. The effect of the population size on the prediction of the future number of failures is considered for a Weibull lifetime distribution, whose parameters are estimated from censored field data. Methods to obtain a prediction interval for the future number of failures with unknown sample size are presented. In many applications, failures are reported with a delay. The effects of such a reporting delay on the coverage properties of prediction intervals for the future number of failures are studied. The total failure probability of the two steps can be decomposed as a product probability. One-sided confidence intervals for such a product probability are presented.
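A minimal version of step ii) with prior information in the form of a Beta distribution can be sketched as follows (illustrative only; the uniform Beta(1, 1) prior, the sample sizes and all names are assumptions, not the thesis's choices):

```python
import math

def beta_binomial_pmf(j, m, a, b):
    # P(J = j) for J ~ BetaBinomial(m, a, b), via log-gamma for stability:
    # C(m, j) * B(a + j, b + m - j) / B(a, b).
    lb = lambda p, q: math.lgamma(p) + math.lgamma(q) - math.lgamma(p + q)
    return math.exp(math.lgamma(m + 1) - math.lgamma(j + 1)
                    - math.lgamma(m - j + 1) + lb(a + j, b + m - j) - lb(a, b))

def prediction_bound(m, a, b, k, n, level=0.9):
    # Posterior after k failures in n trials with a Beta(a, b) prior is
    # Beta(a + k, b + n - k); the predictive distribution of the failure
    # count in m future trials is then beta-binomial.  Return the smallest
    # upper prediction bound u with P(J <= u) >= level.
    a_post, b_post = a + k, b + n - k
    cum, u = 0.0, 0
    for j in range(m + 1):
        cum += beta_binomial_pmf(j, m, a_post, b_post)
        if cum >= level:
            u = j
            break
    return u

# e.g. 3 failures in 100 trials, uniform prior, 50 future trials:
upper = prediction_bound(50, 1.0, 1.0, 3, 100, level=0.9)
```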
A sequential quadratic Hamiltonian (SQH) scheme for solving different classes of non-smooth and non-convex PDE optimal control problems is investigated considering seven different benchmark problems with increasing difficulty. These problems include linear and nonlinear PDEs with linear and bilinear control mechanisms, non-convex and discontinuous costs of the controls, L\(^1\) tracking terms, and the case of state constraints.
The SQH method is based on the characterisation of optimality of PDE optimal control problems by Pontryagin's maximum principle (PMP). For each problem, a theoretical discussion of the PMP optimality condition is given and results of numerical experiments are presented that demonstrate the large range of applicability of the SQH scheme.
The starting point of the thesis is the {\it universality} property of the Riemann Zeta-function $\zeta(s)$
which was proved by Voronin in 1975:
{\it Given a positive number $\varepsilon>0$ and an analytic non-vanishing function $f$ defined on a compact subset $\mathcal{K}$ of the strip $\left\{s\in\mathbb{C}:1/2 < \Re s< 1\right\}$ with connected complement, there exists a real number $\tau$ such that
\begin{align}\label{continuous}
\max\limits_{s\in \mathcal{K}}|\zeta(s+i\tau)-f(s)|<\varepsilon.
\end{align}
}
In 1980, Reich proved a discrete analogue of Voronin’s theorem, also known as {\it discrete universality theorem} for $\zeta(s)$:
{\it If $\mathcal{K}$, $f$ and $\varepsilon$ are as before, then
\begin{align}\label{discretee}
\liminf\limits_{N\to\infty}\dfrac{1}{N}\sharp\left\{1\leq n\leq N:\max\limits_{s\in \mathcal{K}}|\zeta(s+i\Delta n)-f(s)|<\varepsilon\right\}>0,
\end{align}
where $\Delta$ is an arbitrary but fixed positive number.
}
We aim at developing a theory which can be applied to prove the majority of all so far existing discrete universality theorems in the case of Dirichlet $L$-functions $L(s,\chi)$ and Hurwitz zeta-functions $\zeta(s;\alpha)$,
where $\chi$ is a Dirichlet character and $\alpha\in(0,1]$, respectively.
Both of the aforementioned classes of functions are generalizations of $\zeta(s)$, since $\zeta(s)=L(s,\chi_0)=\zeta(s;1)$, where $\chi_0$ is the principal Dirichlet character mod 1.
Amongst others, we prove statement \eqref{discretee} where instead of $\zeta(s)$ we have $L(s,\chi)$ for some Dirichlet character $\chi$ or $\zeta(s;\alpha)$ for some transcendental or rational number $\alpha\in(0,1]$, and instead of $(\Delta n)_{n\in\mathbb{N}}$ we can have:
\begin{enumerate}
\item \textit{Beatty sequences,}
\item \textit{sequences of ordinates of $c$-points of zeta-functions from the Selberg class,}
\item \textit{sequences which are generated by polynomials.}
\end{enumerate}
In all the preceding cases, the notion of {\it uniformly distributed sequences} plays an important role and we draw attention to it wherever we can.
Moreover, for the case of polynomials, we employ more advanced techniques from Analytic Number Theory such as bounds of exponential sums and zero-density estimates for Dirichlet $L$-functions.
This will allow us to prove the existence of discrete second moments of $L(s,\chi)$ and $\zeta(s;\alpha)$ to the left of the vertical line $1+i\mathbb{R}$, with respect to polynomials.
In the case of the Hurwitz zeta-function $\zeta(s;\alpha)$, where $\alpha$ is transcendental or rational but not equal to $1/2$ or 1, the target function $f$ in \eqref{continuous} or \eqref{discretee}, where $\zeta(\cdot)$ is replaced by $\zeta(\cdot;\alpha)$, is also allowed to have zeros.
Until recently there was no result regarding the universality of $\zeta(s;\alpha)$ in the literature whenever $\alpha$ is an algebraic irrational.
In the second half of the thesis, we prove that a weak version of statement \eqref{continuous} for $\zeta(s;\alpha)$ holds for all but finitely many algebraic irrational $\alpha$ in $[A,1]$, where $A\in(0,1]$ is an arbitrary but fixed real number.
Lastly, we prove that the ordinary Dirichlet series
$\zeta(s;f)=\sum_{n\geq1}f(n)n^{-s}$ and $\zeta_\alpha(s)=\sum_{n\geq1}\lfloor P(\alpha n+\beta)\rfloor^{-s}$
are hypertranscendental, where $f:\mathbb{N}\to\mathbb{C}$ is a {\it Besicovitch almost periodic arithmetical function}, $\alpha,\beta>0$ are such that $\lfloor\alpha+\beta\rfloor>1$ and $P\in\mathbb{Z}[X]$ is such that $P(\mathbb{N})\subseteq\mathbb{N}$.
This work deals with a class of nonlinear dynamical systems exhibiting both continuous and discrete dynamics, called hybrid dynamical systems.
We provide a broader framework of generalized hybrid dynamical systems that allows us to handle issues of modeling, stability and interconnection.
Various sufficient stability conditions are proposed via extensions of the direct Lyapunov method.
We also explicitly derive Lyapunov formulations of the nonlinear small-gain theorems for interconnected input-to-state stable hybrid dynamical systems.
Applications to the modeling and stability of hybrid dynamical systems are given in the form of effective vaccination strategies for controlling the spread of disease in epidemic systems.
A new approach to modelling pedestrians' avoidance dynamics based on a Fokker–Planck (FP) Nash game framework is presented. In this framework, two interacting pedestrians are considered, whose motion variability is modelled through the corresponding probability density functions (PDFs) governed by FP equations. Based on these equations, a Nash differential game is formulated where the game strategies represent controls aiming at avoidance by minimizing appropriate collision cost functionals. The existence of Nash equilibria solutions is proved and characterized as a solution to an optimal control problem that is solved numerically. Results of numerical experiments are presented that successfully compare the computed Nash equilibria to the output of real experiments (conducted with humans) for four test cases.
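For orientation, the FP equation governing the PDF $\rho$ of a controlled drift-diffusion process can be written in the following generic form (a schematic statement; the paper's exact drift and cost functionals are not reproduced here):

```latex
\begin{align*}
\partial_t \rho(x,t) - \frac{\sigma^2}{2}\,\Delta\rho(x,t)
  + \nabla\cdot\bigl(b(x,t;u)\,\rho(x,t)\bigr) = 0,
\qquad \rho(x,0) = \rho_0(x),
\end{align*}
```

where $b$ is the drift containing the control $u$ and $\sigma$ is the diffusion coefficient; each pedestrian's avoidance strategy enters through $u$.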
This thesis deals with a new so-called sequential quadratic Hamiltonian (SQH) iterative scheme to solve optimal control problems with differential models and cost functionals ranging from smooth to discontinuous and non-convex. This scheme is based on the Pontryagin maximum principle (PMP) that provides necessary optimality conditions for an optimal solution. In this framework, a Hamiltonian function is defined that attains its minimum pointwise at the optimal solution of the corresponding optimal control problem. In the SQH scheme, this Hamiltonian function is augmented by a quadratic penalty term consisting of the current control function and the control function from the previous iteration. The heart of the SQH scheme is to minimize this augmented Hamiltonian function pointwise in order to determine a control update. Since the PMP does not require any differentiability with respect to the control argument, the SQH scheme can be used to solve optimal control problems with both smooth and non-convex or even discontinuous cost functionals. The main achievement of the thesis is the formulation of a robust and efficient SQH scheme and a framework in which the convergence analysis of the SQH scheme can be carried out. In this framework, convergence of the scheme means that the calculated solution fulfills the PMP condition. The governing differential models of the considered optimal control problems are ordinary differential equations (ODEs) and partial differential equations (PDEs). In the PDE case, elliptic and parabolic equations as well as the Fokker-Planck (FP) equation are considered. For both the ODE and the PDE cases, assumptions are formulated for which it can be proved that a solution to an optimal control problem has to fulfill the PMP. The obtained results are essential for the discussion of the convergence analysis of the SQH scheme. This analysis has two parts.
The first one is the well-posedness of the scheme, which means that all steps of the scheme can be carried out and provide a result in finite time. The second part is the PMP consistency of the solution. This means that the solution of the SQH scheme fulfills the PMP conditions. In the ODE case, the following results are obtained that state well-posedness of the SQH scheme and the PMP consistency of the corresponding solution. Lemma 7 states the existence of a pointwise minimum of the augmented Hamiltonian. Lemma 11 proves the existence of a weight of the quadratic penalty term such that the minimization of the corresponding augmented Hamiltonian results in a control update that reduces the value of the cost functional. Lemma 12 states that the SQH scheme stops if an iterate is PMP optimal. Theorem 13 proves the cost functional reducing properties of the SQH control updates. The main result is given in Theorem 14, which states the pointwise convergence of the SQH scheme towards a PMP consistent solution. In this ODE framework, the SQH method is applied to two optimal control problems. The first one is an optimal quantum control problem where it is shown that the SQH method converges much faster to an optimal solution than a globalized Newton method. The second optimal control problem is an optimal tumor treatment problem with a system of coupled highly non-linear state equations that describe the tumor growth. It is shown that the framework in which the convergence of the SQH scheme is proved is applicable for this highly non-linear case. Next, the case of PDE control problems is considered. First a general framework is discussed in which a solution to the corresponding optimal control problem fulfills the PMP conditions. In this case, many theoretical estimates are presented in Theorem 59 and Theorem 64 to prove in particular the essential boundedness of the state and adjoint variables.
The steps for the convergence analysis of the SQH scheme are analogous to that of the ODE case and result in Theorem 27 that states the PMP consistency of the solution obtained with the SQH scheme. This framework is applied to different elliptic and parabolic optimal control problems, including linear and bilinear control mechanisms, as well as non-linear state equations. Moreover, the SQH method is discussed for solving a state-constrained optimal control problem in an augmented formulation. In this case, it is shown in Theorem 30 that for increasing the weight of the augmentation term, which penalizes the violation of the state constraint, the measure of this state constraint violation by the corresponding solution converges to zero. Furthermore, an optimal control problem with a non-smooth L\(^1\)-tracking term and a non-smooth state equation is investigated. For this purpose, an adjoint equation is defined and the SQH method is used to solve the corresponding optimal control problem. The final part of this thesis is devoted to a class of FP models related to specific stochastic processes. The discussion starts with a focus on random walks where also jumps are included. This framework allows a derivation of a discrete FP model corresponding to a continuous FP model with jumps and boundary conditions ranging from absorbing to totally reflecting. This discussion allows the consideration of the drift-control resulting from an anisotropic probability of the steps of the random walk. Thereafter, in the PMP framework, two drift-diffusion processes and the corresponding FP models with two different control strategies for an optimal control problem with an expectation functional are considered. In the first strategy, the controls depend on time and in the second one, the controls depend on space and time. In both cases a solution to the corresponding optimal control problem is characterized with the PMP conditions, stated in Theorem 48 and Theorem 49. 
The well-posedness of the SQH scheme is shown in both cases and further conditions are discussed that ensure the convergence of the SQH scheme to a PMP consistent solution. The case of a space and time dependent control strategy results in a special structure of the corresponding PMP conditions that is exploited in another solution method, the so-called direct Hamiltonian (DH) method.
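The core of the SQH iteration described above can be illustrated on a toy linear-quadratic ODE problem (a schematic sketch under a simple Euler discretization; the model, step rules and tolerances are illustrative and not taken from the thesis): minimize $J(u)=\frac12\int_0^T (x^2+\alpha u^2)\,dt$ subject to $x'=x+u$, $x(0)=1$.

```python
import numpy as np

T, n, alpha = 1.0, 200, 0.1
dt = T / n

def forward(u):                        # state: x' = x + u, x(0) = 1
    x = np.empty(n + 1); x[0] = 1.0
    for k in range(n):
        x[k + 1] = x[k] + dt * (x[k] + u[k])
    return x

def backward(x):                       # adjoint: p' = -(p + x), p(T) = 0
    p = np.zeros(n + 1)
    for k in range(n, 0, -1):
        p[k - 1] = p[k] + dt * (p[k] + x[k])
    return p

def cost(x, u):
    return 0.5 * dt * np.sum(x[:-1] ** 2 + alpha * u ** 2)

u = np.zeros(n)
x = forward(u)
J = J_init = cost(x, u)
eps = 1.0                              # weight of the quadratic penalty
for _ in range(50):
    p = backward(x)
    # pointwise minimizer of the augmented Hamiltonian
    # H_eps(u) = p(x + u) + (x^2 + alpha u^2)/2 + eps (u - u_old)^2
    u_new = (2.0 * eps * u - p[:-1]) / (alpha + 2.0 * eps)
    x_new = forward(u_new)
    J_new = cost(x_new, u_new)
    if J_new - J < -1e-8 * dt * np.sum((u_new - u) ** 2):
        u, x, J = u_new, x_new, J_new  # sufficient decrease: accept update,
        eps *= 0.8                     # and relax the penalty
    else:
        eps *= 2.0                     # reject the update and penalize more
```

The accept/reject rule on the penalty weight `eps` mirrors the sufficient-decrease mechanism analyzed in the thesis; the update formula is simply the pointwise minimizer of the augmented Hamiltonian, which here is available in closed form.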
A mathematical optimal-control tumor therapy framework consisting of radio- and anti-angiogenesis control strategies that are included in a tumor growth model is investigated. The governing system, resulting from the combination of two well established models, represents the differential constraint of a non-smooth optimal control problem that aims at reducing the volume of the tumor while keeping the radio- and anti-angiogenesis chemical dosage to a minimum. Existence of optimal solutions is proved and necessary conditions are formulated in terms of the Pontryagin maximum principle. Based on this principle, a so-called sequential quadratic Hamiltonian (SQH) method is discussed and benchmarked with an “interior point optimizer – a mathematical programming language” (IPOPT-AMPL) algorithm. Results of numerical experiments are presented that successfully validate the SQH solution scheme. Further, it is shown how to choose the optimisation weights in order to obtain treatment functions that successfully reduce the tumor volume to zero.
The work at hand discusses various universality results for locally univalent and conformal metrics.
In Chapter 2, several approximation results are discussed. Runge-type theorems for holomorphic and meromorphic locally univalent functions are shown. A well-known local approximation theorem for harmonic functions due to Keldysh is generalized to solutions of the curvature equation.
In Chapters 3 and 4, these approximation theorems are used to establish universality results for locally univalent functions and conformal metrics. In particular, locally univalent analogues of well-known universality results due to Birkhoff, Seidel & Walsh and Heins are shown.
Statistical procedures for modelling a random phenomenon heavily depend on the choice of a certain family of probability distributions. Frequently, this choice is governed by good mathematical tractability, but disregards that some distribution properties may contradict reality. At best, the chosen distribution may be considered an approximation. The present thesis starts with a construction of distributions which uses solely the available information and yields distributions having greatest uncertainty in the sense of the maximum entropy principle. One such distribution is the monotonic distribution, which is determined solely by its support and its mean. Although classical frequentist statistics provides estimation procedures which may incorporate prior information, such procedures are rarely considered. A general frequentist scheme for the construction of shortest confidence intervals for distribution parameters under prior information is presented. In particular, the scheme is used for establishing confidence intervals for the mean of the monotonic distribution and compared to classical procedures. Additionally, an approximative procedure for the upper bound of the support of the monotonic distribution is proposed. A core purpose of audit sampling is the determination of confidence intervals for the mean of zero-inflated populations. The monotonic distribution is used for modelling such a population and is utilised in the construction of a confidence interval under prior information for the mean. The results are compared to two-sided intervals of Stringer-type.
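A classical instance of the maximum entropy construction invoked here (the textbook example, not necessarily the thesis' monotonic distribution itself): among all densities on $[0,b]$ with prescribed mean $\mu$, the differential entropy is maximized by a truncated exponential,

```latex
\begin{align*}
f(x) = \frac{\lambda\, e^{-\lambda x}}{1 - e^{-\lambda b}},
\qquad x \in [0, b],
\end{align*}
```

where $\lambda\in\mathbb{R}$ is chosen so that the mean condition $\frac{1}{\lambda} - \frac{b\,e^{-\lambda b}}{1 - e^{-\lambda b}} = \mu$ holds; for $\lambda > 0$, i.e. $\mu < b/2$, this density is monotonically decreasing.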
Lagrange Multiplier Methods for Constrained Optimization and Variational Problems in Banach Spaces
(2018)
This thesis is concerned with a class of general-purpose algorithms for constrained minimization problems, variational inequalities, and quasi-variational inequalities in Banach spaces.
A substantial amount of background material from Banach space theory, convex analysis, variational analysis, and optimization theory is presented, including some results which are refinements of those existing in the literature. This basis is used to formulate an augmented Lagrangian algorithm with multiplier safeguarding for the solution of constrained optimization problems in Banach spaces. The method is analyzed in terms of local and global convergence, and many popular problem classes such as nonlinear programming, semidefinite programming, and function space optimization are shown to be included as special cases of the general setting.
The algorithmic framework is then extended to variational and quasi-variational inequalities, which include, by extension, Nash and generalized Nash equilibrium problems. For these problem classes, the convergence is analyzed in detail. The thesis then presents a rich collection of application examples for all problem classes, including implementation details and numerical results.
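The safeguarded augmented Lagrangian iteration can be sketched on a toy equality-constrained problem (an illustrative miniature with hypothetical tolerances and a plain gradient descent inner solver, not the thesis' Banach space method): minimize $x_1^2 + x_2^2$ subject to $x_1 + x_2 = 1$, whose solution is $(1/2, 1/2)$ with multiplier $\lambda = -1$.

```python
import numpy as np

f      = lambda x: x[0] ** 2 + x[1] ** 2
grad_f = lambda x: 2.0 * x
c      = lambda x: x[0] + x[1] - 1.0        # equality constraint c(x) = 0
grad_c = np.array([1.0, 1.0])

def inner_solve(x, lam, rho, iters=500):
    # gradient descent on the augmented Lagrangian
    # L_A(x) = f(x) + lam * c(x) + (rho / 2) * c(x)^2
    step = 1.0 / (2.0 + 2.0 * rho)          # safe step for this quadratic
    for _ in range(iters):
        x = x - step * (grad_f(x) + (lam + rho * c(x)) * grad_c)
    return x

x, lam, rho = np.zeros(2), 0.0, 1.0
viol = abs(c(x))
for _ in range(20):
    x = inner_solve(x, lam, rho)
    if abs(c(x)) <= 0.25 * viol:            # enough feasibility progress:
        lam = float(np.clip(lam + rho * c(x), -1e6, 1e6))  # safeguarded update
        viol = abs(c(x))
    else:
        rho *= 10.0                          # otherwise increase the penalty
```

The safeguard (clamping the multiplier into a bounded set) and the rule "update the multiplier only when infeasibility drops sufficiently, otherwise increase the penalty" are the two mechanisms that drive the global convergence analysis in the general setting.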
This thesis discusses and proposes a solution for one problem arising from deformation quantization:
Having constructed the quantization of a classical system, one would like to understand the mathematical properties of both the classical and the quantum system. Especially if both systems are described by ∗-algebras over the field of complex numbers, this means understanding the properties of certain ∗-algebras:
What are their representations? What are the properties of these representations? How can the states be described in these representations? How can the spectrum of the observables be described?
In order to allow for a sufficiently general treatment of these questions, the concept of abstract O∗-algebras is introduced. Roughly speaking, these are ∗-algebras together with a cone of positive linear functionals on them (e.g. the continuous ones if one starts with a ∗-algebra endowed with a well-behaved topology). This language is then applied to two examples from deformation quantization, which are studied in great detail.
In this thesis, stability and robustness properties of systems of functional differential equations whose dynamics depend on the maximum of a solution over a prehistory time interval are studied. The max-operator is analyzed, and it is proved that, due to its presence, such systems are a particular case of state-dependent delay differential equations with a piecewise continuous delay function. They are nonlinear and infinite-dimensional, and may reduce to one-dimensional systems along their solutions. Stability analysis with respect to inputs is accomplished by trajectory estimates and via the averaging method. A numerical method is proposed.
Purpose: To compare the outcomes of canaloplasty and trabeculectomy in open-angle glaucoma.
Methods: This prospective, randomized clinical trial included 62 patients who randomly received trabeculectomy (n = 32) or canaloplasty (n = 30) and were followed up prospectively for 2 years. Primary endpoint was complete (without medication) and qualified success (with or without medication) defined as an intraocular pressure (IOP) of ≤18 mmHg (definition 1) or IOP ≤21 mmHg and ≥20% IOP reduction (definition 2), IOP ≥5 mmHg, no vision loss and no further glaucoma surgery. Secondary endpoints were the absolute IOP reduction, visual acuity, medication, complications and second surgeries.
Results: Surgical treatment significantly reduced IOP in both groups (p < 0.001). Complete success was achieved in 74.2% and 39.1% (definition 1, p = 0.01), and 67.7% and 39.1% (definition 2, p = 0.04) after 2 years in the trabeculectomy and canaloplasty group, respectively. Mean absolute IOP reduction was 10.8 ± 6.9 mmHg in the trabeculectomy and 9.3 ± 5.7 mmHg in the canaloplasty group after 2 years (p = 0.47). Mean IOP was 11.5 ± 3.4 mmHg in the trabeculectomy and 14.4 ± 4.2 mmHg in the canaloplasty group after 2 years. Following trabeculectomy, complications were more frequent including hypotony (37.5%), choroidal detachment (12.5%) and elevated IOP (25.0%).
Conclusions: Trabeculectomy is associated with a stronger IOP reduction and less need for medication at the cost of a higher rate of complications. If target pressure is attainable by moderate IOP reduction, canaloplasty may be considered for its relative ease of postoperative care and lack of complications.
This thesis considers a model of a scalar partial differential equation in the presence of a singular source term, modeling the interaction between an inviscid fluid represented by the Burgers equation and an arbitrary, finite number of particles moving inside the fluid, each one acting as a point-wise drag force with a particle-related friction constant:
\begin{align*}
\partial_t u + \partial_x (u^2/2) &= \sum_{i \in N(t)} \lambda_i \Big(h_i'(t)-u\big(t,h_i(t)\big)\Big)\delta(x-h_i(t))
\end{align*}
The model, introduced for the case of a single particle by Lagoutière, Seguin and Takahashi, is a first step towards a better understanding of the interaction between fluids and solids at the level of partial differential equations, and has the unique property of considering entropy-admissible solutions and their interaction with shock waves.
The model is extended to an arbitrary, finite number of particles and interactions like merging, splitting and crossing of particle paths are considered.
The theory of entropy admissibility is revisited for the cases of interfaces and discontinuous flux conservation laws, existing results are summarized and compared, and adapted for regions of particle interactions. To this goal, the theory of germs introduced by Andreianov, Karlsen and Risebro is extended to this case of non-conservative interface coupling.
Exact solutions for the Riemann problem of particles drifting apart are computed, and an analysis of the behavior of entropy solutions across the particle-related interfaces is used to determine physically relevant and consistent behavior for the merging and splitting of particles. Well-posedness of entropy solutions to the Cauchy problem is proven using an explicit construction method, L\(^\infty\) bounds, an approximation of the particle paths and compactness arguments to obtain existence of entropy solutions. Uniqueness is shown in the class of weak entropy solutions using almost classical Kruzkov-type analysis and the notion of L\(^1\)-dissipative germs.
Necessary fundamentals of hyperbolic conservation laws, including weak solutions, shocks and rarefaction waves and the Rankine-Hugoniot condition are briefly recapitulated.
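For the Burgers flux $f(u)=u^2/2$ appearing above, the Rankine–Hugoniot condition fixes the speed $s$ of a shock connecting left and right states $u_l$, $u_r$:

```latex
\begin{align*}
s = \frac{f(u_l) - f(u_r)}{u_l - u_r} = \frac{u_l + u_r}{2},
\end{align*}
```

and entropy admissibility in the sense of the Lax condition requires $u_l > u_r$; otherwise the discontinuity is resolved by a rarefaction wave.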
Ill-posed optimization problems appear in a wide range of mathematical applications, and their numerical solution requires the use of appropriate regularization techniques. Understanding these techniques requires a thorough analysis.
The main subject of this book is quadratic optimal control problems governed by elliptic linear or semilinear partial differential equations. Depending on the structure of the differential equation, different regularization techniques are employed, and their analysis leads to novel results such as rate-of-convergence estimates.
Beatty sets (also called Beatty sequences) appeared as early as 1772 in the astronomical studies of Johann III Bernoulli as a tool for easing manual calculations and, as Elwin Bruno Christoffel pointed out in 1888, lend themselves to exposing intricate properties of the real irrationals. Since then, numerous researchers have explored a multitude of arithmetic properties of Beatty sets; the interrelation between Beatty sets and modular inversion, as well as between Beatty sets and the set of rational primes, is the central topic of this book. The inquiry into the relation to rational primes is complemented by considering a natural generalisation to imaginary quadratic number fields.
The present thesis considers the modelling of gas mixtures via a kinetic description. Fundamentals about the Boltzmann equation for gas mixtures and the BGK approximation are presented. In particular, issues in extending these models to gas mixtures are discussed. A non-reactive two-component gas mixture is considered. The two-species mixture is modelled by a system of kinetic BGK equations featuring two interaction terms to account for momentum and energy transfer between the two species. The model presented here contains several models by physicists and engineers as special cases. Consistency of this model is proven: conservation properties, positivity of all temperatures and the H-theorem. The form of the global equilibrium as Maxwell distributions is specified. Moreover, the usual macroscopic conservation laws can be derived.
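Schematically, a two-species BGK system of the type described, with one intra-species and one inter-species relaxation term per species, takes the form (collision frequencies and mixture Maxwellians are placeholders here, not the thesis' exact definitions):

```latex
\begin{align*}
\partial_t f_k + v\cdot\nabla_x f_k
  = \nu_{kk}\,\bigl(M_k - f_k\bigr) + \nu_{kj}\,\bigl(M_{kj} - f_k\bigr),
\qquad k, j \in \{1,2\},\ k \neq j,
\end{align*}
```

where $M_k$ relaxes species $k$ towards its own local equilibrium and $M_{kj}$ accounts for the momentum and energy exchange with the other species.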
In the literature, there is another type of BGK model for gas mixtures, developed by Andries, Aoki and Perthame, which contains only one interaction term. In this thesis, the advantages of these two types of models are discussed, and the usefulness of the model presented here is shown by using it to determine an unknown function in the energy exchange of the macroscopic equations for gas mixtures described in the literature by Dellacherie. In addition, for each of the two models, existence and uniqueness of mild solutions are shown. Moreover, positivity of classical solutions is proven.
Then, the model presented here is applied to three physical applications: a plasma consisting of ions and electrons, a gas mixture which deviates from equilibrium and a gas mixture consisting of polyatomic molecules.
First, the model is extended to a model for charged particles. Then, the equations of magnetohydrodynamics are derived from this model. Next, this extended model is applied to a mixture of ions and electrons in a special physical constellation which can be found, for example, in a Tokamak. The mixture is partly in equilibrium in some regions, while in other regions it deviates from equilibrium. The model presented in this thesis is used for this purpose, since it has the advantage of separating the intra- and interspecies interactions. Then, a new model based on a micro-macro decomposition is proposed in order to capture the physical regime of being partly in equilibrium, partly not. Theoretical results are presented, namely convergence rates to equilibrium in the space-homogeneous case and Landau damping for mixtures, in order to compare them with numerical results.
Second, the model presented here is applied to a gas mixture which deviates from equilibrium such that it is described by Navier-Stokes equations on the macroscopic level. In this macroscopic description, four physical coefficients are expected to show up, characterizing the physical behaviour of the gases, namely the diffusion coefficient, the viscosity coefficient, the heat conductivity and the thermal diffusion parameter. A Chapman-Enskog expansion of the model presented here is performed in order to capture three of these four physical coefficients. In addition, several possible extensions to an ellipsoidal statistical model for gas mixtures are proposed in order to capture the fourth coefficient. Three extensions are proposed: an extension which is as simple as possible, an intuitive extension copying the one-species case, and an extension which takes into account the physical motivation of the physicist Holway, who invented the ellipsoidal statistical model for one species. Consistency of the extended models, such as conservation properties, positivity of all temperatures and the H-theorem, is proven. The shape of the global Maxwell distributions in equilibrium is specified.
Third, the model presented here is applied to polyatomic molecules. A multi-component gas mixture with translational and internal energy degrees of freedom is considered. The two species are allowed to have different degrees of freedom in internal energy and are modelled by a system of kinetic ellipsoidal statistical equations. Consistency of this model is shown: conservation properties, positivity of the temperature, the H-theorem and the form of Maxwell distributions in equilibrium. For numerical purposes, the Chu reduction is applied to the developed model for polyatomic gases to reduce the complexity of the model, and an application to a gas consisting of a mono-atomic and a diatomic gas is given.
Last, the limit from the model presented here to the dissipative Euler equations for gas mixtures is proven.
This work is concerned with the numerical approximation of solutions to models that are used to describe atmospheric or oceanographic flows. In particular, this work concentrates on the approximation of the Shallow Water equations with bottom topography and the compressible Euler equations with a gravitational potential. Numerous methods have been developed to approximate solutions of these models. Of specific interest here are the approximations of near equilibrium solutions and, in the case of the Euler equations, the low Mach number flow regime. It is inherent in most of the numerical methods that the quality of the approximation increases with the number of degrees of freedom that are used. Therefore, these schemes are often run in parallel on big computers to achieve the best possible approximation. However, even on those big machines, the desired accuracy can not be achieved by the given maximal number of degrees of freedom that these machines allow. The main focus in this work therefore lies in the development of numerical schemes that give better resolution of the resulting dynamics on the same number of degrees of freedom, compared to classical schemes.
This work is the result of a cooperation of Prof. Klingenberg of the Institute of Mathematics in Würzburg and Prof. Röpke of the Astrophysical Institute in Würzburg. The aim of this collaboration is the development of methods to compute stellar atmospheres. Two main challenges are tackled in this work. First, the accurate treatment of source terms in the numerical scheme. This leads to the so-called well-balanced schemes. They allow for an accurate approximation of near equilibrium dynamics. The second challenge is the approximation of flows in the low Mach number regime. It is known that the compressible Euler equations tend towards the incompressible Euler equations when the Mach number tends to zero. Classical schemes often show excessive diffusion in that flow regime. The scheme developed here falls into the category of asymptotic preserving schemes, i.e. the numerical scheme reflects the behavior of the continuous equations. Moreover, it is shown that the diffusion of the numerical scheme is independent of the Mach number.
In chapter 3, an HLL-type approximate Riemann solver is adapted for simulations of the Shallow Water equations with bottom topography in order to develop a well-balanced scheme. In the literature, most schemes only tackle the equilibria where the fluid is at rest, the so-called lake-at-rest solutions. Here a scheme is developed to accurately capture all the equilibria of the Shallow Water equations. Moreover, in contrast to other works, a second-order extension is proposed that does not rely on an iterative scheme inside the reconstruction procedure, leading to a more efficient scheme.
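The lake-at-rest balancing mentioned above can be illustrated with the classical hydrostatic reconstruction of Audusse et al. combined with a Rusanov flux (a first-order sketch with periodic boundaries and strictly positive water height assumed; the thesis' scheme additionally balances moving equilibria):

```python
import numpy as np

g = 9.81

def phys_flux(h, q):
    # physical shallow water flux for state (h, hu), h > 0 assumed
    v = q / h
    return np.array([q, q * v + 0.5 * g * h * h])

def rusanov(hl, ql, hr, qr):
    a = max(abs(ql / hl) + np.sqrt(g * hl), abs(qr / hr) + np.sqrt(g * hr))
    return 0.5 * (phys_flux(hl, ql) + phys_flux(hr, qr)) \
         - 0.5 * a * np.array([hr - hl, qr - ql])

def step(h, q, b, dx, dt):
    n = len(h)
    hn, qn = h.copy(), q.copy()
    for i in range(n):                       # periodic interfaces i+1/2
        j = (i + 1) % n
        bstar = max(b[i], b[j])
        hl = max(0.0, h[i] + b[i] - bstar)   # hydrostatic reconstruction
        hr = max(0.0, h[j] + b[j] - bstar)
        f = rusanov(hl, hl * q[i] / h[i], hr, hr * q[j] / h[j])
        # topography source folded into one-sided interface fluxes
        fl = f + np.array([0.0, 0.5 * g * (h[i] ** 2 - hl ** 2)])
        fr = f + np.array([0.0, 0.5 * g * (h[j] ** 2 - hr ** 2)])
        hn[i] -= dt / dx * fl[0]; qn[i] -= dt / dx * fl[1]
        hn[j] += dt / dx * fr[0]; qn[j] += dt / dx * fr[1]
    return hn, qn

n, dx, dt = 50, 1.0 / 50, 0.001
x = (np.arange(n) + 0.5) * dx
b = 0.5 * np.exp(-100 * (x - 0.5) ** 2)      # smooth bottom bump
h, q = 2.0 - b, np.zeros(n)                  # lake at rest: h + b = const
for _ in range(20):
    h, q = step(h, q, b, dx, dt)
```

At the lake-at-rest state the reconstructed interface values on both sides coincide, so the numerical flux differences cancel exactly against the source correction and the state is preserved to machine precision.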
In chapter 4, a Suliciu relaxation scheme is adapted for the resolution of hydrostatic equilibria of the Euler equations with a gravitational potential. The hydrostatic relations are underdetermined and therefore the solutions to these equations are not unique. However, the scheme is shown to be well-balanced for a wide class of hydrostatic equilibria. For specific classes, quadrature rules are computed to ensure the exact well-balanced property. Moreover, the scheme is shown to be robust, i.e. it preserves the positivity of mass and energy, and stable with respect to the entropy. Numerical results are presented in order to investigate the impact of the different quadrature rules on the well-balanced property.
In chapter 5, a Suliciu relaxation scheme is adapted for the simulation of low Mach number flows. The scheme is shown to be asymptotic preserving and not to suffer from excessive diffusion in the low Mach number regime. Moreover, it is shown to be robust under certain parameter combinations and to be stable with respect to a Chapman-Enskog analysis.
Numerical results are presented in order to show the advantages of the new approach.
In chapter 6, the schemes developed in chapters 4 and 5 are combined in order to investigate the performance of the numerical scheme in the low Mach number regime in a gravitationally stratified atmosphere. The scheme is shown to be well-balanced, robust and stable with respect to a Chapman-Enskog analysis. Numerical tests are presented to show the advantage of the newly proposed method over the classical scheme.
In chapter 7, some remarks on an alternative way to tackle multidimensional simulations are presented. However no numerical simulations are performed and it is shown why further research on the suggested approach is necessary.
Finite volume methods for compressible Euler equations suffer from an excessive diffusion in the limit of low Mach numbers. This PhD thesis explores new approaches to overcome this.
The analysis of a simpler set of equations that also possess a low Mach number limit is found to give valuable insights. These equations are the acoustic equations obtained as a linearization of the Euler equations. For both systems the limit is characterized by a divergence-free velocity. This constraint is nontrivial only in multiple spatial dimensions. As the Jacobians of the acoustic system do not commute, acoustics cannot be reduced to some kind of multi-dimensional advection. Therefore, first an exact solution in multiple spatial dimensions is obtained. It is shown that the low Mach number limit can be interpreted as a limit of long times.
It is found that the origin of the inability of a scheme to resolve the low Mach number limit is the lack of a discrete counterpart to the limit of long times. Numerical schemes whose discrete stationary states discretize all the analytic stationary states of the PDE are called stationarity preserving. It is shown that for the acoustic equations, stationarity preserving schemes are vorticity preserving and are exactly those that are able to resolve the low Mach limit (low Mach compliant). This establishes a new link between these three concepts.
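In the simplest non-dimensional form (assumed here for illustration), the acoustic system and the stationary states referred to above read

```latex
\begin{align*}
\partial_t p + c\,\nabla\cdot\mathbf v = 0, \qquad
\partial_t \mathbf v + c\,\nabla p = 0,
\end{align*}
```

so the stationary states are exactly $\nabla p = 0$ and $\nabla\cdot\mathbf v = 0$; moreover, $\partial_t(\nabla\times\mathbf v) = -c\,\nabla\times\nabla p = 0$, i.e. the vorticity is conserved, which is the analytic counterpart of the link between stationarity preservation and vorticity preservation.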
Stationarity preservation is studied in detail for both dimensionally split and multi-dimensional schemes for linear acoustics. In particular, it is explained why the same multi-dimensional stencils appear in the literature in very different contexts: these stencils are the unique discretizations of the divergence that allow for a stabilizing, stationarity preserving diffusion.
Stationarity preservation can also be generalized to nonlinear systems such as the Euler equations. Several ways in which such numerical schemes can be constructed for the Euler equations are presented. In particular, a low Mach compliant numerical scheme based on a novel construction idea is derived. Its diffusion is chosen to depend on the velocity divergence rather than only on derivatives of the individual velocity components. This is demonstrated to overcome the low Mach number problem. The scheme shows satisfactory results in numerical simulations and has been found to be stable under explicit time integration.
An efficient multigrid finite-difference scheme for solving elliptic Fredholm partial integro-differential equations (PIDE) is discussed. This scheme combines a second-order accurate finite-difference discretization of the PIDE problem with a multigrid scheme that includes a fast multilevel integration of the Fredholm operator, allowing the fast solution of the PIDE problem. Theoretical estimates of second-order accuracy and results of a local Fourier analysis of the convergence of the proposed multigrid scheme are presented. Results of numerical experiments validate these estimates and demonstrate the optimal computational complexity of the proposed framework.
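As background, the two-grid cycle at the core of such multigrid solvers can be sketched on the plain 1D Poisson problem \(-u'' = f\). This is a deliberately simplified stand-in, not the scheme of the paper (which couples multigrid with multilevel integration of the Fredholm term); the weighted-Jacobi smoother, full-weighting restriction, and grid sizes are illustrative assumptions.

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2/3):
    # Weighted Jacobi smoother for -u'' = f with homogeneous Dirichlet data
    for _ in range(sweeps):
        up = np.pad(u, 1)                              # zero boundary values
        u = (1 - w)*u + w*(h*h*f + up[:-2] + up[2:])/2
    return u

def residual(u, f, h):
    up = np.pad(u, 1)
    return f - (2*u - up[:-2] - up[2:])/(h*h)

def two_grid(u, f, h):
    u = jacobi(u, f, h, 2)                             # pre-smoothing
    r = residual(u, f, h)
    rc = (r[:-2:2] + 2*r[1:-1:2] + r[2::2])/4          # full-weighting restriction
    nc, hc = rc.size, 2*h
    Ac = (2*np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1))/(hc*hc)
    ec = np.linalg.solve(Ac, rc)                       # exact coarse-grid solve
    e = np.zeros_like(u)
    e[1::2] = ec                                       # coarse nodes: injection
    ep = np.pad(ec, 1)
    e[0::2] = (ep[:-1] + ep[1:])/2                     # linear interpolation
    return jacobi(u + e, f, h, 2)                      # correction + post-smoothing
```

Repeating the cycle (recursively on the coarse solve, in a full multigrid code) reduces the algebraic error to the level of the discretization error at a cost proportional to the number of unknowns, which is the mechanism behind the optimal-complexity claims above.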
The main theme of this thesis is the development of multigrid and hierarchical matrix solution procedures with almost linear computational complexity for classes of partial integro-differential problems. An elliptic partial integro-differential equation, a convection-diffusion partial integro-differential equation, and a convection-diffusion partial integro-differential optimality system are investigated. In the first part of this work, an efficient multigrid finite-difference scheme for solving an elliptic Fredholm partial integro-differential equation (PIDE) is discussed. This scheme combines a second-order accurate finite-difference discretization and Simpson's quadrature rule for the approximation of the PIDE problem with a multigrid scheme and a fast multilevel integration method for the Fredholm operator, allowing the fast solution of the PIDE problem. Theoretical estimates of second-order accuracy and results of a local Fourier analysis of the convergence of the proposed multigrid scheme are presented. Results of numerical experiments, including experiments for elliptic PIDE problems with singular kernels, validate these estimates and demonstrate the optimal computational complexity of the proposed framework. The experience gained in this part of the work is used for the investigation of convection-diffusion partial integro-differential equations in the second part of this thesis.
Convection-diffusion PIDE problems are discretized using a finite volume scheme referred to as the Chang and Cooper (CC) scheme and a quadrature rule. Also for this class of PIDE problems and this numerical setting, a stability and accuracy analysis of the CC scheme combined with Simpson's quadrature rule is presented, proving second-order accuracy of the numerical solution. To extend the proposed approximation and solution strategy to systems of convection-diffusion PIDEs, an optimal control problem governed by this model is considered. In this case, the research focus is the CC-Simpson discretization of the optimality system and its solution by the proposed multigrid strategy. Second-order accuracy of the optimization solution is proved, and results of a local Fourier analysis are presented that provide sharp convergence estimates, confirming the optimal computational complexity of the multigrid fast-integration technique.
While (geometric) multigrid techniques require ad-hoc implementation depending on the structure of the PIDE problem and on the dimensionality of the domain on which the problem is posed, the hierarchical matrix framework allows a more general treatment that exploits the algebraic structure of the problem at hand. In this thesis, this framework is extended to combined differential and integral problems, considering the case of a convection-diffusion PIDE. Here, the starting point is the CC discretization of the convection-diffusion operator combined with the trapezoidal quadrature rule. The hierarchical matrix approach exploits the algebraic nature of hierarchical matrices for blockwise low-rank approximation of the sparse convection-diffusion discretization, and enables a data-sparse representation of the fully populated matrix in which all essential matrix operations are performed with at most logarithmic-optimal complexity. The factorization of part or all of the coefficient matrix is used as a preconditioner for the solution of the PIDE problem with a generalized minimum residual (GMRes) solver.
Numerical analysis estimates of the accuracy of the finite-volume and trapezoidal-rule approximation are presented and combined with estimates of the hierarchical matrix approximation and with the accuracy of the GMRes iterates. Results of numerical experiments are reported that successfully validate the theoretical estimates and the optimal computational complexity of the proposed hierarchical matrix solution procedure. These results include an extension to higher dimensions and an application to the time evolution of the probability density function of a jump-diffusion process.
There is broad scientific consensus on the particular importance of analogy-formation processes for learning in general and for the learning of mathematics in particular. It is therefore natural to demand that mathematics teaching which is meant to support learning be designed with an awareness of this importance: on the one hand, it should point out analogies and make use of them in teaching mathematics; on the other hand, it should also offer learners opportunities to recognize and develop analogies themselves. In short: the ability to form analogies should be deliberately fostered by instruction.
To meet this demand, sufficient knowledge must be available about how analogy-formation processes unfold in learning mathematics and in solving mathematical problems, what characterizes successful analogy-formation processes, and where difficulties may arise.
The author shows how processes of analogy formation in solving mathematical problems can be initiated, observed, described, and interpreted in order to identify, on this basis, starting points for suitable support measures, to assess existing ideas for fostering the ability to form analogies, and to develop new ones. Paths of analogy formation are traced and examined that rest on the interleaving of two dimensions of analogy formation within the underlying theoretical model. In this way, different approaches can be contrasted, as can critical points in the course of an analogy-formation process. This yields teaching proposals that build on ideas of example-based learning.
This doctoral thesis provides a classification of equivariant star products (star products together with quantum momentum maps) in terms of equivariant de Rham cohomology. This classification result is then used to construct an analogue of the Kirwan map, from which one can directly obtain the characteristic class of certain reduced star products on Marsden-Weinstein reduced symplectic manifolds from the equivariant characteristic class of the corresponding unreduced equivariant star product. From the surjectivity of this map one can conclude that every star product on a Marsden-Weinstein reduced symplectic manifold can (up to equivalence) be obtained as a reduced equivariant star product.
This dissertation deals with three mathematical areas: polynomial matrices over finite fields, linear systems, and coding theory.
Coprimeness properties of polynomial matrices provide criteria for the reachability and observability of interconnected linear systems. Since time-discrete linear systems over finite fields and convolutional codes are essentially the same objects, these results can be transferred to criteria for the non-catastrophicity of convolutional codes.
We calculate the probability that specially structured polynomial matrices are right prime. In particular, formulas for the number of pairwise coprime polynomials and for the number of mutually left coprime polynomial matrices are derived. This leads to the probability that a parallel connected linear system is reachable and that a parallel connected convolutional code is non-catastrophic.
Moreover, the corresponding probabilities are calculated for other networks of linear systems and convolutional codes, such as series connection.
Furthermore, the probabilities that a convolutional code is MDP and that a block code is MDS are approximated.
Finally, we consider the probability of finding a solution for a linear network coding problem.
In this work, multi-particle quantum optimal control problems are studied in the framework of time-dependent density functional theory (TDDFT).
Quantum control problems are of great importance in both fundamental research and application of atomic and molecular systems. Typical applications are laser induced chemical reactions, nuclear magnetic resonance experiments, and quantum computing.
Theoretically, the problem of how to describe a non-relativistic system of multiple particles is solved by the Schrödinger equation (SE). However, due to the exponential increase in numerical complexity with the number of particles, it is impossible to directly solve the Schrödinger equation for large systems of interest. An efficient and successful approach to overcome this difficulty is the framework of TDDFT and the use of the time-dependent Kohn-Sham (TDKS) equations therein.
This is done by replacing the multi-particle SE with a set of nonlinear single-particle Schrödinger equations that are coupled through an additional potential.
Despite the fact that TDDFT is widely used for physical and quantum-chemical calculations and software packages for its use are readily available, its mathematical foundation is still under active development and even fundamental issues remain unproven today.
The main purpose of this thesis is to provide a consistent and rigorous setting for the TDKS equations and the related optimal control problems.
In the first part of the thesis, the frameworks of density functional theory (DFT) and TDDFT are introduced. This includes a detailed presentation of the different functional sets forming DFT. Furthermore, the known equivalence of the TDKS system to the original SE problem is discussed further.
To implement the TDDFT framework for multi-particle computations, the TDKS equations provide one of the most successful approaches nowadays. However, only a few mathematical results concerning these equations are available, and these results do not cover all issues that arise in the formulation of optimal control problems governed by the TDKS model.
It is the purpose of the second part of this thesis to address these issues such as higher regularity of TDKS solutions and the case of weaker requirements on external (control) potentials that are instrumental for the formulation of well-posed TDKS control problems. For this purpose, in this work, existence and uniqueness of TDKS solutions are investigated in the Galerkin framework and using energy estimates for the nonlinear TDKS equations.
In the third part of this thesis, optimal control problems governed by the TDKS model are formulated and investigated. For this purpose, relevant cost functionals that model the purpose of the control are discussed.
Hence, TDKS control problems result from the requirement of optimizing the given cost functionals subject to the differential constraint given by the TDKS equations. The analysis of these problems is novel and represents one of the main contributions of the present thesis.
In particular, existence of minimizers is proved and their characterization by TDKS optimality systems is discussed in detail.
To this end, Fréchet differentiability of the TDKS model and of the cost functionals is addressed considering \(H^1\) cost of the control.
This part is concluded by deriving the reduced gradient in the \(L^2\) and \(H^1\) inner product.
While \(L^2\) optimization is widespread in the literature, the choice of the \(H^1\) gradient is motivated in this work by theoretical considerations and by the resulting numerical advantages.
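To illustrate this design choice: the \(H^1\) gradient can be viewed as the image of the \(L^2\) gradient under the Riesz map of the \(H^1\) inner product, i.e. it solves \((I - \Delta)\,g_{H^1} = g_{L^2}\), which smooths the update direction. The following 1D finite-difference sketch assumes homogeneous Dirichlet conditions; the names and discretization are illustrative, not taken from the thesis.

```python
import numpy as np

def h1_gradient(g_l2, dx):
    """Map an L2 gradient to the H1 gradient by solving
    (I - d^2/dx^2) g_H1 = g_L2 on a uniform 1D grid (Dirichlet BCs assumed)."""
    n = g_l2.size
    # dense tridiagonal operator I + (-Laplacian); fine for a small sketch
    A = np.eye(n) + (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / dx**2
    return np.linalg.solve(A, g_l2)
```

Applied to a noisy \(L^2\) gradient, the \(H^1\) gradient is markedly smoother, which is one concrete source of the numerical advantages mentioned above.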
The last part of the thesis is devoted to the numerical approximation of the TDKS optimality systems and to their solution by gradient-based optimization techniques.
For the former purpose, Strang time-splitting pseudo-spectral schemes are discussed including a review of some recent theoretical estimates for these schemes and a numerical validation of these estimates.
For the latter purpose, nonlinear (projected) conjugate gradient methods are implemented and are used to validate the theoretical analysis of this thesis with results of numerical experiments with different cost functional settings.
An explicit Runge-Kutta discontinuous Galerkin (RKDG) method is used to devise numerical schemes for both the compressible Euler equations of gas dynamics and the ideal magnetohydrodynamic (MHD) model. These systems of conservation laws are known to have discontinuous solutions. Discontinuities are the source of spurious oscillations in the solution profile of the numerical approximation when a high-order accurate numerical method is used. Different techniques are reviewed in order to control spurious oscillations. A shock detection technique is shown to be useful in order to determine the regions where the spurious oscillations appear, such that a limiter can be used to eliminate these numerical artifacts. To guarantee the positivity of specific variables like the density and the pressure, a positivity preserving limiter is used. Furthermore, a numerical flux proven to preserve the entropy stability of the semi-discrete DG scheme for the MHD system is used. Finally, the numerical schemes are implemented using the deal.II C++ libraries in the dflo code. Solutions of common test cases show the capability of the method.
A framework for the optimal sparse control of the probability density function of a jump-diffusion process is presented. This framework is based on the partial integro-differential Fokker-Planck (FP) equation that governs the time evolution of the probability density function of this process. In the stochastic process and, correspondingly, in the FP model, the control function enters as a time-dependent coefficient. The objective of the control is to minimize a discrete-in-time, respectively continuous-in-time, tracking functional together with its \(L^2\)- and \(L^1\)-costs, where the latter is considered to promote control sparsity. An efficient proximal scheme for solving these optimal control problems is considered. Results of numerical experiments are presented to validate the theoretical results and the computational effectiveness of the proposed control framework.
This doctoral thesis is concerned with the mathematical modeling of magnetoelastic materials and the analysis of PDE systems describing these materials and obtained from a variational approach.
The purpose is to capture the behavior of elastic particles that are not only magnetic but exhibit a magnetic domain structure which is well described by the micromagnetic energy and the Landau-Lifshitz-Gilbert equation of the magnetization. The equation of motion for the material’s velocity is derived in a continuum mechanical setting from an energy ansatz. In the modeling process, the focus is on the interplay between Lagrangian and Eulerian coordinate systems to combine elasticity and magnetism in one model without the assumption of small deformations.
The resulting general PDE system is simplified under special assumptions. Existence of weak solutions is proved for two variants of the PDE system, one including gradient-flow dynamics for the magnetization and the other featuring the Landau-Lifshitz-Gilbert equation. The proof is based on a Galerkin method and a fixed-point argument. The analysis of the PDE system with the Landau-Lifshitz-Gilbert equation uses a more involved approach to obtain weak solutions, based on work of G. Carbou and P. Fabrie (2001).
Background
HIV-disease progression correlates with immune activation. Here we investigated whether corticosteroid treatment can attenuate HIV disease progression in antiretroviral-untreated patients.
Methods
Double-blind, placebo-controlled randomized clinical trial including 326 HIV-patients in a resource-limited setting in Tanzania (clinicaltrials.gov NCT01299948). Inclusion criteria were a CD4 count above 300 cells/μl, the absence of AIDS-defining symptoms and an ART-naïve therapy status. Study participants received 5 mg prednisolone per day or placebo for 2 years. Primary endpoint was time to progression to an AIDS-defining condition or to a CD4-count below 200 cells/μl.
Results
No significant change in progression towards the primary endpoint was observed in the intent-to-treat (ITT) analysis (19 cases with prednisolone versus 28 cases with placebo, p = 0.1407). In a per-protocol (PP) analysis, 13 versus 24 study participants progressed to the primary study endpoint (p = 0.0741). Secondary endpoints: prednisolone treatment decreased immune activation (sCD14, suPAR, CD38/HLA-DR/CD8+) and increased CD4 counts (+77.42 ± 5.70 cells/μl compared to -37.42 ± 10.77 cells/μl under placebo, p < 0.0001). Treatment with prednisolone was associated with a 3.2-fold increase in HIV viral load (p < 0.0001). In a post-hoc analysis stratifying by sex, females treated with prednisolone progressed significantly more slowly to the primary study endpoint than females treated with placebo (ITT analysis: 11 versus 21 cases, p = 0.0567; PP analysis: 5 versus 18 cases, p = 0.0051). No changes in disease progression were observed in men.
Conclusions
This study could not detect any significant effects of prednisolone on disease progression in antiretroviral-untreated HIV infection within the intent-to-treat population. However, significant effects were observed on CD4 counts, immune activation and HIV viral load. This study contributes to a better understanding of the role of immune activation in the pathogenesis of HIV infection.
First-order proximal methods that solve linear and bilinear elliptic optimal control problems with a sparsity cost functional are discussed. In particular, fast convergence of these methods is proved. For benchmarking purposes, inexact proximal schemes are compared to an inexact semismooth Newton method. Results of numerical experiments are presented to demonstrate the computational effectiveness of proximal schemes applied to infinite-dimensional elliptic optimal control problems and to validate the theoretical estimates.
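In finite dimensions, the basic proximal-gradient (ISTA) iteration behind such first-order schemes pairs a gradient step on the smooth part of the cost with the proximal operator of the \(L^1\) term, which is soft-thresholding. The following sketch uses an illustrative quadratic model as a stand-in for the discretized elliptic control problems; all names and parameters are assumptions for the example.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (componentwise soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, alpha, beta, iters=500):
    """Proximal-gradient (ISTA) sketch for
       min_u 0.5*||A u - b||^2 + 0.5*alpha*||u||^2 + beta*||u||_1."""
    L = np.linalg.norm(A, 2)**2 + alpha        # Lipschitz constant of smooth part
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ u - b) + alpha * u   # gradient of the smooth part
        u = soft_threshold(u - grad / L, beta / L)
    return u
```

The soft-thresholding step is what produces exactly sparse iterates, mirroring the sparsity-promoting role of the \(L^1\) cost in the paper.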
The topic of this thesis is the theoretical and numerical analysis of optimal control problems whose differential constraints are given by Fokker-Planck models related to jump-diffusion processes. We tackle the issue of controlling a stochastic process by formulating a deterministic optimization problem. The key idea of our approach is to focus on the probability density function of the process, whose time evolution is modeled by the Fokker-Planck equation. Our control framework is advantageous since it allows one to model the action of the control over the entire range of the process, whose statistics are characterized by the shape of its probability density function.
We first investigate jump-diffusion processes, illustrating their main properties. We define stochastic initial-value problems and present results on the existence and uniqueness of their solutions. We then discuss how numerical solutions of stochastic problems are computed, focusing on the Euler-Maruyama method.
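A minimal sketch of the Euler-Maruyama method for a scalar jump-diffusion with compound Poisson jumps may look as follows; the function names and the treatment of jumps (Poisson-sampled once per time step) are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def euler_maruyama_jump(x0, drift, sigma, lam, jump_sampler, T, n_steps, rng):
    """One Euler-Maruyama path of dX = drift(X) dt + sigma(X) dW + dJ,
    where J is a compound Poisson process with jump rate lam."""
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))               # Brownian increment
        n_jumps = rng.poisson(lam * dt)                 # jumps in this step
        dJ = sum(jump_sampler(rng) for _ in range(n_jumps))
        x[k + 1] = x[k] + drift(x[k])*dt + sigma(x[k])*dW + dJ
    return x
```

With the noise and jumps switched off the iteration reduces to the explicit Euler method, which is a convenient sanity check for an implementation.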
We turn our attention to jump-diffusion models with time- and space-dependent coefficients and jumps given by a compound Poisson process. We derive the related Fokker-Planck equations, which take the form of partial integro-differential equations. Their differential term is governed by a parabolic operator, while the nonlocal integral operator is due to the presence of the jumps. The derivation is carried out in two cases. On the one hand, we consider a process with unbounded range. On the other hand, we confine the dynamics of the sample paths to a bounded domain, and thus the behavior of the process near the boundaries has to be specified. Throughout this thesis, we set the barriers of the domain to be reflecting.
The Fokker-Planck equation, endowed with initial and boundary conditions, gives rise to Fokker-Planck problems. Their solvability is discussed in suitable functional spaces. The properties of their solutions are examined, namely their regularity, positivity and probability mass conservation. Since closed-form solutions to Fokker-Planck problems are usually not available, one has to resort to numerical methods.
The first main achievement of this thesis is the definition and analysis of conservative and positivity-preserving numerical methods for Fokker-Planck problems. Our SIMEX1 and SIMEX2 (Splitting-Implicit-Explicit) schemes are defined within the framework given by the method of lines. The differential operator is discretized by a finite volume scheme given by the Chang-Cooper method, while the integral operator is approximated by a mid-point rule. This leads to a large system of ordinary differential equations, which we approximate with the Strang-Marchuk splitting method. This technique decomposes the original problem into a sequence of subproblems with simpler structure, which are solved separately and linked to each other through initial conditions and final solutions. After performing the splitting step, we carry out the time integration with first- and second-order time-differencing methods. These steps give rise to the SIMEX1 and SIMEX2 methods, respectively.
A full convergence and stability analysis of our schemes is included. Moreover, we are able to prove that the positivity and the mass conservation of the solution to Fokker-Planck problems are satisfied at the discrete level by the numerical solutions computed with the SIMEX schemes.
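The Strang-Marchuk step itself is generic and can be sketched with placeholder substep solvers; `solve_A` and `solve_B` below are hypothetical stand-ins for the separately solved subproblems (e.g. the differential and integral parts), not the actual SIMEX substeps.

```python
import numpy as np

def strang_step(u, dt, solve_A, solve_B):
    """One Strang-Marchuk splitting step for du/dt = A(u) + B(u):
    half step with A, full step with B, half step with A.
    For non-commuting operators the local error is O(dt^3), i.e. second order."""
    u = solve_A(u, dt/2)
    u = solve_B(u, dt)
    return solve_A(u, dt/2)
```

For commuting subflows (e.g. scalar linear ODEs) the splitting is exact, which gives a quick correctness check; the second-order accuracy only matters in the non-commuting case.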
The second main achievement of this thesis is the theoretical analysis and the numerical solution of optimal control problems governed by Fokker-Planck models. The field of optimal control deals with finding control functions in such a way that given cost functionals are minimized. Our framework aims at the minimization of the difference between a known sequence of values and the first moment of a jump-diffusion process; therefore, this formulation can also be considered as a parameter estimation problem for stochastic processes. Two cases are discussed, in which the form of the cost functional is continuous-in-time and discrete-in-time, respectively.
The control variable enters the state equation as a coefficient of the Fokker-Planck partial integro-differential operator. We also include in the cost functional an $L^1$-penalization term, which enhances the sparsity of the solution. Therefore, the resulting optimization problem is nonconvex and nonsmooth. We derive the first-order optimality systems satisfied by the optimal solution. The computation of the optimal solution is carried out by means of proximal iterative schemes in an infinite-dimensional framework.