We derive a multi-species BGK model with velocity-dependent collision frequency for a non-reactive, multi-component gas mixture. The model is derived by minimizing a weighted entropy under the constraint that the number of particles of each species, total momentum, and total energy are conserved. We prove that this minimization problem admits a unique solution for very general collision frequencies. Moreover, we prove that the model satisfies an H-Theorem and characterize the form of equilibrium.
The analysis of real data by means of statistical methods, with the aid of a software package common in industry and administration, is usually not an integral part of mathematics studies, but it will certainly be part of future professional work. The present book links elements of time series analysis with a selection of statistical procedures used in general practice, including the statistical software package SAS. Consequently, this book addresses students of statistics as well as students of other branches such as economics, demography and engineering, where lectures on statistics belong to the academic training. But it is also intended for the practitioner who, beyond the use of statistical tools, is interested in their mathematical background. Numerous problems illustrate the applicability of the presented statistical procedures, with solutions given by SAS. The programs used are explicitly listed and explained. No previous experience with SAS or any particular computer system is expected, so that a short training period is guaranteed. This book is meant for a two-semester course (lecture, seminar or practical training), where the first three chapters can be dealt with in the first semester. They provide the principal components of the analysis of a time series in the time domain. Chapters 4, 5 and 6 deal with its analysis in the frequency domain and can be worked through in the second term. In order to understand the mathematical background, some terms are useful, such as convergence in distribution, stochastic convergence and the maximum likelihood estimator, as well as a basic knowledge of test theory, so that work on the book can start after an introductory lecture on stochastics. Each chapter includes exercises; an exhaustive treatment is recommended. Chapter 7 (case study) deals with a practical case and demonstrates the presented methods. This chapter can be used independently in a seminar or practical training course if the concepts of time series analysis are already well understood. The book is subdivided throughout into a statistical part and an SAS-specific part. For better clarity, the SAS-specific parts are highlighted. This book is an open-source project under the GNU Free Documentation License.
The analysis of real data by means of statistical methods, with the aid of a software package common in industry and administration, is usually not an integral part of mathematics studies, but it will certainly be part of future professional work. The present book links elements of time series analysis with a selection of statistical procedures used in general practice, including the statistical software package SAS (Statistical Analysis System). Consequently, this book addresses students of statistics as well as students of other branches such as economics, demography and engineering, where lectures on statistics belong to the academic training. But it is also intended for the practitioner who, beyond the use of statistical tools, is interested in their mathematical background. Numerous problems illustrate the applicability of the presented statistical procedures, with solutions given by SAS. The programs used are explicitly listed and explained. No previous experience with SAS or any particular computer system is expected, so that a short training period is guaranteed. This book is meant for a two-semester course (lecture, seminar or practical training), where the first two chapters can be dealt with in the first semester. They provide the principal components of the analysis of a time series in the time domain. Chapters 3, 4 and 5 deal with its analysis in the frequency domain and can be worked through in the second term. In order to understand the mathematical background, some terms are useful, such as convergence in distribution, stochastic convergence and the maximum likelihood estimator, as well as a basic knowledge of test theory, so that work on the book can start after an introductory lecture on stochastics. Each chapter includes exercises; an exhaustive treatment is recommended. The book is subdivided throughout into a statistical part and an SAS-specific part. For better clarity, the SAS-specific part, including the diagrams generated with SAS, always starts with a computer symbol, representing the beginning of a session at the computer, and ends with a printer symbol marking the end of that session. This book is an open-source project under the GNU Free Documentation License.
A Lagrange multiplier method for semilinear elliptic state constrained optimal control problems
(2020)
In this paper we apply an augmented Lagrange method to a class of semilinear elliptic optimal control problems with pointwise state constraints. We show strong convergence of subsequences of the primal variables to a local solution of the original problem, as well as weak convergence of the adjoint states and weak-* convergence of the multipliers associated to the state constraint. Moreover, we show existence of stationary points in arbitrarily small neighborhoods of local solutions of the original problem. Additionally, various numerical results are presented.
It is well known that the least squares estimator performs poorly in the presence of multicollinearity. One way to overcome this problem is to use biased estimators, e.g. ridge regression estimators. In this study an estimation procedure is proposed that is based on adding a small quantity omega to some or all of the regressors. The resulting biased estimator is described in dependence on omega, and it is shown that its mean squared error is smaller than that of the least squares estimator in the case of highly correlated regressors.
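The effect described in this abstract can be illustrated with a small numerical sketch (all names and values below are illustrative, not taken from the study): two nearly collinear regressors are perturbed by a small constant omega before the least squares fit.

```python
import numpy as np

# Illustrative sketch: perturb highly correlated regressors by a small
# constant omega before least squares, yielding a biased but more stable
# estimator in the spirit of ridge regression.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)     # nearly collinear with x1
X = np.column_stack([x1, x2])
beta_true = np.array([1.0, 1.0])
y = X @ beta_true + 0.5 * rng.normal(size=n)

def perturbed_ls(X, y, omega):
    """Least squares after adding omega to each regressor entry."""
    coef, *_ = np.linalg.lstsq(X + omega, y, rcond=None)
    return coef

beta_ols = perturbed_ls(X, y, 0.0)      # ordinary least squares
beta_omega = perturbed_ls(X, y, 0.1)    # biased, but better conditioned
```

With omega = 0 this reduces to ordinary least squares; a small positive omega improves the conditioning of the normal equations at the price of a bias, which is the trade-off the abstract analyzes.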
This paper is devoted to the numerical analysis of non-smooth ensemble optimal control problems governed by the Liouville (continuity) equation that have been originally proposed by R.W. Brockett with the purpose of determining an efficient and robust control strategy for dynamical systems. A numerical methodology for solving these problems is presented that is based on a non-smooth Lagrange optimization framework where the optimal controls are characterized as solutions to the related optimality systems. For this purpose, approximation and solution schemes are developed and analysed. Specifically, for the approximation of the Liouville model and its optimization adjoint, a combination of a Kurganov–Tadmor method, a Runge–Kutta scheme, and a Strang splitting method are discussed. The resulting optimality system is solved by a projected semi-smooth Krylov–Newton method. Results of numerical experiments are presented that successfully validate the proposed framework.
One of the major motivations for the analysis and modeling of time series data is the forecasting of future outcomes. The use of interval forecasts instead of point forecasts allows us to incorporate the apparent forecast uncertainty. When forecasting count time series, one also has to account for the discreteness of the range, which is done by using coherent prediction intervals (PIs) relying on a count model. We provide a comprehensive performance analysis of coherent PIs for diverse types of count processes. We also compare them to approximate PIs that are computed based on a Gaussian approximation. Our analyses rely on an extensive simulation study. It turns out that the Gaussian approximations do considerably worse than the coherent PIs. Furthermore, special characteristics such as overdispersion, zero inflation, or trend clearly affect the PIs' performance. We conclude by presenting two empirical applications of PIs for count time series: the demand for blood bags in a hospital and the number of company liquidations in Germany.
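The contrast between coherent and approximate prediction intervals can be sketched as follows (a minimal illustration assuming a Poisson one-step-ahead forecast distribution with an invented mean; the count models studied in the paper are more general):

```python
import numpy as np
from scipy import stats

# A coherent (integer-valued) prediction interval from a Poisson forecast
# distribution versus a Gaussian approximation.  The mean `lam` is an
# assumed stand-in for any fitted count-model forecast.
lam = 3.2            # one-step-ahead forecast mean (illustrative)
alpha = 0.05

# Coherent PI: integer quantiles of the discrete forecast distribution.
lo = stats.poisson.ppf(alpha / 2, lam)
hi = stats.poisson.ppf(1 - alpha / 2, lam)

# Gaussian approximation: symmetric interval around the mean.
z = stats.norm.ppf(1 - alpha / 2)
lo_g = lam - z * np.sqrt(lam)
hi_g = lam + z * np.sqrt(lam)
```

Here the Gaussian interval has a negative lower bound, which is incoherent for count data, while the Poisson quantiles yield non-negative integer limits.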
We investigate the convergence of the proximal gradient method applied to control problems with non-smooth and non-convex control cost. Here, we focus on control cost functionals that promote sparsity, which includes functionals of \(L^p\)-type for \(p \in [0,1)\). We prove stationarity properties of weak limit points of the method. These properties are weaker than those provided by Pontryagin's maximum principle and weaker than L-stationarity.
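A finite-dimensional analogue conveys the idea (a sketch under stated assumptions: an \(L^0\) penalty stands in for the sparsity-promoting control cost, in which case the proximal step becomes hard thresholding; none of this code is taken from the paper):

```python
import numpy as np

# Proximal gradient for  min_u  0.5*||A u - b||^2 + beta*||u||_0,
# a finite-dimensional stand-in for sparsity-promoting control problems.
def hard_threshold(v, beta, step):
    """Prox of step*beta*||.||_0: zero out entries with v_i^2 <= 2*step*beta."""
    w = v.copy()
    w[v**2 <= 2 * step * beta] = 0.0
    return w

def prox_gradient(A, b, beta, step, iters=200):
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ u - b)                      # gradient of smooth part
        u = hard_threshold(u - step * grad, beta, step)  # proximal step
    return u

# Tiny example: the small coefficient is thresholded away.
A = np.eye(2)
b = np.array([2.0, 0.01])
u = prox_gradient(A, b, beta=0.05, step=1.0)
```

The limit points of such an iteration satisfy only a pointwise (thresholding-type) stationarity condition, which is the kind of weakened stationarity the paper makes precise.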
We consider the Bhatnagar–Gross–Krook (BGK) model, an approximation of the Boltzmann equation, describing the time evolution of a single monoatomic rarefied gas and satisfying the same two main properties (conservation properties and entropy inequality). However, in practical applications, one often has to deal with two additional physical issues. First, a gas often does not consist of only one species, but it consists of a mixture of different species. Second, the particles can store energy not only in translational degrees of freedom but also in internal degrees of freedom such as rotations or vibrations (polyatomic molecules). Therefore, here, we will present recent BGK models for gas mixtures for mono- and polyatomic particles and the existing mathematical theory for these models.
This thesis deals with a new so-called sequential quadratic Hamiltonian (SQH) iterative scheme to solve optimal control problems with differential models and cost functionals ranging from smooth to discontinuous and non-convex. This scheme is based on the Pontryagin maximum principle (PMP), which provides necessary optimality conditions for an optimal solution. In this framework, a Hamiltonian function is defined that attains its minimum pointwise at the optimal solution of the corresponding optimal control problem. In the SQH scheme, this Hamiltonian function is augmented by a quadratic penalty term consisting of the current control function and the control function from the previous iteration. The heart of the SQH scheme is to minimize this augmented Hamiltonian function pointwise in order to determine a control update. Since the PMP does not require any differentiability with respect to the control argument, the SQH scheme can be used to solve optimal control problems with both smooth and non-convex or even discontinuous cost functionals. The main achievement of the thesis is the formulation of a robust and efficient SQH scheme and a framework in which the convergence analysis of the SQH scheme can be carried out. In this framework, convergence of the scheme means that the calculated solution fulfills the PMP condition. The governing differential models of the considered optimal control problems are ordinary differential equations (ODEs) and partial differential equations (PDEs). In the PDE case, elliptic and parabolic equations as well as the Fokker-Planck (FP) equation are considered. For both the ODE and the PDE cases, assumptions are formulated under which it can be proved that a solution to an optimal control problem has to fulfill the PMP. The obtained results are essential for the discussion of the convergence analysis of the SQH scheme. This analysis has two parts.
The first one is the well-posedness of the scheme, which means that all steps of the scheme can be carried out and provide a result in finite time. The second part is the PMP consistency of the solution, meaning that the solution of the SQH scheme fulfills the PMP conditions. In the ODE case, the following results are obtained that state well-posedness of the SQH scheme and the PMP consistency of the corresponding solution. Lemma 7 states the existence of a pointwise minimum of the augmented Hamiltonian. Lemma 11 proves the existence of a weight of the quadratic penalty term such that the minimization of the corresponding augmented Hamiltonian results in a control update that reduces the value of the cost functional. Lemma 12 states that the SQH scheme stops if an iterate is PMP optimal. Theorem 13 proves the cost-functional-reducing properties of the SQH control updates. The main result is given in Theorem 14, which states the pointwise convergence of the SQH scheme towards a PMP consistent solution. In this ODE framework, the SQH method is applied to two optimal control problems. The first one is an optimal quantum control problem where it is shown that the SQH method converges much faster to an optimal solution than a globalized Newton method. The second optimal control problem is an optimal tumor treatment problem with a system of coupled highly non-linear state equations that describe the tumor growth. It is shown that the framework in which the convergence of the SQH scheme is proved is applicable to this highly non-linear case. Next, the case of PDE control problems is considered. First a general framework is discussed in which a solution to the corresponding optimal control problem fulfills the PMP conditions. In this case, many theoretical estimates are presented in Theorem 59 and Theorem 64 to prove in particular the essential boundedness of the state and adjoint variables.
The steps for the convergence analysis of the SQH scheme are analogous to that of the ODE case and result in Theorem 27 that states the PMP consistency of the solution obtained with the SQH scheme. This framework is applied to different elliptic and parabolic optimal control problems, including linear and bilinear control mechanisms, as well as non-linear state equations. Moreover, the SQH method is discussed for solving a state-constrained optimal control problem in an augmented formulation. In this case, it is shown in Theorem 30 that for increasing the weight of the augmentation term, which penalizes the violation of the state constraint, the measure of this state constraint violation by the corresponding solution converges to zero. Furthermore, an optimal control problem with a non-smooth L\(^1\)-tracking term and a non-smooth state equation is investigated. For this purpose, an adjoint equation is defined and the SQH method is used to solve the corresponding optimal control problem. The final part of this thesis is devoted to a class of FP models related to specific stochastic processes. The discussion starts with a focus on random walks where also jumps are included. This framework allows a derivation of a discrete FP model corresponding to a continuous FP model with jumps and boundary conditions ranging from absorbing to totally reflecting. This discussion allows the consideration of the drift-control resulting from an anisotropic probability of the steps of the random walk. Thereafter, in the PMP framework, two drift-diffusion processes and the corresponding FP models with two different control strategies for an optimal control problem with an expectation functional are considered. In the first strategy, the controls depend on time and in the second one, the controls depend on space and time. In both cases a solution to the corresponding optimal control problem is characterized with the PMP conditions, stated in Theorem 48 and Theorem 49. 
The well-posedness of the SQH scheme is shown in both cases and further conditions are discussed that ensure the convergence of the SQH scheme to a PMP consistent solution. The case of a space and time dependent control strategy results in a special structure of the corresponding PMP conditions that is exploited in another solution method, the so-called direct Hamiltonian (DH) method.
We prove a sharp Bernstein-type inequality for complex polynomials which are positive and satisfy a polynomial growth condition on the positive real axis. This leads to an improved upper estimate in the recent work of Culiuc and Treil (Int. Math. Res. Not. 2019: 3301–3312, 2019) on the weighted martingale Carleson embedding theorem with matrix weights. In the scalar case this new upper bound is optimal.
In this work, multi-particle quantum optimal control problems are studied in the framework of time-dependent density functional theory (TDDFT).
Quantum control problems are of great importance in both fundamental research and application of atomic and molecular systems. Typical applications are laser induced chemical reactions, nuclear magnetic resonance experiments, and quantum computing.
Theoretically, the problem of how to describe a non-relativistic system of multiple particles is solved by the Schrödinger equation (SE). However, due to the exponential increase in numerical complexity with the number of particles, it is impossible to directly solve the Schrödinger equation for large systems of interest. An efficient and successful approach to overcome this difficulty is the framework of TDDFT and the use of the time-dependent Kohn-Sham (TDKS) equations therein.
This is done by replacing the multi-particle SE with a set of nonlinear single-particle Schrödinger equations that are coupled through an additional potential.
Despite the fact that TDDFT is widely used for physical and quantum chemical calculations, and software packages for its use are readily available, its mathematical foundation is still under active development, and even fundamental issues remain unproven today.
The main purpose of this thesis is to provide a consistent and rigorous setting for the TDKS equations and of the related optimal control problems.
In the first part of the thesis, the frameworks of density functional theory (DFT) and TDDFT are introduced. This includes a detailed presentation of the different functional sets forming DFT. Furthermore, the known equivalence of the TDKS system to the original SE problem is discussed further.
To implement the TDDFT framework for multi-particle computations, the TDKS equations provide one of the most successful approaches nowadays. However, only few mathematical results concerning these equations are available and these results do not cover all issues that arise in the formulation of optimal control problems governed by the TDKS model.
It is the purpose of the second part of this thesis to address these issues such as higher regularity of TDKS solutions and the case of weaker requirements on external (control) potentials that are instrumental for the formulation of well-posed TDKS control problems. For this purpose, in this work, existence and uniqueness of TDKS solutions are investigated in the Galerkin framework and using energy estimates for the nonlinear TDKS equations.
In the third part of this thesis, optimal control problems governed by the TDKS model are formulated and investigated. For this purpose, relevant cost functionals that model the purpose of the control are discussed.
Henceforth, TDKS control problems result from the requirement of optimising the given cost functionals subject to the differential constraint given by the TDKS equations. The analysis of these problems is novel and represents one of the main contributions of the present thesis.
In particular, existence of minimizers is proved and their characterization by TDKS optimality systems is discussed in detail.
To this end, Fréchet differentiability of the TDKS model and of the cost functionals is addressed considering \(H^1\) cost of the control.
This part is concluded by deriving the reduced gradient in the \(L^2\) and \(H^1\) inner product.
While the \(L^2\) optimization is widespread in the literature, the choice of the \(H^1\) gradient is motivated in this work by theoretical consideration and by resulting numerical advantages.
The last part of the thesis is devoted to the numerical approximation of the TDKS optimality systems and to their solution by gradient-based optimization techniques.
For the former purpose, Strang time-splitting pseudo-spectral schemes are discussed including a review of some recent theoretical estimates for these schemes and a numerical validation of these estimates.
For the latter purpose, nonlinear (projected) conjugate gradient methods are implemented and are used to validate the theoretical analysis of this thesis with results of numerical experiments with different cost functional settings.
ADMM-Type Methods for Optimization and Generalized Nash Equilibrium Problems in Hilbert Spaces
(2020)
This thesis is concerned with a certain class of algorithms for the solution of constrained optimization problems and generalized Nash equilibrium problems in Hilbert spaces. This class of algorithms is inspired by the alternating direction method of multipliers (ADMM) and eliminates the constraints using an augmented Lagrangian approach. The alternating direction method consists of splitting the augmented Lagrangian subproblem into smaller and more easily manageable parts.
Before the algorithms are discussed, a substantial amount of background material, including the theory of Banach and Hilbert spaces, fixed-point iterations as well as convex and monotone set-valued analysis, is presented. Thereafter, certain optimization problems and generalized Nash equilibrium problems are reformulated and analyzed using variational inequalities and set-valued mappings. The analysis of the algorithms developed in the course of this thesis is rooted in these reformulations as variational inequalities and set-valued mappings.
The first algorithms discussed and analyzed are one weakly and one strongly convergent ADMM-type algorithm for convex, linearly constrained optimization. By equipping the associated Hilbert space with the correct weighted scalar product, the analysis of these two methods is accomplished using the proximal point method and the Halpern method.
The rest of the thesis is concerned with the development and analysis of ADMM-type algorithms for generalized Nash equilibrium problems that jointly share a linear equality constraint. The first class of these algorithms is completely parallelizable and uses a forward-backward idea for the analysis, whereas the second class of algorithms can be interpreted as a direct extension of the classical ADMM-method to generalized Nash equilibrium problems.
At the end of this thesis, the numerical behavior of the discussed algorithms is demonstrated on a collection of examples.
Several aspects of the stability analysis of large-scale discrete-time systems are considered. An important feature is that the right-hand side does not have to be continuous.
In particular, constructive approaches to compute Lyapunov functions are derived and applied to several system classes.
For large-scale systems, which are considered as an interconnection of smaller subsystems, we derive a new class of small-gain results, which do not require the subsystems to be robust in some sense. Moreover, we do not only study sufficiency of the conditions, but rather state an assumption under which these conditions are also necessary.
Moreover, gain construction methods are derived for several types of aggregation, quantifying how large a prescribed set of interconnection gains can be in order that a small-gain condition holds.
In this thesis affine-scaling methods for two different types of mathematical problems are considered. The first type of problems are nonlinear optimization problems subject to bound constraints. A class of new affine-scaling Newton-type methods is introduced. The methods are shown to be locally quadratically convergent without assuming strict complementarity of the solution. The new methods differ from previous ones mainly in the choice of the scaling matrix. The second type of problems are semismooth systems of equations with bound constraints. A new affine-scaling trust-region method for these problems is developed. The method is shown to have strong global and local convergence properties under suitable assumptions. Numerical results are presented for a number of problems arising from different areas.
In the present thesis we investigate algebraic and arithmetic properties of graph spectra. In particular, we study the algebraic degree of a graph, that is the dimension of the splitting field of the characteristic polynomial of the associated adjacency matrix over the rationals, and examine the question whether there is a relation between the algebraic degree of a graph and its structural properties. This generalizes the yet open question ``Which graphs have integral spectra?'' stated by Harary and Schwenk in 1974.
We provide an overview of graph products since they are useful to study graph spectra and, in particular, to construct families of integral graphs. Moreover, we present a relation between the diameter, the maximum vertex degree and the algebraic degree of a graph, and construct a potential family of graphs of maximum algebraic degree.
Furthermore, we determine precisely the algebraic degree of circulant graphs and find new criteria for isospectrality of circulant graphs. Moreover, we solve the inverse Galois problem for circulant graphs showing that every finite abelian extension of the rationals is the splitting field of some circulant graph. Those results generalize a theorem of So who characterized all integral circulant graphs. For our proofs we exploit the theory of Schur rings which was already used in order to solve the isomorphism problem for circulant graphs.
Besides that, we study spectra of zero-divisor graphs over finite commutative rings.
Given a ring \(R\), the zero-divisor graph over \(R\) is defined as the graph with vertex set being the set of non-zero zero-divisors of \(R\) where two vertices \(x,y\) are adjacent if and only if \(xy=0\). We investigate relations between the eigenvalues of a zero-divisor graph, its structural properties and the algebraic properties of the respective ring.
For a graph \(\Gamma\) , let K be the smallest field containing all eigenvalues of the adjacency matrix of \(\Gamma\) . The algebraic degree \(\deg (\Gamma )\) is the extension degree \([K:\mathbb {Q}]\). In this paper, we completely determine the algebraic degrees of Cayley graphs over abelian groups and dihedral groups.
A torsion-free abelian group of finite rank is called almost completely decomposable if it has a completely decomposable subgroup of finite index. A \(p\)-local, \(p\)-reduced almost completely decomposable group of type (1,2) is briefly called a (1,2)-group. Almost completely decomposable groups can be represented by matrices over the ring \(\mathbb{Z}/h\mathbb{Z}\), where \(h\) is the exponent of the regulator quotient. This particular choice of representation allows for a better investigation of the decomposability of the group. Arnold and Dugas showed in several of their works that (1,2)-groups with regulator quotient of exponent at least \(p^7\) admit infinitely many isomorphism types of indecomposable groups. It is not known whether the exponent 7 is minimal. In this dissertation, this problem is addressed.
This paper presents an alternative approach for obtaining a converse Lyapunov theorem for discrete-time systems. The proposed approach is constructive, as it provides an explicit Lyapunov function. The developed converse theorem establishes existence of global Lyapunov functions for globally exponentially stable (GES) systems and semi-global practical Lyapunov functions for globally asymptotically stable systems. Furthermore, for specific classes of systems, the developed converse theorem can be used to establish non-conservatism of a particular type of Lyapunov functions. Most notably, a proof that conewise linear Lyapunov functions are non-conservative for GES conewise linear systems is given and, as a by-product, tractable construction of polyhedral Lyapunov functions for linear systems is attained.
In distance geometry problems and many other applications, we are faced with the optimization of high-dimensional quadratic functions subject to linear equality constraints. A new approach is presented that projects the constraints, preserving sparsity properties of the original quadratic form such that well-known preconditioning techniques for the conjugate gradient method remain applicable. Very large-scale cell placement problems in chip design have been solved successfully with diagonal and incomplete Cholesky preconditioning. Numerical results produced by a FORTRAN 77 program illustrate the good behaviour of the algorithm.
A reformulation of cardinality-constrained optimization problems into continuous nonlinear optimization problems with an orthogonality-type constraint has gained some popularity during the last few years. Due to the special structure of the constraints, the reformulation violates many standard assumptions and therefore is often solved using specialized algorithms. In contrast to this, we investigate the viability of using a standard safeguarded multiplier penalty method without any problem-tailored modifications to solve the reformulated problem. We prove global convergence towards an (essentially strongly) stationary point under a suitable problem-tailored quasinormality constraint qualification. Numerical experiments illustrating the performance of the method in comparison to regularization-based approaches are provided.
Circadian endogenous clocks of eukaryotic organisms are an established and rapidly developing research field. To investigate and simulate the effect of external stimuli on such clocks and their components, we developed a software framework available for download and simulation. The application is useful for understanding the different effects involved in a mathematically simple and effective model. This concerns the effects of Zeitgebers, feedback loops and further modifying components. We start from a known mathematical oscillator model, which is based on experimental molecular findings. This is extended with an effective framework that includes the impact of external stimuli on the circadian oscillations, including high-dose pharmacological treatment. In particular, the external stimuli framework defines a systematic procedure of input-output interfaces to couple different oscillators. The framework is validated by providing phase response curves and ranges of entrainment. Furthermore, Aschoff's rule is computationally investigated. It is shown how the external stimuli framework can be used to study biological effects such as points of singularity or oscillators integrating different signals at once. The mathematical framework and formalism are generic and allow one to study, in general, the effect of external stimuli on oscillators and other biological processes. For easy replication of each numerical experiment presented in this work and easy implementation of the framework, the corresponding Mathematica files are made fully available. They can be downloaded at the following link: https://www.biozentrum.uni-wuerzburg.de/bioinfo/computing/circadian/.
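To make the coupling idea concrete, here is a minimal sketch using a generic Poincaré-type limit-cycle oscillator with an additive Zeitgeber input. The model, its parameters, and the forcing strength are illustrative assumptions made here; they are not the published model, which is available via the Mathematica files linked in the abstract above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def poincare_oscillator(t, state, tau, amp, zeitgeber):
    # Generic limit-cycle (Poincare) oscillator with intrinsic period tau;
    # the Zeitgeber enters additively on x, mimicking an input-output interface.
    x, y = state
    r = np.hypot(x, y)
    omega = 2 * np.pi / tau
    dx = (amp - r) * x - omega * y + zeitgeber(t)
    dy = (amp - r) * y + omega * x
    return [dx, dy]

# Light-dark forcing with a 24 h period (illustrative strength 0.1).
light = lambda t: 0.1 * np.cos(2 * np.pi * t / 24.0)
sol = solve_ivp(poincare_oscillator, (0.0, 240.0), [1.0, 0.0],
                args=(24.0, 1.0, light), max_step=0.1)
```

The weak periodic forcing perturbs the limit cycle only slightly, so the trajectory stays close to the unit circle; phase response curves would be obtained by applying short pulses instead of the continuous forcing.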
Risk measures are commonly used to prepare for a prospective occurrence of an adverse event. If we are concerned with discrete risk phenomena such as counts of natural disasters, counts of infections by a serious disease, or counts of certain economic events, then the required risk forecasts are to be computed for an underlying count process. In practice, however, the discrete nature of count data is sometimes ignored and risk forecasts are calculated based on Gaussian time series models. But even if methods from count time series analysis are used in an adequate manner, the performance of risk forecasting is affected by estimation uncertainty as well as certain discreteness phenomena. To get a thorough overview of the aforementioned issues in risk forecasting of count processes, a comprehensive simulation study was done considering a broad variety of risk measures and count time series models. It becomes clear that Gaussian approximate risk forecasts substantially distort risk assessment and, thus, should be avoided. In order to account for the apparent estimation uncertainty in risk forecasting, we use bootstrap approaches for count time series. The relevance and the application of the proposed approaches are illustrated by real data examples about counts of storm surges and counts of financial transactions.
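The distortion caused by Gaussian approximate risk forecasts can be illustrated with a toy example; the Poisson model, the 99% level and the parameter value are illustrative choices made here, not the study's data.

```python
import numpy as np
from scipy.stats import poisson, norm

def var_forecast_count(mu, level=0.99):
    """Value-at-risk forecast for a Poisson(mu) count: the smallest integer k
    with P(X <= k) >= level, compared with a continuous Gaussian approximation."""
    var_discrete = int(poisson.ppf(level, mu))
    var_gaussian = mu + norm.ppf(level) * np.sqrt(mu)  # mean + z * std
    return var_discrete, var_gaussian

vd, vg = var_forecast_count(5.0)
# The Gaussian approximation (about 10.2) understates the discrete 99% VaR of 11.
```

The gap widens at low counts, which is one reason the abstract advises against Gaussian approximate risk forecasts for count processes.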
Analysis of discretization schemes for Fokker-Planck equations and related optimality systems
(2015)
The Fokker-Planck (FP) equation is a fundamental model in thermodynamic kinetic theories and
statistical mechanics.
In general, the FP equation appears in a number of different fields in natural sciences, for instance in solid-state physics, quantum optics, chemical physics, theoretical biology, and circuit theory. These equations also provide a powerful means to define
robust control strategies for random models. The FP equations are partial differential equations (PDE) describing the time evolution of the probability density function (PDF) of stochastic processes.
These equations are of different types depending on the underlying stochastic process.
In particular, they are parabolic PDEs for the PDF of Ito processes, and hyperbolic PDEs for piecewise deterministic processes (PDP).
A fundamental axiom of probability calculus requires that the integral of the PDF over all the allowable state space must be equal to one, for all time. Therefore, for the purpose of accurate numerical simulation, a discretized FP equation must guarantee conservativeness of the total probability. Furthermore, since the
solution of the FP equation represents a probability density, any numerical scheme that approximates the FP equation is required to guarantee the positivity of the solution. In addition, an approximation scheme must be accurate and stable.
For these purposes, for parabolic FP equations on bounded domains, we investigate the Chang-Cooper (CC) scheme for space discretization and first- and
second-order backward time differencing. We prove that the resulting
space-time discretization schemes are accurate, conditionally stable, conservative, and preserve positivity.
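The properties listed above can be checked numerically. The following is a minimal sketch, not the thesis code, of the Chang-Cooper space discretization combined with implicit (backward) Euler time stepping for the Ornstein-Uhlenbeck FP equation \(\partial_t f = \partial_x(x f + D\,\partial_x f)\) with zero-flux boundaries; the grid size, time step and coefficients are illustrative choices.

```python
import numpy as np

def chang_cooper_matrix(x, h, D=1.0):
    """Chang-Cooper space discretization of df/dt = d/dx (B(x) f + D df/dx)
    with B(x) = x (Ornstein-Uhlenbeck drift) and zero-flux boundaries."""
    n = len(x)
    B = 0.5 * (x[:-1] + x[1:])                 # drift at the cell interfaces
    w = h * B / D
    # Chang-Cooper weight delta = 1/w - 1/(e^w - 1); delta -> 1/2 as w -> 0
    delta = np.where(np.abs(w) < 1e-12, 0.5, 1.0 / w - 1.0 / np.expm1(w))
    a = B * (1.0 - delta) + D / h              # flux coefficient of f_{i+1}
    b = B * delta - D / h                      # flux coefficient of f_i
    M = np.zeros((n, n))
    for i in range(n - 1):                     # F_{i+1/2} enters rows i and i+1
        M[i, i] += b[i] / h
        M[i, i + 1] += a[i] / h
        M[i + 1, i] -= b[i] / h
        M[i + 1, i + 1] -= a[i] / h
    return M

h = 0.1
x = np.arange(-5.0, 5.0 + h / 2, h)
f = np.exp(-(x - 2.0) ** 2)
f /= f.sum() * h                               # normalize the initial density
A = np.eye(len(x)) - 0.04 * chang_cooper_matrix(x, h)   # implicit Euler, dt = 0.04
for _ in range(500):
    f = np.linalg.solve(A, f)                  # mass and positivity are preserved
```

Because the columns of the discrete operator sum to zero and the off-diagonal entries have the right signs, each implicit step conserves total probability and keeps the density non-negative, and the solution relaxes to the (here Gaussian) equilibrium.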
Further, we discuss a finite difference discretization for the FP system corresponding to a PDP process in a bounded domain.
Next, we discuss FP equations in unbounded domains.
In this case, finite-difference or finite-element methods cannot be applied. By employing a suitable set of basis functions, spectral methods make it possible to treat unbounded domains. Since FP solutions decay exponentially at infinity, we consider Hermite functions as basis functions, which are Hermite polynomials multiplied by a Gaussian.
To this end, the Hermite spectral discretization is applied
to two different FP equations: the parabolic PDE corresponding to Ito processes, and the system of hyperbolic PDEs corresponding to a PDP process. The resulting discretized schemes are analyzed. Stability and spectral accuracy of the Hermite spectral discretization of the FP problems are proved. Furthermore, we investigate the conservativeness of the solutions of FP equations discretized with the Hermite spectral scheme.
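A minimal numerical sketch (illustrative, not the thesis code) of the Hermite-function basis \(\psi_n(x) = H_n(x)\,e^{-x^2/2}/\sqrt{2^n n! \sqrt{\pi}}\) and its orthonormality, checked with Gauss-Hermite quadrature:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval

def hermite_function_gram(nmax, quad_points=60):
    """Gram matrix of the first nmax Hermite functions
    psi_n(x) = H_n(x) exp(-x^2/2) / sqrt(2^n n! sqrt(pi)),
    computed with Gauss-Hermite quadrature; it should be the identity."""
    nodes, weights = hermgauss(quad_points)   # exact for integrals against exp(-x^2)
    G = np.empty((nmax, nmax))
    for m in range(nmax):
        for n in range(nmax):
            cm = np.zeros(m + 1); cm[m] = 1.0  # coefficients selecting H_m
            cn = np.zeros(n + 1); cn[n] = 1.0
            norm = math.sqrt(2.0 ** (m + n) * math.factorial(m)
                             * math.factorial(n) * math.pi)
            G[m, n] = np.sum(weights * hermval(nodes, cm) * hermval(nodes, cn)) / norm
    return G

G = hermite_function_gram(8)
```

The Gaussian factor absorbed into the basis is exactly what the Gauss-Hermite weight \(e^{-x^2}\) provides, so the quadrature is exact here and the Gram matrix comes out as the identity to machine precision.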
In the last part of this thesis, we discuss optimal control problems governed by FP equations and the characterization of their solutions by optimality systems. We then investigate the Hermite spectral discretization of FP optimality systems in unbounded domains.
Within the framework of Hermite discretization, we obtain sparse-band systems of ordinary differential equations. We analyze the accuracy of the discretization schemes by showing spectral convergence in approximating the state, the adjoint, and the control variables that appear in the FP optimality systems.
To validate our theoretical estimates, we present results of numerical experiments.
This thesis investigates the analyticity properties of infeasible interior-point paths for monotone complementarity problems and discusses possible algorithmic applications. Chapter 2 collects matrix-analytic concepts and results that are needed for the proofs in the following chapters. Chapter 3 gives a precise definition of the notions "monotone linear complementarity problem" (LCP) and "semidefinite monotone linear complementarity problem" (SDLCP) and presents the basic idea behind interior-point methods for solving such problems. Chapter 4 contains the main analytic results for monotone complementarity problems. Section 4.1 reviews some well-known results on the analyticity properties of infeasible interior-point paths for LCPs. These are carried over to the semidefinite case in Section 4.2. Under the assumption that the underlying SDLCP possesses a strictly complementary solution, it is shown that the interior-point paths are analytic even at the boundary point. Chapter 5 uses the results of Chapter 4 to establish the locally high order of convergence of a long-step method for solving SDLCPs. Chapter 6 introduces a new method for solving LCPs and SDLCPs by interior-point techniques, where the path functions are chosen such that all iterates lie on infeasible central paths. Global and local convergence of the method are proved.
The present thesis considers the development and analysis of arbitrary Lagrangian-Eulerian
discontinuous Galerkin (ALE-DG) methods with time-dependent approximation spaces for
conservation laws and the Hamilton-Jacobi equations.
Fundamentals about conservation laws, Hamilton-Jacobi equations and discontinuous Galerkin
methods are presented. In particular, issues in the development of discontinuous Galerkin (DG)
methods for the Hamilton-Jacobi equations are discussed.
The development of the ALE-DG methods is based on the assumption that the distribution of
the grid points is explicitly given for an upcoming time level. This assumption allows the construction of a time-dependent local affine linear mapping to a reference cell and a time-dependent
finite element test function space. In addition, a version of Reynolds' transport theorem can be
proven.
For the fully-discrete ALE-DG method for nonlinear scalar conservation laws the geometric
conservation law and a local maximum principle are proven. Furthermore, conditions for slope
limiters are stated. These conditions ensure the total variation stability of the method. In addition, entropy stability is discussed. For the corresponding semi-discrete ALE-DG method,
error estimates are proven. If a piecewise $\mathcal{P}^{k}$ polynomial approximation space is used on the reference cell, the sub-optimal $\left(k+\frac{1}{2}\right)$ convergence for monotone fluxes and the optimal $(k+1)$ convergence for an upwind flux are proven in the $\mathrm{L}^{2}$-norm. The capability of the method is shown by numerical examples for nonlinear conservation laws.
Likewise, for the semi-discrete ALE-DG method for nonlinear Hamilton-Jacobi equations, error
estimates are proven. In the one-dimensional case the optimal $\left(k+1\right)$ convergence and in the two-dimensional case the sub-optimal $\left(k+\frac{1}{2}\right)$ convergence are proven in the $\mathrm{L}^{2}$-norm, if a piecewise $\mathcal{P}^{k}$ polynomial approximation space is used on the reference cell. For the fully-discrete method, the geometric conservation law is proven, and for the piecewise constant forward Euler step the convergence of the method to the unique physically relevant solution is discussed.
The advent of computers in mathematics classrooms has brought a variety of new forms of representation, among them multiple, dynamically linked representations of mathematical problems. This thesis answers the question of whether and how these types of representation are used by students in argumentation. In the empirical study, a quantitative part investigated how strongly the form of representation given in the task influences the students' written argumentations. In addition, a qualitative analysis identified specific patterns of use and described them by means of Toulmin's model of argumentation. These findings were used to formulate consequences for the use of multiple and/or dynamic representations in secondary mathematics education.
This thesis is concerned with the solution of control and state constrained optimal control problems, which are governed by elliptic partial differential equations. Problems of this type are challenging since they suffer from the low regularity of the multiplier corresponding to the state constraint. Applying an augmented Lagrangian method we overcome these difficulties by working with multiplier approximations in $L^2(\Omega)$. For each problem class, we introduce the solution algorithm, carry out a thorough convergence analysis and illustrate our theoretical findings with numerical examples.
The thesis is divided into two parts. The first part focuses on classical PDE constrained optimal control problems. We start by studying linear-quadratic objective functionals, which include the standard tracking type term and an additional regularization term as well as the case, where the regularization term is replaced by an $L^1(\Omega)$-norm term, which makes the problem ill-posed. We deepen our study of the augmented Lagrangian algorithm by examining the more complicated class of optimal control problems that are governed by a semilinear partial differential equation.
The second part investigates the broader class of multi-player control problems. While the examination of jointly convex generalized Nash equilibrium problems (GNEP) is a simple extension of the linear elliptic optimal control case, the complexity is increased significantly for pure GNEPs. The existence of solutions of jointly convex GNEPs is well-studied. However, solution algorithms may suffer from non-uniqueness of solutions. Therefore, the last part of this thesis is devoted to the analysis of the uniqueness of normalized equilibria.
This thesis is devoted, first, to the theoretical and numerical investigation of an augmented Lagrangian method for the solution of optimization problems with geometric constraints and, subsequently, of constrained structured optimization problems featuring a composite objective function and set-membership constraints. It is then concerned with the convergence and rate-of-convergence analysis of proximal gradient methods for composite optimization problems in the presence of the Kurdyka-Łojasiewicz property, without a global Lipschitz assumption.
This thesis investigates mathematical paper folding, in particular one-fold origami, in a university context. It consists of three parts.
The first part is essentially devoted to a subject-matter analysis of one-fold origami. In the first chapter we place one-fold origami in its historical context, consider axiomatic foundations, and discuss how axiomatizing one-fold origami could contribute to an understanding of the concept of an axiom. In the second chapter we describe the design of the accompanying exploratory study and state our research goals and questions. In the third chapter, one-fold origami is mathematized, defined, and examined in detail.
The second part deals with the courses "Learning to axiomatize with paper folding" that we designed and taught. In the fourth chapter we describe the teaching methodology and the design of the courses; the fifth chapter contains an excerpt of the courses.
The third part describes the associated tests. In the sixth chapter we explain the design of the tests and the testing methodology. In the seventh chapter these tests are evaluated.
A basic mental model (BMM—in German ‘Grundvorstellung’) of a mathematical concept is a content-related interpretation that gives meaning to this concept. This paper defines normative and individual BMMs and concretizes them using the integral as an example. Four BMMs are developed about the concept of definite integral, sometimes used in specific teaching approaches: the BMMs of area, reconstruction, average, and accumulation. Based on theoretical work, in this paper we ask how these BMMs could be identified empirically. A test instrument was developed, piloted, validated and applied with 428 students in first-year mathematics courses. The test results show that the four normative BMMs of the integral can be detected and separated empirically. Moreover, the results allow a comparison of the existing individual BMMs and the requested normative BMMs. Consequences for future developments are discussed.
Pure subgroups of completely decomposable torsion-free abelian groups are called Butler groups. Such a group can be represented as a finite sum of rational rank-1 groups. This representation is not unique. We therefore develop methods that lead to a representation with pure summands. Moreover, both the critical typeset and the type subgroups can be read off directly from this representation. This simplifies the treatment of Butler groups by computer and, in addition, allows a more elegant presentation.
Bivariate copula monitoring
(2022)
The assumption of multivariate normality underlying the Hotelling T\(^{2}\) chart is often violated for process data. The multivariate dependency structure can be separated from the marginals with the help of copula theory, which permits modeling of association structures beyond the covariance matrix. Copula-based estimation and testing routines have reached maturity regarding a variety of practical applications. We have constructed a rich design matrix for the comparison of the Hotelling T\(^{2}\) chart with the copula test by Verdier and the copula test by Vuong, which allows for weighting the observations adaptively. Based on the design matrix, we have conducted a large and computationally intensive simulation study. The results show that the copula test by Verdier performs better than Hotelling T\(^{2}\) in a large variety of out-of-control cases, whereas the weighted Vuong scheme often fails to provide an improvement.
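For reference, the Hotelling T\(^{2}\) statistic used as the baseline chart can be sketched as follows; this is a generic Phase-II form with known in-control parameters, and the numbers are illustrative rather than taken from the study.

```python
import numpy as np

def hotelling_t2(x, mean, cov):
    """Phase-II Hotelling T^2 statistic for a single observation x,
    given the in-control mean vector and covariance matrix."""
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(d @ np.linalg.solve(np.asarray(cov, dtype=float), d))

# With identity covariance the statistic is the squared Euclidean distance:
t2 = hotelling_t2([3.0, 4.0], [0.0, 0.0], np.eye(2))  # 3^2 + 4^2 = 25
```

An out-of-control signal is raised when T\(^{2}\) exceeds a control limit, typically a chi-square quantile; copula-based schemes replace this single quadratic form with tests sensitive to dependence beyond the covariance matrix.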
A well-known heuristic principle of A. Bloch describes the correspondence between criteria for the constancy of entire functions and normality criteria. In this dissertation we investigate the validity of Bloch's principle for gap series problems as well as connections between normality questions and the semiduality of one or two functions. The first two chapters provide the tools from Nevanlinna's value distribution theory and from normality theory needed in the sequel. In the third chapter we prove a new normality criterion for families of holomorphic functions for which a differential polynomial of a certain form is zero-free. This generalizes earlier results of Hayman, Drasin, Langley and Chen & Hua. Chapter 4 is devoted to the proof of one of our most important tools in what follows: a deep convergence theorem of H. Cartan on families of p-tuples of zero-free holomorphic functions subject to a linear relation. In Chapter 5 the concepts of duality and semiduality are introduced and the connection to normality questions is discussed. The new results on gap series are found in the sixth chapter. The focus lies, on the one hand, on so-called AP gap series and, on the other hand, on general construction methods by which new semidual gap structures can be obtained from known ones. Many of our proofs rely essentially on Cartan's theorem from Chapter 4. In the seventh chapter we extend our semiduality investigations to sets of two functions. We use normality criteria (above all the one proved in Chapter 3 and Cartan's theorem) to identify certain sets as non-semidual. Finally, we construct an example of a semidual set consisting of two functions.
It is well known that a multivariate extreme value distribution can be represented via a D-norm. However, not every norm yields a D-norm. In this thesis a necessary and sufficient condition is given for a norm to define an extreme value distribution. Applications of this theorem include a new proof for the bivariate case, the Pickands dependence function and the nested logistic model. Furthermore, the GPD flow is introduced and first insights are given; in particular, if it converges, then it converges to the copula of complete dependence.
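For orientation, the D-norm representation can be written down explicitly in the bivariate logistic case; this is the standard formula with standard negative exponential margins, added here for illustration and not specific to the thesis.

```latex
G(x,y) = \exp\bigl(-\lVert (x,y) \rVert_D\bigr), \qquad x, y \le 0,
\qquad\text{logistic case: }
\lVert (x,y) \rVert_\lambda
  = \bigl(\lvert x\rvert^{\lambda} + \lvert y\rvert^{\lambda}\bigr)^{1/\lambda},
  \quad \lambda \ge 1,
```

where \(\lambda = 1\) (the sum norm) yields independence and \(\lambda \to \infty\) (the sup norm) yields complete dependence.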
Human herpesvirus-6 (HHV-6) exists in latent form either as a nuclear episome or integrated into human chromosomes in more than 90% of healthy individuals without causing clinical symptoms. Immunosuppression and stress conditions can reactivate HHV-6 replication, associated with clinical complications and even death. We have previously shown that co-infection of Chlamydia trachomatis and HHV-6 promotes chlamydial persistence and increases viral uptake in an in vitro cell culture model. Here we investigated C. trachomatis-induced HHV-6 activation in cell lines and fresh blood samples from patients having chromosomally integrated HHV-6 (ciHHV-6). We observed activation of latent HHV-6 DNA replication in ciHHV-6 cell lines and fresh blood cells without formation of viral particles. Interestingly, we detected HHV-6 DNA in blood as well as cervical swabs from C. trachomatis-infected women. Low virus titers correlated with high C. trachomatis load and vice versa, demonstrating a potentially significant interaction of these pathogens in blood cells and in the cervix of infected patients. Our data suggest a thus far underestimated interference of HHV-6 and C. trachomatis with a likely impact on the disease outcome as a consequence of co-infection.
This doctoral thesis provides a classification of equivariant star products (star products together with quantum momentum maps) in terms of equivariant de Rham cohomology. This classification result is then used to construct an analogue of the Kirwan map, from which one can directly obtain the characteristic class of certain reduced star products on Marsden-Weinstein reduced symplectic manifolds from the equivariant characteristic class of their corresponding unreduced equivariant star product. From the surjectivity of this map one can conclude that every star product on a Marsden-Weinstein reduced symplectic manifold can (up to equivalence) be obtained as a reduced equivariant star product.
Let (ϕ\(_t\))\(_{t≥0}\) be a semigroup of holomorphic functions in the unit disk \(\mathbb {D}\) and K a compact subset of \(\mathbb {D}\). We investigate the conditions under which the backward orbit of K under the semigroup exists. Subsequently, geometric characteristics as well as potential-theoretic quantities of the backward orbit of K are examined. More specifically, results are obtained concerning the asymptotic behavior of its hyperbolic area and diameter, the harmonic measure and the capacity of the condenser that K forms with the unit disk.
We compute genus-0 Belyi maps with prescribed monodromy and strictly verify the computed results. Among the computed examples are almost simple primitive groups that satisfy the rational rigidity criterion yielding polynomials with prescribed Galois groups over Q(t). We also give an explicit version of a theorem of Magaard, which lists all sporadic groups occurring as composition factors of monodromy groups of rational functions.
We present a technique for computing multi-branch-point covers with prescribed ramification and demonstrate the applicability of our method in relatively large degrees by computing several families of polynomials with symplectic and linear Galois groups.
As a first application, we present polynomials over \(\mathbb{Q}(\alpha,t)\) for the primitive rank-3 groups \(PSp_4(3)\) and \(PSp_4(3).C_2\) of degree 27 and for the 2-transitive group \(PSp_6(2)\) in its actions on 28 and 36 points, respectively. Moreover, the degree-28 polynomial for \(PSp_6(2)\) admits infinitely many totally real specializations.
Next, we present the first (to the best of our knowledge) explicit polynomials for the 2-transitive linear groups \(PSL_4(3)\) and \(PGL_4(3)\) of degree 40, and the imprimitive group \(Aut(PGL_4(3))\) of degree 80.
Additionally, we negatively answer a question by König whether there exists a degree-63 rational function with rational coefficients and monodromy group \(PSL_6(2)\) ramified over at least four points. This is achieved due to the explicit computation of the corresponding hyperelliptic genus-3 Hurwitz curve parameterizing this family, followed by a search for rational points on it. As a byproduct of our calculations we obtain the first explicit \(Aut(PSL_6(2))\)-realizations over \(\mathbb{Q}(t)\).
At last, we present a technique by Elkies for bounding the transitivity degree of Galois groups. This provides an alternative way to verify the Galois groups from the previous chapters and also yields a proof that the monodromy group of a degree-276 cover computed by Monien is isomorphic to the sporadic 2-transitive Conway group \(Co_3\).
In financial mathematics, it is a typical approach to approximate financial markets operating in discrete time by continuous-time models such as the Black-Scholes model. Fitting this model gives rise to difficulties due to the discrete nature of market data. We thus model the pricing process of financial derivatives by the Black-Scholes equation, where the volatility is a function of a finite number of random variables. This reflects an influence of uncertain factors when determining volatility. The aim is to quantify the effect of this uncertainty when computing the price of derivatives. Our underlying method is the generalized polynomial chaos (gPC) method, used to numerically compute the uncertainty of the solution by the stochastic Galerkin approach and a finite difference method. We present an efficient numerical variation of this method, which is based on a machine learning technique, the so-called Bi-Fidelity approach. This is illustrated with numerical examples.
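The effect of volatility uncertainty on the price can be sketched in a simplified way via the closed-form Black-Scholes price and non-intrusive Gauss-Legendre quadrature; this is a stochastic collocation sketch rather than the stochastic Galerkin or Bi-Fidelity methods of the thesis, and all parameter values (uniform volatility on [0.15, 0.25], at-the-money call) are illustrative.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, T, sigma):
    """Closed-form Black-Scholes call price."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Quadrature over an uncertain volatility sigma ~ Uniform(0.15, 0.25):
nodes, weights = np.polynomial.legendre.leggauss(8)
sigma = 0.20 + 0.05 * nodes                    # map [-1, 1] to [0.15, 0.25]
prices = bs_call(100.0, 100.0, 0.0, 1.0, sigma)
mean_price = np.sum(weights * prices) / 2.0    # weights sum to 2 on [-1, 1]
var_price = np.sum(weights * prices ** 2) / 2.0 - mean_price ** 2
std_price = np.sqrt(max(var_price, 0.0))
```

The mean and standard deviation of the price quantify the propagated volatility uncertainty; a gPC expansion would additionally deliver the price as an explicit polynomial in the random input.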
The purpose of confidence and prediction intervals is to provide an interval estimation for an unknown distribution parameter or the future value of a phenomenon. In many applications, prior knowledge about the distribution parameter is available, but rarely made use of, unless in a Bayesian framework. This thesis provides exact frequentist confidence intervals of minimal volume exploiting prior information. The scheme is applied to distribution parameters of the binomial and the Poisson distribution. The Bayesian approach to obtain intervals on a distribution parameter in form of credibility intervals is considered, with particular emphasis on the binomial distribution. An application of interval estimation is found in auditing, where two-sided intervals of Stringer type are meant to contain the mean of a zero-inflated population. In the context of time series analysis, covariates are supposed to improve the prediction of future values. Exponential smoothing with covariates, an extension of the popular forecasting method of exponential smoothing, is considered in this thesis. A double-seasonality version of it is applied to forecast hourly electricity load under the use of meteorological covariates. Different kinds of prediction intervals for exponential smoothing with covariates are formulated.
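One simple way to extend exponential smoothing with a covariate can be sketched as follows; this is an illustrative variant with a fixed, known covariate coefficient, and the formulations in the thesis may differ.

```python
import numpy as np

def es_with_covariate(y, x, alpha=0.3, beta=0.5):
    """Simple exponential smoothing with one covariate (illustrative variant):
    the level tracks y_t - beta * x_t; the forecast adds back the covariate term.
    Returns one-step-ahead forecasts yhat[t] made at time t-1."""
    level = y[0] - beta * x[0]
    yhat = np.empty(len(y))
    yhat[0] = y[0]                                  # no forecast for the first point
    for t in range(1, len(y)):
        yhat[t] = level + beta * x[t]               # covariate assumed known ahead
        level = alpha * (y[t] - beta * x[t]) + (1 - alpha) * level
    return yhat

# On a series that is exactly level + covariate effect, forecasts are exact:
x = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])
y = 10.0 + 0.5 * x
yhat = es_with_covariate(y, x)
```

In practice the covariate coefficient would be estimated, and prediction intervals would be attached to the forecasts, for example by assuming a residual distribution or by resampling.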
The goal of this thesis is to investigate conformal mappings onto circular arc polygon domains, i.e. domains that are bounded by polygons consisting of circular arcs instead of line segments.
Conformal mappings onto circular arc polygon domains contain parameters in addition to the classical parameters of the Schwarz-Christoffel transformation. To contribute to the parameter problem of conformal mappings from the unit disk onto circular arc polygon domains, we investigate two special cases of these mappings. In the first case we can describe the additional parameters if the bounding circular arc polygon is a polygon with straight sides. In the second case we provide an approximation for the additional parameters if the circular arc polygon domain satisfies some symmetry conditions. These results allow us to draw conclusions on the connection between these additional parameters and the classical parameters of the mapping.
For conformal mappings onto multiply connected circular arc polygon domains, we provide an alternative construction of the mapping formula without using the Schottky-Klein prime function. In the process of constructing our main result, mappings for domains of connectivity three or greater, we also provide a formula for conformal mappings onto doubly connected circular arc polygon domains. The comparison of these mapping formulas with already known mappings allows us to provide values for some of the parameters of the mappings onto doubly connected circular arc polygon domains if the image domain is a polygonal domain.
The different components of the mapping formula are constructed by using a slightly modified variant of the Poincaré theta series. This construction includes the design of a function to remove unwanted poles and of different versions of functions that are analytic on the domain of definition of the mapping functions and satisfy some special functional equations.
We also provide the necessary concepts to numerically evaluate the conformal mappings onto multiply connected circular arc polygon domains. As the evaluation of such a map requires the solution of a differential equation, we provide a possible configuration of curves inside the preimage domain to solve the equation along them in addition to a description of the procedure for computing either the formula for the doubly connected case or the case of connectivity three or greater. We also describe the procedures for solving the parameter problem for multiply connected circular arc polygon domains.
The point of departure for the present work has been the following free boundary value problem for analytic functions $f$ which are defined on a domain $G \subset \mathbb{C}$ and map into the unit disk $\mathbb{D}= \{z \in \mathbb{C} : |z|<1 \}$. Problem 1: Let $z_1, \ldots, z_n$ be finitely many points in a bounded simply connected domain $G \subset \mathbb{C}$. Show that there exists a holomorphic function $f:G \to \mathbb{D}$ with critical points $z_j$ (counted with multiplicities) and no others such that $\lim_{z \to \xi} \frac{|f'(z)|}{1-|f(z)|^2}=1$ for all $\xi \in \partial G$. If $G=\mathbb{D}$, Problem 1 was solved by Kühnau [5] in the case of one critical point, and for more than one critical point by Fournier and Ruscheweyh [3]. The method employed by Kühnau, Fournier and Ruscheweyh easily extends to more general domains $G$, say bounded by a Dini-smooth Jordan curve, but does not work for arbitrary bounded simply connected domains. In this paper we present a new approach to Problem 1, which shows that this boundary value problem is not an isolated question in complex analysis, but is intimately connected to a number of basic open problems in conformal geometry and non-linear PDE. One of our results is a solution to Problem 1 for arbitrary simply connected domains. However, we shall see that our approach has also some other ramifications, for instance to a well-known problem due to Rellich and Wittich in PDE. Roughly speaking, this paper is broken down into two parts. In a first step we construct a conformal metric in a bounded regular domain $G\subset \mathbb{C}$ with prescribed non-positive Gaussian curvature $k(z)$ and prescribed singularities by solving the first boundary value problem for the Gaussian curvature equation $\Delta u =-k(z) e^{2u}$ in $G$ with prescribed singularities and continuous boundary data. 
This is related to the Berger-Nirenberg problem in Riemannian geometry: which functions on a surface R can arise as the Gaussian curvature of a Riemannian metric on R? The special case where $k(z)=-4$ and the domain $G$ is bounded by finitely many analytic Jordan curves was treated by Heins [4]. In a second step we show that every conformal pseudo-metric on a simply connected domain $G\subseteq \mathbb{C}$ with constant negative Gaussian curvature and isolated zeros of integer order is the pullback of the hyperbolic metric on $\mathbb{D}$ under an analytic map $f:G \to \mathbb{D}$. This extends a theorem of Liouville, which deals with the case that the pseudo-metric has no zeros at all. These two steps together allow a complete solution of Problem 1. Contents: Chapter I contains the statement of the main results and connects them with some old and new problems in complex analysis, conformal geometry and PDE: the Uniformization Theorem for Riemann surfaces, the problem of Schwarz-Picard, the Berger-Nirenberg problem, Wittich's problem, etc. Chapters II and III have preparatory character. In Chapter II we recall some basic results about ordinary differential equations in the complex plane. In our presentation we follow Laine [6], but we have reorganized the material and present a self-contained account of the basic features of Riccati, Schwarzian and second order differential equations. In Chapter III we discuss the first boundary value problem for the Poisson equation. We shall need to consider this problem in the most general situation, which does not seem to be covered in a satisfactory way in the existing literature, see [1,2]. In Chapter IV we turn to a discussion of conformal pseudo-metrics in planar domains. We focus on conformal metrics with prescribed singularities and prescribed non-positive Gaussian curvature. 
We shall establish the existence of such metrics, that is, we solve the corresponding Gaussian curvature equation by making use of the results of Chapter III. In Chapter V we show that every constantly curved pseudo-metric can be represented as the pullback of either the hyperbolic, the euclidean or the spherical metric under an analytic map. This is proved by using the results of Chapter II. Finally, in Chapter VI we give some applications of our results. [1,2] Courant, R., Hilbert, D., Methoden der Mathematischen Physik, Erster/Zweiter Band, Springer-Verlag, Berlin, 1931/1937. [3] Fournier, R., Ruscheweyh, St., Free boundary value problems for analytic functions in the closed unit disk, Proc. Amer. Math. Soc. (1999), 127 no. 11, 3287-3294. [4] Heins, M., On a class of conformal metrics, Nagoya Math. J. (1962), 21, 1-60. [5] Kühnau, R., Längentreue Randverzerrung bei analytischer Abbildung in hyperbolischer und sphärischer Geometrie, Mitt. Math. Sem. Giessen (1997), 229, 45-53. [6] Laine, I., Nevanlinna Theory and Complex Differential Equations, de Gruyter, Berlin - New York, 1993.
An exhaustive discussion of constraint qualifications (CQ) and stationarity concepts for mathematical programs with equilibrium constraints (MPEC) is presented. It is demonstrated that all but the weakest CQ, Guignard CQ, are too strong for a discussion of MPECs. Therefore, MPEC variants of all the standard CQs are introduced and investigated. A strongly stationary point (which is simply a KKT point) is seen to be a necessary first order optimality condition only under the strongest CQs, MPEC-LICQ, MPEC-SMFCQ and Guignard CQ. Therefore a whole set of KKT-type conditions is investigated. A simple approach is given to show that A-stationarity is a necessary first order condition under MPEC-Guignard CQ. Finally, a whole chapter is devoted to investigating M-stationarity, among the strongest stationarity concepts, second only to strong stationarity. It is shown to be a necessary first order condition under MPEC-Guignard CQ, the weakest known CQ for MPECs.
To study coisotropic reduction in the context of deformation quantization we introduce constraint manifolds and constraint algebras as the basic objects encoding the additional information needed to define a reduction. General properties of various categories of constraint objects and their compatibility with reduction are examined. A constraint Serre-Swan theorem, identifying constraint vector bundles with certain finitely generated projective constraint modules, as well as a constraint symbol calculus are proved. After developing the general deformation theory of constraint algebras, including constraint Hochschild cohomology and constraint differential graded Lie algebras, the second constraint Hochschild cohomology for the constraint algebra of functions on a constraint flat space is computed.
The limiting behaviour of a one‐dimensional discrete system is studied by means of Γ‐convergence. We consider a toy model of a chain of atoms. The interaction potentials are of Lennard‐Jones type and periodically or stochastically distributed. The energy of the system is considered in the discrete to continuum limit, i.e. as the number of atoms tends to infinity. During that limit, a homogenization process takes place. The limiting functional is discussed, especially with regard to fracture. Secondly, we consider a rescaled version of the problem, which yields a limiting energy of Griffith's type consisting of a quadratic integral term and a jump contribution. The periodic case can be found in [8], the stochastic case in [6,7].
Extreme value theory aims at modeling extreme but rare events from a probabilistic point of view. It is well-known that so-called generalized Pareto distributions, which are briefly reviewed in Chapter 1, are the only reasonable probability distributions suited for modeling observations above a high threshold, such as waves exceeding the height of a certain dike, earthquakes having at least a certain intensity, and, after applying a simple transformation, share prices falling below some low threshold. However, there are cases for which a generalized Pareto model might fail. Therefore, Chapter 2 derives certain neighborhoods of a generalized Pareto distribution and provides several statistical tests for these neighborhoods, where the cases of observing finite dimensional data and of observing continuous functions on [0,1] are considered. By using a notation based on so-called D-norms it is shown that these tests consistently link both frameworks, the finite dimensional and the functional one. Since the derivation of the asymptotic distributions of the test statistics requires certain technical restrictions, Chapter 3 analyzes these assumptions in more detail. It provides in particular some examples of distributions that satisfy the null hypothesis and of those that do not. Since continuous copula processes are crucial tools for the functional versions of the proposed tests, it is also discussed whether those copula processes actually exist for a given set of data. Moreover, some practical advice is given on how to choose the free parameters incorporated in the test statistics. Finally, a simulation study in Chapter 4 compares the three different test statistics with another test found in the literature that has a similar null hypothesis. This thesis ends with a short summary of the results and an outlook to further open questions.
We introduce some mathematical framework for extreme value theory in the space of continuous functions on compact intervals and provide basic definitions and tools. Continuous max-stable processes on [0,1] are characterized by their “distribution functions” G which can be represented via a norm on function space, called D-norm. The high conformity of this setup with the multivariate case leads to the introduction of a functional domain of attraction approach for stochastic processes, which is more general than the usual one based on weak convergence. We also introduce the concept of “sojourn time transformation” and compare several types of convergence on function space. Again in complete accordance with the uni- or multivariate case it is now possible to get functional generalized Pareto distributions (GPD) W via W = 1 + log(G) in the upper tail. In particular, this enables us to derive characterizations of the functional domain of attraction condition for copula processes. Moreover, we investigate the sojourn time above a high threshold of a continuous stochastic process Y. It turns out that the limit, as the threshold increases, of the expected sojourn time given that it is positive, exists if the copula process corresponding to Y is in the functional domain of attraction of a max-stable process. If the process is in a certain neighborhood of a generalized Pareto process, then we can replace the constant threshold by a general threshold function and we can compute the asymptotic sojourn time distribution.
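In the univariate case the relation W = 1 + log(G) between a generalized Pareto distribution W and a max-stable (extreme value) distribution G, valid in the upper tail, can be checked directly; a minimal numerical sketch with illustrative parameter choices:

```python
import math

def gev_cdf(x, gamma):
    """Standard generalized extreme value df: G(x) = exp(-(1 + gamma*x)^(-1/gamma))."""
    return math.exp(-(1.0 + gamma * x) ** (-1.0 / gamma))

def gpd_cdf(x, gamma):
    """Standard generalized Pareto df: W(x) = 1 - (1 + gamma*x)^(-1/gamma)."""
    return 1.0 - (1.0 + gamma * x) ** (-1.0 / gamma)

# W = 1 + log(G) holds pointwise in the upper tail (where 1 + gamma*x > 0):
for gamma in (0.25, 0.5, 1.0):
    for x in (0.5, 1.0, 2.0, 5.0):
        assert abs(gpd_cdf(x, gamma) - (1.0 + math.log(gev_cdf(x, gamma)))) < 1e-12
```

The identity is purely algebraic: 1 + log(exp(-t)) = 1 - t with t = (1 + γx)^(-1/γ); the loop merely confirms it numerically.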
This thesis covers a wide range of results on when a random vector is in the max-domain of attraction of a max-stable random vector. It states some new theoretical results in D-norm terminology, but also explains why most approaches to multivariate extremes are equivalent to this specific approach. It then covers new methods to deal with high-dimensional extremes, ranging from dimension reduction to exploratory methods, and explains why the Hüsler-Reiss model is a powerful parametric model in multivariate extremes, on par with the multivariate Gaussian distribution in classical multivariate statistics. It also gives new results for estimating and inferring the multivariate extremal dependence structure, strategies for choosing thresholds, and compares the behavior of local and global threshold approaches. The methods are demonstrated in an artificial simulation study as well as on German weather data.
Controllability Aspects of the Lindblad-Kossakowski Master Equation : A Lie-Theoretical Approach
(2009)
One main task, which is of considerable importance in many applications in quantum control, is to explore the possibilities of steering a quantum system from an initial state to a target state. This thesis focuses on fundamental control-theoretical issues, e.g. controllability aspects and the structure of reachable sets, of quantum dynamics described by the Lindblad-Kossakowski master equation, which arises as a bilinear control system on some underlying real vector space. Based on Lie-algebraic methods from nonlinear control theory, the thesis presents a unified approach to control problems of finite dimensional closed and open quantum systems. In particular, a simplified treatment for controllability of closed quantum systems as well as new accessibility results for open quantum systems are obtained. The main tools to derive the results are the well-known classifications of all matrix Lie groups which act transitively on Grassmann manifolds and, respectively, on real vector spaces without the origin. It is also shown in this thesis that accessibility of the Lindblad-Kossakowski master equation is a generic property. Moreover, based on the theoretical accessibility results, an algorithm is developed to decide when the Lindblad-Kossakowski master equation is accessible.
This work studies the convergence of trajectories of gradient-like systems. In the first part of this work continuous-time gradient-like systems are examined. Results of Lojasiewicz and Kurdyka on the convergence of integral curves of gradient systems to single points are extended to a class of gradient-like vector fields and gradient-like differential inclusions. In the second part of this work discrete-time gradient-like optimization methods on manifolds are studied. Methods for smooth and for nonsmooth optimization problems are considered. For these methods some convergence results are proven. Additionally, the optimization methods for nonsmooth cost functions are applied to sphere packing problems on adjoint orbits.
Composite optimization problems, where the sum of a smooth and a merely lower semicontinuous function has to be minimized, are often tackled numerically by means of proximal gradient methods as soon as the lower semicontinuous part of the objective function is of simple enough structure. The available convergence theory associated with these methods (mostly) requires the derivative of the smooth part of the objective function to be (globally) Lipschitz continuous, and this might be a restrictive assumption in some practically relevant scenarios. In this paper, we readdress this classical topic and provide convergence results for the classical (monotone) proximal gradient method and one of its nonmonotone extensions which are applicable in the absence of (strong) Lipschitz assumptions. This is possible since, for the price of forgoing convergence rates, we omit the use of descent-type lemmas in our analysis.
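The classical (monotone) proximal gradient scheme discussed above can be sketched for the model composite problem min 0.5·||Ax − b||² + λ·||x||₁, whose nonsmooth part has the well-known soft-thresholding operator as its prox. The data, step size, and iteration count below are illustrative, and for simplicity the step is a fixed constant (chosen below 1/L) rather than the Lipschitz-free strategy analyzed in the paper:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (componentwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    """Monotone proximal gradient iteration for 0.5*||Ax-b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                    # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)  # prox step on the l1 part
    return x

# Illustrative data; step = 0.1 < 1/L with L = lambda_max(A^T A) = 9 here.
A = np.array([[3.0, 0.0], [0.0, 1.0]])
b = np.array([3.0, 0.5])
x = proximal_gradient(A, b, lam=0.1, step=0.1)
obj = 0.5 * np.sum((A @ x - b) ** 2) + 0.1 * np.sum(np.abs(x))
```

Because the example is separable, the minimizer can be computed by hand (x ≈ (0.9889, 0.4)), which makes the sketch easy to sanity-check.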
This thesis discusses and proposes a solution for one problem arising from deformation quantization:
Having constructed the quantization of a classical system, one would like to understand the mathematical properties of both the classical and the quantum system. Especially if both systems are described by ∗-algebras over the field of complex numbers, this means understanding the properties of certain ∗-algebras:
What are their representations? What are the properties of these representations? How can the states be described in these representations? How can the spectrum of the observables be described?
In order to allow for a sufficiently general treatment of these questions, the concept of abstract O∗-algebras is introduced. Roughly speaking, these are ∗-algebras together with a cone of positive linear functionals on them (e.g. the continuous ones if one starts with a ∗-algebra that is endowed with a well-behaved topology). This language is then applied to two examples from deformation quantization, which will be studied in great detail.
For a connected real Lie group G we consider the canonical standard-ordered star product arising from the canonical global symbol calculus based on the half-commutator connection of G. This star product trivially converges on polynomial functions on T\(^*\)G thanks to its homogeneity. We define a nuclear Fréchet algebra of certain analytic functions on T\(^*\)G, for which the standard-ordered star product is shown to be a well-defined continuous multiplication, depending holomorphically on the deformation parameter \(\hbar\). This nuclear Fréchet algebra is realized as the completed (projective) tensor product of a nuclear Fréchet algebra of entire functions on G with an appropriate nuclear Fréchet algebra of functions on \({\mathfrak {g}}^*\). The passage to the Weyl-ordered star product, i.e. the Gutt star product on T\(^*\)G, is shown to preserve this function space, yielding the continuity of the Gutt star product with holomorphic dependence on \(\hbar\).
A completely decomposable group is a direct sum of subgroups of the rationals. An almost completely decomposable group is a torsion free abelian group that contains a completely decomposable group as a subgroup of finite index. Tight subgroups are maximal subgroups (with respect to set inclusion) among the completely decomposable subgroups of an almost completely decomposable group. In this dissertation we show an extended version of the theorem of Bezout, give a new criterion for the tightness of a completely decomposable subgroup, derive some conditions under which a tight subgroup is regulating and generalize a theorem of Campagna. We give an example of an almost completely decomposable group all of whose regulating subgroups do not have a quotient with minimal exponent. We show that among the types of elements of a coset modulo a completely decomposable group there exists a unique maximal type and define this type to be the coset type. We give criteria for tightness and for being regulating in terms of coset types, as well as a representation of the type subgroups using coset types. We introduce the notion of reducible cosets and show their key role for transitions from one completely decomposable subgroup to another one containing the first as a proper subgroup. We give an example of a tight, but not regulating, subgroup which contains the regulator. We develop the notion of a fully single covered subset of a lattice, show that V-free implies fully single covered, but not necessarily vice versa, and we define an equivalence relation on the set of all finite subsets of a given lattice. We develop an extension of ordinary Hasse diagrams, and apply the lattice theoretic results to the lattice of types and to almost completely decomposable groups.
We give a collection of 16 examples which show that compositions \(g\) \(\circ\) \(f\) of well-behaved functions \(f\) and \(g\) can be badly behaved. Remarkably, in 10 of the 16 examples it suffices to take as outer function \(g\) simply a power-type or characteristic function. Such a collection of examples may serve as a source of exercises for a calculus course.
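One classical instance of this phenomenon (chosen for illustration, not necessarily among the paper's 16 examples) takes the smooth inner function f(x) = x² and the power-type outer function g(y) = √y: the composition g∘f equals |x|, which fails to be differentiable at 0. A quick numerical check of the one-sided difference quotients:

```python
import math

f = lambda x: x * x          # smooth inner function
g = lambda y: math.sqrt(y)   # power-type outer function, well-behaved on [0, inf)
h = lambda x: g(f(x))        # composition equals |x|

eps = 1e-8
right = (h(eps) - h(0.0)) / eps      # one-sided quotient from the right -> +1
left = (h(-eps) - h(0.0)) / (-eps)   # one-sided quotient from the left  -> -1
```

The two one-sided limits disagree (+1 versus −1), so h = g∘f has no derivative at 0 even though f is smooth everywhere and g is smooth on the range of f away from 0.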
This dissertation deals with three mathematical areas: polynomial matrices over finite fields, linear systems, and coding theory.
Coprimeness properties of polynomial matrices provide criteria for the reachability and observability of interconnected linear systems. Since time-discrete linear systems over finite fields and convolutional codes are basically the same objects, these results can be transferred to criteria for non-catastrophicity of convolutional codes.
We calculate the probability that specially structured polynomial matrices are right prime. In particular, formulas for the number of pairwise coprime polynomials and for the number of mutually left coprime polynomial matrices are derived. This leads to the probability that a parallel connected linear system is reachable and that a parallel connected convolutional code is non-catastrophic.
Moreover, the corresponding probabilities are calculated for other networks of linear systems and convolutional codes, such as series connection.
Furthermore, the probabilities that a convolutional code is MDP and that a block code is MDS are approximated.
Finally, we consider the probability of finding a solution for a linear network coding problem.
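Such solvability probabilities ultimately rest on counts of invertible matrices over a finite field: a uniformly random k×k matrix over GF(q) is invertible with probability ∏_{i=1}^{k}(1 − q^{-i}). The sketch below (illustrative, not the thesis' actual computation) verifies this for q = 2, k = 3 by exhaustive enumeration:

```python
import itertools

def rank_gf2(rows, k):
    """Rank over GF(2) of a k x k matrix whose rows are given as k-bit integers."""
    rank = 0
    rows = list(rows)
    for col in range(k - 1, -1, -1):
        pivot = next((i for i in range(rank, k) if rows[i] >> col & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(k):
            if i != rank and rows[i] >> col & 1:
                rows[i] ^= rows[rank]       # eliminate this column's bit
        rank += 1
    return rank

k = 3
# Enumerate all 2^(k*k) = 512 binary 3x3 matrices and count the invertible ones.
invertible = sum(
    rank_gf2(rows, k) == k for rows in itertools.product(range(2 ** k), repeat=k)
)
total = 2 ** (k * k)
# |GL(3, GF(2))| = (8-1)(8-2)(8-4) = 168, so the probability is 168/512 = 21/64.
prob = invertible / total
```

The same product formula, with q and k matching the field size and number of unknowns of the coding problem, gives the probability that a random square linear system over GF(q) is uniquely solvable.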
This thesis is devoted to Bernoulli Stochastics, which was initiated by Jakob Bernoulli more than 300 years ago with his masterpiece 'Ars conjectandi', which can be translated as 'Science of Prediction'. Thus, Jakob Bernoulli's Stochastics focuses on prediction, in contrast to the later emerging disciplines of probability theory, statistics and mathematical statistics. Only recently was Jakob Bernoulli's focus taken up by von Collani, who developed a unified theory of uncertainty aiming at making reliable and accurate predictions. In this thesis, teaching material as well as a virtual classroom are developed for fostering ideas and techniques initiated by Jakob Bernoulli and elaborated by Elart von Collani. The thesis is part of a broadly conceived project called 'Stochastikon' aiming at introducing Bernoulli Stochastics as a unified science of prediction and measurement under uncertainty. This ambitious aim shall be reached by the development of an internet-based comprehensive system offering the science of Bernoulli Stochastics on any level of application. So far it is planned that the 'Stochastikon' system (http://www.stochastikon.com/) will consist of five subsystems. Two of them are developed and introduced in this thesis. The first one is the e-learning programme 'Stochastikon Magister' and the second one is 'Stochastikon Graphics', which provides the entire Stochastikon system with graphical illustrations. E-learning is the outcome of merging education and internet techniques. E-learning is characterized by the facts that teaching and learning are independent of place and time and of the availability of specially trained teachers. Knowledge offering as well as knowledge transfer are realized by using modern information technologies. Nowadays more and more e-learning environments are based on the internet as the primary tool for communication and presentation. 
E-learning presentation tools are for instance text files, pictures, graphics, audio and video, which can be networked with each other. There is essentially no limit on access to the teaching content. Moreover, students can adapt the speed of learning to their individual abilities. E-learning is particularly appropriate for newly arising scientific and technical disciplines, which generally cannot be presented sufficiently well by traditional learning methods, because neither trained teachers nor textbooks are available. The first part of this dissertation introduces the state of the art of e-learning in statistics, since statistics and Bernoulli Stochastics are both based on probability theory and exhibit many similar features. Since Stochastikon Magister is the first e-learning programme for Bernoulli Stochastics, educational statistics systems are selected for the purpose of comparison and evaluation. This makes sense as both disciplines are an attempt to handle uncertainty and use methods that often can be directly compared. The second part of this dissertation is devoted to Bernoulli Stochastics. This part aims at outlining the content of two courses, which have been developed for the anticipated e-learning programme Stochastikon Magister, in order to show the difficulties in teaching, understanding and applying Bernoulli Stochastics. The third part discusses the realization of the e-learning programme Stochastikon Magister, its design and implementation, which aims at offering a systematic learning of principles and techniques developed in Bernoulli Stochastics. The resulting e-learning programme differs from commonly developed e-learning programmes as it is an attempt to provide a virtual classroom that simulates all the functions of real classroom teaching. This is in general not necessary, since most e-learning programmes aim at supporting existing classroom teaching. 
The fourth part presents two empirical evaluations of Stochastikon Magister. The evaluations are performed by means of comparisons between traditional classroom learning in statistics and e-learning of Bernoulli Stochastics. The aim is to assess the usability and learnability of Stochastikon Magister. Finally, the fifth part of this dissertation is added as an appendix. It refers to Stochastikon Graphics, the fifth component of the entire Stochastikon system. Stochastikon Graphics provides the other components with graphical representations of concepts, procedures and results obtained or used in the framework of Bernoulli Stochastics. The primary aim of this thesis is the development of appropriate software for the anticipated e-learning environment meant for Bernoulli Stochastics, while the preparation of the necessary teaching material constitutes only a secondary aim, used for demonstrating the functionality of the e-learning platform and the scientific novelty of Bernoulli Stochastics. To this end, a first version of two teaching courses is developed, implemented and offered online in order to collect practical experience. The two courses, which were developed as part of this project, are submitted as a supplement to this dissertation. For the time being, first experience with the e-learning programme Stochastikon Magister has been gathered. Students of different faculties of the University of Würzburg, as well as researchers and engineers who are involved in the Stochastikon project, have obtained access to Stochastikon Magister via the internet. They have registered for Stochastikon Magister and participated in the course programme. This thesis reports on two assessments of these first experiences, and the results will lead to further improvements with respect to content and organization of Stochastikon Magister.
In this paper we introduce a theoretical framework concerned with fostering functional thinking in Grade 8 students by utilizing digital technologies. This framework is meant to be used to guide the systematic variation of tasks for implementation in the classroom while using digital technologies. Examples of problems and tasks illustrate this process. Additionally, we present results of an empirical investigation with Grade 8 students, which focusses on the students’ skills with digital technologies, how they utilize these tools when engaging with the developed tasks, and how the tools influence their functional thinking. The research aim is to investigate in which way tasks designed according to the theoretical framework could promote functional thinking while using digital technologies in the sense of the operative principle. The results show that the developed framework — Function-Operation-Matrix — is a sound basis for initiating students’ actions in the sense of the operative principle, to foster the development of functional thinking in its three aspects, namely, assignment, co-variation and object, and that digital technologies can support this process in a meaningful way.
Characteristic of the solvability of elliptic systems of partial differential equations with constraints is the occurrence of an inf-sup condition. In the prototypical case of the Stokes equations this is also known as the Ladyzhenskaya condition. The validity of this condition, i.e. the existence of the associated constant, is a property of the domain on which the differential equation is to be solved. While mere existence already guarantees solvability, the size of the constant is also very important, for instance for error estimates in numerical approximation — in particular because a similar inf-sup condition appears in the discretization by finite element methods, where it is called the Babuska-Brezzi condition. On the one hand, this thesis deals with an analytical estimate of the Ladyzhenskaya constant for various domains, using equivalences with related problems from complex analysis (Friedrichs inequality) and structural mechanics (Korn inequality). A further part deals with the relation between the continuous Ladyzhenskaya constant and the discrete Babuska-Brezzi constant. The results obtained are verified numerically with the help of a powerful finite element program system developed for this purpose. This yields, for the first time, accurate estimates of the constants in two and three dimensions. Building on these results, a fast solution algorithm for the Stokes equations is proposed, and its superiority over classical methods such as the Uzawa iteration is demonstrated on problematic domains. While even for simple geometries an acceleration of convergence by a factor of 5 can be expected, factors of up to 1000 are possible in critical cases.
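The classical Uzawa iteration mentioned above as a baseline solves the saddle-point system A u + Bᵀ p = f, B u = g by alternating an elimination of the primal variable with a gradient step on the constraint; a minimal sketch on a tiny artificial system (all data below is illustrative):

```python
import numpy as np

# Tiny artificial saddle-point system:  A u + B^T p = f,  B u = g.
A = np.array([[2.0, 0.0], [0.0, 2.0]])   # symmetric positive definite block
B = np.array([[1.0, 1.0]])               # constraint block
f = np.array([1.0, 2.0])
g = np.array([0.0])

def uzawa(A, B, f, g, alpha, iters=200):
    """Classical Uzawa iteration for the saddle-point system above."""
    p = np.zeros(B.shape[0])
    for _ in range(iters):
        u = np.linalg.solve(A, f - B.T @ p)  # eliminate the primal variable
        p = p + alpha * (B @ u - g)          # gradient step on the constraint
    return u, p

# Convergence requires alpha < 2 / lambda_max(B A^{-1} B^T); here that Schur
# complement is the 1x1 matrix [[1.0]], so alpha = 0.5 is safe.
u, p = uzawa(A, B, f, g, alpha=0.5)
```

Each sweep contracts the pressure error by the factor |1 − α·λ| per Schur-complement eigenvalue λ, which is exactly the slow behavior on badly conditioned domains that a small inf-sup constant produces.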
Part 1 of the thesis contains a summary of fundamental results from functional analysis as well as an introduction to integral and differential calculus in Fréchet spaces. In particular, Chapter 2 provides a detailed presentation of the Lebesgue-Bochner integral on Fréchet spaces. Part 2 treats the theory of linear differential equations on Fréchet spaces. To this end, Chapter 3 characterizes strongly differentiable semigroups and their infinitesimal generators. In Chapter 4 these results are used to study linear evolution equations (of hyperbolic or parabolic type). Part 3 contains the central results of the thesis. In Chapter 5 two existence and uniqueness theorems for nonlinear ordinary differential equations in tame Fréchet spaces are proved. Chapter 6 gives an application of the results of Chapter 5 to nonlinear first-order partial differential equations.
In the thesis discrete moments of the Riemann zeta-function and allied Dirichlet series are studied.
In the first part the asymptotic value-distribution of zeta-functions is studied, where the samples are taken from a Cauchy random walk on a vertical line inside the critical strip. Building on techniques of Lifshits and Weber, analogous results for the Hurwitz zeta-function are derived. Using Atkinson’s dissection this is even generalized to Dirichlet L-functions associated with a primitive character. Both results indicate that the expectation value equals one, which shows that the values of these zeta-functions are small on average.
The second part deals with the logarithmic derivative of the Riemann zeta-function on vertical lines and here the samples are with respect to an explicit ergodic transformation. Extending work of Steuding, discrete moments are evaluated and an equivalent formulation for the Riemann Hypothesis in terms of ergodic theory is obtained.
In the third and last part of the thesis, the phenomenon of universality with respect to stochastic processes is studied. It is shown that certain random shifts of the zeta-function can approximate non-vanishing analytic target functions as accurately as we please. This result relies on Voronin's universality theorem.
Ó. Blasco and S. Pott showed that the supremum of operator norms over L\(^{2}\) of all bicommutators (with the same symbol) of one-parameter Haar multipliers dominates the biparameter dyadic product BMO norm of the symbol itself. In the present work we extend this result to the Bloom setting, and to any exponent 1 < p < ∞. The main tool is a new characterization in terms of paraproducts and two-weight John–Nirenberg inequalities for dyadic product BMO in the Bloom setting. We also extend our results to the whole scale of indexed spaces between little bmo and product BMO in the general multiparameter setting, with the appropriate iterated commutator in each case.
Background
HIV-disease progression correlates with immune activation. Here we investigated whether corticosteroid treatment can attenuate HIV disease progression in antiretroviral-untreated patients.
Methods
Double-blind, placebo-controlled randomized clinical trial including 326 HIV-patients in a resource-limited setting in Tanzania (clinicaltrials.gov NCT01299948). Inclusion criteria were a CD4 count above 300 cells/μl, the absence of AIDS-defining symptoms and an ART-naïve therapy status. Study participants received 5 mg prednisolone per day or placebo for 2 years. Primary endpoint was time to progression to an AIDS-defining condition or to a CD4-count below 200 cells/μl.
Results
No significant change in progression towards the primary endpoint was observed in the intent-to-treat (ITT) analysis (19 cases with prednisolone versus 28 cases with placebo, p = 0.1407). In a per-protocol (PP) analysis, 13 versus 24 study participants progressed to the primary study endpoint (p = 0.0741). Secondary endpoints: Prednisolone treatment decreased immune activation (sCD14, suPAR, CD38/HLA-DR/CD8+) and increased CD4 counts (+77.42 ± 5.70 cells/μl compared to -37.42 ± 10.77 cells/μl under placebo, p < 0.0001). Treatment with prednisolone was associated with a 3.2-fold increase in HIV viral load (p < 0.0001). In a post-hoc analysis stratifying for sex, females treated with prednisolone progressed significantly more slowly to the primary study endpoint than females treated with placebo (ITT analysis: 11 versus 21 cases, p = 0.0567; PP analysis: 5 versus 18 cases, p = 0.0051). No changes in disease progression were observed in men.
Conclusions
This study could not detect any significant effects of prednisolone on disease progression in antiretroviral-untreated HIV infection within the intent-to-treat population. However, significant effects were observed on CD4 counts, immune activation and HIV viral load. This study contributes to a better understanding of the role of immune activation in the pathogenesis of HIV infection.
We investigate eigenvalues of the zero-divisor graph Γ(R) of finite commutative rings R and study the interplay between these eigenvalues, the ring-theoretic properties of R and the graph-theoretic properties of Γ(R). The graph Γ(R) is defined as the graph with vertex set consisting of all nonzero zero-divisors of R and adjacent vertices x, y whenever xy=0. We provide formulas for the nullity of Γ(R), i.e., the multiplicity of the eigenvalue 0 of Γ(R). Moreover, we precisely determine the spectra of \(\Gamma ({\mathbb {Z}}_p \times {\mathbb {Z}}_p \times {\mathbb {Z}}_p)\) and \(\Gamma ({\mathbb {Z}}_p \times {\mathbb {Z}}_p \times {\mathbb {Z}}_p \times {\mathbb {Z}}_p)\) for a prime number p. We introduce a graph product ×Γ with the property that Γ(R)≅Γ(R\(_1\))×Γ⋯×ΓΓ(R\(_r\)) whenever R≅R\(_1\)×⋯×R\(_r\). With this product, we find relations between the number of vertices of the zero-divisor graph Γ(R), the compressed zero-divisor graph, the structure of the ring R and the eigenvalues of Γ(R).
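For a concrete instance (the ring Z_12 here is an illustrative choice, not one of the rings whose spectra the paper determines), the spectrum of Γ(Z_n) can be computed directly from the definition:

```python
import math
import numpy as np

n = 12
# Vertices: the nonzero zero-divisors of Z_n, i.e. x with gcd(x, n) > 1.
vertices = [x for x in range(1, n) if math.gcd(x, n) > 1]

# Adjacency: x ~ y iff x != y and x*y = 0 in Z_n.
A = np.array([[1 if x != y and (x * y) % n == 0 else 0 for y in vertices]
              for x in vertices], dtype=float)

eigenvalues = np.sort(np.linalg.eigvalsh(A))
# The adjacency matrix has zero trace, so the eigenvalues sum to zero.
```

For n = 12 this gives the 7 vertices {2, 3, 4, 6, 8, 9, 10} and 8 edges; the multiplicity of the eigenvalue 0 in `eigenvalues` is the nullity of Γ(Z_12) in the sense of the paper.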
The work at hand studies problems from Loewner theory and is divided into two parts:
In part 1 (chapter 2) we present the basic notions of Loewner theory. Here we use a modern form which was developed by F. Bracci, M. Contreras, S. Díaz-Madrigal et al. and which can be applied to certain higher dimensional complex manifolds.
We look at two domains in more detail: the Euclidean unit ball and the polydisc. Here we consider two classes of biholomorphic mappings which were introduced by T. Poreda and G. Kohr as generalizations of the class S.
We prove a conjecture of G. Kohr about support points of these classes. The proof relies on the observation that the classes describe so-called Runge domains, which follows from a result by L. Arosio, F. Bracci and E. F. Wold.
Furthermore, we prove a conjecture of G. Kohr about support points of a class of biholomorphic mappings that comes from applying the Roper-Suffridge extension operator to the class S.
In part 2 (chapter 3) we consider one special Loewner equation: the chordal multiple-slit equation in the upper half-plane.
After describing basic properties of this equation we address the problem of whether the coefficient functions in this equation can be chosen to be constant. D. Prokhorov proved this statement under the assumption that the slits are piecewise analytic. We use a completely different idea to solve the problem in its general form.
As the Loewner equation with constant coefficients holds everywhere (and not just almost everywhere), this result generalizes Loewner’s original idea to the multiple-slit case.
Moreover, we consider the following problems:
• The “simple-curve problem” asks which driving functions describe the growth of simple curves (in contrast to curves that touch themselves). We discuss necessary and sufficient conditions, generalize a theorem of J. Lind, D. Marshall and S. Rohde to the multiple-slit equation, and give an example of a set of driving functions that generate simple curves because of a certain self-similarity property.
• We discuss properties of driving functions that generate slits which enclose a given angle with the real axis.
• A theorem by O. Roth gives an explicit description of the reachable set of one point in the radial Loewner equation. We prove the analog for the chordal equation.
In this paper we consider the class (θA, B) of parameter-dependent linear systems given by matrices A ∈ ℂ^{n×n} and B ∈ ℂ^{n×m}. This class is of interest for several applications, and a frequent task for such systems is to steer the system from the origin to a given target family f(θ) using an input that is independent of the parameter. This paper provides a collection of necessary and sufficient conditions for ensemble reachability of these systems.
Working on simulation and modelling tasks that are to be solved with digital tools places new demands on mathematics teachers in lesson planning and teaching. Used sensibly, digital tools support simulation and modelling processes and make more realistic real-world contexts accessible in mathematics lessons. For the empirical study of professional competencies for teaching simulation and mathematical modelling with digital tools, it is necessary to interpret aspects of the global teaching competencies of (prospective) mathematics teachers in a domain-specific way.
We therefore developed a test instrument that captures beliefs, self-efficacy expectations and pedagogical content knowledge for teaching simulation and mathematical modelling with digital tools. The instrument is complemented by self-reported prior experience with one's own use of digital tools and with the use of digital tools in lesson planning and teaching.
The instrument is suitable for measuring, via pre-post analyses of course groups, the growth of the competency described above among (prospective) mathematics teachers. In the future, the results can thus be used to examine and evaluate the effectiveness of courses that are intended to foster this competency.
The article consists of two parts: the test description first presents the underlying construct, the scope of the instrument, its structure and notes on administration. In addition, the test quality is examined on the basis of the piloting results. The second part contains the complete test instrument.
In forecasting count processes, practitioners often ignore the discreteness of counts and compute forecasts based on Gaussian approximations instead. For both central and non-central point forecasts, and for various types of count processes, the performance of such approximate point forecasts is analyzed. The considered data-generating processes include different autoregressive schemes with varying model orders, count models with overdispersion or zero inflation, counts with a bounded range, and counts exhibiting trend or seasonality. We conclude that Gaussian forecast approximations should be avoided.
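As a toy illustration of why the discreteness matters (this is not the paper's experimental design; the Poisson setting and the value λ = 1.5 are chosen purely for illustration), compare the exact median of a Poisson count with the rounded mean that a Gaussian approximation would deliver as a central point forecast:

```python
import math

def poisson_median(lam):
    """Smallest k with P(X <= k) >= 1/2 for X ~ Poisson(lam)."""
    p = math.exp(-lam)      # P(X = 0)
    cdf, k = p, 0
    while cdf < 0.5:
        k += 1
        p *= lam / k        # P(X = k) from P(X = k-1)
        cdf += p
    return k

def gaussian_approx_median(lam):
    """Gaussian approximation: mean = median = lam, rounded to an integer."""
    return math.floor(lam + 0.5)

# For lam = 1.5 the two forecasts differ: the exact median is 1,
# while the Gaussian approximation returns 2.
exact, approx = poisson_median(1.5), gaussian_approx_median(1.5)
```

For larger means (e.g. λ = 10) the two forecasts typically coincide, which matches the intuition that the Gaussian approximation becomes less harmful as counts grow.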
An efficient and accurate computational framework for solving control problems governed by quantum spin systems is presented. Spin systems are extremely important in modern quantum technologies such as nuclear magnetic resonance spectroscopy, quantum imaging and quantum computing. In these applications, two classes of quantum control problems arise: optimal control problems and exact-controllability problems, both with a bilinear control structure. These models correspond to the Schrödinger-Pauli equation, describing the time evolution of a spinor, and the Liouville-von Neumann master equation, describing the time evolution of a density operator. This thesis focuses on quantum control problems governed by these models. An appropriate definition of the optimization objectives and of the admissible set of control functions makes it possible to construct controls with specific properties. These properties are in general required by the physics and the technologies involved in quantum control applications. A main purpose of this work is to address non-differentiable quantum control problems. For this reason, a computational framework is developed to address optimal control problems, possibly with an L1-penalization term in the cost functional, and exact-controllability problems. In both cases the set of admissible control functions is a subset of a Hilbert space. The bilinear control structure of the quantum model, the L1-penalization term and the control constraints generate strong non-linearities that make it difficult to solve and analyse the corresponding control problems. The first part of this thesis focuses on the physical description of the spin of particles and of the magnetic resonance phenomenon. Afterwards, the controlled Schrödinger-Pauli equation and the Liouville-von Neumann master equation are discussed. These equations, like many other controlled quantum models, can be represented by dynamical systems with a bilinear control structure.
In the second part of this thesis, theoretical investigations of optimal control problems, with a possible L1-penalization term in the objective and control constraints, are considered. In particular, existence of solutions, optimality conditions, and regularity properties of the optimal controls are discussed. In order to solve these optimal control problems, semi-smooth Newton methods are developed and proved to be superlinearly convergent. The main difficulty in the implementation of a Newton method for optimal control problems comes from the dimension of the Jacobian operator. In a discrete form, the Jacobian is a very large matrix, which makes its construction infeasible from a practical point of view. For this reason, the focus of this work is on inexact Krylov-Newton methods, which combine the Newton method with Krylov iterative solvers for linear systems and thereby avoid the construction of the discrete Jacobian. In the third part of this thesis, two methodologies for the exact controllability of quantum spin systems are presented. The first method consists of a continuation technique, while the second method is based on a particular reformulation of the exact-controllability problem. Both methodologies address minimum L2-norm exact-controllability problems. In the fourth part, the thesis focuses on the numerical analysis of quantum control problems. In particular, the modified Crank-Nicolson scheme is discussed as an adequate time discretization of the Schrödinger equation, the first-discretize-then-optimize strategy is used to obtain a discrete reduced-gradient formula for the differentiable part of the optimization objective, and implementation details and globalization strategies that guarantee an adequate numerical behaviour of semi-smooth Newton methods are treated.
In the last part of this work, several numerical experiments are performed to validate the theoretical results and demonstrate the ability of the proposed computational framework to solve quantum spin control problems.
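The norm-preserving character of Crank-Nicolson time stepping can be demonstrated on a minimal two-level spin system. The sketch below uses the standard (unmodified) Crank-Nicolson/Cayley step, and the Hamiltonian coefficients are arbitrary illustrative values, not parameters from the thesis:

```python
import numpy as np

# Pauli matrices for a single spin-1/2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def crank_nicolson_step(psi, H, dt):
    """One Crank-Nicolson (Cayley) step for i d/dt psi = H psi:
    (I + i dt/2 H) psi_new = (I - i dt/2 H) psi.
    The update matrix is unitary, so ||psi|| is conserved."""
    I = np.eye(len(psi), dtype=complex)
    return np.linalg.solve(I + 0.5j * dt * H, (I - 0.5j * dt * H) @ psi)

H = 0.5 * sz + 0.3 * sx                    # drift + constant control (made-up values)
psi = np.array([1.0, 0.0], dtype=complex)  # start in the "spin up" state
for _ in range(1000):                      # integrate up to t = 10
    psi = crank_nicolson_step(psi, H, dt=0.01)
norm = np.linalg.norm(psi)
```

Because the Cayley transform of a Hermitian matrix is unitary, the norm stays at 1 up to roundoff no matter how long the integration runs, while an explicit scheme would drift.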
We discuss exceptional polynomials, i.e. polynomials over a finite field $k$ that induce bijections over infinitely many finite extensions of $k$. In the first chapters we give the theoretical background needed to characterize this class of polynomials by Galois-theoretic means. This leads to the notions of arithmetic and geometric monodromy groups. In the remaining chapters we restrict our attention to polynomials with primitive affine arithmetic monodromy group. We first classify all exceptional polynomials for which the fixed field of the affine kernel of the arithmetic monodromy group has genus at most 2. Next we show that every full affine group can be realized as the monodromy group of a polynomial. Finally, we classify affine polynomials of a given degree.
In this article we collect some recent results on the global existence of weak solutions for diffuse interface models involving incompressible magnetic fluids. We consider both the cases of matched and unmatched specific densities. For the model involving fluids with identical densities we consider the free energy density to be a double well potential whereas for the unmatched density case it is crucial to work with a singular free energy density.
Extreme value theory is concerned with the stochastic modeling of rare and extreme events. While fundamental theories of classical stochastics - such as the laws of small numbers or the central limit theorem - are used to investigate the asymptotic behavior of the sum of random variables, extreme value theory focuses on the maximum or minimum of a set of observations. The limit distribution of the normalized sample maximum among a sequence of independent and identically distributed random variables can be characterized by means of so-called max-stable distributions.
This dissertation is concerned with different aspects of the theory of max-stable random vectors and stochastic processes. In particular, the concept of 'differentiability in distribution' of a max-stable process is introduced and investigated. Moreover, 'generalized max-linear models' are introduced in order to interpolate a known max-stable random vector by a max-stable process. Further, the connection between extreme value theory and multivariate records is established. In particular, so-called 'complete' and 'simple' records are introduced and their asymptotic behavior is examined.
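The convergence of normalized sample maxima described above is easy to observe numerically. The following sketch illustrates the classical limit theorem (it is not part of the dissertation's methods; block length and sample sizes are arbitrary): block maxima of standard exponential variables, centered by log n, approach a Gumbel distribution, whose mean is the Euler-Mascheroni constant γ ≈ 0.5772.

```python
import math
import random

random.seed(0)

def normalized_max(n):
    """Maximum of n iid Exp(1) variables, centered by log n.
    The limit law is the standard Gumbel distribution."""
    m = max(random.expovariate(1.0) for _ in range(n))
    return m - math.log(n)

samples = [normalized_max(1000) for _ in range(2000)]
mean = sum(samples) / len(samples)   # should be close to gamma ~ 0.5772
```

The exponential distribution lies in the Gumbel domain of attraction; for Fréchet- or Weibull-type tails the same experiment requires different normalizing sequences.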
Based on the work of Eisenberg and Noe [2001], Suzuki [2002], Elsinger [2009] and Fischer [2014], we consider a generalization of Merton's asset valuation approach where n firms are linked by cross-ownership of equities and liabilities. Each firm is assumed to have a single outstanding liability, whereas its assets consist of one system-exogenous asset, as well as system-endogenous assets comprising some fraction of other firms' equity and liability, respectively. Following Fischer [2014], one can obtain no-arbitrage prices of equity and the recovery claims of liabilities as solutions of a fixed point problem, and hence obtain no-arbitrage prices of the `firm value' of each firm, which is the value of the firm's liability plus the firm's equity.
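The fixed-point characterization of equity and recovery values can be sketched numerically. The following Picard iteration is a minimal illustration of a Suzuki/Fischer-type valuation at maturity; the payoff rules (equity as the positive part of assets above debt, recovery capped at the nominal liability) follow the model described above, while the notation `Ms`, `Md` and all numbers are hypothetical:

```python
import numpy as np

def xos_prices(a, d, Ms, Md, tol=1e-12, max_iter=10_000):
    """Equity s and debt recovery r at maturity under cross-ownership,
    computed by fixed-point (Picard) iteration.

    a  : exogenous asset values,       d : nominal liabilities,
    Ms : Ms[i, j] = fraction of firm j's equity held by firm i,
    Md : analogous fractions of firm j's liabilities held by firm i."""
    n = len(a)
    s, r = np.zeros(n), np.zeros(n)
    for _ in range(max_iter):
        v = a + Ms @ s + Md @ r          # total asset value of each firm
        s_new = np.maximum(v - d, 0.0)   # equity = asset value above debt
        r_new = np.minimum(d, v)         # recovery is capped at the liability
        if max(np.max(np.abs(s_new - s)), np.max(np.abs(r_new - r))) < tol:
            return s_new, r_new
        s, r = s_new, r_new
    return s, r

# Two firms, cross-ownership of equity only (hypothetical numbers)
a = np.array([1.0, 0.4])
d = np.array([0.8, 0.8])
Ms = np.array([[0.0, 0.5], [0.5, 0.0]])
Md = np.zeros((2, 2))
s, r = xos_prices(a, d, Ms, Md)
```

In this two-firm example the iteration converges to s = (0.2, 0) and r = (0.8, 0.5), so the firm values s + r are (1.0, 0.5): the second firm defaults, and its low equity feeds back into the first firm's asset value.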
In a first step, we consider the two-firm case where explicit formulae for the no-arbitrage prices of the firm values are available (cf. Suzuki [2002]). Since firm values are derivatives of exogenous asset values, the distribution of firm values at maturity can be determined from the distribution of exogenous asset values. The Merton model and most of its known extensions do not account for the cross-ownership structure of the assets owned by the firm. Therefore the assumption of lognormally distributed exogenous assets leads to lognormally distributed firm values in such models, as the values of the liability and the equity add up to the exogenous asset's value (which has lognormal distribution by assumption). Our work therefore starts from lognormally distributed exogenous assets and reveals how cross-ownership, when correctly accounted for in the valuation process, affects the distribution of the firm value, which is not lognormal anymore. In a simulation study we examine the impact of several parameters (amount of cross-ownership of debt and equity, ratio of liabilities to expected exogenous assets value) on the differences between the distribution of firm values obtained from our model and correspondingly matched lognormal distributions. It becomes clear that the assumption of lognormally distributed firm values may lead to both over- and underestimation of the “true” firm values (within the cross-ownership model) and consequently of bankruptcy risk, too.
In a second step, the bankruptcy risk of one firm within the system is analyzed in more detail in a further simulation study, revealing that the correct incorporation of cross-ownership in the valuation procedure becomes more important the tighter the cross-ownership structure between the two firms is. Furthermore, depending on the considered type of cross-ownership (debt or equity), the assumption of lognormally distributed firm values is likely to result in an over- or underestimation, respectively, of the actual probability of default. In a similar vein, we consider the Value-at-Risk (VaR) of a firm in the system, which we calculate as the negative α-quantile of the firm value at maturity minus the firm's risk-neutral price in t=0, i.e. we consider the (1-α)100%-VaR of the change in firm value. If we let the cross-ownership fractions (i.e. the fraction that one firm holds of another firm's debt or equity) converge to 1 (the supremum of the possible values that cross-ownership fractions can take), we can prove that in a system of two firms, the lognormal model will over- and underestimate both univariate and bivariate probabilities of default under cross-ownership of debt only and cross-ownership of equity only, respectively. Furthermore, we provide a formula that allows us to check, for an arbitrary scenario of cross-ownership and any non-negative distribution of exogenous assets, whether the approximating lognormal model will over- or underestimate the related probability of default of a firm. In particular, any given non-negative distribution of exogenous asset values (non-degenerate in a certain sense) can be transformed into a new, “extreme” distribution of exogenous assets yielding such a low or high actual probability of default that the approximating lognormal model will over- or underestimate this risk, respectively.
After this analysis of the univariate distribution of firm values under cross-ownership in a system of two firms with bivariately lognormally distributed exogenous asset values, we consider the copula of these firm values as a distribution-free measure of the dependency between these firm values. Without cross-ownership, this copula would be the Gaussian copula. Under cross-ownership, we especially consider the behaviour of the copula of firm values in the lower left and upper right corner of the unit square, and depending on the type of cross-ownership and the considered corner, we either obtain error bounds on how well the copula of firm values under cross-ownership can be approximated by the Gaussian copula, or we see that the copula of firm values can be written as the copula of two linear combinations of exogenous asset values (note that these linear combinations are not lognormally distributed). These insights serve as a basis for our analysis of the tail dependence coefficient of firm values under cross-ownership. Under cross-ownership of debt only, firm values remain upper tail independent, whereas they become perfectly lower tail dependent if the correlation between exogenous asset values exceeds a certain positive threshold, which does not depend on the exact level of cross-ownership. Under cross-ownership of equity only, the situation is reversed in that firm values always remain lower tail independent, but upper tail independence is preserved if and only if the right tail behaviour of both firms' values is determined by the right tail behaviour of the firms' own exogenous asset value instead of the respective other firm's exogenous asset value.
Next, we return to systems of n≥2 firms and analyze sensitivities of no-arbitrage prices of equity and the recovery claims of liabilities with respect to the model parameters. In the literature, such sensitivities are provided with respect to exogenous asset values by Gouriéroux et al. [2012], and we extend the existing results by considering how these no-arbitrage prices depend on the cross-ownership fractions and the level of liabilities. For the former, we can show that all prices are non-decreasing in any cross-ownership fraction in the model, and by use of a version of the Implicit Function Theorem we can also determine exact derivatives. For the latter, we show that the recovery value of debt and the equity value of a firm are non-decreasing and non-increasing in the firm's nominal level of liabilities, respectively, but the firm value is in general not monotone in the firm's level of liabilities. Furthermore, no-arbitrage prices of equity and the recovery claims of liabilities of a firm are in general non-monotone in the nominal level of liabilities of other firms in the system. If we confine ourselves to one type of cross-ownership (i.e. debt or equity), we can derive more precise relationships. All the results can be transferred to risk-neutral prices before maturity.
Finally, following Gouriéroux et al. [2012] and as a kind of extension to the above sensitivity results, we consider how immediate changes in exogenous asset values of one or more firms at maturity affect the financial health of a system of n initially solvent firms. We start with some theoretical considerations on what we call the contagion effect, namely the change in the endogenous asset value of a firm caused by shocks on the exogenous assets of firms within the system. For the two-firm case, an explicit formula is available, making clear that in general (and in particular under cross-ownership of equity only), the effect of contagion can be positive as well as negative, i.e. it can both mitigate and exacerbate the change in the exogenous asset value of a firm. On the other hand, we cannot generally say that a tighter cross-ownership structure leads to bigger absolute contagion effects. Under cross-ownership of debt only, firms cannot profit from positive shocks beyond the direct effect on exogenous assets, as the contagion effect is always non-positive. Next, we are concerned with spillover effects of negative shocks on a subset of firms to other firms in the system (experiencing non-negative shocks themselves), driving them into default due to large losses in their endogenous asset values. Extending the results of Glasserman and Young [2015], we provide a necessary condition for the shock to cause such an event. This also yields an upper bound for the probability of such an event. We further investigate in a simulation study how the stability of a system of firms exposed to multiple shocks depends on the model parameters. In doing so, we consider three network types (incomplete, core-periphery and ring networks) with simultaneous shocks on some of the firms, wiping out a certain percentage of their exogenous assets.
Then we analyze, for all three types of cross-ownership (debt only, equity only, both debt and equity), how the shock intensity, the shock size, and network parameters such as the number of links in the network and the proportion of a firm's debt or equity held within the system influence several output quantities, among them the total number of defaults and the relative loss in the sum of firm values. Comparing our results to the studies of Nier et al. [2007], Gai and Kapadia [2010] and Elliott et al. [2014], we can only partly confirm their results with respect to the number of defaults. We conclude our work with a theoretical comparison of the complete network (where each firm holds a part of every other firm) and the ring network with respect to the number of defaults caused by a shock on a single firm, as done by Allen and Gale [2000]. In line with the literature, we find that under cross-ownership of debt only, complete networks are “robust yet fragile” [Gai and Kapadia, 2010] in that moderate shocks can be completely withstood or drive the firm directly hit by the shock into default, but as soon as the shock exceeds a certain size, all firms are simultaneously in default. In contrast, firms default one by one in the ring network, with the first “contagious default” (i.e. a default of a firm not directly hit by the shock) already occurring for smaller shock sizes than in the complete network.
This thesis deals with the use of origami in school teaching. More precisely, it describes a teaching sequence on flat-foldability, a subfield of mathematical paper folding, for upper-secondary mathematics instruction at Gymnasien and similar schools. Concrete instructions for the classroom as well as alternatives are worked out, justified and illustrated with many figures. Furthermore, the goals of this teaching sequence are set out in accordance with the KMK educational standards. Finally, a mathematical perspective on flat-foldability is given, together with a placement within the current state of research.
Fluids in Gravitational Fields – Well-Balanced Modifications for Astrophysical Finite-Volume Codes
(2021)
Stellar structure can -- in good approximation -- be described as a hydrostatic state, which arises from a balance between the gravitational force and the pressure gradient. Hydrostatic states are static solutions of the full compressible Euler system with gravitational source term, which can be used to model the stellar interior. In order to carry out simulations of dynamical processes occurring in stars, it is vital for the numerical method to accurately maintain the hydrostatic state over a long time period. In this thesis we present different methods to modify astrophysical finite volume codes in order to make them \emph{well-balanced}, preventing them from introducing significant discretization errors close to hydrostatic states. Our well-balanced modifications are constructed so that they meet the requirements for methods applied in the astrophysical context: they can well-balance arbitrary hydrostatic states with any equation of state used to model the thermodynamical relations, and they are simple to implement in existing astrophysical finite volume codes. One of our well-balanced modifications follows given solutions exactly and can be applied on any grid geometry. The other methods we introduce, which do not require any a priori knowledge, balance local high-order approximations of arbitrary hydrostatic states on a Cartesian grid. All of our modifications allow for high-order accuracy of the method. The improved accuracy close to hydrostatic states is verified in various numerical experiments.
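The effect a well-balanced discretization is meant to achieve can be seen in one dimension. The sketch below is a generic illustration in the spirit of local hydrostatic reconstruction, not one of the thesis's specific modifications; the isothermal atmosphere and all parameters are made up. It compares the discrete momentum residual dp/dx + ρg of a naive centered scheme with a variant whose source term is the centered difference of a local hydrostatic reconstruction:

```python
import math

# Isothermal hydrostatic atmosphere with p = rho and g = 1, so rho(x) = exp(-x).
N = 64
dx = 1.0 / N
x = [(i + 0.5) * dx for i in range(N)]
rho = [math.exp(-xi) for xi in x]
p = rho[:]   # equation of state p = rho (units with R*T = 1)

def residual_naive(i):
    """Centered discretization of dp/dx + rho*g: second-order accurate,
    but nonzero at the hydrostatic state."""
    return (p[i + 1] - p[i - 1]) / (2 * dx) + rho[i]

def residual_well_balanced(i):
    """Same pressure gradient, but the source term is the centered
    difference of the local hydrostatic reconstruction
    p[i]*exp(-(x_j - x[i])); for hydrostatic data the two terms
    cancel to machine precision."""
    S = -(p[i] * math.exp(-(x[i + 1] - x[i]))
          - p[i] * math.exp(-(x[i - 1] - x[i]))) / (2 * dx)
    return (p[i + 1] - p[i - 1]) / (2 * dx) + S

err_naive = max(abs(residual_naive(i)) for i in range(1, N - 1))
err_wb = max(abs(residual_well_balanced(i)) for i in range(1, N - 1))
```

At the hydrostatic state the naive residual is O(dx²), so the scheme injects spurious momentum at every step, while the well-balanced residual vanishes up to roundoff.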
We construct a foliation of an asymptotically flat end of a Riemannian manifold by hypersurfaces which are critical points of a natural functional arising in potential theory. These hypersurfaces are perturbations of large coordinate spheres, and they admit solutions of a certain over-determined boundary value problem involving the Laplace–Beltrami operator. In a key step we must invert the Dirichlet-to-Neumann operator, highlighting the nonlocal nature of our problem.
Functions of bounded variation are of great importance in many fields of mathematics. This thesis investigates spaces of functions of bounded variation of one variable of various types, compares them to other classical function spaces and reveals natural “habitats” of BV-functions. New and almost comprehensive results concerning mapping properties such as surjectivity and injectivity, several kinds of continuity, and compactness of both linear and nonlinear operators between such spaces are given. A new theory about different types of convergence of sequences of such operators is presented in full detail and applied to a new proof of the continuity of the composition operator in the classical BV-space. The abstract results serve as ingredients to solve Hammerstein and Volterra integral equations using fixed point theory. Many criteria guaranteeing the existence and uniqueness of solutions in BV-type spaces are given and later applied to solve boundary and initial value problems in a nonclassical setting.
A big emphasis is put on a clear and detailed discussion. Many pictures and synoptic tables help to visualize and summarize the most important ideas. Over 160 examples and counterexamples illustrate the many abstract results and how delicate some of them are.
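The fixed-point approach to Hammerstein equations mentioned above can be illustrated with a simple Picard iteration. This is a generic numerical sketch on a grid, not from the thesis (which works in BV-type spaces); kernel, nonlinearity and data below are made up and chosen so that the integral operator is a contraction:

```python
import math

# Hammerstein equation  x(t) = f(t) + lam * \int_0^1 k(t, s) g(x(s)) ds.
# With lam * Lip(g) * max_t \int_0^1 |k(t, s)| ds = 0.4 * 1 * 0.5 < 1,
# the right-hand side is a contraction and Picard iteration converges.
n = 101
t = [i / (n - 1) for i in range(n)]
lam = 0.4
f = [math.sin(math.pi * ti) for ti in t]   # hypothetical data
k = lambda ti, s: ti * s                   # hypothetical kernel
g = lambda u: math.cos(u)                  # 1-Lipschitz nonlinearity

def K(xvec, ti):
    """Trapezoidal-rule approximation of the integral term at t = ti."""
    vals = [k(ti, t[j]) * g(xvec[j]) for j in range(n)]
    return (sum(vals) - 0.5 * (vals[0] + vals[-1])) / (n - 1)

x = f[:]
for _ in range(40):                        # Picard iteration
    x = [f[i] + lam * K(x, t[i]) for i in range(n)]

# residual of the discretized equation at the computed solution
res = max(abs(x[i] - f[i] - lam * K(x, t[i])) for i in range(n))
```

Because the contraction factor here is about 0.2, the iterates converge geometrically and the discrete residual is driven down to roundoff level after a few dozen sweeps.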
Spiral-type surfaces are minimal surfaces of three-dimensional Euclidean space distinguished by a high degree of symmetry under complex similarity transformations of the minimal curve. They owe their name to the following property: they and their complex homothetic surfaces are the only minimal surfaces developable onto spiral surfaces. Known spiral-type surfaces are the spiral minimal surfaces (simultaneously minimal and spiral surfaces) and the Bour surfaces (minimal surfaces developable onto surfaces of revolution). The catenoid and the Enneper surface are special Bour surfaces. In this thesis the geometric properties of spiral-type surfaces are investigated. We determine their periodicities and symmetries and seek distinguished curves on them. We use a global Weierstrass representation of the spiral-type surfaces; in this representation the surfaces form a family with one complex parameter. From this representation we derive all symmetries of the spiral-type surfaces arising from linear similarity transformations of the minimal curve. As special cases we obtain the symmetries under associations and derivations (rotation of the minimal curve by an imaginary angle), as well as the real symmetries (rotational, reflectional and scaling symmetries). Among the spiral-type surfaces there are only two translation-symmetric surfaces. Reversing the orientation of a spiral-type surface corresponds (up to complex homothety) to a sign change of the surface parameter. Moreover, simple reflections in the coordinate planes and rotations about the coordinate axes reverse the sign of the real and imaginary part of the surface parameter, respectively. Finally, we present distinguished curves on the spiral-type surfaces: lines of curvature, asymptotic lines and geodesics, as well as their generalizations, the pseudo-lines of curvature and pseudo-geodesics.
Global Existence and Uniqueness Results for Nematic Liquid Crystal and Magnetoviscoelastic Flows
(2022)
Liquid crystals and polymeric fluids are found in many technical applications, with liquid crystal displays probably being the most prominent one. Ferromagnetic materials are well established in industrial and everyday use, e.g. as magnets in generators, transformers and hard disk drives. Among ferromagnetic materials, we find a subclass which undergoes deformations if an external magnetic field is applied. This effect is exploited in actuators and magnetoelastic sensors, and new fluid materials have been produced which retain their induced magnetization during the flow.
A central issue is the proper modelling of such materials. Several models exist for liquid crystals and liquid crystal flows, but up to now, none of them has provided full insight into all observed effects. For materials encompassing magnetic, elastic and perhaps even fluid-dynamic effects, the mathematical literature seems sparse in terms of models. To some extent, one can unify the modelling of nematic liquid crystals and magnetoviscoelastic materials by employing a so-called energetic variational approach.
Using the least action principle from theoretical physics, the actual task reduces to finding appropriate energies describing the observed behavior. The procedure leads to systems of evolutionary partial differential equations, which are analyzed in this work.
From the mathematical point of view, fundamental questions on existence, uniqueness and stability of solutions remain unsolved. Concerning the Ericksen-Leslie system modelling nematic liquid crystal flows, an approximation to this model is given by the so-called Ginzburg-Landau approximation. Solutions to the latter are intended to approximately represent solutions to the Ericksen-Leslie system. Indeed, we verify this presumption in two spatial dimensions. More precisely, it is shown that weak solutions of the Ginzburg-Landau approximation converge to solutions of the Ericksen-Leslie system in the energy space for all positive times of evolution. To this end, the theory of weak compactness and concentration measures developed by DiPerna and Majda for the Euler equations is used.
The second part of the work deals with a system of partial differential equations modelling magnetoviscoelastic fluids. We provide a well-posedness result in two spatial dimensions for large energies and large times. Along the verification of that conclusion, existing theory on the Ericksen-Leslie system and the harmonic map flow is deployed and suitably extended.
The Cauchy problem for a simplified shallow elastic fluid model, a 3 × 3 system of Temple type, is studied and a global weak solution is obtained by using the compensated compactness theorem coupled with total variation estimates on the first and third Riemann invariants, where the second Riemann invariant is singular near the zero layer depth (ρ = 0). This work extends in some sense the previous works of Serre (1987) and LeVeque and Temple (1985), which provided the global existence of weak solutions for 2 × 2 strictly hyperbolic systems, and of Heibig (1994) for n × n strictly hyperbolic systems with smooth Riemann invariants.
In Janssen and Reiss (1988) it was shown that in a location model of a Weibull type sample with shape parameter -1 < a < 1 the k(n) lower extremes are asymptotically local sufficient. In the present paper we show that even global sufficiency holds. Moreover, it turns out that convergence of the given statistical experiments in the deficiency metric does not only hold for compact parameter sets but for the whole real line.
Mathematical programs with equilibrium constraints (or complementarity constraints), MPECs for short, are known to be extremely difficult optimization problems. Finding local minima or suitable stationary points is a nontrivial problem. This thesis describes how the special structure of MPECs can nevertheless be exploited, and how a branch-and-bound method yields a global minimum of linear programs with equilibrium constraints (LPECs). Furthermore, this branch-and-bound algorithm is used within a filter-SQPEC method to solve general MPECs. A global convergence theorem is proved for the filter-SQPEC method. In addition, numerical results are reported for both methods.
In this thesis, different algorithms for the solution of generalized Nash equilibrium problems are developed, with a focus on global convergence properties. A globalized Newton method for the computation of normalized solutions, a nonsmooth algorithm based on an optimization reformulation of the game-theoretic problem, and a merit function approach and an interior point method for the solution of the concatenated Karush-Kuhn-Tucker system are analyzed theoretically and numerically. The interior point method turns out to be one of the best existing methods for the solution of generalized Nash equilibrium problems.
This thesis describes algorithms for the solution of linear semidefinite programs. Under a suitable regularity assumption, a semidefinite program is equivalent to its optimality conditions. We first transform the optimality conditions, respectively the central path conditions, into a nonlinear system of equations by means of matrix-valued NCP functions. This nonlinear and partially nondifferentiable system of equations is then solved with a Newton-type method. Owing to the reformulation as a nonlinear system of equations, the positive (semi)definiteness of the matrices involved no longer has to be enforced explicitly during the iteration. It is further shown that, in contrast to interior point methods, this approach immediately generates symmetric search directions. To obtain global convergence, various globalization strategies (line search, trust-region approach) are investigated. For the predictor-corrector method and the trust-region method under consideration, local superlinear convergence is shown under strict complementarity and nondegeneracy. The theoretical analysis of a nonsmooth Newton method yields local quadratic convergence without strict complementarity, provided the nondegeneracy assumption is suitably modified.
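The NCP-function reformulation described above has a simple scalar prototype. The following sketch (a hypothetical one-dimensional example, not the matrix-valued setting of the thesis) uses the Fischer-Burmeister function, whose zeros are exactly the solutions of a complementarity problem, and solves the resulting equation with a Newton-type iteration:

```python
import math

def fischer_burmeister(a, b):
    # phi(a, b) = 0  <=>  a >= 0, b >= 0, a * b = 0
    return a + b - math.sqrt(a * a + b * b)

def F(x):
    # hypothetical map: the 1-D complementarity problem
    # x >= 0, F(x) >= 0, x * F(x) = 0 has the solution x = 1
    return x - 1.0

def solve_ncp(x0, tol=1e-12, max_iter=50):
    # Newton-type iteration on phi(x, F(x)) = 0, here with a
    # finite-difference derivative for brevity
    x = x0
    for _ in range(max_iter):
        g = fischer_burmeister(x, F(x))
        if abs(g) < tol:
            break
        h = 1e-7
        dg = (fischer_burmeister(x + h, F(x + h)) - g) / h
        x -= g / dg
    return x

print(round(solve_ncp(3.0), 8))  # -> 1.0
```

Note that positivity of x is never enforced during the iteration; it is encoded in the zeros of the NCP function, which mirrors the advantage of the reformulation over interior point methods described above.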
We consider homogeneous spaces G/H with the same rational homotopy as a product of a 1-sphere and an (m+1)-sphere. We show that these spaces also have the rational cohomology of such a sphere product if H is connected and the quotient has dimension m+2. Furthermore, we prove that if, in addition, the fundamental group of G/H is cyclic, then G/H is locally a product of a 1-torus and of A/H, where A/H is a simply connected rational cohomology (m+1)-sphere (and hence classified). If H fails to be connected, then, with U the connected component of H, the G-action on the covering space G/U of G/H has connected stabilizers, and the results apply to G/U. To show that under the above assumptions every natural number may be realized as the order of the group of connected components of H, we calculate the cohomology of certain homogeneous spaces. We also determine the rational cohomology of the fibre bundle U --> G --> G/U if G/H meets the assumptions above. This is done by considering the respective Leray-Serre spectral sequence. The structure of the cohomology of U --> G --> G/U then gives a second proof of the structure of compact connected Lie groups acting transitively on spaces with the rational homotopy of a product of a 1-sphere and an (m+1)-sphere. Since a quotient of a homogeneous space with the same rational homotopy or cohomology as a product of a 1-sphere and an (m+1)-sphere is not simply connected, the question often arises whether a given fibre bundle or fibration is orientable. Considerable space is therefore devoted to showing that certain fibrations are orientable.
For compact connected (m+2)-manifolds with cyclic fundamental group and the rational homotopy of a product of a 1-sphere and an (m+1)-sphere we show the following: if a connected Lie group acts transitively on the manifold, then the maximal compact subgroups are either transitive, or their orbits are simply connected rational cohomology spheres of codimension 1. Homogeneous spaces with the same rational cohomology or homotopy as a product of a 1-sphere and an (m+1)-sphere play a role in the study of different types of geometrical objects. They appear, for example, as focal manifolds of isoparametric hypersurfaces with four distinct principal curvatures. Further examples of such spaces are the point spaces and the line spaces of compact connected generalized quadrangles. We determine the isometry groups of isoparametric hypersurfaces with four principal curvatures of multiplicities 1 and m which are transitive on the focal manifold with non-trivial fundamental group. Buildings were introduced by Jacques Tits to give interpretations of simple groups of Lie type. They are a far-reaching generalization of projective spaces, in particular of projective planes. There is another generalization of projective planes, called generalized polygons; a projective plane is the same as a generalized triangle. Generalized polygons are also contained in the class of buildings: they are the buildings of rank 2. To a compact quadrangle one can assign a pair of natural numbers called the topological parameters of the quadrangle. We treat the case k=1. It turns out that there are no point-transitive compact connected Lie groups for (1,m)-quadrangles other than those for the real orthogonal quadrangles.
Furthermore, we solve the problem of three infinite series of group actions which Kramer left as open problems: there are no quadrangles with the homogeneous spaces in question as point spaces (up to possibly a finite number of small parameters in one of the three series).
This thesis deals with the hp-finite element method (FEM) for linear-quadratic optimal control problems. Here, a tracking-type functional with control costs as regularization is minimized subject to an elliptic partial differential equation. In the presence of control constraints, the first-order necessary conditions, which are typically used to find optimal solutions numerically, can be formulated as a semi-smooth projection formula. Consequently, optimal solutions may be non-smooth as well. The hp-discretization technique takes this fact into account and approximates rough functions on fine meshes, while using higher-order finite elements on domains where the solution is smooth.
The first main achievement of this thesis is the successful application of hp-FEM to two related problem classes: Neumann boundary and interface control problems. They are solved with a-priori refinement strategies called boundary concentrated (bc) FEM and interface concentrated (ic) FEM, respectively. These strategies generate grids that are heavily refined towards the boundary or the interface. We construct an elementwise interpolant that allows us to prove algebraic decay of the approximation error for both techniques. Additionally, a detailed analysis of the global and local regularity of solutions, which is critical for the speed of convergence, is included. Since the bc- and ic-FEM retain small polynomial degrees for elements touching the boundary or the interface, respectively, we are able to deduce novel error estimates in the L2- and L∞-norm. The latter enables an a-priori strategy for updating the regularization parameter in the objective functional in order to solve bang-bang problems.
Furthermore, we apply the traditional idea of the hp-FEM, i.e., grading the mesh geometrically towards the vertices of the domain, to optimal control problems (vc-FEM). In doing so, we obtain exponential convergence with respect to the number of unknowns. This is proved by means of a regularity result in countably normed spaces for the variables of the coupled optimality system.
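Geometric mesh grading towards a vertex, as used in the vc-FEM, can be sketched in one dimension. In the following, the grading factor and the linear degree distribution are illustrative assumptions, not the parameters of the thesis:

```python
def graded_mesh(sigma=0.15, levels=5):
    # element boundaries 0, sigma**(levels-1), ..., sigma, 1 on [0, 1],
    # geometrically graded towards the vertex at x = 0; the polynomial
    # degree grows linearly with the distance from the vertex
    points = [0.0] + [sigma ** (levels - 1 - k) for k in range(levels)]
    degrees = [k + 1 for k in range(levels)]   # one degree per element
    return points, degrees

points, degrees = graded_mesh()
print(points)   # graded towards 0: 0.0, 0.15**4, ..., 0.15, 1.0
print(degrees)  # [1, 2, 3, 4, 5]
```

Near the vertex, tiny low-order elements resolve the (possibly singular) solution; away from it, few large high-order elements exploit smoothness, which is the mechanism behind the exponential convergence mentioned above.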
The second main achievement of this thesis is the development of a fully adaptive hp-interior point method that can solve problems with distributed or Neumann control. The underlying barrier problem yields a nonlinear optimality system, which poses a numerical challenge: the numerically stable evaluation of integrals over possibly singular functions in higher-order elements. We overcome this difficulty by monitoring the control variable at the integration points and enforcing feasibility in an additional smoothing step. In this work, we prove convergence of an interior point method with smoothing step and derive a-posteriori error estimators. The adaptive mesh refinement is based on the expansion of the solution in a Legendre series; the decay of the coefficients serves as a smoothness indicator that guides the choice between h- and p-refinement.
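The Legendre-coefficient smoothness indicator can be illustrated in one dimension. The decay-rate heuristic below is our own illustration, not the estimator of the thesis: it fits a Legendre series and compares the coefficient decay for a smooth and a non-smooth function; fast decay suggests p-refinement, slow decay h-refinement.

```python
import numpy as np

def legendre_decay_rate(f, degree=10):
    # fit a Legendre series on [-1, 1] and estimate the decay rate
    # sigma in |c_k| ~ C * exp(-sigma * k) via a least-squares fit
    # of log|c_k| against k; larger sigma indicates a smoother function
    x = np.linspace(-1.0, 1.0, 200)
    series = np.polynomial.legendre.Legendre.fit(x, f(x), degree,
                                                 domain=[-1, 1])
    c = np.abs(series.coef) + 1e-16   # avoid log(0)
    k = np.arange(len(c))
    return -np.polyfit(k, np.log(c), 1)[0]

# analytic function: fast coefficient decay -> p-refinement;
# kink at x = 0: slow decay -> h-refinement
print(legendre_decay_rate(np.exp) > legendre_decay_rate(np.abs))  # -> True
```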
The thesis ’Hurwitz’s Complex Continued Fractions - A Historical Approach and Modern Perspectives’ deals with two branches of mathematics: Number Theory and the History of Mathematics. At first glance this combination might seem unexpected; on closer inspection, however, it proves very fruitful. When doing research in mathematics, it turns out to be very helpful to be aware of the origins and development of the subject in question.
In the case of Complex Continued Fractions the origins can easily be traced back to the end of the 19th century (see [Perron, 1954, vol. 1, Ch. 46]). One of their godfathers was the famous mathematician Adolf Hurwitz. While studying his transfer of continued fraction theory from the real to the complex setting [Hurwitz, 1888], our attention was caught by the article ’Ueber eine besondere Art der Kettenbruch-Entwicklung complexer Grössen’ [Hurwitz, 1895] from 1895 by an author called J. Hurwitz. We were surprised to find out that he was Adolf's elder, little-known brother Julius; moreover, Julius Hurwitz introduced a complex continued fraction that also appeared (unmentioned) in a work on ergodic theory from 1985 [Tanaka, 1985]. These observations formed the basis of our main research questions:
What is the historical background of Adolf and Julius Hurwitz and their mathematical studies? and What modern perspectives are provided by their complex continued fraction expansions?
In this work we examine complex continued fractions from various viewpoints. After a brief introduction to real continued fractions, we first devote ourselves to the lives of the brothers Adolf and Julius Hurwitz. Two excursions on selected historical aspects of their work complete this historical chapter. We then shed light on the approaches of both Adolf and Julius Hurwitz to complex continued fraction expansions.
Correspondingly, in the following chapter we take a more modern perspective. Highlights are an ergodic theoretical result, namely a variation on the Döblin-Lenstra Conjecture [Bosma et al., 1983], as well as a result on transcendental numbers in the tradition of Roth's theorem [Roth, 1955]. In two subsequent chapters we are concerned with arithmetical properties of complex continued fractions. Firstly, an analogue of Marshall Hall's theorem from 1947 [Hall, 1947] on sums of continued fractions is derived. Secondly, a general approach to new types of continued fractions is presented, building on the structural properties of lattices. Finally, in the last chapter we take up this approach and obtain an upper bound for the approximation quality of diophantine approximations by quotients of lattice points in the complex plane, generalizing a method of Hermann Minkowski, later improved by Hilde Gintner [Gintner, 1936], based on ideas from the geometry of numbers.
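A minimal sketch of a Hurwitz-type complex continued fraction expansion, based on the well-known nearest-Gaussian-integer rule, may illustrate the object of study (the input value and the termination tolerance below are our own choices):

```python
def nearest_gaussian_integer(z):
    # round real and imaginary part to the nearest integer
    return complex(round(z.real), round(z.imag))

def hurwitz_expansion(z, n_terms=10):
    # partial quotients a_0, a_1, ...: subtract the nearest Gaussian
    # integer, then invert the remainder (stop once it vanishes)
    digits = []
    for _ in range(n_terms):
        a = nearest_gaussian_integer(z)
        digits.append(a)
        z -= a
        if abs(z) < 1e-12:       # Gaussian rational reached exactly
            break
        z = 1.0 / z
    return digits

def evaluate(digits):
    # fold the partial quotients back into a complex number
    value = digits[-1]
    for a in reversed(digits[:-1]):
        value = a + 1.0 / value
    return value

z = 0.5 + 0.75j  # an illustrative Gaussian rational
print(abs(evaluate(hurwitz_expansion(z)) - z) < 1e-9)  # -> True
```

Since every remainder has modulus at most sqrt(2)/2 < 1, each inverted remainder has modulus greater than 1, so all partial quotients after the first are nonzero Gaussian integers; this is what makes the expansion and its reconstruction well defined.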
This work deals with a class of nonlinear dynamical systems exhibiting both continuous and discrete dynamics, which are called hybrid dynamical systems.
We provide a broader framework of generalized hybrid dynamical systems allowing us to handle issues of modeling, stability, and interconnection.
Various sufficient stability conditions are proposed via extensions of the direct Lyapunov method.
We also explicitly show Lyapunov formulations of the nonlinear small-gain theorems for interconnected input-to-state stable hybrid dynamical systems.
Applications to the modeling and stability of hybrid dynamical systems are given by effective vaccination strategies for controlling the spread of disease in epidemic systems.
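The interplay of continuous flow and discrete jumps in such vaccination models can be illustrated with a minimal sketch. All parameter values and the explicit Euler discretization below are illustrative assumptions, not the models analyzed in the thesis: pulse vaccination periodically removes a fraction of the susceptibles and thereby lowers the epidemic peak.

```python
def peak_infected(beta=0.3, gamma=0.1, p=0.0, pulse_every=10.0,
                  t_end=100.0, dt=0.01):
    # continuous flow: classical SIR dynamics (explicit Euler steps);
    # discrete jumps: every pulse_every time units a fraction p of the
    # susceptibles is vaccinated, i.e. removed from the S compartment
    s, i = 0.99, 0.01
    t, next_pulse = 0.0, pulse_every
    peak = i
    while t < t_end:
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s += dt * ds
        i += dt * di
        t += dt
        if p > 0.0 and t >= next_pulse:
            s *= 1.0 - p            # jump map of the hybrid system
            next_pulse += pulse_every
        peak = max(peak, i)
    return peak

# pulse vaccination lowers the epidemic peak
print(peak_infected(p=0.4) < peak_infected(p=0.0))  # -> True
```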