Purpose: Scarring after glaucoma filtering surgery remains the most frequent cause of bleb failure. The aim of this study was to assess whether the postoperative injection of bevacizumab reduces the number of postoperative subconjunctival 5-fluorouracil (5-FU) injections. Furthermore, the effect of bevacizumab as an adjunct to 5-FU on the intraocular pressure (IOP) outcome, bleb morphology, postoperative medications, and complications was evaluated.
Methods: Glaucoma patients (N = 61) who underwent trabeculectomy with mitomycin C were analyzed retrospectively (follow-up period of 25 ± 19 months). Surgery was performed exclusively by one experienced glaucoma specialist using a standardized technique. Patients in group 1 received subconjunctival applications of 5-FU postoperatively. Patients in group 2 received 5-FU and subconjunctival injection of bevacizumab.
Results: Group 1 had 6.4 ± 3.3 (range 0–15) 5-FU injections (mean ± standard deviation). Group 2 had 4.0 ± 2.8 (range 0–12) 5-FU injections. The added injection of bevacizumab significantly reduced the mean number of 5-FU injections by 2.4 ± 3.08 (P ≤ 0.005). IOP was not significantly lower in group 2 than in group 1. A significant reduction in vascularization and in corkscrew vessels was found in both groups (P < 0.0001, 7 days to last 5-FU), yet there was no difference between the two groups at the last follow-up. Postoperative complications were significantly more frequent in both groups when more 5-FU injections were applied (P = 0.008). No significant difference between the two groups was found in best corrected visual acuity (P = 0.852) or visual field testing (P = 0.610) from the preoperative examination to the last follow-up.
Conclusion: The postoperative injection of bevacizumab reduced the number of subconjunctival 5-FU injections significantly by 2.4 injections. A significant difference in postoperative IOP reduction, bleb morphology, and postoperative medication was not detected.
This thesis gives an overview of the mathematical modeling of complex fluids, with a discussion of the underlying mechanical principles, an introduction to the energetic variational framework, and examples and applications. The purpose is to present a formal energetic variational treatment of energies corresponding to the models of physical phenomena and to derive PDEs for the complex fluid systems. The advantages of this approach over force-based modeling are, e.g., that for complex systems energy terms can be established in a relatively easy way, that force components within a system are not counted twice, and that this approach can naturally combine effects on different scales. We follow a lecture on complex fluids by Professor Dr. Chun Liu from Penn State University, USA, which he gave at the University of Wuerzburg during his Giovanni Prodi professorship in summer 2012. We elaborate on this lecture, also consider parts of his work and publications, and substantially extend the lecture by our own calculations and arguments (for papers giving an overview of the energetic variational treatment see [HKL10], [Liu11] and references therein).
In this thesis it is shown how the spread of infectious diseases can be described via mathematical models that capture the dynamic behavior of epidemics. Ordinary differential equations are used for the modeling process. SIR and SIRS models are distinguished, depending on whether a disease confers immunity to individuals after recovery or not. There are characteristic parameters for each disease, such as the infection rate or the recovery rate. These parameters indicate how aggressively a disease spreads and how long it takes for an individual to recover, respectively. In general the parameters are time-varying and depend on population groups. For this reason, models with multiple subgroups are introduced, and switched systems are used to handle time-varying parameters.
When investigating such models, the so-called disease-free equilibrium is of interest, in which no infectives appear within the population. The question is whether there are conditions under which this equilibrium is stable. Necessary mathematical tools for the stability analysis are presented. The theory of ordinary differential equations, including Lyapunov stability theory, is fundamental. Moreover, convex and nonsmooth analysis, positive systems and differential inclusions are introduced. With these tools, sufficient conditions are given for the disease-free equilibrium of SIS, SIR and SIRS systems to be asymptotically stable.
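As a concrete illustration of the kind of model discussed above, the following minimal Python sketch integrates the basic SIR system; the parameter values beta and gamma and the initial condition are illustrative choices, not taken from the thesis:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Basic SIR model with constant rates (illustrative values)
    beta, gamma = 0.3, 0.1   # infection rate, recovery rate

    def sir(t, y):
        S, I, R = y
        return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

    sol = solve_ivp(sir, (0, 160), [0.99, 0.01, 0.0])
    S, I, R = sol.y[:, -1]
    print(f"final state: S={S:.3f}, I={I:.3f}, R={R:.3f}")
    # If beta/gamma <= 1, the infective fraction I decays and the trajectory
    # approaches a disease-free equilibrium (I = 0).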
The Riemann zeta-function forms a central object in multiplicative number theory; its value-distribution encodes deep arithmetic properties of the prime numbers. Here, a crucial role is assigned to the analytic behavior of the zeta-function on the so-called critical line. In this thesis we study the value-distribution of the Riemann zeta-function near and on the critical line. Among other topics, we focus on the following.
PART I: A modified concept of universality, a-points near the critical line and a denseness conjecture attributed to Ramachandra.
The critical line is a natural boundary of the Voronin-type universality property of the Riemann zeta-function. We modify Voronin's concept by adding a scaling factor to the vertical shifts that appear in Voronin's universality theorem and investigate whether this modified concept is appropriate to keep up a certain universality property of the Riemann zeta-function near and on the critical line. It turns out that it is mainly the functional equation of the Riemann zeta-function that restricts the set of functions which can be approximated by this modified concept around the critical line.
Levinson showed that almost all a-points of the Riemann zeta-function lie in a certain funnel-shaped region around the critical line. We complement Levinson's result: Relying on arguments of the theory of normal families and the notion of filling discs, we detect a-points in this region which are very close to the critical line.
According to a folklore conjecture (often attributed to Ramachandra) one expects that the values of the Riemann zeta-function on the critical line lie dense in the complex numbers. We show that there are certain curves which approach the critical line asymptotically and have the property that the values of the zeta-function on these curves are dense in the complex numbers.
Many of our results in Part I are independent of the Euler product representation of the Riemann zeta-function and apply more generally to meromorphic functions that satisfy a Riemann-type functional equation.
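For orientation, the value-distribution statements of Part I can be explored numerically; the following sketch samples the zeta-function at points approaching the critical line using mpmath (the particular approach curve 1/log t is an illustrative choice, not one of the curves constructed in the thesis):

    from mpmath import mp, zeta

    mp.dps = 15
    # Sample zeta(1/2 + delta(t) + i t) with delta(t) = 1/log(t) -> 0
    for t in (50, 500, 5000):
        s = mp.mpf("0.5") + 1 / mp.log(t) + 1j * t
        print(t, zeta(s))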
PART II: Discrete and continuous moments.
The Lindelöf hypothesis deals with the growth behavior of the Riemann zeta-function on the critical line. Due to classical works by Hardy and Littlewood, the Lindelöf hypothesis can be reformulated in terms of power moments to the right of the critical line. Tanaka showed recently that the expected asymptotic formulas for these power moments are true in a certain measure-theoretical sense; roughly speaking he omits a set of Banach density zero from the path of integration of these moments. We provide a discrete and integrated version of Tanaka's result and extend it to a large class of Dirichlet series connected to the Riemann zeta-function.
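For context, the Hardy-Littlewood reformulation alluded to above can be stated, in one common form, as follows: the Lindelöf hypothesis holds if and only if, for every fixed sigma > 1/2 and every positive integer k,

    \[
      \frac{1}{T}\int_0^T \bigl|\zeta(\sigma + it)\bigr|^{2k}\, dt
      \;\longrightarrow\; \sum_{n=1}^{\infty} \frac{d_k(n)^2}{n^{2\sigma}}
      \qquad (T \to \infty),
    \]

where d_k(n) denotes the number of ways of writing n as a product of k factors. This is a standard formulation for orientation only; the precise version used by Tanaka and in this thesis may differ in detail.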
This paper presents an alternative approach for obtaining a converse Lyapunov theorem for discrete-time systems. The proposed approach is constructive, as it provides an explicit Lyapunov function. The developed converse theorem establishes existence of global Lyapunov functions for globally exponentially stable (GES) systems and semi-global practical Lyapunov functions for globally asymptotically stable systems. Furthermore, for specific classes of systems, the developed converse theorem can be used to establish non-conservatism of a particular type of Lyapunov functions. Most notably, a proof that conewise linear Lyapunov functions are non-conservative for GES conewise linear systems is given and, as a by-product, tractable construction of polyhedral Lyapunov functions for linear systems is attained.
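As a small numerical companion (not the paper's construction): for a GES linear system x_{k+1} = A x_k, a quadratic Lyapunov function can be obtained from the discrete Lyapunov equation; the matrices below are illustrative.

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    A = np.array([[0.5, 0.2], [0.0, 0.8]])   # Schur stable (eigenvalues inside unit disc)
    Q = np.eye(2)
    P = solve_discrete_lyapunov(A.T, Q)      # solves A' P A - P + Q = 0

    x = np.array([1.0, -2.0])
    V_now  = x @ P @ x
    V_next = (A @ x) @ P @ (A @ x)
    print(P)
    print(V_next - V_now)   # decrease along trajectories: V(Ax) - V(x) = -x' Q x < 0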
The Factorization Method is a noniterative method to detect the shape and position of conductivity anomalies inside an object. The method was introduced by Kirsch for inverse scattering problems and extended to electrical impedance tomography (EIT) by Brühl and Hanke. Since these pioneering works, substantial progress has been made on the theoretical foundations of the method. The necessary assumptions have been weakened, and the proofs have been considerably simplified. In this work, we aim to summarize this progress and present a state-of-the-art formulation of the Factorization Method for EIT with continuous data. In particular, we formulate the method for general piecewise analytic conductivities and give short and self-contained proofs.
We introduce a mathematical framework for extreme value theory in the space of continuous functions on compact intervals and provide basic definitions and tools. Continuous max-stable processes on [0,1] are characterized by their “distribution functions” G, which can be represented via a norm on function space, called the D-norm. The high conformity of this setup with the multivariate case leads to the introduction of a functional domain of attraction approach for stochastic processes, which is more general than the usual one based on weak convergence. We also introduce the concept of “sojourn time transformation” and compare several types of convergence on function space. Again in complete accordance with the uni- or multivariate case, it is now possible to obtain functional generalized Pareto distributions (GPD) W via W = 1 + log(G) in the upper tail. In particular, this enables us to derive characterizations of the functional domain of attraction condition for copula processes. Moreover, we investigate the sojourn time above a high threshold of a continuous stochastic process Y. It turns out that the limit, as the threshold increases, of the expected sojourn time given that it is positive exists if the copula process corresponding to Y is in the functional domain of attraction of a max-stable process. If the process is in a certain neighborhood of a generalized Pareto process, then we can replace the constant threshold by a general threshold function and we can compute the asymptotic sojourn time distribution.
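In this setting, the relation between max-stable distribution functions, D-norms and functional GPDs can be summarized roughly as follows (a sketch of the standard representation; the precise domains and sign conventions follow the thesis):

    \[
      G(f) = \exp\bigl(-\|f\|_D\bigr), \qquad
      W(f) = 1 + \log G(f) = 1 - \|f\|_D,
    \]

for suitable continuous functions f \le 0 with \|f\|_D sufficiently small.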
Mathematica is an excellent program for carrying out mathematical computations, even very complex ones, in a relatively simple way. This script is intended to give a really short introduction to Mathematica and to serve as a reference for some common applications of Mathematica. The following rough outline is used: - Basics: graphical interface, simple computations, entering formulas - Usage: presentation of some commands and insight into how they work - Practice: worked computations of some Abitur and exercise problems
In this thesis, time-optimal control of the bi-steerable robot is addressed. The bi-steerable robot, a vehicle with two independently steerable axles, is a complex nonholonomic system with applications in many areas of land-based robotics. Motion planning and optimal control are challenging tasks for this system, since standard control schemes do not apply. The model of the bi-steerable robot considered here is a reduced kinematic model with the driving velocity and the steering angles of the front and rear axle as inputs. The steering angles of the two axles can be set independently from each other. The reduced kinematic model is a control system with affine and non-affine inputs, as the driving velocity enters the system linearly, whereas the steering angles enter nonlinearly. In this work, a new approach to solve the time-optimal control problem for the bi-steerable robot is presented. In contrast to most standard methods for time-optimal control, our approach does not exclusively rely on discretization and purely numerical methods. Instead, the Pontryagin Maximum Principle is used to characterize candidates for time-optimal solutions. The resulting boundary value problem is solved by optimization to obtain solutions to the path planning problem over a given time horizon. The time horizon is decreased and the path planning is iterated to approximate a time-optimal solution. An optimality condition is introduced which depends on the number of cusps, i.e., reversals of the driving direction of the robot. This optimality condition allows us to single out non-optimal solutions with too many cusps. In general, our approach only gives approximations of time-optimal solutions, since only normal regular extremals are considered as solutions to the path planning problem, and the path planning is terminated when an extremal with the minimal number of cusps is found. However, for most desired configurations, normal regular extremals with the minimal number of cusps provide time-optimal solutions for the bi-steerable robot. The convergence of the approach is analyzed and its probabilistic completeness is shown. Moreover, simulation results on time-optimal solutions for the bi-steerable robot are presented.
This thesis is devoted to numerical verification of optimality conditions for non-convex optimal control problems. In the first part, we are concerned with a-posteriori verification of sufficient optimality conditions. It is common knowledge that verification of such conditions for general non-convex PDE-constrained optimization problems is very challenging. We propose a method to verify second-order sufficient conditions for a general class of optimal control problems. If the proposed verification method confirms the fulfillment of the sufficient condition, then a-posteriori error estimates can be computed. A special ingredient of our method is an error analysis for the Hessian of the underlying optimization problem. We derive conditions under which positive definiteness of the Hessian of the discrete problem implies positive definiteness of the Hessian of the continuous problem. The results are complemented with numerical experiments. In the second part, we investigate adaptive methods for optimal control problems with finitely many control parameters. We analyze a-posteriori error estimates based on verification of second-order sufficient optimality conditions using the method developed in the first part. Reliability and efficiency of the error estimator are shown. Through numerical experiments, we illustrate the use of the estimator in guiding adaptive mesh refinement.
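A minimal sketch of the kind of check involved: positive definiteness of a finite-dimensional (discrete) Hessian can be verified via its smallest eigenvalue; the matrix below is an arbitrary example, and the thesis's contribution is the error analysis that transfers this property to the continuous Hessian.

    import numpy as np

    # Check positive definiteness of a (discrete) Hessian via its spectrum
    H = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 0.5],
                  [0.0, 0.5, 2.0]])
    lam_min = np.linalg.eigvalsh(H).min()
    print("smallest eigenvalue:", lam_min,
          "-> positive definite" if lam_min > 0 else "-> not positive definite")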
The analysis of real data by means of statistical methods with the aid of a software package common in industry and administration usually is not an integral part of mathematics studies, but it will certainly be part of future professional work. The present book links up elements from time series analysis with a selection of statistical procedures used in general practice, including the statistical software package SAS. Consequently, this book addresses students of statistics as well as students of other branches such as economics, demography and engineering, where lectures on statistics belong to their academic training. But it is also intended for the practitioner who, beyond the use of statistical tools, is interested in their mathematical background. Numerous problems illustrate the applicability of the presented statistical procedures, where SAS gives the solutions. The programs used are explicitly listed and explained. No previous experience with SAS or with any particular computer system is expected, so only a short training period is needed. This book is meant for a two-semester course (lecture, seminar or practical training) where the first three chapters can be dealt with in the first semester. They provide the principal components of the analysis of a time series in the time domain. Chapters 4, 5 and 6 deal with its analysis in the frequency domain and can be worked through in the second term. In order to understand the mathematical background, some terms are useful, such as convergence in distribution, stochastic convergence and maximum likelihood estimator, as well as a basic knowledge of test theory, so that work on the book can start after an introductory lecture on stochastics. Each chapter includes exercises. An exhaustive treatment is recommended. Chapter 7 (case study) deals with a practical case and demonstrates the presented methods. It is possible to use this chapter independently in a seminar or practical training course if the concepts of time series analysis are already well understood. This book is consistently subdivided into a statistical part and an SAS-specific part. For better clarity, the SAS-specific parts are highlighted. This book is an open source project under the GNU Free Documentation License.
In the verification of positive Harris recurrence of multiclass queueing networks, the stability analysis for the class of fluid networks is of vital interest. This thesis addresses stability of fluid networks from a Lyapunov point of view. In particular, the focus is on converse Lyapunov theorems. To gain a unified approach, the considerations are based on generic properties that fluid networks under widely used disciplines have in common. It is shown that the class of closed generic fluid network models (closed GFNs) is too wide to provide a reasonable Lyapunov theory. To overcome this fact, the class of strict generic fluid network models (strict GFNs) is introduced. In this class it is required that closed GFNs additionally satisfy a concatenation and a lower semicontinuity condition. We show that for strict GFNs a converse Lyapunov theorem is true which provides a continuous Lyapunov function. Moreover, it is shown that for strict GFNs satisfying a trajectory estimate a smooth converse Lyapunov theorem holds. To see that widely used queueing disciplines fulfill the additional conditions, fluid networks are considered from a differential inclusions perspective. Within this approach it turns out that fluid networks under general work-conserving, priority and proportional processor-sharing disciplines define strict GFNs. Furthermore, we provide an alternative proof of the fact that the Markov process underlying a multiclass queueing network is positive Harris recurrent if the associated fluid network, defining a strict GFN, is stable. The proof explicitly uses the Lyapunov function admitted by the stable strict GFN. Also, the differential inclusions approach shows that first-in-first-out disciplines play a special role.
We study the symmetrised rank-one convex hull of monoclinic-I martensite (a twelve-variant material) in the context of geometrically-linear elasticity. We construct sets of T3s, which are (non-trivial) symmetrised rank-one convex hulls of 3-tuples of pairwise incompatible strains. Moreover we construct a five-dimensional continuum of T3s and show that its intersection with the boundary of the symmetrised rank-one convex hull is four-dimensional. We also show that there is another kind of monoclinic-I martensite with qualitatively different semi-convex hulls which, so far as we know, has not been experimentally observed. Our strategy is to combine understanding of the algebraic structure of symmetrised rank-one convex cones with knowledge of the faceting structure of the convex polytope formed by the strains.
Consider the situation where two or more images are taken of the same object. After taking the first image, the object is moved or rotated, so that the second recording depicts it in a different manner. Additionally, the imaging technique itself may have been changed. One of the main problems in image processing is to determine the spatial relation between such images. The corresponding process of finding the spatial alignment is called “registration”. In this work, we study the optimization problem which corresponds to the registration task. In particular, we exploit the Lie group structure of the set of transformations to construct efficient, intrinsic algorithms. We also apply the algorithms to medical registration tasks. However, the methods developed are not restricted to the field of medical image processing. We also take a closer look at more general forms of optimization problems and show connections to related tasks.
Argumentation and proof have played a fundamental role in mathematics education in recent years. The author of this dissertation investigates the development of the proving process within a dynamic geometry system in order to support tertiary students' understanding of the proving process. The strengths of this dynamic system stimulate students to formulate conjectures and produce arguments during the proving process. Through empirical research, we classified different levels of proving and proposed a methodological model for proving. This methodological model contributes to improving students' levels of proving and to developing their dynamic visual thinking. We used the Toulmin model of argumentation as a theoretical model to analyze the relationship between argumentation and proof. This research also offers some possible explanations as to why students have cognitive difficulties in constructing proofs and provides mathematics educators with a deeper understanding of the proving process within a dynamic geometry system.
Applications in various research areas such as signal processing, quantum computing, and computer vision can be described as constrained optimization tasks on certain subsets of tensor products of vector spaces. In this work, we make use of techniques from Riemannian geometry and analyze optimization tasks on subsets of so-called simple tensors which can be equipped with a differentiable structure. In particular, we introduce a generalized Rayleigh-quotient function on the tensor product of Grassmannians and on the tensor product of Lagrange-Grassmannians. Its optimization enables a unified approach to well-known tasks from different areas of numerical linear algebra, such as: best low-rank approximations of tensors (data compression), computing geometric measures of entanglement (quantum computing) and subspace clustering (image processing). We perform a thorough analysis of the critical points of the generalized Rayleigh-quotient and develop intrinsic numerical methods for its optimization. Explicitly, using techniques from Riemannian optimization, we present two types of algorithms: a Newton-like algorithm and a conjugate gradient algorithm. Their performance is analyzed and compared with established methods from the literature.
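To illustrate one of the listed tasks, the following sketch computes a best rank-one approximation of a third-order tensor with the standard higher-order power method, whose objective is a Rayleigh-quotient-type function; this textbook algorithm is used here only for orientation and is not one of the Riemannian methods developed in the thesis.

    import numpy as np

    # Higher-order power method for a best rank-1 approximation of a 3-tensor
    rng = np.random.default_rng(0)
    T = rng.standard_normal((4, 5, 6))

    u, v, w = np.ones(4), np.ones(5), np.ones(6)
    u, v, w = u / np.linalg.norm(u), v / np.linalg.norm(v), w / np.linalg.norm(w)

    for _ in range(50):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)

    sigma = np.einsum('ijk,i,j,k->', T, u, v, w)   # Rayleigh-quotient-type value
    print(sigma)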
On the Fragility Index
(2011)
The Fragility Index captures the amount of risk in a stochastic system of arbitrary dimension. Its main mathematical tool is the asymptotic distribution of exceedance counts within the system, which can be derived by use of multivariate extreme value theory. The basic assumption is that the data come from a distribution which lies in the domain of attraction of a multivariate extreme value distribution. The Fragility Index itself and its extension can serve as a quantitative measure for tail dependence in arbitrary dimensions. It is linked to the well-known extremal index for stochastic processes as well as to the extremal coefficient of an extreme value distribution.
In this thesis, different algorithms for the solution of generalized Nash equilibrium problems are developed, with a focus on global convergence properties. A globalized Newton method for the computation of normalized solutions, a nonsmooth algorithm based on an optimization reformulation of the game-theoretic problem, and a merit function approach and an interior point method for the solution of the concatenated Karush-Kuhn-Tucker system are analyzed theoretically and numerically. The interior point method turns out to be one of the best existing methods for the solution of generalized Nash equilibrium problems.
In the following dissertation we consider three preconditioners of algebraic multigrid type; although they are defined for arbitrary prolongation and restriction operators, we consider them in more detail for the aggregation method. The strengthened Cauchy-Schwarz inequality and the resulting angle between the spaces are our main interests. In this context we introduce some modifications. For the problem of one-dimensional convection we obtain perfect theoretical results. Although this is not the case for more complex problems, the numerical results we present show that the modifications are also useful in these situations. Additionally, we consider a symmetric problem in the energy norm and present a simple rule for algebraic aggregation.
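For reference, the strengthened Cauchy-Schwarz inequality for two subspaces V_1, V_2 with respect to a symmetric positive definite bilinear form a(.,.) is commonly written as (standard formulation; the notation here is ours):

    \[
      |a(u,v)| \;\le\; \gamma \,\|u\|_a \,\|v\|_a
      \qquad \text{for all } u \in V_1,\; v \in V_2,
    \]

with a constant gamma in [0,1); the smallest such gamma is the cosine of the angle between the two spaces.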
The analysis of real data by means of statistical methods with the aid of a software package common in industry and administration usually is not an integral part of mathematics studies, but it will certainly be part of future professional work. The present book links up elements from time series analysis with a selection of statistical procedures used in general practice, including the statistical software package SAS. Consequently, this book addresses students of statistics as well as students of other branches such as economics, demography and engineering, where lectures on statistics belong to their academic training. But it is also intended for the practitioner who, beyond the use of statistical tools, is interested in their mathematical background. Numerous problems illustrate the applicability of the presented statistical procedures, where SAS gives the solutions. The programs used are explicitly listed and explained. No previous experience with SAS or with any particular computer system is expected, so only a short training period is needed. This book is meant for a two-semester course (lecture, seminar or practical training) where the first three chapters can be dealt with in the first semester. They provide the principal components of the analysis of a time series in the time domain. Chapters 4, 5 and 6 deal with its analysis in the frequency domain and can be worked through in the second term. In order to understand the mathematical background, some terms are useful, such as convergence in distribution, stochastic convergence and maximum likelihood estimator, as well as a basic knowledge of test theory, so that work on the book can start after an introductory lecture on stochastics. Each chapter includes exercises. An exhaustive treatment is recommended. Chapter 7 (case study) deals with a practical case and demonstrates the presented methods. It is possible to use this chapter independently in a seminar or practical training course if the concepts of time series analysis are already well understood. This book is consistently subdivided into a statistical part and an SAS-specific part. For better clarity, the SAS-specific parts are highlighted. This book is an open source project under the GNU Free Documentation License.
In this thesis we consider a reactive transport model with precipitation-dissolution reactions from the geosciences. It consists of PDEs, ODEs, algebraic equations (AEs) and complementarity conditions (CCs). After discretization of this model we obtain a huge nonlinear and nonsmooth equation system. We tackle this system with the semismooth Newton method introduced by Qi and Sun. The focus of this thesis is on the application and convergence of this algorithm. We prove that this algorithm is well defined for this problem and locally, even quadratically, convergent at a BD-regular solution. We also deal with the arising linear equation systems, which are large and sparse, and with how they can be solved efficiently. An integral part of this investigation is the boundedness of a certain matrix-valued function, which is shown in a separate chapter. As a side quest we study how extremal eigenvalues (and singular values) of certain PDE operators, which are involved in our discretized model, can be estimated accurately.
The subject of this thesis is mathematical programs with complementarity conditions (MPCC). At first, an economic example of this problem class is analyzed, the problem of effort maximization in asymmetric n-person contest games. While an analytical solution could be derived for this special problem, this is not possible for MPCCs in general. Therefore, optimality conditions which might be used for numerical approaches were considered next. More precisely, a Fritz-John result for MPCCs with stronger properties than those known so far was derived, together with some new constraint qualifications, and subsequently used to prove an exact penalty result. Finally, to solve MPCCs numerically, the so-called relaxation approach was used. Besides improving the results for existing relaxation methods, a new relaxation with strong convergence properties was suggested, and a numerical comparison of all methods based on the MacMPEC collection was conducted.
We study reachability matrices R(A, b) = [b, Ab, ..., A^(n-1) b], where A is an n × n matrix over a field K and b is a vector in K^n. We characterize those matrices that are reachability matrices for some pair (A, b). In the case of a cyclic matrix A and an n-vector of indeterminates x, we derive a factorization of the polynomial det(R(A, x)).
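A small numerical companion, assuming K is the real field: the sketch below builds R(A, b) and tests reachability via its rank (the paper's characterization is algebraic; this is only an illustration).

    import numpy as np

    # Reachability matrix R(A, b) = [b, Ab, ..., A^(n-1) b] and rank test
    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    b = np.array([0.0, 1.0])

    n = A.shape[0]
    R = np.column_stack([np.linalg.matrix_power(A, k) @ b for k in range(n)])
    print(R)
    print("reachable:", np.linalg.matrix_rank(R) == n)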
This thesis is devoted to Bernoulli Stochastics, which was initiated by Jakob Bernoulli more than 300 years ago with his masterpiece 'Ars conjectandi', which can be translated as 'Science of Prediction'. Thus, Jakob Bernoulli's Stochastics focuses on prediction, in contrast to the later emerging disciplines of probability theory, statistics and mathematical statistics. Only recently was Jakob Bernoulli's focus taken up by von Collani, who developed a unified theory of uncertainty aiming at making reliable and accurate predictions. In this thesis, teaching material as well as a virtual classroom are developed for fostering ideas and techniques initiated by Jakob Bernoulli and elaborated by Elart von Collani. The thesis is part of a comprehensive project called 'Stochastikon', which aims at introducing Bernoulli Stochastics as a unified science of prediction and measurement under uncertainty. This ambitious aim shall be reached by the development of an internet-based comprehensive system offering the science of Bernoulli Stochastics on any level of application. So far it is planned that the 'Stochastikon' system (http://www.stochastikon.com/) will consist of five subsystems. Two of them are developed and introduced in this thesis. The first one is the e-learning programme 'Stochastikon Magister' and the second one is 'Stochastikon Graphics', which provides the entire Stochastikon system with graphical illustrations. E-learning is the outcome of merging education and internet techniques. E-learning is characterized by the fact that teaching and learning are independent of place and time and of the availability of specially trained teachers. Knowledge offering as well as knowledge transfer are realized by using modern information technologies. Nowadays more and more e-learning environments are based on the internet as the primary tool for communication and presentation. E-learning presentation tools are, for instance, text files, pictures, graphics, audio and videos, which can be networked with each other. There need be no limit on access to teaching contents. Moreover, the students can adapt the speed of learning to their individual abilities. E-learning is particularly appropriate for newly arising scientific and technical disciplines, which generally cannot be presented sufficiently well by traditional learning methods, because neither trained teachers nor textbooks are available. The first part of this dissertation reviews the state of the art of e-learning in statistics, since statistics and Bernoulli Stochastics are both based on probability theory and exhibit many similar features. Since Stochastikon Magister is the first e-learning programme for Bernoulli Stochastics, educational statistics systems are selected for the purpose of comparison and evaluation. This makes sense as both disciplines are an attempt to handle uncertainty and use methods that can often be directly compared. The second part of this dissertation is devoted to Bernoulli Stochastics. This part aims at outlining the content of two courses, which have been developed for the anticipated e-learning programme Stochastikon Magister, in order to show the difficulties in teaching, understanding and applying Bernoulli Stochastics. The third part discusses the realization of the e-learning programme Stochastikon Magister, its design and implementation, which aims at offering a systematic learning of principles and techniques developed in Bernoulli Stochastics.
The resulting e-learning programme differs from commonly developed e-learning programmes in that it is an attempt to provide a virtual classroom that simulates all the functions of real classroom teaching. This is in general not necessary, since most e-learning programmes aim at supporting existing classroom teaching. The fourth part presents two empirical evaluations of Stochastikon Magister. The evaluations are performed by means of comparisons between traditional classroom learning in statistics and e-learning of Bernoulli Stochastics. The aim is to assess the usability and learnability of Stochastikon Magister. Finally, the fifth part of this dissertation is added as an appendix. It refers to Stochastikon Graphics, the fifth component of the entire Stochastikon system. Stochastikon Graphics provides the other components with graphical representations of concepts, procedures and results obtained or used in the framework of Bernoulli Stochastics. The primary aim of this thesis is the development of appropriate software for the anticipated e-learning environment meant for Bernoulli Stochastics, while the preparation of the necessary teaching material constitutes only a secondary aim, used to demonstrate the functionality of the e-learning platform and the scientific novelty of Bernoulli Stochastics. To this end, first versions of two teaching courses were developed, implemented and offered online in order to collect practical experience. The two courses, which were developed as part of this project, are submitted as a supplement to this dissertation. Initial experience with the e-learning programme Stochastikon Magister has now been gathered. Students of different faculties of the University of Würzburg, as well as researchers and engineers who are involved in the Stochastikon project, have obtained access to Stochastikon Magister via the internet. They have registered for Stochastikon Magister and participated in the course programme. This thesis reports on two assessments of these first experiences, and the results will lead to further improvements with respect to the content and organization of Stochastikon Magister.
Mathematica is an excellent program for carrying out mathematical computations, even very complex ones, in a relatively simple way. This script is intended to give a really short introduction to Mathematica and to serve as a reference for some common applications of Mathematica. The following rough outline is used: - Basics: graphical interface, simple computations, entering formulas - Usage: presentation of some commands and insight into how they work - Practice: worked computations of some Abitur and exercise problems
In many problems where a population is divided into different classes, it is not so much the relative class size as the number of classes that matters. A biologist, for example, is interested in how many species of a genus there are, a numismatist in how many coins or mints existed in an epoch, a computer scientist in how many distinct entries a very large database contains, a programmer in how many bugs a piece of software contains, and a scholar of German literature in how large an author's vocabulary was or is. This species richness is the simplest and most intuitive way to characterize a population. However, only in collections where the total number of elements is known and relatively small can the number of distinct species be determined by a complete census. In all other cases the number of species has to be determined by estimation.
It is well known that a multivariate extreme value distribution can be represented via the D-Norm. However, not every norm yields a D-Norm. In this thesis a necessary and sufficient condition is given for a norm to define an extreme value distribution. Applications of this theorem include a new proof for the bivariate case, the Pickands dependence function and the nested logistic model. Furthermore, the GPD-Flow is introduced and first insights are given; for example, if it converges, then it converges to the copula of complete dependence.
A new class of optimization problems named 'mathematical programs with vanishing constraints (MPVCs)' is considered. MPVCs are, on the one hand, very challenging from a theoretical viewpoint, since standard constraint qualifications such as LICQ, MFCQ, or ACQ are most often violated, and hence the Karush-Kuhn-Tucker conditions do not provide necessary optimality conditions off-hand. Thus, new CQs and the corresponding optimality conditions are investigated. On the other hand, MPVCs have important applications, e.g., in the field of topology optimization. Therefore, numerical algorithms for the solution of MPVCs are designed, investigated and tested for certain problems from truss topology optimization.
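For orientation, the characteristic constraints of an MPVC are usually written as follows (standard formulation from the literature; the notation here is ours):

    \[
      \min_x\; f(x) \quad \text{s.t.} \quad
      H_i(x) \ge 0, \quad G_i(x)\,H_i(x) \le 0, \quad i = 1,\dots,m,
    \]

possibly together with ordinary equality and inequality constraints.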
In the generalized Nash equilibrium problem, not only the cost function of a player depends on the rival players' decisions, but also his constraints. This thesis presents different iterative methods for the numerical computation of a generalized Nash equilibrium, some of them globally, others locally superlinearly convergent. These methods are based either on reformulations of the generalized Nash equilibrium problem as an optimization problem, or on a fixed point formulation. The key tool for these reformulations is the Nikaido-Isoda function. Numerical results for various problems from the literature are given.
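For reference, the Nikaido-Isoda function of an N-player game with cost functions theta_nu is commonly defined as (standard definition; the notation here is ours):

    \[
      \Psi(x,y) \;=\; \sum_{\nu=1}^{N}
      \Bigl[\theta_\nu\bigl(x^\nu, x^{-\nu}\bigr) - \theta_\nu\bigl(y^\nu, x^{-\nu}\bigr)\Bigr],
    \]

where x^{-\nu} collects the variables of all players other than player nu.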
Mathematical programs with equilibrium constraints (or complementarity constraints), MPECs for short, are known to be extremely hard optimization problems. Finding local minima or suitable stationary points is a nontrivial problem. This thesis describes how the special structure of MPECs can nevertheless be exploited and how a branch-and-bound method yields a global minimum of linear programs with equilibrium constraints (LPECs). Furthermore, this branch-and-bound algorithm is used within a filter-SQPEC method to solve general MPECs. A global convergence theorem is proved for the filter-SQPEC method. In addition, numerical results are reported for both methods.
It is well known that the least squares estimator performs poorly in the presence of multicollinearity. One way to overcome this problem is to use biased estimators, e.g. ridge regression estimators. In this study an estimation procedure is proposed that is based on adding a small quantity omega to some or all regressors. The resulting biased estimator is described as a function of omega, and furthermore it is shown that its mean squared error is smaller than that of the least squares estimator in the case of highly correlated regressors.
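A minimal sketch of the underlying effect, assuming simulated data: under near-collinear regressors the least squares estimate becomes unstable, while a biased ridge-type estimate is shrunk towards more reasonable values. The estimator actually studied in the paper (perturbing the regressors by omega) is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    x1 = rng.standard_normal(n)
    x2 = x1 + 0.01 * rng.standard_normal(n)   # nearly collinear regressors
    X = np.column_stack([x1, x2])
    beta_true = np.array([1.0, 1.0])
    y = X @ beta_true + 0.5 * rng.standard_normal(n)

    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    lam = 1.0                                  # illustrative ridge parameter
    beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
    print("OLS:  ", beta_ols)    # typically far from (1, 1) due to collinearity
    print("ridge:", beta_ridge)  # biased but usually much closer in mean squared error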
We discuss exceptional polynomials, i.e. polynomials over a finite field $k$ that induce bijections over infinitely many finite extensions of $k$. In the first chapters we give the theoretical background to characterize this class of polynomials with Galois-theoretic means. This leads to the notion of arithmetic and geometric monodromy groups. In the remaining chapters we restrict our attention to polynomials with primitive affine arithmetic monodromy group. We first classify all exceptional polynomials for which the fixed field of the affine kernel of the arithmetic monodromy group has genus at most 2. Next we show that every full affine group can be realized as the monodromy group of a polynomial. In the final chapters we classify affine polynomials of a given degree.
Controllability Aspects of the Lindblad-Kossakowski Master Equation : A Lie-Theoretical Approach
(2009)
One main task, which is considerably important for many applications in quantum control, is to explore the possibilities of steering a quantum system from an initial state to a target state. This thesis focuses on fundamental control-theoretical issues, e.g., controllability aspects and the structure of reachable sets, of quantum dynamics described by the Lindblad-Kossakowski master equation, which arises as a bilinear control system on an underlying real vector space. Based on Lie-algebraic methods from nonlinear control theory, the thesis presents a unified approach to control problems of finite-dimensional closed and open quantum systems. In particular, a simplified treatment of controllability of closed quantum systems as well as new accessibility results for open quantum systems are obtained. The main tools to derive the results are the well-known classifications of all matrix Lie groups which act transitively on Grassmann manifolds and, respectively, on real vector spaces without the origin. It is also shown in this thesis that accessibility of the Lindblad-Kossakowski master equation is a generic property. Moreover, based on the theoretical accessibility results, an algorithm is developed to decide when the Lindblad-Kossakowski master equation is accessible.
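For reference, the Lindblad-Kossakowski master equation is usually written in the standard (GKLS) form; the concrete Hamiltonian H and Lindblad operators L_k depend on the system and on the controls, which are not specified here:

    \[
      \dot\rho \;=\; -\,\mathrm{i}\,[H,\rho]
      \;+\; \sum_k \Bigl( L_k\,\rho\,L_k^{\dagger}
      \;-\; \tfrac{1}{2}\bigl\{L_k^{\dagger}L_k,\;\rho\bigr\} \Bigr).
    \]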
In the present work, linear systems of elliptic partial differential equations in weak formulation on conical domains are studied. On an initially unbounded cone we consider the case of bounded coefficient functions that depend only on the angular variables. The bilinear form defined by these coefficients is assumed to satisfy a Gårding inequality. Existence and uniqueness questions are settled in weighted Sobolev spaces, where the problem is reduced via the Fourier transform to a family T(·) of Fredholm operators depending on a complex parameter. Using the residue calculus, we obtain a representation of the solution as a decomposition into a smooth part on the one hand and a finite sum of singular functions on the other. By means of cut-off techniques, these results are applied to weakly formulated elliptic systems on bounded conical domains, stated in ordinary, unweighted Sobolev spaces. In the last chapter of the thesis, the eigenvalues of the operator function T with minimal positive imaginary part, which govern the regularity questions, are computed numerically for the example of the planar elasticity equations.
Many optimization problems for a smooth cost function f on a manifold M can be solved by determining the zeros of a vector field F, such as the gradient F of the cost function f. If F does not depend on additional parameters, numerous zero-finding techniques are available for this purpose. It is a natural generalization, however, to consider time-dependent optimization problems that require the computation of time-varying zeros of time-dependent vector fields F(x,t). Such parametric optimization problems arise in many fields of applied mathematics, in particular path-following problems in robotics, recursive eigenvalue and singular value estimation in signal processing, as well as numerical linear algebra and inverse eigenvalue problems in control theory. In the literature, there are already some tracking algorithms for these tasks, but they do not always adequately respect the manifold structure. Hence, available tracking results can often be improved by implementing methods working directly on the manifold. Thus, intrinsic methods are of interest that evolve during the entire computation on the manifold. It is the task of this thesis to develop such intrinsic zero-finding methods. The main results of this thesis are as follows: - A new class of continuous and discrete tracking algorithms is proposed for computing zeros of time-varying vector fields on Riemannian manifolds. This is achieved by studying the newly introduced time-varying Newton Flow and the time-varying Newton Algorithm on Riemannian manifolds. - Convergence analysis is performed on arbitrary Riemannian manifolds. - These results are concretized on submanifolds, including a new class of algorithms via local parameterizations. - More specific results in Euclidean space are obtained by considering inexact and underdetermined time-varying Newton Flows. - The newly introduced algorithms are illustrated by examining time-varying tracking tasks in three application areas: subspace analysis, matrix decompositions (in particular EVD and SVD) and computer vision.
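A minimal Euclidean sketch of the tracking idea (the thesis develops the intrinsic Riemannian versions): differentiating F(x(t), t) = 0 and adding a stabilizing Newton term sigma*F gives an ODE whose solutions track the time-varying zero; the test function and the gain sigma below are illustrative choices.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Track the zero of F(x, t) = x - sin(t), whose exact zero is x(t) = sin(t)
    sigma = 5.0   # stabilization gain (illustrative)

    def flow(t, x):
        F   = x[0] - np.sin(t)
        dFx = 1.0               # dF/dx
        dFt = -np.cos(t)        # dF/dt
        return [-(sigma * F + dFt) / dFx]

    sol = solve_ivp(flow, (0.0, 10.0), [1.0], max_step=0.01)
    print("tracking error at t=10:", abs(sol.y[0, -1] - np.sin(10.0)))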
The main topic of this work is the approximation of solutions of partial differential equations with Dirichlet boundary conditions by spline functions. Partial differential equations are applied, for example, in electrostatics, elasticity theory, fluid mechanics, and in the study of the propagation of heat and sound. Some approximation problems do not have a unique solution. By applying the penalized least squares method it was shown that uniqueness of the sought solution of certain minimization problems can be ensured. In some circumstances, even greater stability of the numerical method can be gained. For the numerical investigations, an extensive, efficient C program was written, which formed the basis for confirming the theoretical predictions in practical applications.
The incidence matrices of many combinatorial structures satisfy the so-called rectangular rule, i.e., the scalar product of any two rows of the matrix is at most 1. We study a class of matrices with the rectangular rule, the regular block matrices. Some regular block matrices are submatrices of incidence matrices of finite projective planes. Necessary and sufficient conditions are given for regular block matrices to be submatrices of projective planes. Moreover, regular block matrices are related to another combinatorial structure, the symmetric configurations. In particular, it turns out that we may conclude the existence of several symmetric configurations from the existence of a projective plane, using this relationship.
We investigate iterative numerical algorithms with shifts as nonlinear discrete-time control systems. Our approach is based on the interpretation of reachable sets as orbits of the system semigroup. In the first part we develop tools for the systematic analysis of the structure of reachable sets of general invertible discrete-time control systems. To this end, we merge classical concepts such as geometric control theory, semigroup actions and semialgebraic geometry. Moreover, we introduce new concepts such as right divisible systems and the repelling phenomenon. In the second part we apply the semigroup approach to the investigation of concrete numerical iteration schemes. We extend the known results about the reachable sets of classical inverse iteration. Moreover, we investigate the structure of reachable sets and system group orbits of inverse iteration on flag manifolds and Hessenberg varieties, rational iteration schemes, Richardson's method and linear control schemes. In particular we obtain necessary and sufficient conditions for controllability and the appearance of repelling phenomena. Furthermore, a new algorithm for solving linear equations (LQRES) is derived.
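For orientation, the following sketch runs classical inverse iteration with a fixed shift, the prototype of the iteration schemes analyzed here; the matrix and the shift are arbitrary illustrative choices.

    import numpy as np

    # Classical inverse iteration with a fixed shift
    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 4.0]])
    shift = 4.5
    x = np.array([1.0, 1.0, 1.0])

    for _ in range(20):
        x = np.linalg.solve(A - shift * np.eye(3), x)
        x /= np.linalg.norm(x)

    # The Rayleigh quotient approximates the eigenvalue of A closest to the shift
    print(x, x @ A @ x)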
A torsion free abelian group of finite rank is called almost completely decomposable if it has a completely decomposable subgroup of finite index. A p-local, p-reduced almost completely decomposable group of type (1,2) is briefly called a (1,2)-group. Almost completely decomposable groups can be represented by matrices over the ring Z/hZ, where h is the exponent of the regulator quotient. This particular choice of representation allows for a better investigation of the decomposability of the group. Arnold and Dugas showed in several of their works that (1,2)-groups with regulator quotient of exponent at least p^7 allow infinitely many isomorphism types of indecomposable groups. It is not known if the exponent 7 is minimal. In this dissertation, this problem is addressed.
This work studies the convergence of trajectories of gradient-like systems. In the first part of this work, continuous-time gradient-like systems are examined. Results of Lojasiewicz and Kurdyka on the convergence of integral curves of gradient systems to single points are extended to a class of gradient-like vector fields and gradient-like differential inclusions. In the second part of this work, discrete-time gradient-like optimization methods on manifolds are studied. Methods for smooth and for nonsmooth optimization problems are considered. For these methods some convergence results are proven. Additionally, the optimization methods for nonsmooth cost functions are applied to sphere packing problems on adjoint orbits.
In this thesis, affine-scaling methods for two different types of mathematical problems are considered. The first type of problems are nonlinear optimization problems subject to bound constraints. A class of new affine-scaling Newton-type methods is introduced. The methods are shown to be locally quadratically convergent without assuming strict complementarity of the solution. The new methods differ from previous ones mainly in the choice of the scaling matrix. The second type of problems are semismooth systems of equations with bound constraints. A new affine-scaling trust-region method for these problems is developed. The method is shown to have strong global and local convergence properties under suitable assumptions. Numerical results are presented for a number of problems arising from different areas.
The investigation of multivariate generalized Pareto distributions (GPDs) in the framework of extreme value theory has begun only lately. Recent results show that they can, as in the univariate case, be used in peaks-over-threshold approaches. In this manuscript we investigate the definition of GPDs from Section 5.1 of Falk et al. (2004), which does not differ in the area of interest from those of other authors. We first show some theoretical properties and introduce important examples of GPDs. Simulation methods are an important part of the further investigation of these distributions. We describe several methods of simulating GPDs, beginning with an efficient method for the logistic GPD. This algorithm is based on the Shi transformation, which was introduced by Shi (1995) and was used in Stephenson (2003) for the simulation of multivariate extreme value distributions of logistic type. We also present nonparametric and parametric estimation methods in GPD models. We estimate the angular density nonparametrically in arbitrary dimension, where the bivariate case turns out to be a special case. The asymptotic normality of the corresponding estimators is shown. Also in the parametric estimations, which are mainly based on maximum likelihood methods, the asymptotic normality of the estimators is shown under certain regularity conditions. Finally, the methods are applied to a real hydrological data set containing water discharges of the rivers Altmühl and Danube in southern Bavaria.
This thesis is concerned with numerical methods for solving nonlinear and mixed complementarity problems. Such problems arise from a variety of applications such as equilibrium models in economics, contact and structural mechanics problems, obstacle problems, discrete-time optimal control problems, etc. In this thesis we present a new formulation of nonlinear and mixed complementarity problems based on the Fischer-Burmeister function approach. Unlike traditional reformulations, our approach leads to an over-determined system of nonlinear equations. This has the advantage that certain drawbacks of the Fischer-Burmeister approach are avoided. Among other favorable properties of the new formulation, the natural merit function turns out to be differentiable. To solve the arising over-determined system we use a nonsmooth damped Levenberg-Marquardt-type method and investigate its convergence properties. Under mild assumptions, it can be shown that the global and local fast convergence results are comparable to those of some of the better equation-based methods. Moreover, the new method turns out to be significantly more robust than the corresponding equation-based method. For the case of large complementarity problems, however, the performance of this method suffers from the need to solve the arising linear least squares problem exactly at each iteration. Therefore, we suggest a modified version which allows inexact solutions of the least squares problems by using an appropriate iterative solver. Under certain assumptions, the favorable convergence properties of the original method are preserved. As an alternative method for mixed complementarity problems, we consider a box-constrained least squares formulation along with a projected Levenberg-Marquardt-type method. To globalize this method, trust region strategies are proposed. Several ingredients are used to improve this approach: affine scaling matrices and multi-dimensional filter techniques. Global convergence results as well as local superlinear/quadratic convergence are shown under appropriate assumptions. Combining the advantages of the new methods, a new software package for solving mixed complementarity problems is presented.
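A minimal sketch of the Fischer-Burmeister idea on a tiny complementarity problem, assuming the simple mapping F below: the conditions x >= 0, F(x) >= 0, x'F(x) = 0 are recast as the zero set of phi(a,b) = sqrt(a^2 + b^2) - a - b and attacked with a plain Newton-type iteration (a finite-difference Jacobian keeps the sketch short); this is not the over-determined Levenberg-Marquardt formulation developed in the thesis.

    import numpy as np

    def F(x):
        # Illustrative complementarity mapping; solution is x = (0.5, 0.5)
        return np.array([x[0] - 0.5, x[0] + x[1] - 1.0])

    def phi(a, b):
        # Fischer-Burmeister function, applied componentwise
        return np.hypot(a, b) - a - b

    def residual(x):
        return phi(x, F(x))

    x = np.array([1.0, 1.0])
    for _ in range(30):
        r = residual(x)
        J = np.empty((2, 2))
        h = 1e-7
        for j in range(2):
            e = np.zeros(2); e[j] = h
            J[:, j] = (residual(x + e) - r) / h   # finite-difference Jacobian column
        x = x - np.linalg.solve(J, r)

    print(x, F(x))   # expect x approx (0.5, 0.5) with F(x) approx (0, 0)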
The analysis of real data by means of statistical methods with the aid of a software package common in industry and administration usually is not an integral part of mathematics studies, but it will certainly be part of future professional work. The present book links up elements from time series analysis with a selection of statistical procedures used in general practice, including the statistical software package SAS (Statistical Analysis System). Consequently, this book addresses students of statistics as well as students of other branches such as economics, demography and engineering, where lectures on statistics belong to their academic training. But it is also intended for the practitioner who, beyond the use of statistical tools, is interested in their mathematical background. Numerous problems illustrate the applicability of the presented statistical procedures, where SAS gives the solutions. The programs used are explicitly listed and explained. No previous experience with SAS or with any particular computer system is expected, so only a short training period is needed. This book is meant for a two-semester course (lecture, seminar or practical training) where the first two chapters can be dealt with in the first semester. They provide the principal components of the analysis of a time series in the time domain. Chapters 3, 4 and 5 deal with its analysis in the frequency domain and can be worked through in the second term. In order to understand the mathematical background, some terms are useful, such as convergence in distribution, stochastic convergence and maximum likelihood estimator, as well as a basic knowledge of test theory, so that work on the book can start after an introductory lecture on stochastics. Each chapter includes exercises. An exhaustive treatment is recommended. This book is consistently subdivided into a statistical part and an SAS-specific part. For better clarity, the SAS-specific part, including the diagrams generated with SAS, always starts with a computer symbol, representing the beginning of a session at the computer, and ends with a printer symbol marking the end of this session. This book is an open source project under the GNU Free Documentation License.
An exhaustive discussion of constraint qualifications (CQs) and stationarity concepts for mathematical programs with equilibrium constraints (MPECs) is presented. It is demonstrated that all but the weakest CQ, the Guignard CQ, are too strong for a discussion of MPECs. Therefore, MPEC variants of all the standard CQs are introduced and investigated. A strongly stationary point (which is simply a KKT point) is seen to be a necessary first-order optimality condition only under the strongest CQs: MPEC-LICQ, MPEC-SMFCQ and Guignard CQ. Therefore, a whole set of KKT-type conditions is investigated. A simple approach is given to show that A-stationarity is a necessary first-order condition under the MPEC-Guignard CQ. Finally, a whole chapter is devoted to investigating M-stationarity, one of the strongest stationarity concepts, second only to strong stationarity. It is shown to be a necessary first-order condition under the MPEC-Guignard CQ, the weakest known CQ for MPECs.
The analysis of real data by means of statistical methods with the aid of a software package common in industry and administration usually is not an integral part of mathematics studies, but it will certainly be part of future professional work. The present book links up elements from time series analysis with a selection of statistical procedures used in general practice, including the statistical software package SAS (Statistical Analysis System). Consequently, this book addresses students of statistics as well as students of other branches such as economics, demography and engineering, where lectures on statistics belong to their academic training. But it is also intended for the practitioner who, beyond the use of statistical tools, is interested in their mathematical background. Numerous problems illustrate the applicability of the presented statistical procedures, where SAS gives the solutions. The programs used are explicitly listed and explained. No previous experience with SAS or with any particular computer system is expected, so only a short training period is needed. This book is meant for a two-semester course (lecture, seminar or practical training) where the first two chapters can be dealt with in the first semester. They provide the principal components of the analysis of a time series in the time domain. Chapters 3, 4 and 5 deal with its analysis in the frequency domain and can be worked through in the second term. In order to understand the mathematical background, some terms are useful, such as convergence in distribution, stochastic convergence and maximum likelihood estimator, as well as a basic knowledge of test theory, so that work on the book can start after an introductory lecture on stochastics. Each chapter includes exercises. An exhaustive treatment is recommended. This book is consistently subdivided into a statistical part and an SAS-specific part. For better clarity, the SAS-specific part, including the diagrams generated with SAS, always starts with a computer symbol, representing the beginning of a session at the computer, and ends with a printer symbol marking the end of this session. This book is an open source project under the GNU Free Documentation License.
In this work, the structure of (countable) abelian p-groups is investigated by considering the associated quasibases, which are defined as certain generating systems of the given p-group. The investigation focuses in particular on the nonseparable p-groups and their inductive quasibases.