Institut für Mathematik
In this thesis, discrete moments of the Riemann zeta-function and allied Dirichlet series are studied.
In the first part, the asymptotic value-distribution of zeta-functions is studied where the samples are taken from a Cauchy random walk on a vertical line inside the critical strip. Building on techniques by Lifshits and Weber, analogous results for the Hurwitz zeta-function are derived. Using Atkinson's dissection, this is even generalized to Dirichlet L-functions associated with a primitive character. Both results indicate that the expectation value equals one, which shows that the values of these zeta-functions are small on average.
The second part deals with the logarithmic derivative of the Riemann zeta-function on vertical lines; here the samples are taken with respect to an explicit ergodic transformation. Extending work of Steuding, discrete moments are evaluated and an equivalent formulation of the Riemann Hypothesis in terms of ergodic theory is obtained.
In the third and last part of the thesis, the phenomenon of universality with respect
to stochastic processes is studied. It is shown that certain random shifts of the zeta-function can approximate non-vanishing analytic target functions as well as we please. This result relies on Voronin's universality theorem.
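To make the notion of such discrete moments concrete, the following minimal Python sketch (not taken from the thesis; the abscissa, sample size and random seed are arbitrary choices) samples the zeta-function along a Cauchy random walk on a vertical line inside the critical strip and forms the empirical average, which the Lifshits-Weber-type results described above predict to be close to one.

# Illustrative sketch only: discrete moment of zeta along a Cauchy random walk.
import numpy as np
from mpmath import zeta

rng = np.random.default_rng(0)
sigma = 0.75                      # abscissa inside the critical strip (hypothetical choice)
N = 2000                          # number of samples along the walk
steps = rng.standard_cauchy(N)    # Cauchy-distributed increments
S = np.cumsum(steps)              # Cauchy random walk S_1, ..., S_N

# Empirical average of zeta(sigma + i S_n); expected to be close to 1 for large N.
avg = sum(zeta(complex(sigma, t)) for t in S) / N
print(avg)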
The Cauchy problem for a simplified shallow elastic fluids model, a 3 x 3 system of Temple's type, is studied and a global weak solution is obtained by using the compensated compactness theorem coupled with total variation estimates on the first and third Riemann invariants, where the second Riemann invariant is singular near the zero layer depth (rho = 0). This work extends in some sense the previous works (Serre, 1987) and (Leveque and Temple, 1985), which provided the global existence of weak solutions for 2 x 2 strictly hyperbolic systems, and (Heibig, 1994) for n x n strictly hyperbolic systems with smooth Riemann invariants.
Several aspects of the stability analysis of large-scale discrete-time systems are considered. An important feature is that the right-hand side does not have to be continuous.
In particular, constructive approaches to compute Lyapunov functions are derived and applied to several system classes.
For large-scale systems, which are considered as an interconnection of smaller subsystems, we derive a new class of small-gain results, which do not require the subsystems to be robust in some sense. Moreover, we do not only study sufficiency of the conditions, but rather state an assumption under which these conditions are also necessary.
Moreover, gain construction methods are derived for several types of aggregation, quantifying how large a prescribed set of interconnection gains can be in order that a small-gain condition holds.
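As a rough illustration of the flavour of such conditions (a textbook special case with hypothetical gain values, not the constructions of the thesis): if the subsystems are interconnected through linear gains gamma_ij collected in a nonnegative matrix Gamma, a classical small-gain condition is that the spectral radius of Gamma is strictly less than one. A minimal Python check:

# Hedged illustration of a linear-gain small-gain test.
import numpy as np

Gamma = np.array([[0.0, 0.4, 0.1],    # hypothetical interconnection gains
                  [0.3, 0.0, 0.2],
                  [0.1, 0.5, 0.0]])

spectral_radius = max(abs(np.linalg.eigvals(Gamma)))
print(f"spectral radius = {spectral_radius:.3f}",
      "-> small-gain condition holds" if spectral_radius < 1 else "-> violated")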
Analysis of discretization schemes for Fokker-Planck equations and related optimality systems
(2015)
The Fokker-Planck (FP) equation is a fundamental model in thermodynamic kinetic theories and
statistical mechanics.
In general, the FP equation appears in a number of different fields in the natural sciences, for instance in solid-state physics, quantum optics, chemical physics, theoretical biology, and circuit theory. These equations also provide a powerful means to define
robust control strategies for random models. The FP equations are partial differential equations (PDE) describing the time evolution of the probability density function (PDF) of stochastic processes.
These equations are of different types depending on the underlying stochastic process.
In particular, they are parabolic PDEs for the PDF of Ito processes, and hyperbolic PDEs for piecewise deterministic processes (PDP).
A fundamental axiom of probability calculus requires that the integral of the PDF over all the allowable state space must be equal to one, for all time. Therefore, for the purpose of accurate numerical simulation, a discretized FP equation must guarantee conservativeness of the total probability. Furthermore, since the
solution of the FP equation represents a probability density, any numerical scheme that approximates the FP equation is required to guarantee the positivity of the solution. In addition, an approximation scheme must be accurate and stable.
For these purposes, for parabolic FP equations on bounded domains, we investigate the Chang-Cooper (CC) scheme for space discretization and first- and
second-order backward time differencing. We prove that the resulting
space-time discretization schemes are accurate, conditionally stable, conservative, and preserve positivity.
Further, we discuss a finite difference discretization for the FP system corresponding to a PDP process in a bounded domain.
Next, we discuss FP equations in unbounded domains.
In this case, finite-difference or finite-element methods cannot be applied. By employing a suitable set of basis functions, spectral methods make it possible to treat unbounded domains. Since FP solutions decay exponentially at infinity, we consider Hermite functions as basis functions, which are Hermite polynomials multiplied by a Gaussian.
To this end, the Hermite spectral discretization is applied
to two different FP equations: the parabolic PDE corresponding to Ito processes, and the system of hyperbolic PDEs corresponding to a PDP process. The resulting discretized schemes are analyzed. Stability and spectral accuracy of the Hermite spectral discretization of the FP problems are proved. Furthermore, we investigate the conservativeness of the solutions of FP equations discretized with the Hermite spectral scheme.
In the last part of this thesis, we discuss optimal control problems governed by FP equations and the characterization of their solutions by optimality systems. We then investigate the Hermite spectral discretization of FP optimality systems in unbounded domains.
Within the framework of Hermite discretization, we obtain sparse-band systems of ordinary differential equations. We analyze the accuracy of the discretization schemes by showing spectral convergence in approximating the state, the adjoint, and the control variables that appear in the FP optimality systems.
To validate our theoretical estimates, we present results of numerical experiments.
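As a small illustration of the basis functions mentioned above (an independent sketch, not code from the thesis), the following Python snippet builds normalized Hermite functions, i.e. Hermite polynomials multiplied by a Gaussian, and checks their orthonormality numerically:

# Illustrative sketch: normalized Hermite functions
# h_n(x) = (2^n n! sqrt(pi))^(-1/2) H_n(x) exp(-x^2/2).
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def hermite_function(n, x):
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                          # select the physicists' polynomial H_n
    norm = 1.0 / math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return norm * hermval(x, coeffs) * np.exp(-x**2 / 2.0)

# The functions decay like a Gaussian, so a moderate interval suffices.
x = np.linspace(-15.0, 15.0, 6001)
dx = x[1] - x[0]
h2, h3 = hermite_function(2, x), hermite_function(3, x)
print((h2 * h2).sum() * dx)   # approximately 1 (normalization)
print((h2 * h3).sum() * dx)   # approximately 0 (orthogonality)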
Background
It is hypothesized that, because of higher mast cell numbers and mediator release, mastocytosis predisposes patients to systemic immediate-type hypersensitivity reactions to certain drugs, including non-steroidal anti-inflammatory drugs (NSAID).
Objective
To clarify whether patients with NSAID hypersensitivity show increased basal serum tryptase levels as a sign of underlying mast cell disease.
Methods
As part of our allergy work-up, basal serum tryptase levels were determined in all patients with a diagnosis of NSAID hypersensitivity and the severity of the reaction was graded. Patients with confirmed IgE-mediated hymenoptera venom allergy served as a comparison group.
Results
Out of 284 patients with NSAID hypersensitivity, 26 were identified with basal serum tryptase > 10.0 ng/mL (9.2%). In contrast, significantly (P = .004) more hymenoptera venom allergic patients had elevated tryptase > 10.0 ng/mL (83 out of 484; 17.1%). Basal tryptase > 20.0 ng/mL was indicative of severe anaphylaxis only in venom allergic subjects (29 patients; 4x grade 2 and 25x grade 3 anaphylaxis), but not in NSAID hypersensitive patients (6 patients; 4x grade 1, 2x grade 2).
Conclusions
In contrast to hymenoptera venom allergy, NSAID hypersensitivity does not seem to be associated with elevated basal serum tryptase levels, and levels > 20 ng/mL were not related to increased severity of the clinical reaction. This suggests that mastocytosis patients may be treated with NSAID without special precautions.
In this thesis it is shown how the spread of infectious diseases can be described via mathematical models that capture the dynamic behavior of epidemics. Ordinary differential equations are used for the modeling process. SIR and SIRS models are distinguished, depending on whether a disease confers immunity to individuals after recovery or not. There are characteristic parameters for each disease, such as the infection rate or the recovery rate. These parameters indicate how aggressively a disease spreads and how long it takes for an individual to recover, respectively. In general, the parameters are time-varying and depend on population groups. For this reason, models with multiple subgroups are introduced, and switched systems are used to handle the time-varying parameters.
When investigating such models, the so-called disease-free equilibrium is of interest, in which no infectives appear within the population. The question is whether there are conditions under which this equilibrium is stable. Necessary mathematical tools for the stability analysis are presented. The theory of ordinary differential equations, including Lyapunov stability theory, is fundamental. Moreover, convex and nonsmooth analysis, positive systems and differential inclusions are introduced. With these tools, sufficient conditions are given for the disease-free equilibrium of SIS, SIR and SIRS systems to be asymptotically stable.
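The basic SIR model mentioned above can be written as S' = -beta*S*I, I' = beta*S*I - gamma*I, R' = gamma*I, with infection rate beta and recovery rate gamma. A minimal simulation sketch in Python (the parameter values are purely illustrative and not taken from the thesis):

# Toy SIR simulation with hypothetical parameters.
from scipy.integrate import solve_ivp

beta, gamma = 0.3, 0.1            # hypothetical infection and recovery rates

def sir(t, y):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

sol = solve_ivp(sir, (0.0, 200.0), [0.99, 0.01, 0.0])
S, I, R = sol.y[:, -1]
print(f"final susceptible fraction: {S:.3f}, recovered fraction: {R:.3f}")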
In this thesis we study smoothness properties of primal and dual gap functions for generalized Nash equilibrium problems (GNEPs) and finite-dimensional quasi-variational inequalities (QVIs). These gap functions are optimal value functions of primal and dual reformulations of a corresponding GNEP or QVI as a constrained or unconstrained optimization problem. Depending on the problem type, the primal reformulation uses regularized Nikaido-Isoda or regularized gap function approaches. For player convex GNEPs and QVIs of the so-called generalized `moving set' type the respective primal gap functions are continuously differentiable. In general, however, these primal gap functions are nonsmooth for both problems. Hence, we investigate their continuity and differentiability properties under suitable assumptions. Here, our main result states that, apart from special cases, all locally minimal points of the primal reformulations are points of differentiability of the corresponding primal gap function.
Furthermore, we develop dual gap functions for a class of GNEPs and QVIs and ensuing unconstrained optimization reformulations of these problems based on an idea by Dietrich (``A smooth dual gap function solution to a class of quasivariational inequalities'', Journal of Mathematical Analysis and Applications 235, 1999, pp. 380--393). For this purpose we rewrite the primal gap functions as a difference of two strongly convex functions and employ the Toland-Singer duality theory. The resulting dual gap functions are continuously differentiable and, under suitable assumptions, have piecewise smooth gradients. Our theoretical analysis is complemented by numerical experiments. The solution methods employed make use of the first-order information established by the aforementioned theoretical investigations.
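For orientation, one standard primal construction reads as follows (a sketch following the common GNEP literature; the notation and the precise regularization may differ from the thesis): the regularized Nikaido-Isoda function and the induced gap function are

\[
\Psi_\alpha(x,y) \;=\; \sum_{\nu=1}^{N}\Bigl[\theta_\nu(x^\nu,x^{-\nu}) - \theta_\nu(y^\nu,x^{-\nu}) - \tfrac{\alpha}{2}\,\|x^\nu-y^\nu\|^2\Bigr],
\qquad
V_\alpha(x) \;=\; \max_{y\in\Omega(x)}\,\Psi_\alpha(x,y),
\]

where \(\theta_\nu\) is the cost function of player \(\nu\), \(x^{-\nu}\) collects the rival players' variables, and \(\Omega(x)\) is the joint feasible set; \(V_\alpha\) is nonnegative on the feasible set and vanishes exactly at normalized equilibria.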
In attempting to solve the regular inverse Galois problem for arbitrary subfields K of C (particularly for K=Q), a very important result by Fried and Völklein reduces the existence of regular Galois extensions F|K(t) with Galois group G to the existence of K-rational points on components of certain moduli spaces for families of covers of the projective line, known as Hurwitz spaces.
In some cases, the existence of rational points on Hurwitz spaces has been proven by theoretical criteria. In general, however, the question whether a given Hurwitz space has any rational point remains a very difficult problem. In concrete cases, it may be tackled by an explicit computation of a Hurwitz space and the corresponding family of covers.
The aim of this work is to collect and expand on the various techniques that may be used to solve such computational problems and apply them to tackle several families of Galois theoretic interest. In particular, in Chapter 5, we compute explicit curve equations for Hurwitz spaces for certain families of \(M_{24}\) and \(M_{23}\).
These are (to my knowledge) the first examples of explicitly computed Hurwitz spaces of such high genus. They might be used to realize \(M_{23}\) as a regular Galois group over Q if one manages to find suitable points on them.
Apart from the calculation of explicit algebraic equations, we produce complex approximations for polynomials with genus zero ramification of several different ramification types in \(M_{24}\) and \(M_{23}\). These may be used as starting points for similar computations.
The main motivation for these computations is the fact that \(M_{23}\) is currently the only remaining sporadic group that is not known to occur as a Galois group over Q.
We also compute the first explicit polynomials with Galois groups \(G=P\Gamma L_3(4), PGL_3(4), PSL_3(4)\) and \(PSL_5(2)\) over Q(t).
Special attention will be given to reality questions. As an application we compute the first examples of totally real polynomials with Galois groups \(PGL_2(11)\) and \(PSL_3(3)\) over Q.
As a suggestion for further research, we describe an explicit algorithmic version of "Algebraic Patching", following the theory described e.g. by M. Jarden. This could be used to tackle some problems regarding families of covers of genus g>0.
Finally, we present explicit Magma implementations for several of the most important algorithms involved in our computations.
The Riemann zeta-function forms a central object in multiplicative number theory; its value-distribution encodes deep arithmetic properties of the prime numbers. Here, a crucial role is played by the analytic behavior of the zeta-function on the so-called critical line. In this thesis we study the value-distribution of the Riemann zeta-function near and on the critical line. Amongst others, we focus on the following.
PART I: A modified concept of universality, a-points near the critical line and a denseness conjecture attributed to Ramachandra.
The critical line is a natural boundary of the Voronin-type universality property of the Riemann zeta-function. We modify Voronin's concept by adding a scaling factor to the vertical shifts that appear in Voronin's universality theorem and investigate whether this modified concept is appropriate to keep up a certain universality property of the Riemann zeta-function near and on the critical line. It turns out that it is mainly the functional equation of the Riemann zeta-function that restricts the set of functions which can be approximated by this modified concept around the critical line.
Levinson showed that almost all a-points of the Riemann zeta-function lie in a certain funnel-shaped region around the critical line. We complement Levinson's result: Relying on arguments of the theory of normal families and the notion of filling discs, we detect a-points in this region which are very close to the critical line.
According to a folklore conjecture (often attributed to Ramachandra) one expects that the values of the Riemann zeta-function on the critical line lie dense in the complex numbers. We show that there are certain curves which approach the critical line asymptotically and have the property that the values of the zeta-function on these curves are dense in the complex numbers.
Many of our results in part I are independent of the Euler product representation of the Riemann zeta-function and apply for meromorphic functions that satisfy a Riemann-type functional equation in general.
PART II: Discrete and continuous moments.
The Lindelöf hypothesis deals with the growth behavior of the Riemann zeta-function on the critical line. Due to classical works by Hardy and Littlewood, the Lindelöf hypothesis can be reformulated in terms of power moments to the right of the critical line. Tanaka showed recently that the expected asymptotic formulas for these power moments are true in a certain measure-theoretical sense; roughly speaking he omits a set of Banach density zero from the path of integration of these moments. We provide a discrete and integrated version of Tanaka's result and extend it to a large class of Dirichlet series connected to the Riemann zeta-function.
The work at hand studies problems from Loewner theory and is divided into two parts:
In part 1 (chapter 2) we present the basic notions of Loewner theory. Here we use a modern form which was developed by F. Bracci, M. Contreras, S. Díaz-Madrigal et al. and which can be applied to certain higher dimensional complex manifolds.
We look at two domains in more detail: the Euclidean unit ball and the polydisc. Here we consider two classes of biholomorphic mappings which were introduced by T. Poreda and G. Kohr as generalizations of the class S.
We prove a conjecture of G. Kohr about support points of these classes. The proof relies on the observation that the classes describe so-called Runge domains, which follows from a result by L. Arosio, F. Bracci and E. F. Wold.
Furthermore, we prove a conjecture of G. Kohr about support points of a class of biholomorphic mappings that comes from applying the Roper-Suffridge extension operator to the class S.
In part 2 (chapter 3) we consider one special Loewner equation: the chordal multiple-slit equation in the upper half-plane.
After describing basic properties of this equation, we look at the problem of whether one can choose the coefficient functions in this equation to be constant. D. Prokhorov proved this statement under the assumption that the slits are piecewise analytic. We use a completely different idea to solve the problem in its general form.
As the Loewner equation with constant coefficients holds everywhere (and not just almost everywhere), this result generalizes Loewner’s original idea to the multiple-slit case.
Moreover, we consider the following problems:
• The “simple-curve problem” asks which driving functions describe the growth of simple curves (in contrast to curves that touch themselves). We discuss necessary and sufficient conditions, generalize a theorem of J. Lind, D. Marshall and S. Rohde to the multiple-slit equation, and we give an example of a set of driving functions which generate simple curves because of a certain self-similarity property.
• We discuss properties of driving functions that generate slits which enclose a given angle with the real axis.
• A theorem by O. Roth gives an explicit description of the reachable set of one point in the radial Loewner equation. We prove the analog for the chordal equation.
Background
Referring to individuals with reactivity to honey bee and Vespula venom in diagnostic tests, the umbrella terms “double sensitization” or “double positivity” cover patients with true clinical double allergy and those allergic to a single venom with asymptomatic sensitization to the other. There is no international consensus on whether immunotherapy regimens should generally include both venoms in double sensitized patients.
Objective
We investigated the long-term outcome of single venom-based immunotherapy with regard to potential risk factors for treatment failure and specifically compared the risk of relapse in mono sensitized and double sensitized patients.
Methods
Re-sting data were obtained from 635 patients who had completed at least 3 years of immunotherapy between 1988 and 2008. The adequate venom for immunotherapy was selected using an algorithm based on clinical details and the results of diagnostic tests.
Results
Of 635 patients, 351 (55.3%) were double sensitized to both venoms. The overall re-exposure rate to Hymenoptera stings during and after immunotherapy was 62.4%; the relapse rate was 7.1% (6.0% in mono sensitized, 7.8% in double sensitized patients). Recurring anaphylaxis was statistically less severe than the index sting reaction (P = 0.004). Double sensitization was not significantly related to relapsing anaphylaxis (P = 0.56), but there was a tendency towards an increased risk of relapse in a subgroup of patients with equal reactivity to both venoms in diagnostic tests (P = 0.15).
Conclusions
Single venom-based immunotherapy over 3 to 5 years effectively and long-lastingly protects the vast majority of both mono sensitized and double sensitized Hymenoptera venom allergic patients. Double venom immunotherapy is indicated in clinically double allergic patients reporting systemic reactions to stings of both Hymenoptera and in those with equal reactivity to both venoms in diagnostic tests who have not reliably identified the culprit stinging insect.
Human herpesvirus-6 (HHV-6) exists in latent form either as a nuclear episome or integrated into human chromosomes in more than 90% of healthy individuals without causing clinical symptoms. Immunosuppression and stress conditions can reactivate HHV-6 replication, associated with clinical complications and even death. We have previously shown that co-infection of Chlamydia trachomatis and HHV-6 promotes chlamydial persistence and increases viral uptake in an in vitro cell culture model. Here we investigated C. trachomatis-induced HHV-6 activation in cell lines and fresh blood samples from patients with chromosomally integrated HHV-6 (ciHHV-6). We observed activation of latent HHV-6 DNA replication in ciHHV-6 cell lines and fresh blood cells without formation of viral particles. Interestingly, we detected HHV-6 DNA in blood as well as cervical swabs from C. trachomatis-infected women. Low virus titers correlated with high C. trachomatis load and vice versa, demonstrating a potentially significant interaction of these pathogens in blood cells and in the cervix of infected patients. Our data suggest a thus far underestimated interference of HHV-6 and C. trachomatis with a likely impact on the disease outcome as a consequence of co-infection.
Purpose: Scarring after glaucoma filtering surgery remains the most frequent cause for bleb failure. The aim of this study was to assess if the postoperative injection of bevacizumab reduces the number of postoperative subconjunctival 5-fluorouracil (5-FU) injections. Further, the effect of bevacizumab as an adjunct to 5-FU on the intraocular pressure (IOP) outcome, bleb morphology, postoperative medications, and complications was evaluated.
Methods: Glaucoma patients (N = 61) who underwent trabeculectomy with mitomycin C were analyzed retrospectively (follow-up period of 25 ± 19 months). Surgery was performed exclusively by one experienced glaucoma specialist using a standardized technique. Patients in group 1 received subconjunctival applications of 5-FU postoperatively. Patients in group 2 received 5-FU and subconjunctival injection of bevacizumab.
Results: Group 1 received 6.4 ± 3.3 (range 0–15) 5-FU injections (mean ± standard deviation). Group 2 received 4.0 ± 2.8 (range 0–12) 5-FU injections. The added injection of bevacizumab significantly reduced the mean number of 5-FU injections by 2.4 ± 3.08 (P ≤ 0.005). IOP was not significantly lower in group 2 than in group 1. A significant reduction in vascularization and in corkscrew vessels was found in both groups (P < 0.0001, 7 days to last 5-FU), yet there was no difference between the two groups at the last follow-up. Postoperative complications were significantly more frequent in both groups when more 5-FU injections were applied (P = 0.008). No significant difference between the two groups was found in best corrected visual acuity (P = 0.852) or visual field testing (P = 0.610) from the preoperative examination to the last follow-up.
Conclusion: The postoperative injection of bevacizumab reduced the number of subconjunctival 5-FU injections significantly by 2.4 injections. A significant difference in postoperative IOP reduction, bleb morphology, and postoperative medication was not detected.
The Factorization Method is a noniterative method to detect the shape and position of conductivity anomalies inside an object. The method was introduced by Kirsch for inverse scattering problems and extended to electrical impedance tomography (EIT) by Brühl and Hanke. Since these pioneering works, substantial progress has been made on the theoretical foundations of the method. The necessary assumptions have been weakened, and the proofs have been considerably simplified. In this work, we aim to summarize this progress and present a state-of-the-art formulation of the Factorization Method for EIT with continuous data. In particular, we formulate the method for general piecewise analytic conductivities and give short and self-contained proofs.
This thesis gives an overview of mathematical modeling of complex fluids, with a discussion of the underlying mechanical principles, the introduction of the energetic variational framework, and examples and applications. The purpose is to present a formal energetic variational treatment of energies corresponding to the models of physical phenomena and to derive PDEs for the complex fluid systems. The advantages of this approach over force-based modeling are, e.g., that for complex systems energy terms can be established in a relatively easy way, that force components within a system are not counted twice, and that this approach can naturally combine effects on different scales. We follow a lecture on complex fluids by Professor Dr. Chun Liu from Penn State University, USA, which he gave at the University of Wuerzburg during his Giovanni Prodi professorship in summer 2012. We elaborate on this lecture, consider also parts of his work and publications, and substantially extend the lecture by our own calculations and arguments (for papers including an overview of the energetic variational treatment see [HKL10], [Liu11] and references therein).
Applications in various research areas such as signal processing, quantum computing, and computer vision can be described as constrained optimization tasks on certain subsets of tensor products of vector spaces. In this work, we make use of techniques from Riemannian geometry and analyze optimization tasks on subsets of so-called simple tensors which can be equipped with a differentiable structure. In particular, we introduce a generalized Rayleigh-quotient function on the tensor product of Grassmannians and on the tensor product of Lagrange-Grassmannians. Its optimization enables a unified approach to well-known tasks from different areas of numerical linear algebra, such as: best low-rank approximations of tensors (data compression), computing geometric measures of entanglement (quantum computing) and subspace clustering (image processing). We perform a thorough analysis of the critical points of the generalized Rayleigh-quotient and develop intrinsic numerical methods for its optimization. Explicitly, using techniques from Riemannian optimization, we present two types of algorithms: a Newton-like and a conjugate gradient algorithm. Their performance is analyzed and compared with established methods from the literature.
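As a pointer to one of the tasks listed above, the following Python sketch (a plain alternating power iteration on hypothetical data, not the intrinsic Riemannian algorithms of the thesis) computes a best rank-one approximation of a third-order tensor; the contracted value sigma is the kind of generalized Rayleigh-quotient value being maximized:

# Rank-1 approximation of a 3-way tensor by alternating power iteration.
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 5, 6))            # hypothetical data tensor

def rank1_power_iteration(T, iters=200):
    u, v, w = np.ones(T.shape[0]), np.ones(T.shape[1]), np.ones(T.shape[2])
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    sigma = np.einsum('ijk,i,j,k->', T, u, v, w)   # Rayleigh-quotient-type value
    return sigma, u, v, w

sigma, u, v, w = rank1_power_iteration(T)
approx = sigma * np.einsum('i,j,k->ijk', u, v, w)
print("residual norm:", np.linalg.norm(T - approx))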
Argumentation and proof have played a fundamental role in mathematics education in recent years. The author of this dissertation investigates the development of the proving process within a dynamic geometry system in order to support tertiary students in understanding the proving process. The strengths of this dynamic system stimulate students to formulate conjectures and produce arguments during the proving process. Through empirical research, we classified different levels of proving and proposed a methodological model for proving. This methodological model contributes to improving students' levels of proving and to developing their dynamic visual thinking. We used Toulmin's model of argumentation as a theoretical model to analyze the relationship between argumentation and proof. This research also offers some possible explanations as to why students have cognitive difficulties in constructing proofs and provides mathematics educators with a deeper understanding of the proving process within a dynamic geometry system.
This paper presents an alternative approach for obtaining a converse Lyapunov theorem for discrete-time systems. The proposed approach is constructive, as it provides an explicit Lyapunov function. The developed converse theorem establishes existence of global Lyapunov functions for globally exponentially stable (GES) systems and semi-global practical Lyapunov functions for globally asymptotically stable systems. Furthermore, for specific classes of systems, the developed converse theorem can be used to establish non-conservatism of a particular type of Lyapunov functions. Most notably, a proof that conewise linear Lyapunov functions are non-conservative for GES conewise linear systems is given and, as a by-product, tractable construction of polyhedral Lyapunov functions for linear systems is attained.
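For comparison only, a classical explicit construction in the linear case (quadratic rather than the conewise linear or polyhedral functions treated in the paper; the matrices are hypothetical): for a Schur stable A, solving the discrete Lyapunov equation A' P A - P = -Q yields a Lyapunov function V(x) = x' P x.

# Quadratic Lyapunov function for a GES linear discrete-time system x+ = A x.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.2],
              [0.0, 0.8]])                 # Schur stable (eigenvalues inside the unit disc)
Q = np.eye(2)
# scipy solves a X a^H - X + Q = 0; with a = A.T this gives A' P A - P = -Q.
P = solve_discrete_lyapunov(A.T, Q)

x = np.array([1.0, -2.0])
V_now = x @ P @ x
V_next = (A @ x) @ P @ (A @ x)
print(P, V_now, V_next, V_next < V_now)    # decrease along trajectories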
This thesis is devoted to the numerical verification of optimality conditions for non-convex optimal control problems. In the first part, we are concerned with a-posteriori verification of sufficient optimality conditions. It is common knowledge that verification of such conditions for general non-convex PDE-constrained optimization problems is very challenging. We propose a method to verify second-order sufficient conditions for a general class of optimal control problems. If the proposed verification method confirms the fulfillment of the sufficient condition, then a-posteriori error estimates can be computed. A special ingredient of our method is an error analysis for the Hessian of the underlying optimization problem. We derive conditions under which positive definiteness of the Hessian of the discrete problem implies positive definiteness of the Hessian of the continuous problem. The results are complemented with numerical experiments. In the second part, we investigate adaptive methods for optimal control problems with finitely many control parameters. We analyze a-posteriori error estimates based on verification of second-order sufficient optimality conditions using the method developed in the first part. Reliability and efficiency of the error estimator are shown. We illustrate, through numerical experiments, the use of the estimator in guiding adaptive mesh refinement.
In this thesis, time-optimal control of the bi-steerable robot is addressed. The bi-steerable robot, a vehicle with two independently steerable axles, is a complex nonholonomic system with applications in many areas of land-based robotics. Motion planning and optimal control are challenging tasks for this system, since standard control schemes do not apply. The model of the bi-steerable robot considered here is a reduced kinematic model with the driving velocity and the steering angles of the front and rear axle as inputs. The steering angles of the two axles can be set independently from each other. The reduced kinematic model is a control system with affine and non-affine inputs, as the driving velocity enters the system linearly, whereas the steering angles enter nonlinearly. In this work, a new approach to solve the time-optimal control problem for the bi-steerable robot is presented. In contrast to most standard methods for time-optimal control, our approach does not exclusively rely on discretization and purely numerical methods. Instead, the Pontryagin Maximum Principle is used to characterize candidates for time-optimal solutions. The resulting boundary value problem is solved by optimization to obtain solutions to the path planning problem over a given time horizon. The time horizon is decreased and the path planning is iterated to approximate a time-optimal solution. An optimality condition is introduced which depends on the number of cusps, i.e., reversals of the driving direction of the robot. This optimality condition makes it possible to single out non-optimal solutions with too many cusps. In general, our approach only gives approximations of time-optimal solutions, since only normal regular extremals are considered as solutions to the path planning problem, and the path planning is terminated when an extremal with the minimal number of cusps is found. However, for most desired configurations, normal regular extremals with the minimal number of cusps provide time-optimal solutions for the bi-steerable robot. The convergence of the approach is analyzed and its probabilistic completeness is shown. Moreover, simulation results on time-optimal solutions for the bi-steerable robot are presented.
We introduce some mathematical framework for extreme value theory in the space of continuous functions on compact intervals and provide basic definitions and tools. Continuous max-stable processes on [0,1] are characterized by their “distribution functions” G which can be represented via a norm on function space, called D-norm. The high conformity of this setup with the multivariate case leads to the introduction of a functional domain of attraction approach for stochastic processes, which is more general than the usual one based on weak convergence. We also introduce the concept of “sojourn time transformation” and compare several types of convergence on function space. Again in complete accordance with the uni- or multivariate case it is now possible to get functional generalized Pareto distributions (GPD) W via W = 1 + log(G) in the upper tail. In particular, this enables us to derive characterizations of the functional domain of attraction condition for copula processes. Moreover, we investigate the sojourn time above a high threshold of a continuous stochastic process. It turns out that the limit, as the threshold increases, of the expected sojourn time given that it is positive, exists if the copula process corresponding to Y is in the functional domain of attraction of a max-stable process. If the process is in a certain neighborhood of a generalized Pareto process, then we can replace the constant threshold by a general threshold function and we can compute the asymptotic sojourn time distribution.
On the Fragility Index
(2011)
The Fragility Index captures the amount of risk in a stochastic system of arbitrary dimension. Its main mathematical tool is the asymptotic distribution of exceedance counts within the system, which can be derived by use of multivariate extreme value theory. Thereby the basic assumption is that the data come from a distribution which lies in the domain of attraction of a multivariate extreme value distribution. The Fragility Index itself and its extension can serve as a quantitative measure for tail dependence in arbitrary dimensions. It is linked to the well-known extremal index for stochastic processes as well as to the extremal coefficient of an extreme value distribution.
We study reachability matrices \(R(A, b) = [b, Ab, \ldots, A^{n-1}b]\), where \(A\) is an \(n \times n\) matrix over a field \(K\) and \(b \in K^n\). We characterize those matrices that are reachability matrices for some pair \((A, b)\). In the case of a cyclic matrix \(A\) and an \(n\)-vector of indeterminates \(x\), we derive a factorization of the polynomial \(\det(R(A, x))\).
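A small numerical sketch of the object under study (hypothetical data, independent of the paper): building \(R(A, b)\) column by column and testing reachability via its determinant.

# Reachability matrix R(A, b) = [b, A b, ..., A^{n-1} b].
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, -2.0, 3.0]])          # companion-type (cyclic) matrix
b = np.array([0.0, 0.0, 1.0])

def reachability_matrix(A, b):
    n = A.shape[0]
    cols = [b]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    return np.column_stack(cols)

R = reachability_matrix(A, b)
print(R, np.linalg.det(R))                # (A, b) is reachable iff det(R) != 0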
We study the symmetrised rank-one convex hull of monoclinic-I martensite (a twelve-variant material) in the context of geometrically-linear elasticity. We construct sets of T3s, which are (non-trivial) symmetrised rank-one convex hulls of 3-tuples of pairwise incompatible strains. Moreover we construct a five-dimensional continuum of T3s and show that its intersection with the boundary of the symmetrised rank-one convex hull is four-dimensional. We also show that there is another kind of monoclinic-I martensite with qualitatively different semi-convex hulls which, so far as we know, has not been experimentally observed. Our strategy is to combine understanding of the algebraic structure of symmetrised rank-one convex cones with knowledge of the faceting structure of the convex polytope formed by the strains.
The analysis of real data by means of statistical methods with the aid of a software package common in industry and administration is usually not an integral part of mathematics studies, but it will certainly be part of future professional work. The present book links up elements from time series analysis with a selection of statistical procedures used in general practice, including the statistical software package SAS. Consequently, this book addresses students of statistics as well as students of other branches such as economics, demography and engineering, where lectures on statistics belong to their academic training. But it is also intended for the practitioner who, beyond the use of statistical tools, is interested in their mathematical background. Numerous problems illustrate the applicability of the presented statistical procedures, where SAS gives the solutions. The programs used are explicitly listed and explained. No previous experience is expected, either in SAS or in a special computer system, so that a short training period is guaranteed. This book is meant for a two-semester course (lecture, seminar or practical training) where the first three chapters can be dealt with in the first semester. They provide the principal components of the analysis of a time series in the time domain. Chapters 4, 5 and 6 deal with its analysis in the frequency domain and can be worked through in the second term. In order to understand the mathematical background, some notions are useful, such as convergence in distribution, stochastic convergence and the maximum likelihood estimator, as well as a basic knowledge of test theory, so that work on the book can start after an introductory lecture on stochastics. Each chapter includes exercises. An exhaustive treatment is recommended. Chapter 7 (case study) deals with a practical case and demonstrates the presented methods. It is possible to use this chapter independently in a seminar or practical training course if the concepts of time series analysis are already well understood. This book is consecutively subdivided into a statistical part and an SAS-specific part. For better clarity, the SAS-specific parts are highlighted. This book is an open source project under the GNU Free Documentation License.
In the verification of positive Harris recurrence of multiclass queueing networks, the stability analysis for the class of fluid networks is of vital interest. This thesis addresses stability of fluid networks from a Lyapunov point of view. In particular, the focus is on converse Lyapunov theorems. To gain a unified approach, the considerations are based on generic properties that fluid networks under widely used disciplines have in common. It is shown that the class of closed generic fluid network models (closed GFNs) is too wide to provide a reasonable Lyapunov theory. To overcome this fact, the class of strict generic fluid network models (strict GFNs) is introduced. In this class it is required that closed GFNs additionally satisfy a concatenation and a lower semicontinuity condition. We show that for strict GFNs a converse Lyapunov theorem is true which provides a continuous Lyapunov function. Moreover, it is shown that for strict GFNs satisfying a trajectory estimate a smooth converse Lyapunov theorem holds. To see that widely used queueing disciplines fulfill the additional conditions, fluid networks are considered from a differential inclusions perspective. Within this approach it turns out that fluid networks under general work-conserving, priority and proportional processor-sharing disciplines define strict GFNs. Furthermore, we provide an alternative proof for the fact that the Markov process underlying a multiclass queueing network is positive Harris recurrent if the associated fluid network defining a strict GFN is stable. The proof explicitly uses the Lyapunov function admitted by the stable strict GFN. Also, the differential inclusions approach shows that first-in-first-out disciplines play a special role.
In many problems in which a population is divided into different classes, it is not so much the relative class sizes as the number of classes that matters. For example, the biologist is interested in how many species of a genus there are, the numismatist in how many coins or mints existed in a given epoch, the computer scientist in how many distinct entries a very large database contains, the programmer in how many bugs a piece of software contains, and the German philologist in how large the vocabulary of an author was or is. This species richness is the simplest and most intuitive way to characterize a population. However, only in collections in which the total number of elements is known and relatively small can the number of distinct species be determined by a complete enumeration. In all other cases it is necessary to determine the number of species by estimation.
Consider the situation where two or more images are taken from the same object. After taking the first image, the object is moved or rotated so that the second recording depicts it in a different manner. Additionally, take heed of the possibility that the imaging techniques may have also been changed. One of the main problems in image processing is to determine the spatial relation between such images. The corresponding process of finding the spatial alignment is called “registration”. In this work, we study the optimization problem which corresponds to the registration task. Especially, we exploit the Lie group structure of the set of transformations to construct efficient, intrinsic algorithms. We also apply the algorithms to medical registration tasks. However, the methods developed are not restricted to the field of medical image processing. We also have a closer look at more general forms of optimization problems and show connections to related tasks.
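A tiny sketch of the kind of Lie group structure exploited here (an illustration under stated assumptions, not the registration algorithms of the thesis): planar rigid transformations form the Lie group SE(2), and a transformation can be parameterized through the matrix exponential of a Lie algebra element, which is convenient for intrinsic optimization.

# Rigid planar transformation via the exponential map of se(2).
import numpy as np
from scipy.linalg import expm

def se2(vx, vy, omega):
    # Lie algebra element for translation velocity (vx, vy) and rotation rate omega.
    return np.array([[0.0, -omega, vx],
                     [omega, 0.0, vy],
                     [0.0, 0.0, 0.0]])

g = expm(se2(1.0, 0.5, np.pi / 6))        # group element of SE(2) in homogeneous coordinates
point = np.array([2.0, 1.0, 1.0])
print(g @ point)                           # transformed point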
Mathematica is an excellent program for carrying out mathematical computations, even very complex ones, in a relatively simple way. This script is intended to give a very short introduction to Mathematica and to serve as a reference for some common applications of Mathematica. The following rough structure is used: - Basics: graphical user interface, simple computations, entering formulas - Usage: presentation of some commands and insight into how Mathematica works - Practice: worked examples of some Abitur and exercise problems
In this thesis, different algorithms for the solution of generalized Nash equilibrium problems are developed, with the focus on global convergence properties. A globalized Newton method for the computation of normalized solutions, a nonsmooth algorithm based on an optimization reformulation of the game-theoretic problem, and a merit function approach and an interior point method for the solution of the concatenated Karush-Kuhn-Tucker system are analyzed theoretically and numerically. The interior point method turns out to be one of the best existing methods for the solution of generalized Nash equilibrium problems.
In this thesis we consider a reactive transport model with precipitation-dissolution reactions from the geosciences. It consists of PDEs, ODEs, algebraic equations (AEs) and complementarity conditions (CCs). After discretization of this model we get a huge nonlinear and nonsmooth equation system. We tackle this system with the semismooth Newton method introduced by Qi and Sun. The focus of this thesis is on the application and convergence of this algorithm. We prove that this algorithm is well defined for this problem and locally, even quadratically, convergent for a BD-regular solution. We also deal with the arising linear equation systems, which are large and sparse, and how they can be solved efficiently. An integral part of this investigation is the boundedness of a certain matrix-valued function, which is shown in a separate chapter. As a side quest, we study how extremal eigenvalues (and singular values) of certain PDE operators, which are involved in our discretized model, can be estimated accurately.
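To give an impression of the algorithmic core (a toy sketch, not the reactive transport system of the thesis): the semismooth Newton method in the spirit of Qi and Sun applied to a small complementarity problem 0 <= x, F(x) >= 0, x'F(x) = 0 with affine F, reformulated with the Fischer-Burmeister function phi(a, b) = sqrt(a^2 + b^2) - a - b.

# Semismooth Newton iteration on a toy linear complementarity problem.
import numpy as np

M = np.array([[4.0, 1.0],
              [1.0, 3.0]])
q = np.array([-1.0, -2.0])
F = lambda x: M @ x + q

def fb(a, b):
    return np.hypot(a, b) - a - b

def newton_step(x):
    a, b = x, F(x)
    r = np.hypot(a, b)
    r = np.where(r < 1e-12, 1e-12, r)          # pick a Jacobian element near kinks
    Da = np.diag(a / r - 1.0)
    Db = np.diag(b / r - 1.0)
    J = Da + Db @ M                            # element of the generalized Jacobian
    return x - np.linalg.solve(J, fb(a, b))

x = np.zeros(2)
for _ in range(10):
    x = newton_step(x)
print(x, F(x))                                 # complementary solution: x >= 0, F(x) >= 0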
The subject of this thesis is mathematical programs with complementarity conditions (MPCC). At first, an economic example of this problem class is analyzed, the problem of effort maximization in asymmetric n-person contest games. While an analytical solution could be derived for this special problem, this is not possible in general for MPCCs. Therefore, optimality conditions which might be used for numerical approaches were considered next. More precisely, a Fritz-John result for MPCCs with stronger properties than those known so far was derived, together with some new constraint qualifications, and subsequently used to prove an exact penalty result. Finally, to solve MPCCs numerically, the so-called relaxation approach was used. Besides improving the results for existing relaxation methods, a new relaxation with strong convergence properties was suggested and a numerical comparison of all methods based on the MacMPEC collection was conducted.
In the following dissertation we consider three preconditioners of algebraic multigrid type; although they are defined for arbitrary prolongation and restriction operators, we consider them in more detail for the aggregation method. The strengthened Cauchy-Schwarz inequality and the resulting angle between the spaces will be our main interests. In this context we will introduce some modifications. For the problem of one-dimensional convection we obtain perfect theoretical results. Although this is not the case for more complex problems, the numerical results we present will show that the modifications are also useful in these situations. Additionally, we will consider a symmetric problem in the energy norm and present a simple rule for algebraic aggregation.
This thesis is devoted to Bernoulli Stochastics, which was initiated by Jakob Bernoulli more than 300 years ago with his masterpiece 'Ars conjectandi', which can be translated as 'Science of Prediction'. Thus, Jakob Bernoulli's Stochastics focuses on prediction, in contrast to the later emerging disciplines of probability theory, statistics and mathematical statistics. Only recently was Jakob Bernoulli's focus taken up by von Collani, who developed a unified theory of uncertainty aiming at making reliable and accurate predictions. In this thesis, teaching material as well as a virtual classroom are developed for fostering ideas and techniques initiated by Jakob Bernoulli and elaborated by Elart von Collani. The thesis is part of an extensive project called 'Stochastikon', aiming at introducing Bernoulli Stochastics as a unified science of prediction and measurement under uncertainty. This ambitious aim shall be reached by the development of an internet-based comprehensive system offering the science of Bernoulli Stochastics on any level of application. So far it is planned that the 'Stochastikon' system (http://www.stochastikon.com/) will consist of five subsystems. Two of them are developed and introduced in this thesis. The first one is the e-learning programme 'Stochastikon Magister' and the second one 'Stochastikon Graphics', which provides the entire Stochastikon system with graphical illustrations. E-learning is the outcome of merging education and internet techniques. E-learning is characterized by the facts that teaching and learning are independent of place and time and of the availability of specially trained teachers. Knowledge offering as well as knowledge transferring are realized by using modern information technologies. Nowadays more and more e-learning environments are based on the internet as the primary tool for communication and presentation. E-learning presentation tools are, for instance, text files, pictures, graphics, audio and videos, which can be networked with each other. In principle, there is no limit to the access to teaching contents. Moreover, the students can adapt the speed of learning to their individual abilities. E-learning is particularly appropriate for newly arising scientific and technical disciplines, which generally cannot be presented sufficiently well by traditional learning methods, because neither trained teachers nor textbooks are available. The first part of this dissertation introduces the state of the art of e-learning in statistics, since statistics and Bernoulli Stochastics are both based on probability theory and exhibit many similar features. Since Stochastikon Magister is the first e-learning programme for Bernoulli Stochastics, educational statistics systems are selected for the purpose of comparison and evaluation. This makes sense as both disciplines are an attempt to handle uncertainty and use methods that often can be directly compared. The second part of this dissertation is devoted to Bernoulli Stochastics. This part aims at outlining the content of two courses, which have been developed for the anticipated e-learning programme Stochastikon Magister, in order to show the difficulties in teaching, understanding and applying Bernoulli Stochastics. The third part discusses the realization of the e-learning programme Stochastikon Magister, its design and implementation, which aims at offering a systematic learning of the principles and techniques developed in Bernoulli Stochastics.
The resulting e-learning programme differs from commonly developed e-learning programmes in that it is an attempt to provide a virtual classroom that simulates all the functions of real classroom teaching. This is in general not necessary, since most e-learning programmes aim at supporting existing classroom teaching. The fourth part presents two empirical evaluations of Stochastikon Magister. The evaluations are performed by means of comparisons between traditional classroom learning in statistics and e-learning of Bernoulli Stochastics. The aim is to assess the usability and learnability of Stochastikon Magister. Finally, the fifth part of this dissertation is added as an appendix. It refers to Stochastikon Graphics, the fifth component of the entire Stochastikon system. Stochastikon Graphics provides the other components with graphical representations of concepts, procedures and results obtained or used in the framework of Bernoulli Stochastics. The primary aim of this thesis is the development of appropriate software for the anticipated e-learning environment meant for Bernoulli Stochastics, while the preparation of the necessary teaching material constitutes only a secondary aim, used for demonstrating the functionality of the e-learning platform and the scientific novelty of Bernoulli Stochastics. To this end, a first version of the two teaching courses is developed, implemented and offered online in order to collect practical experience. The two courses, which were developed as part of this project, are submitted as a supplement to this dissertation. So far, first experience with the e-learning programme Stochastikon Magister has been gained. Students of different faculties of the University of Würzburg, as well as researchers and engineers who are involved in the Stochastikon project, have obtained access to Stochastikon Magister via the internet. They have registered for Stochastikon Magister and participated in the course programme. This thesis reports on two assessments of these first experiences, and the results will lead to further improvements with respect to the content and organization of Stochastikon Magister.
Controllability Aspects of the Lindblad-Kossakowski Master Equation: A Lie-Theoretical Approach
(2009)
One main task, which is considerably important in many applications in quantum control, is to explore the possibilities of steering a quantum system from an initial state to a target state. This thesis focuses on fundamental control-theoretical issues, e.g. controllability aspects and the structure of reachable sets, of quantum dynamics described by the Lindblad-Kossakowski master equation, which arises as a bilinear control system on some underlying real vector space. Based on Lie-algebraic methods from nonlinear control theory, the thesis presents a unified approach to control problems of finite-dimensional closed and open quantum systems. In particular, a simplified treatment for controllability of closed quantum systems as well as new accessibility results for open quantum systems are obtained. The main tools to derive the results are the well-known classifications of all matrix Lie groups which act transitively on Grassmann manifolds and, respectively, on real vector spaces without the origin. It is also shown in this thesis that accessibility of the Lindblad-Kossakowski master equation is a generic property. Moreover, based on the theoretical accessibility results, an algorithm is developed to decide when the Lindblad-Kossakowski master equation is accessible.
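A minimal numerical companion to the Lie-algebraic viewpoint (an illustration for a closed two-level system with hypothetical Hamiltonians, not the classification arguments of the thesis): controllability can be probed by computing the dimension of the Lie algebra generated by the drift and control Hamiltonians.

# Numerical Lie-algebra rank test for i*dU/dt = (H0 + u(t) H1) U on a two-level system.
import numpy as np
from itertools import combinations

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H0, H1 = sz, sx                                  # hypothetical drift and control Hamiltonians

def lie_algebra_dimension(generators, depth=4):
    basis = [1j * H for H in generators]
    for _ in range(depth):
        new = [A @ B - B @ A for A, B in combinations(basis, 2)]
        basis.extend(new)
    # dimension = rank of the real span of the collected anti-Hermitian matrices
    vecs = np.array([np.concatenate([M.real.ravel(), M.imag.ravel()]) for M in basis])
    return np.linalg.matrix_rank(vecs, tol=1e-10)

print(lie_algebra_dimension([H0, H1]))           # 3 -> su(2), i.e. the closed system is controllable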
In Janssen and Reiss (1988) it was shown that in a location model of a Weibull-type sample with shape parameter -1 < a < 1 the k(n) lower extremes are asymptotically locally sufficient. In the present paper we show that even global sufficiency holds. Moreover, it turns out that convergence of the given statistical experiments in the deficiency metric does not only hold for compact parameter sets but for the whole real line.
The aim of the present paper is to clarify the role of extreme order statistics in general statistical models. This is done within the general setup of statistical experiments in LeCam's sense. Under the assumption of monotone likelihood ratios, we prove that a sequence of experiments is asymptotically Gaussian if, and only if, a fixed number of extremes asymptotically does not contain any information. In other words: A fixed number of extremes asymptotically contains information iff the Poisson part of the limit experiment is non-trivial. Suggested by this result, we propose a new extreme value model given by local alternatives. The local structure is described by introducing the space of extreme value tangents. It turns out that under local alternatives a new class of extreme value distributions appears as limit distributions. Moreover, explicit representations of the Poisson limit experiments via Poisson point processes are found. As a concrete example nonparametric tests for Frechet type distributions against stochastically larger alternatives are treated. We find asymptotically optimal tests within certain threshold models.
It is shown that the rate of convergence in the von Mises conditions of extreme value theory determines the distance of the underlying distribution function F from a generalized Pareto distribution. The distance is measured in terms of the pertaining densities with the limit being ultimately attained if and only if F is ultimately a generalized Pareto distribution. Consequently, the rate of convergence of the extremes in an iid sample, whether in terms of the distribution of the largest order statistics or of corresponding empirical truncated point processes, is determined by the rate of convergence in the von Mises condition. We prove that the converse is also true.
In the generalized Nash equilibrium problem, not only the cost function of a player depends on the rival players' decisions, but also his constraints. This thesis presents different iterative methods for the numerical computation of a generalized Nash equilibrium, some of them globally, others locally superlinearly convergent. These methods are based either on reformulations of the generalized Nash equilibrium problem as an optimization problem, or on a fixed point formulation. The key tool for these reformulations is the Nikaido-Isoda function. Numerical results for various problems from the literature are given.
It is well known that a multivariate extreme value distribution can be represented via a D-norm. However, not every norm yields a D-norm. In this thesis a necessary and sufficient condition is given for a norm to define an extreme value distribution. Applications of this theorem include a new proof for the bivariate case, the Pickands dependence function and the nested logistic model. Furthermore, the GPD-flow is introduced and first insights are given, for instance that if it converges, it converges to the copula of complete dependence.
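For reference, the representation alluded to above can be sketched as follows (standard negative exponential margins; notation follows the usual D-norm literature and may differ in detail from the thesis): a multivariate extreme value distribution with standard margins has the form

\[
G(x_1,\ldots,x_d) \;=\; \exp\bigl(-\|(x_1,\ldots,x_d)\|_D\bigr), \qquad x_1,\ldots,x_d \le 0,
\]

where \(\|\cdot\|_D\) is a D-norm. The logistic family \(\|x\|_\lambda = \bigl(\sum_{i=1}^d |x_i|^\lambda\bigr)^{1/\lambda}\), \(\lambda \ge 1\), is a standard example; \(\lambda = 1\) corresponds to independence and \(\lambda \to \infty\) to complete dependence.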
A new class of optimization problems named 'mathematical programs with vanishing constraints (MPVCs)' is considered. MPVCs are, on the one hand, very challenging from a theoretical viewpoint, since standard constraint qualifications such as LICQ, MFCQ, or ACQ are most often violated, and hence, the Karush-Kuhn-Tucker conditions do not provide necessary optimality conditions off-hand. Thus, new CQs and the corresponding optimality conditions are investigated. On the other hand, MPVCs have important applications, e.g., in the field of topology optimization. Therefore, numerical algorithms for the solution of MPVCs are designed, investigated and tested on certain problems from truss topology optimization.