This thesis is devoted to the numerical verification of optimality conditions for non-convex optimal control problems. In the first part, we are concerned with a-posteriori verification of sufficient optimality conditions. It is common knowledge that verifying such conditions for general non-convex PDE-constrained optimization problems is very challenging. We propose a method to verify second-order sufficient conditions for a general class of optimal control problems. If the proposed verification method confirms that the sufficient condition is fulfilled, then a-posteriori error estimates can be computed. A special ingredient of our method is an error analysis for the Hessian of the underlying optimization problem. We derive conditions under which positive definiteness of the Hessian of the discrete problem implies positive definiteness of the Hessian of the continuous problem. The results are complemented with numerical experiments. In the second part, we investigate adaptive methods for optimal control problems with finitely many control parameters. We analyze a-posteriori error estimates based on verification of second-order sufficient optimality conditions using the method developed in the first part. Reliability and efficiency of the error estimator are shown. We illustrate through numerical experiments the use of the estimator in guiding adaptive mesh refinement.
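The verification step described above hinges on checking positive definiteness of a discrete Hessian with a margin that absorbs the discretization error. A minimal numerical sketch of that idea (the matrix and the error bound are illustrative, not taken from the thesis):

```python
import numpy as np

def hessian_pd_margin(H):
    """Smallest eigenvalue of a symmetric matrix H.

    If it exceeds a bound on ||H - H_h|| (the Hessian discretization
    error), positive definiteness of the continuous Hessian follows by
    a standard perturbation argument.
    """
    return np.linalg.eigvalsh(H).min()

# Hypothetical discrete Hessian: a tridiagonal stiffness-type matrix
n = 50
H_h = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

lam_min = hessian_pd_margin(H_h)
err_bound = 1e-3  # assumed a-posteriori bound on ||H - H_h|| (illustrative)

if lam_min > err_bound:
    print("second-order sufficient condition verified (up to the assumed bound)")
```

The point of the sketch is only the inequality: a strictly positive spectral margin of the computable Hessian, larger than the error bound, certifies the continuous condition.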
The bounded input bounded output (BIBO) stability of a nonlinear Caputo fractional system with time-varying bounded delay and nonlinear output is studied. Utilizing the Razumikhin method, Lyapunov functions, and appropriate fractional derivatives of Lyapunov functions, some new BIBO stability criteria are derived. In addition, explicit bounds on the output that are independent of the initial time are provided. Uniform BIBO stability and uniform BIBO stability with input threshold are studied. A numerical simulation is carried out to show the system's dynamic response and to demonstrate the effectiveness of our theoretical results.
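A common way to simulate such Caputo-type systems numerically is the Grünwald-Letnikov discretization of the fractional derivative. The sketch below (the scalar system, input, and parameters are illustrative, not those of the paper) shows a bounded output for a bounded input:

```python
import numpy as np

def gl_weights(alpha, n):
    # Grünwald-Letnikov coefficients c_j = (-1)^j * binom(alpha, j),
    # computed by the standard recurrence.
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    return c

def simulate(alpha=0.8, T=20.0, h=0.01, u=lambda t: np.sin(t)):
    # Illustrative scalar system: D^alpha x(t) = -x(t) + u(t), x(0) = 0,
    # discretized as h^{-alpha} * sum_j c_j x_{n-j} = -x_n + u_n.
    N = int(T / h)
    c = gl_weights(alpha, N)
    x = np.zeros(N + 1)
    ha = h ** alpha
    for n in range(1, N + 1):
        hist = np.dot(c[1:n + 1], x[n - 1::-1])  # fractional memory term
        x[n] = (u(n * h) * ha - hist) / (1.0 + ha)
    return x
```

For the bounded input sin(t), the computed output stays bounded, in line with a BIBO-type statement for this stable toy system.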
We give a collection of 16 examples which show that compositions \(g \circ f\) of well-behaved functions \(f\) and \(g\) can be badly behaved. Remarkably, in 10 of the 16 examples it suffices to take as outer function \(g\) simply a power-type or characteristic function. Such a collection of examples may serve as a source of exercises for a calculus course.
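A classical example of this phenomenon (not necessarily among the paper's 16) uses a characteristic function as outer function: both \(f\) and \(g\) are Riemann integrable on [0,1], yet \(g \circ f\) is not.

```latex
f(x) =
\begin{cases}
  1/q, & x = p/q \in \mathbb{Q} \text{ in lowest terms},\\
  0,   & x \notin \mathbb{Q},
\end{cases}
\qquad
g(y) =
\begin{cases}
  1, & y > 0,\\
  0, & y = 0.
\end{cases}
```

Here \(f\) is Thomae's function, which is Riemann integrable, and \(g\) has a single discontinuity; yet \((g \circ f)(x) = 1\) for rational \(x\) and \(0\) otherwise, i.e. the Dirichlet function, whose lower and upper Riemann sums are 0 and 1 on every partition.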
Extreme value theory aims at modeling extreme but rare events from a probabilistic point of view. It is well-known that so-called generalized Pareto distributions, which are briefly reviewed in Chapter 1, are the only reasonable probability distributions suited for modeling observations above a high threshold, such as waves exceeding the height of a certain dike, earthquakes having at least a certain intensity, and, after applying a simple transformation, share prices falling below some low threshold. However, there are cases for which a generalized Pareto model might fail. Therefore, Chapter 2 derives certain neighborhoods of a generalized Pareto distribution and provides several statistical tests for these neighborhoods, where the cases of observing finite dimensional data and of observing continuous functions on [0,1] are considered. By using a notation based on so-called D-norms it is shown that these tests consistently link both frameworks, the finite dimensional and the functional one. Since the derivation of the asymptotic distributions of the test statistics requires certain technical restrictions, Chapter 3 analyzes these assumptions in more detail. It provides in particular some examples of distributions that satisfy the null hypothesis and of those that do not. Since continuous copula processes are crucial tools for the functional versions of the proposed tests, it is also discussed whether those copula processes actually exist for a given set of data. Moreover, some practical advice is given on how to choose the free parameters incorporated in the test statistics. Finally, a simulation study in Chapter 4 compares the three different test statistics with another test from the literature that has a similar null hypothesis. This thesis ends with a short summary of the results and an outlook on further open questions.
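Threshold modeling with a generalized Pareto distribution, as reviewed above, can be sketched in a few lines (the synthetic data and the 95% threshold are illustrative choices):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
data = rng.standard_exponential(5000)   # synthetic observations

u = np.quantile(data, 0.95)             # high threshold
exc = data[data > u] - u                # excesses over the threshold

# Fit a generalized Pareto distribution to the excesses; the location
# is fixed at 0 because the threshold has already been subtracted.
xi, loc, sigma = genpareto.fit(exc, floc=0)
```

For exponentially distributed data the excesses are again exponential, so the fitted shape parameter should be close to 0 and the scale close to 1, which is a quick sanity check for the model.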
Finite volume methods for compressible Euler equations suffer from an excessive diffusion in the limit of low Mach numbers. This PhD thesis explores new approaches to overcome this.
The analysis of a simpler set of equations that also possess a low Mach number limit is found to give valuable insights. These equations are the acoustic equations obtained as a linearization of the Euler equations. For both systems the limit is characterized by a divergence-free velocity. This constraint is nontrivial only in multiple spatial dimensions. As the Jacobians of the acoustic system do not commute, acoustics cannot be reduced to some kind of multi-dimensional advection. Therefore, an exact solution in multiple spatial dimensions is first obtained. It is shown that the low Mach number limit can be interpreted as a limit of long times.
It is found that the origin of a scheme's inability to resolve the low Mach number limit is the lack of a discrete counterpart to the limit of long times. Numerical schemes whose discrete stationary states discretize all the analytic stationary states of the PDE are called stationarity preserving. It is shown that for the acoustic equations, stationarity preserving schemes are vorticity preserving and are those that are able to resolve the low Mach limit (low Mach compliant). This establishes a new link between these three concepts.
Stationarity preservation is studied in detail for both dimensionally split and multi-dimensional schemes for linear acoustics. In particular, it is explained why the same multi-dimensional stencils appear in the literature in very different contexts: these stencils are the unique discretizations of the divergence that allow for a stabilizing, stationarity preserving diffusion.
Stationarity preservation can also be generalized to nonlinear systems such as the Euler equations. Several ways in which such numerical schemes can be constructed for the Euler equations are presented. In particular, a low Mach compliant numerical scheme is derived that uses a novel construction idea: its diffusion is chosen such that it depends on the velocity divergence rather than just on derivatives of the individual velocity components. This is demonstrated to overcome the low Mach number problem. The scheme shows satisfactory results in numerical simulations and has been found to be stable under explicit time integration.
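The excessive-diffusion problem mentioned at the outset can be caricatured in one dimension: a Rusanov-type scheme whose diffusion scales with the fast acoustic speed smears a slowly advected profile far more than one whose diffusion scales with the flow speed. A toy sketch (not the thesis's scheme; all parameters are illustrative):

```python
import numpy as np

def rusanov_advect(q0, a, alpha, dx, dt, steps):
    """Advance q_t + a q_x = 0 with a central flux plus Rusanov-type
    diffusion proportional to the chosen speed alpha (periodic grid)."""
    q = q0.copy()
    lam = dt / dx
    for _ in range(steps):
        qp = np.roll(q, -1)   # q_{j+1}
        qm = np.roll(q, 1)    # q_{j-1}
        q = q - 0.5 * lam * a * (qp - qm) + 0.5 * lam * alpha * (qp - 2.0 * q + qm)
    return q

N = 200
dx = 1.0 / N
x = (np.arange(N) + 0.5) * dx
q0 = np.exp(-200.0 * (x - 0.5) ** 2)   # smooth bump

a = 0.01           # slow flow speed (Mach ~ 0.01 relative to c)
c = 1.0            # fast acoustic speed
dt = 0.4 * dx / c  # time step restricted by the fast speed
steps = 200

q_acoustic = rusanov_advect(q0, a, c, dx, dt, steps)    # diffusion ~ c: excessive
q_flow = rusanov_advect(q0, a, abs(a), dx, dt, steps)   # diffusion ~ |u|: adapted
```

After the same number of steps the profile evolved with acoustic-speed diffusion has lost a visible fraction of its amplitude, while the flow-speed variant is nearly undamped; both conserve the discrete mass exactly.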
We present a technique for computing multi-branch-point covers with prescribed ramification and demonstrate the applicability of our method in relatively large degrees by computing several families of polynomials with symplectic and linear Galois groups.
As a first application, we present polynomials over \(\mathbb{Q}(\alpha,t)\) for the primitive rank-3 groups \(PSp_4(3)\) and \(PSp_4(3).C_2\) of degree 27 and for the 2-transitive group \(PSp_6(2)\) in its actions on 28 and 36 points, respectively. Moreover, the degree-28 polynomial for \(PSp_6(2)\) admits infinitely many totally real specializations.
Next, we present the first (to the best of our knowledge) explicit polynomials for the 2-transitive linear groups \(PSL_4(3)\) and \(PGL_4(3)\) of degree 40, and the imprimitive group \(Aut(PGL_4(3))\) of degree 80.
Additionally, we negatively answer a question by König as to whether there exists a degree-63 rational function with rational coefficients and monodromy group \(PSL_6(2)\) ramified over at least four points. This is achieved through the explicit computation of the corresponding hyperelliptic genus-3 Hurwitz curve parameterizing this family, followed by a search for rational points on it. As a byproduct of our calculations we obtain the first explicit \(Aut(PSL_6(2))\)-realizations over \(\mathbb{Q}(t)\).
At last, we present a technique by Elkies for bounding the transitivity degree of Galois groups. This provides an alternative way to verify the Galois groups from the previous chapters and also yields a proof that the monodromy group of a degree-276 cover computed by Monien is isomorphic to the sporadic 2-transitive Conway group \(Co_3\).
Theoretical and numerical investigation of optimal control problems governed by kinetic models
(2021)
This thesis is devoted to the numerical and theoretical analysis of ensemble optimal control problems governed by kinetic models. The formulation and study of these problems have been put forward in recent years by R.W. Brockett with the motivation that ensemble control may provide a more general and robust control framework for dynamical systems. Following this formulation, a Liouville (or continuity) equation with an unbounded drift function is considered together with a class of cost functionals that include tracking of ensembles of trajectories of dynamical systems and different control costs. Specifically, $L^2$, $H^1$ and $L^1$ control costs are taken into account, which leads to non-smooth optimization problems. For the theoretical investigation of the resulting optimal control problems, a well-posedness theory in weighted Sobolev spaces is presented for Liouville and related transport equations. Specifically, existence and uniqueness results for these equations and energy estimates in suitable norms are provided, in particular norms in weighted Sobolev spaces. Then, non-smooth optimal control problems governed by the Liouville equation are formulated with a control mechanism in the drift function. Further, box constraints on the control are imposed. The control-to-state map is introduced, which associates to any control the unique solution of the corresponding Liouville equation. Important properties of this map are investigated, specifically that it is well-defined, continuous and Fréchet differentiable. Using the first two properties, the existence of solutions to the optimal control problems is shown. While proving the differentiability, a loss of regularity is encountered that is natural to hyperbolic equations. This leads to the need to investigate the control-to-state map in the topology of weighted Sobolev spaces.
Exploiting the Fréchet differentiability, it is possible to characterize solutions to the optimal control problem as solutions to an optimality system. This system consists of the Liouville equation, its optimization adjoint in the form of a transport equation, and a gradient inequality. Numerical methodologies for solving Liouville and transport equations are presented that are based on a non-smooth Lagrange optimization framework. For this purpose, approximation and solution schemes for such equations are developed and analyzed. For the approximation of the Liouville model and its optimization adjoint, a combination of a Kurganov-Tadmor method, a Runge-Kutta scheme, and a Strang splitting method is discussed. Stability and second-order accuracy of the resulting schemes are proven in the discrete $L^1$ norm. In addition, conservation of mass and positivity preservation are confirmed for the solution method of the Liouville model. As the numerical optimization strategy, an adapted Krylov-Newton method is applied. Since the control is considered to be an element of $H^1$ and to obey certain box constraints, a method for calculating an $H^1$ projection is presented. Since the optimal control problem is non-smooth, a semi-smooth adaptation of Newton's method is taken into account. Results of numerical experiments are presented that successfully validate the proposed deterministic framework. After the discussion of deterministic schemes, the linear space-homogeneous Keilson-Storer master equation is investigated. This equation was originally developed for the modelling of Brownian motion of particles immersed in a fluid and is a representative model of the class of linear Boltzmann equations. The well-posedness of the Keilson-Storer master equation is investigated and energy estimates in different topologies are derived. To solve this equation numerically, Monte Carlo methods are considered.
Such methods take advantage of the kinetic formulation of the Liouville equation and directly implement the behaviour of the system of particles under consideration. This includes the probabilistic behaviour of the collisions between particles. Optimal control problems are formulated with an objective that consists of certain expected values in velocity space and the $L^2$ and $H^1$ costs of the control. The problems are governed by the Keilson-Storer master equation and the control mechanism is considered to be within the collision kernel. The objective of the optimal control of this model is to drive an ensemble of particles to acquire a desired mean velocity and to achieve a desired final velocity configuration. Existence of solutions of the optimal control problem is proven and a Keilson-Storer optimality system characterizing the solution of the proposed optimal control problem is obtained. The optimality system is used to construct a gradient-based optimization strategy in the framework of Monte Carlo methods. This task requires accommodating the resulting adjoint Keilson-Storer model in a form that is consistent with the kinetic formulation. For this reason, we derive an adjoint Keilson-Storer collision kernel and an additional source term. A similar approach is presented in the case of a linear space-inhomogeneous kinetic model with external forces and with a Keilson-Storer collision term. In this framework, a control mechanism in the form of an external space-dependent force is investigated. The purpose of this control is to steer the multi-particle system to follow a desired mean velocity and position and to reach a desired final configuration in phase space. An optimal control problem using the formulation of ensemble controls is stated with an objective that consists of expected values in phase space and $H^1$ costs of the control.
For solving the optimal control problems, a gradient-based computational strategy in the framework of Monte Carlo methods is developed. Part of this is the denoising of the distribution functions calculated by Monte Carlo algorithms using methods from the realm of partial differential equations. A standalone C++ code is presented that implements the developed nonlinear conjugate gradient strategy. Results of numerical experiments confirm the ability of the designed probabilistic control framework to operate as desired. An outlook section about optimal control problems governed by nonlinear space-inhomogeneous kinetic models completes this thesis.
This paper is devoted to the numerical analysis of non-smooth ensemble optimal control problems governed by the Liouville (continuity) equation that have been originally proposed by R.W. Brockett with the purpose of determining an efficient and robust control strategy for dynamical systems. A numerical methodology for solving these problems is presented that is based on a non-smooth Lagrange optimization framework where the optimal controls are characterized as solutions to the related optimality systems. For this purpose, approximation and solution schemes are developed and analysed. Specifically, for the approximation of the Liouville model and its optimization adjoint, a combination of a Kurganov–Tadmor method, a Runge–Kutta scheme, and a Strang splitting method are discussed. The resulting optimality system is solved by a projected semi-smooth Krylov–Newton method. Results of numerical experiments are presented that successfully validate the proposed framework.
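The second-order accuracy of Strang splitting, one ingredient of the discretization above, can be checked on a small linear model with non-commuting parts (the matrices here are illustrative):

```python
import numpy as np
from scipy.linalg import expm

# Split y' = (A + B) y into a rotation part A and a damping part B,
# which do not commute, so the splitting error is genuinely present.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[-1.0, 0.0], [0.0, -2.0]])
y0 = np.array([1.0, 0.0])
T = 1.0

def strang(n):
    # Strang step: half step with A, full step with B, half step with A.
    dt = T / n
    eA, eB = expm(0.5 * dt * A), expm(dt * B)
    y = y0.copy()
    for _ in range(n):
        y = eA @ (eB @ (eA @ y))
    return y

y_exact = expm(T * (A + B)) @ y0
e1 = np.linalg.norm(strang(40) - y_exact)
e2 = np.linalg.norm(strang(80) - y_exact)
rate = np.log2(e1 / e2)   # observed convergence order
```

Halving the time step should reduce the error by roughly a factor of four, i.e. an observed rate close to 2, which is the second-order behaviour claimed for splitting-based schemes of this kind.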
The advent of computers in mathematics classrooms has brought with it a variety of new forms of representation, including multiple, dynamically linked representations of mathematical problems. This thesis answers the question of whether and how these types of representation are used by students in their argumentation. In the empirical study, a quantitative analysis examined how strongly the form of representation given in the task influences students' written argumentations. In addition, a qualitative analysis identified specific patterns of use and described them by means of Toulmin's model of argumentation. These findings were used to formulate consequences for the use of multiple and/or dynamic representations in secondary-school mathematics teaching.
The goal of this thesis is to investigate conformal mappings onto circular arc polygon domains, i.e. domains that are bounded by polygons consisting of circular arcs instead of line segments.
Conformal mappings onto circular arc polygon domains contain parameters in addition to the classical parameters of the Schwarz-Christoffel transformation. To contribute to the parameter problem of conformal mappings from the unit disk onto circular arc polygon domains, we investigate two special cases of these mappings. In the first case we can describe the additional parameters if the bounding circular arc polygon is a polygon with straight sides. In the second case we provide an approximation for the additional parameters if the circular arc polygon domain satisfies some symmetry conditions. These results allow us to draw conclusions on the connection between these additional parameters and the classical parameters of the mapping.
For conformal mappings onto multiply connected circular arc polygon domains, we provide an alternative construction of the mapping formula without using the Schottky-Klein prime function. In the process of constructing our main result, mappings for domains of connectivity three or greater, we also provide a formula for conformal mappings onto doubly connected circular arc polygon domains. The comparison of these mapping formulas with already known mappings allows us to provide values for some of the parameters of the mappings onto doubly connected circular arc polygon domains if the image domain is a polygonal domain.
The different components of the mapping formula are constructed by using a slightly modified variant of the Poincaré theta series. This construction includes the design of a function to remove unwanted poles and of different versions of functions that are analytic on the domain of definition of the mapping functions and satisfy some special functional equations.
We also provide the necessary concepts to numerically evaluate the conformal mappings onto multiply connected circular arc polygon domains. As the evaluation of such a map requires the solution of a differential equation, we provide a possible configuration of curves inside the preimage domain to solve the equation along them in addition to a description of the procedure for computing either the formula for the doubly connected case or the case of connectivity three or greater. We also describe the procedures for solving the parameter problem for multiply connected circular arc polygon domains.
Many optimization problems for a smooth cost function f on a manifold M can be solved by determining the zeros of a vector field F, such as the gradient of the cost function f. If F does not depend on additional parameters, numerous zero-finding techniques are available for this purpose. It is a natural generalization, however, to consider time-dependent optimization problems that require the computation of time-varying zeros of time-dependent vector fields F(x,t). Such parametric optimization problems arise in many fields of applied mathematics, in particular path-following problems in robotics, recursive eigenvalue and singular value estimation in signal processing, as well as numerical linear algebra and inverse eigenvalue problems in control theory. In the literature, there are already some tracking algorithms for these tasks, but they do not always adequately respect the manifold structure. Hence, available tracking results can often be improved by implementing methods working directly on the manifold. Thus, intrinsic methods that evolve on the manifold during the entire computation are of interest. It is the task of this thesis to develop such intrinsic zero-finding methods. The main results of this thesis are as follows:
- A new class of continuous and discrete tracking algorithms is proposed for computing zeros of time-varying vector fields on Riemannian manifolds. This is achieved by studying the newly introduced time-varying Newton flow and the time-varying Newton algorithm on Riemannian manifolds.
- A convergence analysis is performed on arbitrary Riemannian manifolds.
- These results are concretized on submanifolds, including a new class of algorithms via local parameterizations.
- More specific results in Euclidean space are obtained by considering inexact and underdetermined time-varying Newton flows.
- The newly introduced algorithms are illustrated by examining time-varying tracking tasks in three application areas: subspace analysis, matrix decompositions (in particular EVD and SVD), and computer vision.
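In the Euclidean setting, the discrete time-varying Newton idea can be sketched in a few lines: a prediction term involving the time derivative of F is added to the classical Newton correction. The scalar vector field below is an illustrative example, not one from the thesis:

```python
import numpy as np

# Track the moving zero x*(t) = (t + 1)^(1/3) of F(x, t) = x^3 - t - 1
# with the update  x <- x - F_x^{-1} (F + dt * F_t),
# a discrete, Euclidean analogue of the time-varying Newton flow.
F   = lambda x, t: x**3 - t - 1.0
F_x = lambda x, t: 3.0 * x**2       # spatial derivative
F_t = lambda x, t: -1.0             # time derivative

dt = 0.01
x = 1.0                             # exact zero at t = 0
errs = []
for t in np.arange(0.0, 2.0, dt):
    x = x - (F(x, t) + dt * F_t(x, t)) / F_x(x, t)
    errs.append(abs(x - (t + dt + 1.0) ** (1.0 / 3.0)))
max_err = max(errs)
```

The Newton correction contracts the tracking error while the prediction term follows the drift of the zero, so the iterate stays within a small, step-size-dependent neighborhood of the moving zero.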
Fluids in Gravitational Fields – Well-Balanced Modifications for Astrophysical Finite-Volume Codes
(2021)
Stellar structure can, in good approximation, be described as a hydrostatic state, which arises due to a balance between gravitational force and pressure gradient. Hydrostatic states are static solutions of the full compressible Euler system with gravitational source term, which can be used to model the stellar interior. In order to carry out simulations of dynamical processes occurring in stars, it is vital for the numerical method to accurately maintain the hydrostatic state over a long time period. In this thesis we present different methods to modify astrophysical finite volume codes in order to make them \emph{well-balanced}, preventing them from introducing significant discretization errors close to hydrostatic states. Our well-balanced modifications are constructed so that they can meet the requirements for methods applied in the astrophysical context: they can well-balance arbitrary hydrostatic states with any equation of state that is applied to model thermodynamical relations, and they are simple to implement in existing astrophysical finite volume codes. One of our well-balanced modifications follows given solutions exactly and can be applied on any grid geometry. The other methods we introduce, which do not require any a priori knowledge, balance local high order approximations of arbitrary hydrostatic states on a Cartesian grid. All of our modifications allow for high order accuracy of the method. The improved accuracy close to hydrostatic states is verified in various numerical experiments.
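The idea behind such well-balanced modifications can be caricatured in one dimension: discretizing the deviation from a known hydrostatic profile, instead of the raw pressure gradient, removes the truncation error at equilibrium. The profile and discretizations below are illustrative, not the thesis's methods:

```python
import numpy as np

# 1D hydrostatic balance p_x + rho * g = 0 for an isothermal atmosphere
# with g = 1 and R*T = 1, so that p(x) = rho(x) = exp(-x).
N = 64
h = 1.0 / N
x = (np.arange(N) + 0.5) * h
p = np.exp(-x)
rho = np.exp(-x)
g = 1.0

# Naive central discretization: an O(h^2) residual that never vanishes,
# so the scheme slowly drives the state away from equilibrium.
naive = (p[2:] - p[:-2]) / (2.0 * h) + rho[1:-1] * g

# Deviation-based well-balancing: discretize only the deviation from a
# known hydrostatic profile; equilibrium data give zero residual.
p_eq = np.exp(-x)
d = p - p_eq
wb = (d[2:] - d[:-2]) / (2.0 * h)

naive_res = np.abs(naive).max()
wb_res = np.abs(wb).max()
```

At the hydrostatic state the naive residual is of the size of the truncation error, while the deviation formulation is balanced to machine precision by construction.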
The goal of this thesis is to study the topological and algebraic properties of the quasiconformal automorphism groups of simply and multiply connected domains in the complex plane, in which the quasiconformal automorphism groups are endowed with the supremum metric on the underlying domain. More precisely, questions concerning central topological properties such as (local) compactness, (path-)connectedness and separability and their dependence on the boundary of the corresponding domains are studied, as well as completeness with respect to the supremum metric. Moreover, special subsets of the quasiconformal automorphism group of the unit disk are investigated, and concrete quasiconformal automorphisms are constructed. Finally, a possible application of quasiconformal unit disk automorphisms to symmetric cryptography is presented, in which a quasiconformal cryptosystem is defined and studied.
We consider homogeneous spaces G/H with the same rational homotopy as a product of a 1-sphere and an (m+1)-sphere. We show that these spaces also have the rational cohomology of such a sphere product if H is connected and if the quotient has dimension m+2. Furthermore, we prove that if additionally the fundamental group of G/H is cyclic, then G/H is locally a product of a 1-torus and of A/H, where A/H is a simply connected rational cohomology (m+1)-sphere (and hence classified). If H fails to be connected, then with U as the connected component of H, the G-action on the covering space G/U of G/H has connected stabilizers, and the results apply to G/U. To show that under the assumptions above every natural number may be realized as the order of the group of connected components of H, we calculate the cohomology of certain homogeneous spaces. We also determine the rational cohomology of the fibre bundle U --> G --> G/U if G/H meets the assumptions above. This is done by considering the respective Leray-Serre spectral sequence. The structure of the cohomology of U --> G --> G/U then gives a second proof of the structure of compact connected Lie groups acting transitively on spaces with the rational homotopy of a product of a 1-sphere and an (m+1)-sphere. Since a quotient of a homogeneous space with the same rational homotopy or cohomology as a product of a 1-sphere and an (m+1)-sphere is not simply connected, the question often arises whether or not a considered fibre bundle or fibration is orientable. A large amount of space is therefore given to the problem of showing that certain fibrations are orientable.
For compact connected (m+2)-manifolds with cyclic fundamental groups and with the rational homotopy of a product of a 1-sphere and an (m+1)-sphere we show the following: if a connected Lie group acts transitively on the manifold, then the maximal compact subgroups are either transitive, or their orbits are simply connected rational cohomology spheres of codimension 1. Homogeneous spaces with the same rational cohomology or homotopy as a product of a 1-sphere and an (m+1)-sphere play a role in the study of different types of geometrical objects. They appear for example as focal manifolds of isoparametric hypersurfaces with four distinct principal curvatures. Further examples of such spaces are the point spaces and the line spaces of compact connected generalized quadrangles. We determine the isometry groups of isoparametric hypersurfaces with 4 principal curvatures of multiplicities 1 and m which are transitive on the focal manifold with non-trivial fundamental group. Buildings were introduced by Jacques Tits to give interpretations of simple groups of Lie type. They are a far-reaching generalization of projective spaces, in particular a generalization of projective planes. There is another generalization of projective planes called generalized polygons. A projective plane is the same as a generalized triangle. The generalized polygons are also contained in the class of buildings: they are the buildings of rank 2. To compact quadrangles one can assign a pair of natural numbers (k,m) called the topological parameters of the quadrangles. We treat the case k=1. It turns out that there are no other point-transitive compact connected Lie groups for (1,m)-quadrangles than the ones for the real orthogonal quadrangles.
Furthermore, we solve the problem of three infinite series of group actions which Kramer left as open problems; there are no quadrangles with the homogeneous spaces in question as point spaces (up to maybe a finite number of small parameters in one of the three series).
This thesis deals with a new so-called sequential quadratic Hamiltonian (SQH) iterative scheme to solve optimal control problems with differential models and cost functionals ranging from smooth to discontinuous and non-convex. This scheme is based on the Pontryagin maximum principle (PMP), which provides necessary optimality conditions for an optimal solution. In this framework, a Hamiltonian function is defined that attains its minimum pointwise at the optimal solution of the corresponding optimal control problem. In the SQH scheme, this Hamiltonian function is augmented by a quadratic penalty term involving the current control function and the control function from the previous iteration. The heart of the SQH scheme is to minimize this augmented Hamiltonian function pointwise in order to determine a control update. Since the PMP does not require any differentiability with respect to the control argument, the SQH scheme can be used to solve optimal control problems with both smooth and non-convex or even discontinuous cost functionals. The main achievement of the thesis is the formulation of a robust and efficient SQH scheme and a framework in which the convergence analysis of the SQH scheme can be carried out. In this framework, convergence of the scheme means that the calculated solution fulfills the PMP condition. The governing differential models of the considered optimal control problems are ordinary differential equations (ODEs) and partial differential equations (PDEs). In the PDE case, elliptic and parabolic equations as well as the Fokker-Planck (FP) equation are considered. For both the ODE and the PDE cases, assumptions are formulated under which it can be proved that a solution to an optimal control problem has to fulfill the PMP. The obtained results are essential for the discussion of the convergence analysis of the SQH scheme. This analysis has two parts.
The first one is the well-posedness of the scheme, which means that all steps of the scheme can be carried out and provide a result in finite time. The second part is the PMP consistency of the solution. This means that the solution of the SQH scheme fulfills the PMP conditions. In the ODE case, the following results are obtained that state well-posedness of the SQH scheme and the PMP consistency of the corresponding solution. Lemma 7 states the existence of a pointwise minimum of the augmented Hamiltonian. Lemma 11 proves the existence of a weight of the quadratic penalty term such that the minimization of the corresponding augmented Hamiltonian results in a control update that reduces the value of the cost functional. Lemma 12 states that the SQH scheme stops if an iterate is PMP optimal. Theorem 13 proves the cost functional reducing properties of the SQH control updates. The main result is given in Theorem 14, which states the pointwise convergence of the SQH scheme towards a PMP consistent solution. In this ODE framework, the SQH method is applied to two optimal control problems. The first one is an optimal quantum control problem where it is shown that the SQH method converges much faster to an optimal solution than a globalized Newton method. The second optimal control problem is an optimal tumor treatment problem with a system of coupled highly non-linear state equations that describe the tumor growth. It is shown that the framework in which the convergence of the SQH scheme is proved is applicable to this highly non-linear case. Next, the case of PDE control problems is considered. First a general framework is discussed in which a solution to the corresponding optimal control problem fulfills the PMP conditions. In this case, many theoretical estimates are presented in Theorem 59 and Theorem 64 to prove in particular the essential boundedness of the state and adjoint variables.
The steps of the convergence analysis of the SQH scheme are analogous to those of the ODE case and result in Theorem 27, which states the PMP consistency of the solution obtained with the SQH scheme. This framework is applied to different elliptic and parabolic optimal control problems, including linear and bilinear control mechanisms, as well as non-linear state equations. Moreover, the SQH method is discussed for solving a state-constrained optimal control problem in an augmented formulation. In this case, it is shown in Theorem 30 that, as the weight of the augmentation term penalizing the violation of the state constraint increases, the measure of the state-constraint violation of the corresponding solution converges to zero. Furthermore, an optimal control problem with a non-smooth L\(^1\)-tracking term and a non-smooth state equation is investigated. For this purpose, an adjoint equation is defined and the SQH method is used to solve the corresponding optimal control problem. The final part of this thesis is devoted to a class of FP models related to specific stochastic processes. The discussion starts with a focus on random walks in which jumps are also included. This framework allows the derivation of a discrete FP model corresponding to a continuous FP model with jumps and with boundary conditions ranging from absorbing to totally reflecting. It also allows the consideration of the drift control resulting from an anisotropic probability of the steps of the random walk. Thereafter, in the PMP framework, two drift-diffusion processes and the corresponding FP models with two different control strategies for an optimal control problem with an expectation functional are considered. In the first strategy the controls depend on time, and in the second one the controls depend on space and time. In both cases, a solution to the corresponding optimal control problem is characterized by the PMP conditions stated in Theorem 48 and Theorem 49.
The well-posedness of the SQH scheme is shown in both cases and further conditions are discussed that ensure the convergence of the SQH scheme to a PMP consistent solution. The case of a space and time dependent control strategy results in a special structure of the corresponding PMP conditions that is exploited in another solution method, the so-called direct Hamiltonian (DH) method.
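The core SQH iteration described in the abstract above (forward state solve, backward adjoint solve, pointwise minimization of the penalized Hamiltonian, and adaptive adjustment of the penalty weight) can be sketched on a small scalar problem. The model, the discretization, and all names below are our own illustration, not code from the thesis:

```python
import numpy as np

# Hypothetical scalar problem (illustration only):
#   minimize J(u) = 0.5 * int_0^1 (x(t)^2 + u(t)^2) dt
#   subject to x'(t) = u(t), x(0) = 1.
# Hamiltonian to be minimized pointwise: H(x, u, p) = p*u + 0.5*(x^2 + u^2).

T, N = 1.0, 200
dt = T / N

def solve_state(u):
    # forward Euler sweep for the state equation x' = u, x(0) = 1
    x = np.empty(N + 1)
    x[0] = 1.0
    for k in range(N):
        x[k + 1] = x[k] + dt * u[k]
    return x

def solve_adjoint(x):
    # backward sweep for the adjoint equation p' = -x, p(T) = 0
    p = np.empty(N + 1)
    p[N] = 0.0
    for k in range(N, 0, -1):
        p[k - 1] = p[k] + dt * x[k]
    return p

def cost(u):
    x = solve_state(u)
    return 0.5 * dt * np.sum(x[:-1] ** 2 + u ** 2)

def sqh(num_iter=50, eps=1.0, sigma=1e-8):
    u = np.zeros(N)
    J = cost(u)
    for _ in range(num_iter):
        p = solve_adjoint(solve_state(u))
        # pointwise minimizer of the augmented Hamiltonian
        # H + eps*(u - u_old)^2, i.e. u = (2*eps*u_old - p) / (1 + 2*eps)
        u_new = (2.0 * eps * u - p[:-1]) / (1.0 + 2.0 * eps)
        J_new = cost(u_new)
        if J_new - J <= -sigma * np.sum((u_new - u) ** 2):
            u, J, eps = u_new, J_new, 0.8 * eps   # accept, relax the penalty
        else:
            eps *= 2.0                            # reject, tighten the penalty
    return u, J

u_opt, J_opt = sqh()
```

The acceptance test mirrors the sufficient-decrease condition that drives the convergence analysis: an update is kept only if the cost functional decreases proportionally to the size of the control change; otherwise the penalty weight is increased.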
A sequential quadratic Hamiltonian (SQH) scheme for solving different classes of non-smooth and non-convex PDE optimal control problems is investigated considering seven different benchmark problems with increasing difficulty. These problems include linear and nonlinear PDEs with linear and bilinear control mechanisms, non-convex and discontinuous costs of the controls, L\(^1\) tracking terms, and the case of state constraints.
The SQH method is based on the characterisation of optimality of PDE optimal control problems by Pontryagin's maximum principle (PMP). For each problem, a theoretical discussion of the PMP optimality condition is given, and results of numerical experiments are presented that demonstrate the wide range of applicability of the SQH scheme.
The characterization and numerical solution of two non-smooth optimal control problems governed by a Fokker–Planck (FP) equation are investigated in the framework of the Pontryagin maximum principle (PMP). The two FP control problems are related to the problem of determining open- and closed-loop controls for a stochastic process whose probability density function is modelled by the FP equation. In both cases, existence and PMP characterisation of optimal controls are proved, and PMP-based numerical optimization schemes are implemented that solve the PMP optimality conditions to determine the controls sought. Results of experiments are presented that successfully validate the proposed computational framework and allow a comparison of the two control strategies.
Endogenous circadian clocks of eukaryotic organisms are an established and rapidly developing research field. To investigate and simulate the effect of external stimuli on such clocks and their components in an effective model, we developed a software framework available for download and simulation. The application helps to understand the different effects involved in a mathematically simple and effective model: the effects of Zeitgebers, feedback loops and further modifying components. We start from a known mathematical oscillator model that is based on experimental molecular findings. This is extended with an effective framework that includes the impact of external stimuli on the circadian oscillations, including high-dose pharmacological treatment. In particular, the external stimuli framework defines a systematic procedure, via input-output interfaces, to couple different oscillators. The framework is validated by providing phase response curves and ranges of entrainment. Furthermore, Aschoff's rule is investigated computationally. It is shown how the external stimuli framework can be used to study biological effects such as points of singularity or oscillators integrating different signals at once. The mathematical framework and formalism are generic and allow studying, in general, the effect of external stimuli on oscillators and other biological processes. For easy replication of each numerical experiment presented in this work and easy implementation of the framework, the corresponding Mathematica files are made fully available. They can be downloaded at the following link: https://www.biozentrum.uni-wuerzburg.de/bioinfo/computing/circadian/.
In this thesis we consider a reactive transport model with precipitation-dissolution reactions from the geosciences. It consists of PDEs, ODEs, algebraic equations (AEs) and complementarity conditions (CCs). After discretization of this model we obtain a very large nonlinear and nonsmooth equation system. We tackle this system with the semismooth Newton method introduced by Qi and Sun. The focus of this thesis is on the application and convergence of this algorithm. We prove that this algorithm is well defined for this problem and locally, even quadratically, convergent for a BD-regular solution. We also deal with the arising linear systems, which are large and sparse, and show how they can be solved efficiently. An integral part of this investigation is the boundedness of a certain matrix-valued function, which is shown in a separate chapter. As a side quest we study how extremal eigenvalues (and singular values) of certain PDE operators involved in our discretized model can be estimated accurately.
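As an illustration of the kind of semismooth Newton iteration of Qi and Sun mentioned above, the following sketch applies it to a small linear complementarity problem via the Fischer-Burmeister reformulation. The problem data and all names are hypothetical, chosen only for demonstration:

```python
import numpy as np

# Solve the LCP  x >= 0,  F(x) = M x + q >= 0,  x^T F(x) = 0
# by semismooth Newton on the Fischer-Burmeister reformulation
#   Phi_i(x) = sqrt(x_i^2 + F_i(x)^2) - x_i - F_i(x) = 0.

M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])

def phi(x):
    F = M @ x + q
    return np.sqrt(x**2 + F**2) - x - F

def jacobian(x):
    # an element of the B-subdifferential of Phi
    F = M @ x + q
    r = np.sqrt(x**2 + F**2)
    r = np.where(r < 1e-12, 1.0, r)       # fixed subgradient choice at kinks
    Da = np.diag(x / r - 1.0)
    Db = np.diag(F / r - 1.0)
    return Da + Db @ M

def semismooth_newton(x, tol=1e-10, max_iter=20):
    for _ in range(max_iter):
        Fx = phi(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x - np.linalg.solve(jacobian(x), Fx)
    return x

x_sol = semismooth_newton(np.ones(2))
```

Since M here is symmetric positive definite, the LCP has the unique solution x = M⁻¹(-q), and the iteration exhibits the fast local convergence the abstract refers to.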
The first goal of this thesis is to generalize Loewner's famous differential equation to multiply connected domains. The resulting differential equations are known as Komatu--Loewner differential equations. We discuss Komatu--Loewner equations for canonical domains (circular slit disks, circular slit annuli and parallel slit half-planes). Additionally, we give a generalisation to several slits and discuss parametrisations that lead to constant coefficients. Moreover, we compare Komatu--Loewner equations with several slits to single slit Loewner equations.
Finally we generalise Komatu--Loewner equations to hulls satisfying a local growth property.
ADMM-Type Methods for Optimization and Generalized Nash Equilibrium Problems in Hilbert Spaces
(2020)
This thesis is concerned with a certain class of algorithms for the solution of constrained optimization problems and generalized Nash equilibrium problems in Hilbert spaces. This class of algorithms is inspired by the alternating direction method of multipliers (ADMM) and eliminates the constraints using an augmented Lagrangian approach. The alternating direction method consists of splitting the augmented Lagrangian subproblem into smaller and more easily manageable parts.
Before the algorithms are discussed, a substantial amount of background material, including the theory of Banach and Hilbert spaces, fixed-point iterations as well as convex and monotone set-valued analysis, is presented. Thereafter, certain optimization problems and generalized Nash equilibrium problems are reformulated and analyzed using variational inequalities and set-valued mappings. The analysis of the algorithms developed in the course of this thesis is rooted in these reformulations as variational inequalities and set-valued mappings.
The first algorithms discussed and analyzed are one weakly and one strongly convergent ADMM-type algorithm for convex, linearly constrained optimization. By equipping the associated Hilbert space with the correct weighted scalar product, the analysis of these two methods is accomplished using the proximal point method and the Halpern method.
The rest of the thesis is concerned with the development and analysis of ADMM-type algorithms for generalized Nash equilibrium problems that jointly share a linear equality constraint. The first class of these algorithms is completely parallelizable and uses a forward-backward idea for the analysis, whereas the second class of algorithms can be interpreted as a direct extension of the classical ADMM-method to generalized Nash equilibrium problems.
At the end of this thesis, the numerical behavior of the discussed algorithms is demonstrated on a collection of examples.
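A minimal finite-dimensional sketch of the alternating direction method of multipliers that these algorithms build on: the augmented Lagrangian subproblem is split into an x-step, a z-step, and a dual update. The thesis works in Hilbert spaces; the lasso-type example below is our own finite-dimensional illustration, not code from the thesis:

```python
import numpy as np

# ADMM for  minimize 0.5*||A x - b||^2 + lam*||z||_1  subject to  x - z = 0,
# using the augmented Lagrangian with penalty parameter rho.

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam, rho = 0.1, 1.0

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm(num_iter=500):
    n = A.shape[1]
    x = z = y = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    K = AtA + rho * np.eye(n)          # the x-update solves a linear system
    for _ in range(num_iter):
        x = np.linalg.solve(K, Atb + rho * z - y)   # minimize over x
        z = soft_threshold(x + y / rho, lam / rho)  # minimize over z (prox)
        y = y + rho * (x - z)                       # dual ascent step
    return x, z

x, z = admm()
```

The splitting is the point: each subproblem (a linear solve and a soft-thresholding) is far easier than the coupled problem, and the dual update enforces the constraint x = z in the limit.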
This thesis is devoted to a theoretical and numerical investigation of methods to solve open-loop non-zero-sum differential Nash games. These problems arise in many applications, e.g., biology, economics, and physics, where competition between different agents appears. In this case, the goal of each agent conflicts with those of the others, and a competition game can be interpreted as a coupled optimization problem for which, in general, an optimal solution does not exist. In fact, an optimal strategy for one player may be unsatisfactory for the others. For this reason, a solution of a game is sought as an equilibrium, and among the solution concepts proposed in the literature, that of Nash equilibrium (NE) is the focus of this thesis. The building blocks of the resulting differential Nash games are a dynamical model with different control functions associated with different players that pursue non-cooperative objectives. In particular, the focus of this thesis is on differential models having linear or bilinear state-strategy structures. In this framework, in the first chapter, some well-known results are recalled, especially for non-cooperative linear-quadratic differential Nash games. Then, a bilinear Nash game is formulated and analysed. The main achievement in this chapter is Theorem 1.4.2 concerning the existence of Nash equilibria for non-cooperative differential bilinear games. This result is obtained assuming a sufficiently small time horizon T, and an estimate of T is provided in Lemma 1.4.8 using specific properties of the regularized Nikaido-Isoda function. In Chapter 2, in order to solve a bilinear Nash game, a semi-smooth Newton (SSN) scheme combined with a relaxation method is investigated, where the choice of an SSN scheme is motivated by the presence of constraints on the players' actions that make the problem non-smooth.
The resulting method is proved to be locally convergent in Theorem 2.1, and an estimate on the relaxation parameter is also obtained that relates the relaxation factor to the time horizon of a Nash equilibrium and to the other parameters of the game. For the bilinear Nash game, a Nash bargaining problem is also introduced and discussed, aiming at determining an improvement of all players' objectives with respect to the Nash equilibrium. A characterization of a bargaining solution is given in Theorem 2.2.1, and a numerical scheme based on this result is presented that makes it possible to compute this solution on the Pareto frontier. Results of numerical experiments based on a quantum model of two spin particles and on a population dynamics model with two competing species are presented that successfully validate the proposed algorithms. In Chapter 3, a functional formulation of the classical homicidal chauffeur (HC) Nash game is introduced and a new numerical framework for its solution in a time-optimal formulation is discussed. This methodology combines a Hamiltonian-based scheme with proximal penalty, to determine the time horizon where the game takes place, with a Lagrangian optimal control approach and relaxation, to solve the Nash game at a fixed end-time. The resulting numerical optimization scheme has a bilevel structure, which aims at decoupling the computation of the end-time from the solution of the pursuit-evasion game. Several numerical experiments are performed to show the ability of the proposed algorithm to solve the HC game. Focusing on the case where a collision may occur, the time of this event is determined. The last part of this thesis deals with the analysis of a novel sequential quadratic Hamiltonian (SQH) scheme for solving open-loop differential Nash games. This method is formulated in the framework of Pontryagin's maximum principle and represents an efficient and robust extension of the successive approximations strategy in the realm of Nash games.
In the SQH method, the Hamilton-Pontryagin functions are augmented by a quadratic penalty term and the Nikaido-Isoda function is used as a selection criterion. The key idea of this SQH scheme is that the PMP characterization of Nash games leads to a finite-dimensional Nash game for any fixed time. A class of problems for which this finite-dimensional game admits a unique solution is identified, and for this class of games theoretical results are presented that prove the well-posedness of the proposed scheme. In particular, Proposition 4.2.1 shows that the selection criterion on the Nikaido-Isoda function is fulfilled. A comparison of the computational performance of the SQH scheme and of the SSN-relaxation method previously discussed is shown. Applications to linear-quadratic Nash games and variants with control constraints, weighted L\(^1\) costs of the players' actions and tracking objectives are presented that corroborate the theoretical statements.
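The role of the Nikaido-Isoda function as a selection criterion can be illustrated on a hypothetical static two-player quadratic game (not taken from the thesis), where a relaxed best-response iteration stops once no player can improve unilaterally:

```python
import numpy as np

# Toy game: player i minimizes  J_i(u) = 0.5*u_i^2 + b_i*u_i*u_j + c_i*u_i.
b = np.array([0.5, 0.5])
c = np.array([1.0, 2.0])

def J(i, ui, uj):
    return 0.5 * ui**2 + b[i] * ui * uj + c[i] * ui

def best_response(u):
    # unconstrained argmin of J_i in u_i for a fixed opponent action
    return np.array([-(b[0] * u[1] + c[0]), -(b[1] * u[0] + c[1])])

def nikaido_isoda(u, v):
    # total unilateral improvement when each player deviates from u_i to v_i
    return sum(J(i, u[i], u[1 - i]) - J(i, v[i], u[1 - i]) for i in range(2))

def relaxation(tau=0.5, tol=1e-10, max_iter=200):
    u = np.zeros(2)
    for _ in range(max_iter):
        v = best_response(u)
        if nikaido_isoda(u, v) < tol:   # no player can improve: equilibrium
            break
        u = (1 - tau) * u + tau * v     # relaxed step toward best responses
    return u

u_eq = relaxation()
```

At a Nash equilibrium the Nikaido-Isoda value vanishes, which is exactly the stopping test above; for this game the equilibrium can be checked by hand to be (0, -2).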
A sequential quadratic Hamiltonian scheme for solving open-loop differential Nash games is proposed and investigated. This method is formulated in the framework of the Pontryagin maximum principle and represents an efficient and robust extension of the successive approximations strategy for solving optimal control problems. Theoretical results are presented that prove the well-posedness of the proposed scheme, and results of numerical experiments are reported that successfully validate its computational performance.
We study the symmetrised rank-one convex hull of monoclinic-I martensite (a twelve-variant material) in the context of geometrically-linear elasticity. We construct sets of T3s, which are (non-trivial) symmetrised rank-one convex hulls of 3-tuples of pairwise incompatible strains. Moreover we construct a five-dimensional continuum of T3s and show that its intersection with the boundary of the symmetrised rank-one convex hull is four-dimensional. We also show that there is another kind of monoclinic-I martensite with qualitatively different semi-convex hulls which, so far as we know, has not been experimentally observed. Our strategy is to combine understanding of the algebraic structure of symmetrised rank-one convex cones with knowledge of the faceting structure of the convex polytope formed by the strains.
The Riemann zeta-function forms a central object in multiplicative number theory; its value-distribution encodes deep arithmetic properties of the prime numbers. Here, a crucial role is assigned to the analytic behavior of the zeta-function on the so-called critical line. In this thesis we study the value-distribution of the Riemann zeta-function near and on the critical line. Amongst others, we focus on the following.
PART I: A modified concept of universality, a-points near the critical line and a denseness conjecture attributed to Ramachandra.
The critical line is a natural boundary of the Voronin-type universality property of the Riemann zeta-function. We modify Voronin's concept by adding a scaling factor to the vertical shifts that appear in Voronin's universality theorem and investigate whether this modified concept is suitable for maintaining a certain universality property of the Riemann zeta-function near and on the critical line. It turns out that it is mainly the functional equation of the Riemann zeta-function that restricts the set of functions which can be approximated by this modified concept around the critical line.
Levinson showed that almost all a-points of the Riemann zeta-function lie in a certain funnel-shaped region around the critical line. We complement Levinson's result: Relying on arguments of the theory of normal families and the notion of filling discs, we detect a-points in this region which are very close to the critical line.
According to a folklore conjecture (often attributed to Ramachandra) one expects that the values of the Riemann zeta-function on the critical line lie dense in the complex numbers. We show that there are certain curves which approach the critical line asymptotically and have the property that the values of the zeta-function on these curves are dense in the complex numbers.
Many of our results in part I are independent of the Euler product representation of the Riemann zeta-function and apply for meromorphic functions that satisfy a Riemann-type functional equation in general.
PART II: Discrete and continuous moments.
The Lindelöf hypothesis deals with the growth behavior of the Riemann zeta-function on the critical line. Due to classical works by Hardy and Littlewood, the Lindelöf hypothesis can be reformulated in terms of power moments to the right of the critical line. Tanaka showed recently that the expected asymptotic formulas for these power moments are true in a certain measure-theoretical sense; roughly speaking he omits a set of Banach density zero from the path of integration of these moments. We provide a discrete and integrated version of Tanaka's result and extend it to a large class of Dirichlet series connected to the Riemann zeta-function.
An efficient and accurate computational framework for solving control problems governed by quantum spin systems is presented. Spin systems are extremely important in modern quantum technologies such as nuclear magnetic resonance spectroscopy, quantum imaging and quantum computing. In these applications, two classes of quantum control problems arise: optimal control problems and exact-controllability problems, both with a bilinear control structure. These models correspond to the Schrödinger-Pauli equation, describing the time evolution of a spinor, and the Liouville-von Neumann master equation, describing the time evolution of a density operator. This thesis focuses on quantum control problems governed by these models. An appropriate definition of the optimization objectives and of the admissible set of control functions makes it possible to construct controls with specific properties. These properties are in general required by the physics and the technologies involved in quantum control applications. A main purpose of this work is to address non-differentiable quantum control problems. For this reason, a computational framework is developed to address optimal control problems, with a possible L\(^1\)-penalization term in the cost functional, and exact-controllability problems. In both cases the set of admissible control functions is a subset of a Hilbert space. The bilinear control structure of the quantum model, the L\(^1\)-penalization term and the control constraints generate strong non-linearities that make it difficult to solve and analyse the corresponding control problems. The first part of this thesis focuses on the physical description of the spin of particles and of the magnetic resonance phenomenon. Afterwards, the controlled Schrödinger-Pauli equation and the Liouville-von Neumann master equation are discussed. These equations, like many other controlled quantum models, can be represented by dynamical systems with a bilinear control structure.
In the second part of this thesis, theoretical investigations of optimal control problems, with a possible L\(^1\)-penalization term in the objective and control constraints, are considered. In particular, existence of solutions, optimality conditions, and regularity properties of the optimal controls are discussed. In order to solve these optimal control problems, semi-smooth Newton methods are developed and proved to be superlinearly convergent. The main difficulty in the implementation of a Newton method for optimal control problems comes from the dimension of the Jacobian operator. In a discrete form, the Jacobian is a very large matrix, and this fact makes its construction infeasible from a practical point of view. For this reason, the focus of this work is on inexact Krylov-Newton methods, which combine the Newton method with Krylov iterative solvers for linear systems and make it possible to avoid the construction of the discrete Jacobian. In the third part of this thesis, two methodologies for the exact controllability of quantum spin systems are presented. The first method consists of a continuation technique, while the second method is based on a particular reformulation of the exact-control problem. Both these methodologies address minimum L\(^2\)-norm exact-controllability problems. In the fourth part, the thesis focuses on the numerical analysis of quantum control problems. In particular, the modified Crank-Nicolson scheme as an adequate time discretization of the Schrödinger equation is discussed, the first-discretize-then-optimize strategy is used to obtain a discrete reduced-gradient formula for the differentiable part of the optimization objective, and implementation details and globalization strategies to guarantee an adequate numerical behaviour of semi-smooth Newton methods are treated.
In the last part of this work, several numerical experiments are performed to validate the theoretical results and demonstrate the ability of the proposed computational framework to solve quantum spin control problems.
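The norm-preserving character of Crank-Nicolson time stepping for bilinear spin models can be seen in a small sketch. We use the standard Cayley-form scheme on a hypothetical two-level system, so the details differ from the modified scheme discussed in the thesis:

```python
import numpy as np

# Bilinear two-level spin model  i * psi'(t) = (H0 + u(t)*H1) psi(t),
# advanced with the Crank-Nicolson (Cayley) step
#   (I + i*dt/2*H) psi_{n+1} = (I - i*dt/2*H) psi_n,
# which is exactly unitary for Hermitian H.

sx = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli matrices
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H0, H1 = 0.5 * sz, sx

def propagate(psi0, u, dt):
    I = np.eye(2, dtype=complex)
    psi = psi0.astype(complex)
    for uk in u:                       # one control value per time step
        H = H0 + uk * H1               # Hamiltonian on the current step
        psi = np.linalg.solve(I + 0.5j * dt * H, (I - 0.5j * dt * H) @ psi)
    return psi

psi0 = np.array([1.0, 0.0], dtype=complex)
u = 0.3 * np.ones(100)
psi = propagate(psi0, u, dt=0.05)
```

Because each step is a Cayley transform of a Hermitian matrix, the norm of the spinor is preserved to machine precision, which is the property that makes this discretization adequate for the Schrödinger equation.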
Applications in various research areas, such as signal processing, quantum computing, and computer vision, can be described as constrained optimization tasks on certain subsets of tensor products of vector spaces. In this work, we make use of techniques from Riemannian geometry and analyze optimization tasks on subsets of so-called simple tensors which can be equipped with a differentiable structure. In particular, we introduce a generalized Rayleigh-quotient function on the tensor product of Grassmannians and on the tensor product of Lagrange-Grassmannians. Its optimization enables a unified approach to well-known tasks from different areas of numerical linear algebra, such as: best low-rank approximations of tensors (data compression), computing geometric measures of entanglement (quantum computing) and subspace clustering (image processing). We perform a thorough analysis of the critical points of the generalized Rayleigh quotient and develop intrinsic numerical methods for its optimization. Explicitly, using techniques from Riemannian optimization, we present two types of algorithms: a Newton-like and a conjugate gradient algorithm. Their performance is analysed and compared with established methods from the literature.
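As a concrete special case of maximizing a generalized Rayleigh quotient over a product of manifolds, the best rank-1 approximation of a 3-tensor can be computed by the classical higher-order power method on a product of spheres. This sketch is our own illustration and does not reproduce the Grassmannian algorithms of the thesis:

```python
import numpy as np

# Higher-order power method: maximize the multilinear Rayleigh quotient
#   sigma(a, b, c) = <T, a (x) b (x) c>  over unit vectors a, b, c,
# by cyclically updating each factor to the normalized contraction.

rng = np.random.default_rng(2)
T = rng.standard_normal((4, 5, 6))

def normalize(v):
    return v / np.linalg.norm(v)

def rank1_power_method(T, num_iter=100):
    a = normalize(rng.standard_normal(T.shape[0]))
    b = normalize(rng.standard_normal(T.shape[1]))
    c = normalize(rng.standard_normal(T.shape[2]))
    for _ in range(num_iter):
        a = normalize(np.einsum('ijk,j,k->i', T, b, c))
        b = normalize(np.einsum('ijk,i,k->j', T, a, c))
        c = normalize(np.einsum('ijk,i,j->k', T, a, b))
    sigma = np.einsum('ijk,i,j,k->', T, a, b, c)   # the Rayleigh quotient
    return sigma, a, b, c

sigma, a, b, c = rank1_power_method(T)
```

Each update is the exact maximizer over one factor with the others fixed, so sigma increases monotonically; the resulting sigma * a (x) b (x) c is a (locally) best rank-1 approximation of T.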
In this work, we consider impulsive dynamical systems evolving on an infinite-dimensional space and subjected to external perturbations. We look for stability conditions that guarantee input-to-state stability for such systems. Our new dwell-time conditions allow for the situation where both the continuous and the discrete dynamics are unstable simultaneously. Lyapunov-like methods are developed for this purpose. Illustrative finite- and infinite-dimensional examples are provided to demonstrate the application of the main results. These examples cannot be treated by any other published approach and demonstrate the effectiveness of our results.
Many modern statistically efficient methods come with tremendous computational challenges, often leading to large-scale optimisation problems. In this work, we examine such computational issues for recently developed estimation methods in nonparametric regression with a specific view on image denoising. We consider in particular certain variational multiscale estimators which are statistically optimal in the minimax sense, yet computationally intensive. Such an estimator is computed as the minimiser of a smoothness functional (e.g., the TV norm) over the class of all estimators such that none of its coefficients with respect to a given multiscale dictionary is statistically significant. The multiscale Nemirovski-Dantzig estimator (MIND) obtained in this way can incorporate any convex smoothness functional and combine it with a proper dictionary including wavelets, curvelets and shearlets. The computation of MIND in general requires solving a high-dimensional constrained convex optimisation problem with a specific structure of the constraints induced by the statistical multiscale testing criterion. To solve this explicitly, we discuss three different algorithmic approaches: the Chambolle-Pock, ADMM and semismooth Newton algorithms. Algorithmic details and an explicit implementation are presented, and the solutions are then compared numerically in a simulation study and on various test images. We thereby recommend the Chambolle-Pock algorithm in most cases for its fast convergence. We stress that our analysis can also be transferred to signal recovery and other denoising problems to recover more general objects whenever it is possible to borrow statistical strength from data patches of similar object structure.
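A one-dimensional sketch of the Chambolle-Pock primal-dual iteration, applied to plain TV-regularised (ROF) denoising rather than the full multiscale-constrained MIND problem, illustrates the recommended algorithm. The discrete operators, the signal, and the step sizes below are our own choices:

```python
import numpy as np

# Chambolle-Pock for the 1-D ROF model
#   min_x 0.5*||x - f||^2 + lam*||D x||_1,   D = forward differences.

def grad(x):                 # forward differences, zero at the right boundary
    return np.append(x[1:] - x[:-1], 0.0)

def div(p):                  # negative adjoint of grad: div = -grad^T
    return np.concatenate(([p[0]], p[1:-1] - p[:-2], [-p[-2]]))

def chambolle_pock_tv(f, lam, num_iter=300):
    tau = sigma = 0.25       # tau*sigma*||D||^2 <= 1, since ||D||^2 <= 4
    x = xbar = f.copy()
    p = np.zeros_like(f)
    for _ in range(num_iter):
        p = p + sigma * grad(xbar)
        p = p / np.maximum(1.0, np.abs(p) / lam)        # project onto |p|<=lam
        x_old = x
        x = (x + tau * div(p) + tau * f) / (1.0 + tau)  # proximal step in x
        xbar = 2 * x - x_old                            # extrapolation
    return x

f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * np.sin(np.arange(100))
u = chambolle_pock_tv(f, lam=0.5)
```

The dual variable handles the non-smooth TV term by a simple projection, while the primal update is a cheap proximal step; this decoupling is what makes the method fast on large denoising problems.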
To study coisotropic reduction in the context of deformation quantization we introduce constraint manifolds and constraint algebras as the basic objects encoding the additional information needed to define a reduction. General properties of various categories of constraint objects and their compatibility with reduction are examined. A constraint Serre-Swan theorem, identifying constraint vector bundles with certain finitely generated projective constraint modules, as well as a constraint symbol calculus are proved. After developing the general deformation theory of constraint algebras, including constraint Hochschild cohomology and constraint differential graded Lie algebras, the second constraint Hochschild cohomology for the constraint algebra of functions on a constraint flat space is computed.
Part 1 of this work contains a summary of fundamental results from functional analysis as well as an introduction to integral and differential calculus in Fréchet spaces. In particular, Chapter 2 provides a detailed presentation of the Lebesgue-Bochner integral on Fréchet spaces. Part 2 treats the theory of linear differential equations on Fréchet spaces. To this end, Chapter 3 characterizes strongly differentiable semigroups and their infinitesimal generators. In Chapter 4 these results are used to study linear evolution equations (of hyperbolic or parabolic type). Part 3 contains the central results of the work. In Chapter 5 two existence and uniqueness theorems for nonlinear ordinary differential equations in tame Fréchet spaces are proved. Chapter 6 provides an application of the results of Chapter 5 to nonlinear first-order partial differential equations.
A completely decomposable group is a direct sum of subgroups of the rationals. An almost completely decomposable group is a torsion-free abelian group that contains a completely decomposable group as a subgroup of finite index. Tight subgroups are maximal subgroups (with respect to set inclusion) among the completely decomposable subgroups of an almost completely decomposable group. In this dissertation we show an extended version of the theorem of Bezout, give a new criterion for the tightness of a completely decomposable subgroup, derive some conditions under which a tight subgroup is regulating, and generalize a theorem of Campagna. We give an example of an almost completely decomposable group all of whose regulating subgroups do not have a quotient with minimal exponent. We show that among the types of elements of a coset modulo a completely decomposable group there exists a unique maximal type and define this type to be "the" coset type. We give criteria for tightness and regulating in terms of coset types as well as a representation of the type subgroups using coset types. We introduce the notion of reducible cosets and show their key role for transitions from one completely decomposable subgroup up to another one containing the first as a proper subgroup. We give an example of a tight, but not regulating, subgroup which contains the regulator. We develop the notion of a fully single covered subset of a lattice, show that V-free implies fully single covered, but not necessarily vice versa, and we define an equivalence relation on the set of all finite subsets of a given lattice. We develop an extension of ordinary Hasse diagrams, and apply the lattice-theoretic results to the lattice of types and to almost completely decomposable groups.
In this thesis different algorithms for the solution of generalized Nash equilibrium problems with the focus on global convergence properties are developed. A globalized Newton method for the computation of normalized solutions, a nonsmooth algorithm based on an optimization reformulation of the game-theoretic problem, and a merit function approach and an interior point method for the solution of the concatenated Karush-Kuhn-Tucker-system are analyzed theoretically and numerically. The interior point method turns out to be one of the best existing methods for the solution of generalized Nash equilibrium problems.
Bivariate copula monitoring
(2022)
The assumption of multivariate normality underlying the Hotelling T\(^{2}\) chart is often violated for process data. The multivariate dependency structure can be separated from the marginals with the help of copula theory, which permits modelling association structures beyond the covariance matrix. Copula-based estimation and testing routines have reached maturity regarding a variety of practical applications. We have constructed a rich design matrix for the comparison of the Hotelling T\(^{2}\) chart with the copula test by Verdier and the copula test by Vuong, which allows for weighting the observations adaptively. Based on the design matrix, we have conducted a large and computationally intensive simulation study. The results show that the copula test by Verdier performs better than Hotelling T\(^{2}\) in a large variety of out-of-control cases, whereas the weighted Vuong scheme often fails to provide an improvement.
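A minimal Phase-II sketch of the Hotelling T\(^{2}\) chart used as the baseline in this comparison. The data, the in-control parameters, and the false-alarm level below are hypothetical; for p = 2 the chi-square control limit has a closed form:

```python
import numpy as np

# Hotelling T^2 monitoring: estimate mean and covariance from in-control
# Phase-I data, then flag new observations whose T^2 statistic exceeds
# the chi-square upper control limit.

rng = np.random.default_rng(1)
cov = [[1.0, 0.5], [0.5, 1.0]]
phase1 = rng.multivariate_normal([0, 0], cov, size=500)
mu = phase1.mean(axis=0)
Sinv = np.linalg.inv(np.cov(phase1.T))

def t2(x):
    d = x - mu
    return d @ Sinv @ d

# for p = 2 the chi-square tail is P(T^2 > ucl) = exp(-ucl/2),
# so the limit at false-alarm rate alpha = 0.0027 is ucl = -2*ln(alpha)
ucl = -2.0 * np.log(0.0027)

shifted = np.array([4.0, -4.0])    # an out-of-control observation
```

An in-control point near mu yields a small T\(^2\) value, while a shifted observation such as the one above falls far beyond the control limit and triggers a signal.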
Mathematica is an excellent program for carrying out mathematical computations, even very complex ones, in a relatively simple way. This script is intended to give a very short introduction to Mathematica and to serve as a reference for some common applications of Mathematica. The following rough outline is used: - Basics: graphical interface, simple computations, formula input - Usage: presentation of some commands and insight into how they work - Practice: exemplary solutions of some Abitur and exercise problems
In many problems in which a population is divided into different classes, it is not so much the relative class sizes as the number of classes that matters. A biologist, for example, is interested in how many species of a genus there are; a numismatist in how many coins or mints existed in an epoch; a computer scientist in how many distinct entries a very large database contains; a programmer in how many bugs a piece of software contains; and a Germanist in how large an author's vocabulary was or is. This species richness is the simplest and most intuitive way to characterize a population. However, only in collections in which the total number of elements is known and relatively small can the number of distinct species be determined by a complete census. In all other cases it is necessary to estimate the number of species.
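A standard estimator of this kind (not necessarily the one studied in the thesis) is the Chao1 lower bound, which infers unseen species from the counts of species observed exactly once or twice; a minimal sketch:

```python
from collections import Counter

def chao1(sample):
    """Bias-corrected Chao1 lower-bound estimate of species
    richness from a sample of individuals labelled by species."""
    counts = Counter(sample)
    s_obs = len(counts)                                  # species seen
    f1 = sum(1 for c in counts.values() if c == 1)       # singletons
    f2 = sum(1 for c in counts.values() if c == 2)       # doubletons
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

# Three species seen once, one twice: more species likely remain unseen
print(chao1(["a", "b", "c", "d", "d"]))  # 4 + 3*2/(2*2) = 5.5
```

Many singletons relative to doubletons indicate that the sample has missed a substantial part of the population, which is exactly the situation where a census is infeasible.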
In this paper we derive new results on multivariate extremes and D-norms. In particular, we establish new characterizations of the multivariate max-domain of attraction property. The limit distribution of certain multivariate exceedances above high thresholds is derived, and the distribution of the generator of a D-norm on R\(^{d}\) whose components sum up to d is obtained. Finally, we introduce exchangeable D-norms and show that the set of exchangeable D-norms is a simplex.
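For readers unfamiliar with the notion: a D-norm on R\(^{d}\) can be written as ||x||_D = E(max_i |x_i| Z_i) for a nonnegative generator Z with E(Z_i) = 1, and it always lies between the sup-norm and the L1-norm. A Monte Carlo sketch (the exponential generator below is an illustrative choice, not one from the paper):

```python
import numpy as np

def d_norm(x, generator_draws):
    """Monte Carlo evaluation of ||x||_D = E(max_i |x_i| Z_i)
    given draws of a generator Z >= 0 with E(Z_i) = 1."""
    return np.mean(np.max(np.abs(x) * generator_draws, axis=1))

rng = np.random.default_rng(1)
x = np.array([1.0, 2.0])

# The constant generator Z = (1, 1) reproduces the sup-norm exactly
assert d_norm(x, np.ones((1, 2))) == 2.0

# Independent standard-exponential components also form a valid generator
Z = rng.exponential(size=(200_000, 2))
val = d_norm(x, Z)
print(val)  # between max|x_i| = 2 and sum|x_i| = 3
```

The two bounds correspond to complete dependence and complete independence of the component extremes, respectively.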
It is shown that the rate of convergence in the von Mises conditions of extreme value theory determines the distance of the underlying distribution function F from a generalized Pareto distribution. The distance is measured in terms of the pertaining densities, with the limit being ultimately attained if and only if F is ultimately a generalized Pareto distribution. Consequently, the rate of convergence of the extremes in an iid sample, whether in terms of the distribution of the largest order statistics or of corresponding empirical truncated point processes, is determined by the rate of convergence in the von Mises condition. We prove that the converse is also true.
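The generalized Pareto approximation referred to here can be illustrated numerically: exceedances of simulated exponential data above a high threshold are (by memorylessness) themselves exponential, i.e. GPD with shape parameter 0; a sketch using SciPy's `genpareto`:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
# Exponential data: threshold excesses follow a GPD with shape 0, scale 1
data = rng.exponential(size=100_000)
u = np.quantile(data, 0.95)
excesses = data[data > u] - u

# Fix the location at 0 so only shape and scale are estimated
shape, loc, scale = genpareto.fit(excesses, floc=0)
print(shape, scale)  # shape near 0, scale near 1
```

How fast such fitted parameters approach their limits as the threshold grows is precisely the kind of convergence rate the paper links to the von Mises conditions.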
The analysis of real data by means of statistical methods, with the aid of a software package common in industry and administration, is usually not an integral part of mathematics studies, but it will certainly be part of future professional work. The present book links up elements of time series analysis with a selection of statistical procedures used in general practice, including the statistical software package SAS. Consequently this book addresses students of statistics as well as students of other branches such as economics, demography and engineering, where lectures on statistics belong to their academic training. But it is also intended for the practitioner who, beyond the use of statistical tools, is interested in their mathematical background. Numerous problems illustrate the applicability of the presented statistical procedures, with SAS providing the solutions. The programs used are explicitly listed and explained. No previous experience with SAS or with any particular computer system is required, so the training period is short. This book is meant for a two-semester course (lecture, seminar or practical training), where the first three chapters can be dealt with in the first semester. They provide the principal components of the analysis of a time series in the time domain. Chapters 4, 5 and 6 deal with its analysis in the frequency domain and can be worked through in the second term. To understand the mathematical background, some concepts are useful, such as convergence in distribution, stochastic convergence and the maximum likelihood estimator, as well as basic knowledge of test theory, so that work on the book can start after an introductory lecture on stochastics. Each chapter includes exercises; an exhaustive treatment is recommended. Chapter 7 (case study) deals with a practical case and demonstrates the presented methods.
This chapter can be used independently in a seminar or practical training course if the concepts of time series analysis are already well understood. The book is subdivided throughout into a statistical part and an SAS-specific part. For clarity, the SAS-specific parts are highlighted. This book is an open source project under the GNU Free Documentation License.
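The book works in SAS; as a language-neutral sketch of one of the principal time-domain tools it covers, the sample autocorrelation function can be computed and checked on a simulated AR(1) series (Python is used here only for illustration):

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelation function, a basic tool of
    time-domain time series analysis."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(3)
# AR(1) series: theoretical autocorrelation at lag k is 0.8**k
x = np.zeros(5000)
for t in range(1, 5000):
    x[t] = 0.8 * x[t - 1] + rng.normal()

acf = sample_acf(x, 3)
print(acf[1])  # close to 0.8
```

Plots of exactly this kind of estimate (correlograms) are what SAS procedures such as those presented in the book produce for real data.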
The analysis of real data by means of statistical methods, with the aid of a software package common in industry and administration, is usually not an integral part of mathematics studies, but it will certainly be part of future professional work. The present book links up elements of time series analysis with a selection of statistical procedures used in general practice, including the statistical software package SAS (Statistical Analysis System). Consequently this book addresses students of statistics as well as students of other branches such as economics, demography and engineering, where lectures on statistics belong to their academic training. But it is also intended for the practitioner who, beyond the use of statistical tools, is interested in their mathematical background. Numerous problems illustrate the applicability of the presented statistical procedures, with SAS providing the solutions. The programs used are explicitly listed and explained. No previous experience with SAS or with any particular computer system is required, so the training period is short. This book is meant for a two-semester course (lecture, seminar or practical training), where the first two chapters can be dealt with in the first semester. They provide the principal components of the analysis of a time series in the time domain. Chapters 3, 4 and 5 deal with its analysis in the frequency domain and can be worked through in the second term. To understand the mathematical background, some concepts are useful, such as convergence in distribution, stochastic convergence and the maximum likelihood estimator, as well as basic knowledge of test theory, so that work on the book can start after an introductory lecture on stochastics. Each chapter includes exercises; an exhaustive treatment is recommended. The book is subdivided throughout into a statistical part and an SAS-specific part.
For clarity, the SAS-specific part, including the diagrams generated with SAS, always starts with a computer symbol, marking the beginning of a session at the computer, and ends with a printer symbol marking the end of that session. This book is an open source project under the GNU Free Documentation License.