This thesis is devoted to the numerical verification of optimality conditions for non-convex optimal control problems. In the first part, we are concerned with a-posteriori verification of sufficient optimality conditions. It is common knowledge that verification of such conditions for general non-convex PDE-constrained optimization problems is very challenging. We propose a method to verify second-order sufficient conditions for a general class of optimal control problems. If the proposed verification method confirms the fulfillment of the sufficient condition, then a-posteriori error estimates can be computed. A special ingredient of our method is an error analysis for the Hessian of the underlying optimization problem. We derive conditions under which positive definiteness of the Hessian of the discrete problem implies positive definiteness of the Hessian of the continuous problem. The results are complemented with numerical experiments. In the second part, we investigate adaptive methods for optimal control problems with finitely many control parameters. We analyze a-posteriori error estimates based on verification of second-order sufficient optimality conditions using the method developed in the first part. Reliability and efficiency of the error estimator are shown. We illustrate through numerical experiments the use of the estimator in guiding adaptive mesh refinement.
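The key implication can be illustrated with a toy numerical check (a sketch under stated assumptions, not the verification method of the thesis): if the smallest eigenvalue of the discrete Hessian exceeds an assumed bound on the Hessian discretization error, positive definiteness carries over to the continuous problem. All matrices and the error bound below are hypothetical.

```python
import numpy as np

def verify_coercivity(H_h, hessian_error_bound):
    """Sufficient check: if the smallest eigenvalue of the discrete
    Hessian H_h exceeds the (assumed) bound on ||H - H_h||, then the
    continuous Hessian H is positive definite as well."""
    lam_min = np.linalg.eigvalsh(H_h).min()
    return lam_min > hessian_error_bound, lam_min

# hypothetical discrete reduced Hessian of a regularized problem,
# coercive by construction (A^T A is positive semidefinite)
n = 50
A = np.random.default_rng(0).standard_normal((n, n))
H_h = A.T @ A / n + 0.5 * np.eye(n)

ok, lam = verify_coercivity(H_h, hessian_error_bound=0.1)
```

The same logic underlies the a-posteriori verification: the eigenvalue computation is discrete and cheap, while the error bound is the analytical ingredient.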
The bounded input bounded output (BIBO) stability of a nonlinear Caputo fractional system with time-varying bounded delay and nonlinear output is studied. Utilizing the Razumikhin method, Lyapunov functions, and appropriate fractional derivatives of Lyapunov functions, some new bounded input bounded output stability criteria are derived. Also, explicit bounds on the output that are independent of the initial time are provided. Uniform BIBO stability and uniform BIBO stability with input threshold are studied. A numerical simulation is carried out to show the system's dynamic response and demonstrate the effectiveness of our theoretical results.
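The BIBO property can be illustrated on a much simpler stand-in (a scalar Caputo system without delay or nonlinearity, not the system of the paper): a Grünwald-Letnikov discretization of D^α x = -x + u(t) with x(0) = 0 keeps the output bounded whenever the input is bounded.

```python
import numpy as np

def simulate(alpha=0.7, T=20.0, h=0.01, u=lambda t: np.sin(t)):
    """Implicit Grünwald-Letnikov scheme for D^alpha x = -x + u(t),
    x(0) = 0 (a toy illustration of BIBO behaviour)."""
    n = int(T / h)
    w = np.empty(n + 1)          # GL binomial weights
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    x = np.zeros(n + 1)
    ha = h ** (-alpha)
    for i in range(1, n + 1):
        hist = np.dot(w[1:i + 1], x[i - 1::-1])   # fractional memory term
        x[i] = (u(i * h) - ha * hist) / (1.0 + ha)
    return x

x = simulate()
```

Since the GL weights w_k (k >= 1) sum to -1 in absolute value 1, an induction shows |x_i| <= sup|u| = 1, mirroring an explicit input-independent-of-initial-time bound.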
We give a collection of 16 examples which show that compositions \(g\) \(\circ\) \(f\) of well-behaved functions \(f\) and \(g\) can be badly behaved. Remarkably, in 10 of the 16 examples it suffices to take as outer function \(g\) simply a power-type or characteristic function. Such a collection of examples may serve as a source of exercises for a calculus course.
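One such example, in the spirit of the power-type outer functions mentioned above (the concrete pair here is our own choice, not necessarily one of the 16): f(x) = x² is smooth everywhere and g(y) = √y is continuous on [0, ∞), yet g ∘ f = |x| fails to be differentiable at 0, as the one-sided difference quotients show.

```python
import math

f = lambda x: x * x          # smooth everywhere
g = lambda y: math.sqrt(y)   # power-type outer function, continuous at 0

comp = lambda x: g(f(x))     # equals |x|

h = 1e-6
right = (comp(h) - comp(0.0)) / h      # right difference quotient -> +1
left = (comp(-h) - comp(0.0)) / (-h)   # left difference quotient  -> -1
```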
Extreme value theory aims at modeling extreme but rare events from a probabilistic point of view. It is well-known that so-called generalized Pareto distributions, which are briefly reviewed in Chapter 1, are the only reasonable probability distributions suited for modeling observations above a high threshold, such as waves exceeding the height of a certain dike, earthquakes having at least a certain intensity, and, after applying a simple transformation, share prices falling below some low threshold. However, there are cases for which a generalized Pareto model might fail. Therefore, Chapter 2 derives certain neighborhoods of a generalized Pareto distribution and provides several statistical tests for these neighborhoods, where the cases of observing finite dimensional data and of observing continuous functions on [0,1] are considered. By using a notation based on so-called D-norms it is shown that these tests consistently link both frameworks, the finite dimensional and the functional one. Since the derivation of the asymptotic distributions of the test statistics requires certain technical restrictions, Chapter 3 analyzes these assumptions in more detail. It provides in particular some examples of distributions that satisfy the null hypothesis and of those that do not. Since continuous copula processes are crucial tools for the functional versions of the proposed tests, it is also discussed whether those copula processes actually exist for a given set of data. Moreover, some practical advice is given on how to choose the free parameters incorporated in the test statistics. Finally, a simulation study in Chapter 4 compares the three different test statistics with another test found in the literature that has a similar null hypothesis. This thesis ends with a short summary of the results and an outlook to further open questions.
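The peaks-over-threshold idea behind the generalized Pareto model can be sketched as follows (synthetic exponential data and a SciPy fit; the thresholds, sample sizes, and distribution are illustrative assumptions, not choices from the thesis). Exceedances of exponential data above a high threshold follow a generalized Pareto distribution with shape parameter 0.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=20000)   # synthetic observations

u = np.quantile(data, 0.95)   # high threshold
exc = data[data > u] - u      # exceedances above the threshold

# fit a generalized Pareto distribution; location fixed at 0 because
# exceedances start exactly at the threshold
shape, loc, scale = genpareto.fit(exc, floc=0)
```

For exponential data the fitted shape should be close to 0 and the fitted scale close to the original scale 2, which is a quick sanity check on the peaks-over-threshold machinery.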
Finite volume methods for compressible Euler equations suffer from an excessive diffusion in the limit of low Mach numbers. This PhD thesis explores new approaches to overcome this.
The analysis of a simpler set of equations that also possess a low Mach number limit is found to give valuable insights. These equations are the acoustic equations obtained as a linearization of the Euler equations. For both systems the limit is characterized by a divergence-free velocity. This constraint is nontrivial only in multiple spatial dimensions. As the Jacobians of the acoustic system do not commute, acoustics cannot be reduced to some kind of multi-dimensional advection. Therefore, an exact solution in multiple spatial dimensions is first obtained. It is shown that the low Mach number limit can be interpreted as a limit of long times.
It is found that the origin of the inability of a scheme to resolve the low Mach number limit is the lack of a discrete counterpart to the limit of long times. Numerical schemes whose discrete stationary states discretize all the analytic stationary states of the PDE are called stationarity preserving. It is shown that for the acoustic equations, stationarity preserving schemes are vorticity preserving and are those that are able to resolve the low Mach limit (low Mach compliant). This establishes a new link between these three concepts.
Stationarity preservation is studied in detail for both dimensionally split and multi-dimensional schemes for linear acoustics. In particular it is explained why the same multi-dimensional stencils appear in the literature in very different contexts: these stencils are unique discretizations of the divergence that allow for stabilizing stationarity preserving diffusion.
Stationarity preservation can also be generalized to nonlinear systems such as the Euler equations. Several ways in which such numerical schemes can be constructed for the Euler equations are presented. In particular a low Mach compliant numerical scheme is derived that uses a novel construction idea. Its diffusion is chosen such that it depends on the velocity divergence rather than just derivatives of the different velocity components. This is demonstrated to overcome the low Mach number problem. The scheme shows satisfactory results in numerical simulations and has been found to be stable under explicit time integration.
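Why divergence-based diffusion helps can be seen in a small discrete experiment (a sketch of the mechanism with a hypothetical vortex field, not the scheme of the thesis): on a divergence-free velocity field, diffusion driven by the discrete divergence vanishes, while componentwise diffusion such as a Laplacian of each velocity component does not.

```python
import numpy as np

n = 64
h = 1.0 / n
x = np.arange(n) * h
X, Y = np.meshgrid(x, x, indexing="ij")

# divergence-free (vortex-like) velocity field, the structure that
# survives in the low Mach number limit
U = 2 * np.pi * np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
V = -2 * np.pi * np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)

def ddx(f):  # periodic central difference in x
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * h)

def ddy(f):  # periodic central difference in y
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * h)

# divergence-based diffusion source: vanishes on this field
div = ddx(U) + ddy(V)
# componentwise diffusion (Laplacian of u): does not vanish
lap_u = ddx(ddx(U)) + ddy(ddy(U))
```

A scheme whose diffusion is proportional to `div` leaves the low Mach structure untouched, whereas `lap_u`-type diffusion smears it out.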
We present a technique for computing multi-branch-point covers with prescribed ramification and demonstrate the applicability of our method in relatively large degrees by computing several families of polynomials with symplectic and linear Galois groups.
As a first application, we present polynomials over \(\mathbb{Q}(\alpha,t)\) for the primitive rank-3 groups \(PSp_4(3)\) and \(PSp_4(3).C_2\) of degree 27 and for the 2-transitive group \(PSp_6(2)\) in its actions on 28 and 36 points, respectively. Moreover, the degree-28 polynomial for \(PSp_6(2)\) admits infinitely many totally real specializations.
Next, we present the first (to the best of our knowledge) explicit polynomials for the 2-transitive linear groups \(PSL_4(3)\) and \(PGL_4(3)\) of degree 40, and the imprimitive group \(Aut(PGL_4(3))\) of degree 80.
Additionally, we give a negative answer to a question by König on whether there exists a degree-63 rational function with rational coefficients and monodromy group \(PSL_6(2)\) ramified over at least four points. This is achieved through the explicit computation of the corresponding hyperelliptic genus-3 Hurwitz curve parameterizing this family, followed by a search for rational points on it. As a byproduct of our calculations, we obtain the first explicit \(Aut(PSL_6(2))\)-realizations over \(\mathbb{Q}(t)\).
Finally, we present a technique by Elkies for bounding the transitivity degree of Galois groups. This provides an alternative way to verify the Galois groups from the previous chapters and also yields a proof that the monodromy group of a degree-276 cover computed by Monien is isomorphic to the sporadic 2-transitive Conway group \(Co_3\).
Theoretical and numerical investigation of optimal control problems governed by kinetic models
(2021)
This thesis is devoted to the numerical and theoretical analysis of ensemble optimal control problems governed by kinetic models. The formulation and study of these problems have been put forward in recent years by R.W. Brockett, with the motivation that ensemble control may provide a more general and robust control framework for dynamical systems. Following this formulation, a Liouville (or continuity) equation with an unbounded drift function is considered together with a class of cost functionals that include tracking of ensembles of trajectories of dynamical systems and different control costs. Specifically, $L^2$, $H^1$ and $L^1$ control costs are taken into account, which leads to non-smooth optimization problems. For the theoretical investigation of the resulting optimal control problems, a well-posedness theory in weighted Sobolev spaces is presented for Liouville and related transport equations. Specifically, existence and uniqueness results for these equations and energy estimates in suitable norms are provided, in particular norms in weighted Sobolev spaces. Then, non-smooth optimal control problems governed by the Liouville equation are formulated with a control mechanism in the drift function. Further, box constraints on the control are imposed. The control-to-state map, which associates to any control the unique solution of the corresponding Liouville equation, is introduced. Important properties of this map are investigated, specifically that it is well-defined, continuous and Fréchet differentiable. Using the first two properties, the existence of solutions to the optimal control problems is shown. While proving the differentiability, a loss of regularity is encountered that is natural to hyperbolic equations. This makes it necessary to investigate the control-to-state map in the topology of weighted Sobolev spaces.
Exploiting the Fréchet differentiability, it is possible to characterize solutions to the optimal control problem as solutions to an optimality system. This system consists of the Liouville equation, its optimization adjoint in the form of a transport equation, and a gradient inequality. Numerical methodologies for solving Liouville and transport equations are presented that are based on a non-smooth Lagrange optimization framework. For this purpose, approximation and solution schemes for such equations are developed and analyzed. For the approximation of the Liouville model and its optimization adjoint, a combination of a Kurganov-Tadmor method, a Runge-Kutta scheme, and a Strang splitting method is discussed. Stability and second-order accuracy of the resulting schemes are proven in the discrete $L^1$ norm. In addition, conservation of mass and positivity preservation are confirmed for the solution method of the Liouville model. As the numerical optimization strategy, an adapted Krylov-Newton method is applied. Since the control is considered to be an element of $H^1$ and to obey certain box constraints, a method for calculating an $H^1$ projection is presented. Since the optimal control problem is non-smooth, a semi-smooth adaptation of Newton's method is taken into account. Results of numerical experiments are presented that successfully validate the proposed deterministic framework. After the discussion of deterministic schemes, the linear space-homogeneous Keilson-Storer master equation is investigated. This equation was originally developed for the modelling of Brownian motion of particles immersed in a fluid and is a representative model of the class of linear Boltzmann equations. The well-posedness of the Keilson-Storer master equation is investigated and energy estimates in different topologies are derived. To solve this equation numerically, Monte Carlo methods are considered.
Such methods take advantage of the kinetic formulation of the Liouville equation and directly implement the behaviour of the system of particles under consideration. This includes the probabilistic behaviour of the collisions between particles. Optimal control problems are formulated with an objective that is constituted of certain expected values in velocity space and the $L^2$ and $H^1$ costs of the control. The problems are governed by the Keilson-Storer master equation and the control mechanism is considered to be within the collision kernel. The objective of the optimal control of this model is to drive an ensemble of particles to acquire a desired mean velocity and to achieve a desired final velocity configuration. Existence of solutions of the optimal control problem is proven and a Keilson-Storer optimality system characterizing the solution of the proposed optimal control problem is obtained. The optimality system is used to construct a gradient-based optimization strategy in the framework of Monte Carlo methods. This task requires accommodating the resulting adjoint Keilson-Storer model in a form that is consistent with the kinetic formulation. For this reason, we derive an adjoint Keilson-Storer collision kernel and an additional source term. A similar approach is presented in the case of a linear space-inhomogeneous kinetic model with external forces and with a Keilson-Storer collision term. In this framework, a control mechanism in the form of an external space-dependent force is investigated. The purpose of this control is to steer the multi-particle system to follow a desired mean velocity and position and to reach a desired final configuration in phase space. An optimal control problem using the formulation of ensemble controls is stated with an objective that is constituted of expected values in phase space and $H^1$ costs of the control.
For solving the optimal control problems, a gradient-based computational strategy in the framework of Monte Carlo methods is developed. Part of this is the denoising of the distribution functions calculated by Monte Carlo algorithms using methods from the realm of partial differential equations. A standalone C++ code is presented that implements the developed non-linear conjugate gradient strategy. Results of numerical experiments confirm the ability of the designed probabilistic control framework to operate as desired. An outlook section on optimal control problems governed by non-linear space-inhomogeneous kinetic models completes this thesis.
This paper is devoted to the numerical analysis of non-smooth ensemble optimal control problems governed by the Liouville (continuity) equation that have been originally proposed by R.W. Brockett with the purpose of determining an efficient and robust control strategy for dynamical systems. A numerical methodology for solving these problems is presented that is based on a non-smooth Lagrange optimization framework where the optimal controls are characterized as solutions to the related optimality systems. For this purpose, approximation and solution schemes are developed and analysed. Specifically, for the approximation of the Liouville model and its optimization adjoint, a combination of a Kurganov–Tadmor method, a Runge–Kutta scheme, and a Strang splitting method is discussed. The resulting optimality system is solved by a projected semi-smooth Krylov–Newton method. Results of numerical experiments are presented that successfully validate the proposed framework.
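The second-order accuracy of Strang splitting can be checked on a minimal linear stand-in (hypothetical 2x2 matrices, not the Liouville discretization of the paper): splitting exp(T(A+B)) into half steps in A around full steps in B yields an error that shrinks by a factor of about four when the step size is halved.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # hypothetical "transport" part
B = np.array([[-0.5, 0.0], [0.3, -0.2]])  # hypothetical "source" part
x0 = np.array([1.0, 0.0])
T = 1.0

def strang(n_steps):
    """Strang splitting: half step in A, full step in B, half step in A."""
    dt = T / n_steps
    step = expm(0.5 * dt * A) @ expm(dt * B) @ expm(0.5 * dt * A)
    x = x0.copy()
    for _ in range(n_steps):
        x = step @ x
    return x

exact = expm(T * (A + B)) @ x0
e1 = np.linalg.norm(strang(20) - exact)
e2 = np.linalg.norm(strang(40) - exact)   # halving dt -> error / 4
```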
The advent of the computer in mathematics classrooms has brought a variety of new possibilities for representation, among them multiple, dynamically linked representations of mathematical problems. This thesis answers the question of whether and how these types of representation are used by students in argumentation. In the empirical study, a quantitative part examined how strongly the form of representation given in the task influences students' written argumentation. In addition, a qualitative analysis identified specific ways in which the representations are used and described them by means of Toulmin's model of argumentation. These findings were used to formulate consequences for the use of multiple and/or dynamic representations in secondary-level mathematics teaching.
The goal of this thesis is to investigate conformal mappings onto circular arc polygon domains, i.e. domains that are bounded by polygons consisting of circular arcs instead of line segments.
Conformal mappings onto circular arc polygon domains contain parameters in addition to the classical parameters of the Schwarz-Christoffel transformation. To contribute to the parameter problem of conformal mappings from the unit disk onto circular arc polygon domains, we investigate two special cases of these mappings. In the first case we can describe the additional parameters if the bounding circular arc polygon is a polygon with straight sides. In the second case we provide an approximation for the additional parameters if the circular arc polygon domain satisfies some symmetry conditions. These results allow us to draw conclusions on the connection between these additional parameters and the classical parameters of the mapping.
For conformal mappings onto multiply connected circular arc polygon domains, we provide an alternative construction of the mapping formula without using the Schottky-Klein prime function. In the process of constructing our main result, mappings for domains of connectivity three or greater, we also provide a formula for conformal mappings onto doubly connected circular arc polygon domains. The comparison of these mapping formulas with already known mappings allows us to provide values for some of the parameters of the mappings onto doubly connected circular arc polygon domains if the image domain is a polygonal domain.
The different components of the mapping formula are constructed by using a slightly modified variant of the Poincaré theta series. This construction includes the design of a function to remove unwanted poles and of different versions of functions that are analytic on the domain of definition of the mapping functions and satisfy some special functional equations.
We also provide the necessary concepts to numerically evaluate the conformal mappings onto multiply connected circular arc polygon domains. As the evaluation of such a map requires the solution of a differential equation, we provide a possible configuration of curves inside the preimage domain along which to solve the equation, in addition to a description of the procedure for computing the formula in either the doubly connected case or the case of connectivity three or greater. We also describe the procedures for solving the parameter problem for multiply connected circular arc polygon domains.
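For orientation, the classical case that these mappings generalize can be evaluated directly (a sketch of the Schwarz-Christoffel map of the unit disk onto a square via a simple trapezoidal quadrature; the circular-arc mappings of the thesis instead require solving a differential equation). The integrand and evaluation points are standard, but the quadrature setup is our own illustrative choice.

```python
import numpy as np

def sc_map(z, n=4001):
    """f(z) = integral_0^z (1 - w^4)^(-1/2) dw along the straight segment
    from 0 to z: the Schwarz-Christoffel map of the unit disk onto a square."""
    t = np.linspace(0.0, 1.0, n)
    w = t * z
    f = z / np.sqrt(1.0 - w ** 4 + 0j)   # principal branch
    dt = t[1] - t[0]
    return dt * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoidal rule

a = sc_map(0.5)
b = sc_map(0.5j)   # fourfold symmetry of the square: f(iz) = i f(z)
```

The rotational symmetry f(iz) = i f(z) is a quick consistency check, since the prevertices and the image square share the same fourfold symmetry.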
Many optimization problems for a smooth cost function f on a manifold M can be solved by determining the zeros of a vector field F, such as the gradient F of the cost function f. If F does not depend on additional parameters, numerous zero-finding techniques are available for this purpose. It is a natural generalization, however, to consider time-dependent optimization problems that require the computation of time-varying zeros of time-dependent vector fields F(x,t). Such parametric optimization problems arise in many fields of applied mathematics, in particular path-following problems in robotics, recursive eigenvalue and singular value estimation in signal processing, as well as numerical linear algebra and inverse eigenvalue problems in control theory. In the literature, there are already some tracking algorithms for these tasks, but these do not always adequately respect the manifold structure. Hence, available tracking results can often be improved by implementing methods working directly on the manifold. Thus, intrinsic methods are of interest that evolve during the entire computation on the manifold. It is the task of this thesis to develop such intrinsic zero-finding methods. The main results of this thesis are as follows: - A new class of continuous and discrete tracking algorithms is proposed for computing zeros of time-varying vector fields on Riemannian manifolds. This was achieved by studying the newly introduced time-varying Newton Flow and the time-varying Newton Algorithm on Riemannian manifolds. - Convergence analysis is performed on arbitrary Riemannian manifolds. - These results are concretized on submanifolds, including a new class of algorithms via local parameterizations. - More specific results in Euclidean space are obtained by considering inexact and underdetermined time-varying Newton Flows.
- Illustration of these newly introduced algorithms by examining time-varying tracking tasks in three application areas: subspace analysis, matrix decompositions (in particular EVD and SVD) and computer vision.
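The discrete tracking idea can be sketched in the simplest Euclidean setting (a scalar toy problem of our own choosing, not one of the thesis's applications): one Newton correction per time sample keeps the iterate close to the moving zero of F(x,t).

```python
import numpy as np

def track(F, dF, x0, ts):
    """Discrete time-varying Newton: one Newton step per time sample
    tracks the time-varying zero x(t) of F(x, t)."""
    xs = [x0]
    for t in ts[1:]:
        x = xs[-1]
        xs.append(x - F(x, t) / dF(x, t))
    return np.array(xs)

# hypothetical vector field with moving zero x(t) = sqrt(1 + t)
F = lambda x, t: x * x - (1.0 + t)
dF = lambda x, t: 2.0 * x
ts = np.linspace(0.0, 5.0, 501)
xs = track(F, dF, 1.0, ts)
err = float(np.max(np.abs(xs - np.sqrt(1.0 + ts))))
```

On a manifold, the update `x - F/dF` is replaced by an intrinsic Newton step followed by a retraction, which is the part the thesis develops.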
Fluids in Gravitational Fields – Well-Balanced Modifications for Astrophysical Finite-Volume Codes
(2021)
Stellar structure can -- in good approximation -- be described as a hydrostatic state, which arises due to a balance between gravitational force and pressure gradient. Hydrostatic states are static solutions of the full compressible Euler system with gravitational source term, which can be used to model the stellar interior. In order to carry out simulations of dynamical processes occurring in stars, it is vital for the numerical method to accurately maintain the hydrostatic state over a long time period. In this thesis we present different methods to modify astrophysical finite volume codes in order to make them \emph{well-balanced}, preventing them from introducing significant discretization errors close to hydrostatic states. Our well-balanced modifications are constructed so that they can meet the requirements for methods applied in the astrophysical context: they can well-balance arbitrary hydrostatic states with any equation of state that is applied to model thermodynamical relations, and they are simple to implement in existing astrophysical finite volume codes. One of our well-balanced modifications follows given solutions exactly and can be applied on any grid geometry. The other methods we introduce, which do not require any a priori knowledge, balance local high order approximations of arbitrary hydrostatic states on a Cartesian grid. All of our modifications allow for high order accuracy of the method. The improved accuracy close to hydrostatic states is verified in various numerical experiments.
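The well-balancing idea can be sketched in one dimension (a scaled isothermal atmosphere and our own minimal residual comparison, loosely in the spirit of local hydrostatic extrapolation, not one of the thesis's methods): a naive centered discretization leaves a truncation residual on exact hydrostatic data, while comparing neighbours against the local hydrostatic extrapolation cancels identically.

```python
import numpy as np

g, n = 1.0, 100
h = 1.0 / n
x = (np.arange(n) + 0.5) * h
p = np.exp(-g * x)   # isothermal hydrostatic pressure (scaled so rho = p)
rho = p

# naive scheme: centered pressure gradient plus pointwise gravity source
naive = (np.roll(p, -1) - np.roll(p, 1)) / (2 * h) + g * rho

# well-balanced flavour: difference the neighbours against the local
# hydrostatic extrapolation p_i * exp(-g*(x - x_i)) from cell i, so
# exact hydrostatic data cancels to machine precision by construction
wb = ((np.roll(p, -1) - p * np.exp(-g * h))
      - (np.roll(p, 1) - p * np.exp(g * h))) / (2 * h)

inner = slice(1, n - 1)   # ignore the periodic wrap-around at the ends
res_naive = np.max(np.abs(naive[inner]))
res_wb = np.max(np.abs(wb[inner]))
```

The naive residual acts as a spurious source that slowly corrupts long-time simulations of near-hydrostatic flows; the well-balanced residual does not.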
The goal of this thesis is to study the topological and algebraic properties of the quasiconformal automorphism groups of simply and multiply connected domains in the complex plane, in which the quasiconformal automorphism groups are endowed with the supremum metric on the underlying domain. More precisely, questions concerning central topological properties such as (local) compactness, (path-)connectedness and separability and their dependence on the boundary of the corresponding domains are studied, as well as completeness with respect to the supremum metric. Moreover, special subsets of the quasiconformal automorphism group of the unit disk are investigated, and concrete quasiconformal automorphisms are constructed. Finally, a possible application of quasiconformal unit disk automorphisms to symmetric cryptography is presented, in which a quasiconformal cryptosystem is defined and studied.
We consider homogeneous spaces G/H with the same rational homotopy as a product of a 1-sphere and a (m+1)-sphere. We show that these spaces also have the rational cohomology of such a sphere product if H is connected and if the quotient has dimension m+2. Furthermore, we prove that if additionally the fundamental group of G/H is cyclic, then G/H is locally a product of a 1-torus and of A/H, where A/H is a simply connected rational cohomology (m+1)-sphere (and hence classified). If H fails to be connected, then with U as the connected component of H the G-action on the covering space G/U of G/H has connected stabilizers, and the results apply to G/U. To show that under the assumptions above every natural number may be realized as the order of the group of connected components of H we calculate the cohomology of certain homogeneous spaces. We also determine the rational cohomology of the fibre bundle U-->G-->G/U if G/H meets the assumptions above. This is done by considering the respective Leray-Serre spectral sequence. The structure of the cohomology of U-->G-->G/U then gives a second proof for the structure of compact connected Lie groups acting transitively on spaces with the rational homotopy of a product of a 1-sphere and a (m+1)-sphere. Since a quotient of a homogeneous space with the same rational homotopy or cohomology as a product of a 1-sphere and a (m+1)-sphere is not simply connected, there often arises the question whether or not a considered fibre bundle or fibration is orientable. A large amount of space will therefore be given to the problem of showing that certain fibrations are orientable.
For compact connected (m+2)-manifolds with cyclic fundamental groups and with the rational homotopy of a product of a 1-sphere and a (m+1)-sphere we show the following: if a connected Lie group acts transitively on the manifold, then the maximal compact subgroups are either transitive, or their orbits are simply connected rational cohomology spheres of codimension 1. Homogeneous spaces with the same rational cohomology or homotopy as a product of a 1-sphere and a (m+1)-sphere play a role in the study of different types of geometrical objects. They appear for example as focal manifolds of isoparametric hypersurfaces with four distinct principal curvatures. Further examples of such spaces are the point spaces and the line spaces of compact connected generalized quadrangles. We determine the isometry groups of isoparametric hypersurfaces with 4 principal curvatures of multiplicities 1 and m which are transitive on the focal manifold with non-trivial fundamental group. Buildings were introduced by Jacques Tits to give interpretations of simple groups of Lie type. They are a far-reaching generalization of projective spaces, in particular a generalization of projective planes. There is another generalization of projective planes called generalized polygons. A projective plane is the same as a generalized triangle. The generalized polygons are also contained in the class of buildings: they are the buildings of rank 2. To compact quadrangles one can assign a pair (k,m) of natural numbers called the topological parameters of the quadrangle. We treat the case k=1. It turns out that there are no other point-transitive compact connected Lie groups for (1,m)-quadrangles than the ones for the real orthogonal quadrangles.
Furthermore, we solve the problem of three infinite series of group actions which Kramer left as open problems; there are no quadrangles with the homogeneous spaces in question as point spaces (up to maybe a finite number of small parameters in one of the three series).
This thesis deals with a new so-called sequential quadratic Hamiltonian (SQH) iterative scheme to solve optimal control problems with differential models and cost functionals ranging from smooth to discontinuous and non-convex. This scheme is based on the Pontryagin maximum principle (PMP) that provides necessary optimality conditions for an optimal solution. In this framework, a Hamiltonian function is defined that attains its minimum pointwise at the optimal solution of the corresponding optimal control problem. In the SQH scheme, this Hamiltonian function is augmented by a quadratic penalty term consisting of the current control function and the control function from the previous iteration. The heart of the SQH scheme is to minimize this augmented Hamiltonian function pointwise in order to determine a control update. Since the PMP does not require any differentiability with respect to the control argument, the SQH scheme can be used to solve optimal control problems with both smooth and non-convex or even discontinuous cost functionals. The main achievement of the thesis is the formulation of a robust and efficient SQH scheme and a framework in which the convergence analysis of the SQH scheme can be carried out. In this framework, convergence of the scheme means that the calculated solution fulfills the PMP condition. The governing differential models of the considered optimal control problems are ordinary differential equations (ODEs) and partial differential equations (PDEs). In the PDE case, elliptic and parabolic equations as well as the Fokker-Planck (FP) equation are considered. For both the ODE and the PDE cases, assumptions are formulated for which it can be proved that a solution to an optimal control problem has to fulfill the PMP. The obtained results are essential for the discussion of the convergence analysis of the SQH scheme. This analysis has two parts.
The first one is the well-posedness of the scheme which means that all steps of the scheme can be carried out and provide a result in finite time. The second part is the PMP consistency of the solution. This means that the solution of the SQH scheme fulfills the PMP conditions. In the ODE case, the following results are obtained that state well-posedness of the SQH scheme and the PMP consistency of the corresponding solution. Lemma 7 states the existence of a pointwise minimum of the augmented Hamiltonian. Lemma 11 proves the existence of a weight of the quadratic penalty term such that the minimization of the corresponding augmented Hamiltonian results in a control update that reduces the value of the cost functional. Lemma 12 states that the SQH scheme stops if an iterate is PMP optimal. Theorem 13 proves the cost functional reducing properties of the SQH control updates. The main result is given in Theorem 14, which states the pointwise convergence of the SQH scheme towards a PMP consistent solution. In this ODE framework, the SQH method is applied to two optimal control problems. The first one is an optimal quantum control problem where it is shown that the SQH method converges much faster to an optimal solution than a globalized Newton method. The second optimal control problem is an optimal tumor treatment problem with a system of coupled highly non-linear state equations that describe the tumor growth. It is shown that the framework in which the convergence of the SQH scheme is proved is applicable for this highly non-linear case. Next, the case of PDE control problems is considered. First a general framework is discussed in which a solution to the corresponding optimal control problem fulfills the PMP conditions. In this case, many theoretical estimates are presented in Theorem 59 and Theorem 64 to prove in particular the essential boundedness of the state and adjoint variables.
The steps of the convergence analysis of the SQH scheme are analogous to those of the ODE case and result in Theorem 27, which states the PMP consistency of the solution obtained with the SQH scheme. This framework is applied to different elliptic and parabolic optimal control problems, including linear and bilinear control mechanisms as well as non-linear state equations. Moreover, the SQH method is discussed for solving a state-constrained optimal control problem in an augmented formulation. In this case, it is shown in Theorem 30 that, as the weight of the augmentation term penalizing the violation of the state constraint increases, the measure of the state-constraint violation of the corresponding solution converges to zero. Furthermore, an optimal control problem with a non-smooth L\(^1\)-tracking term and a non-smooth state equation is investigated. For this purpose, an adjoint equation is defined and the SQH method is used to solve the corresponding optimal control problem. The final part of this thesis is devoted to a class of FP models related to specific stochastic processes. The discussion starts with a focus on random walks in which jumps are also included. This framework allows the derivation of a discrete FP model corresponding to a continuous FP model with jumps and with boundary conditions ranging from absorbing to totally reflecting. It also allows the consideration of the drift control resulting from an anisotropic probability of the steps of the random walk. Thereafter, in the PMP framework, two drift-diffusion processes and the corresponding FP models with two different control strategies for an optimal control problem with an expectation functional are considered. In the first strategy the controls depend on time, and in the second one the controls depend on space and time. In both cases a solution to the corresponding optimal control problem is characterized by the PMP conditions, stated in Theorem 48 and Theorem 49.
The well-posedness of the SQH scheme is shown in both cases, and further conditions are discussed that ensure the convergence of the SQH scheme to a PMP consistent solution. The case of a space- and time-dependent control strategy results in a special structure of the corresponding PMP conditions, which is exploited in another solution method, the so-called direct Hamiltonian (DH) method.
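The pointwise update at the heart of the SQH iteration can be sketched on a scalar toy problem. The following is an illustrative reconstruction of the algorithmic idea only, not code from the thesis; the model, weights and acceptance rule are simplifying assumptions:

```python
import numpy as np

# Toy problem (illustrative, not from the thesis): minimize
#   J(u) = int_0^T (x^2 + alpha*u^2) dt   s.t.   x' = x + u,  x(0) = 1.
# Hamiltonian H(x,u,p) = x^2 + alpha*u^2 + p*(x+u); adjoint p' = -(2x + p), p(T) = 0.

N, T, alpha = 200, 1.0, 0.1
dt = T / N

def forward(u):
    x = np.empty(N + 1); x[0] = 1.0
    for k in range(N):
        x[k + 1] = x[k] + dt * (x[k] + u[k])
    return x

def backward(x):
    p = np.zeros(N + 1)                        # terminal condition p(T) = 0
    for k in range(N, 0, -1):
        p[k - 1] = p[k] + dt * (2.0 * x[k] + p[k])
    return p

def cost(u):
    x = forward(u)
    return dt * np.sum(x[:N] ** 2 + alpha * u ** 2)

def sqh(max_iter=100, eps=1.0, tau=1e-2):
    u = np.zeros(N); J = cost(u)
    for _ in range(max_iter):
        p = backward(forward(u))
        # pointwise minimizer of H + eps*(v - u)^2 (closed form for this H)
        u_new = (2.0 * eps * u - p[:N]) / (2.0 * alpha + 2.0 * eps)
        J_new = cost(u_new)
        if J_new <= J - tau * dt * np.sum((u_new - u) ** 2):
            u, J, eps = u_new, J_new, eps / 2.0   # accept, relax penalty
        else:
            eps *= 2.0                            # reject, strengthen penalty
    return u, J
```

Here the augmented Hamiltonian is quadratic in the control, so its pointwise minimizer is explicit; in general the pointwise minimization requires no differentiability with respect to the control, which is what makes the scheme applicable to discontinuous costs.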
A sequential quadratic Hamiltonian (SQH) scheme for solving different classes of non-smooth and non-convex PDE optimal control problems is investigated by considering seven benchmark problems of increasing difficulty. These problems include linear and nonlinear PDEs with linear and bilinear control mechanisms, non-convex and discontinuous costs of the controls, L\(^1\) tracking terms, and the case of state constraints.
The SQH method is based on the characterisation of optimality of PDE optimal control problems by Pontryagin's maximum principle (PMP). For each problem, a theoretical discussion of the PMP optimality condition is given, and results of numerical experiments are presented that demonstrate the large range of applicability of the SQH scheme.
The characterization and numerical solution of two non-smooth optimal control problems governed by a Fokker–Planck (FP) equation are investigated in the framework of the Pontryagin maximum principle (PMP). The two FP control problems are related to the problem of determining open- and closed-loop controls for a stochastic process whose probability density function is modelled by the FP equation. In both cases, existence and PMP characterisation of optimal controls are proved, and PMP-based numerical optimization schemes are implemented that solve the PMP optimality conditions to determine the controls sought. Results of experiments are presented that successfully validate the proposed computational framework and allow a comparison of the two control strategies.
Circadian endogenous clocks of eukaryotic organisms are an established and rapidly developing research field. To investigate and simulate, in an effective model, the influence of external stimuli on such clocks and their components, we developed a software framework that is available for download and simulation. The application helps to understand the different effects involved in a mathematically simple and effective model. This concerns the effects of Zeitgebers, feedback loops and further modifying components. We start from a known mathematical oscillator model, which is based on experimental molecular findings, and extend it with an effective framework that includes the impact of external stimuli on the circadian oscillations, including high-dose pharmacological treatment. In particular, the external stimuli framework defines a systematic procedure, via input-output interfaces, to couple different oscillators. The framework is validated by providing phase response curves and ranges of entrainment. Furthermore, Aschoff's rule is computationally investigated. It is shown how the external stimuli framework can be used to study biological effects such as points of singularity or oscillators integrating different signals at once. The mathematical framework and formalism are generic and allow one to study, in general, the effect of external stimuli on oscillators and other biological processes. For easy replication of each numerical experiment presented in this work and easy implementation of the framework, the corresponding Mathematica files are made fully available. They can be downloaded at the following link: https://www.biozentrum.uni-wuerzburg.de/bioinfo/computing/circadian/.
In this thesis we consider a reactive transport model with precipitation-dissolution reactions from the geosciences. It consists of PDEs, ODEs, algebraic equations (AEs) and complementarity conditions (CCs). After discretization of this model we obtain a huge nonlinear and nonsmooth system of equations. We tackle this system with the semismooth Newton method introduced by Qi and Sun. The focus of this thesis is on the application and convergence of this algorithm. We prove that this algorithm is well defined for this problem and locally, even quadratically, convergent for a BD-regular solution. We also deal with the arising linear systems of equations, which are large and sparse, and show how they can be solved efficiently. An integral part of this investigation is the boundedness of a certain matrix-valued function, which is shown in a separate chapter. As a side result, we study how extremal eigenvalues (and singular values) of certain PDE operators involved in our discretized model can be estimated accurately.
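The semismooth Newton idea can be illustrated on a scalar complementarity problem via the Fischer-Burmeister reformulation. This is a toy sketch under simplifying assumptions, not the discretized reactive-transport system:

```python
import math

# Toy complementarity problem (illustrative): find x with
#   x >= 0,  F(x) >= 0,  x * F(x) = 0,
# reformulated with the Fischer-Burmeister function
#   phi(a, b) = sqrt(a^2 + b^2) - a - b,  phi(a, b) = 0  <=>  a, b >= 0, a*b = 0.

def F(x):   return x * x + x - 2.0        # F vanishes first at x = 1 on [0, inf)
def dF(x):  return 2.0 * x + 1.0

def fb_newton(x, iters=50):
    for _ in range(iters):
        a, b = x, F(x)
        r = math.hypot(a, b)
        phi = r - a - b
        # an element of the generalized derivative (valid since r > 0 here)
        dphi = (a + b * dF(x)) / r - 1.0 - dF(x)
        x -= phi / dphi                    # semismooth Newton step
    return x

x_star = fb_newton(2.0)                    # converges to the solution x = 1
```

Near a BD-regular solution such an iteration is locally superlinearly (here quadratically) convergent, which is the behavior the thesis establishes for the full discretized system.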
The first goal of this thesis is to generalize Loewner's famous differential equation to multiply connected domains. The resulting differential equations are known as Komatu--Loewner differential equations. We discuss Komatu--Loewner equations for canonical domains (circular slit disks, circular slit annuli and parallel slit half-planes). Additionally, we give a generalisation to several slits and discuss parametrisations that lead to constant coefficients. Moreover, we compare Komatu--Loewner equations with several slits to single slit Loewner equations.
Finally we generalise Komatu--Loewner equations to hulls satisfying a local growth property.
ADMM-Type Methods for Optimization and Generalized Nash Equilibrium Problems in Hilbert Spaces
(2020)
This thesis is concerned with a certain class of algorithms for the solution of constrained optimization problems and generalized Nash equilibrium problems in Hilbert spaces. This class of algorithms is inspired by the alternating direction method of multipliers (ADMM) and eliminates the constraints using an augmented Lagrangian approach. The alternating direction method consists of splitting the augmented Lagrangian subproblem into smaller and more easily manageable parts.
Before the algorithms are discussed, a substantial amount of background material, including the theory of Banach and Hilbert spaces, fixed-point iterations as well as convex and monotone set-valued analysis, is presented. Thereafter, certain optimization problems and generalized Nash equilibrium problems are reformulated and analyzed using variational inequalities and set-valued mappings. The analysis of the algorithms developed in the course of this thesis is rooted in these reformulations as variational inequalities and set-valued mappings.
The first algorithms discussed and analyzed are one weakly and one strongly convergent ADMM-type algorithm for convex, linearly constrained optimization. By equipping the associated Hilbert space with the correct weighted scalar product, the analysis of these two methods is accomplished using the proximal point method and the Halpern method.
The rest of the thesis is concerned with the development and analysis of ADMM-type algorithms for generalized Nash equilibrium problems that jointly share a linear equality constraint. The first class of these algorithms is completely parallelizable and uses a forward-backward idea for the analysis, whereas the second class can be interpreted as a direct extension of the classical ADMM method to generalized Nash equilibrium problems.
At the end of this thesis, the numerical behavior of the discussed algorithms is demonstrated on a collection of examples.
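In finite dimensions, the splitting idea behind these methods can be sketched as follows. The example is illustrative only (the thesis works in Hilbert spaces and also treats Nash equilibrium problems); the problem and parameters are made up:

```python
import numpy as np

# Scaled-form ADMM sketch: minimize 0.5*||x - a||^2 + lam*||z||_1
# subject to x - z = 0, by alternating minimization of the augmented
# Lagrangian in x and z, followed by a multiplier (dual) update.

def soft(v, t):                                # proximal map of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_toy(a, lam, rho=1.0, iters=300):
    z = np.zeros_like(a)
    u = np.zeros_like(a)                       # scaled multiplier
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)  # x-subproblem (quadratic)
        z = soft(x + u, lam / rho)             # z-subproblem (shrinkage)
        u = u + x - z                          # multiplier update
    return z

a = np.array([3.0, -0.5, 1.2, 0.1])
z = admm_toy(a, lam=1.0)    # exact solution of this toy problem is soft(a, 1.0)
```

Each subproblem is a simple proximal step, which is precisely the appeal of the splitting: the augmented Lagrangian subproblem decomposes into smaller, easily manageable parts.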
This thesis is devoted to a theoretical and numerical investigation of methods to solve open-loop non-zero-sum differential Nash games. These problems arise in many applications, e.g., biology, economics, and physics, where competition between different agents appears. In this case, the goal of each agent conflicts with those of the others, and a competition game can be interpreted as a coupled optimization problem for which, in general, an optimal solution does not exist. In fact, an optimal strategy for one player may be unsatisfactory for the others. For this reason, a solution of a game is sought as an equilibrium, and among the solution concepts proposed in the literature, that of the Nash equilibrium (NE) is the focus of this thesis. The building blocks of the resulting differential Nash games are a dynamical model with different control functions associated with different players that pursue non-cooperative objectives. In particular, the focus of this thesis is on differential models having linear or bilinear state-strategy structures. In this framework, in the first chapter, some well-known results are recalled, especially for non-cooperative linear-quadratic differential Nash games. Then, a bilinear Nash game is formulated and analysed. The main achievement in this chapter is Theorem 1.4.2, concerning the existence of Nash equilibria for non-cooperative differential bilinear games. This result is obtained assuming a sufficiently small time horizon T, and an estimate of T is provided in Lemma 1.4.8 using specific properties of the regularized Nikaido-Isoda function. In Chapter 2, in order to solve a bilinear Nash game, a semi-smooth Newton (SSN) scheme combined with a relaxation method is investigated, where the choice of an SSN scheme is motivated by the presence of constraints on the players' actions that make the problem non-smooth.
The resulting method is proved to be locally convergent in Theorem 2.1, and an estimate on the relaxation parameter is also obtained that relates the relaxation factor to the time horizon of a Nash equilibrium and to the other parameters of the game. For the bilinear Nash game, a Nash bargaining problem is also introduced and discussed, aiming at determining an improvement of all players' objectives with respect to the Nash equilibrium. A characterization of a bargaining solution is given in Theorem 2.2.1, and a numerical scheme based on this result is presented that allows one to compute this solution on the Pareto frontier. Results of numerical experiments based on a quantum model of two spin particles and on a population dynamics model with two competing species are presented that successfully validate the proposed algorithms. In Chapter 3, a functional formulation of the classical homicidal chauffeur (HC) Nash game is introduced and a new numerical framework for its solution in a time-optimal formulation is discussed. This methodology combines a Hamiltonian-based scheme with proximal penalty, to determine the time horizon where the game takes place, with a Lagrangian optimal control approach and relaxation, to solve the Nash game at a fixed end-time. The resulting numerical optimization scheme has a bilevel structure, which aims at decoupling the computation of the end-time from the solution of the pursuit-evasion game. Several numerical experiments are performed to show the ability of the proposed algorithm to solve the HC game. Focusing on the case where a collision may occur, the time for this event is determined. The last part of this thesis deals with the analysis of a novel sequential quadratic Hamiltonian (SQH) scheme for solving open-loop differential Nash games. This method is formulated in the framework of Pontryagin's maximum principle and represents an efficient and robust extension of the successive approximations strategy in the realm of Nash games.
In the SQH method, the Hamilton-Pontryagin functions are augmented by a quadratic penalty term and the Nikaido-Isoda function is used as a selection criterion. Based on this fact, the key idea of this SQH scheme is that the PMP characterization of Nash games leads to a finite-dimensional Nash game for any fixed time. A class of problems for which this finite-dimensional game admits a unique solution is identified and for this class of games theoretical results are presented that prove the well-posedness of the proposed scheme. In particular, Proposition 4.2.1 is proved to show that the selection criterion on the Nikaido-Isoda function is fulfilled. A comparison of the computational performances of the SQH scheme and the SSN-relaxation method previously discussed is shown. Applications to linear-quadratic Nash games and variants with control constraints, weighted L1 costs of the players’ actions and tracking objectives are presented that corroborate the theoretical statements.
A sequential quadratic Hamiltonian scheme for solving open-loop differential Nash games is proposed and investigated. This method is formulated in the framework of the Pontryagin maximum principle and represents an efficient and robust extension of the successive approximations strategy for solving optimal control problems. Theoretical results are presented that prove the well-posedness of the proposed scheme, and results of numerical experiments are reported that successfully validate its computational performance.
We study the symmetrised rank-one convex hull of monoclinic-I martensite (a twelve-variant material) in the context of geometrically linear elasticity. We construct sets of T3s, which are (non-trivial) symmetrised rank-one convex hulls of 3-tuples of pairwise incompatible strains. Moreover, we construct a five-dimensional continuum of T3s and show that its intersection with the boundary of the symmetrised rank-one convex hull is four-dimensional. We also show that there is another kind of monoclinic-I martensite with qualitatively different semi-convex hulls which, so far as we know, has not been experimentally observed. Our strategy is to combine understanding of the algebraic structure of symmetrised rank-one convex cones with knowledge of the faceting structure of the convex polytope formed by the strains.
The Riemann zeta-function forms a central object in multiplicative number theory; its value-distribution encodes deep arithmetic properties of the prime numbers. Here, a crucial role is assigned to the analytic behavior of the zeta-function on the so-called critical line. In this thesis we study the value-distribution of the Riemann zeta-function near and on the critical line. Amongst others, we focus on the following.
PART I: A modified concept of universality, a-points near the critical line and a denseness conjecture attributed to Ramachandra.
The critical line is a natural boundary of the Voronin-type universality property of the Riemann zeta-function. We modify Voronin's concept by adding a scaling factor to the vertical shifts that appear in Voronin's universality theorem and investigate whether this modified concept is appropriate to keep up a certain universality property of the Riemann zeta-function near and on the critical line. It turns out that it is mainly the functional equation of the Riemann zeta-function that restricts the set of functions which can be approximated by this modified concept around the critical line.
Levinson showed that almost all a-points of the Riemann zeta-function lie in a certain funnel-shaped region around the critical line. We complement Levinson's result: Relying on arguments of the theory of normal families and the notion of filling discs, we detect a-points in this region which are very close to the critical line.
According to a folklore conjecture (often attributed to Ramachandra) one expects that the values of the Riemann zeta-function on the critical line lie dense in the complex numbers. We show that there are certain curves which approach the critical line asymptotically and have the property that the values of the zeta-function on these curves are dense in the complex numbers.
Many of our results in part I are independent of the Euler product representation of the Riemann zeta-function and apply for meromorphic functions that satisfy a Riemann-type functional equation in general.
PART II: Discrete and continuous moments.
The Lindelöf hypothesis deals with the growth behavior of the Riemann zeta-function on the critical line. Due to classical works by Hardy and Littlewood, the Lindelöf hypothesis can be reformulated in terms of power moments to the right of the critical line. Tanaka showed recently that the expected asymptotic formulas for these power moments are true in a certain measure-theoretical sense; roughly speaking he omits a set of Banach density zero from the path of integration of these moments. We provide a discrete and integrated version of Tanaka's result and extend it to a large class of Dirichlet series connected to the Riemann zeta-function.
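The reformulation of the Lindelöf hypothesis in terms of power moments, due to Hardy and Littlewood, can be stated in standard notation (with d_k denoting the k-th divisor function; a sketch of the classical statement, not of Tanaka's refinement): the Lindelöf hypothesis holds if and only if, for every fixed k and every σ > 1/2,

```latex
\frac{1}{T}\int_0^T \bigl|\zeta(\sigma+it)\bigr|^{2k}\,\mathrm{d}t
\;\longrightarrow\; \sum_{n=1}^{\infty}\frac{d_k(n)^2}{n^{2\sigma}}
\qquad (T\to\infty).
```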
An efficient and accurate computational framework for solving control problems governed by quantum spin systems is presented. Spin systems are extremely important in modern quantum technologies such as nuclear magnetic resonance spectroscopy, quantum imaging and quantum computing. In these applications, two classes of quantum control problems arise: optimal control problems and exact-controllability problems, with a bilinear control structure. These models correspond to the Schrödinger-Pauli equation, describing the time evolution of a spinor, and the Liouville-von Neumann master equation, describing the time evolution of a density operator. This thesis focuses on quantum control problems governed by these models. An appropriate definition of the optimization objectives and of the admissible set of control functions makes it possible to construct controls with specific properties. These properties are in general required by the physics and the technologies involved in quantum control applications. A main purpose of this work is to address non-differentiable quantum control problems. For this reason, a computational framework is developed to address optimal control problems, with a possible L\(^1\)-penalization term in the cost functional, and exact-controllability problems. In both cases the set of admissible control functions is a subset of a Hilbert space. The bilinear control structure of the quantum model, the L\(^1\)-penalization term and the control constraints generate high non-linearities that make the corresponding control problems difficult to solve and analyse. The first part of this thesis focuses on the physical description of the spin of particles and of the magnetic resonance phenomenon. Afterwards, the controlled Schrödinger-Pauli equation and the Liouville-von Neumann master equation are discussed. These equations, like many other controlled quantum models, can be represented by dynamical systems with a bilinear control structure.
In the second part of this thesis, theoretical investigations of optimal control problems, with a possible L\(^1\)-penalization term in the objective and control constraints, are considered. In particular, existence of solutions, optimality conditions, and regularity properties of the optimal controls are discussed. In order to solve these optimal control problems, semi-smooth Newton methods are developed and proved to be superlinearly convergent. The main difficulty in the implementation of a Newton method for optimal control problems comes from the dimension of the Jacobian operator. In a discrete form, the Jacobian is a very large matrix, and this fact makes its construction infeasible from a practical point of view. For this reason, the focus of this work is on inexact Krylov-Newton methods, which combine the Newton method with Krylov iterative solvers for linear systems and make it possible to avoid the construction of the discrete Jacobian. In the third part of this thesis, two methodologies for the exact controllability of quantum spin systems are presented. The first method consists of a continuation technique, while the second method is based on a particular reformulation of the exact-control problem. Both methodologies address minimum-L\(^2\)-norm exact-controllability problems. In the fourth part, the thesis focuses on the numerical analysis of quantum control problems. In particular, the modified Crank-Nicolson scheme as an adequate time discretization of the Schrödinger equation is discussed, the first-discretize-then-optimize strategy is used to obtain a discrete reduced-gradient formula for the differentiable part of the optimization objective, and implementation details and globalization strategies to guarantee an adequate numerical behaviour of semi-smooth Newton methods are treated.
In the last part of this work, several numerical experiments are performed to validate the theoretical results and demonstrate the ability of the proposed computational framework to solve quantum spin control problems.
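The norm-preserving Crank-Nicolson (Cayley) time step mentioned above can be sketched for a single spin. The two-level Hamiltonian H(u) = σ_z + u σ_x below is a made-up bilinear control model; only the structure of the step matters:

```python
import numpy as np

# One Crank-Nicolson (Cayley) step for i*psi' = H(u)*psi on a two-level system.
# Illustrative sketch; the model H(u) = sz + u*sx is an assumption for the demo.

sx = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli matrices
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def cn_step(psi, u, dt):
    """Advance psi by dt via the Cayley transform of H(u) = sz + u*sx."""
    H = sz + u * sx
    A = I2 + 0.5j * dt * H
    B = I2 - 0.5j * dt * H
    return np.linalg.solve(A, B @ psi)

psi = np.array([1.0, 0.0], dtype=complex)
for _ in range(1000):
    psi = cn_step(psi, u=0.3, dt=0.01)
```

Since the Cayley transform of a Hermitian matrix is unitary, the norm of the spinor is preserved up to round-off, independently of the step size.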
Applications in various research areas such as signal processing, quantum computing, and computer vision can be described as constrained optimization tasks on certain subsets of tensor products of vector spaces. In this work, we make use of techniques from Riemannian geometry and analyze optimization tasks on subsets of so-called simple tensors which can be equipped with a differentiable structure. In particular, we introduce a generalized Rayleigh-quotient function on the tensor product of Grassmannians and on the tensor product of Lagrange-Grassmannians. Its optimization enables a unified approach to well-known tasks from different areas of numerical linear algebra, such as best low-rank approximations of tensors (data compression), computing geometric measures of entanglement (quantum computing) and subspace clustering (image processing). We perform a thorough analysis of the critical points of the generalized Rayleigh quotient and develop intrinsic numerical methods for its optimization. Explicitly, using techniques from Riemannian optimization, we present two types of algorithms: a Newton-like and a conjugate gradient algorithm. Their performance is analysed and compared with established methods from the literature.
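The simplest instance of such a Rayleigh-quotient optimization, on the unit sphere (one-dimensional subspaces) rather than a product of Grassmannians, can be sketched as follows; the fixed-step gradient ascent below is illustrative only and stands in for the Newton-like and conjugate gradient methods analysed in the work:

```python
import numpy as np

# Riemannian gradient ascent of the Rayleigh quotient x^T A x on the unit
# sphere: project the Euclidean gradient onto the tangent space, step, and
# retract back to the sphere by normalization. Illustrative toy example.

A = np.diag([1.0, 2.0, 5.0])            # top eigenvalue 5

def rayleigh_ascent(x, steps=500, t=0.1):
    x = x / np.linalg.norm(x)
    for _ in range(steps):
        rho = x @ A @ x                  # current Rayleigh quotient
        grad = 2.0 * (A @ x - rho * x)   # Riemannian (projected) gradient
        x = x + t * grad
        x = x / np.linalg.norm(x)        # retraction to the sphere
    return x

x = rayleigh_ascent(np.ones(3))          # converges to the top eigenvector
```

The critical points of this toy quotient are exactly the eigenvectors of A, mirroring the role the critical-point analysis plays for the generalized Rayleigh quotient on tensor products.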
In this work, we consider impulsive dynamical systems evolving on an infinite-dimensional space and subjected to external perturbations. We look for stability conditions that guarantee input-to-state stability for such systems. Our new dwell-time conditions allow for the situation where both the continuous and the discrete dynamics are unstable simultaneously. Lyapunov-like methods are developed for this purpose. Illustrative finite- and infinite-dimensional examples are provided to demonstrate the application of the main results. These examples cannot be treated by any other published approach, which demonstrates the effectiveness of our results.
Many modern statistically efficient methods come with tremendous computational challenges, often leading to large-scale optimisation problems. In this work, we examine such computational issues for recently developed estimation methods in nonparametric regression with a specific view on image denoising. We consider in particular certain variational multiscale estimators which are statistically optimal in the minimax sense, yet computationally intensive. Such an estimator is computed as the minimiser of a smoothness functional (e.g., TV norm) over the class of all estimators such that none of its coefficients with respect to a given multiscale dictionary is statistically significant. The so-obtained multiscale Nemirowski-Dantzig estimator (MIND) can incorporate any convex smoothness functional and combine it with a proper dictionary including wavelets, curvelets and shearlets. The computation of MIND in general requires solving a high-dimensional constrained convex optimisation problem with a specific structure of the constraints induced by the statistical multiscale testing criterion. To solve this explicitly, we discuss three different algorithmic approaches: the Chambolle-Pock, ADMM and semismooth Newton algorithms. Algorithmic details and an explicit implementation are presented, and the solutions are then compared numerically in a simulation study and on various test images. We thereby recommend the Chambolle-Pock algorithm in most cases for its fast convergence. We stress that our analysis can also be transferred to signal recovery and other denoising problems to recover more general objects whenever it is possible to borrow statistical strength from data patches of similar object structure.
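As a stand-in for the constrained MIND problem, the Chambolle-Pock primal-dual iteration can be illustrated on the simpler 1D TV-regularized denoising model; the signal and parameters below are made up:

```python
import numpy as np

# Chambolle-Pock sketch for 1D ROF/TV denoising (illustrative only):
#   min_x 0.5*||x - f||^2 + lam*||D x||_1,   (D x)_i = x_{i+1} - x_i.
# Dual step: projection onto the l-infinity ball (prox of the conjugate of
# lam*||.||_1); primal step: prox of the quadratic data term; then extrapolate.

def D(x):  return np.diff(x)
def Dt(y): return np.concatenate(([-y[0]], y[:-1] - y[1:], [y[-1]]))  # adjoint

def chambolle_pock_tv(f, lam, iters=300, tau=0.25, sigma=0.25):
    x = f.copy(); x_bar = f.copy(); y = np.zeros(len(f) - 1)
    for _ in range(iters):
        y = np.clip(y + sigma * D(x_bar), -lam, lam)        # dual ascent step
        x_new = (x - tau * Dt(y) + tau * f) / (1.0 + tau)   # primal descent step
        x_bar = 2.0 * x_new - x                             # extrapolation
        x = x_new
    return x

truth = np.concatenate((np.zeros(25), np.ones(25)))
rng = np.random.default_rng(1)
f = truth + 0.1 * rng.normal(size=50)                       # noisy step signal
x = chambolle_pock_tv(f, lam=0.5)
```

The step sizes satisfy the standard convergence condition τσ‖D‖² ≤ 1, since ‖D‖² ≤ 4 for the finite-difference operator.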
To study coisotropic reduction in the context of deformation quantization we introduce constraint manifolds and constraint algebras as the basic objects encoding the additional information needed to define a reduction. General properties of various categories of constraint objects and their compatibility with reduction are examined. A constraint Serre-Swan theorem, identifying constraint vector bundles with certain finitely generated projective constraint modules, as well as a constraint symbol calculus are proved. After developing the general deformation theory of constraint algebras, including constraint Hochschild cohomology and constraint differential graded Lie algebras, the second constraint Hochschild cohomology for the constraint algebra of functions on a constraint flat space is computed.
Part 1 of this thesis contains a summary of fundamental functional-analytic results as well as an introduction to integral and differential calculus in Fréchet spaces. In particular, Chapter 2 provides a detailed presentation of the Lebesgue-Bochner integral on Fréchet spaces. Part 2 treats the theory of linear differential equations on Fréchet spaces. To this end, Chapter 3 characterizes strongly differentiable semigroups and their infinitesimal generators. In Chapter 4 these results are used to study linear evolution equations (of hyperbolic or parabolic type). Part 3 contains the central results of the thesis. In Chapter 5 two existence and uniqueness theorems for nonlinear ordinary differential equations in tame Fréchet spaces are proved. Chapter 6 provides an application of the results of Chapter 5 to nonlinear first-order partial differential equations.
A completely decomposable group is a direct sum of subgroups of the rationals. An almost completely decomposable group is a torsion-free abelian group that contains a completely decomposable group as a subgroup of finite index. Tight subgroups are maximal subgroups (with respect to set inclusion) among the completely decomposable subgroups of an almost completely decomposable group. In this dissertation we show an extended version of the theorem of Bezout, give a new criterion for the tightness of a completely decomposable subgroup, derive some conditions under which a tight subgroup is regulating, and generalize a theorem of Campagna. We give an example of an almost completely decomposable group all of whose regulating subgroups do not have a quotient with minimal exponent. We show that among the types of elements of a coset modulo a completely decomposable group there exists a unique maximal type, and we define this type to be "the" coset type. We give criteria for tightness and regulating subgroups in terms of coset types, as well as a representation of the type subgroups using coset types. We introduce the notion of reducible cosets and show their key role for transitions from one completely decomposable subgroup to another one containing the first as a proper subgroup. We give an example of a tight, but not regulating, subgroup which contains the regulator. We develop the notion of a fully single covered subset of a lattice, show that V-free implies fully single covered, but not necessarily vice versa, and define an equivalence relation on the set of all finite subsets of a given lattice. We develop an extension of ordinary Hasse diagrams and apply the lattice-theoretic results to the lattice of types and to almost completely decomposable groups.
In this thesis different algorithms for the solution of generalized Nash equilibrium problems with the focus on global convergence properties are developed. A globalized Newton method for the computation of normalized solutions, a nonsmooth algorithm based on an optimization reformulation of the game-theoretic problem, and a merit function approach and an interior point method for the solution of the concatenated Karush-Kuhn-Tucker-system are analyzed theoretically and numerically. The interior point method turns out to be one of the best existing methods for the solution of generalized Nash equilibrium problems.
Bivariate copula monitoring
(2022)
The assumption of multivariate normality underlying the Hotelling T\(^{2}\) chart is often violated for process data. The multivariate dependency structure can be separated from the marginals with the help of copula theory, which permits modelling association structures beyond the covariance matrix. Copula-based estimation and testing routines have reached maturity regarding a variety of practical applications. We have constructed a rich design matrix for the comparison of the Hotelling T\(^{2}\) chart with the copula test by Verdier and the copula test by Vuong, which allows for weighting the observations adaptively. Based on the design matrix, we have conducted a large and computationally intensive simulation study. The results show that the copula test by Verdier performs better than Hotelling T\(^{2}\) in a large variety of out-of-control cases, whereas the weighted Vuong scheme often fails to provide an improvement.
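For reference, the Hotelling T\(^{2}\) statistic that serves as the baseline chart can be sketched as follows (illustrative; Phase I/II distinctions and exact control limits are omitted):

```python
import numpy as np

# Hotelling T^2 statistic for individual multivariate observations, using
# mean and covariance estimated from in-control reference data.

def hotelling_t2(X, obs):
    """T^2 of each row of obs w.r.t. mean/covariance estimated from X."""
    mu = X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = obs - mu
    return np.einsum('ij,jk,ik->i', d, S_inv, d)   # quadratic form per row

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))            # in-control reference sample
in_ctrl = np.array([[0.1, -0.2]])        # typical observation
shifted = np.array([[4.0, 4.0]])         # clearly out-of-control observation
```

An observation is flagged when its T\(^{2}\) value exceeds a control limit; the statistic captures only the covariance structure, which is exactly the limitation the copula-based tests address.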
Mathematica is an excellent program for carrying out mathematical computations, even very complex ones, in a relatively simple way. This script is intended to give a very short introduction to Mathematica and to serve as a reference for some common applications of Mathematica. The following rough outline is used: - Basics: graphical user interface, simple computations, entering formulas - Usage: presentation of some commands and insight into how Mathematica works - Practice: exemplary solution of some Abitur and exercise problems
In many settings where a population is divided into different classes, it is less the relative class sizes than the number of classes that matters. A biologist, for example, is interested in how many species a genus contains; a numismatist in how many coins or mints existed in an epoch; a computer scientist in how many distinct entries a very large database holds; a programmer in how many bugs a piece of software contains; and a Germanist in the size of an author's vocabulary. This species richness is the simplest and most intuitive way to characterize a population. However, only in collections where the total number of elements is known and relatively small can the number of distinct species be determined by a complete census. In all other cases the number of species must be estimated.
In this paper we derive new results on multivariate extremes and D-norms. In particular we establish new characterizations of the multivariate max-domain of attraction property. The limit distribution of certain multivariate exceedances above high thresholds is derived, and the distribution of that generator of a D-norm on R\(^{d}\), whose components sum up to d, is obtained. Finally we introduce exchangeable D-norms and show that the set of exchangeable D-norms is a simplex.
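For readers unfamiliar with the terminology: a D-norm on \(\mathbb{R}^d\) is commonly defined via a generator, i.e. a random vector \(Z = (Z_1, \dots, Z_d)\) with \(Z_i \ge 0\) and \(E(Z_i) = 1\) for all \(i\), as

\[
\|x\|_D = E\Big(\max_{1\le i\le d} |x_i|\, Z_i\Big), \qquad x \in \mathbb{R}^d.
\]

The generator considered in the abstract is the special case satisfying the additional constraint \(\sum_{i=1}^{d} Z_i = d\).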
It is shown that the rate of convergence in the von Mises conditions of extreme value theory determines the distance of the underlying distribution function F from a generalized Pareto distribution. The distance is measured in terms of the pertaining densities, with the limit being ultimately attained if and only if F is ultimately a generalized Pareto distribution. Consequently, the rate of convergence of the extremes in an iid sample, whether in terms of the distribution of the largest order statistics or of corresponding empirical truncated point processes, is determined by the rate of convergence in the von Mises condition. We prove that the converse is also true.
The analysis of real data by means of statistical methods with the aid of a software package common in industry and administration usually is not an integral part of mathematics studies, but it will certainly be part of future professional work. The present book links up elements from time series analysis with a selection of statistical procedures used in general practice, including the statistical software package SAS. Consequently this book addresses students of statistics as well as students of other branches such as economics, demography and engineering, where lectures on statistics belong to their academic training. But it is also intended for the practitioner who, beyond the use of statistical tools, is interested in their mathematical background. Numerous problems illustrate the applicability of the presented statistical procedures, where SAS gives the solutions. The programs used are explicitly listed and explained. No previous experience with SAS or any particular computer system is required, so only a short training period is needed. This book is meant for a two-semester course (lecture, seminar or practical training), where the first three chapters can be dealt with in the first semester. They provide the principal components of the analysis of a time series in the time domain. Chapters 4, 5 and 6 deal with its analysis in the frequency domain and can be worked through in the second term. In order to understand the mathematical background, some terms are useful, such as convergence in distribution, stochastic convergence and maximum likelihood estimator, as well as a basic knowledge of test theory, so that work on the book can start after an introductory lecture on stochastics. Each chapter includes exercises. An exhaustive treatment is recommended. Chapter 7 (case study) deals with a practical case and demonstrates the presented methods.
This chapter can be used independently in a seminar or practical training course if the concepts of time series analysis are already well understood. This book is subdivided throughout into a statistical part and an SAS-specific part. For clarity, the SAS-specific parts are highlighted. This book is an open source project under the GNU Free Documentation License.
The analysis of real data by means of statistical methods with the aid of a software package common in industry and administration usually is not an integral part of mathematics studies, but it will certainly be part of future professional work. The present book links up elements from time series analysis with a selection of statistical procedures used in general practice, including the statistical software package SAS (Statistical Analysis System). Consequently this book addresses students of statistics as well as students of other branches such as economics, demography and engineering, where lectures on statistics belong to their academic training. But it is also intended for the practitioner who, beyond the use of statistical tools, is interested in their mathematical background. Numerous problems illustrate the applicability of the presented statistical procedures, where SAS gives the solutions. The programs used are explicitly listed and explained. No previous experience with SAS or any particular computer system is required, so only a short training period is needed. This book is meant for a two-semester course (lecture, seminar or practical training), where the first two chapters can be dealt with in the first semester. They provide the principal components of the analysis of a time series in the time domain. Chapters 3, 4 and 5 deal with its analysis in the frequency domain and can be worked through in the second term. In order to understand the mathematical background, some terms are useful, such as convergence in distribution, stochastic convergence and maximum likelihood estimator, as well as a basic knowledge of test theory, so that work on the book can start after an introductory lecture on stochastics. Each chapter includes exercises. An exhaustive treatment is recommended. This book is subdivided throughout into a statistical part and an SAS-specific part.
For clarity, the SAS-specific parts, including the diagrams generated with SAS, always start with a computer symbol, marking the beginning of a session at the computer, and end with a printer symbol marking the end of that session. This book is an open source project under the GNU Free Documentation License.
We construct a foliation of an asymptotically flat end of a Riemannian manifold by hypersurfaces which are critical points of a natural functional arising in potential theory. These hypersurfaces are perturbations of large coordinate spheres, and they admit solutions of a certain over-determined boundary value problem involving the Laplace–Beltrami operator. In a key step we must invert the Dirichlet-to-Neumann operator, highlighting the nonlocal nature of our problem.
We consider a class of “wild” initial data to the compressible Euler system that give rise to infinitely many admissible weak solutions via the method of convex integration. We identify the closure of this class in the natural L\(^{1}\)-topology and show that its complement is rather large; specifically, it is an open dense set.
We study reachability matrices R(A, b) = [b, Ab, ..., A\(^{n-1}\)b], where A is an n × n matrix over a field K and b is in K\(^{n}\). We characterize those matrices that are reachability matrices for some pair (A, b). In the case of a cyclic matrix A and an n-vector of indeterminates x, we derive a factorization of the polynomial det(R(A, x)).
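Building the matrix R(A, b) column by column is straightforward; the following sketch (function name and test data are illustrative) also checks reachability of the pair (A, b) via the rank of R:

```python
import numpy as np

def reachability_matrix(A, b):
    """Return R(A, b) = [b, Ab, ..., A^{n-1} b] with these vectors as columns."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1)
    n = A.shape[0]
    cols = [b]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])   # next Krylov vector A^k b
    return np.column_stack(cols)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([0.0, 1.0])
R = reachability_matrix(A, b)
print(R)                              # columns b and Ab
print(np.linalg.matrix_rank(R))       # full rank: the pair (A, b) is reachable
```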
An exhaustive discussion of constraint qualifications (CQs) and stationarity concepts for mathematical programs with equilibrium constraints (MPECs) is presented. It is demonstrated that all but the weakest CQ, the Guignard CQ, are too strong for a discussion of MPECs. Therefore, MPEC variants of all the standard CQs are introduced and investigated. A strongly stationary point (which is simply a KKT point) is seen to be a necessary first-order optimality condition only under the strongest CQs, MPEC-LICQ, MPEC-SMFCQ and Guignard CQ. Therefore a whole set of KKT-type conditions is investigated. A simple approach is given to establish A-stationarity as a necessary first-order condition under the MPEC-Guignard CQ. Finally, a whole chapter is devoted to investigating M-stationarity, among the strongest stationarity concepts, second only to strong stationarity. It is shown to be a necessary first-order condition under the MPEC-Guignard CQ, the weakest known CQ for MPECs.
Pure subgroups of completely decomposable torsion-free abelian groups are called Butler groups. Such a group can be represented as a finite sum of rational rank-1 groups. This representation is not unique. We therefore develop methods that lead to a representation with pure summands. Moreover, both the critical typeset and the type subgroups can be read off directly from this representation. This simplifies the computer-aided treatment of Butler groups and, in addition, permits a more elegant presentation.
This doctoral thesis is concerned with the mathematical modeling of magnetoelastic materials and the analysis of PDE systems describing these materials and obtained from a variational approach.
The purpose is to capture the behavior of elastic particles that are not only magnetic but exhibit a magnetic domain structure which is well described by the micromagnetic energy and the Landau-Lifshitz-Gilbert equation of the magnetization. The equation of motion for the material’s velocity is derived in a continuum mechanical setting from an energy ansatz. In the modeling process, the focus is on the interplay between Lagrangian and Eulerian coordinate systems to combine elasticity and magnetism in one model without the assumption of small deformations.
The resulting general PDE system is simplified using special assumptions. Existence of weak solutions is proved for two variants of the PDE system, one including gradient flow dynamics on the magnetization, and the other featuring the Landau-Lifshitz-Gilbert equation. The proof is based on a Galerkin method and a fixed point argument. The analysis of the PDE system with the Landau-Lifshitz-Gilbert equation uses a more involved approach to obtain weak solutions, based on the work of G. Carbou and P. Fabrie (2001).
This thesis gives an overview of mathematical modeling of complex fluids, with a discussion of the underlying mechanical principles, an introduction to the energetic variational framework, and examples and applications. The purpose is to present a formal energetic variational treatment of energies corresponding to the models of physical phenomena and to derive PDEs for the complex fluid systems. The advantages of this approach over force-based modeling are, e.g., that for complex systems energy terms can be established in a relatively easy way, that force components within a system are not counted twice, and that this approach can naturally combine effects on different scales. We follow a lecture of Professor Dr. Chun Liu from Penn State University, USA, on complex fluids, which he gave at the University of Wuerzburg during his Giovanni Prodi professorship in summer 2012. We elaborate on this lecture, consider also parts of his work and publications, and substantially extend the lecture by our own calculations and arguments (for papers including an overview of the energetic variational treatment see [HKL10], [Liu11] and references therein).
Purpose: Scarring after glaucoma filtering surgery remains the most frequent cause for bleb failure. The aim of this study was to assess if the postoperative injection of bevacizumab reduces the number of postoperative subconjunctival 5-fluorouracil (5-FU) injections. Further, the effect of bevacizumab as an adjunct to 5-FU on the intraocular pressure (IOP) outcome, bleb morphology, postoperative medications, and complications was evaluated.
Methods: Glaucoma patients (N = 61) who underwent trabeculectomy with mitomycin C were analyzed retrospectively (follow-up period of 25 ± 19 months). Surgery was performed exclusively by one experienced glaucoma specialist using a standardized technique. Patients in group 1 received subconjunctival applications of 5-FU postoperatively. Patients in group 2 received 5-FU and subconjunctival injection of bevacizumab.
Results: Group 1 had 6.4 ± 3.3 (range 0–15) 5-FU injections (mean ± standard deviation). Group 2 had 4.0 ± 2.8 (range 0–12) 5-FU injections. The added injection of bevacizumab significantly reduced the mean number of 5-FU injections by 2.4 ± 3.08 (P ≤ 0.005). IOP was not significantly lower in group 2 than in group 1. A significant reduction in vascularization and in corkscrew vessels could be found in both groups (P < 0.0001, 7 days to last 5-FU), yet there was no difference between the two groups at the last follow-up. Postoperative complications were significantly higher in both groups when more 5-FU injections were applied (P = 0.008). No significant difference in best corrected visual acuity (P = 0.852) or visual field testing (P = 0.610) between the preoperative examination and the last follow-up could be found between the two groups.
Conclusion: The postoperative injection of bevacizumab reduced the number of subconjunctival 5-FU injections significantly by 2.4 injections. A significant difference in postoperative IOP reduction, bleb morphology, and postoperative medication was not detected.
This thesis covers a wide range of results on when a random vector is in the max-domain of attraction of a max-stable random vector. It states some new theoretical results in D-norm terminology, but also explains why most approaches to multivariate extremes are equivalent to this specific approach. It then covers new methods for dealing with high-dimensional extremes, ranging from dimension reduction to exploratory methods, and explains why the Hüsler-Reiss model is a powerful parametric model in multivariate extremes, on par with the multivariate Gaussian distribution in classical multivariate statistics. It also gives new results for estimating and inferring the multivariate extremal dependence structure, proposes strategies for choosing thresholds, and compares the behavior of local and global threshold approaches. The methods are demonstrated in an artificial simulation study, but also on German weather data.
An explicit Runge-Kutta discontinuous Galerkin (RKDG) method is used to devise numerical schemes for both the compressible Euler equations of gas dynamics and the ideal magnetohydrodynamic (MHD) model. These systems of conservation laws are known to have discontinuous solutions. Discontinuities are the source of spurious oscillations in the solution profile of the numerical approximation when a high-order accurate numerical method is used. Different techniques are reviewed in order to control spurious oscillations. A shock detection technique is shown to be useful in determining the regions where the spurious oscillations appear, so that a limiter can be used to eliminate these numerical artifacts. To guarantee the positivity of specific variables like the density and the pressure, a positivity-preserving limiter is used. Furthermore, a numerical flux, proven to preserve the entropy stability of the semi-discrete DG scheme for the MHD system, is used. Finally, the numerical schemes are implemented using the deal.II C++ libraries in the dflo code. Solutions of common test cases show the capability of the method.
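The core idea of a scaling-type positivity-preserving limiter (in the spirit of Zhang and Shu) can be sketched in a few lines: pull the nodal values of the DG polynomial toward the positive cell average just enough that the minimum stays nonnegative, without changing the average itself. This is a simplified sketch, not the dflo implementation; using the plain nodal mean in place of the exact quadrature cell average is an assumption made here for brevity.

```python
import numpy as np

def positivity_limit(u, eps=1e-13):
    """Scaling limiter sketch: enforce min(u) >= eps, preserve the cell mean."""
    u = np.asarray(u, dtype=float)
    mean = u.mean()          # stand-in for the exact cell average (assumed > eps)
    umin = u.min()
    if umin >= eps:
        return u             # nothing to do
    theta = (mean - eps) / (mean - umin)   # shrink factor in (0, 1)
    return mean + theta * (u - mean)

vals = np.array([1.0, 0.5, -0.1])   # a negative nodal density value
limited = positivity_limit(vals)
print(limited.min() >= 0.0)          # positivity restored
print(np.isclose(limited.mean(), vals.mean()))  # cell average unchanged
```

Because only a convex rescaling toward the mean is applied, the limiter keeps the scheme conservative while removing the unphysical negative values.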
The main theme of this thesis is the development of multigrid and hierarchical matrix solution procedures with almost linear computational complexity for classes of partial integro-differential problems. An elliptic partial integro-differential equation, a convection-diffusion partial integro-differential equation and a convection-diffusion partial integro-differential optimality system are investigated. In the first part of this work, an efficient multigrid finite-differences scheme for solving an elliptic Fredholm partial integro-differential equation (PIDE) is discussed. This scheme combines a second-order accurate finite difference discretization and a Simpson's quadrature rule to approximate the PIDE problem with a multigrid scheme and a fast multilevel integration method for the Fredholm operator, allowing the fast solution of the PIDE problem. Theoretical estimates of second-order accuracy and results of local Fourier analysis of convergence of the proposed multigrid scheme are presented. Results of numerical experiments validate these estimates and demonstrate optimal computational complexity of the proposed framework, including numerical experiments for elliptic PIDE problems with singular kernels. The experience gained in this part of the work is used for the investigation of convection-diffusion partial integro-differential equations in the second part of this thesis.
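The composite Simpson rule used for the quadrature part can be sketched in its generic textbook form (a pure-Python sketch, not the thesis implementation):

```python
def simpson(f, a, b, n):
    """Composite Simpson quadrature of f on [a, b] with n (even) subintervals.

    Fourth-order accurate; exact for polynomials up to degree three.
    """
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))   # odd nodes, weight 4
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))   # even interior nodes, weight 2
    return s * h / 3

print(simpson(lambda x: x ** 3, 0.0, 1.0, 2))  # 0.25, exact for cubics
```

On a uniform grid this rule pairs naturally with a second-order finite difference discretization, since the combined error remains second order in the mesh width.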
Convection-diffusion PIDE problems are discretized using a finite volume scheme referred to as the Chang and Cooper (CC) scheme together with a quadrature rule. Also for this class of PIDE problems and this numerical setting, a stability and accuracy analysis of the CC scheme combined with a Simpson's quadrature rule is presented, proving second-order accuracy of the numerical solution. To extend and investigate the proposed approximation and solution strategy in the case of systems of convection-diffusion PIDEs, an optimal control problem governed by this model is considered. In this case the research focus is the CC-Simpson's discretization of the optimality system and its solution by the proposed multigrid strategy. Second-order accuracy of the optimization solution is proved, and results of local Fourier analysis are presented that provide sharp convergence estimates and confirm the optimal computational complexity of the multigrid-fast integration technique.
While (geometric) multigrid techniques require ad-hoc implementation depending on the structure of the PIDE problem and on the dimensionality of the domain where the problem is considered, the hierarchical matrix framework allows a more general treatment that exploits the algebraic structure of the problem at hand. In this thesis, this framework is extended to the case of combined differential and integral problems considering the case of a convection-diffusion PIDE. In this case, the starting point is the CC discretization of the convection-diffusion operator combined with the trapezoidal quadrature rule. The hierarchical matrix approach exploits the algebraic nature of the hierarchical matrices for blockwise approximations by low-rank matrices of the sparse convection-diffusion approximation and enables data sparse representation of the fully populated matrix where all essential matrix operations are performed with at most logarithmic optimal complexity. The factorization of part of or the whole coefficient matrix is used as a preconditioner to the solution of the PIDE problem using a generalized minimum residual (GMRes) procedure as a solver.
Numerical analysis estimates of the accuracy of the finite-volume and trapezoidal rule approximation are presented and combined with estimates of the hierarchical matrix approximation and with the accuracy of the GMRes iterates. Results of numerical experiments are reported that successfully validate the theoretical estimates and the optimal computational complexity of the proposed hierarchical matrix solution procedure. These results include an extension to higher dimensions and an application to the time evolution of the probability density function of a jump diffusion process.
An efficient multigrid finite-differences scheme for solving elliptic Fredholm partial integro-differential equations (PIDE) is discussed. This scheme combines a second-order accurate finite difference discretization of the PIDE problem with a multigrid scheme that includes a fast multilevel integration of the Fredholm operator allowing the fast solution of the PIDE problem. Theoretical estimates of second-order accuracy and results of local Fourier analysis of convergence of the proposed multigrid scheme are presented. Results of numerical experiments validate these estimates and demonstrate optimal computational complexity of the proposed framework.
The topic of this thesis is the theoretical and numerical analysis of optimal control problems whose differential constraints are given by Fokker-Planck models related to jump-diffusion processes. We tackle the issue of controlling a stochastic process by formulating a deterministic optimization problem. The key idea of our approach is to focus on the probability density function of the process, whose time evolution is modeled by the Fokker-Planck equation. Our control framework is advantageous since it allows us to model the action of the control over the entire range of the process, whose statistics are characterized by the shape of its probability density function.
We first investigate jump-diffusion processes, illustrating their main properties. We define stochastic initial-value problems and present results on the existence and uniqueness of their solutions. We then discuss how numerical solutions of stochastic problems are computed, focusing on the Euler-Maruyama method.
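The Euler-Maruyama step for an SDE dX = μ(t, X) dt + σ(t, X) dW can be sketched as follows (a minimal sketch for the pure diffusion part; for a jump-diffusion one would add a compound-Poisson jump increment to each step, which is omitted here for brevity):

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, T, n, rng):
    """One Euler-Maruyama sample path of dX = mu(t, X) dt + sigma(t, X) dW."""
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment ~ N(0, dt)
        x[k + 1] = x[k] + mu(t[k], x[k]) * dt + sigma(t[k], x[k]) * dW
    return t, x

# geometric Brownian motion: dX = 0.1 X dt + 0.2 X dW
rng = np.random.default_rng(1)
t, x = euler_maruyama(lambda t, x: 0.1 * x, lambda t, x: 0.2 * x,
                      x0=1.0, T=1.0, n=500, rng=rng)
print(x.shape)  # 501 points along the path
```

The scheme has strong order 1/2 in general, which motivates the fine time steps used in practice.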
We turn our attention to jump-diffusion models with time- and space-dependent coefficients and jumps given by a compound Poisson process. We derive the related Fokker-Planck equations, which take the form of partial integro-differential equations. Their differential term is governed by a parabolic operator, while the nonlocal integral operator is due to the presence of the jumps. The derivation is carried out in two cases. On the one hand, we consider a process with unbounded range. On the other hand, we confine the dynamics of the sample paths to a bounded domain, and thus the behavior of the process near the boundaries has to be specified. Throughout this thesis, we set the barriers of the domain to be reflecting.
The Fokker-Planck equation, endowed with initial and boundary conditions, gives rise to Fokker-Planck problems. Their solvability is discussed in suitable functional spaces. The properties of their solutions are examined, namely their regularity, positivity and probability mass conservation. Since closed-form solutions to Fokker-Planck problems are usually not available, one has to resort to numerical methods.
The first main achievement of this thesis is the definition and analysis of conservative and positivity-preserving numerical methods for Fokker-Planck problems. Our SIMEX1 and SIMEX2 (Splitting-Implicit-Explicit) schemes are defined within the framework given by the method of lines. The differential operator is discretized by a finite volume scheme given by the Chang-Cooper method, while the integral operator is approximated by a mid-point rule. This leads to a large system of ordinary differential equations, which we approximate with the Strang-Marchuk splitting method. This technique decomposes the original problem into a sequence of subproblems with simpler structure, which are solved separately and linked to each other through initial conditions and final solutions. After performing the splitting step, we carry out the time integration with first- and second-order time-differencing methods. These steps give rise to the SIMEX1 and SIMEX2 methods, respectively.
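The symmetric structure of a Strang splitting step (half step of one subproblem, full step of the other, half step of the first) can be sketched on a toy linear system du/dt = (A + B)u whose sub-flows are known exactly; the matrices and flows below are illustrative, not the SIMEX operators:

```python
import numpy as np

def strang_step(u, dt, phi_A, phi_B):
    """One Strang-Marchuk splitting step: half A, full B, half A."""
    u = phi_A(u, dt / 2)
    u = phi_B(u, dt)
    return phi_A(u, dt / 2)

# du/dt = (A + B) u with A = [[0, 1], [0, 0]] (nilpotent: exact flow is I + t*A)
# and B = diag(-1, -2) (exact flow: componentwise exponential); A and B do not commute
A = np.array([[0.0, 1.0], [0.0, 0.0]])
phi_A = lambda u, t: u + t * (A @ u)
phi_B = lambda u, t: np.array([np.exp(-t), np.exp(-2.0 * t)]) * u

u = np.array([1.0, 1.0])
dt, n = 0.01, 100
for _ in range(n):
    u = strang_step(u, dt, phi_A, phi_B)
print(u)  # second-order accurate approximation of the exact solution at t = 1
```

The symmetric arrangement is what lifts the splitting from first to second order, matching the second-order time-differencing used in SIMEX2.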
A full convergence and stability analysis of our schemes is included. Moreover, we are able to prove that the positivity and the mass conservation of the solution to Fokker-Planck problems are satisfied at the discrete level by the numerical solutions computed with the SIMEX schemes.
The second main achievement of this thesis is the theoretical analysis and the numerical solution of optimal control problems governed by Fokker-Planck models. The field of optimal control deals with finding control functions in such a way that given cost functionals are minimized. Our framework aims at the minimization of the difference between a known sequence of values and the first moment of a jump-diffusion process; therefore, this formulation can also be considered as a parameter estimation problem for stochastic processes. Two cases are discussed, in which the form of the cost functional is continuous-in-time and discrete-in-time, respectively.
The control variable enters the state equation as a coefficient of the Fokker-Planck partial integro-differential operator. We also include in the cost functional an $L^1$-penalization term, which enhances the sparsity of the solution. Therefore, the resulting optimization problem is nonconvex and nonsmooth. We derive the first-order optimality systems satisfied by the optimal solution. The computation of the optimal solution is carried out by means of proximal iterative schemes in an infinite-dimensional framework.
A framework for the optimal sparse-control of the probability density function of a jump-diffusion process is presented. This framework is based on the partial integro-differential Fokker-Planck (FP) equation that governs the time evolution of the probability density function of this process. In the stochastic process and, correspondingly, in the FP model, the control function enters as a time-dependent coefficient. The objectives of the control are to minimize a discrete-in-time, respectively continuous-in-time, tracking functional and its L2- and L1-costs, where the latter is considered to promote control sparsity. An efficient proximal scheme for solving these optimal control problems is considered. Results of numerical experiments are presented to validate the theoretical results and the computational effectiveness of the proposed control framework.
Several aspects of the stability analysis of large-scale discrete-time systems are considered. An important feature is that the right-hand side does not have to be continuous.
In particular, constructive approaches to compute Lyapunov functions are derived and applied to several system classes.
For large-scale systems, which are considered as an interconnection of smaller subsystems, we derive a new class of small-gain results, which do not require the subsystems to be robust in some sense. Moreover, we do not only study sufficiency of the conditions, but rather state an assumption under which these conditions are also necessary.
Moreover, gain construction methods are derived for several types of aggregation, quantifying how large a prescribed set of interconnection gains can be in order that a small-gain condition holds.
This paper presents an alternative approach for obtaining a converse Lyapunov theorem for discrete-time systems. The proposed approach is constructive, as it provides an explicit Lyapunov function. The developed converse theorem establishes existence of global Lyapunov functions for globally exponentially stable (GES) systems and semi-global practical Lyapunov functions for globally asymptotically stable systems. Furthermore, for specific classes of systems, the developed converse theorem can be used to establish non-conservatism of a particular type of Lyapunov functions. Most notably, a proof that conewise linear Lyapunov functions are non-conservative for GES conewise linear systems is given and, as a by-product, a tractable construction of polyhedral Lyapunov functions for linear systems is attained.
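For the simplest GES case, a linear system x⁺ = Ax with Schur-stable A, an explicit quadratic Lyapunov function V(x) = xᵀPx can be constructed by summing the series solving the discrete Lyapunov equation. The truncated-series sketch below is a generic textbook construction, not the paper's method:

```python
import numpy as np

def discrete_lyapunov(A, Q, n_terms=200):
    """P = sum_k (A^T)^k Q A^k solves A^T P A - P = -Q for Schur-stable A.

    Then V(x) = x^T P x is a Lyapunov function: V(Ax) - V(x) = -x^T Q x < 0.
    (Truncated series; converges since the spectral radius of A is below 1.)
    """
    P = np.zeros_like(np.asarray(Q, dtype=float))
    Ak = np.eye(A.shape[0])
    for _ in range(n_terms):
        P += Ak.T @ Q @ Ak
        Ak = A @ Ak
    return P

A = np.array([[0.5, 0.2], [0.0, 0.4]])   # Schur stable: eigenvalues 0.5, 0.4
Q = np.eye(2)
P = discrete_lyapunov(A, Q)
# verify the discrete Lyapunov equation A^T P A - P = -Q
print(np.allclose(A.T @ P @ A - P, -Q))  # True
```

The paper's contribution goes beyond this classical quadratic case, in particular to conewise linear and polyhedral Lyapunov functions.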
Working on simulation and modelling tasks that are to be solved with digital tools places new demands on mathematics teachers in lesson planning and teaching. When digital tools are used sensibly, they support simulation and modelling processes and enable more realistic real-world contexts in mathematics lessons. For the empirical investigation of professional competencies for teaching simulation and mathematical modelling with digital tools, it is necessary to interpret aspects of the global teaching competencies of (prospective) mathematics teachers in a domain-specific way.
We therefore developed a test instrument that captures beliefs, self-efficacy expectations and pedagogical content knowledge for teaching simulation and mathematical modelling with digital tools. The test instrument is complemented by self-reported prior experience with one's own use of digital tools and with the use of digital tools in lesson planning and teaching.
The test instrument is suitable for measuring, by means of pre-post analyses of course groups, the growth of the competence described above among (prospective) mathematics teachers. In the future, the results can thus be used to examine and evaluate the effectiveness of courses that are intended to foster this competence.
The contribution is divided into two parts: first, the test description presents the underlying construct and the scope of application of the test instrument, its structure, and notes on administration. In addition, test quality is examined on the basis of the piloting results. The second part contains the complete test instrument.
Providing adaptive, independence-preserving and theory-guided support to students in dealing with real-world problems in mathematics lessons is a major challenge for teachers in their professional practice. This paper examines this challenge in the context of simulations and mathematical modelling with digital tools: in addition to mathematical difficulties when autonomously working out individual solutions, students may also experience challenges when using digital tools. These challenges need to be closely examined and diagnosed, and might – if necessary – have to be overcome by intervention in such a way that the students can subsequently continue working independently. Thus, if a difficulty arises in the working process, two knowledge dimensions are necessary in order to provide adapted support to students. For teaching simulations and mathematical modelling with digital tools, more specifically, these knowledge dimensions are: pedagogical content knowledge about simulation and modelling processes supported by digital tools (this includes knowledge about phases and difficulties in the working process) and pedagogical content knowledge about interventions during the mentioned processes (focussing on characteristics of suitable interventions as well as their implementation and effects on the students’ working process). The two knowledge dimensions represent cognitive dispositions as the basis for the conceptualisation and operationalisation of a so-called adaptive intervention competence for teaching simulations and mathematical modelling with digital tools. In our article, we present a domain-specific process model and distinguish different types of teacher interventions. Then we describe the design and content of a university course at two German universities aiming, among other goals, to promote this domain-specific professional adaptive intervention competence.
In a study using a quasi-experimental pre-post design (N = 146), we confirm that the structure of cognitive dispositions of adaptive intervention competence for teaching simulations and mathematical modelling with digital tools can be described empirically by a two-dimensional model. In addition, the effectiveness of the course is examined and confirmed quantitatively. Finally, the results are discussed, especially against the background of the sample and the research design, and conclusions are derived for possibilities of promoting professional adaptive intervention competence in university courses.
A well-known heuristic principle of A. Bloch describes the correspondence between criteria for the constancy of entire functions and normality criteria. In this dissertation we investigate the validity of Bloch's principle for gap series problems as well as connections between normality questions and the semiduality of one or two functions. The first two chapters provide the tools from Nevanlinna's value distribution theory and normality theory needed in the sequel. In the third chapter we prove a new normality criterion for families of holomorphic functions for which a differential polynomial of a certain form is zero-free. This generalizes earlier results of Hayman, Drasin, Langley, and Chen & Hua. Chapter 4 is devoted to the proof of one of our most important tools: a deep convergence theorem of H. Cartan on families of p-tuples of zero-free holomorphic functions subject to a linear relation. In Chapter 5 the concepts of duality and semiduality are introduced and their connection to normality questions is discussed. The new results on gap series appear in the sixth chapter. The focus is on so-called AP gap series on the one hand and, on the other, on general construction methods by which new semidual gap structures can be obtained from known ones. Many of our proofs rely essentially on Cartan's theorem from Chapter 4. In the seventh chapter we extend our semiduality investigations to sets of two functions. We use normality criteria (above all the one proved in Chapter 3 and Cartan's theorem) to identify particular sets as non-semidual. Finally, we construct an example of a semidual set of two functions.
A basic mental model (BMM—in German ‘Grundvorstellung’) of a mathematical concept is a content-related interpretation that gives meaning to this concept. This paper defines normative and individual BMMs and concretizes them using the integral as an example. Four BMMs are developed about the concept of definite integral, sometimes used in specific teaching approaches: the BMMs of area, reconstruction, average, and accumulation. Based on theoretical work, in this paper we ask how these BMMs could be identified empirically. A test instrument was developed, piloted, validated and applied with 428 students in first-year mathematics courses. The test results show that the four normative BMMs of the integral can be detected and separated empirically. Moreover, the results allow a comparison of the existing individual BMMs and the requested normative BMMs. Consequences for future developments are discussed.
The concept of derivative is characterised with reference to four basic mental models, which are specified as constructs on the basis of theoretical considerations. The four basic mental models—local rate of change, tangent slope, local linearity and amplification factor—are not only quantified empirically but also validated. To this end, a test instrument for measuring the characteristics of students’ basic mental models is presented and analysed with regard to quality criteria.
Mathematics students (n = 266) were tested with this instrument. The test results show that the four basic mental models of the derivative can be reconstructed among the students with different characteristics. The tangent slope has the highest agreement values across all tasks. The agreement on explanations based on the basic mental model of rate of change is not as strongly established among students as one would expect due to framework settings in the school system by means of curricula and educational standards. The basic mental model of local linearity plays a rather subordinate role. The amplification factor achieves the lowest agreement values. In addition, cluster analysis was conducted to identify different subgroups of the student population. Moreover, the test results can be attributed to characteristics of the task types as well as to the students’ previous experiences from mathematics classes by means of qualitative interpretation. These and other results of students’ basic mental models of the derivative are presented and discussed in detail.
The incidence matrices of many combinatorial structures satisfy the so-called rectangular rule, i.e., the scalar product of any two rows of the matrix is at most 1. We study a class of matrices satisfying the rectangular rule, the regular block matrices. Some regular block matrices are submatrices of incidence matrices of finite projective planes. Necessary and sufficient conditions are given for regular block matrices to be submatrices of projective planes. Moreover, regular block matrices are related to another combinatorial structure, the symmetric configurations. In particular, it turns out that, using this relationship, the existence of several symmetric configurations can be concluded from the existence of a projective plane.
In this paper we introduce a theoretical framework concerned with fostering functional thinking in Grade 8 students by utilizing digital technologies. This framework is meant to be used to guide the systematic variation of tasks for implementation in the classroom while using digital technologies. Examples of problems and tasks illustrate this process. Additionally, results of an empirical investigation with Grade 8 students, which focusses on the students’ skills with digital technologies, how they utilize these tools when engaging with the developed tasks, and how they influence their functional thinking, are presented. The research aim is to investigate in which way tasks designed according to the theoretical framework could promote functional thinking while using digital technologies in the sense of the operative principle. The results show that the developed framework — Function-Operation-Matrix — is a sound basis for initiating students’ actions in the sense of the operative principle, to foster the development of functional thinking in its three aspects, namely, assignment, co-variation and object, and that digital technologies can support this process in a meaningful way.
We derive a multi-species BGK model with velocity-dependent collision frequency for a non-reactive, multi-component gas mixture. The model is derived by minimizing a weighted entropy under the constraint that the number of particles of each species, total momentum, and total energy are conserved. We prove that this minimization problem admits a unique solution for very general collision frequencies. Moreover, we prove that the model satisfies an H-Theorem and characterize the form of equilibrium.
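The entropy-minimization construction described above can be sketched schematically (the notation is illustrative and not taken from the paper; in particular, whether the conserved moments carry the weight \(\nu_k\) exactly as written depends on the paper's setup):

```latex
\begin{aligned}
&\min_{(M_k)_k}\; \sum_k \int_{\mathbb{R}^3} \nu_k(v)\, M_k(v) \ln M_k(v)\,\mathrm{d}v
\quad \text{subject to} \\
&\int \nu_k M_k \,\mathrm{d}v = \int \nu_k f_k \,\mathrm{d}v \quad (\text{each species } k), \\
&\sum_k \int m_k v\, \nu_k M_k \,\mathrm{d}v = \sum_k \int m_k v\, \nu_k f_k \,\mathrm{d}v
\quad \text{(total momentum)}, \\
&\sum_k \int \tfrac{m_k}{2}\, |v|^2\, \nu_k M_k \,\mathrm{d}v = \sum_k \int \tfrac{m_k}{2}\, |v|^2\, \nu_k f_k \,\mathrm{d}v
\quad \text{(total energy)},
\end{aligned}
```

where \(f_k\) is the distribution function of species \(k\), \(m_k\) its particle mass, \(\nu_k(v)\) the velocity-dependent collision frequency, and the minimizers \(M_k\) serve as the BGK relaxation targets.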
The thesis focuses on the valuation of firms in a system context where cross-holdings of the firms in liabilities and equities are allowed and, therefore, systemic risk can be modeled on a structural level. A main property of such models is that a pricing equilibrium has to be found in order to determine the firm values. While there exists a small but growing body of research on the existence and uniqueness of such price equilibria, the literature is still somewhat inconsistent; for example, different authors define the underlying financial system in differing ways. Moreover, only few articles pay close attention to procedures for finding the pricing equilibria. In the existing publications, the provided algorithms mainly reflect the individual authors' particular approach to the problem. Additionally, all existing methods have the drawback of potentially infinite runtime.
For these reasons, the objectives of this thesis are as follows. First, a definition of a financial system is introduced in its most general form in Chapter 2. It is shown that under a fairly mild regularity condition the financial system has a unique payment equilibrium. In Chapter 3, some extensions and differing definitions of financial systems that exist in the literature are presented, and it is shown how these models can be embedded into the general model from the preceding chapter. Second, an overview of existing valuation algorithms for finding the equilibrium is given in Chapter 4, where the existing methods are generalized and their mathematical properties are highlighted. Third, a completely new class of valuation algorithms is developed in Chapter 4 that includes the additional information of whether a firm is in default or solvent under a current payment vector. This results in procedures that are able to find the solution of the system in a finite number of iteration steps. In Chapter 5, the concepts developed in Chapter 4 are applied to more general financial systems where more than one seniority level of debt is present. Chapter 6 develops optimal starting vectors for non-finite algorithms, and Chapter 7 compares the existing and the newly developed algorithms with respect to their efficiency in an extensive simulation study covering a wide range of possible settings for financial systems.
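For context, the kind of fixed-point computation at issue can be illustrated with a classical Eisenberg-Noe-style clearing iteration (a generic sketch with a single seniority level; the thesis's model and algorithms are more general):

```python
# Illustrative Eisenberg-Noe-style payment equilibrium (a generic sketch,
# not one of the thesis's algorithms): iterate the clearing map
#   p_i <- min(pbar_i, e_i + sum_j Pi[j][i] * p_j)
# starting from the full-payment vector pbar.

def clearing_vector(pbar, e, Pi, tol=1e-12, max_iter=10_000):
    p = list(pbar)
    for _ in range(max_iter):
        new_p = [
            min(pbar[i], e[i] + sum(Pi[j][i] * p[j] for j in range(len(p))))
            for i in range(len(p))
        ]
        if max(abs(a - b) for a, b in zip(new_p, p)) < tol:
            return new_p
        p = new_p
    return p

# Two firms: firm 0 owes 10 (all to firm 1), firm 1 owes 5 (all to firm 0).
pbar = [10.0, 5.0]             # nominal liabilities
e = [3.0, 8.0]                 # outside assets
Pi = [[0.0, 1.0], [1.0, 0.0]]  # relative liability matrix
p = clearing_vector(pbar, e, Pi)
print([round(x, 6) for x in p])
```

In this toy example firm 0 defaults (it can pay only 8 of its 10 in liabilities), while firm 1 pays in full; such Picard-type iterations converge but, as the thesis notes for existing methods, need not terminate in finitely many steps in general.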
In this thesis we study smoothness properties of primal and dual gap functions for generalized Nash equilibrium problems (GNEPs) and finite-dimensional quasi-variational inequalities (QVIs). These gap functions are optimal value functions of primal and dual reformulations of a corresponding GNEP or QVI as a constrained or unconstrained optimization problem. Depending on the problem type, the primal reformulation uses regularized Nikaido-Isoda or regularized gap function approaches. For player convex GNEPs and QVIs of the so-called generalized `moving set' type the respective primal gap functions are continuously differentiable. In general, however, these primal gap functions are nonsmooth for both problems. Hence, we investigate their continuity and differentiability properties under suitable assumptions. Here, our main result states that, apart from special cases, all locally minimal points of the primal reformulations are points of differentiability of the corresponding primal gap function.
Furthermore, we develop dual gap functions for a class of GNEPs and QVIs and ensuing unconstrained optimization reformulations of these problems based on an idea by Dietrich (``A smooth dual gap function solution to a class of quasivariational inequalities'', Journal of Mathematical Analysis and Applications 235, 1999, pp. 380--393). For this purpose we rewrite the primal gap functions as a difference of two strongly convex functions and employ the Toland-Singer duality theory. The resulting dual gap functions are continuously differentiable and, under suitable assumptions, have piecewise smooth gradients. Our theoretical analysis is complemented by numerical experiments. The solution methods employed make use of the first-order information established by the aforementioned theoretical investigations.
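For orientation, the classical regularized gap function for a QVI (find \(x \in S(x)\) with \(F(x)^\top (y - x) \ge 0\) for all \(y \in S(x)\)) reads, in the spirit of Fukushima's construction (notation generic, not the thesis's own):

```latex
g_\alpha(x) \;=\; \max_{y \in S(x)} \Bigl\{ F(x)^\top (x - y) \;-\; \frac{\alpha}{2}\,\lVert x - y \rVert^2 \Bigr\},
\qquad \alpha > 0,
```

with \(g_\alpha(x) \ge 0\) whenever \(x \in S(x)\), and \(g_\alpha(x) = 0\) with \(x \in S(x)\) if and only if \(x\) solves the QVI; the thesis's primal and dual gap functions are optimal value functions of reformulations of this type.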
The Factorization Method is a noniterative method to detect the shape and position of conductivity anomalies inside an object. The method was introduced by Kirsch for inverse scattering problems and extended to electrical impedance tomography (EIT) by Brühl and Hanke. Since these pioneering works, substantial progress has been made on the theoretical foundations of the method. The necessary assumptions have been weakened, and the proofs have been considerably simplified. In this work, we aim to summarize this progress and present a state-of-the-art formulation of the Factorization Method for EIT with continuous data. In particular, we formulate the method for general piecewise analytic conductivities and give short and self-contained proofs.
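The method's characteristic shape criterion can be sketched as follows (a schematic range test in the usual EIT notation, in the spirit of the Brühl-Hanke line of work; details depend on the precise setting):

```latex
z \in D \quad\Longleftrightarrow\quad \Phi_z\big|_{\partial\Omega} \,\in\, \mathcal{R}\!\left( \lvert \Lambda - \Lambda_0 \rvert^{1/2} \right),
```

where \(D\) is the anomaly, \(\Lambda\) and \(\Lambda_0\) are the boundary measurement operators with and without the anomaly, and \(\Phi_z\) is a dipole potential placed at the test point \(z\); sampling \(z\) over the domain and testing this range condition recovers the shape of \(D\).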
For a connected real Lie group G we consider the canonical standard-ordered star product arising from the canonical global symbol calculus based on the half-commutator connection of G. This star product trivially converges on polynomial functions on T\(^*\)G thanks to its homogeneity. We define a nuclear Fréchet algebra of certain analytic functions on T\(^*\)G, for which the standard-ordered star product is shown to be a well-defined continuous multiplication, depending holomorphically on the deformation parameter \(\hbar\). This nuclear Fréchet algebra is realized as the completed (projective) tensor product of a nuclear Fréchet algebra of entire functions on G with an appropriate nuclear Fréchet algebra of functions on \({\mathfrak {g}}^*\). The passage to the Weyl-ordered star product, i.e. the Gutt star product on T\(^*\)G, is shown to preserve this function space, yielding the continuity of the Gutt star product with holomorphic dependence on \(\hbar\).
Background
The prevalence of obesity is rising. Obesity can lead to cardiovascular and ventilatory complications through multiple mechanisms. Cardiac and pulmonary function in asymptomatic obese subjects, and the effect of structured dietary programs on cardiac and pulmonary function, are unclear.
Objective
To determine lung and cardiac function in asymptomatic obese adults and to evaluate whether weight loss positively affects functional parameters.
Methods
We prospectively evaluated bodyplethysmographic and echocardiographic data in asymptomatic subjects undergoing a structured one-year weight reduction program.
Results
74 subjects (32 male, 42 female; mean age 42±12 years) with an average BMI of 42.5±7.9 and a body weight of 123.7±24.9 kg were enrolled. Body weight correlated negatively with vital capacity (R = −0.42, p<0.001) and FEV1 (R = −0.497, p<0.001), and positively with P0.1 (R = 0.32, p = 0.02) and myocardial mass (R = 0.419, p = 0.002). After 4 months the study subjects had significantly reduced their body weight (−26.0±11.8 kg) and BMI (−8.9±3.8), associated with a significant improvement of lung function (absolute changes: vital capacity +5.5±7.5% pred., p<0.001; FEV1 +9.8±8.3% pred., p<0.001; ITGV +16.4±16.0% pred., p<0.001; SRtot −17.4±41.5% pred., p<0.01). Moreover, P0.1/Pimax decreased to 47.7% (p<0.01), indicating a decreased respiratory load. The change of FEV1 correlated significantly with the change of body weight (R = −0.31, p = 0.03). Echocardiography demonstrated reduced myocardial wall thickness (−0.08±0.2 cm, p = 0.02) and improved left ventricular myocardial performance index (−0.16±0.35, p = 0.02). Mitral annular plane systolic excursion (+0.14, p = 0.03) and pulmonary outflow acceleration time (AT +26.65±41.3 ms, p = 0.001) increased.
Conclusion
Even in asymptomatic individuals obesity is associated with abnormalities in pulmonary and cardiac function and increased myocardial mass. All the abnormalities can be reversed by a weight reduction program.
In this paper we study properties of the Laplace approximation of the posterior distribution arising in nonlinear Bayesian inverse problems. Our work is motivated by Schillings et al. (Numer Math 145:915–971, 2020. https://doi.org/10.1007/s00211-020-01131-1), where it is shown that in such a setting the Laplace approximation error in Hellinger distance converges to zero in the order of the noise level. Here, we prove novel error estimates for a given noise level that also quantify the effect due to the nonlinearity of the forward mapping and the dimension of the problem. In particular, we are interested in settings in which a linear forward mapping is perturbed by a small nonlinear mapping. Our results indicate that in this case, the Laplace approximation error is of the size of the perturbation. The paper provides insight into Bayesian inference in nonlinear inverse problems, where linearization of the forward mapping has suitable approximation properties.
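As a toy illustration of the object under study (a generic one-dimensional sketch, not the paper's infinite-dimensional setting): the Laplace approximation replaces a posterior density proportional to exp(-Φ(u)) by a Gaussian centered at the MAP point with covariance given by the inverse Hessian of Φ there.

```python
# Illustrative 1D Laplace approximation (generic sketch, not the paper's
# setting): approximate a posterior density proportional to exp(-Phi(u))
# by the Gaussian N(u_map, 1 / Phi''(u_map)), where Phi is a nonlinear
# perturbation of a quadratic (i.e. of a Gaussian posterior).
def phi_prime(u):          # derivative of Phi(u) = (u - 1)^2 / 2 + 0.1 * u^4
    return (u - 1.0) + 0.4 * u ** 3

def phi_double_prime(u):   # second derivative of Phi
    return 1.0 + 1.2 * u ** 2

# Newton's method for the MAP point: solve Phi'(u) = 0.
u = 0.0
for _ in range(50):
    u -= phi_prime(u) / phi_double_prime(u)

mean, variance = u, 1.0 / phi_double_prime(u)
print(round(mean, 4), round(variance, 4))
```

For a purely quadratic Φ (a linear forward map with Gaussian prior and noise) the approximation is exact; the small quartic term here plays the role of the nonlinear perturbation whose effect the paper quantifies.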
In financial mathematics, it is a typical approach to approximate financial markets operating in discrete time by continuous-time models such as the Black Scholes model. Fitting this model gives rise to difficulties due to the discrete nature of market data. We thus model the pricing process of financial derivatives by the Black Scholes equation, where the volatility is a function of a finite number of random variables. This reflects an influence of uncertain factors when determining volatility. The aim is to quantify the effect of this uncertainty when computing the price of derivatives. Our underlying method is the generalized Polynomial Chaos (gPC) method in order to numerically compute the uncertainty of the solution by the stochastic Galerkin approach and a finite difference method. We present an efficient numerical variation of this method, which is based on a machine learning technique, the so-called Bi-Fidelity approach. This is illustrated with numerical examples.
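A minimal stand-in for the uncertainty propagation described above (illustrative parameters throughout; a collocation-type quadrature shortcut rather than the paper's stochastic Galerkin and Bi-Fidelity machinery):

```python
# Hedged illustration (not the paper's scheme): propagate an uncertain
# volatility sigma = sigma0 + delta * xi, with xi ~ N(0, 1), through the
# closed-form Black-Scholes call price using a 3-point Gauss-Hermite rule,
# yielding the mean and standard deviation of the price.
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Probabilists' 3-point Gauss-Hermite rule: nodes -sqrt(3), 0, sqrt(3)
# with weights 1/6, 2/3, 1/6 (exact for polynomials up to degree 5).
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]
S, K, r, T, sigma0, delta = 100.0, 100.0, 0.01, 1.0, 0.2, 0.02

prices = [bs_call(S, K, r, sigma0 + delta * x, T) for x in nodes]
mean = sum(w * p for w, p in zip(weights, prices))
var = sum(w * (p - mean) ** 2 for w, p in zip(weights, prices))
print(round(mean, 4), round(math.sqrt(var), 4))
```

The standard deviation of the price quantifies the effect of the volatility uncertainty; a full gPC/stochastic Galerkin solve of the Black-Scholes PDE, as in the paper, handles uncertain volatility functions rather than a single scalar parameter.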
Chemotaxis describes the movement of an organism, such as single or multi-cellular organisms and bacteria, in response to a chemical stimulus. Two widely used models to describe the phenomenon are the celebrated Keller–Segel equation and a chemotaxis kinetic equation. These two equations describe the organism's movement at the macro- and mesoscopic level, respectively, and are asymptotically equivalent in the parabolic regime. The way in which the organism responds to a chemical stimulus is embedded in the diffusion/advection coefficients of the Keller–Segel equation or the turning kernel of the chemotaxis kinetic equation. Experiments are conducted to measure the time dynamics of the organisms' population level movement when reacting to certain stimulation. From this, one infers the chemotaxis response, which constitutes an inverse problem. In this paper, we discuss the relation between both the macro- and mesoscopic inverse problems, each of which is associated with two different forward models. The discussion is presented in the Bayesian framework, where the posterior distribution of the turning kernel of the organism population is sought. We prove the asymptotic equivalence of the two posterior distributions.
This thesis is concerned with applying the total variation (TV) regularizer to surfaces and different types of shape optimization problems. The resulting problems are challenging since they suffer from the non-differentiability of the TV-seminorm, but unlike most other priors it favors piecewise constant solutions, which results in piecewise flat geometries for shape optimization problems. The first part of this thesis deals with an analogue of the TV image reconstruction approach [Rudin, Osher, Fatemi (Physica D, 1992)] for images on smooth surfaces. A rigorous analytical framework is developed for this model and its Fenchel predual, which is a quadratic optimization problem with pointwise inequality constraints on the surface. A function space interior point method is proposed to solve it. Afterwards, a discrete variant (DTV) based on a nodal quadrature formula is defined for piecewise polynomial, globally discontinuous and continuous finite element functions on triangulated surface meshes. DTV has favorable properties, which include a convenient dual representation. Next, an analogue of the total variation prior for the normal vector field along the boundary of smooth shapes in 3D is introduced. Its analysis is based on a differential geometric setting in which the unit normal vector is viewed as an element of the two-dimensional sphere manifold. Shape calculus is used to characterize the relevant derivatives, and a variant of the split Bregman method for manifold-valued functions is proposed. This is followed by an extension of the total variation prior for the normal vector field to piecewise flat surfaces, for which the split Bregman variant is adapted accordingly. Numerical experiments confirm that the new prior favours polyhedral shapes.
It is well known that a multivariate extreme value distribution can be represented via a D-norm. However, not every norm yields a D-norm. In this thesis a necessary and sufficient condition is given for a norm to define an extreme value distribution. Applications of this theorem include a new proof for the bivariate case, the Pickands dependence function, and the nested logistic model. Furthermore, the GPD-Flow is introduced and first insights are given; in particular, if it converges, it converges to the copula of complete dependence.
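For orientation (a standard representation in the D-norm literature, not this thesis's new result): with standard negative exponential margins, a multivariate extreme value distribution can be written via a D-norm \(\lVert\cdot\rVert_D\) as

```latex
G(\boldsymbol{x}) \;=\; \exp\bigl(-\lVert \boldsymbol{x} \rVert_D\bigr),
\qquad \boldsymbol{x} \le \boldsymbol{0} \in \mathbb{R}^d,
```

and the question addressed by the thesis is precisely which norms can occur as \(\lVert\cdot\rVert_D\) in this representation.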
We introduce some mathematical framework for extreme value theory in the space of continuous functions on compact intervals and provide basic definitions and tools. Continuous max-stable processes on [0,1] are characterized by their “distribution functions” G which can be represented via a norm on function space, called D-norm. The high conformity of this setup with the multivariate case leads to the introduction of a functional domain of attraction approach for stochastic processes, which is more general than the usual one based on weak convergence. We also introduce the concept of “sojourn time transformation” and compare several types of convergence on function space. Again in complete accordance with the uni- or multivariate case it is now possible to get functional generalized Pareto distributions (GPD) W via W = 1 + log(G) in the upper tail. In particular, this enables us to derive characterizations of the functional domain of attraction condition for copula processes. Moreover, we investigate the sojourn time above a high threshold of a continuous stochastic process. It turns out that the limit, as the threshold increases, of the expected sojourn time given that it is positive, exists if the copula process corresponding to Y is in the functional domain of attraction of a max-stable process. If the process is in a certain neighborhood of a generalized Pareto process, then we can replace the constant threshold by a general threshold function and we can compute the asymptotic sojourn time distribution.
A new class of optimization problems named 'mathematical programs with vanishing constraints (MPVCs)' is considered. On the one hand, MPVCs are very challenging from a theoretical viewpoint, since standard constraint qualifications such as LICQ, MFCQ, or ACQ are most often violated; hence, the Karush-Kuhn-Tucker conditions do not provide necessary optimality conditions off-hand. Thus, new CQs and the corresponding optimality conditions are investigated. On the other hand, MPVCs have important applications, e.g., in the field of topology optimization. Therefore, numerical algorithms for the solution of MPVCs are designed, investigated, and tested on certain problems from truss topology optimization.
In forecasting count processes, practitioners often ignore the discreteness of counts and compute forecasts based on Gaussian approximations instead. For both central and non-central point forecasts, and for various types of count processes, the performance of such approximate point forecasts is analyzed. The considered data-generating processes include different autoregressive schemes with varying model orders, count models with overdispersion or zero inflation, counts with a bounded range, and counts exhibiting trend or seasonality. We conclude that Gaussian forecast approximations should be avoided.
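A toy example of the effect described here (a hypothetical Poisson predictive distribution, not data from the study): for skewed count distributions, coherent quantile forecasts and rounded Gaussian approximations can disagree, especially in the upper tail.

```python
# Toy comparison (not the paper's study design): the coherent 99% quantile
# forecast of a Poisson(3.7) predictive distribution versus its rounded
# Gaussian approximation N(3.7, 3.7). The Gaussian approximation
# understates the upper tail of the skewed count distribution.
import math

def poisson_quantile(lam, q):
    """Smallest k with P(X <= k) >= q for X ~ Poisson(lam)."""
    k, pmf, cdf = 0, math.exp(-lam), math.exp(-lam)
    while cdf < q:
        k += 1
        pmf *= lam / k
        cdf += pmf
    return k

lam, z99 = 3.7, 2.3263  # z99: 99% quantile of the standard normal
coherent = poisson_quantile(lam, 0.99)
gaussian = round(lam + z99 * math.sqrt(lam))
print(coherent, gaussian)
```

Here the coherent forecast is 9 while the rounded Gaussian approximation gives 8; for central forecasts (e.g. the median) the two often coincide, which is consistent with the distinction the paper draws between central and non-central point forecasts.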
One of the major motivations for the analysis and modeling of time series data is the forecasting of future outcomes. The use of interval forecasts instead of point forecasts allows us to incorporate the apparent forecast uncertainty. When forecasting count time series, one also has to account for the discreteness of the range, which is done by using coherent prediction intervals (PIs) relying on a count model. We provide a comprehensive performance analysis of coherent PIs for diverse types of count processes. We also compare them to approximate PIs that are computed based on a Gaussian approximation. Our analyses rely on an extensive simulation study. It turns out that the Gaussian approximations do considerably worse than the coherent PIs. Furthermore, special characteristics such as overdispersion, zero inflation, or trend clearly affect the PIs' performance. We conclude by presenting two empirical applications of PIs for count time series: the demand for blood bags in a hospital and the number of company liquidations in Germany.
Risk measures are commonly used to prepare for a prospective occurrence of an adverse event. If we are concerned with discrete risk phenomena such as counts of natural disasters, counts of infections by a serious disease, or counts of certain economic events, then the required risk forecasts are to be computed for an underlying count process. In practice, however, the discrete nature of count data is sometimes ignored and risk forecasts are calculated based on Gaussian time series models. But even if methods from count time series analysis are used in an adequate manner, the performance of risk forecasting is affected by estimation uncertainty as well as certain discreteness phenomena. To get a thorough overview of the aforementioned issues in risk forecasting of count processes, a comprehensive simulation study was done considering a broad variety of risk measures and count time series models. It becomes clear that Gaussian approximate risk forecasts substantially distort risk assessment and, thus, should be avoided. In order to account for the apparent estimation uncertainty in risk forecasting, we use bootstrap approaches for count time series. The relevance and the application of the proposed approaches are illustrated by real data examples about counts of storm surges and counts of financial transactions.
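The bootstrap idea can be sketched generically (an illustrative i.i.d. toy model with made-up parameters, not the paper's count time series procedures):

```python
# Schematic parametric bootstrap for a count risk forecast (an illustration,
# not the paper's procedure): estimate lambda from an i.i.d. Poisson sample,
# then resample series from the fitted model to see how estimation
# uncertainty propagates into an upper-quantile (VaR-type) risk forecast.
import math
import random

def rpois(lam, rng):
    """Knuth-style Poisson sampler."""
    limit = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def poisson_quantile(lam, q):
    """Smallest k with P(X <= k) >= q for X ~ Poisson(lam)."""
    k, pmf, cdf = 0, math.exp(-lam), math.exp(-lam)
    while cdf < q:
        k += 1
        pmf *= lam / k
        cdf += pmf
    return k

rng = random.Random(42)
data = [rpois(4.0, rng) for _ in range(200)]  # observed counts (toy model)
lam_hat = sum(data) / len(data)

# Bootstrap distribution of the 95%-quantile risk forecast.
boot_var = []
for _ in range(200):
    sample = [rpois(lam_hat, rng) for _ in range(len(data))]
    boot_var.append(poisson_quantile(sum(sample) / len(sample), 0.95))

print(min(boot_var), max(boot_var))
```

The spread of the bootstrapped quantile forecasts reflects estimation uncertainty; note that the forecasts remain integers, which is the discreteness phenomenon that Gaussian approximate risk forecasts ignore.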
Dysfunction of dopaminergic neurotransmission has been implicated in HIV infection. We previously showed increased dopamine (DA) levels in the CSF of therapy-naïve HIV patients and an inverse correlation between CSF DA and CD4 counts in the periphery, suggesting adverse effects of high levels of DA on HIV infection. In the current study, including a total of 167 HIV-positive and negative donors from Germany and South Africa (SA), we investigated the mechanistic background for the increase of CSF DA in HIV individuals. Interestingly, we found that the DAT 10/10-repeat allele is present more frequently in HIV individuals than in uninfected subjects. Logistic regression analysis adjusted for gender and ethnicity showed an odds ratio for HIV infection in DAT 10/10 allele carriers of 3.93 (95 % CI 1.72–8.96; p = 0.001, Fisher's exact test). In SA, 42.6 % of HIV-infected patients harbored the DAT 10/10 allele compared to only 10.5 % of uninfected subjects (odds ratio 6.31), whereas the corresponding figures in Germany were 68.1 versus 40.9 % (odds ratio 3.08). Subjects homozygous for the 10-repeat allele had higher amounts of CSF DA and reduced DAT mRNA expression but similar disease severity compared with those carrying other DAT genotypes. These intriguing and novel findings show the mutual interaction between DA and HIV, suggesting caution in interpreting CNS DA alterations in HIV infection solely as a phenomenon secondary to the virus, and open the door for larger studies investigating the consequences of the DAT functional polymorphism for HIV epidemiology and progression of disease.
The aim of the present paper is to clarify the role of extreme order statistics in general statistical models. This is done within the general setup of statistical experiments in LeCam's sense. Under the assumption of monotone likelihood ratios, we prove that a sequence of experiments is asymptotically Gaussian if, and only if, a fixed number of extremes asymptotically does not contain any information. In other words: a fixed number of extremes asymptotically contains information iff the Poisson part of the limit experiment is non-trivial. Motivated by this result, we propose a new extreme value model given by local alternatives. The local structure is described by introducing the space of extreme value tangents. It turns out that under local alternatives a new class of extreme value distributions appears as limit distributions. Moreover, explicit representations of the Poisson limit experiments via Poisson point processes are found. As a concrete example, nonparametric tests for Fréchet-type distributions against stochastically larger alternatives are treated. We find asymptotically optimal tests within certain threshold models.
This thesis is first devoted to the theoretical and numerical investigation of an augmented Lagrangian method for the solution of optimization problems with geometric constraints, as well as of constrained structured optimization problems featuring a composite objective function and set-membership constraints. It is then concerned with the convergence and rate-of-convergence analysis of proximal gradient methods for composite optimization problems in the presence of the Kurdyka–Łojasiewicz property, without a global Lipschitz assumption.
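As background, the basic proximal gradient (forward-backward) step for a composite problem min f(x) + g(x) can be sketched for the ℓ1-regularized least-squares case, where the proximal operator of g = λ‖·‖₁ is componentwise soft-thresholding. The data, step size, and function names below are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient steps."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                      # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)  # proximal (backward) step
    return x

# Illustrative data; step = 1/||A||_2^2 matches the Lipschitz constant of grad f.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ np.array([1.0, 0.0, -2.0, 0.0, 0.5])
x_hat = proximal_gradient(A, b, lam=0.1, step=1.0 / np.linalg.norm(A, 2) ** 2)
```

Note that the thesis's point is precisely to analyse such schemes *without* a global Lipschitz assumption; the fixed step above is only the textbook baseline.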
The almost completely decomposable groups form a subclass of the Butler groups. The concept of the regulator, i.e. the intersection of all regulating subgroups, is indispensable for almost completely decomposable groups. This concept extends naturally to the whole class of Butler groups. However, in the more general case of Butler groups the formation of regulators can a priori be iterated. This immediately raises the question of whether there exist any Butler groups with regulator chains of length greater than 1. A first example of length 2 was constructed in 1997 by Lehrmann and Mutzbauer. In this dissertation, Butler groups with arbitrary prescribed finite chain length are constructed using conceptually new techniques. Fundamental difficulties in this undertaking result from the lack, indeed the impossibility, of a canonical representation of Butler groups. One uses the commonly employed sum representation of Butler groups. Precisely at this point, entirely new methods are required compared with the almost completely decomposable groups and their canonical regulator representation. All subtasks of the construction of Butler groups that are standard for almost completely decomposable groups become problematic here, among them the formation of pure hulls, the determination of regulating subgroups, and the formation of the regulator.
We investigate iterative numerical algorithms with shifts as nonlinear discrete-time control systems. Our approach is based on the interpretation of reachable sets as orbits of the system semigroup. In the first part we develop tools for the systematic analysis of the structure of reachable sets of general invertible discrete-time control systems. To this end, we merge classical concepts such as geometric control theory, semigroup actions, and semialgebraic geometry. Moreover, we introduce new concepts such as right divisible systems and the repelling phenomenon. In the second part we apply the semigroup approach to the investigation of concrete numerical iteration schemes. We extend the known results on the reachable sets of classical inverse iteration. Moreover, we investigate the structure of reachable sets and system semigroup orbits of inverse iteration on flag manifolds and Hessenberg varieties, of rational iteration schemes, of Richardson's method, and of linear control schemes. In particular, we obtain necessary and sufficient conditions for controllability and for the appearance of repelling phenomena. Furthermore, a new algorithm for solving linear equations (LQRES) is derived.
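For reference, classical shifted inverse iteration — the baseline scheme whose reachable sets are studied here, with the shift playing the role of the control input — can be sketched as follows; the matrix, shift, and starting vector are illustrative:

```python
import numpy as np

def inverse_iteration(A, shift, x0, iters=50):
    """Shifted inverse iteration: x_{k+1} ~ (A - shift*I)^{-1} x_k, normalized.
    Converges to the eigenvector whose eigenvalue lies closest to the shift."""
    n = A.shape[0]
    x = x0 / np.linalg.norm(x0)
    M = A - shift * np.eye(n)
    for _ in range(iters):
        x = np.linalg.solve(M, x)   # one solve per step instead of inverting M
        x = x / np.linalg.norm(x)
    return x

# With a fixed shift near 3.0, the iteration converges to the eigenvector
# for the eigenvalue 3.0 of this diagonal test matrix.
A = np.diag([1.0, 3.0, 10.0])
v = inverse_iteration(A, shift=2.9, x0=np.ones(3))
```

In the control-theoretic reading of the thesis, varying the shift from step to step generates the system semigroup whose orbits are the reachable sets.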
This paper studies differential graded modules and representations up to homotopy of Lie n-algebroids, for general \(n\in {\mathbb {N}}\). The adjoint and coadjoint modules are described, and the corresponding split versions of the adjoint and coadjoint representations up to homotopy are explained. In particular, the case of Lie 2-algebroids is analysed in detail. The compatibility of a Poisson bracket with the homological vector field of a Lie n-algebroid is shown to be equivalent to a morphism from the coadjoint module to the adjoint module, leading to an alternative characterisation of non-degeneracy of higher Poisson structures. Moreover, the Weil algebra of a Lie n-algebroid is computed explicitly in terms of splittings, and representations up to homotopy of Lie n-algebroids are used to encode decomposed VB-Lie n-algebroid structures on double vector bundles.
Nowadays, science, technology, engineering, and mathematics (STEM) play a critical role in a nation's global competitiveness and prosperity. Thus, there is a need to educate students in these subjects to meet the current and future demands of personal life and society. While applications, especially in science, engineering, and technology, are directly obvious, mathematics underpins the other STEM disciplines. Although it is recognized that mathematics is the foundation for all other STEM disciplines, the role of mathematics in classrooms is not yet clear. Therefore, the question arises: What is the current role of mathematics in secondary STEM classrooms? To answer this question, we conducted a systematic literature review based on three publication databases (Web of Science, ERIC, and EBSCO Teacher Reference Center). This literature review paper is intended to contribute to the current state of research on the role of mathematics in STEM education in secondary classrooms. The search started from 1910 documents, of which only 14 proved eligible. In these, mathematics is often seen as a minor matter and, in the eyes of science educators, a means to an end. From this, we conclude that the role of mathematics in the STEM classroom should be further strengthened. Overall, the paper highlights a major research gap and proposes possible initial solutions to close it.
Ó. Blasco and S. Pott showed that the supremum of \(L^{2}\) operator norms of all bicommutators (with the same symbol) of one-parameter Haar multipliers dominates the biparameter dyadic product BMO norm of the symbol itself. In the present work we extend this result to the Bloom setting, and to any exponent \(1 < p < \infty\). The main tool is a new characterization in terms of paraproducts and two-weight John–Nirenberg inequalities for dyadic product BMO in the Bloom setting. We also extend our results to the whole scale of indexed spaces between little bmo and product BMO in the general multiparameter setting, with the appropriate iterated commutator in each case.
In this article we collect some recent results on the global existence of weak solutions for diffuse interface models involving incompressible magnetic fluids. We consider the cases of both matched and unmatched specific densities. For the model involving fluids with identical densities we take the free energy density to be a double-well potential, whereas for the unmatched density case it is crucial to work with a singular free energy density.
Prediction intervals are needed in many industrial applications. Frequently in mass production, small subgroups of unknown size exist whose lifetime behavior differs from that of the remainder of the population. A risk assessment for such a subgroup consists of two steps: i) the estimation of the subgroup size, and ii) the estimation of the lifetime behavior of this subgroup. This thesis covers both steps. An efficient practical method to estimate the size of a subgroup is presented and benchmarked against other methods. A prediction interval procedure which includes prior information in the form of a Beta distribution is provided. This scheme is applied to the prediction of binomial and negative binomial counts. The effect of the population size on the prediction of the future number of failures is considered for a Weibull lifetime distribution whose parameters are estimated from censored field data. Methods to obtain a prediction interval for the future number of failures with unknown sample size are presented. In many applications, failures are reported with a delay. The effects of such a reporting delay on the coverage properties of prediction intervals for the future number of failures are studied. The total failure probability of the two steps can be decomposed as a product probability. One-sided confidence intervals for such a product probability are presented.
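To illustrate the flavor of a Beta-prior-based prediction interval for a future binomial count, one can use the Beta-binomial posterior predictive distribution; the prior, data, and function names below are illustrative assumptions, not the thesis's actual procedure:

```python
from math import comb, lgamma, exp

def log_beta(a, b):
    # log of the Beta function B(a, b), computed via log-gamma for stability
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binomial_pmf(y, m, a, b):
    """Posterior predictive P(Y = y) for a future Binomial(m, p) count
    when p has a Beta(a, b) posterior (a Beta-binomial distribution)."""
    return comb(m, y) * exp(log_beta(a + y, b + m - y) - log_beta(a, b))

def prediction_interval(m, a, b, level=0.9):
    """Equal-tailed prediction interval: trim (1 - level)/2 predictive
    probability mass from each tail of the Beta-binomial pmf."""
    pmf = [beta_binomial_pmf(y, m, a, b) for y in range(m + 1)]
    tail = (1 - level) / 2
    cum, lower = 0.0, 0
    for y, p in enumerate(pmf):          # walk up from y = 0
        cum += p
        if cum > tail:
            lower = y
            break
    cum, upper = 0.0, m
    for y in range(m, -1, -1):           # walk down from y = m
        cum += pmf[y]
        if cum > tail:
            upper = y
            break
    return lower, upper

# Beta(1, 1) prior updated with 3 failures in 100 trials -> Beta(4, 98)
# posterior; predict the number of failures among m = 200 future units.
lo, hi = prediction_interval(m=200, a=4.0, b=98.0, level=0.9)
```

This is only the textbook Bayesian construction; the thesis additionally treats unknown sample sizes, reporting delays, and Weibull lifetimes estimated from censored data.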